2024 Tutorials

    • A Holistic Approach to RISC-V Processor Verification

      Processors using the open standard RISC-V instruction set architecture (ISA) are becoming increasingly common, with an estimated 30% of SoCs designed in 2023 containing at least one RISC-V core. Whether licensing RISC-V IP and adding custom instructions, using open-source RISC-V IP, or building a RISC-V processor from scratch, verification of the RISC-V processor is now a task in the SoC project plan. With the variety of sources for the processor IP, the range of complexity, and the span of use cases, a one-size-fits-all approach to RISC-V processor verification does not work.

      This tutorial presents a holistic approach to RISC-V processor verification using various tools in the Synopsys portfolio. It will cover processor complexity from microcontrollers to application processors to arrays of processors for AI accelerators, different levels of integration from unit to individual processor to processing subsystem to SoC, and different scenarios depending on the source of the processor IP. Matching different technologies and methodologies to this multidimensional verification space is critical.

      Figure 1 shows an overview of the technologies and products included in this holistic approach to processor verification. These can be separated at a high level into formal and dynamic verification technology groups; however, even within those groups there are multiple technologies, methodologies, and use cases. For example, dynamic verification can include self-checking tests, post-simulation trace-compare, and lockstep continuous-compare methodologies, executed on RTL simulation, hardware-assisted verification platforms, or actual silicon.

      The key metric for this holistic approach is functional coverage, driven by a comprehensive verification plan. Continuing the example above, the verification plan might utilize relatively simple post-simulation trace-compare for basic instruction verification. However, verification of asynchronous events such as interrupts, debug mode, privilege modes and more requires the lockstep continuous-compare flow (Figure 2), which utilizes the ImperasFPM RISC-V processor model, the ImperasDV processor verification environment, and the ImperasFC functional coverage modules. The verification plan might drill down into specific units in the processor, for example using formal verification (VC Formal FPV plus RISC-V ISA AIP assertions) for the floating-point unit or for micro-architectural features such as the pipeline (especially an out-of-order pipeline). It might also go to a higher level of integration, for example using PSS (VC PSS) for verification of high-level caches in a multi-processor configuration.
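
      As a concrete illustration of the post-simulation trace-compare idea, the following Python sketch compares instruction-retirement records from an RTL simulation against those from a reference model; the record format, field names, and values are illustrative assumptions, not the ImperasDV trace format.

        # Simplified post-simulation trace-compare sketch (an illustration, not the
        # ImperasDV flow): compare instruction-retirement records from the RTL run
        # against those from a reference model. The record format is an assumption.
        def parse_trace(lines):
            """Each record: '<pc> <instr> <rd>=<value>', one retirement per line."""
            return [tuple(line.split()) for line in lines]

        def compare(rtl_lines, ref_lines):
            rtl, ref = parse_trace(rtl_lines), parse_trace(ref_lines)
            for i, (r, m) in enumerate(zip(rtl, ref)):
                if r != m:
                    return f"mismatch at retirement {i}: RTL {r} vs reference {m}"
            if len(rtl) != len(ref):
                return f"trace length differs: RTL {len(rtl)} vs reference {len(ref)}"
            return "traces match"

        rtl = ["80000000 00500093 x1=5", "80000004 00108113 x2=6"]
        ref = ["80000000 00500093 x1=5", "80000004 00108113 x2=7"]
        print(compare(rtl, ref))   # reports a mismatch on the second retirement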

      The verification plan also needs to take into account processor complexity and the end use case. A simple microcontroller, e.g. RV32IMAC, that will run only internally developed software (a limited use case) needs less verification than the same processor exposed to end users running software that may exercise every feature of the core. Verifying custom instructions should be a task commensurate with the number and complexity of those instructions; however, there also needs to be some verification that adding the custom instructions did not introduce unexpected behaviors into the original processor.

      Verification also takes a lot of cycles. One estimate is that verification of an application processor takes 10^15 cycles, the equivalent of 10,000 RTL simulators running in parallel for one year. Most teams do not have those resources or that much schedule available. One way to accelerate the verification task is to use hardware-assisted verification platforms. By running the RTL at millions of instructions per second, roughly 1,000x faster than RTL simulators, a significant shift left in verification can be achieved.

      This tutorial will elaborate on the different decisions that go into the verification plan for RISC-V processors and review the different technologies and methodologies employed in a holistic approach to processor verification.

      • Larry Lapides

        Synopsys

    • A detailed tour of IEEE standard P3164

      The tutorial addresses security-concern and asset identification for Security Annotation during intellectual property (IP) integration, providing a short, uniform tour of IEEE P3164. The new IEEE P3164 white paper extends Accellera’s Security Annotation for Electronic Design Integration (SA-EDI) standard by providing a methodology that identifies critical security-relevant hardware design elements (called assets) from the early stages of IP architecture development through to implementation. The methodology helps identify vulnerabilities that relate to breaches of privacy and security properties in IP design elements. These breaches may relate to well-established security pillars, such as confidentiality, integrity, availability, device authenticity, and non-repudiation, but also to other desired properties, such as access control, anonymity, remote attestation, or freedom from unexpected behavior.

      • Grammatikakis

        Hellenic Mediterranean University

      • Siemens EDA

    • “Calling All Engines” – Faster Coverage Closure with Simulation, Formal, and Emulation

      Logic simulation remains the dominant technology that modern chip design flows rely on for functional verification. A combination of rising complexity and random-stimulus testing has made it necessary to measure verification coverage and to impose a coverage-closure milestone in the design verification flow. But with silicon chips becoming entire systems, both hardware-assisted verification and formal verification are playing a greater role in achieving verification levels adequate for safety-critical and quality-stringent high-volume applications.

      Although these technologies deliver payloads and test vectors orders of magnitude larger than those of simulation, their contribution to coverage often plays a muted role in reaching closure. In this tutorial, we will present a coverage closure framework that encompasses simulation, emulation, and formal, focusing on the type of coverage these verification tools can compute or extract.

      We will also look at coverage data formats and how much interoperability they offer across the various streams of EDA tools, as well as some of the shortcomings that need to be addressed to make interoperability more accessible for all types of EDA verification. We will enumerate the various scenarios where it makes sense to combine different types of coverage and offer guidelines for deploying coverage-merging techniques across emulation, simulation, and formal.
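
      To make coverage merging concrete, here is a minimal Python sketch that combines covered bins reported by different engines into one view and measures closure against a verification plan; the bin names and the simple set-based data model are assumptions, not the UCIS/UCDB formats used by production tools.

        # Minimal sketch of merging coverage from several engines (assumed data model).
        coverage_by_engine = {
            "simulation": {"fifo.full", "fifo.empty", "fsm.IDLE->RUN"},
            "emulation":  {"fsm.IDLE->RUN", "fsm.RUN->DONE", "bus.burst_len_16"},
            "formal":     {"fifo.overflow_unreachable"},   # e.g. bins proven unreachable
        }

        merged = set().union(*coverage_by_engine.values())
        plan = {"fifo.full", "fifo.empty", "fifo.overflow_unreachable", "fsm.IDLE->RUN",
                "fsm.RUN->DONE", "bus.burst_len_16", "bus.burst_len_32"}

        closure = len(merged & plan) / len(plan)
        print(f"merged bins: {len(merged)}, closure against plan: {closure:.0%}")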

      We will also highlight the challenges with simulation coverage tools when they are used to manage coverage, then underline the benefits of a data-driven collaborative verification environment in streamlining coverage closure using existing coverage APIs and upcoming AI/ML algorithms.

      • Yassine Eben Aimine

        Siemens EDA

    • cocotb 2.0: How to get the best out of the new major version of the Python-based testbench framework

      cocotb is the most popular Python-based verification framework, and for good reason: by writing verification code in Python, verification engineers have access to all the goodness that makes software development productive and enjoyable. It allows developers to focus on the verification task itself and to tap into a huge ecosystem of existing code where it makes sense.

      With cocotb 2.0, the latest major version of the framework, writing testbenches got even easier. The programming interface got streamlined further, deprecated functionality was removed, and some long-standing quirks were ironed out.

      In this tutorial, we’ll show how to write testbenches the cocotb 2.0 way. We’ll also show how to update existing cocotb testbenches to benefit from everything cocotb 2.0 has to offer.
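
      To give a flavor of what that looks like, here is a minimal testbench sketch in the modern async style; the DUT (a D flip-flop with ports clk, d, and q) and its signal names are assumptions, and details of the 2.0 API may differ slightly from what is shown here.

        # Minimal cocotb testbench sketch; DUT ports (clk, d, q) are assumptions.
        import cocotb
        from cocotb.clock import Clock
        from cocotb.triggers import RisingEdge

        @cocotb.test()
        async def dff_follows_input(dut):
            """Check that q follows d one clock cycle later."""
            cocotb.start_soon(Clock(dut.clk, 10, "ns").start())  # free-running clock

            dut.d.value = 0
            await RisingEdge(dut.clk)

            for expected in (1, 0, 1, 1):
                dut.d.value = expected
                await RisingEdge(dut.clk)   # value is captured on this edge
                await RisingEdge(dut.clk)   # ...and stable on q by the next one
                assert dut.q.value == expected, f"q was {dut.q.value}, expected {expected}"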

      This tutorial is suited both for people who would like an introduction to cocotb and for users who have already embarked on the cocotb journey and are looking for a guided tour to future-proof their testbenches. Basic Python programming skills are helpful.

      • Philipp Wagner

        IBM

      • Holger Horbach

        IBM Research & Development GmbH

    • Comprehensive Glitch Signoff – Learnings and experiences from industry use cases

      Vikas Sachdeva serves as the Head of Business Development for the APAC region at Real Intent, where he leads business development and product strategy for the company's key static signoff products. A graduate of the Indian Institute of Technology Delhi, Vikas is an entrepreneur and technologist with deep expertise in EDA and semiconductors. He is passionate about technology, product innovation, and nurturing the next generation of talent in VLSI. Additionally, he is a best-selling author on Amazon with his book, "Becoming Irreplaceable."

      • Vikas Sachdeva

        Real Intent

      • Vardan Vardanyan

        Real Intent

    • Developing Complex Systems using Model-Based Cybertronic Systems Engineering Methodology

      Nowadays, systems are becoming more and more software (SW) defined. They contain a large number of dedicated hardware (HW) accelerators, heterogeneous processor architectures, and huge amounts of SW distributed across multiple electronic units in the network to provide the required functionality and to configure the HW for maximum efficiency. The problems of developing such multidisciplinary systems, the so-called cybertronics challenge, exceed the capabilities of today’s development methodologies.

      A new design methodology is needed to tackle the cybertronics challenge. It must be able to thread requirements from the top level down to the final implementation and to abstract the design to a level that all stakeholders can understand. It must also be able to handle the domain-specific implementation information. Model-based methodologies are the most promising approaches, but none of the existing methodologies can handle the complexity alone. The Model-Based Cybertronics Systems Engineering (MBCSE) methodology borrows elements from multiple model-based system design methodologies. It provides a framework that can handle system complexity, requirements threading, digitally assisted system decomposition, interfaces to domain-specific implementation flows, and digitally threaded verification. This tutorial demonstrates the MBCSE methodology starting from system requirements and going through multiple subsystem layers to System-on-Chip (SoC) design, using a simple product example.

      • Petri Solanti

        Siemens

    • Efficient AI: Mastering Shallow Neural Networks from Training to RTL Implementation

      In the dynamic arena of artificial intelligence, the distinction between deep and shallow neural networks is not just academic. As we navigate the complexities of AI, understanding the nuanced differences between these two neural network architectures becomes crucial, especially when considering their unique implementation requirements. Shallow neural networks, often overshadowed by their deep counterparts' ability to handle intricate tasks, are perfectly suited for specific use cases such as regression tasks. Shallow networks, with their streamlined layer structure, are not only easier to train but also significantly less demanding in terms of computational resources and memory utilization.

      This tutorial will highlight the architectural and operational differences between deep and shallow neural networks, focusing on how the latter's efficiency and resourcefulness make them ideal candidates for direct implementation on ASICs or FPGAs without the need for a traditional processor structure or extensive memory. Furthermore, this approach ensures a very low-latency and low-power implementation. 

      Using an example of a battery management system to predict the state of charge, we will show all the necessary steps to: 

      • Train and validate a regression neural network in MATLAB.
      • Import the regression model into a Simulink block diagram.
      • Quantize the algorithm automatically using the Fixed-Point Tool.
      • Optimize the Simulink model for an efficient hardware implementation. 
      • Generate VHDL, Verilog, or SystemVerilog code for the neural network.
      • Verify the generated RTL code by interfacing HDL simulators with MATLAB/Simulink.
      • Test the algorithm on prototyping hardware.

      The approach shown can also be adapted to other network types and use cases. It not only opens up new avenues for AI applications in resource-, latency-, and power-constrained environments but also showcases the versatility and potential of shallow networks in modern technology. Join us on this tutorial journey to learn what is possible in AI with minimalistic yet powerful solutions.
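
      To give a feel for why shallow networks map well to fixed-point hardware, the following NumPy sketch evaluates a one-hidden-layer regression network in floating point and again with its weights and intermediate values rounded onto a fixed-point grid; the layer sizes, number of fractional bits, and tanh activation are illustrative assumptions, and this is not the MATLAB/Simulink flow used in the tutorial.

        # Shallow (one-hidden-layer) regression network, in float and fixed point.
        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.standard_normal((16, 4)), rng.standard_normal(16)   # hidden layer
        W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)    # linear output

        def forward_float(x):
            return W2 @ np.tanh(W1 @ x + b1) + b2

        def quantize(v, frac_bits=12):
            """Round onto a fixed-point grid with 'frac_bits' fractional bits."""
            scale = 1 << frac_bits
            return np.round(v * scale) / scale

        def forward_fixed(x, frac_bits=12):
            q = lambda v: quantize(v, frac_bits)
            h = np.tanh(q(q(W1) @ q(x) + q(b1)))    # activation kept in float here
            return q(q(W2) @ q(h) + q(b2))

        x = rng.standard_normal(4)
        print(forward_float(x), forward_fixed(x))   # gap shrinks as frac_bits grows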

      • Tom Richter

        MathWorks

    • Exploring the Next Generation of Debugging with Verification Management and an Integrated Development Environment

      This tutorial presents an overview of the recently unveiled, cutting-edge advancements in the next-generation Synopsys Verdi® platform. Learn about the power of AI-driven debug and new root cause analysis engines designed to speed up bug finding, while experiencing enhanced usability through a refreshed graphical user interface. In addition, this session will cover how you can access an integrated development environment (IDE) and a robust verification management system, integrated to boost the productivity of your verification workflows. Dive into state-of-the-art verification with us as we showcase these groundbreaking capabilities poised to redefine the landscape of verification and debug methodologies.

      • Ionut Cirjan

        Synopsys

      • Noam Roth

        Synopsys

      • Werner Kerscher

        Synopsys

    • G-QED for Pre-Silicon Verification

      With the increasing integration of diverse hardware accelerator and processor IP cores into digital systems, pre-silicon verification becomes increasingly complex. Our approach, Generalized Quick Error Detection (G-QED) [3], addresses these challenges by offering a robust verification framework that enhances productivity. G-QED, utilizing Bounded Model Checking (BMC) [4] techniques, introduces a novel method applicable to any design with a notion of actions (for example, instructions in a processor core) and architectural states (for example, the register file of a processor core).
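
      To build intuition for the kind of consistency property involved, here is a toy, simulation-based analogy in Python; the real G-QED flow checks such properties formally with bounded model checking rather than by simulation, and the accumulator design below is invented purely for illustration: inserting idle cycles between actions must not change the final architectural state.

        # Toy analogy of an "idling" consistency check (not the G-QED flow itself).
        class Accumulator:
            def __init__(self):
                self.acc = 0            # architectural state
                self._pending = 0       # micro-architectural (non-architectural) state

            def step(self, action=None):
                """One clock cycle; action is ('add', value) or None for an idle cycle."""
                self.acc += self._pending       # retire previously issued work
                self._pending = action[1] if action else 0

        def arch_state_after(actions, idle_gaps):
            dut = Accumulator()
            for act, gap in zip(actions, idle_gaps):
                dut.step(act)
                for _ in range(gap):
                    dut.step(None)              # idle cycles between actions
            dut.step(None)                      # drain pending work
            return dut.acc

        acts = [("add", 3), ("add", 5), ("add", 7)]
        assert arch_state_after(acts, [0, 0, 0]) == arch_state_after(acts, [2, 0, 4])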

      An industrial case study on production-ready AI hardware accelerators demonstrated the following outcomes:

      1. Significantly improved bug coverage: G-QED uniquely detected critical bugs that escaped the industrial (simulation- and formal-verification-based) flow, in addition to detecting all bugs detected by the industrial flow.
      2. Dramatically improved verification productivity, from 52 person-weeks using the industrial flow to only 3 person-weeks using G-QED, roughly an 18-fold boost.
      3. G-QED enabled short design and design-verification loops with quick turnaround for rapidly evolving designs.

      This tutorial will equip design and verification engineers with the skills to implement G-QED for efficient pre-silicon verification of hardware designs. Participants will learn through practical demonstrations how to utilize G-QED in a commercial formal verification tool without needing extensive knowledge of formal methods. The session will cover:

      • Understanding and interpreting actions, architectural states, and idling in the hardware design-under-verification (DUV).
      • Building and utilizing the G-QED harness for interaction with DUV and performing essential checks.
      • Implementing the G-QED setup in Siemens OneSpin for robust verification.
      • Debugging techniques specific to G-QED and the DUV. 

      • Saranyu Chattopadhyay

        Robust Systems Group at Stanford

      • Mohammad R. Fadiheh

        Stanford Robust Systems Group

      • Keerthi Devarajegowda

        Siemens EDA

    • Breakthrough in CDC-RDC Verification: Defining a Standard for an Interoperable Abstract Model

      CDC analysis has evolved into an inevitable stage of RTL quality signoff over the last two decades. Over this period, designs have grown exponentially, to SoCs with 2 trillion+ transistors and chiplet-based systems with 7+ SoCs. Today, CDC verification has become a multifaceted effort across chips designed for clients, servers, mobile, automotive, memory, AI/ML, FPGA, etc., with a focus on cleaning up thousands of clocks and constraints, integrating the SVAs for constraints into the validation environment to check for correctness, looking for power-domain- and DFT-logic-induced crossings, and finally signing off with netlist CDC to unearth any glitches and crossings missed during synthesis. As design sizes increased with every generation, EDA tools could not handle running flat, and the only way of handling design complexity was through hierarchical CDC analysis consuming abstracts. Hierarchical analysis also enables the analysis to run in parallel with teams across the globe. Even with all this significant progress in the capabilities of EDA tools, the major bottleneck in CDC analysis of complex SoCs and chiplets is consuming abstracts generated by different vendor tools. Abstracts from different vendor tools arise because of multiple IP vendors; even in-house teams might deliver abstracts generated with different vendors' tools. The Accellera CDC Working Group aims to define a standard CDC IP-XACT model that is portable and reusable regardless of the verification tool involved.

      As the industry moves from monolithic designs to IP-based SoCs, with IPs sourced first from a few select providers and now globally (to create differentiated products), quality must be maintained while driving faster time-to-market. In areas where standards exist (SystemVerilog, OVM/UVM, LP/UPF), integration is able to meet both goals (quality and speed). However, in areas where standards are not available, as is the case for CDC, most options trade off quality, time-to-market, or both. Creating a standard for interoperable collateral addresses this gap.

      This tutorial revisits the definitions of basic CDC-RDC concepts and constraints, describes the reference verification flow, and addresses the goals and scope of the Accellera CDC Working Group in order to elaborate a specification of the standard abstract model.

      • Joachim Voges

        Infineon Technologies AG

      • STMicroelectronics, Grenoble

      • Bosch Sensortec

    • Modernizing the Hardware / Software Interface - Life beyond spreadsheets. How to bring your SoC register design into the 21st Century

      Advanced semiconductor designs have many components, including multi-core architectures, programmable peripherals, and purpose-built accelerators. These design elements require a pathway for embedded system software to communicate with them. This is the hardware/software interface (HSI), and it forms the foundation for the entire design project. There are many activities that need information about the HSI, including device drivers and firmware, hardware design and verification, technical documentation, system diagnostics, and application software. All of them need accurate, up-to-date HSI information in many different and specialized formats. A lack of unified, up-to-date information results in poor collaboration and an increased opportunity for design errors. This can lead to costly last-minute fixes or even design re-spins, impacting team productivity and compromising the end quality of the SoC.
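
      A minimal sketch of the single-source-of-truth idea behind an HSI flow is shown below: one machine-readable register description from which several views can be generated, here just a C header for firmware and driver developers. The register names, fields, and output format are hypothetical examples, not CSRCompiler input or output.

        # One register description, many generated views (names are hypothetical).
        REGISTERS = [
            {"name": "CTRL",   "offset": 0x00, "fields": [("ENABLE", 0, 1), ("MODE", 1, 2)]},
            {"name": "STATUS", "offset": 0x04, "fields": [("BUSY", 0, 1), ("ERROR", 1, 1)]},
        ]

        def c_header(block, regs):
            """Render a C header view of the register map."""
            lines = [f"/* Auto-generated register map for {block} */"]
            for reg in regs:
                lines.append(f"#define {block}_{reg['name']}_OFFSET 0x{reg['offset']:02X}u")
                for fname, lsb, width in reg["fields"]:
                    mask = ((1 << width) - 1) << lsb
                    lines.append(f"#define {block}_{reg['name']}_{fname}_MASK 0x{mask:08X}u")
            return "\n".join(lines)

        print(c_header("TIMER0", REGISTERS))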

      Arteris addresses these challenges with CSRCompiler. CSRCompiler provides new technology for a better HSI solution with a scalable infrastructure that promotes a rapid, highly iterative design environment to specify, document, implement, and verify address maps for complex SoCs and FPGAs. 

      During this tutorial, we will explain how CSRCompiler provides the features and flexibility to speed development of the largest and most complex designs.

      • Tim Schneider

        Arteris

        Sr. Manager of Field Application Engineering

      • Insaf Meliane

        Arteris

        Sr. Product Management and Marketing Manager

    • Novel Approach to Verification and Validation (V&V) for Multi-die Systems

      From the data center to the edge and deep within the web of smart everything, today’s advanced multi-die systems are achieving previously unheard-of levels of performance. Instead of one-size-fits-all monolithic silicon, multi-die systems are composed of an array of heterogeneous dies (or “chiplets”), each optimized for its functional component. But while multi-die systems offer new levels of flexibility and new heights in system power and performance, they also introduce a high degree of design complexity.

      The Universal Chiplet Interconnect Express (UCIe) standard was introduced in March 2022 to help standardize die-to-die connectivity in multi-die systems. UCIe can streamline interoperability between dies on different process technologies from various suppliers. In just two years the specification has evolved rapidly, with UCIe 1.1 released in July 2023 and UCIe 2.0 expected soon.

      Experts believe that verification and validation (V&V) complexity is a double-exponential function of design complexity. It is evident that traditional V&V approaches are not sufficient to fulfil this enormous need, and new approaches are required to handle the architecture, simulation, validation, and testing of such heterogeneous systems. Through the following topics, this session will present changes to the traditional approach and new elements which may help fulfil the current and future needs of such systems:

      • Multi-die System Overview
      • Introduction and Evolution of UCIe
      • Early architecture exploration for Multi-die Systems
      • Kick starting with ready-to-use Design and Verification Collaterals
      • Quick System Verification Simulations and convergence
      • Software-first using Hardware Assisted Verification
      • Modular approach to System bring-up.
      • Welcoming your Partners into your Kitchen

      • Tim Kogel

        Synopsys

    • QEMU and SystemC (QBox) Tutorial

      I. QEMU DEEP DIVE

      QEMU is a long-established open-source emulator and Virtual Machine Monitor. When run as an emulator, it executes fully multi-threaded JITed code that can run on all the currently shipping major ISA architectures. The front ends for a number of the major guest architectures are actively developed, allowing OS designers to test and debug their code long before the hardware starts shipping.

      It comes with a number of useful tools intended to make software development easier. There is a fully featured debug interface targeting gdb, along with a full view of all system registers. The TCG plugin system allows the JITed guest code to be instrumented, allowing for deeper runtime analysis. Support for architecturally defined semi-hosting allows for quick early bring-up of code before drivers have been written. This can even be combined with a lightweight user mode, which avoids the overhead of a full system emulation and is useful for test cases that don't involve hardware access.

      This tutorial is aimed at those who would like to learn more about QEMU and its capabilities. We will give an overview of the project's history, architecture, and features, with a focus on things that will hopefully be of interest to QEMU users in the design and verification space. There is a long history of projects that have been built on top of QEMU, and we will look at the gaps they have tried to fill, why they never got merged, and what developments are happening upstream to better support these use cases.

      II. QEMU AND SYSTEMC: CHALLENGES AND CURRENT QBOX SOLUTIONS

      One of those projects is QBox, an open-source library intended to enable QEMU (including all its models and components) to be included in a ‘standard’ SystemC environment. This tutorial will focus on the technical challenges of memory management and time synchronization between QEMU and SystemC, how QBox currently deals with these issues, and what is being done within QEMU that will help.
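
      As a rough illustration of the time-synchronization challenge, the sketch below (in Python for brevity; QBox itself is C++/SystemC) shows a quantum-based scheme in which the fast side is allowed to run ahead of the slow side by at most one quantum before yielding; the quantum size and event granularity are made-up numbers, and this is not the actual QBox implementation.

        # Quantum-based synchronization between a fast and a slow simulator (illustrative).
        QUANTUM_NS = 1_000

        def run_quantum(local_time_ns, budget_ns):
            """Pretend to execute guest code for up to 'budget_ns' of simulated time."""
            return local_time_ns + budget_ns

        qemu_time = systemc_time = 0
        for _ in range(5):
            # The fast side executes a quantum's worth of work without synchronizing...
            qemu_time = run_quantum(qemu_time, QUANTUM_NS)
            # ...then yields so the slow side can catch up before the next quantum.
            while systemc_time < qemu_time:
                systemc_time += 250             # slow side processes its next event
            assert qemu_time - systemc_time <= QUANTUM_NS
        print(qemu_time, systemc_time)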

      QBox can be considered a staging ground for three separate bodies of code: code that works around limitations in the way QEMU is currently ‘usable’ (limitations that are slowly disappearing), additions to the SystemC library that are being upstreamed to the SystemC Common Practices working group, and finally a (hopefully small) body of code that brings these elements together to allow the simple construction of platforms.

      • Mark Burton
        General Chair

        Qualcomm

      • Alex Bennée

        Linaro

    • Scalable HW Performance Assessment

      The assessment of HW components from different suppliers and the evaluation of configuration options are a critical part of creating competitive products. A close integration of SW workloads with real execution data from the HW components is essential for making informed decisions. However, this process is often tedious and limited to a few design options due to lengthy lead and bring-up times, different simulation and emulation environments, tool dependencies, and lastly the required capacity on the Tier-1 side.

      Our tutorial focuses on the SW infrastructure needed to efficiently integrate in-house and third-party models into large company setups. We will demonstrate the current state of APIs and DSLs used for running simulation setups and evaluations in early design phases. Using available SW methods to support the deployment of simulation solutions can significantly reduce the setup time of simulations and enable a continuous, reproducible development process. Furthermore, we showcase and motivate the use of open description and exchange formats to increase the adoption of simulation and prototyping solutions.

      • Ingo Feldner

        Robert Bosch GmbH

    • Unleash the Full Potential of Your Waveforms: From Extra-functional Analysis to Functional Debug via Programs on Waveforms

      In the design phase, HDL simulation is at the heart of functional and extra-functional verification. The HDL simulator produces a waveform for each simulation run. In case of a deviation from the expected behavior, the waveform has to be analyzed and understood. For this task, waveform viewers are utilized. However, they only allow viewing signal relations visually, which is a highly manual and tedious process. While advanced verification techniques have introduced automation and led to the generation of “better” waveforms (e.g. by employing formal methods, reducing the length of waveforms, or minimizing the signals involved in a failing trace), there has been almost no progress in automating the analysis of waveforms.

      In this tutorial we bring automation to the analysis of waveforms and provide hands-on experience with the open-source Waveform Analysis Language (WAL). WAL allows you to code analysis tasks that run on waveforms to answer questions like:

      • What is the latency of my bus interfaces?
      • What throughput is my bus achieving?
      • When is the processor pipeline flushed or stalled during software execution?
      • Which software basic blocks are executed on my processor?
      • How can traditional waveform debugging be complemented with programmable waveform analysis?

      WAL (https://wal-lang.org and https://github.com/ics-jku/wal) has been realized as a Domain Specific Language (DSL). In comparison to other programming languages, WAL programs have direct access to all signal values of a waveform. Accessing signals in WAL is similar to accessing variables, with the difference that the value returned depends on the loaded waveform and the time at which the signal is accessed. The reference implementation of WAL is provided open-source in Python. In addition, WAL can be used as an Intermediate Representation (IR); this has been demonstrated by the implementation of WAWK, which makes complex waveform analysis as easy as searching in text files. Moreover, a “SystemVerilog-Assertion-to-WAL” compiler, called WSVA, has recently been developed to check SVAs on simulation traces.
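
      As a plain-Python illustration (not WAL code) of the first question in the list above, the sketch below computes the handshake latency of a bus interface from a toy trace of (time, req, ack) samples; the trace format and signal names are assumptions.

        # Measure request-to-acknowledge latency from a toy trace (illustrative format).
        trace = [
            (0, 0, 0), (1, 1, 0), (2, 1, 0), (3, 1, 1),   # request at t=1, ack at t=3
            (4, 0, 0), (5, 1, 0), (6, 1, 1),              # request at t=5, ack at t=6
        ]

        def handshake_latencies(samples):
            latencies, req_time, prev_req = [], None, 0
            for t, req, ack in samples:
                if req and not prev_req:            # rising edge of req
                    req_time = t
                if ack and req_time is not None:    # ack for the pending request
                    latencies.append(t - req_time)
                    req_time = None
                prev_req = req
            return latencies

        print(handshake_latencies(trace))   # -> [2, 1]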

      WAL has been used to analyze performance metrics of industrial RISC-V cores, estimate AXI performance in an industrial setting, analyze cache performance of configurable RISC-V cores, generate control-flow graphs for software running on a CPU, visualize pipeline instruction flows, and visualize RISC-V programs on CPU diagrams in educational settings.

      • Daniel Große
        Academic Chair

        Johannes Kepler Universität Linz

      • Lucas Klemmer

        Johannes Kepler University Linz, Austria

    • USF-based FMEDA-driven Functional Safety Verification

      As the degree of automation continues to increase, the intelligent SoC content in automobiles is growing faster than the general semiconductor market. As a result, the traditional automotive electronics suppliers are being rapidly joined in the market by many other established semiconductor companies and startups, as well as OEMs seeking differentiation through increased vertical integration. This ever-growing set of players all need to meet the functional safety requirements specified in standards such as ISO 26262. A process called Failure Modes, Effects, and Diagnostic Analysis (FMEDA) is critical to meeting these requirements. FMEDA has so far largely been the preserve of consultants using spreadsheet-based approaches. But now, automotive semiconductor design projects are looking for tools that offer greater automation and consistency in the FMEDA process, as well as greater ownership and integration with the design and verification flow. We believe the inherent ambiguity of functional safety standards will finally be resolved by a consolidated EDA standard that enables automation, consistent results, and information exchange between integrators across the product development lifecycle, analogous to industry standards for hardware description languages, timing, and power constraints.
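
      To make the FMEDA arithmetic concrete, here is a deliberately simplified Python sketch that combines hypothetical failure-mode rates with the diagnostic coverage of their safety mechanisms to obtain a residual failure rate and a single-point fault metric; the values are invented, and a real ISO 26262 FMEDA involves considerably more (e.g., latent-fault metrics and failure mode distributions).

        # Simplified FMEDA-style arithmetic with invented numbers (illustration only).
        failure_modes = [
            # (name,              FIT,  diagnostic coverage of its safety mechanism)
            ("ALU stuck-at",       2.0, 0.99),
            ("Register file bit",  5.0, 0.90),
            ("Bus parity path",    1.0, 0.00),   # no safety mechanism: single-point fault
        ]

        total = sum(fit for _, fit, _ in failure_modes)
        residual = sum(fit * (1.0 - dc) for _, fit, dc in failure_modes)
        spfm = 1.0 - residual / total

        print(f"total = {total:.1f} FIT, residual = {residual:.2f} FIT, SPFM = {spfm:.1%}")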

      In this workshop, Cadence will present the Midas™ Safety Platform and how it uses the Unified Safety Format (USF), a formalized method for safety analysis and specification of safety mechanisms, to provide a holistic and automated approach to FMEDA at the architectural, design, and diagnostic coverage verification stages. We will demonstrate how USF links the FMEDA process with digital design and verification flows for both digital and analog/mixed-signal, to provide a complete and consistent approach to meeting functional safety requirements.

      • Francesco Lertora

        Cadence Design Systems

      • Frederico Ferlini

        Cadence Design Systems