27 - 28 October, 2020

Virtual Conference

Paper abstracts


Paper 1.1 Does it pay off to add Portable Stimulus Layer on top of UVM IP block test bench?
Xia Wu, Jacob Sander Andersen - Syosil Aps
Ole Kristoffersen - Ericsson

The work presented is the outcome of deploying portable stimulus on an existing UVM block-level test bench. The target is to verify a highly configurable filter chain system that makes extensive use of generics and flexible run-time configuration. The Accellera Portable Stimulus Standard Domain-Specific Language (PSS-DSL) is used to create the abstract PSS model. The paper investigates the effort required to build a PSS-based solution on top of a traditional UVM test environment, and whether it contributes to better verification quality. We present five concrete challenges that we experienced when creating a PSS model: compile-time parameters, run-time configuration, inheritance, partial configuration and semantic equivalence, and we propose solutions to overcome or mitigate these challenges. Finally, we distill a few guidelines and considerations for use in future projects.


Paper 1.2 Make your Testbenches Run Like Clockwork!
Markus Brosch, Salman Tanvir, Martin Ruhwandl - Infineon Technologies AG

This paper outlines a SystemVerilog/Specman UVC architecture that enables efficient driving and monitoring of all clocks within a testbench. Handling such signals can be a complex and time-consuming task, especially when dealing with IPs that rely on several clocks that may or may not be related to each other. Hence, a generic and reusable verification IP can provide many advantages. We first discuss the various features that such a UVC should offer. These include configuration of timing characteristics, e.g. period, duty cycle and jitter, handling of asynchronous and synchronous clocks, and synchronous reconfiguration. Additionally, permanent monitoring of the clock frequency and checks, e.g. whether the clocks are running as expected, are useful. We then describe a prototype UVC and show how the aforementioned features can be implemented. Finally, we conclude with the results of this work.
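As a small illustration of what such a clock UVC computes internally, the sketch below (hypothetical names, not the Infineon UVC) derives rising/falling edge timestamps from a configurable period, duty cycle and bounded random jitter:

```cpp
#include <random>
#include <utility>
#include <vector>

// Illustrative sketch (not the UVC from the paper): schedule clock edges
// from a configurable period, duty cycle and bounded random jitter.
struct ClockCfg {
    double period_ns;   // nominal clock period
    double duty;        // fraction of the period spent high, e.g. 0.5
    double jitter_ns;   // maximum absolute jitter applied to each rising edge
};

std::vector<std::pair<double, double>>
edge_schedule(const ClockCfg& cfg, int cycles, unsigned seed = 1) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> jit(-cfg.jitter_ns, cfg.jitter_ns);
    std::vector<std::pair<double, double>> edges;  // (rise, fall) per cycle
    for (int i = 0; i < cycles; ++i) {
        double rise = i * cfg.period_ns + (cfg.jitter_ns > 0 ? jit(rng) : 0.0);
        double fall = rise + cfg.duty * cfg.period_ns;  // duty sets the fall edge
        edges.push_back({rise, fall});
    }
    return edges;
}
```

A real UVC would additionally drive these edges onto an interface and reconfigure them synchronously at run time.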


Paper 1.3 A Comprehensive Verification Platform for RISC-V based Processors
Emre Karabulut, Berk Kisinbay, Abdullah Yildiz, Rifat Demircioglu - Yonga Technology Microelectronics R&D

Although demand for RISC-V based verification solutions is growing, driven by the outstanding performance of RISC-V cores and the trend toward RISC-V based SoCs, there is no standard verification method for these SoCs. This paper presents a comprehensive UVM-based verification platform for this purpose. The platform provides a rich set of test vectors comprising standard test suites and custom test sets. This wide range of test vectors enables system-level and block-level verification of all RISC-V based SoCs. In addition, the platform includes functional simulators and robust tests for branch predictor and cache blocks, two important components of any modern processor.
Another contribution is an automated verification environment that is integrated with other environments and test suites. The verification environment also enriches itself with code, functional and instruction coverage metrics. The verification platform can support the full RISC-V standard ISA and its extensions. We evaluated our verification platform on a customized RISCV-BOOM based SoC and obtained 81% functional coverage, 80% code coverage, and 82% instruction coverage.


Paper 1.4 Mutable Verification Environments through Visitor and Dynamic Register Map Configuration
Matteo Barbati, Alberto Allara - STMicroelectronics

Creating a mutable verification environment able to dynamically change its behavior according to the register map configuration allowed us to simplify scenario generation while creating increasingly complex tests. Extending an existing VIP to add a synchronization mechanism with the register model, plus new capabilities for dynamic VIP reconfiguration, represents the typical solution to this problem. In this paper we propose an approach based on UVM register map dynamic configuration and the UVM visitor to create a powerful mutable verification environment in a simpler way.


Paper 1.5 Facilitating Transactions in VHDL and SystemVerilog
Rich Edelman - Mentor, A Siemens Business

This paper presents transaction recording and debug for the VHDL and SystemVerilog user via several techniques, including traditional SystemVerilog coding with bind, VHDL coding with bind, and VHDL integration with UVVM. Transaction modeling is also discussed for IP developers and system modeling.


Paper 1.6 Lean Verification Techniques: Executable SystemVerilog UVM Defect Table For Simulations
Kamel Belhous, Paul Ulrich - Teradyne, Inc
Steve Burchfiel, Kevin Schott - CorrectDesigns

This paper presents a novel approach for an 'executable' defect table to track and work around open issues in the design and verification environments. The defect table is source code, written in SystemVerilog UVM and built into the verification environment, which tracks the state of each issue and adjusts testbench behavior based on the defect state.
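The core idea of an executable defect table can be sketched as follows (hypothetical names, in C++ rather than the paper's SystemVerilog UVM): each entry tracks an issue's state, and checkers query the table to decide whether a known failure should be waived rather than flagged.

```cpp
#include <map>
#include <string>

// Illustrative sketch (not the authors' code): an in-testbench defect table.
enum class DefectState { Open, FixedInRtl, Closed };

class DefectTable {
public:
    void add(const std::string& id, DefectState s) { table_[id] = s; }
    void set_state(const std::string& id, DefectState s) { table_[id] = s; }

    // A checker calls this before flagging an error: while the defect is
    // Open, the mismatch is expected and the error is demoted to a waiver.
    bool is_waived(const std::string& id) const {
        auto it = table_.find(id);
        return it != table_.end() && it->second == DefectState::Open;
    }

private:
    std::map<std::string, DefectState> table_;
};
```

Because the table is source code inside the environment, closing a defect in the table immediately re-arms the corresponding checks in regression.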


Paper 2.1 A Methodology to Verify Functionality, Security, and Trust for RISC-V Cores
Nicolae Tusinschi, Wei Wei Chen - OneSpin Solutions

Modern processor designs present some of the toughest hardware verification challenges. These challenges are especially acute for RISC-V processor core designs, with a wide range of variations and implementations available from a plethora of sources. This paper describes a verification methodology available to both RISC-V core providers and system-on-chip (SoC) teams integrating these cores. It spans functional correctness, including compliance, detection of security vulnerabilities, and trust verification that no malicious logic has been inserted. Detailed examples of design bugs found in actual RISC-V core implementations are included.


Paper 2.2 Build Reliable and Efficient Reset Networks with a Comprehensive Reset Domain Crossing Verification Solution
Wanggen Shi - Big Fish Semiconductor Ltd
Yuxin You, Kurt Takara - Mentor, A Siemens Business

Reset-related challenges and risks in design and implementation can compromise the intent of reliable and efficient reset networks in modern SoCs as the size and complexity of these SoCs continue to increase. These risks may not be immediately apparent and can result in unexpected chip behavior. Sequential elements reset by different asynchronous resets create reset domain crossing (RDC) paths. Incorrect design through these RDC paths causes metastability and prevents designs from resetting to a known good state, which then results in unreliable behavior. This paper summarizes these challenges and risks, and introduces a methodology for building a structural reset network that addresses them. Finally, this paper presents an effective and comprehensive verification solution, based on Mentor Questa RDC and used on real projects, for performing reset checking statically, exhaustively and quickly, resulting in chips with efficient and reliable reset networks.


Paper 2.3 Model-based Automation of Verification Development for Automotive SoCs
Aljoscha Kirchner, Jan-Hendrik Oetjens - Robert Bosch GmbH
Oliver Bringmann - Eberhard Karls Universität Tübingen

Current technical trends in the automotive industry lead to a demand for more complex and at the same time more secure systems, also in the area of SoC development. This contrasts with the goal of ever shorter and more efficient development cycles. These challenges are particularly evident in verification, which takes up a considerable part of system development due to increasing requirements. To address these challenges, this paper presents a novel method for model-driven automation of verification. The method includes the formalization and modeling of the SoC specification using SysML, as well as the generation of SystemVerilog Assertions based on the modeled specification. The effort is thus reduced by minimizing manual transformation steps and by avoiding errors due to ambiguous specifications.


Paper 2.4 Discovering Deadlocks in a Memory Controller IP
Jef Verdonck, Emrah Armagan, Khaled Nsaibia, Slava Bulach - u-blox AG
Pranay Gupta, Anshul Jain, Chirag Agarwal, Roger Sabbagh - Oski Technology, Inc

The risk of deadlocks is one of the areas that is not well addressed by dynamic testing. Simulation does not provide the tools to target deadlocks directly, so finding deadlock scenarios generally happens by chance. On the other hand, formal verification is particularly well suited to verifying a wide range of forward progress properties of designs, such as absence of deadlock, live-lock, and starvation. In this paper, we present a formal verification methodology that has been shown to predictably discover deadlock in RTL designs. The methodology is applicable in the early phases of IP development and design integration. We share results from the application of the method on an industrial memory controller IP.


Paper 2.5 How To Verify Encoder And Decoder Designs Using Formal Verification
Jin Hou - Mentor, A Siemens Business

Encoders and decoders are widely used in communication systems such as telecommunication and networking. Encoders encode data before transmission, and decoders decode data at the receiver side. Traditionally, encoder and decoder circuits are verified with simulation. Verification engineers have to write lengthy simulation testbenches to generate data vectors, which can be time-consuming. No matter how good the testbenches are, it is impossible to cover all vectors in simulation. For encoders and decoders that can correct random faults, it is even harder for simulation to verify the error-correction function, since it cannot cover errors occurring at any bit at any random time. The traditional simulation method is not sufficient for verifying encoders and decoders.
This paper discusses two new methods based on formal verification for verifying encoders and decoders. One is formal property checking, and the other is formal sequential logic equivalence checking (SLEC). Formal verification is based on a mathematical representation and exhaustive algorithmic techniques. To use formal verification tools, users only need to write simple setup scripts; no simulation testbenches are needed. Formal tools automatically consider all possible input values and sequences. By applying non-determinism, formal tools can consider all random faults to verify the correction function of encoders and decoders.
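The exhaustiveness argument can be illustrated at toy scale. The sketch below is not the paper's BCH core; it uses a 5-bit repetition code, which also corrects up to 2 bit errors, to show the property a formal tool proves for every error pattern: decode(encode(d) ^ e) == d for all errors e of weight at most 2.

```cpp
// Illustrative sketch: exhaustive enumeration of all <=2-bit error patterns
// on a 5-bit repetition code (which, like the BCH core in the paper,
// corrects up to 2 bit errors).
unsigned enc(unsigned d) { return d ? 0x1F : 0x00; }  // 5 copies of the data bit

unsigned dec(unsigned cw) {                           // majority vote
    int ones = 0;
    for (int i = 0; i < 5; ++i) ones += (cw >> i) & 1;
    return ones >= 3 ? 1 : 0;
}

int weight5(unsigned e) {                             // number of flipped bits
    int n = 0;
    for (int i = 0; i < 5; ++i) n += (e >> i) & 1;
    return n;
}

// Check every data value against every error pattern of weight <= 2.
bool all_errors_corrected() {
    for (unsigned d = 0; d <= 1; ++d)
        for (unsigned e = 0; e < 32; ++e)
            if (weight5(e) <= 2 && dec(enc(d) ^ e) != d)
                return false;
    return true;
}
```

For a real BCH core the error-pattern space is far too large to simulate, which is precisely why the paper turns to formal property checking and SLEC.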
This paper uses the BCH encoder and decoder IP core from the OpenCores public domain as an example to demonstrate how to apply the two formal methods to encoders and decoders. The BCH encoder and decoder IP core can detect and correct up to 2-bit random errors. When random 1-bit or 2-bit errors occur on the transmission line from ENC to DEC, the decoder should detect the error and set the output error_detected to 1, and correct the error such that the output data dout is the same as the original data din. The structure of the circuit is as follows.


Paper 2.6 Using Formal to Prevent Deadlocks
Abdelouahab Ayari, Mark Eslinger, Joe Hupcey - Mentor, A Siemens Business

System deadlock is very difficult to detect with RTL VHDL or Verilog simulations. In fact, these are among the most difficult types of bugs to find. At best, you monitor key signals to see whether they have changed over a long period of time, but how long should you observe these signals before concluding that deadlock will not occur? Historically the solution has been to check the signals with local and global watchdog timers, which can be hard to include cleanly. The truth is that with simulation-based approaches you cannot differentiate between situations where your system is truly locked up and cases where the right stimulus has not come along to move the design to the next state. Additionally, simulation depends on the designer knowing the "right stimulus" to trigger a specific deadlock scenario. In contrast, formal analysis can exhaustively find deadlock scenarios in your design. However, the traditional iterative approach of writing "liveness" and "safety" properties in combination with manually written constraints can be time-consuming and error-prone even for formal experts. While there are academic assertion languages that can be used, these are designed for academic practitioners and are not useful for RTL-aware design and verification engineers who use industry-standard SystemVerilog Assertions (SVA). Hence, in this paper we show how using normal SVA liveness properties along with academic concepts allows RTL engineers to achieve the benefit of formal deadlock analysis without the iterative component or learning a non-standard assertion language. This simplifies the process of finding and preventing deadlock in your design.
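The contrast with simulation can be made concrete with a toy state-space search (not the Questa flow described in the paper): a formal engine exhaustively explores every reachable state and reports any state with no outgoing transition, whereas simulation reaches such a state only if the right stimulus happens to drive it there.

```cpp
#include <map>
#include <queue>
#include <set>
#include <vector>

// Illustrative sketch: exhaustive reachability search for deadlocked states
// (reachable states with no successor) in a small abstract FSM.
using Fsm = std::map<int, std::vector<int>>;  // state -> successor states

std::set<int> reachable_deadlocks(const Fsm& fsm, int reset_state) {
    std::set<int> seen{reset_state}, dead;
    std::queue<int> work;
    work.push(reset_state);
    while (!work.empty()) {                   // breadth-first exploration
        int s = work.front(); work.pop();
        auto it = fsm.find(s);
        if (it == fsm.end() || it->second.empty()) {
            dead.insert(s);                   // no way out: deadlock
            continue;
        }
        for (int n : it->second)
            if (seen.insert(n).second)        // enqueue newly seen states
                work.push(n);
    }
    return dead;
}
```

Real formal tools operate symbolically on the RTL rather than on an explicit graph, but the exhaustiveness guarantee is the same.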


Paper 3.1 Boosting Mixed-signal Design Productivity with FPGA-based Methods Throughout the Chip Design Process
Gabriel Rutsch, Simone Fontanesi, Steven Tan Hee Yeng, Andrea Possemato, Gaetano Formato, Wolfgang Ecker - Infineon Technologies AG
Steven Herbst, Mark Horowitz - Stanford University

In this paper, we show the impact on overall design productivity when employing FPGA-based prototyping methods throughout the mixed-signal chip design process. We made use of an open source modeling framework for generating synthesizable functional models of analog behavior that can be mapped on Xilinx FPGAs from an abstract model specification. In order to increase automation, ease-of-use and configurability when creating and working with FPGA-based prototypes, we used an open source FPGA automation framework. Two FPGA-based prototypes for a smart magnetic sensor used in automotive applications were developed and applied during product definition, pre-silicon verification, post-silicon verification and design-in activities. One of the prototypes is intended to be part of a Hardware-in-the-Loop setup; it is real-time capable and small enough to be integrated into a customer’s system. The other prototype serves as a computing platform and enables conducting parameter sweeps, regressions and interactive simulations. Compared to state-of-the-art CPU-based simulation, we were able to reduce regression runtime from roughly a month to hours. Additionally, activities related to product definition, customer design-in and post-silicon verification were conducted earlier and with more confidence in the application.


Paper 3.2 SOBEL FILTER: Software Implementation to RTL using High Level Synthesis
Bhavna Aggarwal, Umesh Sisodia, Snigdha Tyagi - CircuitSutra Technologies Pvt. Ltd

This document discusses the effort of migrating an open source image processing algorithm to a synthesizable design using the high-level synthesis library and HLS tool from Mentor, called Catapult. It inspects software simulation times and lines of code for synthesizable C and synthesizable SystemC designs against the original C implementation. This paper intends to draw the attention of System/IP designers and verification engineers to the benefits of High-Level Synthesis.


Paper 3.3 Single Source System to Register-Transfer Level Design Methodology Using High-Level Synthesis
Petri Solanti - Mentor, A Siemens Business
Thomas Arndt - COSEDA Technologies GmbH

A System-on-Chip (SoC) is a combination of different technology domains on one piece of silicon. With modern semiconductor processes it is possible to manufacture chips that have a large analog part, massive digital logic and multiple processors with a complex memory architecture on the same die. Yet the design methodologies of the different domains are often almost orthogonal. Digital hardware (HW) implementation is written at Register-Transfer Level (RTL), analog design is done at transistor level, and the software (SW) is written in object-oriented C++ or Java classes using abstract datatypes. Growing complexity and shorter design cycles increase the importance of design entry at a higher abstraction level and of better collaboration between the design teams. Another new dimension for the design process is the growing demand for virtual prototypes and fully functional simulation models that can be used in automotive system simulations or digital twin models, which requires a lot of additional work.
The design teams face another challenge due to the wide variety of languages used throughout the design process. Even the digital design flow may use four different languages, which leads to four models to be updated when the specification changes. Validation of the individual domains in the system context is difficult and slow.
A new design methodology that addresses all technology domains and reduces the number of models is needed. High-Level Synthesis (HLS) can be used to raise the abstraction level of the digital hardware description, but it does not tackle the whole challenge. This study presents a design methodology that is based on a single modeling language and can handle analog and digital hardware as well as software design aspects.


Paper 3.4 Mixed Electronic System Level Power/Performance Estimation using SystemC/TLM2.0 Modeling and PwClkARCH library
Antonio Genov, Loic Leconte - NXP Semiconductors
François Verdier - University of Cote d’Azur, CNRS, LEAT

Hardware/software (HW/SW) architectural exploration has become a key component of System-on-Chip (SoC) design modeling. Insufficient power and timing analysis capabilities at early stages of the design flow limit optimized modeling. Pushed by the need to improve the methodology of the early design stages and inspired by the numerous studies on Electronic System Level (ESL) modeling, we introduce a novel ESL methodology that combines power and performance estimation in one unified framework. In this paper, we present our new approach applied and tested on an NXP proprietary switch matrix/interconnect system used in the i.MX8 series of SoCs. Our model is based on SystemC-TLM2.0 and makes use of the PwClkARCH library for power management. This framework allows us to develop a SoC transaction-level model (TLM) written exclusively in C++/SystemC-TLM2.0 and to extract power consumption and performance metrics after simulation. This modeling approach enforces a strong separation between the functional SystemC/TLM model and its power intent description. Only a few pieces of code need to be added to the performance model to hook the power model to it. Moreover, it makes the code easier to debug and maintain.
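The kind of accounting a power library layered on a TLM model performs can be sketched as follows (hypothetical names, not the PwClkARCH API): energy is accumulated per power state as the sum of state power times time spent in that state, kept separate from the functional model.

```cpp
// Illustrative sketch: per-state energy accounting alongside a functional
// model. Units: 1 mW * 1 ns = 1 pJ.
class PowerMonitor {
public:
    // Called when the model enters a new power state at simulation time now_ns.
    void enter(double power_mw, double now_ns) {
        accumulate(now_ns);
        current_mw_ = power_mw;
    }

    // Total energy consumed up to simulation time now_ns, in pJ.
    double energy_pj(double now_ns) {
        accumulate(now_ns);
        return energy_pj_;
    }

private:
    void accumulate(double now_ns) {
        energy_pj_ += current_mw_ * (now_ns - last_ns_);  // P * dt
        last_ns_ = now_ns;
    }
    double current_mw_ = 0.0, last_ns_ = 0.0, energy_pj_ = 0.0;
};
```

The functional TLM model only needs a few hook calls into such a monitor, which is what keeps the power intent description separable from the functional code.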


Paper 3.5 Timing-Aware High Level Power Estimation of Industrial Interconnect Module
Amal Ben Ameur, François Verdier - University of Cote d’Azur, CNRS, LEAT
Antonio Genov, Loic Leconte - NXP Semiconductors

The semiconductor industry is developing smaller transistors and succeeding in increasing their on-chip integration density. Therefore, the computing power of modern Integrated Circuits (IC) is constantly increasing and their application domains are becoming countless. However, the increasing complexity leads to higher power consumption and more challenging designs. In order to address these issues and to differentiate themselves in the market, manufacturers and System-On-Chip (SoC) engineers are devoting tremendous effort to researching new development strategies. Numerous studies have shown that one of the essential steps to be taken is to review the early stages of the design flow and in particular to integrate simulation-based modeling and verification at higher level of abstraction.
In this paper we address this gap and present a proof of concept of an academic power estimation and management methodology, called PwClkARCH, on an NXP intellectual property (IP). Memory power estimation has been improved using DRAMPower. The results show that with PwClkARCH we are able to perform mixed performance/power/energy modeling on Approximately Timed (AT) SystemC models, which are widely used for architecture exploration and optimization. Our methodology allows us to dynamically extract power metrics and to apply power management and reduction strategies, while taking into account the functional model activity and the memory system's consumption.


Paper 3.6 Clock Controller Unit Design Metrics: Area, Power, Software flexibility and Congestion Impacts at System Level
Michele Chilla, Leonardo Gobbi - Qualcomm Ireland

The aim of this paper is to highlight how the same complex multi-clock digital IP system is impacted by different RTL design strategies for its dedicated Clock Control Unit (CCU). Area, power, software flexibility and routing congestion have been analyzed and compared. The digital front-end stage is covered entirely, from design to logic synthesis, using the DC (Design Compiler), DV (Design Vision) and PC (Power Compiler) tools. The results of this paper provide metrics and guidelines to CCU digital designers. The novelty lies in the fact that the current literature shows how clocks impact a design, but all analyses have been made under assumptions, with mathematical models used to shape design features. This work instead examines a real, cutting-edge digital sub-system with over 300k flops and all design features, such as the bus infrastructure linking complex internal cores. The RTL also includes DFT logic used to test all the faults of the chip on silicon.


Paper 4.1 Enhancing Quality and Coverage of CDC Closure in Intel’s SoC Design
Rohit Sinha - Intel

In non-Intel, ARM-based architecture SoC designs, it is often challenging to ensure that all the asynchronous design challenges are covered with utmost quality while keeping schedules on track. Since in non-IA architecture designs all the IPs are mainly sourced from external vendors, there is no standardized TFM that ensures the quality of CDC or RDC closure at the SoC level.
As a result, late design-cycle bugs often occur in the SoC design, and at times they cost an entire respin due to metastability issues or glitches in the clock-reset paths. Therefore, in order to handle the challenges of multiple vendors and multiple TFMs in SoC design integration, there is an absolute need to revamp the CDC signoff methodologies with a series of initiatives that would ensure zero Si escapes in the design.


Paper 4.2 Static Analysis of SystemC/SystemC-AMS System and Architectural Level Models
Karsten Einwich, Thilo Voertler - COSEDA Technologies GmbH

This paper introduces a generic concept and a framework for the static analysis of SystemC / SystemC AMS based system- and architectural-level models, built on SystemC sc_attributes. The framework permits the realization of analyzers for properties like power, area, resource utilization, gain, noise or IP2/IP3, whereby the properties can depend on the current system state.


Paper 4.3 Bit Density-based Pre-characterization of RAM Cells for Area Critical SOC Design
Dilip Ajay - QT Technologies Ireland

Memories constitute a significant percentage of almost all SoCs' die area. For a given logical depth, logical width and cycle-time requirement, we get many memory configurations with different values for access time, physical width, physical depth and area. Today, the decision to pick the right RAM cell for a given design is iterative and manual. Existing techniques mostly involve designer feedback, which is a slow, reactive and incremental approach. In multi-million-gate designs, predicting and mitigating design issues as early as possible in the design cycle avoids many iterative fixes, which has a huge impact on design convergence and turnaround time. In this paper, pre-characterization of RAMs (random access memories) for automatic area-efficient design is modeled, and the pre-characterization scores are used during synthesis. Using this approach, one can be sure that the tool picks the most area-efficient RAM cell for the design while meeting the other design constraints. This allows designers to resolve area problems caused by RAM cells before the layout design phase. We have not yet applied this flow to a large-scale high-level synthesis production design; it is mostly a concept at this stage.
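The selection step the pre-characterization enables can be sketched as follows (hypothetical field names and numbers): each candidate RAM configuration carries a pre-computed area score, and the flow picks the smallest cell that still meets the access-time constraint.

```cpp
#include <string>
#include <vector>

// Illustrative sketch: pick the most area-efficient RAM configuration that
// meets the access-time constraint.
struct RamConfig {
    std::string name;
    double area_um2;   // physical area (the pre-characterization score)
    double access_ns;  // access time
};

// Returns the index of the smallest-area config meeting the constraint,
// or -1 if none qualifies.
int pick_ram(const std::vector<RamConfig>& cfgs, double max_access_ns) {
    int best = -1;
    for (int i = 0; i < (int)cfgs.size(); ++i) {
        if (cfgs[i].access_ns > max_access_ns) continue;  // too slow
        if (best < 0 || cfgs[i].area_um2 < cfgs[best].area_um2)
            best = i;                                     // smaller area wins
    }
    return best;
}
```

In the flow described by the paper this choice is made automatically during synthesis, using the pre-characterization scores instead of manual designer iteration.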


Paper 4.4 Temporal Assertions in SystemC
Mikhail Moiseev, Leonid Azarenkov, Ilya Klotchkov - Intel Corporation

We propose temporal assertions in the SystemC language that look similar to SystemVerilog Assertions (SVA). These assertions can be declared in SystemC module scope as well as in a clocked thread process function. A temporal assertion contains a pre-condition expression, a time parameter and a post-condition, which is checked to be true if the pre-condition was true at the specified time in the past. Assertion expressions are checked every time the event occurs; this event is specified explicitly or taken from the current process sensitivity. The temporal assertions are automatically converted into SVA by our SystemC-to-Verilog Compiler tool during high-level synthesis (HLS).
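The pre-condition / time-parameter / post-condition semantics can be sketched in plain C++ (hypothetical names, not the Intel library): sample the pre-condition on every clock event, and N events later require the post-condition to hold, similar in spirit to the SVA implication `pre |-> ##N post`.

```cpp
#include <deque>
#include <functional>

// Illustrative sketch of a temporal assertion checker: if pre held N clock
// events ago, post must hold now.
class TemporalAssert {
public:
    TemporalAssert(std::function<bool()> pre,
                   std::function<bool()> post,
                   unsigned delay)
        : pre_(pre), post_(post), delay_(delay) {}

    // Call once per clock event; returns false if the assertion failed.
    bool on_clock() {
        pending_.push_back(pre_());        // sample the pre-condition now
        if (pending_.size() > delay_) {
            bool armed = pending_.front(); // pre-condition delay_ events ago
            pending_.pop_front();
            if (armed && !post_())         // post-condition must hold now
                return false;
        }
        return true;
    }

private:
    std::function<bool()> pre_, post_;
    unsigned delay_;
    std::deque<bool> pending_;
};
```

The paper's tool goes one step further and translates such assertions into native SVA during SystemC-to-Verilog compilation, so the same check runs in RTL simulation and formal tools.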


Paper 4.5 Accelerating and Improving FPGA Design Reviews Using Analysis Tools
Anna Tseng, Kurt Takara, Abdelouahab Ayari - Mentor, A Siemens Business

Design reviews against program requirements and prescribed methodologies must be conducted as part of program risk management, to ensure execution to schedule and budget requirements and to prevent functional issues in system deployment. This paper presents a process improvement that yields significant gains in review coverage, effort and duration.


Paper 4.6 Accelerating Automotive Ethernet validation by leveraging Synopsys Virtualizer with TraceCompass
Ashish Gandhi, Praveen Kumar Kondugari, Sam Tennent - Synopsys

Ethernet enables high-bandwidth and cost-effective data exchange and is therefore critical to cope with the ever-increasing demand for vehicle communications. To also provide the required reliability for in-vehicle communications, Ethernet has been enhanced with sophisticated protocols like Audio Video Bridging (AVB) and Time Sensitive Networking (TSN). This complexity substantially increases the effort to validate and test these software stacks. Automotive Tier-1s and OEMs are looking to virtual prototyping to enable a shift-left of their products' time-to-market with early architecting, validation and reduced-cost regression frameworks. Debugging an Ethernet path on a virtual platform is challenging, as so far there is no tool with a holistic view of the traversal of Ethernet transactions across multiple in-vehicle hops. This paper proposes a solution to simplify and enhance the validation and performance analysis of complex Ethernet scenarios for in-vehicle networks, through deep integration of Synopsys' Virtualizer and TraceCompass, an Eclipse-based plugin. This integration provides a comprehensive view across multiple Ethernet nodes, whilst allowing correlation of the Ethernet traffic with other hardware and software events in the individual ECUs.