Modern verification methodologies incorporate multiple coverage solutions. These range from functional to structural coverage, leverage various coverage models, and operate using varied technologies in both simulation and formal flows. The main purpose of these coverage solutions is to establish a signoff metric that indicates when enough verification has been performed. However, as coverage approaches have evolved, new use models have emerged for these tools that increase their value in the verification process.
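To make the idea of a coverage model as a signoff metric concrete, here is a minimal sketch in Python of a functional coverage group with user-defined bins. The `Covergroup` class, the bin names, and the packet-length example are all hypothetical illustrations; production flows would typically use SystemVerilog covergroups or a verification framework's coverage API.

```python
class Covergroup:
    """Toy functional coverage model: named bins with hit counts."""

    def __init__(self, name, bins):
        # bins: mapping of bin label -> predicate over sampled values
        self.name = name
        self.bins = bins
        self.hits = {label: 0 for label in bins}

    def sample(self, value):
        # Record the sampled value against every bin whose predicate matches.
        for label, predicate in self.bins.items():
            if predicate(value):
                self.hits[label] += 1

    def coverage(self):
        # Fraction of bins hit at least once -- a simple signoff metric.
        covered = sum(1 for n in self.hits.values() if n > 0)
        return covered / len(self.hits)


# Hypothetical example: cover small/medium/large packet lengths.
cg = Covergroup("pkt_len", {
    "small":  lambda v: v < 64,
    "medium": lambda v: 64 <= v < 1024,
    "large":  lambda v: v >= 1024,
})
for length in (32, 128, 256):
    cg.sample(length)
# Two of three bins are hit, so coverage is still short of signoff.
```

Signoff criteria in practice are richer than a single ratio (cross coverage, per-bin goals, exclusions), but the structure is the same: define the model up front, sample during simulation, and report the gap.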
Unifying Mixed-Signal and Low-Power Verification
Electronics design has long combined digital, analog, and power, enabling the phenomenal array of devices that permeate our lives. More than ever, these design elements are unified on a single chip, and the system depends on their integrated functionality. Achieving the performance, quality, and safety metrics needed for commercial and safety-critical applications will require new technologies and methodologies.
FPGA and ASIC SoCs require a robust pre-silicon hardware/software co-verification platform. Developing device drivers in a pure HDL/RTL simulation environment would be counterproductive, and developing or testing an embedded operating system and application stack would be impossible. Virtual platforms and virtual machines have been used by software developers as high-speed simulation vehicles, but they are only appropriate for standard components such as CPUs, memories, timers, and the like. The challenge emerges when a custom IP core is added to the design.
Automotive has evolved into one of the fastest-growing parts of the worldwide semiconductor industry, and automotive semiconductor content is exploding, driven by everything from LED headlights to the many advanced SoCs powering autonomous drive, infotainment, and vehicle communication systems. "Traditional" automotive electronics are not standing still either, with advanced drivetrains and sophisticated safety and ADAS systems creating demand for ever-larger and more integrated SoCs.
The purpose of this tutorial is to describe a verification process flow, especially as applied to safety-critical ASIC/FPGA designs. Verification of a design consists of two main goals. The first is to verify that the design behaves as described in the requirements; in this process, each requirement is tested and the full legal input space is explored. The second goal is to ensure that the design does not do anything it is not supposed to do; in general, this process makes sure that every component is tested and that illegal conditions are handled.
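The two goals can be sketched with a toy example. The saturating 8-bit adder below, its requirement labels, and the `ValueError` convention for illegal inputs are all hypothetical illustrations; a real flow would run such checks against RTL in a simulator, not against a Python model.

```python
def sat_add8(a, b):
    """Reference model: 8-bit addition that saturates at 255.

    Inputs outside the legal 8-bit range are treated as illegal
    conditions and rejected rather than silently accepted.
    """
    if not (0 <= a <= 255 and 0 <= b <= 255):
        raise ValueError("input outside legal 8-bit range")
    return min(a + b, 255)


# Goal 1: exercise each requirement over the legal input space.
assert sat_add8(1, 2) == 3          # hypothetical REQ-1: normal addition
assert sat_add8(200, 100) == 255    # hypothetical REQ-2: saturate on overflow

# Goal 2: confirm illegal conditions are detected, not silently accepted.
try:
    sat_add8(300, 0)
    illegal_input_detected = False
except ValueError:
    illegal_input_detected = True
```

The first group of checks is requirement-driven positive testing; the second is negative testing of behavior outside the legal input space, which safety-critical flows demand explicitly.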
With intelligence being added to vehicles at an unprecedented rate, the opportunities for automotive silicon have never been greater. These autonomous systems require both functional verification and functional safety verification. While many semiconductor teams have established methodologies for functional verification, functional safety verification methodology is new or in development. As such, these teams need to learn about the importance of fault injection for semiconductors based on the ISO 26262 functional safety standard for road vehicles.
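As a purely illustrative sketch of the fault-injection idea (not an ISO 26262 tool flow), the following Python model injects a single-bit fault into a parity-protected register and checks whether the safety mechanism detects it. The `ProtectedRegister` class and its method names are assumptions made for this example.

```python
def parity(value):
    """Even-parity bit over an 8-bit value."""
    return bin(value & 0xFF).count("1") % 2


class ProtectedRegister:
    """Toy model of a register guarded by a parity safety mechanism."""

    def __init__(self, value):
        self.value = value & 0xFF
        self.stored_parity = parity(self.value)

    def inject_fault(self, bit):
        # Model a transient single-event upset: flip one data bit
        # without updating the stored parity.
        self.value ^= (1 << bit)

    def fault_detected(self):
        # The safety mechanism recomputes parity and compares.
        return parity(self.value) != self.stored_parity


reg = ProtectedRegister(0xA5)
fault_free = not reg.fault_detected()   # no fault yet: mechanism is silent
reg.inject_fault(3)                     # inject a single-bit fault
caught = reg.fault_detected()           # mechanism flags the corruption
```

Fault-injection campaigns for ISO 26262 work at a very different scale (fault lists over gate-level or RTL netlists, diagnostic coverage metrics), but the question each injected fault answers is the same one this sketch poses: does the safety mechanism observe the fault?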
The algorithms needed to teach a computer to "see, understand, and make decisions" for ADAS and autonomous drive systems require a significant amount of parallel compute performance delivered at the lowest possible power. Often the technology used to implement these functions employs deep neural networks, which demand even more high-performance parallel compute resources and inference solutions that are also low power.
New demanding applications like ADAS, infotainment, and sensor fusion are driving growth in automotive semiconductors. These innovative features require both high compute power and predictability, which can only be delivered by heterogeneous architectures. We observe a trend towards multi-SoC Electronic Control Units (ECUs), where a high-performance compute cluster, often running multiple embedded OSes, is integrated alongside a traditional MCU cluster running a real-time OS to deliver on predictability and timing constraints.
Application-specific adaptability of electronic systems demands new design solutions. On the rise are automated firmware-based methodologies. From the application perspective, this allows flexible adaptation to meet today's multiple conflicting requirements, such as performance and power. This tutorial discusses techniques targeting the optimization of memory subsystems as well as the HW/SW interface under timing and power budgets. It then focuses on efficient VP-based simulation techniques complemented by formal verification approaches that take the firmware into account.
RISC-V (pronounced "risk-five") is a free and open ISA enabling a new era of processor innovation through open standard collaboration. Founded in 2015, the RISC-V Foundation comprises more than 100 members building the first open, collaborative community of software and hardware innovators driving innovation at the edge. Born in academia and research, the RISC-V ISA delivers a new level of free, extensible software and hardware freedom in architecture, paving the way for the next 50 years of computing design and innovation.