We are witnessing a radical change in the way products are created, produced, and used. As a consequence of continuing digitalization, leading technology companies are pursuing the idea of a complete, high-fidelity “digital twin,” in which the boundaries between the design processes for mechanical parts, electronics, embedded software, sensors, and specialized IC technology disappear.
If you don’t measure, you don’t know. Verification planning and coverage metrics are crucial to track progress and achieve signoff. In particular, mutation coverage provides an accurate, intuitive account of the verification status, while detecting unverified, well-hidden error conditions missed by other formal and simulation coverage metrics. But how can we leverage mutation coverage to maximize the return on investment (ROI) of our formal verification effort?
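The idea behind mutation coverage can be sketched in a few lines. The following is a hypothetical Python model (real flows mutate the RTL itself, not a software model): each mutant injects one artificial design error, and any mutant that no test distinguishes from the reference model exposes a verification gap.

```python
# Illustrative mutation-coverage sketch on a toy reference model: a 3-input
# majority voter. All names here are hypothetical; real flows mutate RTL.

def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)

# Each mutant injects a single artificial design error.
mutants = {
    "and_to_or": lambda a, b, c: (a | b) | (a & c) | (b & c),
    "or_to_and": lambda a, b, c: ((a & b) & (a & c)) | (b & c),
    "drop_term": lambda a, b, c: (a & b) | (a & c),
}

# A deliberately partial test suite.
tests = [(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)]

def killed(mutant):
    """A mutant is killed if some test observes a difference."""
    return any(mutant(*t) != majority(*t) for t in tests)

survivors = [name for name, m in mutants.items() if not killed(m)]
coverage = 1 - len(survivors) / len(mutants)
print(f"mutation coverage: {coverage:.0%}, survivors: {survivors}")
```

Here the partial test suite kills only the dropped-term mutant, so mutation coverage is one third; the two surviving mutants point directly at the missing stimuli that the other coverage metrics would not flag.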
Unifying Mixed-Signal and Low-Power Verification
Electronics design has long included digital, analog, and power, enabling the phenomenal array of devices that permeate our lives. More than ever, these design elements are unified on a single chip, and the system depends on their integrated functionality. Achieving the performance, quality, and safety metrics needed for commercial and safety-dependent applications will require new technologies and methodologies.
An FPGA or ASIC SoC requires a robust pre-silicon hardware/software co-verification platform. Developing device drivers in a pure HDL/RTL simulation environment would be counterproductive, and developing or testing an embedded operating system and application stack would be impossible. Virtual platforms and virtual machines have long served software developers as high-speed simulation vehicles, but they are only appropriate for standard components such as CPUs, memories, timers, and the like. The challenge emerges when a custom IP core is added to the design.
Automotive has evolved into one of the fastest-growing segments of the worldwide semiconductor industry, and automotive semiconductor content is exploding, driven by everything from LED headlights to the many advanced SoCs powering autonomous drive, infotainment, and vehicle communication systems. “Traditional” automotive electronics are not standing still either, with advanced drivetrains and sophisticated safety and ADAS systems creating demand for ever-larger and more integrated SoCs.
The purpose of this tutorial is to describe a verification process flow especially suited to safety-critical ASIC/FPGA designs. Verification of a design has two main goals. The first is to verify that the design behaves as described in the requirements; in this process, each requirement is tested and the full legal input space is explored. The second is to ensure that the design does not do anything it is not supposed to do; in general, this process makes sure that every component is tested and that illegal conditions are handled.
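The two goals can be illustrated on a toy design, here a hypothetical 4-bit saturating adder (this example is not from the tutorial itself): goal one checks the requirement over the full legal input space, and goal two checks that illegal stimuli are rejected rather than silently accepted.

```python
# Hypothetical toy design used to illustrate the two verification goals:
# a 4-bit saturating adder.

def sat_add4(a, b):
    if not (0 <= a <= 15 and 0 <= b <= 15):
        # Goal 2: illegal inputs are rejected, not silently accepted.
        raise ValueError("input outside 4-bit range")
    return min(a + b, 15)

# Goal 1: the requirement is checked over the full legal input space
# (small enough here to explore exhaustively).
for a in range(16):
    for b in range(16):
        assert sat_add4(a, b) == min(a + b, 15)

# Goal 2: confirm that an illegal stimulus is actually handled.
try:
    sat_add4(16, 0)
    raise AssertionError("illegal input was accepted")
except ValueError:
    pass
```

For real designs the legal input space is far too large to enumerate, which is where constrained-random stimulus and formal methods come in; the structure of the two checks, however, stays the same.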
With intelligence being added to vehicles at an unprecedented rate, the opportunities for automotive silicon have never been greater. These autonomous systems require both functional verification and functional safety verification. While many semiconductor teams have established methodologies for functional verification, functional safety verification methodology is new or in development. As such, these teams need to learn about the importance of fault injection for semiconductors based on the ISO 26262 functional safety standard for road vehicles.
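A fault-injection campaign in the spirit of ISO 26262 can be sketched as follows. The model is a hypothetical even-parity-protected 8-bit register; real campaigns inject faults into RTL or gate-level netlists and measure the diagnostic coverage of the safety mechanism.

```python
# Hedged fault-injection sketch: flip bits in a hypothetical parity-protected
# 8-bit register and check whether the safety mechanism detects each fault.

def store(word):
    """Store an 8-bit word together with an even-parity bit."""
    return word, bin(word).count("1") % 2

def detect(word, parity):
    """The safety mechanism: recompute parity and compare."""
    return bin(word).count("1") % 2 != parity

def inject(word, bit):
    """Inject a single-bit transient fault (bit flip)."""
    return word ^ (1 << bit)

word, parity = store(0b1011_0010)

# Campaign: one single-bit fault per data bit.
detected = sum(detect(inject(word, b), parity) for b in range(8))
diagnostic_coverage = detected / 8  # parity catches every single-bit flip

# A double-bit fault escapes even parity, motivating stronger mechanisms (ECC).
double_fault = inject(inject(word, 0), 1)
```

The campaign reports full diagnostic coverage for single-bit faults, while the undetected double-bit fault shows why the standard requires analyzing fault models against each safety mechanism rather than assuming coverage.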
The algorithms needed to teach a computer to “see, understand, and make decisions” for ADAS and autonomous-drive systems require a significant amount of parallel compute performance at the lowest possible power. Often these functions are implemented with Deep Neural Networks, which demand even more high-performance parallel compute resources and inference solutions that are also low power.
New demanding applications like ADAS, infotainment, and sensor fusion are driving growth in automotive semiconductors. These innovative features require both high compute power and predictability, which can only be delivered by heterogeneous architectures. We observe a trend toward multi-SoC Electronic Control Units (ECUs), where a high-performance compute cluster, often running multiple embedded OSes, is integrated alongside a traditional MCU cluster running a real-time OS to deliver on predictability and timing constraints.
Application-specific adaptability of electronic systems demands new design solutions. On the rise are automated firmware-based methodologies. From the application perspective, these allow flexible adaptation to meet today’s multiple conflicting requirements, such as performance and power. This tutorial discusses techniques targeting the optimization of memory subsystems as well as the HW/SW interface under timing and power budgets. It then focuses on efficient VP-based simulation techniques, complemented by formal verification approaches that take the firmware into account.