2025 Tutorials

  • Sponsored Tutorials

    • Cadence Verisium - Optimizing Regressions and Debug with AI

      Hans Zander and Matt Graham 

      While Agentic AI promises to revolutionize the ways in which chips are designed and verified, the fundamental building blocks, AI Agents, are available today to optimize regressions and shorten total debug time. In this session we will highlight how the Verisium platform’s industry-leading verification planning, management, and debug capabilities have been upleveled and enhanced with AI agents, offering automation for everything from regression optimization to failure root-cause analysis. Topics will include automation and optimization for regression orchestration, testcase selection, verification planning, coverage closure, failure triage, bug prediction, and waveform analysis, all enhanced with AI.

    • Cybersecurity: A Model-Based Systems Engineering Approach to Risk Analysis and Mitigation (hands-on)

      Marco Bimbi and Cristian Macario 

      This is a hands-on session; attendees can bring their own laptops and work through the presented examples. Cyberattacks are on the rise, targeting electronic systems across various industries. For example, vulnerabilities in several car brands have allowed anything from remotely controlling the brakes while driving to stealing a vehicle within 30 seconds. Semiconductor devices often play a critical role in these attack scenarios, emphasizing the need for robust cybersecurity measures. Consequently, many industries have established cybersecurity standards such as ISO/SAE 21434, IEC 62443, DO-356, and the Cyber Resilience Act. These standards require a structured security risk analysis to anticipate weaknesses during early development and define security features that protect these systems from attacks.

      A wealth of metadata, including threats, attack vectors, and impact ratings, must be captured, analyzed to deduce risk factors, and kept consistent throughout the development phase. This data must also be updated over the whole product life cycle in response to security incidents after production.

      This hands-on tutorial walks attendees through a full workflow that manages security risks efficiently and consistently with a model-based approach. Participants will learn:

      • Fundamentals of Model-Based Design in the context of security risk analysis
      • Asset and threat identification (STRIDE method)
      • Feasibility estimation (attack potential method)
      • Severity assessment (attack simulation method)
      • Integration with safety data such as FHA and FMEA
      • Countermeasure definition, goal allocation, and residual risk calculation
      • Verification and validation of security goals
      • Change analysis to track design changes and keep risk data consistent

      Participants are encouraged to bring their laptops. We will provide access to the necessary tools to ensure a practical, hands-on experience.
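
      As a rough illustration of the attack potential and risk-matrix ideas listed in the topics above (not the tutorial's actual tooling or numbers; the factor names, thresholds, and matrix values below are placeholder assumptions), a simplified calculation in the spirit of ISO/SAE 21434 could be sketched in Python:

        # risk_sketch.py - illustrative only; thresholds and matrix values are placeholders,
        # not taken from ISO/SAE 21434 or the tutorial material.
        from dataclasses import dataclass

        @dataclass
        class AttackPotential:
            # Higher factor values mean the attack is harder to mount
            elapsed_time: int
            expertise: int
            knowledge_of_item: int
            window_of_opportunity: int
            equipment: int

            def feasibility(self) -> str:
                total = (self.elapsed_time + self.expertise + self.knowledge_of_item
                         + self.window_of_opportunity + self.equipment)
                if total < 14:
                    return "High"        # easy attack -> high feasibility
                if total < 20:
                    return "Medium"
                if total < 25:
                    return "Low"
                return "Very Low"

        # Placeholder risk matrix: impact rating (1 = negligible .. 4 = severe) vs. feasibility
        RISK_MATRIX = {
            "High":     {1: 2, 2: 3, 3: 4, 4: 5},
            "Medium":   {1: 1, 2: 2, 3: 3, 4: 4},
            "Low":      {1: 1, 2: 2, 3: 2, 4: 3},
            "Very Low": {1: 1, 2: 1, 3: 2, 4: 2},
        }

        def risk_value(ap: AttackPotential, impact: int) -> int:
            return RISK_MATRIX[ap.feasibility()][impact]

        # Example threat: remote brake control, severe impact, moderate attack potential
        print(risk_value(AttackPotential(4, 3, 3, 1, 4), impact=4))  # -> 4

      In a model-based flow, values like these become attributes of threat and asset model elements rather than literals in a script, so residual risk can be recomputed automatically whenever the design or its countermeasures change.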

    • Data-Driven Approach to Accelerate Coverage Closure on Highly Configurable ASIC Designs

      Tulio Pereira Bitencourt, Rasadhi Attale, Samuel Man-Shun Wong, Po-Shao Cheng and Anton Tschank

      During the last few decades, the semiconductor industry has experienced ever-growing complexity in integrated circuit (IC) designs. Requirements for chips have risen, and with more tasks now being allocated to devices that must be area-efficient, low-power, and operate at high frequencies, it becomes challenging to ensure that these marvels of technology work flawlessly before they go into production.

      As the industry moves towards Intellectual Property (IP)-based approaches, where independently developed blocks are purchased and embedded to compose more robust application-specific integrated circuit (ASIC) designs, verification engineers must now cope with the increased complexity of highly configurable ICs while ensuring acceptable time-to-market. Relying on robust, straightforward Electronic Design Automation (EDA) tools and effective data-driven methodologies ensures that coverage targets can be met even in the most complex scenarios.

      This tutorial aims to propose a novel data-driven approach to accelerate coverage closure on highly configurable ASIC designs. During the tutorial, the following topics will be covered:

      Challenges of working with highly configurable IPs: Discusses the different aspects of verifying fully parameterized ASIC designs that leverage randomized parameters in thoroughly compatible testbenches and coverage metrics.

      Requirement management and verification planning: Showcases the need for well-defined, mappable and machine-readable specifications, the creation of parameter-based verification plans, and the importance of end-to-end traceability for data-driven verification.

      Early bug detection: Focuses on the importance of continuously monitoring metrics to identify gaps or issues early in the process. Introduces live simulations and alternatives to track historical regression data for optimized debugging.

      Traffic light system & Unreachability: Presents the application of a traffic light system (i.e., green, amber and red waivers) for assessing coverage reachability under highly configurable environments.

      Structure for effective regressions: Introduces a powerful and well-defined regression system capable of randomizing parameters, generating data (e.g., coverage, failures, etc.) and collecting failure signatures.

      Accumulated coverage structure: Proposes a robust and straightforward procedure to generate and accumulate coverage metrics across all supported and randomized configurations.
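
      To make the accumulation and traffic-light ideas above concrete, a minimal sketch (bin names, data layout, and the classification policy are illustrative assumptions, not the presenters' flow) could merge per-configuration bin hits, track which configurations proved a bin unreachable, and tag each bin with a status:

        # coverage_merge_sketch.py - illustrative only; bin names and the green/amber/red
        # policy are placeholders, not the methodology's actual rules.
        from collections import defaultdict

        # Hypothetical per-configuration results: hit counts per coverage bin, plus bins
        # proven unreachable for that particular parameterization.
        runs = [
            {"config": {"WIDTH": 8},  "hits": {"fifo_full": 3, "ecc_error": 0}, "unreachable": set()},
            {"config": {"WIDTH": 16}, "hits": {"fifo_full": 1, "ecc_error": 0}, "unreachable": {"ecc_error"}},
        ]

        accumulated = defaultdict(int)
        unreachable_in = defaultdict(int)
        for run in runs:
            for name, hits in run["hits"].items():
                accumulated[name] += hits
            for name in run["unreachable"]:
                unreachable_in[name] += 1

        def classify(name: str) -> str:
            if accumulated[name] > 0:
                return "green"                      # covered in at least one configuration
            if unreachable_in[name] == len(runs):
                return "amber"                      # unreachable in every configuration -> waiver candidate
            return "red"                            # reachable somewhere but never hit

        for name in accumulated:
            print(f"{name:10s} hits={accumulated[name]:3d} status={classify(name)}")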

      Real-time and interactive dashboards: Highlights the value of real-time and interactive dashboards for acquiring an overall view of design/verification health and regression performance. Its advantages become clear when there is a need for scalable and efficient identification of issues in large permutations of IP configurations.

      Powerful tools to enhance time-to-market: Wraps up the presented methodology and describes how to effectively use a mix of graphic, analytic and artificial intelligence/machine learning (AI/ML) options to accelerate Coverage Closure.

      The data-driven approach presented here therefore aims to effectively accelerate the coverage-closure process for highly configurable ASIC designs by leveraging the presented state-of-the-art methodology and robust EDA options. The tutorial is structured to showcase a new methodology for coping with very complex real-world scenarios involving highly configurable designs, as well as to present thorough details on how it can be reproduced.

    • Liberating Functional Verification from Boolean Shackles

      Vikas Sachdeva

      Functional verification remains a major resource-intensive task in modern semiconductor design. According to industry studies, verification accounts for a large share of overall project effort, with logic bugs and clock-related issues among the leading causes of silicon re-spins. As designs grow in complexity, particularly in clocking, power management, and resets, static and formal verification methods have become indispensable.

      This tutorial explores the paradigm shift in functional verification, emphasizing the role of static signoff in enabling early defect detection, improved design quality, faster time-to-market, and cost reduction. The session covers:

      • Early Functional Verification Tools that generate signoff-quality results even before traditional simulation or formal methods.
      • Core Requirements for RTL Signoff, including speed, capacity, and comprehensive coverage.
      • Comparison of Static, Dynamic, and Formal Verification, highlighting how static signoff overcomes Boolean limitations, achieving results that are 10-100x faster and scalable to billions of gates.

      Real-world case studies will demonstrate how static signoff effectively resolves design re-spin issues, including logic bugs, clocking challenges, and reset failures. Key static signoff categories covered include:

      • Established RTL Static Signoff – Early bug detection via linting and multi-policy checks.
      • Enhanced RTL Static Signoff – Advanced DFT, glitch detection, and other verification pain points.
      • Expanded RTL Static Signoff – Addressing NoC validation, connectivity verification, and hardware security assessments.

      The tutorial concludes with best practices for static signoff, equipping RTL designers, verification engineers, SoC architects, and chip designers with the tools to implement a shift-left verification approach. Attendees will gain practical insights into optimizing functional verification, reducing rework, and accelerating design cycles.

    • Next-Gen Verification Technologies for Processor-Based Systems

      Aimee Sutton, Synopsys Inc.
      Jin Zhang, Synopsys Inc.

      Today’s complex processor-based systems enable technological advances in many market segments, such as AI, high-performance computing, and automotive. However, verification of these systems introduces new challenges, spanning from architectural verification of a custom RISC-V processor to memory coherency in a system containing thousands of Arm or RISC-V cores. As the complexity of the design increases, so does the need for new tools and methods beyond simulation and UVM testbenches.

      In this tutorial, we will focus on RISC-V processors and present next-generation verification techniques that span the verification journey from a single RISC-V processor to complex systems with many RISC-V cores.  

      To accommodate the flexible and evolving nature of the RISC-V ISA, as well as privilege mode features, out-of-order pipelines, interrupts and debug mode, RISC-V processor verification requires innovation in stimulus generation, comparison, and checking. We will cover dynamic and formal approaches to verifying RISC-V cores, with topics including, but not limited to: ISA compliance verification and functional coverage, data path validation, functional verification of critical blocks, and security verification.  

      Multi-core designs introduce a new set of challenges, such as ensuring fair access to shared resources and cache and memory coherence. This tutorial will present solutions designed to address these issues and prevent costly bug escapes to silicon.

      The size of multi-core designs and multi-processor SoCs means that a simulation-only verification strategy is impractical. Hardware-assisted verification (HAV) becomes essential to ensuring correct operation in the multi-core designs of today and the future. This tutorial will demonstrate how Synopsys’ next-generation processor verification tools and techniques combine with HAV platforms to create a powerful and effective solution.

      Whether you are a designer or a verification engineer of complex processor-based systems, you will walk away with new ideas on how to improve your verification flow by embracing these next-generation solutions.

    • Pre-Silicon Performance Benchmarking with Emulation Hybrids

      Leonard Drucker

      The semiconductor industry is experiencing an explosion in design size and complexity, accompanied by a need to deliver software readiness by the time silicon is back in the lab. One of the key targets for software readiness is achieving targeted performance specifications with real software applications stressing the hardware design. There are two key elements to validating performance pre-silicon: speed of model execution and the ability to execute a full software stack. Combining a fast virtual prototype of the CPU subsystem with the RTL of the remaining SoC running on an emulator typically produces a 10x speed-up over fully-RTL emulation setups. Recent advances in both virtual prototyping and emulation now yield another leap in hybrid performance, enabling pre-silicon execution of the entire software stack. Similarly, dynamic metrics collection needs to be efficient enough to handle these large workloads and to address industry and custom KPIs. In this workshop, we will first review the latest state of the art in hybrid emulation technologies and use cases. We will then illustrate the application of hybrid emulation to pre-silicon benchmarking execution and optimization.

    • Property Generator: simple generation of Formal Assertion IP

      Tobias Ludwig and Osama Ayoub

      The Property Generator is a proprietary tool developed by LUBIS EDA to simplify and accelerate formal verification. It results in correct-by-construction assertions, which are human-readable and remove the need for assertion review.

      The verification journey begins with writing a SystemC model that expresses functional intent. This model can be simulated using standard C++ and SystemC techniques, helping designers and verification engineers validate functionality early. The Property Generator then analyzes the model to auto-generate assertions that comprehensively describe the expected behavior.

      After generation, the assertions are loaded together with the design into your formal tool of choice or a simulator. In a typical setup, these assertions must be manually bound to RTL design signals—a process that is time-consuming and error-prone. To address this, the Property Generator integrates an AI-Refinement add-on, powered by Large Language Models (LLMs). These models can interpret both the abstract model and RTL structure, performing the refinement step with high accuracy and minimal user intervention.

      What makes our generated assertions special is that they ease proof complexity: rather than a few complex properties, we generate a large number of simple, focused ones, which are easier to prove individually. Once the design is verified, only the abstract model needs review, significantly reducing sign-off time.

      By using our tool, we enable more people to use formal verification and kick-start their formal journey.

    • Scalable Virtual Platforms for Automotive and Beyond

      Lukas Jünger, Matthias Berthold and Nils Bosbach

      The shift toward software-defined vehicles demands a radical rethinking of how automotive software is developed and tested. Virtual Platforms (VPs) – or Level 4 Virtual Electronic Control Units (L4 vECUs) – have emerged as a key enabler in this transformation, offering scalable, flexible, and hardware-independent environments for early and continuous software validation. This tutorial explores the practical and technical foundations of using VPs in automotive software testing, highlighting their role in accelerating development cycles, improving test coverage, and reducing reliance on physical prototypes. Participants will gain insights into building VPs using open-source technologies, integrating them with modern testing and automation toolchains, and leveraging detailed simulation models to represent complex SoCs and IP components. Furthermore, the tutorial delves into performance optimization strategies, drawing from both industry practices and academic research to address the challenges of simulation speed and scalability. By bridging the gap between hardware abstraction and software validation, VPs are reshaping the automotive development landscape. This session equips engineers, researchers, and toolchain architects with the knowledge to effectively deploy and scale virtual testing environments in real-world automotive workflows.

    • Unleashing the Potential of AI Within Functional Verification

      Darron May and Joseph Hupcey

      In the rapidly evolving landscape of semiconductor design, the complexity and scale of digital circuits continue to grow exponentially. Traditional methods of Register-Transfer Level (RTL) functional verification are increasingly challenged by these advancements, necessitating innovative approaches to ensure robust and efficient verification processes. This technical workshop aims to explore the integration of Artificial Intelligence (AI) and Machine Learning (ML) into RTL functional verification workflows, showcasing new products and methodologies that leverage these cutting-edge technologies.

      The workshop will feature a comprehensive overview of the current state of RTL functional verification, highlighting the limitations and bottlenecks faced by verification engineers. We will introduce a suite of new AI/ML-powered tools designed to enhance verification efficiency, accuracy, and coverage. These tools employ analytical, predictive, and generative AI to automate pattern recognition, detect anomalies, and generate "right by construction" artifacts such as code and assertions, significantly reducing the time and effort required for verification tasks.

      Via the following presentations on "smart" automation for design and testbench creation, debug and regression acceleration, engine optimizations, and collaborative coverage analysis, participants will be able to map their needs to these new AI/ML-accelerated flows, which use faster engines to enable faster engineers and optimize resources with fewer workloads:

      1. Introduction to AI/ML in RTL Verification: Understanding the basics of AI/ML and their applicability to RTL verification.
      2. New AI/ML-Driven Verification Tools: Demonstrations of the latest products incorporating AI/ML engines, including their features, benefits, and use cases.
      3. Case Studies and Real-World Applications: Insights from industry leaders on successful implementations of AI/ML in RTL verification, showcasing tangible improvements in verification outcomes.
      4. Future Trends and Challenges: Exploring the future potential of AI/ML in verification and addressing the challenges associated with their adoption.

      By the end of the workshop, attendees will have a deeper understanding of how AI/ML can transform their overall approach to RTL functional verification, driving innovation and efficiency in their verification processes.

      Join us to stay ahead of the curve and harness the power of AI/ML to tackle the complexities of modern RTL D&V!

    • Will it Blend? - Verifying the Hardware / Software Interface of complex SoCs

      Tim Schneider, Insaf Meliane and Alvin Santos

      Verification of modern System on Chip (SoC) designs involves many components: Hardware Description Languages (VHDL, SystemVerilog), the Unified Power Format (UPF), software languages (C#/C++), interconnect standards (IP-XACT, AMBA), and specialty purpose-built layers such as the Universal Verification Methodology (UVM) and SystemVerilog Assertions (SVA). This tutorial explores using Arteris SoC integration technologies to "blend" these components together, proposing a more efficient methodology to increase productivity and help ensure first-time SoC project success.

  • Tutorials

    • CDC-RDC Standardization: Concepts & Status

      Jean-Christophe Brignone

      CDC-RDC analysis has evolved into an indispensable stage of RTL quality signoff over the last two decades. Over this period, designs have grown exponentially, to SoCs with 2 trillion+ transistors and chiplet-based systems integrating 7+ SoCs. Today, CDC verification has become a multifaceted effort across chips designed for client, server, mobile, automotive, memory, AI/ML, and FPGA applications, with a focus on cleaning up thousands of clocks and constraints, integrating the SVAs for constraints into the validation environment to check for correctness, looking for crossings induced by power-domain and DFT logic, and finally signing off with netlist CDC to unearth any glitches and synchronizers corrupted during synthesis.

      As design sizes increased with every generation, EDA tools could no longer handle flat analysis, and the only way to manage design complexity was hierarchical CDC-RDC analysis that consumes abstracts. Hierarchical analysis also enables teams across the globe to work in parallel. Even with all this significant progress in EDA tool capabilities, the major bottleneck in CDC-RDC analysis of complex SoCs and chiplets is consuming abstracts generated by different vendor tools. Abstracts from different vendor tools arise because of multiple IP vendors; even in-house teams might deliver abstracts generated with different vendor tools.

      The Accellera CDC Working Group aims to define a standard CDC-RDC IP-XACT / TCL model to be portable and reusable regardless of the involved verification tool.

      As the industry moves from monolithic designs to IP-based SoCs, with IPs sourced first from a small set of select providers and now globally (to create differentiated products), quality must be maintained while driving faster time-to-market. In areas where standards exist (SystemVerilog, OVM/UVM, LP/UPF), integration can meet both goals (quality and speed). However, in areas where standards are not available, as is the case for CDC-RDC, most options trade off quality, time-to-market, or both. Creating a standard for interoperable collateral addresses this gap.

      This tutorial reviews the definitions of basic CDC-RDC concepts and constraints, describes the reference verification flow, and addresses the goals, scope, structure, and deliverables of the Accellera CDC Working Group as it elaborates a specification of the standard abstract model.

      A status update on the latest LRM version, open to public review by Q2, will also be presented.

    • Cocotb and PyUVM tests powered with pytest

      Tymoteusz Blazejczyk, Abhiroop Bhowmik, Ronahi Halitoglu and Shrinivas Naik

      Target audience: RTL verification engineers who are currently using or want to start using Cocotb, optionally with PyUVM. This tutorial is for those looking for significant improvements in their current Way-of-Working (WoW), especially with discovering, managing, and running a large number of tests.

      Cocotb and PyUVM already provide significant Quality-of-Life (QoL), workflow, ecosystem, and productivity improvements to the RTL verification landscape. They overcome limitations and flaws from classic approaches like using SystemVerilog + UVM and in-house tooling and flows.

      Currently, Cocotb encourages using pytest, the most popular Python testing framework, to manage, build, and run HDL simulations with Cocotb using Python. This is an alternative to custom in-house solutions, mostly based on Makefiles, TCL, or shell scripts.

      Pytest is a very popular and powerful testing framework available in Python. It allows you to easily discover, introspect, and manage a large number of tests. Pytest's built-in plugin facility allows you to extend pytest and verification environment capabilities by creating new plugins or using existing ones. These can range from generating HTML reports to CI integration with custom or external flows. It also extends test re-usability and provides a clean setup/teardown context solution for tests by introducing a modern approach with fixtures. Pytest parametrization eliminates duplicate code for testing multiple sets of input and output. Rewritten assert statements provide detailed output for causes of failure.

      Unfortunately, current documented Cocotb examples show pytest usage in a very narrow, specific use case: to only build and run HDL simulations without using all available pytest capabilities directly in Cocotb or PyUVM tests to extend the overall verification experience.

      At Qblox, we strive to use tools to their fullest to boost our development productivity and deliver top-class, high-quality products to our customers in the quantum field, which are constrained by tight requirements and high expectations. We want to share our experience with the industry and verification community on how to achieve that by leveraging Python + Cocotb + PyUVM + pytest together.
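
      As a minimal sketch of what this can look like (the design name, source file, and test module below are assumptions for illustration, not from Qblox's environment), pytest parametrization can drive cocotb's runner API so that each RTL parameterization becomes an individually discoverable, reportable test:

        # test_adder_pytest.py - illustrative sketch; "adder.sv", its WIDTH parameter, and the
        # cocotb test module "adder_cocotb_tests" are hypothetical stand-ins.
        import os

        import pytest
        from cocotb.runner import get_runner  # runner API location may differ between cocotb releases

        @pytest.mark.parametrize("width", [8, 16, 32])
        def test_adder(width):
            sim = os.getenv("SIM", "verilator")              # choose the simulator via the environment
            build_dir = f"sim_build/width_{width}"           # keep builds separate per parameterization
            runner = get_runner(sim)
            runner.build(
                verilog_sources=["adder.sv"],                # hypothetical RTL source
                hdl_toplevel="adder",
                parameters={"WIDTH": width},                 # sweep the RTL parameter from pytest
                build_dir=build_dir,
            )
            runner.test(
                hdl_toplevel="adder",
                test_module="adder_cocotb_tests",            # module containing the @cocotb.test() coroutines
                build_dir=build_dir,
            )

      From there, standard pytest machinery applies: plugins such as pytest-xdist can fan the parameterizations out across CPUs, and pytest-html or JUnit XML output can feed CI dashboards, without bespoke Makefile or shell glue.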

    • Creating a Co-Simulation Environment for the Questa Simulator Using QEMU

      Telat Işık, Faruk Karaahmet and Melike Karabalkan

      This tutorial provides information and step-by-step details on creating a co-simulation environment for the Questa Advanced Simulator using QEMU.

    • Efficient SoC Modeling, Architectural Exploration, and Result Analysis using TLM2 based IPs

      Rocco Jonack, Matthias Jung and Jean-Blaise Pierrès

      The advancement of semiconductor technology, characterized by shrinking feature sizes and the increasing viability of die-to-die and chip-to-chip interconnects, continues to drive the integration of complex electronic systems onto a single chip, giving rise to the paradigm of System-on-Chip (SoC) design. The ability to model the behavior of these intricate systems before committing to physical implementation is paramount. This tutorial addresses the critical need for efficient modeling, comprehensive architectural exploration, and insightful result analysis within the constraints of typical project lifecycles.

      The increasing availability and maturity of pre-verified Intellectual Property (IP) blocks, such as CPUs, DRAM controllers, and interconnect fabrics, present an opportunity to shift the focus from low-level component modeling to system integration. By leveraging these readily available IPs, designers can concentrate on the critical aspects of system architecture, inter-component communication, and overall system behavior.

      This tutorial aims to bridge the gap between theoretical modeling techniques and practical SoC design workflows. We will explore strategies for accelerating the initial model generation process, enabling designers to rapidly prototype and evaluate different architectural configurations.

      A key aspect of this tutorial is the use of architectural exploration techniques. This will involve simulation-based techniques to evaluate the impact of different architectural choices, such as the number and type of key initiators (CPUs, video processing units, or accelerators), the memory hierarchy configuration, and the interconnect topology.

      Generating tangible and actionable results from model simulations is crucial for both internal design optimization and effective customer communication. This tutorial will provide practical guidance on how to extract meaningful insights from simulation data, including performance metrics, power consumption profiles, and resource utilization statistics specifically in an open-source environment. We will explore techniques for visualizing and analyzing simulation results, enabling designers to identify bottlenecks, optimize resource allocation, and validate system performance. We will also discuss strategies for generating flexible reports that can be tailored to the specific needs of different stakeholders, including internal design teams, management, and customers.
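
      As a small illustration of this kind of post-processing (the trace file, its columns, and the metrics below are assumptions, not the tutorial's tooling), transaction-level simulation output can be reduced to latency and bandwidth figures with a few lines of Python:

        # analyze_trace_sketch.py - illustrative only; assumes a hypothetical CSV trace with
        # columns: initiator, target, start_ns, end_ns, bytes
        import pandas as pd

        df = pd.read_csv("transactions.csv")                      # hypothetical exported trace
        df["latency_ns"] = df["end_ns"] - df["start_ns"]

        # Per-initiator latency statistics help spot interconnect or memory bottlenecks
        print(df.groupby("initiator")["latency_ns"].describe()[["mean", "50%", "max"]])

        # Rough aggregate bandwidth per target over the simulated window
        window_ns = df["end_ns"].max() - df["start_ns"].min()
        bytes_per_ns = df.groupby("target")["bytes"].sum() / window_ns
        print((bytes_per_ns * 1e9 / 2**20).rename("MiB_per_s"))   # convert to MiB/s

      The same data frame can then feed plotting or report-generation libraries, which is one way the flexible, stakeholder-specific reports mentioned above can be produced from a single set of simulation results.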

      The tutorial will be structured around a series of practical examples and case studies. We will begin by discussing the requirements for common system components, such as CPUs, DRAM controllers, and interconnects. We will then explore techniques for integrating these components into a unified platform, focusing on the challenges of inter-component communication and synchronization. We will present several illustrative platform examples, showcasing different SoC architectures and their associated use models. Finally, we will delve into the analysis of results from model simulations, demonstrating how to extract meaningful insights and generate actionable reports. Specifically, the tutorial will cover the following key topics:

      • Defining requirements for essential system components: This section will focus on the process of specifying the functional, performance, and power requirements for individual IPs, including CPUs, DRAM controllers, and interconnect fabrics.
      • Seamless integration of components into a unified platform: This section will explore techniques for assembling and configuring a system-level model from individual IP components. We will discuss the challenges of inter-component communication and synchronization and explore different approaches for modeling these interactions.
      • Illustrative platform examples and their associated use models: This section will present several real-world SoC architectures, showcasing different design choices and their impact on system performance.
      • Analysis of model simulation results: This section will focus on the techniques for extracting meaningful insights from simulation data. We will discuss methods for visualizing and analyzing performance metrics and resource utilization statistics. We will also explore strategies for generating flexible reports that can be tailored to the specific needs of different stakeholders.

      By attending this tutorial, participants will gain a comprehensive understanding of the challenges and opportunities associated with SoC modeling, architectural exploration, and result analysis. They will acquire practical skills and knowledge that can be immediately applied to their own SoC design projects, enabling them to accelerate development cycles, enhance product quality, and reduce time-to-market.

    • Expediting Coverage Closure in Digital Verification with the Portable Stimulus Standard (PSS)

      Tulio Pereira Bitencourt, Nikolaos Ilioudis, Ahmed Abd-Allah, Daniel Waszczak, Anton Tschank and Tom Fitzpatrick

      The semiconductor industry is constantly growing and evolving, with complexity increasing to cope with the need for optimized integrated circuits (ICs) that are more robust and reliable. Among many requirements, the need for systems that can execute different tasks and interconnect with multiple other devices emerges as critical, which complicates testing, since one must now deal with intricate designs.

      The Portable Stimulus Standard (PSS) from Accellera tackles this complexity by allowing test cases to be executed at different levels of abstraction, such as during verification using SystemVerilog and UVM (Universal Verification Methodology) or when prototyping a Register-Transfer Level (RTL) circuit in a field-programmable gate array (FPGA). PSS was released by Accellera in 2018 and has been added to Electronic Design Automation (EDA) tools ever since, with a rising number of companies using it.

      Specifically for the digital verification phase for application-specific integrated circuits (ASICs), PSS can be used on top of UVM to enhance its capabilities and allow for dynamic test cases, the so-called scenarios, to be randomly created prior to simulations. PSS relies on a task-based system where independent operations can be defined as either UVM sequences or virtual sequences, and a PSS-compatible tool generates scenarios based upon rules described in a PSS model file.

      Using PSS combined with UVM allows many test cases to be effortlessly randomized based solely upon three critical files: the PSS test, the PSS model, and the list of tasks. This approach significantly reduces bug-finding and coverage-closure times, as random scenarios can swiftly create a wide range of stimuli for RTL designs.

    • Introduction to the Apheleia Verification Library

      Andy Bond

      In the years since UVM was introduced, bringing a common structure and methodology to re-usable verification environments, much has changed in both the designs under verification and the technology available. AVL (Apheleia Verification Library) is an open-source Python library focusing on user experience and efficiency, while retaining the scalability and re-use mentality of UVM.

      This tutorial will walk participants through their first AVL test-bench, introducing the features, key differences, and potential efficiency gains versus UVM. In addition, the tutorial can be run on any mid-level laptop, with no requirement for commercial tools or licenses, demonstrating the potential benefits to students, hobbyists, and early-stage innovators who wish to focus on innovation without requiring significant up-front investment.

    • Standardization of Multi-Physics Interfaces and Model Exchange Mechanisms

      Sumit Adhikari, Heiko Schick and Daniel Hedberg

      The increasing prevalence of tightly-coupled, multi-physics systems across domains such as electronics, electromagnetics, thermal, mechanical, and fluidic design necessitates standardized methods for model interchange and tool interoperability. Current industrial workflows rely on heterogeneous, proprietary interfaces and loosely defined integration schemes, leading to inefficiencies in co-simulation fidelity, model portability, and system verification. To address this gap, we propose a workshop to define the scope and objectives of a prospective Accellera Working Group dedicated to the standardization of interface definitions and model exchange formats for multi-physics design environments. The workshop will convene stakeholders from industry, academia, and tool vendors to:

      1. Identify core interoperability challenges in multi-physics workflows;
      2. Examine existing solutions (e.g., FMI, Modelica, SystemC, SystemC-AMS) and their limitations in heterogeneous domain coupling;
      3. Delineate candidate requirements for a unified interface abstraction supporting cross-domain semantics, co-simulation control, and metadata exchange;
      4. Formulate initial charter language, use case classes, and a phase-wise deliverable roadmap for the proposed working group;
      5. Assess opportunities for alignment with existing Accellera standards and external consortia.

    • Tackling the cyber-physical system design challenges with MBSE and SystemC

      Karsten Einwich, Rolf Meyer and Petri Solanti

      Mastering the interdisciplinary complexity of cyber-physical systems requires a holistic design methodology that can handle the specific needs of all implementation domains involved. A model-based systems engineering methodology, backed by the versatile simulation technology of SystemC, can tackle the design and verification challenges of cyber-physical systems. This tutorial demonstrates a holistic methodology for cyber-physical system design.

    • Teaching Analog CMOS Chip Design using Open Source Tools

      Ted Johansson

      Microelectronics are essential for all products with even a little bit of technology inside. The COVID-19 pandemic exposed vulnerabilities in global supply chains, particularly in the automotive sector, which faced production disruptions due to a shortage of critical chips, mainly produced in Asia. The EU and US Chips Act programs have renewed the focus on semiconductor fabrication, new devices, circuit design, and supply chains. In the next couple of years, huge amounts of money, shared between government and industry, will be spent on rebuilding the semiconductor industry in Europe and the USA.

      However, during the last 25 years, this area has not attracted students. We need to educate a new generation of engineers and researchers in semiconductors and circuit design. To meet the foreseen need for education and training, Uppsala University is developing new courses in electronics, semiconductors, circuit and chip design, and manufacturing technology. These courses will range from continuing education to engineering programs and doctoral studies.

      In this tutorial, I will describe how we created a PhD course (which will also be offered to master's programs in 2025/26) on analog IC design, covering the design chain, the tools, and everything you need to know to design and have chips fabricated at a foundry using only open-source tools and PDKs.

    • Tutorial on the Improvements Introduced by the IEEE 1801-2024 (UPF 4.0) Standard for the Specification, Implementation and Verification of Low-Power Intent

      Joshua Ong, Nathalie Meloux, Gabriel Chidolue and Shaun Durnan

      As advanced low-power architectures have become more pervasive in industry, the complexity of these architectures has driven new methodologies for the verification, implementation, and reuse of power intent specifications. Modern low-power designs place requirements that span from enabling more flexible IP design reuse to providing well-defined interfaces between analog and digital components in simulation. The IEEE 1801-2024 (UPF 4.0) standard provides several key enhancements that are required to keep pace with these innovations in low-power design. The tutorial is based on the DVCON 2025 US Workshop titled "Introduction of IEEE 1801-2024 (UPF4.0) Improvements for the Specification, Implementation and Verification of Low-Power Intent". The tutorial will provide an overview of the enhancements to the standard from both conceptual and command levels. New concepts such as virtual supply nets, refinable macros, and UPF libraries will be introduced, as well as re-architected features with respect to interfacing between analog and digital simulation and advanced state retention modeling for enhanced semantics. While the new IEEE 1801-2024 standard provides numerous detailed clarifications and enhancements to the previous version, this workshop will focus on the key changes that will impact most designers and changes that enable new functionality.

    • UVM Masterclass Tutorial - bringing the next generation up to speed

      Mark Litterick and Andre Winkelmann

      Although verification using UVM is an established technology within the industry, this does not mean it is easy. The sheer scope of the verification challenge and the variety of techniques and skills that must be learned to master this craft go way beyond the level of expertise acquired at university or training classes. This tutorial is intended to assist emerging experts by reminding them that they are not alone in the challenges they face, in addition to providing direct pragmatic advice on various UVM and SV topics of interest.

    • Verification Vision Tutorial - a personal perspective on the state of the art

      Mark Litterick

      This discourse-style tutorial provides a personal perspective on the main verification problems and challenges within the industry, including technical aspects, manpower, and scope. A subjective view of current solutions and trends is also discussed, including the current state of emerging technologies. The observations are based on the personal experience of the presenter and colleagues across many different projects with various clients.