International Symposium on Quality Electronic Design (ISQED)

ISQED'20 Keynotes

March 25-26

Active Learning for Fast, Comprehensive SPICE Verification

Jeff Dyck

Jeff Dyck, Director of Engineering - Mentor, a Siemens Business

The scope of SPICE-level verification has increased massively with new requirements for safety-critical applications, statistical timing characterization, wider FinFET voltage domains, and tighter product margins. We now have many more PVT corners to verify against, and many types of IP need to be verified to high sigma, requiring millions or billions of Monte Carlo samples. Simulation budgets have ballooned, and brute-force simulation methods no longer deliver the coverage required within production runtime constraints. For the past 14 years, Solido (now part of Mentor, a Siemens Business) has been using active learning technologies to accelerate SPICE verification by 10X to 1,000,000X while maintaining SPICE-level accuracy. This talk reviews the active learning techniques used within Solido's tools, provides a deeper dive into the all-new High-Sigma Verifier technology, and gives production usage updates from ML Characterization Suite Analytics and Generator.
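The surrogate-guided idea behind this kind of acceleration can be illustrated with a toy sketch (not Solido's actual implementation): generate a large Monte Carlo candidate pool cheaply, let an inexpensive surrogate rank the candidates by predicted failure likelihood, and spend the simulation budget only on the top-ranked points. Here `spice_sim` and the tail-distance "surrogate" are stand-ins for a real SPICE simulator and a trained model.

```python
import random

random.seed(0)

def spice_sim(x):
    # Stand-in for an expensive SPICE run: the circuit "fails"
    # when the sampled process parameter drifts past ~3 sigma.
    return x > 3.0

def high_sigma_search(n_candidates=100_000, sim_budget=200):
    """Rank a large Monte Carlo pool with a cheap surrogate,
    then simulate only the most failure-prone candidates."""
    pool = [random.gauss(0.0, 1.0) for _ in range(n_candidates)]
    # Surrogate model: here just distance into the tail; in practice
    # a model trained on earlier simulations does the ranking.
    ranked = sorted(pool, reverse=True)
    simulated = ranked[:sim_budget]  # spend the SPICE budget here only
    failures = sum(spice_sim(x) for x in simulated)
    return failures, sim_budget, n_candidates

failures, runs, pool_size = high_sigma_search()
```

In a production active-learning loop the process iterates: newly simulated points retrain the surrogate, which re-ranks the remaining pool until the tail failure-rate estimate converges, so the simulator only ever sees a small fraction of the candidate samples.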


About Jeff Dyck

Jeff Dyck is a Director of Engineering at Mentor, a Siemens Business, responsible for two software product development groups in the integrated circuit verification solutions (ICVS) division. Prior to joining Mentor, Jeff was VP of Engineering at Solido Design Automation, where he led Solido's R&D teams, managed Solido’s product lines, and co-invented Solido’s machine learning technologies. Solido was acquired by Mentor, a Siemens Business in 2017. Jeff is now working on evolving the active learning technology in Solido's products, as well as developing new disruptively differentiated tools within the Mentor analog mixed signal product line.

Re-Engineering Computing with Neuro-Inspired Learning: Devices, Circuits, and Systems

Kaushik Roy

Prof. Kaushik Roy - Edward G. Tiedemann Jr. Distinguished Professor, Purdue University

Advances in machine learning, notably deep learning, have led to computers matching or surpassing human performance in several cognitive tasks including vision, speech, and natural language processing. However, implementations of such neural algorithms on conventional "von Neumann" architectures are several orders of magnitude more area- and power-expensive than the biological brain. Hence, we need fundamentally new approaches to sustain exponential growth in performance at high energy efficiency beyond the end of the CMOS roadmap, in the era of "data deluge" and emergent data-centric applications. Exploring this new computing paradigm necessitates a multi-disciplinary approach: new learning algorithms inspired by neuroscientific principles, network architectures best suited to such algorithms, new hardware techniques that achieve orders-of-magnitude improvements in energy consumption, and nanoscale devices that closely mimic the neuronal and synaptic operations of the brain, leading to a better match between the hardware substrate and the model of computation. In this presentation, we will discuss our work on spintronic device structures consisting of single-domain and domain-wall-motion devices for mimicking neuronal and synaptic units. We will discuss implementations of different neural operations with varying degrees of bio-fidelity (from "non-spiking" to "spiking" networks) and of on-chip learning mechanisms (spike-timing-dependent plasticity). Additionally, we propose probabilistic neural and synaptic computing platforms that leverage the stochastic physics of spin devices under thermal noise. System-level simulations indicate ~100x reduction in energy consumption for such spintronic implementations over corresponding CMOS implementations across different computing workloads.
Complementary to these device efforts, we have explored learning algorithms including stochastic learning with one-bit synapses, which greatly reduces storage and bandwidth requirements while maintaining competitive accuracy; saliency-based attention techniques that scale the computational effort of deep networks for energy efficiency; and adaptive online learning that works within limited memory and resource constraints to learn new information without catastrophically forgetting already-learned data.
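As a rough illustration of one of these ideas, the sketch below implements stochastic learning with one-bit synapses on a toy perceptron: every weight is restricted to +1 or -1, and on a misclassification each disagreeing weight flips only with probability `p_flip`. The task and hyperparameters are invented for the example and are not from the talk.

```python
import random

random.seed(1)

def train_one_bit_perceptron(data, epochs=100, p_flip=0.2):
    """Perceptron whose synapses each store a single bit (+1 or -1).
    On a mistake, every weight that disagrees with the input-label
    correlation flips stochastically, with probability p_flip."""
    n = len(data[0][0])
    w = [random.choice([-1, 1]) for _ in range(n)]
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
            if pred != y:
                for i in range(n):
                    if w[i] * x[i] * y < 0 and random.random() < p_flip:
                        w[i] = -w[i]  # one-bit update: just flip the sign
    return w

# Toy majority-vote task: the label is the sign of x1 + x2 + x3,
# which the all-ones binary weight vector solves exactly.
data = [([1, 1, -1], 1), ([1, -1, 1], 1), ([-1, 1, 1], 1),
        ([-1, -1, 1], -1), ([1, -1, -1], -1), ([-1, 1, -1], -1)]
w = train_one_bit_perceptron(data)
```

Because each synapse stores a single bit rather than a full-precision value, storage and weight-update bandwidth shrink dramatically, at the cost of a noisier training trajectory.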


About Kaushik Roy

Kaushik Roy received the B.Tech. degree in electronics and electrical communications engineering from the Indian Institute of Technology, Kharagpur, India, and the Ph.D. degree from the electrical and computer engineering department of the University of Illinois at Urbana-Champaign in 1990. He was with the Semiconductor Process and Design Center of Texas Instruments, Dallas, where he worked on FPGA architecture development and low-power circuit design. He joined the electrical and computer engineering faculty at Purdue University, West Lafayette, IN, in 1993, where he is currently the Edward G. Tiedemann Jr. Distinguished Professor. He is also the director of the Center for Brain-Inspired Computing (C-BRIC), funded by SRC/DARPA. His research interests include neuromorphic and emerging computing models, neuro-mimetic devices, spintronics, device-circuit-algorithm co-design for nanoscale silicon and non-silicon technologies, and low-power electronics. Dr. Roy has published more than 700 papers in refereed journals and conferences, holds 18 patents, has supervised 85 Ph.D. dissertations, and is co-author of two books on low-power CMOS VLSI design (John Wiley & McGraw-Hill).
Dr. Roy received the National Science Foundation Career Development Award in 1995, the IBM Faculty Partnership Award, the AT&T/Lucent Foundation Award, the 2005 SRC Technical Excellence Award, the SRC Inventors Award, the Purdue College of Engineering Research Excellence Award, the Humboldt Research Award in 2010, the 2010 IEEE Circuits and Systems Society Technical Achievement Award (Charles Desoer Award), the Distinguished Alumnus Award from the Indian Institute of Technology (IIT), Kharagpur, the Fulbright-Nehru Distinguished Chair, the DoD Vannevar Bush Faculty Fellowship (2014-2019), and the Semiconductor Research Corporation Aristotle Award in 2015. He received best paper awards at the 1997 International Test Conference, the 2000 IEEE International Symposium on Quality of IC Design, the 2003 IEEE Latin American Test Workshop, 2003 IEEE Nano, the 2004 IEEE International Conference on Computer Design, and the 2006 IEEE/ACM International Symposium on Low Power Electronics & Design, as well as the 2005 IEEE Circuits and Systems Society Outstanding Young Author Award (Chris Kim), the 2006 IEEE Transactions on VLSI Systems Best Paper Award, the 2012 ACM/IEEE International Symposium on Low Power Electronics and Design Best Paper Award, and the 2013 IEEE Transactions on VLSI Systems Best Paper Award. Dr. Roy was a Purdue University Faculty Scholar (1998-2003). He was a Research Visionary Board Member of Motorola Labs (2002) and has held the M. Gandhi Distinguished Visiting Faculty position at the Indian Institute of Technology (Bombay) and the GlobalFoundries Visiting Chair at the National University of Singapore. He has served on the editorial boards of IEEE Design & Test, IEEE Transactions on Circuits and Systems, IEEE Transactions on VLSI Systems, and IEEE Transactions on Electron Devices. He was Guest Editor for special issues on low-power VLSI in IEEE Design & Test (1994), IEEE Transactions on VLSI Systems (June 2000), IEE Proceedings - Computers and Digital Techniques (July 2002), and the IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2011). Dr. Roy is a Fellow of the IEEE.

Spintronic Devices for Memory, Logic, and Neuromorphic Computing

Joseph S. Friedman

Joseph S. Friedman - Assistant Professor, Director of the NeuroSpinCompute Laboratory,
Department of Electrical & Computer Engineering, The University of Texas at Dallas

Given the impending end of CMOS scaling and its accompanying performance improvements, future advances in computing are contingent on the development of emerging technologies. Spintronics, in which electron spin is manipulated in addition to electron charge, is particularly intriguing due to the availability of non-volatility and the wide range of spintronic switching mechanisms. This presentation will provide an overview of spintronic devices and their application to memory, logic, and neuromorphic computing. The non-volatility inherent to many ferromagnetic structures has been exploited in magnetoresistive random-access memory based on spin-transfer torque, with recent progress towards improved endurance and switching energy through spin-orbit torque switching. Additionally, several approaches have been proposed to use spintronic devices to modulate the switching behavior of other spintronic devices, enabling integrated logic circuits with the potential for energy-efficient high-performance beyond-CMOS computing systems. Finally, non-volatile spintronic devices can be used to emulate the behavior of synapses and neurons in neuromorphic systems modeled on neurobiological structures, providing the opportunity to directly implement neural networks for artificial intelligence and machine learning applications.


About Joseph S. Friedman

Dr. Joseph S. Friedman is an assistant professor of Electrical & Computer Engineering at The University of Texas at Dallas and director of the NeuroSpinCompute Laboratory. He holds a Ph.D. and M.S. in Electrical & Computer Engineering from Northwestern University and undergraduate degrees from Dartmouth College. He was previously a summer faculty fellow at the U.S. Air Force Research Laboratory, a visiting professor at Politecnico di Torino, a CNRS research associate with Université Paris-Saclay, and a guest scientist at RWTH Aachen University, and he worked on logic design automation at Intel Corporation. He is a member of the editorial board of the Microelectronics Journal; the technical program committees of DAC, SPIE Spintronics, NANOARCH, GLSVLSI, and ICECS; and the ISCAS review committee. He has also been a member of the organizing committees of NANOARCH 2019 and DCAS 2018, and was awarded a Fulbright Postdoctoral Fellowship.