International Symposium on Quality Electronic Design (ISQED)

ISQED'21 Embedded Tutorials

 

Chair & Moderators:
José Pineda de Gyvez - NXP Semiconductors (Chair)
Yu Pu - Alibaba (Co-Chair)


Tutorial 1
 Wednesday, April 7, 11:50AM-12:50PM

Semiconductors for the next wave in Automotive

Presenter:
Clara Otero Perez, NXP Semiconductors


Abstract: Clara will share insights into NXP's vision of the mobility of the future: NXP is advancing intelligent transport systems and electrification in order to shape a future in which zero emissions, zero road fatalities, and maximum convenience become reality. ADAS (Advanced Driver Assistance Systems) will come within everyone's reach and provide the safe and increasingly autonomous experiences that will reshape our relationship to transport. The technologies of the Automated Driving Domain will soon allow passengers to experience the ultimate in personalized and connected convenience as vehicles seamlessly Sense, Think, and Act on real-time road situations.

 

About Clara Otero Perez
Clara Otero Pérez is Director of Systems Innovations for Automotive at NXP Semiconductors in Eindhoven (NL). She is responsible for scouting new systems and applications in which NXP's products could play a role, and for building proofs of concept and demonstrators for those applications, such as the cooperative connected car, autonomous driving, and UAVs. After graduating in Physics from the University of Santiago de Compostela in Spain, she joined Philips Research in Eindhoven (NL) as a research scientist in the field of real-time systems and multimedia processors. In 2006 she moved to NXP and started working on automotive and secure-connectivity projects. In 2008 she became a department manager, driving innovation activities both internally and with partners in subsidy projects in the areas of IoT, the connected car, and cooperative mobility.


Tutorial 2
 Thursday, April 8, 12:35PM-1:35PM

Large-Scale Quantum Computers: The need for Cryo-CMOS

Presenter:
Dr. Fabio Sebastiano, Delft University of Technology, Delft, The Netherlands


Abstract: Quantum computers hold the promise to change our everyday lives in this century as radically as the classical computer did in the last century, by efficiently solving problems that are intractable today, such as large-number factorization and the simulation of quantum systems. Quantum computers operate by processing information stored in quantum bits (qubits), which must typically operate at cryogenic temperature. Today, qubits are mostly controlled by conventional electronics working at room temperature. This thermal gap can be readily bridged by a few wires, since today's quantum computers employ only a few qubits. However, practical quantum computers will require thousands of qubits or more, making this approach impractical. A solution is to build the qubit electrical interface using CMOS integrated circuits operating at cryogenic temperature (cryo-CMOS), hence very close to the qubits. This talk will give a brief introduction to quantum computers and their operation, followed by a description of their hardware implementation and their requirements in terms of electronic control and read-out, including the need for modeling the quantum/classical interface. Next, we will review the behavior of commercial CMOS devices and the available cryogenic device models required for circuit design. Several state-of-the-art cryo-CMOS circuits and systems, for both qubit drive and read-out, and their verification with qubits will then be described, highlighting challenges and opportunities. Finally, we will outline the prospects for qubit/electronics integration to enable the large-scale quantum computers required to address future world-changing computational problems.
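
To make the control task concrete, here is a minimal, generic sketch (not material from the tutorial) of what the classical electronics must accomplish for a single qubit: a calibrated control pulse implements a unitary rotation of the two-component qubit state, and readout returns outcome probabilities. Generating and routing such pulses from room-temperature instruments to thousands of qubits is exactly what becomes impractical, motivating cryo-CMOS controllers placed close to the qubits.

```python
# Generic sketch (not from the tutorial): what "controlling a qubit" means numerically.
# A calibrated control pulse applied by the (cryo-)CMOS electronics implements a
# unitary rotation of the qubit state; readout yields the measurement probabilities.
import numpy as np

def rx(theta):
    """Single-qubit rotation about the X axis by angle theta (radians)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s],
                     [-1j * s, c]])

ket0 = np.array([1.0 + 0j, 0.0 + 0j])        # qubit initialized to |0>
state = rx(np.pi) @ ket0                     # a "pi pulse" flips |0> to |1>
p1 = abs(state[1]) ** 2                      # probability of reading out |1>
print(f"P(|1>) after a pi pulse: {p1:.3f}")  # -> 1.000
```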

 

About Fabio Sebastiano
Fabio Sebastiano holds degrees in Electrical Engineering from the University of Pisa, Italy (BSc, 2003; MSc, 2005), from the Sant'Anna School of Advanced Studies, Pisa, Italy (MSc, 2006), and from Delft University of Technology, The Netherlands (PhD, 2011). From 2006 to 2013, he was with NXP Semiconductors Research in Eindhoven, The Netherlands. In 2013, he joined Delft University of Technology, where he is currently an Associate Professor and the Research Lead for the Quantum Computing Division of QuTech. He has authored or co-authored one book, 11 patents, and over 70 technical publications. His main research interests are sensor read-outs, frequency references, cryogenic electronics, and quantum computing. Dr. Sebastiano was a co-recipient of the Best Student Paper Award at ISCAS 2008, the Best Paper Award at IWASI 2017, and the Best IP Award at DATE 2018. He is a Senior Member of the IEEE, a TPC member for RFIC and IMS, and has served as a Distinguished Lecturer of the IEEE Solid-State Circuits Society.


Tutorial 3
 Friday, April 9, 9:00AM-10:00AM

Putting AI on a Diet: TinyML and Efficient Deep Learning

Presenter:
Prof. Song Han, MIT EECS


Abstract: Machine learning on tiny IoT devices based on microcontroller units (MCUs) is appealing but challenging: the memory of a microcontroller is two to three orders of magnitude smaller than even that of a mobile phone. We propose MCUNet, a framework that jointly designs the efficient neural architecture (TinyNAS) and the lightweight inference engine (TinyEngine), enabling ImageNet-scale inference on microcontrollers. TinyNAS adopts a two-stage neural architecture search approach that first optimizes the search space to fit the resource constraints, then specializes the network architecture within the optimized search space. TinyNAS can automatically handle diverse constraints (i.e., device, latency, energy, memory) at low search cost. TinyNAS is co-designed with TinyEngine, a memory-efficient inference library, to expand the design space and fit a larger model. TinyEngine adapts the memory scheduling according to the overall network topology rather than layer-wise optimization, reducing memory usage by 2.7x and accelerating inference by 1.7-3.3x compared to TF-Lite Micro and CMSIS-NN. MCUNet is the first to achieve >70% ImageNet top-1 accuracy on an off-the-shelf commercial microcontroller, using 3.6x less SRAM and 6.6x less Flash compared to quantized MobileNetV2 and ResNet-18. On visual and audio wake-word tasks, MCUNet achieves state-of-the-art accuracy and runs 2.4-3.4x faster than MobileNetV2- and ProxylessNAS-based solutions with 2.2-2.6x smaller peak SRAM. Our study suggests that the era of always-on tiny machine learning on IoT devices has arrived.
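
As a rough illustration of the constraint TinyNAS and TinyEngine jointly optimize, the sketch below (a simplification, not the actual MCUNet code) estimates the peak activation memory of a small sequential network, which must fit in the MCU's SRAM. Because the peak is set by the single layer with the largest combined input and output footprint, it depends on the whole topology, which is why whole-network memory scheduling can win over layer-wise optimization. The layer sizes and the 320 KiB budget are assumptions chosen only for illustration.

```python
# Hypothetical sketch (not TinyNAS/TinyEngine code): estimate the peak activation
# memory of a sequential int8 network, the quantity that must stay under the MCU's
# SRAM budget. The layer sizes and budget below are illustrative assumptions.
def peak_activation_bytes(activation_sizes, bytes_per_element=1):
    """Peak SRAM when each layer's input and output buffers must coexist."""
    peak = 0
    for inp, out in zip(activation_sizes, activation_sizes[1:]):
        peak = max(peak, (inp + out) * bytes_per_element)
    return peak

# Element counts of the activation tensors of a small CNN (input -> ... -> logits).
acts = [3 * 96 * 96, 16 * 48 * 48, 32 * 24 * 24, 64 * 12 * 12, 10]

budget = 320 * 1024  # assumed SRAM budget of a Cortex-M7-class MCU
peak = peak_activation_bytes(acts)
print(f"peak activation memory: {peak / 1024:.1f} KiB "
      f"({'fits' if peak <= budget else 'exceeds'} a {budget // 1024} KiB budget)")
```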

 

About Song Han
Song Han is an assistant professor at MIT EECS. His research focuses on efficient deep learning computing. He proposed "deep compression" and the "efficient inference engine," which first exploited model compression and weight sparsity in deep learning accelerators and have been integrated into many commercial AI chips and frameworks. More recently, he has been interested in efficient and compact neural network design with AutoML and neural architecture search (NAS). He is a recipient of the NSF CAREER Award and MIT Technology Review's Innovators Under 35. He earned a PhD in electrical engineering from Stanford University.


Tutorial 4
 Friday, April 9, 10:00AM-11:00AM

Security Challenges and Opportunities at the Intersection of Architecture and ML/AI

Presenter:
Prof. Nael Abu-Ghazaleh, University of California, Riverside


Abstract: Machine learning is an increasingly important computational workload, as data-driven deep learning models become essential across a wide range of application spaces. Computer systems, from the architecture up, have been impacted by ML in two primary directions: (1) ML is an increasingly demanding computing workload, with new accelerators and systems targeted at supporting both training and inference at scale; and (2) ML supports architectural decisions, with new machine-learning-based algorithms controlling systems to optimize their performance, reliability, and robustness. In this talk, I will explore the intersection of security, ML, and architecture, identifying both security challenges and opportunities. Machine learning systems are vulnerable to new attacks, including adversarial attacks crafted to fool a classifier to the attacker's advantage, membership inference attacks attempting to compromise the privacy of the training data, and model extraction attacks seeking to recover the hyperparameters of a (secret) model. Architecture can be a target of these attacks when supporting ML, but it also provides an opportunity to develop defenses against them, which I will illustrate with three examples from our recent work. First, I will show how ML-based hardware malware detectors can be attacked with adversarial perturbations to the malware, and how we can develop detectors that resist these attacks. Second, I will show an example of a microarchitectural side-channel attack that can be used to extract the secret parameters of a neural network, along with potential defenses against it. Finally, I will discuss how architecture can be used to make ML more robust against adversarial and membership inference attacks using the idea of approximate computing. I will conclude by describing some other open problems.
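
As a concrete illustration of the first attack class mentioned above, the sketch below crafts a fast-gradient-sign-style adversarial perturbation against a toy logistic-regression classifier. It is a generic textbook example under simplified assumptions, not the hardware-malware-detector attack or the defenses presented in the tutorial.

```python
# Generic FGSM-style sketch (not the tutorial's specific attack): a small, sign-bounded
# perturbation of the input flips the decision of a trained classifier. The data and
# model are toy stand-ins chosen so the script is self-contained.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data: two weakly separated Gaussian clusters in 100 dimensions.
n, d = 400, 100
X = np.vstack([rng.normal(-0.2, 1.0, (n // 2, d)),
               rng.normal(+0.2, 1.0, (n // 2, d))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# Train logistic regression with plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * np.mean(p - y)

def predict(v):
    return int((v @ w + b) > 0)

# Pick a correctly classified sample near the decision boundary.
scores = X @ w + b
correct = ((scores > 0).astype(float) == y)
idx = int(np.argmin(np.abs(scores) + 1e9 * ~correct))
x, label = X[idx], y[idx]

# FGSM-style step: move every feature by epsilon in the direction that increases
# this sample's loss, i.e. along sign(d loss / d x) = sign((p - y) * w).
epsilon = 0.1
p_x = 1.0 / (1.0 + np.exp(-(x @ w + b)))
x_adv = x + epsilon * np.sign((p_x - label) * w)

print("true label:", int(label),
      "| clean prediction:", predict(x),
      "| adversarial prediction:", predict(x_adv))
```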

 

About Nael Abu-Ghazaleh
Nael Abu-Ghazaleh is a Professor with a joint appointment in the CSE and ECE departments at the University of California, Riverside, and the director of the Computer Engineering program. His research interests include architecture support for security, high-performance computing architectures, and networking and distributed systems. His group's research has led to the discovery of a number of vulnerabilities in modern architectures and operating systems, which have been reported to companies and have impacted commercial products. He has published over 200 papers, several of which have been nominated for or recognized with best paper awards. He is an ACM Distinguished Member and an IEEE Distinguished Visitor.

