
RSS2: Workshop on Robustness and Safe Software 2.0

This workshop is held in conjunction with ASPLOS 2021.

Friday, April 16, 2021 | Full Day

Introduction

Welcome to the Workshop on Robustness and Safe Software 2.0 (RSS2).  

Unlike Software 1.0 (conventional programs), which is manually written with hard-coded parameters and explicit logic, Software 2.0 programs, usually manifested as and enabled by Deep Neural Networks (DNNs), have learned parameters and implicit logic. Software 2.0 is found in a diverse set of applications in today’s society, ranging from autonomous machines and Augmented/Virtual Reality devices to smart-city infrastructure.

While the systems and architecture communities have focused, rightly so, on the efficiency of DNNs, Software 2.0 exposes a unique set of challenges for robustness, safety, and resiliency, which are major roadblocks to Software 2.0 becoming a pervasive computing paradigm. For instance, small perturbations to inputs can easily “fool” DNNs into producing incorrect results, giving rise to so-called adversarial attacks. Similarly, while DNNs are generally resilient to hardware faults, few have studied the worst-case resiliency of DNNs to hardware faults, which usually dictates the safety of mission-critical systems.
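
To make the adversarial-attack example above concrete, here is a minimal sketch of a gradient-sign perturbation against a toy logistic classifier standing in for a DNN; the model, dimensions, and perturbation size are illustrative assumptions, not material from any of the talks.

```python
# Illustrative sketch only: a tiny, gradient-aligned input change shifts a
# toy "network" toward the wrong answer (the fast-gradient-sign idea).
import numpy as np

def predict(x, w, b):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # sigmoid "classifier"

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.0
x = rng.normal(size=8)
y = 1.0                                          # assumed true label

# Gradient of the cross-entropy loss w.r.t. the input (closed form for a sigmoid)
grad_x = (predict(x, w, b) - y) * w

# Small step in the sign of the gradient = the adversarial perturbation
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean score:", predict(x, w, b))
print("perturbed score:", predict(x_adv, w, b))
```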

Improving the robustness, safety, and resiliency of Software 2.0 is necessarily a cross-layer task, just as the algorithms, programming languages, architecture, and circuits communities came together in the Software 1.0 era. It is also critical not to hyper-optimize individual system components; rather, we must take a whole-of-system approach that understands the requirements and constraints of end-to-end systems, which are usually multi-chip and span client, edge, and cloud.

To that end, the workshop is meant to foster an interactive discussion about the role of computer systems and architecture research in building robust, safe, and resilient Software 2.0. Ultimately, the workshop is meant to lead to new discussions and insights on algorithms, architectures, and circuit/device-level design, as well as system-level integration and co-design.

Organizer

Agenda (All Times are EST)

Opening and Welcome: 12:00 ~ 12:10

Session 1: Algorithms & Applications: What are the new challenges for robustness, safety and resiliency?

Time | Speaker | Content
12:10 ~ 12:35 | Alfred Chen (UC Irvine) | Towards Secure and Robust Autonomy Software in Autonomous Driving
12:35 ~ 13:00 | Helen Li (Duke University) | Advancing the Design of Adversarial Machine Learning Methods
13:00 ~ 13:25 | Hima Lakkaraju (Harvard University) | Explainable ML: Challenges and Opportunities
13:25 ~ 13:45 | Panel
13:45 ~ 13:55 | Break

Session 2: Architecture & Systems: How can we build resilient and safe systems and hardware?

Time | Speaker | Content
13:55 ~ 14:20 | Yanjing Li (University of Chicago) | Resilient Deep Learning Accelerators
14:20 ~ 14:45 | Farinaz Koushanfar (UC San Diego) | Hardware in the Loop: ML for security and security for ML
14:45 ~ 15:10 | Anand Raghunathan (Purdue University) | Topic 3
15:10 ~ 15:30 | Panel
15:30 ~ 15:40 | Break

Session 3: Circuits & Devices: How can we harness emerging devices and their characteristics for extreme robustness?

Time | Speaker | Content
15:40 ~ 16:05 | Kaushik Roy (Purdue University) | Rethinking Computing with Neuro-Inspired Learning: Devices, Circuits, and Systems
16:05 ~ 16:30 | Suman Datta (University of Notre Dame) | Coupled Oscillators Based Few Shot Unsupervised Learning
16:30 ~ 16:55 | Shimeng Yu (Georgia Institute of Technology) | Challenges and Opportunities of Deep Neural Network Training or Inferencing with Synaptic Devices
16:55 ~ 17:15 | Panel

Closing: 17:15 ~ 17:25

Talk 1: Towards Secure and Robust Autonomy Software in Autonomous Driving

Abstract: Autonomous Driving (AD) technology has always been an international pursuit due to its significant benefits in driving safety, efficiency, and mobility. Over 15 years after the first DARPA Grand Challenge, its development and deployment are becoming increasingly mature and practical, with some AD vehicles already providing services on public roads (e.g., Google Waymo One in Phoenix and Baidu Apollo Go in China). In AD technology, the autonomy software stack, or the AD software, is highly security critical: it is in charge of safety-critical driving decisions such as collision avoidance and lane keeping, and thus any security problems in it can directly impact road safety. In this talk, I will describe my recent research that initiates the first systematic effort towards understanding and addressing the security problems in production AD software. I will focus on two critical modules, perception and localization, and describe how we are able to discover novel and practical sensor/physical-world attacks that can cause end-to-end safety impacts such as crashing into obstacles or driving off the road. Besides AD software, I will briefly talk about my recent research on autonomy software security in smart transportation in general, especially systems enabled by Connected Vehicle (CV) technology. I will conclude with a discussion on defenses and future research directions.

Speaker Bio: Qi Alfred Chen is an Assistant Professor in the Department of Computer Science at the University of California, Irvine. His research interest spans software security, systems security, and network security. Currently, his research focuses on security problems in autonomous CPS and IoT systems (e.g., autonomous driving and intelligent transportation). His work has had high impact in both academia and industry, with over 30 research papers in top-tier venues in security, mobile systems, transportation, software engineering, and machine learning; a nationwide USDHS US-CERT alert and multiple CVEs; over 50 news articles by major news media such as Forbes, Fortune, and BBC News; and vulnerability report acknowledgments from USDOT, Apple, Microsoft, Comcast, Daimler, etc. Recently, his research prompted over 20 autonomous driving companies, including Tesla, GM, Baidu, and Daimler, to start security vulnerability investigations; some have confirmed that they are working on fixes. He serves as a reviewer for various top-tier venues such as USENIX Security, ACM CCS, TIFS, TDSC, and T-ITS, and co-founded the AutoSec workshop (co-located with NDSS’21). His group won first place in the first AutoDriving Security CTF (part of BCTF) in 2020. Chen received his Ph.D. from the University of Michigan in 2018.

Talk 2: Advancing the Design of Adversarial Machine Learning Methods

Abstract: It has become clear that deep neural networks (DNNs) have an immense potential to learn and perform complex tasks. It is also evident that DNNs have many vulnerabilities with the potential to render them useless in complex and extended operating environments. The purpose of our research is to investigate ways in which DNN models are vulnerable to “adversarial attacks,” while also leveraging such adversarial techniques to construct more robust and reliable deep learning-based systems. We explore the potential weaknesses of DNN models by developing advanced feature-space adversarial attacks, which create adversarial directions that are generally effective across the data distribution. The learned distributions can also be used to analyze layer-wise and model-wise transfer properties and gain insights into how feature distributions evolve with layer depth and architecture. We also investigate ensemble methods against transfer attacks. Our approach (namely, DVERGE) isolates the adversarial vulnerability in each sub-model by distilling non-robust features. It then diversifies the adversarial vulnerability to induce diverse outputs against a transfer attack. New challenges for developing robust DNN models will be discussed at the end of the talk.

Speaker Bio: Hai “Helen” Li is the Clare Boothe Luce Professor and Associate Chair of the Department of Electrical and Computer Engineering at Duke University. She received her B.S. and M.S. from Tsinghua University and her Ph.D. from Purdue University. At Duke, she co-directs the Duke University Center for Computational Evolutionary Intelligence and the NSF IUCRC for Alternative Sustainable and Intelligent Computing (ASIC). Her research interests include machine learning acceleration and security, neuromorphic circuits and systems for brain-inspired computing, conventional and emerging memory, and software and hardware co-design. She has received the NSF CAREER Award, the DARPA Young Faculty Award, the TUM-IAS Hans Fischer Fellowship from Germany, the ELATE Fellowship, nine best paper awards, and another nine best paper nominations. Dr. Li is a fellow of IEEE and a distinguished member of ACM. For more information, please see her webpage.

Talk 3: Explainable ML: Challenges and Opportunities

Abstract: As machine learning is increasingly being deployed in real-world applications, it has become critical to ensure that stakeholders understand and trust these models. End users must have a clear understanding of model behavior so they can diagnose errors and potential biases in these models, and decide when and how to employ them. However, the most accurate models deployed in practice are not interpretable, making it difficult for users to understand where the predictions are coming from, and thus, difficult to trust. Recent work on explanation techniques in machine learning offers an attractive solution: they provide intuitive explanations for “any” machine learning model by approximating complex machine learning models with simpler ones. In this talk, I will discuss several popular post hoc explanation methods, and shed light on their advantages and shortcomings. I will conclude the talk by highlighting open research problems in the field.
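
As a rough illustration of the “approximate a complex model with a simpler one” idea mentioned above, the sketch below fits a local linear surrogate to an assumed black-box model around a single input; the black-box function, perturbation scheme, and sample count are illustrative assumptions, not any specific method from the talk.

```python
# Illustrative post hoc explanation sketch: query an opaque model around one
# instance and fit a simple linear surrogate whose weights act as the "explanation".
import numpy as np

def black_box(X):
    # Stand-in for an opaque, accurate model with a nonlinear decision rule
    return (np.sin(X[:, 0]) + X[:, 1] ** 2 > 0.5).astype(float)

rng = np.random.default_rng(1)
x0 = np.array([0.3, 0.8])                      # instance to explain

# Perturb the instance locally and query the black box
X_local = x0 + 0.1 * rng.normal(size=(500, 2))
y_local = black_box(X_local)

# Fit a linear surrogate by least squares on the local samples
A = np.hstack([X_local, np.ones((500, 1))])    # add a bias column
coef, *_ = np.linalg.lstsq(A, y_local, rcond=None)

print("local feature weights:", coef[:2])      # surrogate's explanation of x0
```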

Speaker Bio: Hima Lakkaraju is an Assistant Professor at Harvard University focusing on the explainability, fairness, and robustness of machine learning models. She has also been working with various domain experts in criminal justice and healthcare to understand the real-world implications of explainable and fair ML. Hima has recently been named one of the 35 Innovators Under 35 by MIT Technology Review, and has received best paper awards at the SIAM International Conference on Data Mining (SDM) and INFORMS. She has given invited workshop talks at ICML, NeurIPS, AAAI, and CVPR, and her research has also been covered by various popular media outlets including the New York Times, MIT Technology Review, TIME, and Forbes. For more information, please visit her webpage.

Talk 4: Resilient Deep Learning Accelerators

Abstract: Resilience to hardware failures is a key challenge as well as a top priority for deep learning (DL) accelerators, which have been deployed in a wide range of application domains, from edge computing and self-driving cars to cloud servers. DL accelerators are susceptible to various hardware failure sources, including temporary/transient errors (such as soft errors and dynamic variations) and permanent failures (such as early-life failures, circuit aging, and manufacturing defect escapes). Although DL workloads exhibit a certain tolerance to errors, such tolerance alone cannot guarantee that a DL accelerator will meet the resilience requirement of a target application in the presence of hardware errors. In this talk, I will first present a resilience analysis framework, which takes advantage of the architectural properties of DL accelerators to accurately and quickly analyze the behavior of hardware errors in these accelerators. Next, using this framework, we perform a large-scale resilience study to thoroughly understand the resilience properties of DL accelerators/workloads. The key findings of this study will be discussed. Finally, I will share our insights on how to design resilient DL accelerators.
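
As a back-of-the-envelope illustration of the kind of error-injection experiment that such resilience studies build on, the sketch below flips a single bit of one float32 weight in a toy layer and measures the output deviation; the layer shape, fault site, and bit position are illustrative assumptions, not the framework presented in the talk.

```python
# Illustrative fault-injection sketch: flip one bit of a weight's float32
# representation and compare the layer output against the fault-free case.
import numpy as np

def flip_bit(value, bit):
    """Flip one bit of a float32 value via its raw 32-bit integer representation."""
    as_int = np.array([value], dtype=np.float32).view(np.uint32)
    as_int ^= np.uint32(1 << bit)
    return as_int.view(np.float32)[0]

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 8)).astype(np.float32)     # toy layer weights
x = rng.normal(size=8).astype(np.float32)

clean_out = W @ x

W_faulty = W.copy()
W_faulty[2, 5] = flip_bit(W_faulty[2, 5], bit=30)  # flip a high exponent bit

faulty_out = W_faulty @ x
print("max output deviation:", np.max(np.abs(faulty_out - clean_out)))
```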

Speaker Bio: Yanjing Li is an Assistant Professor in the Department of Computer Science (Systems Group) at the University of Chicago. Prior to joining the University of Chicago, she was a senior research scientist at Intel Labs. Professor Li received her Ph.D. in Electrical Engineering from Stanford University, an M.S. in Mathematical Sciences (with honors) and a B.S. in Electrical and Computer Engineering (with a double major in Computer Science) from Carnegie Mellon University.

Professor Li has received various awards, including the Google Research Scholar Award, an NSF/SRC Energy-Efficient Computing: from Devices to Architectures (E2CDA) program award, the Intel Labs Gordy Academy Award (the highest honor in Intel Labs) and several other Intel recognition awards, an outstanding dissertation award (European Design and Automation Association), and multiple best paper awards (ACM Great Lakes Symposium on VLSI, IEEE VLSI Test Symposium, and IEEE International Test Conference).

Talk 5: Hardware in the Loop: ML for security and security for ML

Abstract: Hardware has a crucial role in both ML and security. On the one hand, recent advances in Deep Learning (DL), fueled by platform capabilities, have enabled a paradigm shift to include machine intelligence in a wide range of autonomous tasks. As a result, a largely unexplored attack surface has opened up, jeopardizing the integrity of DL models and hindering their ubiquitous deployment across various intelligent applications. On the other hand, DL-based algorithms are also being employed for identifying security vulnerabilities in long streams of multi-modal data and logs, including hardware logs. In this talk, I will discuss how end-to-end automated frameworks based on algorithm/hardware co-design help with both (1) realizing accelerated, low-overhead shields against DL attacks, and (2) enabling low-overhead, real-time intelligent security monitoring.

Speaker Bio: Farinaz Koushanfar is a professor and Henry Booker Faculty Scholar in the Electrical and Computer Engineering (ECE) department at the University of California San Diego (UCSD), where she is also the co-founder and co-director of the UCSD Center for Machine-Integrated Computing & Security (MICS). Her research addresses several aspects of efficient computing and embedded systems, with a focus on hardware and system security, real-time/energy-efficient big data analytics under resource constraints, design automation and synthesis for emerging applications, and practical privacy-preserving computing. Dr. Koushanfar is a fellow of the Kavli Foundation Frontiers of the National Academy of Engineering and a fellow of IEEE. She has received a number of awards and honors, including the Presidential Early Career Award for Scientists and Engineers (PECASE) from President Obama, the ACM SIGDA Outstanding New Faculty Award, the Cisco IoT Security Grand Challenge Award, MIT Technology Review TR-35 (2008), as well as Young Faculty/CAREER awards from NSF, DARPA, ONR, and ARO.

Talk 6: Rethinking Computing with Neuro-Inspired Learning: Devices, Circuits, and Systems

Abstract: Advances in machine learning, notably deep learning, have led to computers matching or surpassing human performance in several cognitive tasks including vision, speech, and natural language processing. However, implementations of such neural algorithms on conventional "von Neumann" architectures are several orders of magnitude more area- and power-expensive than the biological brain. Hence, we need fundamentally new approaches to sustain exponential growth in performance at high energy efficiency beyond the end of the CMOS roadmap, in the era of ‘data deluge’ and emergent data-centric applications. Exploring this new paradigm of computing necessitates a multi-disciplinary approach: exploration of new learning algorithms inspired by neuroscientific principles, development of network architectures best suited for such algorithms, new hardware techniques to achieve orders-of-magnitude improvement in energy consumption, and nanoscale devices that can closely mimic the neuronal and synaptic operations of the brain, leading to a better match between the hardware substrate and the model of computation. In this talk, I will focus on our recent work on neuromorphic computing with spike-based learning and the design of the underlying hardware, which can lead to substantial improvements in energy efficiency with good accuracy.

Speaker Bio: Kaushik Roy received the B.Tech. degree in electronics and electrical communications engineering from the Indian Institute of Technology, Kharagpur, India, and the Ph.D. degree from the electrical and computer engineering department of the University of Illinois at Urbana-Champaign in 1990. He was with the Semiconductor Process and Design Center of Texas Instruments, Dallas, where he worked on FPGA architecture development and low-power circuit design. He joined the electrical and computer engineering faculty at Purdue University, West Lafayette, IN, in 1993, where he is currently the Edward G. Tiedemann Jr. Distinguished Professor. He is also the director of the Center for Brain-Inspired Computing (C-BRIC), funded by SRC/DARPA. His research interests include neuromorphic and emerging computing models, neuro-mimetic devices, spintronics, device-circuit-algorithm co-design for nanoscale silicon and non-silicon technologies, and low-power electronics. Dr. Roy has published more than 800 papers in refereed journals and conferences, holds 28 patents, has supervised 91 Ph.D. dissertations, and is co-author of two books on Low Power CMOS VLSI Design (John Wiley & McGraw Hill).

Dr. Roy received the National Science Foundation Career Development Award in 1995, the IBM Faculty Partnership Award, the AT&T/Lucent Foundation Award, the 2005 SRC Technical Excellence Award, the SRC Inventors Award, the Purdue College of Engineering Research Excellence Award, the Outstanding Mentor Award in 2021, the Humboldt Research Award in 2010, the 2010 IEEE Circuits and Systems Society Technical Achievement Award (Charles Desoer Award), the IEEE TCVLSI Distinguished Research Award in 2021, the Distinguished Alumnus Award from the Indian Institute of Technology (IIT), Kharagpur, the Fulbright-Nehru Distinguished Chair, the DoD Vannevar Bush Faculty Fellowship (2014-2019), the Semiconductor Research Corporation Aristotle Award in 2015, and the 2005 IEEE Circuits and Systems Society Outstanding Young Author Award (Chris Kim), as well as best paper awards at the 1997 International Test Conference, the 2000 IEEE International Symposium on Quality of IC Design, the 2003 IEEE Latin American Test Workshop, 2003 IEEE Nano, the 2004 IEEE International Conference on Computer Design, the 2006 IEEE/ACM International Symposium on Low Power Electronics & Design, the 2012 ACM/IEEE International Symposium on Low Power Electronics and Design, and the 2006 and 2013 IEEE Transactions on VLSI Systems best paper awards. Dr. Roy was a Purdue University Faculty Scholar (1998-2003). He was a Research Visionary Board Member of Motorola Labs (2002), and has held the M. Gandhi Distinguished Visiting Faculty position at the Indian Institute of Technology (Bombay) and the Global Foundries Visiting Chair at the National University of Singapore. He has served on the editorial boards of IEEE Design and Test, IEEE Transactions on Circuits and Systems, IEEE Transactions on VLSI Systems, and IEEE Transactions on Electron Devices. He was a Guest Editor for special issues on low-power VLSI in IEEE Design and Test (1994), IEEE Transactions on VLSI Systems (June 2000), IEE Proceedings -- Computers and Digital Techniques (July 2002), and the IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2011). Dr. Roy is a fellow of IEEE.

Talk 7: Coupled Oscillators Based Few Shot Unsupervised Learning

Abstract: Today’s machine learning (ML) algorithms operate by constructing a model with parameters that are learned from a large input dataset such that the trained model can then make predictions about similar data. In this domain of narrow AI, much of the success comes from deterministic ML models such as feed-forward neural networks, where the globally calculated error is backpropagated through multiple layers for synaptic weight updates. This requires a repeated trial-and-error process, which is problematic for real-world applications such as learning from noisy data or inferring latent correlations within data. Restricted Boltzmann machines (RBMs) are useful generative energy-based models with applications ranging across pattern analysis and generation. However, unsupervised training of RBMs involves computationally demanding gradient calculations or sampling-based approximations with large sample sizes. In this talk, we will explore a novel pathway to reducing the sampling cost by implementing a physical compute fabric of coupled oscillators whose natural dynamics correspond to drawing uncorrelated samples from a desired RBM distribution. Leveraging the continuous-time dynamics markedly improves training speed and stability, along with faster convergence of the relative entropy (KL divergence), compared to conventional discrete-time Gibbs sampling.
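
For reference, the conventional discrete-time (block) Gibbs sampling that the coupled-oscillator fabric is proposed to replace can be sketched in a few lines; the RBM size, parameters, and chain length below are illustrative assumptions, not numbers from the talk.

```python
# Illustrative sketch of alternating block Gibbs sampling for a tiny binary RBM.
import numpy as np

rng = np.random.default_rng(3)
n_visible, n_hidden = 6, 4
W = 0.1 * rng.normal(size=(n_visible, n_hidden))   # synaptic weights
b_v = np.zeros(n_visible)                          # visible biases
b_h = np.zeros(n_hidden)                           # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gibbs_step(v):
    """One alternating update: sample hidden given visible, then visible given hidden."""
    h = (rng.random(n_hidden) < sigmoid(v @ W + b_h)).astype(float)
    v = (rng.random(n_visible) < sigmoid(h @ W.T + b_v)).astype(float)
    return v

# Iterate the Markov chain to draw (correlated) samples from the RBM's distribution
v = rng.integers(0, 2, size=n_visible).astype(float)
samples = []
for _ in range(1000):
    v = gibbs_step(v)
    samples.append(v)

print("mean visible activations:", np.mean(samples, axis=0))
```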

Speaker Bio: Suman Datta is the Stinson Professor of Engineering at the University of Notre Dame. He was a Professor of Electrical Engineering at The Pennsylvania State University, University Park, from 2007 to 2011. From 1999 to 2007, he was with the Advanced Transistor Group at Intel Corporation, Hillsboro, where he developed several generations of high-performance logic transistor technologies, including high-k/metal gate, Tri-gate, and non-silicon channel CMOS transistor technologies. His research group focuses on emerging device concepts that can enable novel computational models. He is a recipient of the Intel Achievement Award (2003), the Intel Logic Technology Quality Award (2002), the Penn State Engineering Alumni Association (PSEAS) Outstanding Research Award (2012), the SEMI Award for North America (2012), the IEEE Device Research Conference Best Paper Award (2010, 2011), the PSEAS Premier Research Award (2015), and the IEEE VLSI Symposium on Technology Best Paper Award (2020). He is a Fellow of IEEE and the National Academy of Inventors (NAI). He has published over 390 journal and refereed conference papers and holds 185 patents related to device technologies. He is the Director of ASCENT, a multi-university microelectronics research center sponsored by the Semiconductor Research Corporation (SRC) and the Defense Advanced Research Projects Agency (DARPA).

Talk 8: Challenges and Opportunities of Deep Neural Network Training or Inferencing with Synaptic Devices

Abstract: Hardware acceleration of deep neural networks drives new research and development of emerging memory devices that function as synaptic weight elements. Despite recent progress in synaptic devices, challenges arise due to non-ideal effects such as asymmetry and nonlinearity in conductance tuning, device-to-device variations, and cycle-to-cycle variations, which may degrade training accuracy. The limited precision and offset of analog-to-digital converters, as well as conductance drift, may also introduce inference accuracy loss. On the other hand, these non-ideal effects may also bring opportunities as a countermeasure mechanism against security vulnerabilities such as model reverse engineering and adversarial attacks. In this talk, we will present a software-hardware co-evaluation approach to evaluate the impact of non-ideal device properties on algorithm-level performance.
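
As a simple illustration of what such a software-hardware co-evaluation might check, the sketch below perturbs a layer's ideal weights with a hypothetical conductance-quantization and device-to-device variation model and reports the resulting output error; the noise model, level count, and variation magnitude are illustrative assumptions, not the framework from the talk.

```python
# Illustrative sketch: map ideal weights onto a limited number of conductance
# levels, add multiplicative device-to-device variation, and compare outputs.
import numpy as np

rng = np.random.default_rng(4)
W_ideal = rng.normal(size=(16, 32))   # ideal layer weights
x = rng.normal(size=32)               # one input activation vector

# Quantize weights to a limited number of conductance levels (assumed 32)
levels = 32
w_max = np.max(np.abs(W_ideal))
W_quant = np.round(W_ideal / w_max * (levels // 2)) / (levels // 2) * w_max

# Add device-to-device variation as multiplicative lognormal noise (assumed sigma)
sigma = 0.1
W_device = W_quant * rng.lognormal(mean=0.0, sigma=sigma, size=W_quant.shape)

ideal_out = W_ideal @ x
device_out = W_device @ x
rel_err = np.linalg.norm(device_out - ideal_out) / np.linalg.norm(ideal_out)
print("relative output error:", rel_err)
```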

Speaker Bio: Shimeng Yu is currently an associate professor of electrical and computer engineering at the Georgia Institute of Technology. He received the B.S. degree in microelectronics from Peking University in 2009, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University in 2011 and 2013, respectively. From 2013 to 2018, he was an assistant professor at Arizona State University. Prof. Yu’s research interests are semiconductor devices and integrated circuits for energy-efficient computing systems. His research expertise is in emerging non-volatile memories for applications such as deep learning acceleration, in-memory computing, 3D integration, and hardware security. Among Prof. Yu’s honors, he was a recipient of the NSF Faculty Early CAREER Award in 2016, the IEEE Electron Devices Society (EDS) Early Career Award in 2017, the ACM Special Interest Group on Design Automation (SIGDA) Outstanding New Faculty Award in 2018, the Semiconductor Research Corporation (SRC) Young Faculty Award in 2019, and the ACM/IEEE Design Automation Conference (DAC) Under-40 Innovators Award in 2020, and he was named an IEEE Circuits and Systems Society (CASS) Distinguished Lecturer for 2021-2022. He is a senior member of the IEEE.