Tutorial on Hardware Architectures for Deep Neural Networks

Sponsored by the MIT Center for Integrated Circuits and Systems and the Microsystems Technology Laboratories at MIT

Date: March 27, 2017
Time: 1:00–5:00pm
Location: Rooms 36-428/462
Registration: Free, Limited to 70

Joel Emer
MIT/NVIDIA


Vivienne Sze
MIT


Yu-Hsin Chen
MIT

Email: eyeriss at mit dot edu

Overview

Deep neural networks (DNNs) are currently widely used for many Artificial Intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Accordingly, designing efficient hardware architectures for deep neural networks is an important step towards enabling the wide deployment of DNNs in AI systems.

In this tutorial, we will provide an overview of DNNs, discuss the tradeoffs of the various hardware platforms that support DNNs, including CPUs, GPUs, FPGAs, and ASICs, and highlight important benchmarking/comparison metrics and design considerations. We will also describe recent techniques that reduce the computation cost of DNNs from both the hardware architecture and network algorithm perspectives.

Agenda:

  • Background of Deep Neural Networks
  • Survey of DNN Development Resources
  • Survey of DNN Hardware
  • DNN Accelerator Architectures
  • Network and Hardware Co-Design

Participant Takeaways:

  • Understand the key design considerations for DNN hardware
  • Be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics
  • Understand the tradeoffs between various architectures and platforms
  • Assess the utility of various optimization approaches
  • Understand recent implementation trends and opportunities
Register Now

Registration is free. Attendance is limited to 70 participants and will close when capacity is reached.

Learn more about the Eyeriss project here.