Tutorial on Hardware Architectures for Deep Neural Networks
Date: March 27, 2017
Location: Rooms 36-428/462
Registration: Free, Limited to 70
Deep neural networks (DNNs) are currently widely used for many Artificial Intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Accordingly, designing efficient hardware architectures for deep neural networks is an important step towards enabling the wide deployment of DNNs in AI systems.
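To give a sense of the scale of that computational cost, the sketch below (illustrative only, not taken from the tutorial) counts the multiply-accumulate (MAC) operations in a single convolutional layer; the shapes are roughly those of AlexNet's first convolutional layer.

```python
# Illustrative sketch: MAC (multiply-accumulate) count for one conv layer.
# Each output element requires one MAC per weight in the kernel volume.

def conv_macs(out_h, out_w, out_ch, k_h, k_w, in_ch):
    """MACs = (output elements) x (kernel height x width x input channels)."""
    return out_h * out_w * out_ch * (k_h * k_w * in_ch)

# AlexNet-like CONV1: 55x55 outputs, 96 filters, 11x11 kernels, 3 input channels
macs = conv_macs(55, 55, 96, 11, 11, 3)
print(f"{macs:,} MACs for a single layer")  # over 100 million MACs
```

Multiplied across all layers and across millions of inferences, counts like this are why the hardware architecture matters.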
In this tutorial, we will provide an overview of DNNs, discuss the tradeoffs of the various architectures that support DNNs including CPU, GPU, FPGA and ASIC, and highlight important benchmarking/comparison metrics and design considerations. We will also describe recent techniques that reduce the computation cost of DNNs from both the hardware architecture and network algorithm perspective.
Tutorial topics:
- Background of Deep Neural Networks
- Survey of DNN Development Resources
- Survey of DNN Hardware
- DNN Accelerator Architectures
- Network and Hardware Co-Design
By the end of this tutorial, attendees will:
- Understand the key design considerations for DNNs
- Be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics
- Understand the tradeoffs between various architectures and platforms
- Assess the utility of various optimization approaches
- Understand recent implementation trends and opportunities
Registration is free but limited to 70 participants; it will close when capacity is reached.
Learn more about the Eyeriss project.