Tutorial on Hardware Architectures for Deep Neural Networks
Date: March 27, 2017
Location: Rooms 36-428/462
In-class participation is full, but registration for WebEx is still open and required.
Deep neural networks (DNNs) are currently widely used for many Artificial Intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Accordingly, designing efficient hardware architectures for deep neural networks is an important step towards enabling the wide deployment of DNNs in AI systems.
In this tutorial, we will provide an overview of DNNs, discuss the tradeoffs of the various architectures that support DNNs including CPU, GPU, FPGA and ASIC, and highlight important benchmarking/comparison metrics and design considerations. We will also describe recent techniques that reduce the computation cost of DNNs from both the hardware architecture and network algorithm perspective.
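As a rough illustration of one common comparison metric mentioned above, the computational cost of a DNN is often quantified by counting multiply-accumulate operations (MACs). The sketch below is only an example with hypothetical layer dimensions, not material from the tutorial itself:

```python
def conv_macs(h_out, w_out, c_in, c_out, k):
    """MAC count for one convolutional layer: each of the
    h_out * w_out * c_out output activations requires a
    k * k * c_in dot product."""
    return h_out * w_out * c_out * k * k * c_in

# Hypothetical example: a 3x3 conv layer with 3 input channels,
# 64 filters, and a 224x224 output feature map.
print(conv_macs(224, 224, 3, 64, 3))  # 86704128 MACs
```

Summing such counts over all layers gives a first-order estimate of a network's compute demand, which can then be weighed against a platform's peak throughput and energy per MAC.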
Note that there will be a break from 3–3:15pm during the workshop.
- Background of Deep Neural Networks [ slides ]
- Survey of DNN Development Resources [ slides ]
- Survey of DNN Hardware [ slides ]
- DNN Accelerator Architectures [ slides ]
- DNN Model and Hardware Co-Design [ slides ]
- Entire Tutorial [ slides ]
- Understand the key design considerations for DNNs
- Be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics
- Understand the tradeoffs between various architectures and platforms
- Assess the utility of various optimization approaches
- Understand recent implementation trends and opportunities
Learn more about the Eyeriss project here.