Deep neural networks (DNNs) are widely used in many artificial intelligence applications, and many specialized DNN-inference accelerators have been proposed. However, existing DNN accelerators rely heavily on specific types of DNN operations (such as Conv, FC, and ReLU), which are either rarely used or likely to become outdated in the future, posing flexibility and compatibility challenges for existing designs. This paper designs a flexible DNN accelerator from a more generic perspective, rather than speeding up particular types of DNN operations. Our proposed Nebula exploits the width property of DNNs and achieves significant improvements in system throughput and energy efficiency over multi-branch architectures. Nebula is a first-of-its-kind framework for multi-branch DNNs.