With the development of mobile edge computing (MEC), more and more intelligent services and applications based on deep neural networks are being deployed on mobile devices to meet users' diverse and personalized needs. Unfortunately, deploying deep learning models and running inference on resource-constrained devices is challenging. The traditional cloud-based approach runs the model on a cloud server, but transmitting large volumes of input data to the server over the WAN incurs high service latency, which is unacceptable for today's latency-sensitive, computation-intensive applications. In this paper, we propose Cogent, an execution framework that accelerates deep neural network inference through device-edge synergy. Cogent operates in two stages: an automatic pruning and partition stage and a containerized deployment stage. In the first stage, Cogent uses reinforcement learning (RL) to automatically predict pruning and partition strategies from feedback on the hardware configuration and system conditions, so that the pruned and partitioned model adapts to both the runtime environment and the user's hardware. In the second stage, the resulting model segments are deployed as containers on the device and the edge server to accelerate inference. Experiments show that this learning-based, hardware-aware automatic pruning and partition scheme significantly reduces service latency and accelerates the overall inference process while maintaining accuracy.
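To make the RL formulation sketched in the abstract concrete, the toy Python example below casts joint pruning and partitioning as a decision problem: the state describes the hardware and network conditions, the action is a (pruning ratio, partition layer) pair, and the reward penalizes latency and excessive accuracy loss. Everything here is a hypothetical illustration under simplified assumptions; the state encoding, the latency and accuracy models, and the tabular epsilon-greedy learner are placeholders, not Cogent's actual agent or API.

```python
import random

# State: a coarse view of the execution environment the agent observes.
# Hypothetical encoding: (device compute budget in GFLOPS, device-edge bandwidth in Mbps).
STATES = [(10, 50), (10, 200), (40, 50), (40, 200)]

# Action: a joint (pruning ratio, partition layer) strategy for a 10-layer model.
ACTIONS = [(p, k) for p in (0.3, 0.5, 0.7) for k in (2, 5, 8)]

def simulate_latency(state, action):
    """Toy latency model: pruning shrinks on-device compute, and the
    partition point trades device work against feature-map upload cost."""
    flops, bandwidth = state
    prune_ratio, split_layer = action
    device_cost = (1.0 - prune_ratio) * split_layer / flops    # layers run on device
    transfer_cost = (1.0 - prune_ratio) * 100.0 / bandwidth    # intermediate upload
    edge_cost = (10 - split_layer) / 100.0                     # layers run on edge
    return device_cost + transfer_cost + edge_cost

def simulate_accuracy(action):
    """Toy accuracy model: heavier pruning loses more accuracy."""
    prune_ratio, _ = action
    return 0.95 - 0.1 * prune_ratio

# Tabular Q-values over the discrete state/action grid (a stand-in for
# whatever function approximator a real agent would use).
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2

for episode in range(5000):
    state = random.choice(STATES)                 # sampled system condition
    if random.random() < epsilon:                 # explore
        action = random.choice(ACTIONS)
    else:                                         # exploit best known strategy
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    # Reward: negative latency, heavily penalized if accuracy falls too far.
    latency = simulate_latency(state, action)
    penalty = 10.0 if simulate_accuracy(action) < 0.88 else 0.0
    reward = -latency - penalty
    Q[(state, action)] += alpha * (reward - Q[(state, action)])  # one-step update

for state in STATES:
    best = max(ACTIONS, key=lambda a: Q[(state, a)])
    print(f"state={state} -> prune {best[0]:.0%}, split after layer {best[1]}")
```

Even in this simplified form, the learned policy exhibits the intuition the paper relies on: under low bandwidth the agent prefers later partition points and heavier pruning to shrink the uploaded feature maps, while under high bandwidth it offloads earlier to exploit the edge server's compute.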