Several libraries make it possible to perform computer vision tasks, e.g., object recognition, on almost every mobile device with computing capability. On modern smartphones, such tasks are compute-intensive, energy-hungry computations that run on the GPU or on a dedicated Machine Learning (ML) processor embedded in the device. Task offloading is a strategy for moving compute-intensive tasks, and hence their energy consumption, to external computers located in the edge network or in the cloud. In this paper, we report an experimental study that measures, under different mobile computer vision set-ups, the energy reduction obtained when the inference step of an image-processing task is moved to an edge node, and whether real-time requirements can still be met. In particular, our experiments show that offloading the task (in our case, real-time object recognition) to a node close to the user saves about 70% of the battery consumption while maintaining the same frame rate (fps) that local processing can achieve.
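The trade-off the abstract describes can be illustrated with a toy latency model: offloading pays off when network transfer plus edge inference still fits the per-frame budget. The sketch below is purely illustrative; all timing numbers and the `OffloadModel` class are assumptions for demonstration, not measurements from the paper.

```python
from dataclasses import dataclass

@dataclass
class OffloadModel:
    """Toy per-frame latency model for edge offloading.

    All figures are illustrative placeholders, not results from the study.
    """
    local_infer_ms: float   # on-device inference time per frame
    edge_infer_ms: float    # inference time on the edge node
    uplink_ms: float        # time to send one frame to the edge
    downlink_ms: float      # time to receive the detection result

    def local_fps(self) -> float:
        # Frame rate when inference runs entirely on the device.
        return 1000.0 / self.local_infer_ms

    def offload_fps(self) -> float:
        # Frame rate bounded by the full offloading round trip.
        rtt_ms = self.uplink_ms + self.edge_infer_ms + self.downlink_ms
        return 1000.0 / rtt_ms

    def meets_realtime(self, target_fps: float = 30.0) -> bool:
        # Offloading is viable only if it sustains the target frame rate.
        return self.offload_fps() >= target_fps


# Hypothetical numbers: a nearby edge node with fast inference and low RTT.
m = OffloadModel(local_infer_ms=40.0, edge_infer_ms=10.0,
                 uplink_ms=15.0, downlink_ms=2.0)
print(f"local: {m.local_fps():.1f} fps, "
      f"offloaded: {m.offload_fps():.1f} fps, "
      f"real-time: {m.meets_realtime()}")
```

Under these assumed numbers the offloaded pipeline sustains a higher frame rate than local inference, which mirrors the qualitative finding of the paper: a next-to-the-user node can match or exceed local fps while the energy cost of inference moves off the device.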
2021 IEEE/ACM 25th International Symposium on Distributed Simulation and Real Time Applications (DS-RT), 2021, pp. 1-7
A study on real-time image processing applications with edge computing support for mobile devices (04b Conference paper in proceedings)
Proietti Mattia Gabriele, Beraldi Roberto
Research group: Distributed Systems