AI on Edge Computing
Submission deadline: 2024-06-30
Section Collection Information

Dear Colleagues,


With the development of the Internet of Things (IoT), the number of mobile terminals and IoT devices is growing exponentially. These devices produce huge volumes of multi-modal data, generated at the network edge, that need to be processed in real time. If all of these data were delivered to the cloud for processing, wireless communication would face traffic overload and service delays would increase significantly. Edge computing provides extensive computing capabilities for processing these data at the network edge. In addition, artificial intelligence (AI) is widely used to process big data and delivers favorable results. However, AI model training and inference consume large amounts of computing resources, while edge computing nodes have only limited resources. Therefore, how to deploy AI models at the edge for training and inference on real data raises new challenges.


Thus, we wish to encourage discussion and highlight original research in the field of "AI on Edge Computing" in order to promote the development of the next-generation IoT. This collection provides a timely venue for engineers, researchers, and industry professionals to showcase their latest solutions, emerging trends, and state-of-the-art applications in engineering and other IT fields that may benefit from the edge computing paradigm and AI. Research articles and reviews in this area of study are welcome.


We look forward to receiving your contributions.


Dr. Jingpan Bai

Section Editor

Keywords

Edge Computing; Artificial Intelligence; Inference Algorithms on Edge; Model Training on Edge; Federated Learning on Edge; Model Optimization and Acceleration on Edge; Edge Resource Scheduling and Management for AI
