Sample_IEEE_Paper
Abstract
Edge–cloud collaborative computing has emerged as a promising paradigm to improve the
efficiency of task processing in resource-constrained environments. This paper proposes a
reinforcement learning-based framework for optimizing task offloading in edge–cloud
systems. The proposed approach dynamically learns offloading strategies that reduce
latency and improve resource utilization. Experimental results demonstrate significant
performance gains over existing baselines, and the framework's adaptability makes it
well suited to real-time edge–cloud environments.
Index Terms
Cloud computing, Edge computing, Task offloading, Reinforcement learning
I. Introduction
With the rapid proliferation of IoT devices and data-intensive applications, cloud computing
faces significant challenges in meeting stringent latency and bandwidth requirements. Edge
computing, by placing computational resources closer to end-users, offers a promising
solution. In this paper, we explore reinforcement learning techniques for optimizing task
offloading decisions in an edge–cloud environment, where an agent learns when to execute
a task locally at the edge and when to offload it to the cloud.
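To make the offloading decision concrete, the following minimal sketch illustrates one way such a reinforcement learning formulation could look: a tabular Q-learning agent choosing between local edge execution and cloud offloading over a small discretized state space. The state space, latency model, and hyperparameters below are illustrative assumptions for exposition, not the framework proposed in this paper.

import random

# Hypothetical discretized state: (edge_queue_level, channel_quality), each in {0, 1, 2}.
# Actions: 0 = execute locally at the edge, 1 = offload to the cloud.
STATES = [(q, c) for q in range(3) for c in range(3)]
ACTIONS = [0, 1]

def simulate_latency(state, action):
    """Toy latency model (an assumption, not from the paper): local latency grows
    with the edge queue length; offload latency falls as channel quality improves."""
    queue, channel = state
    if action == 0:                       # local execution at the edge
        return 1.0 + 0.8 * queue
    return 2.5 - 0.7 * channel            # offload: transmission plus cloud execution

# Tabular Q-learning with an epsilon-greedy policy.
alpha, gamma, epsilon = 0.1, 0.9, 0.1     # learning rate, discount, exploration rate
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

random.seed(0)
state = random.choice(STATES)
for step in range(10000):
    if random.random() < epsilon:                      # explore
        action = random.choice(ACTIONS)
    else:                                              # exploit the current estimate
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward = -simulate_latency(state, action)          # lower latency => higher reward
    next_state = random.choice(STATES)                 # toy i.i.d. state transitions
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# Inspect the learned offloading policy for each state.
for s in STATES:
    best = max(ACTIONS, key=lambda a: Q[(s, a)])
    print(f"state={s}: {'offload' if best == 1 else 'local'}")

Under this toy model the learned policy tends to offload when the edge queue is long or the channel quality is high, which matches the intuition behind learning-based offloading.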