Optimized Task Offloading in Edge–Cloud Systems Using Reinforcement Learning
Ayush Srivastava, XYZ University, [email protected]

Abstract
Edge–cloud collaborative computing has emerged as a promising paradigm to improve the
efficiency of task processing in resource-constrained environments. This paper proposes a
reinforcement learning-based framework for optimizing task offloading in edge–cloud
systems. The proposed approach dynamically learns optimal offloading strategies, resulting
in reduced latency and improved resource utilization. Experimental results demonstrate an
average latency reduction of 18% compared to existing baselines. The framework's adaptability
makes it suitable for dynamic, real-time edge–cloud environments.

Index Terms
Cloud computing, Edge computing, Task offloading, Reinforcement learning

I. Introduction
With the rapid proliferation of IoT devices and data-intensive applications, cloud computing
faces significant challenges in meeting stringent latency and bandwidth requirements. Edge
computing, by placing computational resources closer to end-users, offers a promising
solution. In this paper, we explore reinforcement learning techniques to optimize task
offloading decisions in an edge–cloud environment.

II. Related Work
Several studies have explored task offloading in cloud and edge environments [1], [2]. Prior
methods primarily rely on heuristic algorithms or static models. Our work builds on these
efforts by incorporating adaptive learning techniques that respond better to dynamic system
states.

III. Proposed Methodology
We propose a deep Q-learning approach to optimize offloading strategies. Our model
observes system states such as network bandwidth, computational load, and task size, and
selects actions that minimize task completion time.
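To make the formulation concrete, the sketch below implements a minimal deep Q-learning loop
for a binary offloading decision (run locally vs. offload). It is an illustrative toy under
stated assumptions, not the paper's implementation: the three-feature state (bandwidth, edge
load, task size), the two-action space, the reward defined as negative task completion time,
and the synthetic latency model in step() are all our own simplifications of the description
above.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# State: (normalized bandwidth, edge load, task size); actions: 0 = local, 1 = offload.
STATE_DIM, N_ACTIONS = 3, 2

class QNet(nn.Module):
    """Small MLP approximating Q(s, a) for both actions at once."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 32), nn.ReLU(),
            nn.Linear(32, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

def step(state, action):
    """Hypothetical latency model -- NOT from the paper, purely illustrative.
    Local time grows with task size and edge load; offload time grows with
    task size / bandwidth plus a fixed cloud round-trip."""
    bandwidth, load, size = state
    local = size * (1.0 + load)
    offload = size / max(bandwidth, 0.1) + 0.2
    latency = local if action == 0 else offload
    next_state = [random.random(), random.random(), random.random()]
    return next_state, -latency  # reward = negative completion time

qnet = QNet()
target = QNet()
target.load_state_dict(qnet.state_dict())
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)
gamma, eps = 0.95, 0.1

state = [random.random(), random.random(), random.random()]
for t in range(5_000):
    # Epsilon-greedy action selection over the learned Q-values.
    if random.random() < eps:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = qnet(torch.tensor(state)).argmax().item()
    next_state, reward = step(state, action)
    buffer.append((state, action, reward, next_state))
    state = next_state

    if len(buffer) >= 64:
        batch = random.sample(buffer, 64)
        s, a, r, s2 = map(list, zip(*batch))
        s, s2 = torch.tensor(s), torch.tensor(s2)
        a, r = torch.tensor(a), torch.tensor(r)
        q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            y = r + gamma * target(s2).max(1).values  # Bellman target
        loss = nn.functional.mse_loss(q, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    if t % 500 == 0:
        target.load_state_dict(qnet.state_dict())  # periodic target sync

```

The periodic copy into the target network is a standard DQN stabilization device; in a real
deployment, the synthetic step() function would be replaced by measured completion times from
the actual edge–cloud testbed.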

IV. Experimental Results
Simulations were conducted using real-world datasets. Our approach achieved an average
latency reduction of 18% compared to state-of-the-art methods.

V. Conclusion and Future Work
This paper presents a reinforcement learning-based framework for task offloading in edge–
cloud systems. Future work will explore scalability to larger network topologies and
integration with federated learning.

References
[1] K. Zhang, Y. Mao, S. Leng, et al., “Energy-efficient offloading for mobile edge computing in
5G heterogeneous networks,” IEEE Access, vol. 4, pp. 5896–5907, 2016.
[2] M. Chen, Y. Hao, L. Hu, et al., “Task offloading for mobile edge computing in software
defined ultra-dense network,” IEEE J. Sel. Areas Commun., vol. 36, no. 3, pp. 587–597, 2018.
