Joint Distributed Computation Offloading and Radio Resource Slicing Based on Reinforcement Learning in Vehicular Networks

Citation

Alaghbari, Khaled A. and Lim, Heng Siong and Zarakovitis, Charilaos C. and Latiff, N. M. Abdul and Ariffin, Sharifah Hafizah Syed and Chien, Su Fong (2025) Joint Distributed Computation Offloading and Radio Resource Slicing Based on Reinforcement Learning in Vehicular Networks. IEEE Open Journal of the Communications Society, 6. pp. 1231-1245. ISSN 2644-125X

Joint_Distributed.pdf - Published Version (7MB; restricted to repository staff only)

Abstract

Computation offloading in Internet of Vehicles (IoV) networks is a promising technology for transferring computation-intensive and latency-sensitive tasks to mobile-edge computing (MEC) or cloud servers. Privacy is an important concern in vehicular networks, as a centralized system can compromise it by sharing raw data from MEC servers with cloud servers. A distributed system offers a more attractive solution, allowing each MEC server to process data locally and make offloading decisions without sharing sensitive information. However, without a mechanism to control its load, the cloud server's computation capacity can become overloaded. In this study, we propose distributed computation offloading systems using reinforcement learning, such as Q-learning, to optimize offloading decisions and balance computation load across the network while minimizing the number of task offloading switches. We introduce both fixed and adaptive low-complexity mechanisms to allocate resources of the cloud server, formulating the reward function of the Q-learning method to achieve efficient offloading decisions. The proposed adaptive approach enables cooperative utilization of cloud resources by multiple agents. A joint optimization framework is established to maximize overall communication and computing resource utilization, where task offloading is performed on a small time scale at local edge servers, while radio resource slicing is adjusted on a larger time scale at the cloud server. Simulation results using real vehicle tracing datasets demonstrate the effectiveness of the proposed distributed systems in achieving lower computation load costs, lower offloading switching costs, and reduced latency, while increasing cloud server utilization compared to centralized systems.
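To illustrate the kind of mechanism the abstract describes, the following is a minimal tabular Q-learning sketch of a single MEC agent deciding whether to process a task locally or offload it to the cloud, with a reward that penalizes both cloud load and offloading switches. All state definitions, reward weights, and load values here are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import random

# Actions for one MEC agent: 0 = process at the local edge, 1 = offload to cloud.
ACTIONS = (0, 1)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate


def reward(action, cloud_load, prev_action, w_load=1.0, w_switch=0.5):
    """Illustrative reward: penalize cloud computation load and task switches.

    The weights and the fixed edge-processing cost (0.2) are assumptions.
    """
    load_cost = cloud_load if action == 1 else 0.2
    switch_cost = 1.0 if action != prev_action else 0.0
    return -(w_load * load_cost + w_switch * switch_cost)


def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    # State: (discretized cloud load level 0..2, previous action).
    q = {(lvl, prev, act): 0.0
         for lvl in range(3) for prev in ACTIONS for act in ACTIONS}
    prev = 0
    for _ in range(episodes):
        level = rng.randrange(3)          # observed cloud load level
        cloud_load = level / 2.0          # maps to 0.0, 0.5, 1.0
        # Epsilon-greedy action selection.
        if rng.random() < EPS:
            act = rng.choice(ACTIONS)
        else:
            act = max(ACTIONS, key=lambda a: q[(level, prev, a)])
        r = reward(act, cloud_load, prev)
        next_level = rng.randrange(3)     # next observed load (random here)
        best_next = max(q[(next_level, act, a)] for a in ACTIONS)
        key = (level, prev, act)
        q[key] += ALPHA * (r + GAMMA * best_next - q[key])
        prev = act
    return q


q_table = train()
```

Under this assumed reward, the learned Q-values favor keeping tasks at the edge when the observed cloud load level is high, which is the load-balancing behavior the reward function is meant to encode.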

Item Type: Article
Additional Information: Computation offloading, radio resource slicing, reinforcement learning, Q-learning, distributed system, mobile-edge computing (MEC), cloud computing, Internet of Vehicles.
Subjects: T Technology > TK Electrical engineering. Electronics Nuclear engineering > TK5101-6720 Telecommunication. Including telegraphy, telephone, radio, radar, television
Divisions: Faculty of Engineering and Technology (FET)
Depositing User: Ms Suzilawati Abu Samah
Date Deposited: 06 Mar 2025 00:49
Last Modified: 06 Mar 2025 00:49
URI: http://shdl.mmu.edu.my/id/eprint/13573
