
EPtask: Deep Reinforcement Learning Based Energy-Efficient and Priority-Aware Task Scheduling for Dynamic Vehicular Edge Computing

Publication, Journal Article
Li, P; Xiao, Z; Wang, X; Huang, K; Huang, Y; Gao, H
Published in: IEEE Transactions on Intelligent Vehicles
January 1, 2024

The increasing complexity of vehicles has led to a growing demand for in-vehicle services that rely on multiple sensors. In the Vehicular Edge Computing (VEC) paradigm, energy-efficient task scheduling is critical to achieving optimal completion time and energy consumption. Although extensive research has been conducted in this field, challenges remain in meeting the requirements of time-sensitive services and adapting to dynamic traffic environments. In this context, a novel Multi-action and Environment-adaptive Proximal Policy Optimization (MEPPO) algorithm is designed based on the conventional PPO algorithm, and a joint task scheduling and resource allocation method is then proposed on top of it. Specifically, the method involves three core aspects. First, a task scheduling strategy is designed that uses the PPO algorithm to generate task offloading decisions and priority assignments, further reducing the completion time of service requests. Second, a transmit power allocation scheme is designed that accounts for the expected transmission distance between vehicles and edge servers, minimizing transmission energy consumption by adjusting the allocated transmit power dynamically. Third, the proposed MEPPO-based scheduling method can make scheduling decisions for vehicles with different numbers of tasks by manipulating the state space of the PPO algorithm, which makes the method adaptive to real-world dynamic VEC environments. Finally, the effectiveness of the proposed method is demonstrated through extensive simulations and on-site experiments.
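As a rough illustration of the abstract's second and third aspects, the sketch below is not the authors' implementation: the task bound MAX_TASKS, the per-task feature set, and the d^-alpha channel-gain model are illustrative assumptions. It shows how a variable-length task list can be padded into a fixed-size state vector for a PPO actor, and how a distance-aware minimum transmit power can be derived from the standard Shannon capacity formula.

```python
import numpy as np

# Hypothetical sketch, not the paper's code. MAX_TASKS, the per-task
# features, and the path-loss model are assumptions for illustration.
MAX_TASKS = 8          # assumed upper bound on pending tasks per vehicle
FEATS_PER_TASK = 4     # e.g. data size, CPU cycles, deadline, priority hint

def build_state(task_features: np.ndarray) -> np.ndarray:
    """Pad/truncate an (n_tasks, FEATS_PER_TASK) array to a fixed-length
    state so one PPO actor can serve vehicles with differing task counts."""
    state = np.zeros((MAX_TASKS, FEATS_PER_TASK), dtype=np.float32)
    n = min(len(task_features), MAX_TASKS)
    state[:n] = task_features[:n]
    return state.ravel()

def min_transmit_power(rate_bps: float, bandwidth_hz: float,
                       distance_m: float, noise_w: float = 1e-13,
                       path_loss_exp: float = 3.0) -> float:
    """Smallest transmit power (W) that meets a target rate under the
    Shannon capacity formula with an assumed d^-alpha channel gain."""
    gain = distance_m ** (-path_loss_exp)
    return (2 ** (rate_bps / bandwidth_hz) - 1) * noise_w / gain

# Example: 3 pending tasks still yield a MAX_TASKS-sized state vector,
# and the required power grows with vehicle-to-server distance.
tasks = np.random.rand(3, FEATS_PER_TASK).astype(np.float32)
print(build_state(tasks).shape)            # (32,)
print(min_transmit_power(1e6, 1e6, 50.0))  # watts at 50 m
print(min_transmit_power(1e6, 1e6, 200.0)) # higher at 200 m
```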


Published In

IEEE Transactions on Intelligent Vehicles

DOI

10.1109/TIV.2023.3321679

EISSN

2379-8858

Publication Date

January 1, 2024

Volume

9

Issue

1

Start / End Page

1830 / 1846
 

Citation

APA
Li, P., Xiao, Z., Wang, X., Huang, K., Huang, Y., & Gao, H. (2024). EPtask: Deep Reinforcement Learning Based Energy-Efficient and Priority-Aware Task Scheduling for Dynamic Vehicular Edge Computing. IEEE Transactions on Intelligent Vehicles, 9(1), 1830–1846. https://doi.org/10.1109/TIV.2023.3321679

Chicago
Li, P., Z. Xiao, X. Wang, K. Huang, Y. Huang, and H. Gao. “EPtask: Deep Reinforcement Learning Based Energy-Efficient and Priority-Aware Task Scheduling for Dynamic Vehicular Edge Computing.” IEEE Transactions on Intelligent Vehicles 9, no. 1 (January 1, 2024): 1830–46. https://doi.org/10.1109/TIV.2023.3321679.

ICMJE
Li P, Xiao Z, Wang X, Huang K, Huang Y, Gao H. EPtask: Deep Reinforcement Learning Based Energy-Efficient and Priority-Aware Task Scheduling for Dynamic Vehicular Edge Computing. IEEE Transactions on Intelligent Vehicles. 2024 Jan 1;9(1):1830–46.

MLA
Li, P., et al. “EPtask: Deep Reinforcement Learning Based Energy-Efficient and Priority-Aware Task Scheduling for Dynamic Vehicular Edge Computing.” IEEE Transactions on Intelligent Vehicles, vol. 9, no. 1, Jan. 2024, pp. 1830–46. Scopus, doi:10.1109/TIV.2023.3321679.

NLM
Li P, Xiao Z, Wang X, Huang K, Huang Y, Gao H. EPtask: Deep Reinforcement Learning Based Energy-Efficient and Priority-Aware Task Scheduling for Dynamic Vehicular Edge Computing. IEEE Transactions on Intelligent Vehicles. 2024 Jan 1;9(1):1830–1846.
