An autonomous differential evolution based on reinforcement learning for cooperative countermeasures of unmanned aerial vehicles

Publication ,  Journal Article
Cao, Z; Xu, K; Jia, H; Fu, Y; Foh, CH; Tian, F
Published in: Applied Soft Computing
January 1, 2025

In recent years, reinforcement learning has been used to improve differential evolution algorithms due to its outstanding performance in strategy selection. However, most existing improved algorithms treat the entire population as a single reinforcement learning agent, applying the same decision to individuals regardless of their different evolutionary states. This approach neglects the differences among individuals within the population during evolution, reducing the likelihood of individuals evolving in promising directions. Therefore, this paper proposes an Autonomous Differential Evolution (AuDE) algorithm guided by the cumulative performance of individuals. In AuDE, at the individual level, the rate of increase in each individual's cumulative reward is used to guide the selection of appropriate search strategies. This ensures that all individuals accumulate experience from their own evolutionary search process, rather than relying on the experiences of others or the population, which may not align with their unique characteristics. Additionally, at the global level, a population backtracking method with stagnation detection is proposed. This method fully utilizes the learned cumulative experience information to enhance the global search ability of AuDE, thereby strengthening the search capability of the entire population. To verify the effectiveness and advantages of AuDE, 15 functions from CEC2015, 28 functions from CEC2017, and a real-world optimization problem on cooperative countermeasures of unmanned aerial vehicles were used to evaluate its performance compared with state-of-the-art DE variants. The experimental results indicate that the overall performance of AuDE is superior to other compared algorithms.
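The core idea in the abstract — each individual maintaining its own cumulative reward per mutation strategy and selecting the strategy whose reward has grown fastest for it, rather than sharing one population-wide agent — can be sketched in a minimal DE loop. This is an illustrative reconstruction under stated assumptions (two classic strategies, DE/rand/1 and DE/best/1, epsilon-greedy selection, reward = fitness improvement), not the authors' AuDE implementation; all names are hypothetical.

```python
import numpy as np

def sphere(x):
    # Simple benchmark objective (minimise); stands in for a CEC test function.
    return float(np.sum(x ** 2))

def per_individual_de(fobj, dim=5, pop_size=20, gens=100, F=0.5, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([fobj(ind) for ind in pop])
    # Each individual tracks its own cumulative reward and usage count
    # for each of the two mutation strategies.
    reward = np.zeros((pop_size, 2))
    uses = np.ones((pop_size, 2))  # start at 1 to avoid division by zero

    for _ in range(gens):
        best = pop[np.argmin(fit)]
        for i in range(pop_size):
            # Epsilon-greedy: mostly pick the strategy with the highest
            # average reward *for this individual*, occasionally explore.
            if rng.random() < 0.1:
                s = int(rng.integers(2))
            else:
                s = int(np.argmax(reward[i] / uses[i]))
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            if s == 0:                       # DE/rand/1
                mutant = a + F * (b - c)
            else:                            # DE/best/1
                mutant = best + F * (a - b)
            # Binomial crossover with one guaranteed mutant component.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = fobj(trial)
            uses[i, s] += 1
            if f_trial < fit[i]:
                # Reward the chosen strategy by the fitness improvement it produced.
                reward[i, s] += fit[i] - f_trial
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], float(fit.min())

best_x, best_f = per_individual_de(sphere)
```

In this sketch, strategy credit never leaks between individuals: an individual whose neighbourhood favours exploitative moves will drift toward DE/best/1 while others keep exploring, which is the asymmetry the abstract argues a single population-level agent cannot express.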


Published In

Applied Soft Computing

DOI

10.1016/j.asoc.2024.112605
ISSN

1568-4946

Publication Date

January 1, 2025

Volume

169

Related Subject Headings

  • Artificial Intelligence & Image Processing
  • 4903 Numerical and computational mathematics
  • 4602 Artificial intelligence
  • 0806 Information Systems
  • 0801 Artificial Intelligence and Image Processing
  • 0102 Applied Mathematics
 

Citation

APA
Cao, Z., Xu, K., Jia, H., Fu, Y., Foh, C. H., & Tian, F. (2025). An autonomous differential evolution based on reinforcement learning for cooperative countermeasures of unmanned aerial vehicles. Applied Soft Computing, 169. https://doi.org/10.1016/j.asoc.2024.112605

Chicago
Cao, Z., K. Xu, H. Jia, Y. Fu, C. H. Foh, and F. Tian. “An autonomous differential evolution based on reinforcement learning for cooperative countermeasures of unmanned aerial vehicles.” Applied Soft Computing 169 (January 1, 2025). https://doi.org/10.1016/j.asoc.2024.112605.

MLA
Cao, Z., et al. “An autonomous differential evolution based on reinforcement learning for cooperative countermeasures of unmanned aerial vehicles.” Applied Soft Computing, vol. 169, Jan. 2025. Scopus, doi:10.1016/j.asoc.2024.112605.