
Record details


Document type: Journal article

Title: MADRL-based UAV swarm non-cooperative game under incomplete information

Authors: Ershen WANG [1]; Fan LIU [1]; Chen HONG [2]; Jing GUO [1]; Lin ZHAO [3]; Jian XUE [3]; Ning HE [4]

First author: Ershen WANG

Affiliations: [1] School of Electronic and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China; [2] College of Robotics, Beijing Union University, Beijing 100101, China; [3] School of Engineering Science, University of Chinese Academy of Sciences, Beijing 100049, China; [4] College of Smart City, Beijing Union University, Beijing 100101, China

First affiliation: School of Electronic and Information Engineering, Shenyang Aerospace University, Shenyang 110136, China

Year: 2024

Volume: 37

Issue: 6

Pages: 293-306

Journal: Chinese Journal of Aeronautics

Journal name (Chinese): 中国航空学报(英文版)

Indexed in: CSTPCD; Scopus; CSCD (CSCD 2023-2024)

Funding: Supported by the National Key R&D Program of China (No. 2018AAA0100804); the National Natural Science Foundation of China (No. 62173237); the Academic Research Projects of Beijing Union University, China (Nos. SK160202103, ZK50201911, ZK30202107, ZK30202108); the Song Shan Laboratory Foundation, China (No. YYJC062022017); the Applied Basic Research Programs of Liaoning Province, China (Nos. 2022020502-JH2/1013, 2022JH2/101300150); the Special Funds program of Civil Aircraft, China (No. 01020220627066); and the Special Funds program of Shenyang Science and Technology, China (No. 22-322-3-34).

Language: English

Keywords: UAV swarm; Reinforcement learning; Deep learning; Multi-agent; Non-cooperative game; Nash equilibrium

Abstract: Unmanned Aerial Vehicles (UAVs) play an increasingly important role on the modern battlefield. In this paper, considering the incomplete observation information available to an individual UAV in a complex combat environment, we put forward a UAV swarm non-cooperative game model based on Multi-Agent Deep Reinforcement Learning (MADRL), in which the state space and action space are constructed to match the real features of UAV swarm air-to-air combat. The multi-agent particle environment is employed to generate a UAV combat scene with a continuous observation space. Several recently popular MADRL methods are compared extensively in the UAV swarm non-cooperative game model; the results indicate that the performance of Multi-Agent Soft Actor-Critic (MASAC) is better than that of the other MADRL methods by a large margin. A UAV swarm employing MASAC can learn more effective policies and obtain a much higher hit rate and win rate. Simulations under different swarm sizes and UAV physical parameters are also performed, which implies that MASAC generalizes well. Furthermore, the practicability and convergence of MASAC are addressed by investigating the loss values of the Q-value networks of individual UAVs; the results demonstrate that MASAC is practical and that the Nash equilibrium of the UAV swarm non-cooperative game under incomplete information can be reached.
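The Q-value loss mentioned in the abstract is driven, in SAC-style methods generally, by a soft Bellman backup that augments the reward with an entropy bonus. The sketch below is not the authors' implementation; it is a minimal, generic illustration (the function name `soft_bellman_target` and the toy numbers are ours) of how the critic target in a MASAC-like learner combines twin target-critic estimates with the policy's log-probability:

```python
import numpy as np

def soft_bellman_target(rewards, next_q1, next_q2, next_log_pi,
                        gamma=0.99, alpha=0.2, dones=None):
    """Generic SAC-style critic target (illustrative, not the paper's code).

    y = r + gamma * (1 - done) * (min(Q1', Q2') - alpha * log pi(a'|s'))

    Taking the minimum of twin critics curbs overestimation; the
    -alpha * log pi term is the entropy bonus that makes the backup "soft".
    """
    if dones is None:
        dones = np.zeros_like(rewards)
    soft_value = np.minimum(next_q1, next_q2) - alpha * next_log_pi
    return rewards + gamma * (1.0 - dones) * soft_value

# Toy batch for one agent: two transitions (numbers are made up).
r = np.array([1.0, 0.0])          # e.g. +1 for a hit, 0 otherwise
q1 = np.array([5.0, 3.0])         # twin target-critic estimates
q2 = np.array([4.0, 3.5])
log_pi = np.array([-1.0, -2.0])   # log-prob of the sampled next action
done = np.array([0.0, 1.0])       # second transition ends the episode

y = soft_bellman_target(r, q1, q2, log_pi, gamma=0.99, alpha=0.2, dones=done)
```

In a MASAC-like setup each UAV's critic would regress its Q-value network toward such a target, so a decreasing critic loss is what the abstract's convergence study observes.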

