Abstract

Swarms of unmanned aerial systems may pose a threat to soldiers on the battlefield in the future. Land vehicles may be equipped with countermeasures to defend against such attacks, but in the presence of a large number of threats it will be difficult for human operators to select the countermeasures best suited to maximising the survivability of the vehicle. This paper presents a novel algorithm that uses multi-armed bandit strategies to recommend countermeasures to human operators. The performance of this algorithm is evaluated in simulation across randomly generated scenarios, and the results are compared against a baseline greedy algorithm that always selects the countermeasure with the highest probability of hitting a threat. Even with limited prior knowledge of the hit probabilities, the multi-armed bandit approach performs competitively against the greedy algorithm; with access to equivalent information, it outperforms the greedy method in over 65% of tests.
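
The abstract does not reproduce the algorithm itself; purely as an illustration of the comparison it describes, the Python sketch below pits a greedy policy (always pick the countermeasure with the highest observed hit rate) against a UCB1-style bandit policy that adds an exploration bonus. The choice of UCB1, the hit probabilities, the engagement count, and all function names are assumptions made for this example, not details taken from the paper.

import math
import random

# Illustrative sketch only: the paper does not publish its algorithm, so this
# assumes a UCB1-style bandit and a fixed set of true hit probabilities,
# standing in for the paper's randomly generated scenarios.

TRUE_HIT_PROB = [0.30, 0.55, 0.45]   # assumed hit probability per countermeasure
N_ENGAGEMENTS = 1000

def engage(cm):
    """Fire countermeasure cm at a threat; return 1 on a hit, 0 on a miss."""
    return 1 if random.random() < TRUE_HIT_PROB[cm] else 0

def greedy(hits, tries, t):
    """Baseline: select the countermeasure with the best observed hit rate."""
    rates = [h / n if n else 0.0 for h, n in zip(hits, tries)]
    return rates.index(max(rates))

def ucb1(hits, tries, t):
    """Bandit: balance observed hit rate against an exploration bonus (UCB1)."""
    for cm, n in enumerate(tries):
        if n == 0:
            return cm                # sample every countermeasure once first
    scores = [h / n + math.sqrt(2 * math.log(t) / n)
              for h, n in zip(hits, tries)]
    return scores.index(max(scores))

def run(policy):
    """Run one scenario and return the total number of threats hit."""
    k = len(TRUE_HIT_PROB)
    hits, tries, total = [0] * k, [0] * k, 0
    for t in range(1, N_ENGAGEMENTS + 1):
        cm = policy(hits, tries, t)
        outcome = engage(cm)
        hits[cm] += outcome
        tries[cm] += 1
        total += outcome
    return total

print("greedy total hits:", run(greedy))
print("UCB1 total hits:  ", run(ucb1))

In this toy setting the greedy policy can lock onto whichever countermeasure happens to hit first, while the bandit's exploration bonus keeps sampling the alternatives until their estimated hit rates are trustworthy, which is the trade-off the abstract's comparison turns on.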
