Artificial Intelligence and Human-out-of-the-Loop: Is It Time for Autonomous Military Systems?

Authors

  • Zvonko Trzun, Assist. Prof. Dr. sc., Dr. Franjo Tuđman University of Defense and Security, Croatia.

DOI:

https://doi.org/10.46941/2024.2.18

Keywords:

Artificial Intelligence, Human-out-of-the-Loop, autonomous military systems, adversarial attacks, manned-unmanned teaming.

Abstract

This paper systematically presents the disruptive technologies that have emerged on battlefields in recent decades, as well as those yet to come. Special attention is given to current technical capabilities: the status of unmanned vehicle development is briefly outlined, focusing primarily on the most prevalent type, unmanned aerial vehicles (UAVs). The paper also discusses the most common and effective adversarial attack techniques targeting unmanned vehicle technology. The concepts of artificial intelligence (AI), machine learning, deep learning, and convolutional neural networks (CNNs) are introduced. The paper illustrates how CNNs aim to tackle tasks that previously required human intelligence, as well as how an adversary attempts to disrupt the development of CNNs during the crucial training and pattern-recognition phase, which is essential for later generalisation. The paper demonstrates the advantages of manned-unmanned teaming as a model that effectively exploits disruptive technologies while simultaneously counteracting the effects of the enemy's measures. Moreover, it analyses the introduction of fully autonomous, AI-driven military systems on the battlefield, outlining the advantages and disadvantages inherent in such a fundamental change. From the evident lack of interest among young people in joining the armed forces to the potential of autonomous systems to save the lives of soldiers and civilians, there are numerous reasons to believe that this technology could alleviate the burden on human soldiers. However, concerns remain that autonomous systems may malfunction, potentially reducing rather than increasing the safety of military forces. The paper concludes with recommendations for the next steps in introducing these new technologies, based on their current state of development and the robustness of the AI models they use.
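One of the best-known adversarial attack techniques of the kind surveyed above is the fast gradient sign method (FGSM) of Goodfellow et al. (2014), which nudges every input pixel in the direction that most increases the classifier's loss. The sketch below illustrates the idea only; the toy CNN, the epsilon value, and the random input image are illustrative assumptions, not code or data from the paper itself.

import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, image: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    # Illustrative FGSM sketch: step the image along the sign of the loss
    # gradient, then clamp the result back to the valid pixel range [0, 1].
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy CNN and a random "image", purely to keep the sketch self-contained.
    cnn = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    x = torch.rand(1, 3, 32, 32)   # one 32x32 RGB image
    y = torch.tensor([3])          # an arbitrary class label
    x_adv = fgsm_perturb(cnn, x, y)
    print("max pixel change:", (x_adv - x).abs().max().item())

A perturbation bounded by a small epsilon is typically imperceptible to a human observer yet can flip the classifier's prediction, which is why such attacks matter for AI-driven military systems.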

Published

2024-12-30

How to Cite

Trzun, Z. (2024). Artificial Intelligence and Human-out-of-the-Loop: Is It Time for Autonomous Military Systems? European Integration Studies, 20(2), 479–508. https://doi.org/10.46941/2024.2.18