The Potential Threat of Artificial Intelligence Use in Cybercrime Activities: A Case Study of 2024 Regional Head Elections

  • Lytio Enggar Erlangga, Universitas Indonesia, Depok, Indonesia
  • Muhammad Syaroni Rofii, Universitas Indonesia, Depok, Indonesia
  • Eko Daryanto, Universitas Indonesia, Depok, Indonesia
Keywords: Artificial Intelligence, Cybercrime, 2024 Regional Head Elections, Cybersecurity, Misinformation, Political Security.

Abstract

The adoption of artificial intelligence (AI) has increased significantly in recent years, offering both opportunities and challenges. While AI enhances efficiency across many sectors, it also introduces new risks, particularly in the realm of cybercrime. This study explores the potential threats posed by AI-driven cybercrime in the context of the 2024 Regional Head Elections (Pilkada) in Indonesia. Using a qualitative method combining literature review and document analysis, the research identifies three key threat domains: cybersecurity, physical security, and political security. AI enables cybercriminals to launch sophisticated, hard-to-detect attacks such as phishing, misinformation campaigns, and distributed denial-of-service (DDoS) attacks. These threats could disrupt electoral processes, manipulate voter behavior, and compromise the integrity of election outcomes. The study also highlights how AI-driven attacks can exploit Internet of Things (IoT) devices and spread deepfakes and political bots to influence public perception. Because AI is advancing rapidly, traditional cybersecurity measures may struggle to keep pace, making it imperative for authorities and cybersecurity experts to strengthen monitoring and response capabilities. The research emphasizes the need for robust strategies to mitigate AI-driven cyber threats and safeguard the integrity of the electoral process in Indonesia.

Published
2024-10-29
How to Cite
Erlangga, L. E., Rofii, M. S., & Daryanto, E. (2024). The Potential Threat of Artificial Intelligence Use in Cybercrime Activities: A Case Study of 2024 Regional Head Elections. International Journal of Science and Society, 6(4), 56-67. Retrieved from http://ijsoc.goacademica.com/index.php/ijsoc/article/view/1297