Augmented intelligence in social engineering attacks: a diffusion of innovation perspective
DOI: https://doi.org/10.36096/ijbes.v7i1.676

Keywords: Augmented intelligence, artificial intelligence, information security, social network site (SNS) users

Abstract
This article explores social network site (SNS) users’ understanding of the danger presented by the integration of human intelligence and artificial intelligence (AI), termed “augmented intelligence.” Augmented intelligence, a subfield of AI, aims to enhance human intelligence with AI and is heralded as a significant step in problem-solving. A crucial concern is the profound threat it poses to SNS users’ information security. A quantitative approach examined SNS users’ understanding of the diffusion of augmented intelligence into their spaces. An online survey was administered to 165 SNS users residing in the Gauteng province of South Africa. Diffusion of Innovation (DOI) theory was used as the theoretical lens. Ethical clearance was obtained, and the data collected was anonymized and kept confidential. The article provides new insights that can help SNS users understand that a new threat to their information security, in the form of augmented intelligence, is emerging. Findings suggest that of the five constructs drawn from DOI that explain the diffusion of augmented intelligence into sophisticated social engineering attacks, relative advantage, compatibility, and complexity were perceived by study participants as likely predictors of augmented intelligence adoption. Users, however, differed on exactly how the augmentation process was being achieved.
References
Agrawal, A. K., Gans, J. S., & Goldfarb, A. (2021). AI adoption and system-wide change. https://doi.org/10.3386/w28811
Algarni, A., Xu, Y., Chan, T., & Tian, Y.-C. (2013). Social engineering in social networking sites: Affect-based model. Proceedings of the 8th International Conference for Internet Technology and Secured Transactions (ICITST-2013). https://doi.org/10.1109/ICITST.2013.6750253
Alneyadi, M. R. M. A. H., & Normalini, M. K. (2023). Factors influencing user’s intention to adopt AI-based cybersecurity systems in the UAE. Interdisciplinary Journal of Information, Knowledge, and Management, 18, 459–486. https://doi.org/10.28945/5166
Alqatawna, J. F., Madain, A., Al-Zoubi, A. M., & Al-Sayyed, R. (2017). Online social networks security: Threats, attacks, and future directions. In Social Media Shaping E-Publishing and Academia (pp. 121–132). https://doi.org/10.1007/978-3-319-55354-2_10
Angelica, A., Opris, I., Lebedev, M. A., & Boehm, F. (2021). Cognitive augmentation via a brain/cloud interface. In Modern Approaches to Augmentation of Brain Function (pp. 357–386). https://doi.org/10.1007/978-3-030-54564-2_17
Arquilla, J., Fusco, J., Ruiz, P., & Roschelle, J. (2021). Securing seabed cybersecurity, emphasizing intelligence augmentation. Communications of the ACM, 64(7), 10–12. https://doi.org/10.1145/3464931
Bansal, G., Nushi, B., Kamar, E., Weld, D., Lasecki, W., & Horvitz, E. (2019a). A case for backward compatibility for human-AI teams. arXiv Preprint, arXiv:1906.01148. https://doi.org/10.48550/arXiv.1906.01148
Bansal, G., Nushi, B., Kamar, E., Weld, D. S., Lasecki, W. S., & Horvitz, E. (2019b). Updates in human-AI teams: Understanding and addressing the performance/compatibility tradeoff. Proceedings of the AAAI Conference on Artificial Intelligence. https://doi.org/10.1609/aaai.v33i01.33012429
Barrat, J. (2023). Our final invention: Artificial intelligence and the end of the human era. Hachette UK.
Barukh, M. C., Zamanirad, S., Baez, M., Beheshti, A., Benatallah, B., Casati, F., … Schiliro, F. (2021). Cognitive augmentation in processes. In Next-Gen Digital Services: A Retrospective and Roadmap for Service Computing of the Future (pp. 123–137). https://doi.org/10.1007/978-3-030-73203-5_10
Bazoukis, G., Hall, J., Loscalzo, J., Antman, E. M., Fuster, V., & Armoundas, A. A. (2022). The inclusion of augmented intelligence in medicine: A framework for successful implementation. Cell Reports Medicine, 3(1). https://doi.org/10.1016/j.xcrm.2021.100485
Brézillon, P. (1999). Context in artificial intelligence: I. A survey of the literature. Computers and Artificial Intelligence, 18(4), 321–340. Retrieved from https://www.cai.sk/ojs/index.php/cai/article/view/589
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., … Filar, B. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv Preprint, arXiv:1802.07228. https://doi.org/10.48550/arXiv.1802.07228
Burkholder, G. J., Cox, K. A., Crawford, L. M., & Hitchcock, J. H. (2019). Research design and methods: An applied guide for the scholar-practitioner. Sage Publications.
Caliskan, A. (2023). Artificial intelligence, bias, and ethics. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. https://doi.org/10.24963/ijcai.2023/799
Cambiaso, E., & Caviglione, L. (2023). Scamming the scammers: Using ChatGPT to reply mails for wasting time and resources. arXiv Preprint, arXiv:2303.13521. https://doi.org/10.48550/arXiv.2303.13521
Cinel, C., Valeriani, D., & Poli, R. (2019). Neurotechnologies for human cognitive augmentation: Current state of the art and future prospects. Frontiers in Human Neuroscience, 13, 13. https://doi.org/10.3389/fnhum.2019.00013
Clinch, S., & Davies, N. (2023). Hacking the brain: The risks and challenges of cognitive augmentation. IFIP Conference on Human-Computer Interaction. https://doi.org/10.1007/978-3-031-42293-5_18
Cooke, P. (2023). Learning as imitation or mimesis: How ‘smart’ is machine learning for its planning controllers? European Planning Studies, 31(7), 1345–1357. https://doi.org/10.1080/09654313.2022.2124102
Craighead, C. W., Ketchen, D. J., Dunn, K. S., & Hult, G. T. M. (2011). Addressing common method variance: Guidelines for survey research on information technology, operations, and supply chain management. IEEE Transactions on Engineering Management, 58(3), 578–588. https://doi.org/10.1109/TEM.2011.2136437
Dalton, A., Dorr, B., Liang, L., & Hollingshead, K. (2017). Improving cyber-attack predictions through information foraging. 2017 IEEE International Conference on Big Data (Big Data). https://doi.org/10.1109/BigData.2017.8258509
De Felice, F., Petrillo, A., De Luca, C., & Baffo, I. (2022). Artificial intelligence or augmented intelligence? Impact on our lives, rights and ethics. Procedia Computer Science, 200, 1846–1856. https://doi.org/10.1016/j.procs.2022.01.385
Dobrkovic, A., Döppner, D. A., Iacob, M.-E., & van Hillegersberg, J. (2018). Collaborative literature search system: An intelligence amplification method for systematic literature search. 13th International Conference, DESRIST 2018, Chennai, India, June 3–6, 2018. https://doi.org/10.1007/978-3-319-91800-6_12
Dobrkovic, A., Liu, L., Iacob, M.-E., & van Hillegersberg, J. (2016). Intelligence amplification framework for enhancing scheduling processes. 15th Ibero-American Conference on AI, San José, Costa Rica, November 23–25, 2016. https://doi.org/10.1007/978-3-319-47955-2_8
Dong, Y., Jiang, X., Jin, Z., & Li, G. (2023). Self-collaboration code generation via ChatGPT. arXiv Preprint, arXiv:2304.07590. https://doi.org/10.48550/arXiv.2304.07590
Falade, P. V. (2023). Decoding the threat landscape: ChatGPT, FraudGPT, and WormGPT in social engineering attacks. arXiv Preprint, arXiv:2310.05595. https://doi.org/10.32628/CSEIT2390533
Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277–304. https://doi.org/10.1080/15228053.2023.2233814
Galliers, R. D., & Land, F. F. (1987). Choosing appropriate information systems research methodologies. Communications of the ACM, 30(11), 901–902. https://doi.org/10.1145/32206.315753
Gehl, R. W., & Lawson, S. T. (2022). Social engineering: How crowdmasters, phreaks, hackers, and trolls created a new form of manipulative communication. MIT Press. https://doi.org/10.7551/mitpress/12984.001.0001
Grbic, D. V., & Dujlovic, I. (2023). Social engineering with ChatGPT. 2023 22nd International Symposium INFOTEH-JAHORINA (INFOTEH). https://doi.org/10.1109/INFOTEH57020.2023.10094141
Heale, R., & Twycross, A. (2015). Validity and reliability in quantitative studies. Evidence-Based Nursing, 18(3), 66–67. https://doi.org/10.1136/eb-2015-102129
Hernández-Orallo, J., & Vold, K. (2019). AI extenders: The ethical and societal implications of humans cognitively extended by AI. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. https://doi.org/10.1145/3306618.3314238
Hoong, P. (2021). A preliminary propagation tool in social engineering attacks. UTAR. http://eprints.utar.edu.my/id/eprint/4159
Hurley, D. (2020). Brain-computer interfaces move forward at the speed of Musk. Neurology Today, 20(19), 40–42. https://doi.org/10.1097/01.NT.0000720212.33775.ac
Jain, H., Padmanabhan, B., Pavlou, P. A., & Raghu, T. (2021). Editorial for the special section on humans, algorithms, and augmented intelligence: The future of work, organizations, and society. Information Systems Research, 32(3), 675–687. https://doi.org/10.1287/isre.2021.1046
Jordan, T. (2009). Hacking and power: Social and technological determinism in the digital age. First Monday. https://doi.org/10.5210/fm.v14i7.2417
Jun, Y., Craig, A., Shafik, W., & Sharif, L. (2021). Artificial intelligence application in cybersecurity and cyberdefense. Wireless Communications and Mobile Computing, 2021, 1–10. https://doi.org/10.1155/2021/3329581
Kharb, L., & Chahal, D. (2023). Exploring social engineering exploitations with ChatGPT. International Research Journal of Modernization in Engineering Technology and Science, 05(08), 1843–1849. https://doi.org/10.56726/IRJMETS44199
Krombholz, K., Hobel, H., Huber, M., & Weippl, E. (2015). Advanced social engineering attacks. Journal of Information Security and Applications, 22, 113–122. https://doi.org/10.1016/j.jisa.2014.09.005
Mansfield-Devine, S. (2023). Weaponising ChatGPT. Network Security, 2023(4). https://doi.org/10.12968/S1353-4858(23)70017-2
Manyam, S. (2022). Artificial intelligence’s impact on social engineering attacks. Retrieved from https://opus.govst.edu/cgi/viewcontent.cgi?article=1521&context=capstones
Marcus, B., Weigelt, O., Hergert, J., Gurt, J., & Gelléri, P. (2017). The use of snowball sampling for multi-source organizational research: Some cause for concern. Personnel Psychology, 70(3), 635–673. https://doi.org/10.1111/peps.12169
Mikhalevich, I. F., & Ryjov, A. P. (2018). Augmented intelligence framework for protecting against cyberattacks. 2018 Engineering and Telecommunication (EnT-MIPT). https://doi.org/10.1109/EnT-MIPT.2018.00039
Milberry, K. (2012). Hacking for social justice: The politics of prefigurative technology. In (Re)Inventing the Internet (pp. 109–130). Brill. https://doi.org/10.1007/978-94-6091-734-9_6
Narayanan, V. K., & O’Connor, G. C. (2015). Knowledge management as intelligence amplification for breakthrough innovations. Design Thinking: New Product Development Essentials from the PDMA, 187–204. https://doi.org/10.1002/9781119154273.ch13
Okoli, C., & Schabram, K. (2015). A guide to conducting a systematic literature review of information systems research. https://doi.org/10.17705/1CAIS.03743
Pankratz, M., Hallfors, D., & Cho, H. (2002). Measuring perceptions of innovation adoption: The diffusion of a federal drug prevention policy. Health Education Research, 17(3), 315–326. https://doi.org/10.1093/her/17.3.315
Parker, C., Scott, S., & Geddes, A. (2019). Snowball sampling. SAGE Research Methods Foundations.
Paul, S., Yuan, L., Jain, H. K., Robert Jr, L. P., Spohrer, J., & Lifshitz-Assaf, H. (2022). Intelligence augmentation: Human factors in AI and future of work. AIS Transactions on Human-Computer Interaction, 14(3), 426–445. https://doi.org/10.17705/1thci.00174
Peltier, T. R. (2006). Social engineering: Concepts and solutions. Information Security Journal, 15(5), 13. https://doi.org/10.1201/1086.1065898X/46353.15.4.20060901/95427.3
Plsek, P. (2003). Complexity and the adoption of innovation in health care. National Institute for Healthcare Management Foundation and National Committee for Quality in Health Care. https://chess.wisc.edu/niatx/PDF/PIPublications/Plsek_2003_NIHCM.pdf
Vargo, A., Tag, B., Hutin, M., Abou-Khalil, V., Ishimaru, S., Augereau, O., & Devillers, L. (2023). Intelligence augmentation: Future directions and ethical implications in HCI. IFIP Conference on Human-Computer Interaction. https://doi.org/10.1007/978-3-031-42293-5_87
Velliangiri, S., Karthikeyan, P., Ravi, V., Almeshari, M., & Alzamil, Y. (2023). Intelligence amplification-based smart health record chain for enterprise management system. Information, 14(5), 284. https://doi.org/10.3390/info14050284
Ventayen, R. J. M. (2023). ChatGPT by OpenAI: Students' viewpoint on cheating using artificial intelligence-based applications. SSRN Preprint, 4361548. https://doi.org/10.2139/ssrn.4361548
Walters, G. (2018). Evaluating the effectiveness of personal cognitive augmentation: Utterance/intent relationships, brittleness, and personal cognitive agents. Human Interface and the Management of Information. https://doi.org/10.1007/978-3-319-92046-7_46
Wellsandt, S., Klein, K., Hribernik, K., Lewandowski, M., Bousdekis, A., Mentzas, G., & Thoben, K.-D. (2022). Hybrid-augmented intelligence in predictive maintenance with digital intelligent assistants. Annual Reviews in Control, 53, 382–390. https://doi.org/10.1016/j.arcontrol.2022.04.001
Wijnhoven, F. (2022). Organizational learning for intelligence amplification adoption: Lessons from a clinical decision support system adoption project. Information Systems Frontiers, 24(3), 731–744. https://doi.org/10.1007/s10796-021-10206-9
Xu, M., Niyato, D., Chen, J., Zhang, H., Kang, J., Xiong, Z., & Han, Z. (2023). Generative AI-empowered simulation for autonomous driving in vehicular mixed reality metaverses. arXiv Preprint, arXiv:2302.08418. https://doi.org/10.1109/JSTSP.2023.3293650
Xue, J., Hu, B., Li, L., & Zhang, J. (2022). Human-machine augmented intelligence: Research and applications. Frontiers of Information Technology & Electronic Engineering, 23(8), 1139–1141. https://doi.org/10.1631/FITEE.2250000
Yilmaz, R., & Yilmaz, F. G. K. (2023). Augmented intelligence in programming learning: Examining student views on the use of ChatGPT for programming learning. Computers in Human Behavior: Artificial Humans, 1(2), 100005. https://doi.org/10.1016/j.chbah.2023.100005
Zeng, Y. (2022). AI empowers security threats and strategies for cyber attacks. Procedia Computer Science, 208, 170–175. https://doi.org/10.1016/j.procs.2022.10.025
Zheng, N.-N., Liu, Z.-Y., Ren, P.-J., Ma, Y.-Q., Chen, S.-T., Yu, S.-Y., & Wang, F.-Y. (2017). Hybrid-augmented intelligence: Collaboration and cognition. Frontiers of Information Technology & Electronic Engineering, 18(2), 153–179. https://doi.org/10.1631/FITEE.1700053
Zhou, J., Zhang, Y., Luo, Q., Parker, A. G., & De Choudhury, M. (2023). Synthetic lies: Understanding AI-generated misinformation and evaluating algorithmic and human solutions. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3544548.3581318
License
Copyright (c) 2025 Kennedy Njenga, Baswabile Matemane

This work is licensed under a Creative Commons Attribution 4.0 International License.
© 2025 retained by the authors. Licensee BSC International Academy, Istanbul, Turkey. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).