Cybersecurity Threats due to Artificial Intelligence
Background:
- While Artificial Intelligence (AI) in general, and Generative AI (Gen-AI) in particular, has transformed how we operate through its integration into sectors such as education, banking, health care, and manufacturing, it has also reshaped the landscape of cyber risks and related safety concerns.
- According to a recently published report, phishing emails have increased by 1,265% and credential phishing by 967% since the fourth quarter of 2022, driven largely by the misuse and manipulation of generative AI.
- Phishing: A technique for attempting to acquire sensitive data, such as bank account numbers, through a fraudulent solicitation in an email or on a website, in which the perpetrator poses as a legitimate business or reputable person.
Possible Threats due to AI:
- AI-Enabled Disinformation (Deepfakes): AI is used to create convincing but false digital content, such as videos, audio, and images, which can manipulate public opinion, spread misinformation, and cause social and political unrest.
- AI-Driven Cyberattacks: AI can automate and enhance cyberattacks by identifying system vulnerabilities faster and more efficiently than humans can, making attacks more precise and difficult to defend against.
- Generative AI Abuse: Unauthorised use of generative AI tools can produce harmful content, such as malicious software, phishing schemes, or identity theft, posing a significant threat to both individuals and organisations.
- AI in Cyber Warfare: In conflicts like the Ukraine war, AI has been used to facilitate cyberattacks on critical infrastructure, including power grids and telecommunications, leading to widespread disruption of public services.
- Automated Social Engineering Attacks: AI-powered tools can personalise phishing attacks and other forms of social engineering by learning the behaviour, preferences, and habits of targets, making these attacks more effective.
- AI-Enhanced Malware: AI can help malware adapt to a target’s environment, evade detection, and exploit weaknesses in systems, leading to more resilient and damaging cyberattacks.
- Compromised AI Systems: Hackers can target AI systems themselves, compromising their algorithms to manipulate outcomes and leaving AI-powered decision-making systems vulnerable to manipulation.
- AI-Enabled Surveillance and Privacy Violations: AI can be used to conduct large-scale surveillance and data collection, infringing on privacy rights and creating the potential for misuse of personal data in cybercriminal activities.
- AI-Powered Botnets: AI can enhance botnets, automating large-scale distributed denial-of-service (DDoS) attacks that overwhelm targeted systems and make it harder to respond in real time.
- Cyber Bullying: AI tools are increasingly used for digital bullying, harassment, and manipulation through fake content creation and identity theft, creating psychological and reputational harm.
Way forward:
- Awareness of the growing danger of digital threats is the first step in the battle against cyber and AI-driven attacks.
- Protection against these threats requires sustained effort and adequate budgetary allocations by both private and public organisations.
- Coordinated action among the various stakeholders involved in creating a holistic protection architecture is essential.