Attackers Could Use Artificial Intelligence Against Your Computer!
By Irene Gaitirira
Published March 11, 2018
As organisations begin to employ machine learning and artificial intelligence (AI) as part of their defences against cyber threats, it is becoming increasingly evident that AI and its subsets will also play a larger role in facilitating cyber attacks.
Control Risks, an organisation that specialises in risk to businesses around the world, says cyber threat actors could employ AI technologies in their work.
Though there are currently no known attacks using AI, Nicolas Reys, Associate Director and head of Control Risks’ cyber threat intelligence team, says the threat could become real as AI technologies become more mature and more accessible to threat actors.
“Staying informed and being able to identify relevant emerging attacks, technologies and vulnerabilities is therefore just as important as being prepared in the event of an attack,” Reys says.
Control Risks says AI technologies could assist attackers of computer systems in the following ways:
- Spearphishing campaigns
Threat actors could use algorithms to generate spearphishing campaigns in victims’ native languages, expanding the reach of mass campaigns. Similarly, larger amounts of data could be gathered and analysed automatically to improve social engineering techniques and, with them, the effectiveness of spearphishing campaigns.
- ‘Hivenets’
In the post-infection phase, clusters of compromised devices with the ability to self-learn, dubbed ‘hivenets’, could be used to automatically identify and target additional vulnerable systems.
- Extensive, customised attacks
Based on its assessment of the target environment, AI technology could tailor the malware or attack so that it is unique to each system it encounters along the way. This would enable threat actors to conduct vast numbers of attacks, each uniquely tailored to its victim. Only bespoke mitigation or response would be effective against each infection, rendering traditional signature- or behaviour-based defence systems obsolete.
- Advanced obfuscation techniques
Threat actors could evade detection by developing and implementing advanced obfuscation techniques, drawing on data from past campaigns and analysis of security tools. Attackers may even be able to launch targeted misdirection or ‘noise generation’ attacks to disrupt intelligence gathering and mitigation efforts by automated defence systems.