Technology continues to shape human conflict and artificial intelligence will be no exception, so businesses need to improve their ability to detect and respond to attacks, says security expert
Warwick Ashford writes:
Cyber attacks enabled by artificial intelligence (AI) technology have yet to be seen in the wild, but organisations could soon be defending against a new order of AI-enabled attacks, warns Mikko Hypponen, chief research officer at IT security company F-Secure.
Despite the claims of some security suppliers, no criminal groups appear to be using AI to conduct cyber attacks, said Hypponen. “There has been academic research into what AI attacks could look like, but we have not seen any in the real world,” he said.
“The closest we have come so far is attacks against the AI-based systems used by defenders, where criminals are attempting to poison machine learning-based defence systems by throwing garbage data at them to subvert the machine learning.
“But we are not seeing AI used in malware or other types of attack. Attacking machine learning systems is different to actually creating a machine learning-based attack.”
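The poisoning Hypponen describes can be illustrated with a toy sketch (entirely hypothetical, not any real product's logic): a defence system that learns a "normal" baseline from observed traffic and flags large deviations. If an attacker can feed garbage into the training data, the learned baseline shifts and genuine attacks slip under the threshold.

```python
# Toy sketch of training-data poisoning against a statistical
# anomaly detector. All data and thresholds here are invented
# for illustration; real ML-based defences are far more complex.
from statistics import mean, stdev

def train(samples):
    """Learn a detection threshold: mean + 3 standard deviations."""
    return mean(samples) + 3 * stdev(samples)

def is_attack(rate, threshold):
    return rate > threshold

clean_training = [95, 100, 105, 98, 102, 97, 103]  # normal requests/sec
attack_rate = 500

clean_thr = train(clean_training)
print(is_attack(attack_rate, clean_thr))      # True: attack detected

# The attacker gradually injects garbage high-rate samples into
# the learning pipeline, inflating the learned baseline.
poisoned_training = clean_training + [400, 450, 500, 480, 520]
poisoned_thr = train(poisoned_training)
print(is_attack(attack_rate, poisoned_thr))   # False: attack missed
```

The numbers are arbitrary; the point is only that a learner which trusts its input can be subverted without ever touching the detection code itself.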
Attempts to confound cyber defence systems in this way are not new, said Hypponen, pointing out that when Bayesian spam filtering became popular, spammers simply flooded email systems with spam messages containing random English words rather than links and images.
“This was, in effect, an attack against a learning system,” he said. “As a result, these anti-spam systems began flagging legitimate emails as spam, making the system no longer useful.”
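The random-word attack can be sketched with a minimal Bayesian-style word scorer (a deliberately simplified, hypothetical model; real filters such as Graham-style Bayesian classifiers use more sophisticated statistics). Once spam messages full of ordinary English words enter the training corpus, the filter learns those ordinary words as spam indicators and starts flagging legitimate mail.

```python
# Toy Bayesian-style spam scorer, poisoned by random-word spam.
# Corpus and scoring rule are invented for illustration only.
from collections import Counter

def train(spam_msgs, ham_msgs):
    spam_words = Counter(w for m in spam_msgs for w in m.split())
    ham_words = Counter(w for m in ham_msgs for w in m.split())
    vocab = set(spam_words) | set(ham_words)
    # Per-word "spamminess" with add-one smoothing.
    return {w: (spam_words[w] + 1) / (spam_words[w] + ham_words[w] + 2)
            for w in vocab}

def spam_score(msg, spamminess):
    """Average spamminess of known words; > 0.5 means 'spam'."""
    words = [w for w in msg.split() if w in spamminess]
    return sum(spamminess[w] for w in words) / len(words)

spam = ["free viagra winner", "winner free prize"]
ham = ["project meeting tomorrow", "lunch meeting report"]

model = train(spam, ham)
legit = "meeting report tomorrow"
print(spam_score(legit, model) < 0.5)   # True: correctly looks like ham

# Spammers flood the system with spam built from random ordinary
# English words, which the filter then learns as spam indicators.
poison = ["meeting report tomorrow project lunch"] * 5
model2 = train(spam + poison, ham)
print(spam_score(legit, model2) > 0.5)  # True: legitimate mail flagged
```

This mirrors the outcome Hypponen describes: the system is not hacked, it is taught wrongly, and its false positives destroy its usefulness.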
Hypponen believes that people with AI skills and knowledge are so highly sought-after in legitimate industry that there is no need for these well-paid individuals to get involved in criminal activity. However, updating his stance from a year ago when he said AI-enabled attacks were unlikely any time soon, Hypponen said that new commercially available AI development tools could hasten AI-enabled attacks.
“As machine learning development tools get easier and easier to use, criminals will no longer have to find someone with a computer science degree to use them,” he said. “The barrier to entry is coming down and so we will start to see AI-enabled cyber attacks.
“That could be within the next year. They will be rudimentary at first, but soon will be pretty good machine learning attacks, where the malware is capable of rewriting itself to adapt to any obstacles it encounters.”
The continuing game of “cat and mouse” between attackers and defenders will reach a whole new level, said Hypponen, and defenders will have to adapt quickly as soon as they see the first AI-enabled attacks emerging.