AI in Cyber Security – Friend or Foe?


Artificial intelligence has been welcomed by the cyber security industry as an invaluable tool in the fight against cyber crime, but is it a double-edged sword, one that is both a powerful defender and, potentially, a potent weapon for cyber criminals?

The same artificial intelligence technologies that power speech recognition and self-driving cars can be turned to other uses: creating viruses that morph faster than antivirus companies can keep up, writing phishing emails that are indistinguishable from real messages written by humans, and intelligently probing an organisation’s entire defence infrastructure to find the smallest vulnerability and exploit any gap.

Just like any other technology, AI has both strengths and weaknesses, and both can be abused when it falls into the wrong hands.

In the AI-fuelled security wars, the balance of power currently lies with the good guys, but that is undoubtedly set to change.

Until now, attackers have relied on mass distribution and sloppy security. The danger is that more adversaries, especially those that are well funded, will begin to leverage these advanced tools and methods more frequently. It is concerning that nation-state attackers such as Russia and China have almost unlimited resources to develop these tools and make maximum use of them.

The dark web acts as a clearing house for cyber criminals, where all manner of crypto software is available.

There are many ways in which hackers seek to benefit from your information, but the biggest prize is a password, which opens up a whole new set of vulnerabilities to exploit. Algorithms can crack millions of passwords within minutes.
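
To make that scale concrete, here is a minimal sketch (not from the article, and using a hypothetical wordlist path) of a dictionary attack against unsalted MD5 hashes. Fast, unsalted hashes are exactly what lets automated tooling test millions of candidates per minute.

    # Minimal sketch: a dictionary attack against unsalted MD5 hashes.
    # "rockyou.txt" is a commonly used public wordlist; the path is an
    # assumption for illustration, not something referenced in the post.
    import hashlib

    def crack(target_hashes, wordlist_path="rockyou.txt"):
        """Return a mapping of cracked hash -> plaintext candidate."""
        cracked = {}
        remaining = set(target_hashes)
        with open(wordlist_path, encoding="latin-1", errors="ignore") as wordlist:
            for line in wordlist:
                candidate = line.rstrip("\n")
                digest = hashlib.md5(candidate.encode()).hexdigest()
                if digest in remaining:
                    cracked[digest] = candidate
                    remaining.discard(digest)
                    if not remaining:
                        break
        return cracked

    # Example: a weak password such as "password123" falls almost instantly.
    print(crack({hashlib.md5(b"password123").hexdigest()}))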

Threat analytics firm Darktrace has seen evidence of malware programs showing signs of contextual awareness as they try to steal data and hold systems to ransom. By closely observing the infrastructure they infect, they know what to look for and how to find it, and can then work out the best way to avoid detection. This means the program no longer needs to maintain contact with the hacker through command and control servers or other means, which is usually one of the most effective ways of tracking the perpetrator.

Recently, Microsoft was able to spot an attempted hack of its Azure cloud when the AI in its security system identified an intrusion attempt from a fake site. Had it been relying on rule-based protocols alone, the attack would have gone unnoticed. AI’s ability to learn and adapt to new threats should dramatically improve the enterprise’s ability to protect itself, even as data and infrastructure push past the traditional firewall into the cloud and the internet of things.
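
As a rough illustration of the difference between rule-based and learning-based detection, the sketch below (our own assumption, using scikit-learn’s IsolationForest rather than anything Microsoft actually runs) shows a hard-coded rule missing an unusual login while a model trained on historical behaviour flags it.

    # Minimal sketch, assuming scikit-learn is available and login events are
    # already reduced to numeric features; this is not Microsoft's system.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical features per login: [hour_of_day, failed_attempts, mb_transferred]
    rng = np.random.default_rng(0)
    normal_logins = rng.normal([10.0, 1.0, 5.0], [3.0, 1.0, 2.0], size=(1000, 3))
    suspicious = np.array([[3.0, 25.0, 400.0]])  # odd hour, many failures, huge transfer

    # A fixed rule only fires on its hard-coded threshold and misses this attempt.
    rule_flag = bool(suspicious[0, 1] > 50)

    # A model trained on historical traffic flags anything unlike the baseline.
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)
    ai_flag = bool(model.predict(suspicious)[0] == -1)  # -1 means anomaly

    print(f"rule-based flagged: {rule_flag}, learned baseline flagged: {ai_flag}")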

Human effort won’t scale – there are too many threats, too many changes, and too many network interactions. 

As cybercrime becomes more and more technologically advanced, there is no doubt that we will see the bad guys employing AI in ever more sophisticated scenarios.

It’s time for cybersecurity managers to make sure they’re doing everything they can to reduce their attack surface, put cutting-edge defenses in place, and replace time-consuming cybersecurity tasks with automation.
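
For a flavour of the kind of routine task worth automating, here is a small sketch (hypothetical log format and threshold) that scans an auth log for repeated failed logins and produces a list of IPs to block, rather than having an analyst eyeball the log by hand.

    # Minimal sketch with a hypothetical log format and threshold: count failed
    # SSH logins per source IP in an auth log and emit candidates to block.
    import re
    from collections import Counter

    FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

    def ips_to_block(log_path="auth.log", threshold=10):
        """Return source IPs with at least `threshold` failed login attempts."""
        counts = Counter()
        with open(log_path, encoding="utf-8", errors="ignore") as log:
            for line in log:
                match = FAILED_LOGIN.search(line)
                if match:
                    counts[match.group(1)] += 1
        return [ip for ip, hits in counts.items() if hits >= threshold]

    if __name__ == "__main__":
        for ip in ips_to_block():
            print(f"blocking candidate: {ip}")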

We should all be concerned that, as we begin to see AI-powered chatbots and extensive influence campaigns weaving through social media, we face the prospect of the internet being used as a weapon to undermine trust and control public opinion. This is a very worrying situation indeed!


Posted on : 28-06-2019 | By : richard.gale | In : Uncategorized
