AI in Cyber Security – Friend or Foe?

Posted on : 28-06-2019 | By : richard.gale | In : Uncategorized


Artificial intelligence has been welcomed by the cyber security industry as an invaluable tool in the fight against cyber crime, but is it a double-edged sword? One that is both a powerful defender and, potentially, a potent weapon for cyber criminals.

The same artificial intelligence technologies that power speech recognition and self-driving cars can be turned to other uses: creating viruses that morph faster than antivirus companies can keep up, phishing emails that are indistinguishable from real messages written by humans, and intelligent attacks on an organisation’s entire defence infrastructure that find the smallest vulnerability and exploit any gap.

Just like any other technology, AI has both strengths and weaknesses, and in the wrong hands it can be abused.  

In the AI-fuelled security wars, the balance of power currently rests with the good guys, but that is undoubtedly set to change.  

Until now, attackers have relied on mass distribution and sloppy security. The danger is that we will see more adversaries, especially those that are well funded, leverage these advanced tools and methods more frequently. It is concerning that nation-state attackers such as Russia and China have almost unlimited resources to develop these tools and make maximum use of them. 

The dark web acts as a clearing house for cyber criminals, where all manner of crypto software is available.  

There are many ways in which hackers seek to benefit from your information, but the biggest prize is the password, which opens up a whole new set of vulnerabilities to exploit. Password-cracking algorithms can test millions of candidate passwords within minutes.  
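To see why, here is a minimal dictionary-attack sketch in Python (purely illustrative: the hashes and wordlist are toy examples). Fast, unsalted hashes such as MD5 can be computed and compared millions of times per minute on commodity hardware.

```python
# Minimal dictionary-attack sketch: why fast, unsalted hashes fall quickly.
# Purely illustrative; the hashes and wordlist are toy examples.
import hashlib

leaked_hashes = {
    "5f4dcc3b5aa765d61d8327deb882cf99",  # md5("password")
    "e10adc3949ba59abbe56e057f20f883e",  # md5("123456")
}

wordlist = ["letmein", "password", "qwerty", "123456"]  # real lists hold millions

for candidate in wordlist:
    digest = hashlib.md5(candidate.encode()).hexdigest()
    if digest in leaked_hashes:
        print(f"cracked: {digest} -> {candidate}")
```

Each MD5 comparison takes microseconds, which is why slow, salted password hashing schemes (bcrypt, scrypt, Argon2) and long, unique passphrases matter so much on the defensive side.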

Threat analytics firm Darktrace has seen evidence of malware programs showing signs of contextual awareness when trying to steal data and hold systems to ransom. By closely observing the infrastructure, they know what to look for and how to find it, and can then work out the best way to avoid detection. This means the program no longer needs to maintain contact with the hacker through command-and-control servers or other channels, which is usually one of the most effective means of tracking the perpetrator.

Recently, Microsoft was able to spot an attempted hack of its Azure cloud when the AI in the security system identified a false intrusion from a fake site. With rule-based protocols alone, this would have gone unnoticed. AI’s ability to learn and adapt to new threats should dramatically improve the enterprise’s ability to protect itself, even as data and infrastructure push past the traditional firewall into the cloud and the internet of things. 

Human effort won’t scale – there are too many threats, too many changes, and too many network interactions. 
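As a hedged illustration of the learned, rather than rule-based, detection described above: the sketch below trains scikit-learn's IsolationForest on features of ordinary network sessions and flags outliers. All feature names and numbers are invented for the example.

```python
# Illustrative anomaly detection in the spirit of AI-based defence:
# learn what "normal" sessions look like, then flag outliers.
# Requires scikit-learn; all numbers are made up for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

# columns: [bytes_sent_kb, session_minutes, failed_logins]
normal_sessions = np.array([
    [120, 30, 0], [95, 25, 1], [110, 28, 0], [130, 35, 0], [100, 27, 1],
])
model = IsolationForest(contamination=0.1, random_state=0).fit(normal_sessions)

new_sessions = np.array([
    [115, 29, 0],     # looks like business as usual
    [9000, 3, 12],    # huge transfer, short session, many failed logins
])
for session, verdict in zip(new_sessions, model.predict(new_sessions)):
    print(session, "anomalous" if verdict == -1 else "normal")
```

A rule-based system only catches what someone thought to write a rule for; a learned model flags anything sufficiently unlike the normal baseline, which is the property the Azure example above relies on.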

As cybercrime becomes ever more technologically advanced, there is no doubt that we will see the bad guys employing AI in increasingly sophisticated scenarios. 

It’s time for cybersecurity managers to make sure they’re doing everything they can to reduce their attack surface as much as possible, put cutting-edge defences in place, and replace time-consuming cybersecurity tasks with automation. 

We should all be concerned that, as we begin to see AI-powered chatbots and extensive influence campaigns weaving through social media, we face the prospect of the internet being used as a weapon to undermine trust and control public opinion. This is a very worrying situation indeed!  

When a picture tells a thousand words – An image is not quite what it seems

Posted on : 28-06-2019 | By : richard.gale | In : Uncategorized


Steganography is not a new concept: the ancient Greeks and Romans used hidden messages to outsmart their opponents, and thousands of years later nothing has changed. People have always found ways of hiding secrets in a message so that only the sender and the intended recipient can understand them. This is different from cryptography: rather than obscuring content so it cannot be read by anyone other than the intended recipient, steganography aims to conceal the fact that the content exists in the first place. Put two images side by side, one carrying a hidden message and one without, and there will be no visible difference. It is a great way of sending secure messages where the sender can be assured of confidentiality without worrying about unauthorised viewing in the wrong hands. However, like so many technologies today, steganography can be used for good or for bad. When the bad guys get in on the act, we have yet another threat to explore in the cyber landscape!
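To make the idea concrete, here is a minimal least-significant-bit (LSB) sketch in Python using the Pillow library. It is illustrative only; real steganographic tools are far more sophisticated, but the principle is the same: flipping the lowest bit of a colour channel changes the image imperceptibly.

```python
# Minimal LSB image steganography sketch using Pillow. Illustrative only.
from PIL import Image

def embed(cover_path, message, out_path):
    """Hide a UTF-8 message in the lowest bit of each red channel value."""
    img = Image.open(cover_path).convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in message.encode()) + "00000000"  # NUL terminator
    pixels = list(img.getdata())
    if len(bits) > len(pixels):
        raise ValueError("message too long for this cover image")
    out = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])   # overwrite the red channel's lowest bit
        out.append((r, g, b))
    stego = Image.new("RGB", img.size)
    stego.putdata(out)
    stego.save(out_path, "PNG")           # lossless format preserves the hidden bits

def extract(stego_path):
    """Read red-channel low bits until the NUL terminator, then decode."""
    img = Image.open(stego_path).convert("RGB")
    bits = "".join(str(r & 1) for r, g, b in img.getdata())
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = int(bits[i:i + 8], 2)
        if byte == 0:                     # NUL terminator marks end of message
            break
        data.append(byte)
    return data.decode()
```

Running `embed("sunset.png", "meet at dawn", "stego.png")` and then `extract("stego.png")` recovers the message, while the cover and stego images look identical to the eye.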

Hackers are increasingly using this method to trick internet users and smuggle malicious code past security scanners and firewalls. The code can be hidden in apparently harmless files and spring into action when users least expect it. The attackers download the file with the hidden data, then extract it for use in the next step of the attack.

Malvertising is one way in which cyber criminals exploit steganography. They buy advertising space on trustworthy websites and post ads that appear legitimate, with harmful code hidden inside. Bad ads can redirect users to malicious websites or install malware on their computers or mobile devices. One of the most concerning aspects of this technique is that users can be infected even if they don’t click on the image; often just loading it is enough. Earlier this year, millions of Apple Mac users were hit when hackers used advertising campaigns with malicious code hidden in ad images to avoid detection. Some very famous names, such as the New York Times and Spotify, have inadvertently displayed these criminal ads, putting their users at risk.

Botnets are another way in which hackers use steganography: hidden code in ordinary-looking inbound traffic is used to communicate with compromised machines and download further malware. Botnet controllers employ steganography techniques to control target endpoints, hiding commands in plain view, perhaps within images or music files distributed through file sharing or social networking websites. This allows the criminals to surreptitiously issue instructions to their botnets without relying on an ISP to host their infrastructure, minimising the chances of discovery.
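To illustrate the mechanism from a defender's perspective, one crude but real-world variant is appending a payload after a PNG's closing IEND chunk: the image still renders normally, and a bot only has to read the trailing bytes. The URL below is hypothetical, and the sketch deliberately does nothing with the "command" it finds.

```python
# Defender-education sketch: how a bot might poll a public image for a
# command appended after the PNG IEND chunk. The URL is hypothetical.
import urllib.request

IMAGE_URL = "https://example.com/cat.png"   # hypothetical image on a public site
PNG_IEND = b"IEND\xaeB`\x82"                # bytes that close every valid PNG file

def fetch_hidden_command():
    data = urllib.request.urlopen(IMAGE_URL).read()
    end = data.rfind(PNG_IEND)
    if end == -1:
        return None                          # not a PNG, or truncated download
    payload = data[end + len(PNG_IEND):]     # anything appended after the image
    return payload.decode(errors="replace") or None
```

Knowing the trick also suggests the defence: scanners can flag image files with unexpected bytes after the terminating chunk.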

It’s not only the cyber criminals who have realised the potential of steganography; the malicious insider is an enthusiast too! Last year a Chinese engineer was able to exfiltrate sensitive information from General Electric by “stegging” it into images of sunsets. He was only discovered when GE security officials became suspicious of him for an unrelated reason and started to monitor his office computer.

Organisations should be concerned about the rise of steganography from both malicious outsiders and insiders. The battle between the hackers and the security teams is on, and it is one that the hackers are currently winning. There are so many different steganography techniques that it is almost impossible to find one detection solution that can deal with them all. So, until there is such a solution, it’s the same old advice: always be aware of what you are loading and what you are clicking.

There is an old saying that “the camera never lies”, but sometimes maybe it does!

How secure are your RPA Processes?

Posted on : 17-06-2019 | By : richard.gale | In : Uncategorized


Robotic Process Automation is an emerging technology, with many organisations looking at how they might benefit from automating some, or all, of their business processes. However, in some companies there is a common misconception that letting robots loose on the network poses a significant security risk, the belief being that robots are far less secure users than their human counterparts.  

In reality, a compelling case can be made that robots are inherently more secure than people. 

Provided your robots are treated in the same way as their human teammates, i.e. they inherit the security access and profile of the person or role they are programmed to simulate, there is no reason why a robot should be any less secure. In other words, the security policies and access controls suitable for humans should be applied to the software robots in just the same way.  

There are many security advantages gained from introducing a robot into your organisation.  

  • Once a robot has been trained to perform a task, it never deviates from the policies, procedures and business rules in place. 
  • Unlike human users, robots lack curiosity (so they won’t be tempted to open phishing emails) and cannot be tricked into revealing information or downloading unauthorised software. 
  • Robots have no motives that might turn them into disgruntled employees who ignore existing policies and procedures.  

So, we can see that, on the contrary, in many ways the predictable behaviour of robots makes them your most trusted employees! 

RPA certainly represents an unprecedented level of transformation and disruption to “business as usual” – one that requires careful preparation and planning. But while caution is prudent, many of the security concerns related to RPA implementation are overstated. 

The issue of data security can be broken down into two points:  

  • Data Security 
  • Access Security 

Data security means ensuring that the data being accessed and processed by the robot remains secure and confidential. Access security means that the robots’ access rights are properly assigned and reviewed, in the same way as existing human user accounts. 

Here are some of the key security points to consider: 

  1. Segregate access to data just as you would for normal users: base it on what the robot actually needs to do, and do not provide domain admin permissions or elevated access unless absolutely necessary. 
  2. Maintain passwords in a password vault and review service accounts’ access periodically (see the sketch after this list). 
  3. Monitor robot activity via a “control room” (e.g. logon information and any errors). 
  4. Integrate the RPA environment with Active Directory, which also increases business efficiency because access management is centralised. 
  5. Encrypt stored credentials. 
  6. Perform independent code audits and reviews, no different than with any other IT environment. 
  7. Program robots using secure programming methods. 
  8. Test security against policy controls. 
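
As a minimal sketch of point 2 (and the logging side of point 3), the robot below fetches its password from a credential store at runtime instead of hardcoding it. The keyring library stands in for an enterprise vault, and the service and account names are hypothetical.

```python
# Sketch of vaulted credentials for a software robot: the password is
# fetched at runtime, never stored in source code. The `keyring` library
# stands in for an enterprise vault; names are hypothetical.
import logging
import keyring

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rpa-control-room")   # feeds the monitoring in point 3

SERVICE = "erp-production"                    # hypothetical target application
ROBOT_ACCOUNT = "svc-rpa-billing"             # hypothetical robot service account

def get_robot_credentials():
    password = keyring.get_password(SERVICE, ROBOT_ACCOUNT)
    if password is None:
        raise RuntimeError("credential not found in vault; refusing to run")
    log.info("logon attempt: account=%s service=%s", ROBOT_ACCOUNT, SERVICE)
    return ROBOT_ACCOUNT, password
```

Swapping keyring for a product such as HashiCorp Vault or CyberArk changes the client call, not the principle: credentials never live in the robot’s source code or configuration files.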

 

All these points must be considered from the outset. This is security by design, which must be embedded in the RPA process from the start. It must be re-emphasised that the security of RPA is not just about protecting access to the data but about securing the data itself. 

Overall, RPA lowers the security-related effort associated with training employees in security practices (e.g. password management, application of privacy settings, etc.) because it ensures a zero-touch environment. By eliminating manual work, automation minimises security risks at a macro level, provided the key controls are implemented at the beginning. 

In addition, the zero-touch environment of RPA helps mitigate other human-related risks in business operations. An automated environment is free from the biases, prejudices and variability that make human work error-prone, so RPA delivers less risky, more consistent work with trustworthy data and increases uniform compliance with the company requirements built into its workflows and tasks. 

Therefore, RPA should be implemented wisely: choose a stable RPA product or provider, backed by proper, constant monitoring of security measures. Role-based access to confidential data, access monitoring and data encryption are the most salient means of dealing with security risks.