Managing cyber security threats: Time to be introspective?

Posted on : 30-05-2014 | By : john.vincent | In : Cyber Security



Over the years, organisations have taken various steps to deal with the security threat to their internal IT assets. Cyber attackers have evolved from lone hackers passing the time to fully “employed” assailants who direct their online assaults at individuals, organisations or even governments.

Going back to the 1980s and through to the late 1990s, the threats came predominantly from students or people looking for “personal fame”. The potentially serious nature of security breaches was often brushed over by the media and even somewhat glamorised in films. From the late 1990s, however, things changed, with criminal gangs pursuing personal gain and “cyber spies” operating on behalf of national interests.

So what about the evolution of our defences over this period? We started out with simple perimeter protection (essentially just firewalls) and anti-virus on the desktop.

As the threats became more sophisticated we added layers to deal with mobile code exploits (Java, ActiveX etc.), then URL blocking, content filtering, Intrusion Prevention Systems (IPS), Intrusion Detection Systems (IDS), Next Generation Firewalls and so on (the most recent being zero-day malware threat protection systems such as FireEye).

However, each time we have added a new layer of security to address a new threat vector, it has not been guaranteed to catch every exploit, and over time cyber criminals generally succeed in circumventing controls to a greater or lesser extent. So where should we focus next?

There seems to be a change in thinking: knowing that some level of internal compromise is inevitable, should we start to look more closely at the internal network to analyse threats? It makes a lot of sense, and several vendors are emerging in this area, such as Darktrace, which uses mathematics and machine learning to take a “self-learning and probabilistic approach to cyber threats”, or nPulse (recently acquired by FireEye for $60m), which performs network forensics, particularly in high-performance business environments.

This shift is natural and in no sense detracts from the perimeter defence mechanisms (although, there is certainly a degree of diminishing returns and organisations need to think carefully about the architecture and future investment in these technologies).

One of the key questions organisations should ask themselves is:

“Assuming my organisation is already or will be compromised, how will I know? …and how will I respond?”

Let’s take those questions in order.

How will I know? One common industry statistic thrown around is that the average time malware sits “dormant” within the internal network is over 200 days. So, if it has found its way through the existing defence mechanisms, you need a different approach. The one helpful characteristic of malware is that it needs to travel, either back to a command-and-control (C2) centre or laterally, to create diversions or seek new “prey”.

Once threat actors gain access to the network, they establish and strive to sustain communication with the compromised computer. They then need to escalate their privileges by obtaining login credentials for accounts with access to valuable information. They may also gather information (e.g. documents found on desktops, network access to shared drives etc.) via regular user accounts. Once identified, the target data will be staged ready for exfiltration.

When this occurs, internal network analytics tools can detect the journey through a company’s assets, determining the impact along the way and alerting organisations so they can either take automated actions, such as disabling call-back channels or removing the endpoint devices from the network (as with Mandiant), or instigate manual intervention.
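To make this concrete, below is a minimal, illustrative sketch of one simple internal analytic: flagging “beaconing”, i.e. hosts that call the same external destination at suspiciously regular intervals, which is typical of automated call-backs to a C2 centre. It is not any vendor’s actual method; it assumes a hypothetical connection log in CSV form with timestamp, src_ip and dst_ip columns, and real products such as Darktrace or nPulse use far richer behavioural models.

# Illustrative sketch only: a naive beaconing detector over a connection log.
# Assumes (hypothetically) a CSV with columns: timestamp (epoch seconds), src_ip, dst_ip.

import csv
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(log_path, min_connections=10, max_jitter_ratio=0.1):
    """Return (src, dst) pairs whose connection intervals are unusually regular."""
    times = defaultdict(list)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            times[(row["src_ip"], row["dst_ip"])].append(float(row["timestamp"]))

    suspects = []
    for pair, ts in times.items():
        if len(ts) < min_connections:
            continue
        ts.sort()
        intervals = [b - a for a, b in zip(ts, ts[1:])]
        avg = mean(intervals)
        # Low variance relative to the mean interval suggests automated call-backs
        # rather than human-driven traffic.
        if avg > 0 and pstdev(intervals) / avg < max_jitter_ratio:
            suspects.append((pair, avg))
    return suspects

if __name__ == "__main__":
    for (src, dst), period in find_beacons("connections.csv"):
        print(f"Possible beaconing: {src} -> {dst} every ~{period:.0f}s")

In practice, an analytic like this would sit alongside many other behavioural signals (unusual login patterns, lateral movement between hosts, large internal data transfers), with alerts feeding the automated or manual responses described above.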

An important point to add here is that internally initiated security events, whether malicious or simple human error, are still one of the main causes of data breaches.

How will I respond? Catching cyber threat actors and malware exploits is one thing. Having an effective Incident Response mechanism and process is another. We find that many organisations focus on the control/prevention side and much less on how they will deal with potentially significant breaches of company assets.

Do you have a clear process for rapidly responding to potential data breaches, and have you tested it using real-life scenarios? In line with other critical incident processes, organisations need to build, test and continually improve their security incident response mechanisms based on their business profile and risk exposure.

Testing of incident response processes should be scheduled on a regular basis and reported through the internal risk management process (similar to BCP).

So, we see the landscape changing over the coming years. This will have an impact both on people/process (as discussed) and on technology. Perimeter vendors will seek to protect customers (and, arguably more importantly for them, revenue!) by improving functionality or moving into internal products. Another example is the traditional SIEM (Security Information and Event Management) vendors, who will need a radical rethink of their approach as the sheer volume of security data and analytics required breaks their value proposition (ultimately they must become big data systems or be made redundant).

We will, of course, be carefully monitoring this next phase in combating the cyber threat from within.

If you would like to explore any of the themes around this article, please contact jo.rose@broadgateconsultants.com

 

 

Is it possible to prevent those IT Failures?

Posted on : 30-05-2014 | By : richard.gale | In : Cyber Security



Last month we counted down our Top 10 Technology Disasters. Here are some of our tips on project planning which may help avoid failure in the future.

Objectives

What is the project trying to achieve? The objectives should be clear, and everyone involved in the project, including the recipients of the solution, needs to know what they are. Having unclear or unstated goals not only reduces the chances of success, it also makes it unclear what ‘success’ looks like if it occurs.

Value 

The value of the project to the organisation needs to be known and ‘obvious’. Too many projects start without this basic condition.

If the organisation is no better off after the project has been completed then there is little point starting it. ‘Better off’ can be defined in many ways – business advantage/growth, cost savings/efficiency, or internal/external push (e.g. something will break, or an auditor or regulator requires it to be done).

Projects are too often initiated for unclear or obscure reasons, ranging from “we have some budget to spend on something”, through “we would like to play with this new technology and need a project to enable us to do this”, to “we’ve started so we’ll finish” when the business has changed or moved on to other priorities.

Having a clear understanding of the value of the work, and a method of measuring success both during and after delivery, should be a fundamental part of any change process.

Scale

Large projects are difficult. Some projects need to be large – there would be little point building half of London’s Crossrail tunnels – but large projects seem more likely to fail (or at least get more publicity when they do). Complexity rises disproportionately as projects grow, due to the increasing connectivity between the risks, issues, logistics and number of people involved.

Breaking projects down into manageable pieces increases the likelihood of successful outcomes. The pieces then need to be woven into an overall programme or framework to ensure that the sum of the parts does end up equalling the whole.

Duration

In a similar vein to Scale above, projects with an extended duration are less likely to achieve full value. Businesses are not static; they change over time, and their objectives and goals change with them. The longer a project runs, the more likely it is that what the business requires now is not what is being delivered.

As outlined above, some projects are so large that they will run for multiple years. If they do, clear milestones need to be set over much shorter timeframes to avoid a loss of control (in terms of scope, time and cost). Regular review points should also be built into lengthy projects to reconfirm that business objectives are still being met – ultimately, that the change is still required.

Accountability

Nothing new here, but someone with both an interest in the project’s success and the seniority to ensure acceptance should be accountable for it. If the key stakeholder is not engaged in terms of ownership and driving the project along to completion, then the chances of a successful outcome are greatly diminished.

Empowerment

The other side of Accountability is Empowerment. Successful projects need empowered teams that understand the objectives of the project and their part within it, and are able to make decisions to guide it to completion. Projects with a top-down or command-and-control philosophy may succeed, but the person making all the decisions then needs to be right all the time. Teams slip into reactive or ‘follow without questioning’ modes of operating, which increases the likelihood that a wrong decision will be made and accepted, resulting in project failure.

In conclusion: make sure the project goals are clear, ensure it is adding value to the business, keep it short, secure senior leadership buy-in, and ensure the team can make the right decisions! If only it were this easy…