Cyber security: The threat from within

Posted on : 30-04-2015 | By : Jack.Rawden | In : Cyber Security



Cyber security, as ever, has been a widely discussed topic at Broadgate over the past few weeks.  Numerous cyber-attacks have made the news, from the TV5Monde hack to a recent Financial Times article stating that cyber criminals are among the fastest innovators in technology today.

However, with the focus of attention on the outside, the question is: is there an enemy within? Organisations have spent heavily and devoted significant resource to protecting themselves against external threats, building strong defences with firewalls, anti-virus software, mail filters and numerous other controls.  But have they left themselves vulnerable from the inside?

  • What if an employee’s password has been hacked and an intruder is stealing information?
  • What if an employee was accessing sensitive information that they shouldn’t?
  • Are you able to track malware that has already made it past the external defences?

Once a person is past the external defences, the level of access they might gain and the potential for misuse are often worrying.  Organisations can find it difficult to identify such insider threats, or by the time they recognise them it may be too late and the leak has already happened. Monitoring is made ever more difficult by the increasing complexity of an organisation’s network: the amount of data stored and the number and type of devices connecting to it make usage harder than ever to track.

Evidence of this can be found in the 2014 Information Security Breaches Survey conducted by PwC.  Almost 60% of organisations have encountered staff-related security breaches, with 20% caused by deliberate misuse of computer systems.

  • 55% of large businesses were attacked by an unauthorised outsider in the last year
  • 73% of large organisations suffered from infection by viruses or malicious software in the past year
  • 58% of large organisations suffered staff-related security breaches
  • 31% of the worst security breaches in the year were caused by inadvertent human error
  • 20% of the worst security breaches were caused by deliberate misuse of computer systems

More significant, and harder to quantify, is the damage an organisation may suffer if a leak does occur.  Reputational damage could be the most costly for private organisations, especially if the breach is widely publicised in the press.  With this could come monetary loss through the loss of clients or fines from regulators – the Information Commissioner’s Office has the power to fine organisations up to £500,000 for the misuse of personal data on UK citizens.

With this threat looming, what can organisations do to protect themselves?  Solutions present themselves as policy, procedure and innovative technologies that can monitor and identify such misuse. Here are a few pointers:

Effective IT usage policy – Simpler, shorter implementations

  • Establish a person responsible for security
  • Classify data into confidential, internal and public
  • Limit and track access to important documents/files – a deterrent to anyone trying to steal data from inside the network
  • Limit the use of external storage devices such as USB sticks and restrict access to file-sharing sites, including webmail
  • Identify the data “crown jewels” – the data that, were it to leak, would cause the biggest financial/reputational damage – and ensure these files are encrypted with limited access
  • Provide customised, role-based training for staff
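As a rough illustration, the classification and access-limiting points above can be modelled in a few lines of code. This is a minimal sketch with made-up roles, levels and documents, not a prescribed scheme:

```python
# Illustrative only: roles, levels and documents are hypothetical examples.
CLASSIFICATION = {"public": 0, "internal": 1, "confidential": 2}

# Each role is granted a maximum clearance level
ROLE_CLEARANCE = {"contractor": "public", "employee": "internal", "finance": "confidential"}

def can_access(role, doc_level):
    """Allow access only if the role's clearance meets the document's classification."""
    clearance = ROLE_CLEARANCE.get(role, "public")
    return CLASSIFICATION[clearance] >= CLASSIFICATION[doc_level]

def audit(role, doc, doc_level):
    """Log every attempt, allowed or not, so misuse leaves a trail."""
    outcome = "ALLOWED" if can_access(role, doc_level) else "DENIED"
    return f"{role} -> {doc} [{doc_level}]: {outcome}"
```

The audit trail is as important as the access check itself: tracked access is what turns a policy into a deterrent.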

Monitoring – Medium/long term implementation

  • Use specialist security software to track files and malware entering/leaving the network.  Tools such as FireEye or Darktrace use advanced tracking functionality to spot unusual behaviour on a network, covering unusual network behaviour as well as unusual user behaviour.
  • Consider tools such as Dtex, deployed on an individual’s PC, to monitor behaviour: capturing changes in user patterns (e.g. an employee getting ready to leave the organisation), high-risk behaviour patterns, or establishing what information was lost on a laptop left on a train.
  • Use monitoring solutions such as Digital Shadows to track data that has left the internal boundary and calculate your exposure outside the organisation, even tracking data on social media and the “dark web”.
  • Controlled environment – a four-eyes check of files leaving the network to ensure sensitive files are not being sent externally

These types of attack are difficult to stop completely as they revolve around the people using the systems.

However, with better controls and methods to identify unusual activity and misuse, the objective is to capture and remediate potential losses as quickly as possible.



The security threat: Do you know your real business risk?

Posted on : 31-03-2015 | By : john.vincent | In : Cyber Security



Our clients increasingly ask us to help them assess the current threats to their organisation from a security perspective. Indeed, this is now a core part of our services portfolio.

The question of measuring an organisation’s threat exposure is not an easy one. There are many angles and techniques that companies can take, from assessing processes, audit requirements, regulatory posture and perimeter defence mechanisms to end user computing controls, network access and so on.

The reality is that companies often select the approach that suits their current operating model or, if independent, one aligned with their technology or methodology bias. In 99% of cases, what these assessment approaches have in common is that they address only a subset of the problem.

At Broadgate we take a very different approach. It starts with two very simple guiding principles:

  1. What are the most critical data and digital assets that your company needs to protect?
  2. How do your board members assess, measure and quantify security risks?

Our methodology applies a top down lens over these questions and then looks at the various inputs into them. We also consider the threats in real world terms, discarding the “FUD” (Fear, Uncertainty and Doubt) that many service providers use to embed solutions and drive revenue, often against the real needs of clients.

Some of the principles of our methodology are:

  1. Top Down – we start with the board room. As the requirements to understand, act and report on breaches within a company become more robust, it is the board and C-level executives who need the data on which to make informed decisions.
  2. Traceability – any methodology should have a common grounding to position it and also to allow for comparison against the market. Everything we assess can be traced back to industry terminology from top to bottom whilst maintaining a vocabulary that resonates in the board room.
  3. Risk Driven – to conduct a proper assessment of an organisation’s exposure to security breaches, it is vital that companies accurately understand the various aspects of their business profile and the potential origin of threats, both internal and external. For a thorough assessment, organisations need to consider likelihood and impact from various angles, including regulatory position, industry vertical, threat trends and, of course, the board members themselves (as attacks are increasingly personal in nature). Our methodology takes these, and many other aspects, into consideration and applies a value at risk, which allows for focused remediation plans and the development of strategic security roadmaps.
  4. Maturity Based – we map the key security standards and frameworks, such as ISO 27001/2, SANS 20 and Cyber Essentials, from the top level through to the mechanics of implementation. We then present these in non-technical, business language so that there is a very clear common understanding of where compromises may exist and of the current-state maturity level. This is a vital part of our approach which many assessments do not cover, often choosing instead to present a simple black and white picture.
  5. Technology Best Fit – the commercial success of the technology security market has led to a myriad of vendors plying their wares. Navigating this landscape is very difficult, particularly understanding the different approaches to prevention, detection and response. At Broadgate we have spent years investigating the best-fit technologies to mitigate the threats of a cyber attack or data breach, and this experience forms a cornerstone of our methodology.

At Broadgate our mantra is “The Business of Technology”. This applies across all of our products and services and never more so when it comes to really assessing the risks in the security space.

If you would like to explore our approach in more detail, and how it might benefit your company, please contact us.

Why’s my computer so slow? Maybe someone is digging for virtual gold.

Posted on : 30-06-2014 | By : richard.gale | In : Cyber Security



We’ve discussed the rise and fall and rise of virtual currencies in a couple of previous articles (When are Bitcoins going to crash and what’s next?,  The hidden costs of transacting with virtual currencies).

Creating new currency (whether Bitcoin, Dogecoin, Litecoin etc.) involves solving increasingly complex algorithms that consume computing power. The reward for this problem solving is a virtual coin, and the amount of work required to ‘earn’ a ‘coin’ is constantly rising.  ‘Miners’, as the creators are called, are always looking for new and creative ways to build more coins, and the cost of processing power sometimes outweighs the worth of the output.
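The ever-rising work behind each ‘coin’ comes from proof-of-work: miners search for a value whose hash meets an increasingly strict target. A toy sketch of the idea in Python (not any real coin’s actual algorithm):

```python
import hashlib

def mine(block_data, difficulty):
    """Find a nonce whose SHA-256 hash starts with `difficulty` zero hex digits.
    Each extra digit multiplies the expected work by 16, which is why
    mining cost keeps climbing as networks raise the difficulty."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Difficulty 4 needs ~65,000 hash attempts on average; difficulty 5, ~16x more.
nonce, digest = mine("example-block", 4)
```

That exponential growth in required hashing is exactly why the economics push ‘miners’ towards other people’s hardware.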

A phenomenon that will only rise in frequency and impact is the misuse of other people’s computers to do this.  A few examples are outlined below where organisations were unwittingly hosting unauthorised external mining activities (maybe some terminology from the Californian gold rush would be appropriate – are they virtual “claim jumpers” or “processing poachers”?)

Harvard University research servers have been used to mine dogecoins. A powerful cluster of machines known as ‘Odyssey’ had been hijacked – misused, really, as the user had legitimate access – and a mining operation was in place for an unknown period of time. The perpetrator has now had their access revoked, but it is not known how profitable the operation was.

In another example, the US National Science Foundation’s supercomputers were taken over for bitcoin mining – the researcher accused of creating the mining operation said he was ‘conducting research’, and it is thought around $8,000 worth of bitcoins were produced.

There are other occurrences of this phenomenon, including rogue Android applications reported to have taken over people’s mobile phones to carry out mining activities (although they would need a large number of phones to make this at all valuable).

We think these examples reflect a wider problem. People can have legitimate access to huge amounts of computing power; this is especially true in academic, governmental and larger enterprise environments. How can the need to run large simulations or experiments be differentiated from more sinister misuse of that excess power?

This whole space is a difficult area to analyse. What is ‘normal’ and what is ‘abnormal’? We’ve been thinking about how to differentiate the two and are now working with a really smart new security company that can help with this (and many other) security issues.

The product, Darktrace, was built by some ex-MI5 and GCHQ scientists and grew out of the need to protect the UK’s critical network infrastructure (energy and water supplies, communications and transport) against terrorist or foreign-state cyber-attack. The team at Darktrace quickly realised that the current suite of protection could not prevent most insider attacks (whether intentional or accidental), so a new model was needed.

Darktrace sits at the centre of your network, listens and learns about the behaviour of users, connected devices and the network itself, and then alerts when something abnormal or unusual occurs. It has no preconceptions about the environment when installed: it learns for a period of 2-4 weeks and then shouts (usually to the security operations team, or an external team such as the Mandiant response units) when something odd happens. Darktrace works almost like the immune system of a body: it understands what healthy looks like and alerts its ‘antibodies’ to investigate and, if necessary, destroy any potential threat.

The product uses some clever probabilistic algorithms that constantly learn and build on their knowledge of your environment. Take the user ‘Fred’. Fred normally logs in to the network after 8:00am, accesses mail and three file servers, and logs out before 7:00pm. If Fred suddenly starts logging in at 2:00am, searching eight different file servers for documents containing the word ‘patent’ and then exporting them outside the organisation to a site in Ukraine, the activity would be marked as ‘unusual’ and alerted on. This could potentially be legitimate if Fred’s role has changed, but probably not. Traditional cyber-technologies may not catch these sorts of issues, as they look for specific patterns or types of behaviour rather than general deviations from the norm.
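The ‘Fred’ example boils down to learning a per-user baseline and flagging large deviations from it. A toy sketch of that idea in Python (Darktrace’s actual probabilistic models are far more sophisticated; the login-hour data here is invented):

```python
from statistics import mean, stdev

def is_anomalous(history, observation, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    away from the user's learned baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

# Fred's usual login hours (24h clock) learned over recent weeks
fred_logins = [8.1, 8.3, 8.0, 8.5, 8.2, 8.4, 8.1]

is_anomalous(fred_logins, 8.3)  # typical morning login -> False
is_anomalous(fred_logins, 2.0)  # 02:00 login -> True, worth alerting on
```

A real system would learn many such baselines at once (logins, servers touched, data volumes moved) and combine them, but the principle is the same: alert on deviation from the norm, not on a known attack signature.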

We have been working with Darktrace and can install the appliance on your network to perform the analysis for you. We can do this for a period of 4-8 weeks (to give the system enough time to learn the environment and gather sufficient data to work with) and can provide analysis of any unusual behaviour, along with advice to your security team, throughout that period. In that time we would expect to see some unusual activity, which should demonstrate the value to your organisation.

If you would like to learn more about this please do contact us.

Managing cyber security threats: Time to be introspective?

Posted on : 30-05-2014 | By : john.vincent | In : Cyber Security



Over the years, organisations have taken various steps to deal with the security threat to their internal IT assets. Cyber attackers have evolved from lonely hackers passing time to fully “employed” assailants who target their online assaults on an individual, an organisation or even a government.

Going back to the 1980s and through to the late 1990s, the threats were predominantly from students or people looking for “personal fame”. The potentially serious nature of security breaches was often brushed over by the media and even somewhat glamorised in films. However, from the late 1990s things changed, with criminal gangs after personal gain and “cyber spies” operating on behalf of national interests.

So what about the evolution of our defences over this period? We started out with simple perimeter protection (essentially just firewalls) and anti-virus on the desktop.

As the threats became more sophisticated we added layers to deal with mobile code exploits (Java, ActiveX etc..), then URL blocking, content filtering, Intrusion Prevention Systems (IPS), Intrusion Detection Systems (IDS), Next Generation Firewalls and so on (the most recent of these being zero day malware threat protection systems like FireEye).

However, each time we have added a new layer of security that addresses the new threat vector, it is of course not guaranteed to catch all exploits and over time cyber criminals are generally successful in circumventing controls to a greater or lesser extent. So where should we focus next?

There seems to be a change in thinking: knowing that some level of internal compromise is inevitable, should we start to look more closely at the internal network to analyse threats? It makes a lot of sense, and several vendors are emerging in this area, such as Darktrace, who use mathematics and machine learning to adopt a “self-learning and probabilistic approach to cyber threats”, or nPulse (recently acquired by FireEye for $60m), who perform network forensics, particularly in high-performance business environments.

This shift is natural and in no sense detracts from the perimeter defence mechanisms (although, there is certainly a degree of diminishing returns and organisations need to think carefully about the architecture and future investment in these technologies).

One of the key questions organisations should ask themselves, is:

“Assuming my organisation is already or will be compromised, how will I know? …and how will I respond?”

Let’s take those questions in order.

How will I know? One common industry statistic thrown around is that the average time malware sits “dormant” within the internal network is over 200 days. So, if it has found its way through the existing defence mechanisms, you need a different approach. The positive thing about malware is that it needs to travel, either back to a Command & Control centre or laterally to attempt diversion or seek new “prey”.

Once threat actors gain access to the network, they establish and strive to sustain communication with the compromised computer. They then need to gain more privileges by harvesting login credentials that grant access to valuable information. They may also gather information (e.g. documents found on desktops, network access to shared drives etc.) via regular user accounts. Once identified, the target data will be made ready for exfiltration.

When this occurs, internal network analytics tools can detect the journey through a company’s assets, determining the impact along the way and alerting organisations so they can take automated actions, such as disabling call-back channels or removing endpoint devices from the network (as with Mandiant), or instigate manual intervention.
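Because implanted malware has to “phone home”, one simple tell that internal analytics tools can look for is unnaturally regular outbound connections. A toy check of that idea, assuming we already have a list of connection timestamps (in seconds) to a given destination:

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter=0.1):
    """Near-constant intervals between connections suggest an automated
    call-back channel rather than human-driven traffic."""
    if len(timestamps) < 4:
        return False  # too few samples to judge
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Coefficient of variation: spread of the intervals relative to their mean
    return pstdev(intervals) / mean(intervals) < max_jitter

looks_like_beacon([0, 60, 120, 180, 240])  # metronomic 60s beacon -> True
looks_like_beacon([0, 45, 200, 230, 610])  # bursty human browsing -> False
```

Real malware often adds random jitter to its call-backs precisely to defeat checks like this, which is why commercial tools layer many such signals rather than relying on any single one.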

An important point to add here is that internally initiated security events, whether malicious or simple human error, are still one of the main causes of data breaches.

How will I respond? Catching cyber threat actors and malware exploits is one thing; having an effective incident response mechanism and process is another. We find that many organisations focus on the control/prevention side and much less on how they will deal with potentially significant breaches of company assets.

Do you have a clear process for rapidly responding to potential data breaches, and have you tested it using real-life scenarios? In line with other critical incident processes, organisations need to build, test and continually improve their security incident response mechanisms based on their business profile and risk exposure.

Testing of incident response processes should be scheduled on a regular basis and reported through the internal risk management process (similar to BCP).

So, we see the landscape changing over the coming years. This will have an impact both on people and process (as discussed) and on technology. Vendors at the perimeter will seek to protect customers (and, arguably more importantly for them, revenue!) by improving functionality or moving to internal products. Another example is the traditional SIEM (Security Information and Event Management) vendors, who will need a radical rethink of approach as the sheer volume of security data and analytics required breaks their value proposition (ultimately they must become big data systems or be made redundant).

We will, of course, be carefully monitoring this next phase in combating the cyber threat from within.

If you would like to explore any of the themes in this article, please contact us.