Cloud as an “Innovation Enabler”

Posted on : 30-06-2014 | By : john.vincent | In : Cloud



It seems that most people we come across in our daily activities now agree that cloud computing is a key disrupter to “traditional” technology service delivery. We no longer start conversations with “let’s define what we mean by cloud computing” or “cloud means different things to different people”, or have to ensure all documents describe public, private and hybrid cloud as laid out by NIST (the National Institute of Standards and Technology).

People get cloud now. Of course, there are still naysayers and those who raise the security, compliance or regulatory card, but those voices are becoming fainter (indeed, if you look closer you’ll often find that the objection actually stems from a cultural fear, such as loss of control).

If we look at the evolution and adoption of cloud technology, it has predominantly focused on two business drivers: efficiency and agility. The first of these took some time to prove itself from an economic business case perspective. As with most new technologies or ideologies, economies of scale create the tipping point for accelerated adoption, and we have now reached the point where the pure cost benefits of on-demand infrastructure are compelling compared to the internally managed alternative.

The agility angle requires more of a shift in the operating model and mindset for technology organisations. CIOs are generally used to owning and managing infrastructure in “tranches” – deploying additional compute capability for new applications or removing it for consolidation, rationalisation and changes in business strategy.

What cloud technologies provide is the capability for matching demand and supply of compute resource without step changes. To deliver this, however, requires improved forecasting, provisioning and monitoring processes within the technology organisation.
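This matching of demand and supply without step changes can be sketched as a simple incremental scaling policy. The thresholds, step size and utilisation figures below are illustrative assumptions only, not a recommendation:

```python
# Minimal sketch of demand-driven provisioning: scale compute up or down
# in small increments based on observed utilisation, rather than buying
# capacity in large "tranches". Thresholds and step sizes are illustrative.

def target_instances(current: int, utilisation: float,
                     high: float = 0.75, low: float = 0.30) -> int:
    """Return the desired instance count for the next monitoring interval."""
    if utilisation > high:                  # demand outstripping supply: add capacity
        return current + 1
    if utilisation < low and current > 1:   # over-provisioned: shed capacity
        return current - 1
    return current                          # within the band: no change

# A fluctuating load, matched one small step at a time
instances = 2
for load in [0.9, 0.9, 0.6, 0.2, 0.2, 0.8]:
    instances = target_instances(instances, load)
print(instances)  # prints 3
```

The point of the sketch is the shape of the process, not the numbers: forecasting feeds the utilisation figure, provisioning applies the step, and monitoring closes the loop.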

So that’s where most organisations have positioned the cloud. However, what about using cloud to drive business innovation?

A recent McKinsey study on Cloud and Innovation made the following point:

The problem in many cases is that adopting cloud technologies is an IT initiative, which means that cloud solutions are all around improving IT and IT productivity. But that’s not where growth is going to come from. . . Incremental investments in productivity don’t drive growth. . . Investments need to go into innovation and disruptive business models . . . Unless companies are asking themselves how to use the cloud to disrupt their own business models or someone else’s, then adopting the cloud is just another IT project.

This observation encapsulates the current situation well – we often see cloud placed in the category of “another IT project”. We saw something similar with the whole “Big Data” hype (not that we like that label) in recent years, when some IT organisations were building capabilities with products like Hadoop without really knowing what the business objectives or value were. Sound familiar?

Building further on this, we see the problem with driving innovation through cloud based technology as two-fold.

Firstly, many organisations still struggle to foster innovation, whether within the company boundaries or via external ventures and partnerships. We have written about this in previous articles (here as related to innovation in banks). Although things are developing, with companies building “Digital Business Units” as completely separate entities (staffed with both business and IT stakeholders) or sponsoring/funding start-up programmes, it is still too slow. Sadly, innovation is too often just an objective on a performance appraisal, “achieved” through something fairly uninspiring.

The second point is that positioning cloud technology as an enabler of innovation requires a high degree of abstraction between current and future state. It needs people to work together who understand and can shape:

  • The current value of a business and history from a people, process, asset and customer perspective
  • How cloud technology can innovate and underpin new digital channels, such as mobile, social, payments, the internet-of-things and the like
  • How to change the mindset of their C-level peers to embrace the “art of the possible” – to take decisions that will bring a step change in the company’s client services

The challenge facing many organisations is that the shift to innovative cloud based services, which connect clients, services, data and devices on a potentially huge scale, is not supported by traditional technology architectures. It jars with the old, tried and tested way of designing technology infrastructure within defined boundaries.

However, if organisations do not adapt and innovate then the real threat comes from those companies who know nothing more than “innovating in the cloud”. They started there and use it not only as an efficiency and agility tool but to deliver new and disruptive cloud based business services. To compete, traditional organisations will need to evolve their cloud based innovation.

Why’s my computer so slow? Maybe someone is digging for virtual gold.

Posted on : 30-06-2014 | By : richard.gale | In : Cyber Security



We’ve discussed the rise and fall and rise of virtual currencies in a couple of previous articles (When are Bitcoins going to crash and what’s next?, The hidden costs of transacting with virtual currencies).

Creating new currency (whether it be Bitcoin, Dogecoin, Litecoin etc.) involves solving increasingly difficult cryptographic puzzles that consume computing power. The reward for this problem solving is a virtual coin, and the amount of work required to ‘earn’ a ‘coin’ is constantly rising. ‘Miners’, as the creators are called, are always looking for new and creative ways to build more coins, and the cost of processing power sometimes outweighs the worth of the output.
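That rising workload follows directly from the proof-of-work design. The toy Python sketch below (our own illustration, not any real coin’s parameters) shows the principle: each extra leading zero demanded of the hash multiplies the expected search effort, so difficulty can be ratcheted up as miners get faster.

```python
import hashlib

# Toy proof-of-work "mining" loop: find a nonce such that the SHA-256
# hash of (block_data + nonce) starts with a given number of zero hex
# digits. Each extra required zero multiplies the expected work by 16.

def mine(block_data: str, difficulty: int) -> int:
    """Return the first nonce that satisfies the difficulty target."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine("example block", 4)
print(nonce)  # the winning nonce; higher difficulty means far more attempts
```

Checking a candidate solution is cheap (one hash), while finding one is expensive – which is exactly why idle access to someone else’s large compute cluster is so tempting to a miner.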

A phenomenon that will only rise in frequency and impact is the misuse of other people’s computers to do this. A few examples are outlined below where organisations were unwittingly hosting unauthorised external mining activities (maybe some terminology from the Californian gold rush would be appropriate – are they virtual “claim jumpers” or “processing poachers”?).

Harvard University research servers have been used to mine dogecoins. A powerful cluster of machines known as ‘Odyssey’ had been hijacked – misused, really, as the user had legitimate access – and a mining operation was in place for an unknown period of time. The perpetrator has now had their access revoked, but it is not known how profitable the operation was.

In another example, US National Science Foundation supercomputers had been taken over for bitcoin mining – the researcher accused of creating the mining operation said he was ‘conducting research’, and it is thought around $8,000 worth of bitcoins were produced.

There are other occurrences of this phenomenon, including rogue Android applications which have been reported to have taken over people’s mobile phones to carry out mining activities (although they would need a large number of phones to make this at all valuable).

We think these examples reflect a wider problem. People can have legitimate access to huge amounts of computing power; this is especially true in academic, governmental and larger enterprise environments. How can the need to run large simulations or experiments be differentiated from more sinister misuses of that excess power?

This whole space is a difficult area to analyse. What is ‘normal’ and what is ‘abnormal’? We’ve been thinking about how to differentiate the two and are now working with a really smart new security company that can help with this (and many other) security issues.

The product, Darktrace, has been built by some ex-MI5 and GCHQ scientists, and it grew out of the need to protect the UK’s critical network infrastructure (energy & water supplies, communications & transport) against terrorist or foreign state cyber-attack. The team at Darktrace quickly realised that the current suite of protection could not prevent most insider attacks (whether intentional or accidental), so a new model was needed.

Darktrace sits at the centre of your network, listens and learns about the behaviour of users, connected devices and the network itself, and then alerts when something abnormal or unusual occurs. It has no preconceptions about the environment when it is installed; it learns (for a period of 2-4 weeks) and then shouts (usually to the security operations team or an external team such as the Mandiant response units) when something odd happens. The appliance works almost like the immune system of a body: it understands what healthy is and alerts its ‘antibodies’ to investigate and, if necessary, destroy any potential threat.

The product uses some clever probabilistic algorithms that constantly learn and build on their knowledge of your environment. Take the user ‘Fred’. Fred normally logs in to the network after 8:00am, accesses mail and three file servers, and logs out before 7:00pm. If Fred suddenly starts logging in at 2:00am, searches eight different file servers for documents containing the word ‘Patent’ and then starts exporting them outside the organisation to a site in Ukraine, that would be marked as ‘unusual’ and alerted. This could potentially be legitimate activity if Fred’s role has changed, but probably not. Traditional cyber-technologies may not catch these sorts of issues, as they look for specific patterns or types of behaviour rather than general deviations from the norm.
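As a rough illustration of the idea (our own simplified sketch, not Darktrace’s actual model), even a crude per-user baseline of normal hours and servers can flag ‘Fred’-style deviations; a real product would use probabilistic models rather than plain sets:

```python
from collections import defaultdict

# Illustrative behavioural baseline: record the hours-of-day and servers
# each user is normally seen with during a learning period, then report
# any event that falls outside that baseline.

class BehaviourBaseline:
    def __init__(self):
        self.hours = defaultdict(set)    # user -> hours-of-day observed
        self.servers = defaultdict(set)  # user -> servers accessed

    def learn(self, user, hour, server):
        self.hours[user].add(hour)
        self.servers[user].add(server)

    def alerts(self, user, hour, server):
        """Return the reasons this event deviates from the learned baseline."""
        reasons = []
        if hour not in self.hours[user]:
            reasons.append(f"unusual hour: {hour:02d}:00")
        if server not in self.servers[user]:
            reasons.append(f"unusual server: {server}")
        return reasons

baseline = BehaviourBaseline()
for hour in range(8, 19):                  # Fred's normal working day
    baseline.learn("fred", hour, "fileserver-1")

print(baseline.alerts("fred", 2, "fileserver-8"))
# flags both the 02:00 login and the unfamiliar server
```

Note how the sketch, like the approach it illustrates, starts with no preconceptions: everything it knows about ‘normal’ comes from the learning period.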

We have been working with Darktrace and can install the appliance on your network to perform the analysis for you. We can do this for a period of 4-8 weeks (to give the system enough time to learn the environment and gather sufficient data to work with) and can provide analysis of any unusual behaviour, with advice to your security team, throughout that period. In that time we would expect to see some unusual activity, which should demonstrate the value to your organisation.

If you would like to learn more about this please do contact us.