AI in Cyber Security – Friend or Foe?

Posted on : 28-06-2019 | By : richard.gale | In : Uncategorized


Artificial intelligence has been welcomed by the cyber security industry as an invaluable tool in the fight against cyber crime, but is it a double-edged sword: a powerful defender, yet potentially a potent weapon for cyber criminals?

The same artificial intelligence technologies that are used to power speech recognition and self-driving cars have the capability to be turned to other uses, such as creating viruses that morph faster than antivirus companies can keep up, phishing emails that are indistinguishable from real messages written by humans, and intelligently attacking an organisation’s entire defence infrastructure to find the smallest vulnerability and exploit any gap.

Just like any other technology, AI has strengths that can be abused and weaknesses that can be exploited when it falls into the wrong hands.

In the AI-fuelled security wars, the balance of power currently sits with the good guys, but this is undoubtedly set to change.

Until now, attackers have relied on mass distribution and sloppy security. The danger is that more adversaries, especially those that are well funded, will start to leverage these advanced tools and methods more frequently. It is concerning that nation-state attackers such as Russia and China have almost unlimited resources to develop these tools and make maximum use of them.

The dark web acts as a clearing house for cyber criminals, where all manner of crypto and attack software is available.

There are many ways in which hackers seek to benefit from your information, but the biggest prize is the password, which opens up a whole new set of systems and vulnerabilities to exploit. Password-cracking algorithms can test millions of candidates within minutes.
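To make that claim concrete, the toy sketch below (Python, with a made-up wordlist and a hash we generate ourselves) shows how quickly even a single CPU core can test guesses against a fast, unsalted hash, and why slow key-derivation functions such as scrypt are the standard defence. It is purely illustrative, not a description of any real attack tool.

```python
# Illustrative only: a tiny dictionary "attack" against a hash we created ourselves,
# contrasted with the cost of a deliberately slow key-derivation function.
import hashlib
import time

# Pretend this unsalted MD5 digest was leaked from a breached database.
leaked_md5 = hashlib.md5(b"summer2019").hexdigest()

candidates = [f"password{i}" for i in range(200_000)] + ["summer2019"]

start = time.perf_counter()
found = None
for guess in candidates:
    if hashlib.md5(guess.encode()).hexdigest() == leaked_md5:
        found = guess
        break
elapsed = time.perf_counter() - start

print(f"Recovered '{found}' after {len(candidates):,} guesses in {elapsed:.2f}s "
      f"(~{len(candidates) / elapsed:,.0f} guesses per second on one core)")

# Defence: a memory- and CPU-hard KDF makes every single guess expensive.
start = time.perf_counter()
hashlib.scrypt(b"summer2019", salt=b"per-user-salt", n=2**14, r=8, p=1)
print(f"One scrypt guess took {time.perf_counter() - start:.3f}s")
```

Even this naive loop manages hundreds of thousands of guesses per second, and GPU-based tooling reaches billions of guesses per second against weak hashes, which is why salted, slow hashing and long passphrases matter.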

Threat analytics firm Darktrace has seen evidence of malware programs showing signs of contextual awareness in trying to steal data and hold systems to ransom. They know what to look for and how to find it by closely observing the infrastructure, and they can then work out the best way to avoid detection. This means the program no longer needs to maintain contact with the hacker through command-and-control servers or other channels, which is usually one of the most effective ways of tracking the perpetrator.

Recently, Microsoft was able to spot an attempted hack of its Azure cloud when the AI in its security system identified an intrusion from a spoofed site. Had the defences relied on rule-based protocols alone, the attempt would probably have gone unnoticed. AI’s ability to learn and adapt to new threats should dramatically improve the enterprise’s ability to protect itself, even as data and infrastructure push past the traditional firewall into the cloud and the internet of things.
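The difference between a static rule and a learned baseline can be shown in a few lines. The sketch below uses simulated login data and scikit-learn's IsolationForest; the features, thresholds and data are illustrative assumptions, not a description of Microsoft's system.

```python
# A minimal contrast between a fixed rule and a learned anomaly model,
# using simulated data. Features and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Normal behaviour: office-hours logins, modest data transfer (MB per session).
normal = np.column_stack([
    rng.normal(11, 2, 2000),    # hour of login
    rng.normal(40, 10, 2000),   # MB downloaded
])

# A quiet intrusion: plausible login hour, but unusually large transfer.
suspect = np.array([[10.5, 95.0]])

def rule_based(event):
    """Static rule: flag only out-of-hours logins or very large transfers."""
    hour, mb = event
    return hour < 6 or hour > 20 or mb > 500

# Learned baseline: flags anything that deviates from observed behaviour.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

print("Rule-based control flags it?", rule_based(suspect[0]))            # False
print("Anomaly model flags it?", bool(model.predict(suspect)[0] == -1))  # True
```

The rule misses the event because it sits inside every hard-coded threshold, while the model flags it simply because it does not look like anything seen before, which is the behaviour described above.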

Human effort won’t scale – there are too many threats, too many changes, and too many network interactions. 

As cybercrime becomes more technologically advanced, there is no doubt that we will see the bad guys employing AI in increasingly sophisticated scenarios.

It’s time for cybersecurity managers to make sure they’re doing everything they can to reduce their attack surface as much as possible, put cutting-edge defenses in place, and replace time-consuming cybersecurity tasks with automation. 

We should all be concerned that, as we begin to see AI-powered chatbots and extensive influence campaigns weaving through social media, we face the prospect of the internet being used as a weapon to undermine trust and control public opinion. This is a very worrying situation indeed!

When a picture tells a thousand words – An image is not quite what it seems

Posted on : 28-06-2019 | By : richard.gale | In : Uncategorized


Steganography is not a new concept; the ancient Greeks and Romans used hidden messages to outsmart their opponents, and thousands of years later nothing has changed. People have always found ways of hiding secrets in a message in such a way that only the intended recipient can understand them. This is different from cryptography: rather than obscuring content so it cannot be read by anyone other than the intended recipient, steganography aims to conceal the fact that the content exists at all. If you look at two images, one carrying a hidden message and one without, there will be no visible difference. It is a great way of sending secure messages where the sender can be assured of confidentiality and need not worry about the content being spotted in the wrong hands. However, like so many technologies today, steganography can be used for good or for bad. When the bad guys get in on the act we have yet another threat to explore in the cyber landscape!
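The classic technique is least-significant-bit (LSB) embedding: flipping the lowest bit of some pixel values changes the image imperceptibly while carrying a hidden payload. The Python sketch below (using Pillow, with a synthetic cover image and a toy encoding scheme of our own) is purely illustrative of the principle.

```python
# A toy least-significant-bit (LSB) embed/extract, illustrative only.
from PIL import Image

def embed(img, message):
    data = message.encode() + b"\x00"            # null byte marks end of message
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    px, (w, h) = img.load(), img.size
    for idx, bit in enumerate(bits):
        x, y = idx % w, idx // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | bit, g, b)        # overwrite the red channel's LSB
    return img

def extract(img):
    px, (w, h) = img.load(), img.size
    out, byte, nbits = bytearray(), 0, 0
    for idx in range(w * h):
        byte |= (px[idx % w, idx // w][0] & 1) << nbits
        nbits += 1
        if nbits == 8:
            if byte == 0:                        # reached the terminator
                break
            out.append(byte)
            byte, nbits = 0, 0
    return out.decode()

cover = Image.new("RGB", (64, 64), (120, 180, 200))   # stand-in for a photo
stego = embed(cover, "meet at dawn")
print(extract(stego))                                  # -> meet at dawn
```

To the eye (and to a naive scanner comparing file types or sizes) the stego image is just an image, which is exactly the property attackers and insiders exploit.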

Hackers are increasingly using this method to trick internet users and smuggle malicious code past security scanners and firewalls. The code can be hidden in apparently harmless files and spring into action when users least expect it. The attackers download the file containing the hidden data and extract it for use in the next step of the attack.

Malvertising is one way in which cyber criminals exploit steganography. They buy advertising space on trustworthy websites and post ads which appear legitimate but hide harmful code inside. Bad ads can redirect users to malicious websites or install malware on their computers or mobile devices. One of the most concerning aspects of this technique is that users can be infected even if they don’t click on the image; often just loading the image is enough. Earlier this year, millions of Apple Mac users were hit when hackers used advertising campaigns with malicious code hidden in ad images to avoid detection. Some very famous names, such as the New York Times and Spotify, have inadvertently displayed these criminal ads, putting their users at risk.

Botnets are another area where hackers use steganography, hiding code in inbound traffic to communicate with infected machines and deliver further malware. Botnet controllers employ steganography techniques to control target endpoints, hiding commands in plain view – perhaps within images or music files distributed through file sharing or social networking websites. This allows the criminals to issue instructions to their botnets surreptitiously, without relying on an ISP to host their infrastructure, and minimises the chances of discovery.

It’s not only cyber criminals who have realised the potential of steganography; the malicious insider is an enthusiast too! Last year a Chinese engineer was able to exfiltrate sensitive information from General Electric by hiding it steganographically in images of sunsets. He was only discovered when GE security officials became suspicious of him for an unrelated reason and started to monitor his office computer.

Organisations should be concerned about the rise of steganography from both malicious outsiders and insiders. The battle between the hackers and the security teams is on, and it is one the hackers are currently winning. There are so many different steganography techniques that it is almost impossible to find one detection solution that can deal with them all. So, until there is such a solution, it’s the same old advice: always be aware of what you are loading and what you are clicking.

There is an old saying “the camera never lies” but sometimes maybe it does!

How secure are your RPA Processes?

Posted on : 17-06-2019 | By : richard.gale | In : Uncategorized


Robotic Process Automation (RPA) is an emerging technology, with many organisations looking at how they might benefit from automating some, or all, of their business processes. However, in some companies there is a common misconception that letting robots loose on the network poses a significant security risk, the belief being that robots are far less secure users than their human counterparts.

In reality, a compelling case can be made that robots are inherently more secure than people.

Provided your robots are treated in the same way as their human teammates (i.e. they inherit the security access and profile of the person or role they are programmed to simulate), there is no reason why a robot should be any less secure. In other words, the security policies and access controls suitable for humans should be applied to the software robots in just the same way.

There are many security advantages gained from introducing a robot into your organisation.  

  • Once a robot has been trained to perform a task, it never deviates from the policies, procedures and business rules in place.
  • Unlike human users, robots lack curiosity (so they won’t be tempted to open phishing emails) and cannot be tricked into revealing information or downloading unauthorised software.
  • Robots have no motives which might turn them into disgruntled employees who ignore existing policies and procedures.

So we can see that, on the contrary, in many ways the predictable behaviour of robots makes them your most trusted employees!

RPA certainly represents an unprecedented level of transformation and disruption to “business as usual” – one that requires careful preparation and planning. But while caution is prudent, many of the security concerns related to RPA implementation are overstated. 

The issue of security can be broken down into two points:

  • Data Security 
  • Access Security 

This means ensuring that the data being accessed and processed by the robot remains secure and confidential, and that access management for the robots is properly assigned and reviewed, just as it is for existing human user accounts.

Here are some of the key security points to consider: 

  1. Segregate access to data just as you would for human users: base it on what the robot actually needs to do, and do not grant domain admin or other elevated permissions unless absolutely necessary.
  2. Maintain passwords in a password vault and review service accounts’ access periodically.
  3. Monitor the activity of the robots via a “control room” (e.g. logon information and any errors).
  4. Integrate the RPA environment with Active Directory, which also increases business efficiency because access management is centralised.
  5. Encrypt credentials; a minimal sketch follows this list.
  6. Perform independent code audits and reviews, no different than with any other IT environment.
  7. Program robots using secure coding methods.
  8. Security-test against policy controls.
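On point 5, the fragment below is a minimal sketch of what encrypted credential handling can look like, using Python's cryptography library. The account name, key handling and storage are illustrative assumptions rather than any RPA vendor's actual mechanism; in production the key would come from a vault or HSM, not be generated in the script.

```python
# Illustrative sketch: the robot's configuration holds only ciphertext, and the
# credential is decrypted in memory just before logon. Not a vendor API.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production: fetched from a vault/HSM at runtime
cipher = Fernet(key)

# What gets written to the robot's configuration store: ciphertext only.
stored_token = cipher.encrypt(b"svc_robot_01:Sup3rSecret!")

def get_robot_credential():
    """Decrypt the service-account credential just before logon, in memory only."""
    user, _, password = cipher.decrypt(stored_token).decode().partition(":")
    return user, password

user, password = get_robot_credential()
print(f"Robot will log on as {user} (password length {len(password)})")
```

Combined with the vault, periodic access reviews and control-room monitoring above, this keeps plaintext passwords out of scripts, logs and configuration files.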

 

All these points must be considered from the outset. This is security by design, which must be embedded in the RPA process from the start. It must be re-emphasised that the security of RPA is not just about protecting access to the data but about securing the data itself.

Overall, RPA lowers the security-related effort associated with training employees in security practices (e.g. password management, application of privacy settings, etc.) because it ensures a zero-touch environment. By eliminating manual work, automation minimises security risk at a macro level, provided the key controls are implemented from the beginning.

In addition, an automated environment removes bias, variability and human error. The lack of randomness and variability increases uniform compliance with the company requirements built into the automation’s workflows and tasks. Beyond security, the zero-touch environment of RPA also helps mitigate other human-related risks in business operations, ensuring less risky, more consistent work and more trustworthy data.

Therefore, RPA should be implemented wisely, which basically amounts to choosing a stable RPA product or provider, backed by proper, constant monitoring of security measures. Role-based access to confidential data, access monitoring and data encryption are the most effective means of dealing with the security risks.

The ultimate way to move beyond trading latency?

Posted on : 29-03-2019 | By : richard.gale | In : Finance, Uncategorized



A number of power surges and outages have been experienced in the East Grinstead area of the UK in recent months. The utility companies involved have traced the cause to one of three high-capacity feeds into a global investment bank’s data centre facility.

The profits created by the same bank’s London-based proprietary trading group have increased tenfold over the same period.

This bank employs 1% of the world’s best post-doctoral theoretical physics graduates to help build its black-box trading systems.

Could there be a connection? Wild and unconfirmed rumours have been circulating within the firm that a major breakthrough has been made in removing the problem of latency: the physical limit on the time it takes a signal to travel down a wire, ultimately governed by the speed of light.

For years traders have been trying to reduce execution latency to gain a competitive advantage in a fiercely contested, fast-moving environment. The focus has moved from seconds to milli- and now microsecond savings.

Many financial services and technology organisations have attempted to solve this problem by reducing data hops and routing, going as far as placing their hardware physically close to the source of data (such as in an exchange’s data centre) to minimise latency, but no one has solved the issue – yet.

It sounds like this bank may have gone one step further. It is known that at the boundary of the speed of light, physics as we know it changes (quantum mechanics is an example, where the space-time continuum becomes ‘fuzzy’). Conventional physics states that travelling faster than the speed of light, and thereby seeing into the future, would require infinite energy and so is not possible.

Investigation with a number of insiders at the firm has resulted in an amazing and almost unbelievable insight. They have managed to build a device which ‘hovers’ over the present and the immediate future. Little detail is known about it, but it is understood to be based on the previously unproven ‘Alcubierre drive’ principle. This allows the trading system to predict (in reality, observe) the next direction of the market, providing an invaluable trading advantage.

The product is still in test mode, as trading ahead of data the system has already traded against produces outages: the system tries to correct the error in the future data, which again changes the data, ad infinitum. The prediction model only allows a small glimpse into the immediate future, which also limits the window of opportunity for trading.

The power requirements for the equipment are so large that it has had to be moved to the data centre environment, where consumption can be more easily hidden (or not, as the power outages showed).

If the bank really does crack this problem then it will have the ultimate trading advantage: the ability to see into the future and trade with ‘inside’ knowledge legally. Unless another bank is doing something similar in the ‘trading arms race’, the bank will quickly become dominant and the other banks may go out of business.

The US Congress has apparently discovered some details of this mechanism and is requesting that the bank disclose details of the project. The bank is understandably reluctant to do this, as it has spent over $80m developing the technology and wants to make some return on its investment.

If this system goes into true production mode surely it cannot be long before Financial Regulators outlaw the tool as it will both distort and ultimately destroy the markets.

Of course the project has a codename…. Project Tachyons

No one from the company was available to comment on the accuracy of the claims.

Do you believe that your legacy systems are preventing digital transformation?

Posted on : 14-03-2019 | By : richard.gale | In : Data, Finance, FinTech, Innovation, Uncategorized



According to the results of our recent Broadgate Futures Survey more than half of our clients agreed that digital transformation within their organisation was being hampered by legacy systems. Indeed, no one “strongly disagreed” confirming the extent of the problem.

Many comments suggested that this was not simply a case of budget constraints; rather, the sheer size, scale and complexity of the transition had deterred organisations, which feared they were not adequately equipped to deliver successful change.

Legacy systems have a heritage going back many years to the days of the mega mainframes of the 70s and 80s. This was a time when banks were the masters of technological innovation: we saw the birth of ATMs, BACS and international card payments. It was an exciting time of intense modernisation. Many of the core systems that run the finance sector today are the same ones that were built back then. The only problem is that, although these systems were built to last, they were not built for change.

The new millennium brought another significant development with the introduction of the internet, an opportunity the banks could have seized to develop new, simpler, more versatile systems. Instead, they adopted a different strategy and modified their existing systems; in their eyes there was no need to reinvent the wheel. They made additions and modifications as and when required. As a result, most financial organisations have evolved over the decades into complex networks, a myriad of applications and an overloaded IT infrastructure.

The Bank of England itself has recently been severely reprimanded by a Commons Select Committee review, which found the Bank to be drowning in out-of-date processes in dire need of modernisation. Its legacy systems are overly complicated and inefficient; following the merger with the PRA in 2014, its IT estate comprises duplicated systems and extensive data overload.

Budget, as stated earlier, is not the only factor preventing digital transformation, although there is no doubt that these projects are expensive and extremely time-consuming. The complexity of the task and the fear of failure are other reasons why companies hold on to their legacy systems. Better the devil you know! Think back to the TSB outage (there were a few…): systems were down for hours and customers were unable to access their accounts following a system upgrade. The incident ultimately led to huge fines from the Financial Conduct Authority and the resignation of the Chief Executive.

For most organisations abandoning their legacy systems is simply not an option so they need to find ways to update in order to facilitate the connection to digital platforms and plug into new technologies.

Many of our clients believe that it is not the legacy systems themselves which are the barrier, but the inability to access the vast amount of data stored within them. It is the data that is the key to digital transformation, so accessing it is a crucial piece of the puzzle.

“It’s more about legacy architecture and lack of active management of data than specifically systems”

By finding a way to unlock the data inside these out-of-date systems, banks can decentralise their data, making it available to the new digital world.

With advances such as the cloud and APIs, it is possible to sit an agility layer between the existing legacy systems and newly adopted applications. HSBC has successfully adopted this approach, using an API strategy to expand its digital and mobile services without needing to replace its legacy systems.
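In its simplest form, an agility layer is just a thin, well-documented API in front of a legacy lookup, so digital channels never touch the legacy estate directly. The sketch below uses Flask; the endpoint, data and legacy call are illustrative stand-ins, not any bank's actual interface.

```python
# A minimal "agility layer" sketch: a small REST facade over a legacy lookup.
from flask import Flask, jsonify

app = Flask(__name__)

def legacy_account_lookup(account_id: str) -> dict:
    """Stand-in for a mainframe or batch-file query (e.g. via MQ or a screen-scrape)."""
    return {"account_id": account_id, "balance": "1523.40", "currency": "GBP"}

@app.route("/api/v1/accounts/<account_id>")
def get_account(account_id):
    # Mobile and web channels speak JSON over HTTP; only this layer knows the legacy details.
    return jsonify(legacy_account_lookup(account_id))

if __name__ == "__main__":
    app.run(port=8080)   # e.g. curl http://localhost:8080/api/v1/accounts/12345
```

Because the channels depend only on the API contract, the legacy back end can later be refactored or replaced without the digital services noticing.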

Legacy systems are no longer the barrier to digital innovation that they once were. With some creative thinking and the adoption of new technologies legacy can continue to be part of your IT infrastructure in 2019!

https://www.finextra.com/newsarticle/33529/bank-of-england-slammed-over-outdated-it-and-culture

Has the agile product delivery model been too widely adopted?

Posted on : 30-01-2019 | By : richard.gale | In : Uncategorized



As a consultancy, we have the benefit of working with many clients across almost all industry verticals. Specifically, over the last 7-8 years we have seen a huge uptake in the shift from traditional project delivery models towards more agile techniques.

The combination of people, process and technology with this delivery model has been hugely beneficial in increasing both the speed of execution and the alignment of products with business requirements. That said, in more recent years we have observed an almost religious adoption of agile, often, in our view, at the expense of pragmatism and execution focus. A purist approach to agile, where traditional development is completely replaced in one fell swoop, results in failure for many organisations, especially those that rely on tight controls, rigid structures and cost-benefit analysis.

Despite its advantages, many organisations struggle to successfully transition to agile, leading to an unnecessarily high agile project failure rate. While there are several common causes for this failure rate, one of the top causes—if not the leading cause—is the lack of an agile-ready culture.

This has been evident in our own client discussions, which have centred on “organisational culture at odds with agile values” and “lack of business customer or product owner availability” as challenges for adopting and scaling agile. Agile as a methodology requires a corresponding agile culture to ensure success. It’s no good committing to implementing in an agile way when the organisation is anything but agile!

Doing Agile v Being Agile

Adopting an agile methodology in an organisation which has not fully embraced agile can still reap results (estimates vary, but a benchmark is around a 20% increase in benefits). If, on the other hand, the firm has truly embraced an agile approach from CEO to receptionist, then the sky is the limit and improvements of 200% plus have been experienced!

Investing in the change management required to build an agile culture is the key to making a successful transition to agile and experiencing all of the competitive advantages it affords. Through this investment, your business leadership, IT leadership and IT teams can align, collaborate and deliver quality solutions for customers, as well as drive organisational transformation—both today and into the future.

There are certain projects where shoehorning them into agile processes just serves to slow down delivery with no benefit. Some of this may come from the increase in DevOps delivery, but we see it stifling many infrastructure or underpinning projects, which still lend themselves to a more waterfall delivery approach.

The main difference between agile methodologies and waterfall methodologies is the phased approach that waterfall takes (define requirements, freeze requirements, begin coding, move to testing, etc.) as opposed to the iterative approach of agile. However, there are different ways to implement a waterfall methodology, including iterative waterfall, which still practices the phased approach but delivers in smaller release cycles.

Today, more and more teams would say that they are using an agile methodology, when in fact many of them are likely to be using a hybrid model that includes elements of several agile methodologies as well as waterfall.

It is crucial to bring together people, processes and technologies and identify where it makes business sense to implement agile; agile is not a silver bullet. An assessment of the areas where agile would work best is required, which will then guide the transition. Many organisations kick off an agile project without carrying out this assessment and find following this path is just too difficult. A well-defined transitional approach is a prerequisite for success.

We all understand that today’s business units need to be flexible and agile to survive but following an agile delivery model is not always the only solution.

The Challenges of Implementing Robotic Process Automation (RPA)

Posted on : 25-01-2019 | By : kerry.housley | In : Innovation, Uncategorized



We recently surveyed our clients on their views around the future of technology in the workplace and the changes that they think are likely to shape their future working environment. 

One of the questions identified by many clients as a major challenge was around the adoption of RPA. We asked the question:

“Do you agree that RPA could improve the efficiency of your business?”

Around 65% of the respondents to our survey agreed that RPA could improve the efficiency of their business, but many commented that they were put off by the challenges that needed to be overcome in order for RPA deployment to be a success. 

“The challenge is being able to identify how and where RPA is best deployed, avoiding any detrimental disruption.”

In this article we will discuss in more detail the challenges, and what steps can be taken to ensure a more successful outcome. 

The benefits of RPA are:

  • Reduced operating costs
  • Increased productivity
  • Reduced employee workload, freeing time for higher-value tasks
  • More done in less time!

What Processes are Right for Automation? 

One of the challenges facing many organisations is deciding which processes are suitable for automation and which to automate first. This line from Bill Gates offers some good advice:

“automation applied to an inefficient operation will magnify the inefficiency”

It follows, therefore, that the first step in any automation journey is reviewing all of your business processes to ensure that they are running as efficiently as possible. You do not want to waste time, money and effort implementing a robot to carry out an inefficient process, which will reap no rewards at all.

Another challenge is choosing which process to automate first. In our experience, many clients have earmarked one of their most painful processes as process number one in order to heal the pain. This fails more often than not, because the most painful process is often one of the most difficult to automate. Ideally, you want to pick a straightforward, highly repetitive process which will be easier to automate, with simple results that clearly show the benefits of automation. Buy-in at this stage from all stakeholders is critical if RPA is to be successfully deployed further in the organisation. Management needs to see the efficiency savings, and employees need to see how the robot can help them do their job more quickly and free up their time for more interesting work. Employee resistance and onboarding should not be underestimated; keeping workers in the loop and reducing the perceived threat is crucial to your RPA success.

Collaboration is Key 

Successful RPA deployment is all about understanding and collaboration; approached carelessly, the project can ultimately fail. In one sense, RPA is just like any other piece of software you will implement, but in another way it is not: implementation involves close scrutiny of an employee’s job, and the employee may feel threatened that the robot will take over and leave them redundant.

IT and the business must work closely together to ensure that process accuracy, cost reduction and customer satisfaction benchmarks are met during implementation. RPA implementation success is both IT- and business-driven, with RPA governance sitting directly in the space between business and IT. Failure to maintain consistent communication between these two sides will mean that project governance is weak and that obstacles, such as potential integration issues between RPA and existing programs, cannot be dealt with effectively.

Don’t Underestimate Change 

Change management should not be underestimated: the implementation of RPA is a major change in an organisation which needs to be planned for and carefully managed. Consistently working through the change management aspects is critical to making RPA successful. It is important to set realistic expectations and to look at RPA from an enterprise perspective, focusing on the expected results and what will be delivered.

 RPA = Better Business Outcomes 

RPA is a valuable automation asset in a company’s digital road map and can deliver great results if implemented well. However, RPA implementations have often not delivered the returns promised, impacted by the challenges we have discussed. Implementations that give significant consideration to the design phase and recognise the importance of broader change management will benefit from better business outcomes across the end-to-end process. Enterprises looking to embark on the RPA journey have the chance to take note, avoid the pitfalls and experience the success that RPA can bring.

It’s Time to Take Control of Your Supply Chain Security

Posted on : 31-08-2018 | By : richard.gale | In : Uncategorized


According to the annual Symantec threat report, supply chain attacks rose 200% between 2016 and 2017, confirming the trend for attackers to start small, move up the chain and hit the big time!

Attackers are increasingly hijacking software updates as an entry point to target networks further up the supply chain. Nyetya, a global attack, started this way, affecting companies such as FedEx and Maersk and costing them millions.

Although many corporations have wised up to the need to protect their network and their data, have all their suppliers? And their suppliers’ suppliers? All it takes is a single vulnerability in one of your trusted vendors for attackers to gain access to your network, and your and your customers’ sensitive data could be compromised.

Even if your immediate third parties don’t pose a direct risk, their third parties (your fourth parties) might. It is crucial to gain visibility into the flow of sensitive data among all third and fourth parties, and closely monitor every organization in your supply chain. If you have 100 vendors in your supply chain and 60 of them are using a certain provider for a critical service, what will happen if that critical provider experiences downtime or is breached?

The changing nature of the digital supply chain landscape calls for coordinated, efficient and agile defences. Unless the approach to supply chain risk management moves with the times, we will continue to see an increase in third-party attacks.

Organisations need to fundamentally change the way they approach managing third-party risk, and that means more collaboration and automation of the process, with the adoption of new technology and procedures. It is no longer sufficient simply to add some clauses to your vendor contract stating that everything that applies to your third-party vendor also applies to the vendor’s sub-contractors.

Traditionally, vendor management means carrying out an assessment during the onboarding process and then perhaps an annual review to see if anything has changed. This assessment only reflects a point-in-time view against a moving threat environment: what looks secure today may not be next week!

The solution is to supplement this assessment with an external view of your vendors, using publicly available threat analytics to see what is happening on their networks today. With statistics coming through in real time, you can monitor your suppliers on a continuous basis. It is not possible to prevent every third-party attack in your supply chain, but with up-to-date monitoring, issues can be detected at the earliest possible opportunity, limiting the potential damage to your company’s reputation and your clients’ data.

Many vendor supply management tools use security ratings as a way of verifying the security of your suppliers, drawing on data-driven insights into each vendor’s security performance by continuously analysing and monitoring companies’ cybersecurity from the outside. Security ratings are generated daily, giving organisations continuous visibility into the security posture of key business partners. Using security ratings, an organisation can assess all suppliers in the supply chain at the touch of a button, a marked difference from the traditional point-in-time risk assessment.
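Conceptually, a security rating is just a weighted roll-up of externally observable signals into a single daily score per vendor. The sketch below shows the idea in Python; the signals, weights and sample vendors are illustrative assumptions, not any rating provider's actual model.

```python
# Illustrative only: collapse externally observable signals into a 0-100 vendor score.
WEIGHTS = {
    "open_critical_ports": 25,       # e.g. RDP/SMB exposed to the internet
    "expired_certificates": 15,
    "missing_spf_dmarc": 10,
    "breached_credentials_seen": 30,
    "out_of_date_software": 20,
}

def security_rating(signals: dict) -> int:
    """Start at 100 and deduct the weight of every adverse signal observed."""
    score = 100 - sum(w for name, w in WEIGHTS.items() if signals.get(name))
    return max(score, 0)

vendors = {
    "Payroll Provider Ltd": {"expired_certificates": True},
    "Logistics Partner plc": {"open_critical_ports": True,
                              "breached_credentials_seen": True},
}

for name, signals in vendors.items():
    print(f"{name}: {security_rating(signals)}/100")
```

Recomputed daily from fresh scans and breach feeds, a score like this lets a risk team rank hundreds of suppliers and focus follow-up on the ones whose posture is actually deteriorating.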

Here at Broadgate we have helped several clients take back control of their supply chain by implementing the right technology solution; together with the right policies and procedures, the security and efficiency of the vendor management process can be vastly improved.

If you are responsible for cyber security risk management in these times, you are certainly being faced with some overwhelming challenges.  Implementing a vendor risk management program that is well-managed, well-controlled, and well-maintained will mean that you have a more secure supply chain as a result. Companies with more secure third parties will in turn have a lower risk of accruing any financial or reputational damage that would result from a third-party breach. Don’t fret about your supply chain, invest in it and you will reap the rewards!

How Can Artificial Intelligence Add Value to Cyber Security?

Posted on : 28-07-2018 | By : richard.gale | In : Uncategorized


Cyber security is a major concern for all organisations. A recent EY survey found that cyber security is the top risk for financial services. The cyber threat is ever growing and constantly changing, and it is becoming increasingly difficult to put the right controls and procedures in place to detect potential attacks and guard against them. It is now imperative that we make use of advanced tools and technologies to get ahead of the game.

A major weapon in the race against the cyber attacker is artificial intelligence (AI): AI-powered tools can be used to prevent, detect and remediate potential threats.

Threat detection is a labour-intensive, arduous task, often like looking for a needle in a haystack, and AI can help considerably with the workload.

AI machines are intended to work and react like human beings. They can be trained to process substantial amounts of data and identify trends and patterns. A major cyber security issue has been the skills shortage, with organisations unable to find staff with the necessary expertise; AI and machine learning tools can help close these gaps.

Despite what you’ve seen in the movies, robotic machines are not about to take over the world! Human intelligence is a unique characteristic which a robot does not have (not yet, anyway). Cybersecurity isn’t about man or machine but man and machine: a successful cyber strategy means machine intelligence and human analysts working together.

The machines perform the heavy lifting (data aggregation, pattern recognition, etc.) and provide a manageable number of actionable insights. The human analysts make decisions on how to act. Computers, after all, are extremely good at specific things, such as automating simple tasks and solving complex equations, but they have no passion, creativity, or intuition. Skilled humans, meanwhile, can display all these traits, but can be outperformed by even the most basic of computers when it comes to raw calculating power.

Data has posed perhaps the single greatest challenge in cybersecurity over the past decade. For a human, or even a large team of humans, the amount of data produced daily on a global scale is unthinkable. Add to this the massive number of alerts most organizations see from their SIEM, firewall logs, and user activity, and it’s clear human security analysts are simply unable to operate in isolation. Thankfully, this is where machines excel, automating simple tasks such as processing and classification to ensure analysts are left with a manageable quantity of actionable insights.
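As a concrete illustration of that “heavy lifting”, the sketch below collapses a flood of raw alerts into a short, ranked list for an analyst. The alert fields, severities and volumes are invented for the example.

```python
# Illustrative only: deduplicate and rank simulated alerts so analysts see a
# manageable set of actionable items instead of thousands of raw events.
from collections import Counter

raw_alerts = [
    {"source": "firewall", "rule": "port-scan", "subject": "10.0.0.7"},
    {"source": "firewall", "rule": "port-scan", "subject": "10.0.0.7"},
    {"source": "siem", "rule": "impossible-travel", "subject": "j.smith"},
    {"source": "firewall", "rule": "port-scan", "subject": "10.0.0.9"},
] * 500 + [{"source": "edr", "rule": "credential-dump", "subject": "FIN-SRV-02"}]

SEVERITY = {"credential-dump": 10, "impossible-travel": 7, "port-scan": 3}

# Group identical alerts, then rank by severity and volume.
counts = Counter((a["rule"], a["subject"]) for a in raw_alerts)
triaged = sorted(counts.items(),
                 key=lambda kv: (SEVERITY[kv[0][0]], kv[1]),
                 reverse=True)

print(f"{len(raw_alerts):,} raw alerts reduced to {len(counts)} items:")
for (rule, subject), n in triaged:
    print(f"  severity {SEVERITY[rule]:>2}  {rule:<18} {subject:<12} x{n}")
```

The machine does the counting and sorting at any scale; the analyst still makes the judgement call on the single credential-dump alert that would otherwise be buried under two thousand port scans.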

It’s essential that we respond quickly to security incidents, but we also need to understand enough about an incident to respond intelligently. Machines play a huge role here because they can process a massive amount of incoming data in a tiny fraction of the time it would take even a large group of skilled humans. They can’t make the decision of how to act, but they can provide an analyst with everything they need to do so.

Selecting a new “digitally focused” sourcing partner

Posted on : 18-07-2018 | By : john.vincent | In : Cloud, FinTech, Innovation, Uncategorized



It was interesting to see the recent figures this month from the ISG Index, showing that the traditional outsourcing market in EMEA has rebounded. Figures for the second quarter for commercial outsourcing contracts show a combined annual contract value (ACV) of €3.7Bn. This is up a significant 23% on 2017 and, for the traditional sourcing market, reverses a downward trend which had persisted for the previous four quarters.

This is an interesting change of direction, particularly against a backdrop of economic uncertainty around Brexit and the much over-indulged GDPR preparation. It seems that, rather than hunkering down with a tin hat and stockpiling rations, companies in EMEA have invested in their technology service provision to support agile digital growth for the future. The global number also accelerated, up 31% to a record ACV of €9.9Bn.

Underpinning some of these figures has been a huge acceleration in the As-a-Service market. In the last 2 years the ACV attributed to SaaS and IaaS has almost doubled. This has been fairly consistent across all sectors.

So when selecting a sourcing partner, what should companies consider outside of the usual criteria including size, capability, cultural fit, industry experience, flexibility, cost and so on?

One aspect that is interesting from these figures is the influence that technologies such as cloud based services, automation (including AI) and robotic process automation (RPA) are having both now and in the years to come. Many organisations have used sourcing models to fix costs and benefit from labour arbitrage as a pass-through from suppliers. Indeed, this shift of labour ownership has fuelled incredible growth within some of the service providers. For example, Tata Consultancy Services (TCS) has grown from 45.7k employees in 2005 to 394k in March 2018.

However, having reached this heady number of staff, the technologies mentioned previously are threatening the model of some of these companies. As-a-Service providers such as Microsoft Azure and Amazon AWS now have platforms which are carving their way through technology service provision that previously would have been managed by human beings.

In the infrastructure space, commoditisation is well under way. Indeed, we predict that within 3 years the build, configure and manage skills in areas such as Windows and Linux platforms will rarely be in demand. DevOps models, and variants of them, are moving at a rapid pace, with tools to support spinning up platforms on demand for application services now mainstream. Service providers often focus on their technology overlay “value add” in this space, with portals or orchestration products which can manage cloud services. However, the value of these is often questionable compared with direct access or commercial 3rd-party products.
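To illustrate what “infrastructure as code” means in practice, the fragment below launches a platform from version-controlled code rather than a manual build, using AWS’s boto3 SDK. The AMI ID, region and tags are placeholders, and the example assumes AWS credentials are already configured.

```python
# Illustrative "infrastructure as code" sketch: a server defined and launched from code.
import boto3

ec2 = boto3.resource("ec2", region_name="eu-west-2")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "app-on-demand"},
                 {"Key": "Owner", "Value": "devops-pipeline"}],
    }],
)
print("Launched:", instances[0].id)
```

When the whole estate is expressed this way (and torn down just as easily), the traditional build-and-configure engineer role largely disappears into the pipeline, which is exactly the commoditisation described above.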

Secondly, as we’ve discussed here before, technology advances in RPA, machine learning and AI are transforming service provision, not just in terms of business applications but also in terms of the underpinning services. This is translating into areas such as self-service Bots which can be queried by end users to provide solutions and guidance, or self-learning AI processes which can predict potential system failures before they occur and take preventative action.

These advances present a challenge to the workforce focused outsource providers.

Given the factors above, and the market shift, it is important that companies take these into account when selecting a technology service provider. Questions to consider are:

  • What are their strategic relationships with cloud providers, not just at the “corporate” level but in terms of in-depth knowledge of the whole technology ecosystem at a low level?
  • Can they demonstrate skills in the orchestration and automation of platforms at an “infrastructure as code” level?
  • Do they have the capability to deliver process automation through techniques such as Bots, can they scale to enterprise level, and where are their RPA alliances?
  • Does the potential partner have domain expertise, and is it open to partnership around new products and shared reward/JV models?

The traditional sourcing engagement models are evolving which has developed new opportunities on both sides. Expect new entrants, without the technical debt, organisational overheads and with a more technology solution focus to disrupt the market.