AI in Cyber Security – Friend or Foe?

Artificial intelligence has been welcomed by the cyber security industry as an invaluable tool in the fight against cyber crime, but is it a double-edged sword – a powerful defender that could also become a potent weapon in the hands of cyber criminals?

The same artificial intelligence technologies that power speech recognition and self-driving cars can be turned to other uses: creating viruses that morph faster than antivirus companies can keep up, phishing emails that are indistinguishable from messages written by humans, and intelligent attacks that probe an organisation’s entire defence infrastructure to find and exploit the smallest vulnerability.

Like any other technology, AI has strengths and weaknesses, and those strengths can be abused in the wrong hands.

In the AI-fuelled security wars, the balance of power currently lies with the good guys, but that is undoubtedly set to change.

Until now, attackers have relied on mass distribution and sloppy security. The danger is that we will see more adversaries, especially those that are well funded, leveraging these advanced tools and methods with increasing frequency. It is concerning that nation-state attackers such as Russia and China have almost unlimited resources to develop these tools and make maximum use of them.

The dark web acts as a clearing house for cyber criminals, where all manner of crypto software is available.

There are many ways in which hackers seek to profit from your information, but the biggest prize is a password, which opens the door to a whole new set of vulnerabilities to exploit. Cracking algorithms can test millions of candidate passwords within minutes.
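To make that concrete, here is a minimal sketch of an offline dictionary attack against unsalted SHA-256 hashes. The hashes and wordlist are invented for illustration; real attackers run GPU-accelerated tools over lists of billions of leaked passwords.

```python
# Minimal sketch: offline dictionary attack against unsalted SHA-256 hashes.
# The "stolen" hashes and wordlist below are invented for illustration.
import hashlib

stolen_hashes = {
    hashlib.sha256(b"letmein").hexdigest(): None,
    hashlib.sha256(b"summer2019").hexdigest(): None,
}

# A tiny wordlist; real attacks iterate over billions of leaked passwords.
wordlist = ["password", "letmein", "qwerty", "summer2019", "dragon"]

def crack(hashes: dict, candidates: list) -> dict:
    """Hash each candidate and check it against the stolen hash set."""
    for word in candidates:
        digest = hashlib.sha256(word.encode()).hexdigest()
        if digest in hashes:
            hashes[digest] = word   # candidate matches a stolen hash
    return hashes

print(crack(stolen_hashes, wordlist))
```

This is exactly why passwords should be stored with slow, salted schemes such as bcrypt or Argon2, which turn millions of guesses per second into a handful.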

Threat analytics firm Darktrace has seen evidence of malware showing signs of contextual awareness while trying to steal data and hold systems to ransom. The malware learns what to look for, and how to find it, by closely observing the infrastructure, and can then work out the best way to avoid detection. This means the program no longer needs to maintain contact with the attacker through command and control servers or other channels, which is usually one of the most effective means of tracking the perpetrator.

Recently, Microsoft was able to spot an attempted hack of its Azure cloud when the AI in its security system identified an intrusion from a fake site; had the company been relying on rule-based protocols alone, the attack would have gone unnoticed. AI’s ability to learn and adapt to new threats should dramatically improve the enterprise’s ability to protect itself, even as data and infrastructure push past the traditional firewall into the cloud and the internet of things.

Human effort won’t scale – there are too many threats, too many changes, and too many network interactions. 

As cybercrime becomes more and more technologically advanced, there is no doubt that we will see the bad guys employing AI in ever more sophisticated scenarios.

It’s time for cybersecurity managers to make sure they’re doing everything they can to reduce their attack surface as much as possible, put cutting-edge defenses in place, and replace time-consuming cybersecurity tasks with automation. 

We should all be concerned that, as we begin to see AI-powered chatbots and extensive influence campaigns weaving through social media, we face the prospect of the internet being used as a weapon to undermine trust and control public opinion. This is a very worrying situation indeed!

Posted on : 28-06-2019 | By : richard.gale | In : Uncategorized


When a picture tells a thousand words – An image is not quite what it seems

Steganography is not a new concept; the ancient Greeks and Romans used hidden messages to outsmart their opponents, and thousands of years later nothing has changed. People have always found ways of hiding secrets in a message in such a way that only the sender and the intended recipient can understand them. This is different from cryptography: rather than obscuring content so it cannot be read by anyone other than the intended recipient, steganography aims to conceal the fact that the content exists in the first place. If you compare two images, one carrying hidden data and one without, there will be no visible difference. It is a great way of sending secure messages where the sender can be assured of confidentiality and not be concerned about unauthorised viewing in the wrong hands. However, like so many technologies today, steganography can be used for good or for bad. When the bad guys get in on the act we have yet another threat to explore in the cyber landscape!
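To make the idea concrete, below is a minimal sketch of the least-significant-bit (LSB) technique that much image steganography builds on, using the Pillow imaging library. The file names are placeholders and the code is illustrative only; real tooling adds encryption, error correction and far subtler embedding schemes.

```python
# Minimal LSB steganography sketch: hide a short text message in the
# least-significant bits of an image's pixels. Illustrative only.
from PIL import Image  # requires the Pillow package

def embed(cover_path: str, message: str, out_path: str) -> None:
    img = Image.open(cover_path).convert("RGB")
    payload = message.encode() + b"\x00"                     # NUL terminator
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    flat = [channel for pixel in img.getdata() for channel in pixel]
    if len(bits) > len(flat):
        raise ValueError("message too large for this cover image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit                       # overwrite only the LSB
    stego = Image.new("RGB", img.size)
    stego.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    stego.save(out_path, "PNG")                              # lossless format keeps the bits intact

def extract(stego_path: str) -> str:
    flat = [c for p in Image.open(stego_path).convert("RGB").getdata() for c in p]
    out = bytearray()
    for i in range(0, len(flat) - 7, 8):
        byte = sum((flat[i + j] & 1) << j for j in range(8))
        if byte == 0:                                        # terminator reached
            break
        out.append(byte)
    return out.decode(errors="replace")

# embed("sunset.png", "meet at midnight", "sunset_stego.png")   # hypothetical file names
# print(extract("sunset_stego.png"))
```

Changing only the lowest bit of each colour channel alters the image imperceptibly, which is why the carrier and the original look identical to the eye.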

Hackers are increasingly using this method to trick internet users and smuggle malicious code past security scanners and firewalls. The code can be hidden in seemingly harmless files and spring into action when users least expect it. The attackers download the file containing the hidden data and extract it for use in the next stage of the attack.

Malvertising is one way in which cyber criminals exploit steganography. They buy advertising space on trustworthy websites and post ads which appear legitimate, hiding their harmful code inside. Bad ads can redirect users to malicious websites or install malware on their computers or mobile devices. One of the most concerning aspects of this technique is that users can be infected even if they don’t click on the image; often just loading it is enough. Earlier this year, millions of Apple Mac users were hit when hackers used advertising campaigns to hide malicious code in ad images and avoid detection. Some very famous names, such as the New York Times and Spotify, have inadvertently displayed these criminal ads, putting their users at risk.

Botnets are another way in which hackers use steganography, hiding commands in inbound traffic and using them to download further malicious code. Botnet controllers employ steganography techniques to control target endpoints, hiding commands in plain view – perhaps within images or music files distributed through file sharing or social networking websites. This allows the criminals to surreptitiously issue instructions to their botnets without relying on an ISP to host their infrastructure, minimising the chances of discovery.

It’s not only the cyber criminals who have realised the potential of steganography; the malicious insider is an enthusiast too! Last year a Chinese engineer was able to exfiltrate sensitive information from General Electric by hiding it steganographically inside images of sunsets. He was only discovered when GE security officials became suspicious of him for an unrelated reason and started to monitor his office computer.

Organisations should be concerned about the rise of steganography from both malicious outsiders and insiders. The battle between the hackers and the security teams is on, and it is one the hackers are currently winning. There are so many different steganography techniques that it is almost impossible to find one detection solution that can deal with them all. So, until there is such a solution, it’s the same old advice: always be aware of what you are loading and what you are clicking.

There is an old saying “the camera never lies” but sometimes maybe it does!

Posted on : 28-06-2019 | By : richard.gale | In : Uncategorized


How secure are your RPA Processes?

Robotic Process Automation is an emerging technology, with many organisations looking at how they might benefit from automating some, or all, of their business processes. However, in some companies there is a common misconception that letting robots loose on the network could pose a significant security risk, the belief being that robots are far less secure users than their human counterparts.

In reality, a compelling case can be made that robots are inherently more secure than people.

Provided your robots are treated in the same way as their human teammates, i.e. they inherit the security access and profile of the person or role they are programmed to simulate, there is no reason why a robot should be any less secure. In other words, the security policies and access controls suitable for humans should be applied to software robots in just the same way.

There are many security advantages gained from introducing a robot into your organisation.  

  • Once a robot has been trained to perform a task, it never deviates from the policies, procedures and business rules in place
  • Unlike human users, robots lack curiosity (so they won’t be tempted to open phishing emails) and cannot be tricked into revealing information or downloading unauthorised software.
  • Robots have no motives that could turn them into a disgruntled employee who ignores existing policies and procedures.

So, we can see that, on the contrary, in many ways the predictable behaviour of robots makes them your most trusted employees!

RPA certainly represents an unprecedented level of transformation and disruption to “business as usual” – one that requires careful preparation and planning. But while caution is prudent, many of the security concerns related to RPA implementation are overstated. 

The issue of data security can be broken down into two points:

  • Data Security 
  • Access Security 

This means ensuring that the data being accessed and processed by the robot remains secure and confidential, and that access management for the robots is properly assigned and reviewed in the same way as for existing human user accounts.

Here are some of the key security points to consider: 

  1. Segregate access to data just as you would for human users: grant access based on what the robot actually needs to do, and do not provide domain admin or otherwise elevated permissions unless absolutely necessary.
  2. Maintain passwords in a password vault and review service accounts’ access periodically.
  3. Monitor robot activity and logon information via a “control room” (e.g. logon details and any errors).
  4. Integrate the RPA environment with Active Directory so that access management is centralised, which also increases business efficiency.
  5. Encrypt stored credentials (a minimal sketch follows this list).
  6. Perform independent code audits and reviews, no different than with any other IT environment.
  7. Program robots using secure coding methods.
  8. Test security against policy controls.
 

All these points must be considered from the outset. This is security by design, which must be embedded in the RPA process from the start. It must be re-emphasised that the security of RPA is not just about protecting access to the data but about securing the data itself.

Overall, RPA lowers the security-related effort associated with training employees in security practices (e.g. password management, application of privacy settings, etc.) because it ensures a zero-touch environment. By eliminating manual work, automation minimises security risks at a macro level, provided the key controls are implemented at the beginning.

Besides reducing security risks, the zero-touch environment of RPA also helps mitigate other human-related risks in business operations. An automated environment removes the bias, prejudice, variability and error that come with manual work, and that consistency improves uniform compliance with the company requirements built into the automated workflows and tasks. As a result, RPA delivers less risky, more consistent work with trustworthy data.

Therefore, RPA should be implemented wisely, which essentially comes down to choosing a stable RPA product or provider, backed by proper, constant monitoring of security measures. Providing role-based access to confidential data, monitoring that access and encrypting the data are the most salient means of dealing with security risks.

Posted on : 17-06-2019 | By : richard.gale | In : Uncategorized


Are you able to access all the data across your organisation?

For many years data has been the lifeblood of the organisation and more recently, the value of this commodity has been realised by many companies (see our previous article “Data is like oil”).

Advances in technology, processing power and analytics mean that companies can collect and process data in real time. Most businesses are sitting on vast amounts of data, and those that can harness it effectively can gain a much deeper understanding of their customers, better predict their behaviour and improve the customer experience.

Our survey revealed that whilst most companies understand the value of their data and the benefits it can bring, many clients expressed a level of frustration with the systems and processes that manage it. Some respondents did qualify that “most of the data” was available, whilst others admitted some was stranded.

 “Data is in legacy silos, our long-term goal is to provide access through a consistent data management framework”

The deficiencies that we also discuss in this newsletter regarding legacy systems are partly responsible for this, although not wholly. This is a particular issue in financial services where many organisations are running on old systems that are too complex and too expensive to replace. Critical company data is trapped in silos, disconnected and incompatible with the rest of the enterprise.

These silos present a huge challenge for many companies. Recalling a comment of one Chief Data Officer at a large institution:

“If I ask a question in more than one place, I usually get more than one answer!”

Data silos are expanding as companies collect too much data and hold onto it for longer than they need to. Big data has been a buzzword for a while now, but it is important that companies distinguish between big data and big bad data! The number of data sources is increasing all the time, so the issue must be addressed if the data is to be used effectively and return business value. Collecting a virtually unlimited amount of data has to be managed properly to ensure that everything stored has a purpose and can be protected.

Shadow data further exacerbates the issue. This data is unverified, often inaccurate and out of date. Oversharing results in it being stored in areas that are unknown and untraceable, creating yet more data silos hidden from the wider enterprise. Such data is then treated as a valid source, relied upon and used as input to other systems, which can ultimately lead to bad business decisions being made.

The importance of a robust data governance and management strategy cannot be overstated, particularly for those serious about the digital agenda and customer experience. This is also a topic where the combination of business and IT leadership aligning on the product strategy and the underlying “data plumbing” is a must. It is not just about systems but also about the organisation’s attitude to data and its importance in the life of every business process. It is important that companies implement a data management strategy which encompasses not only the internal platforms and governance but also the presentation layer for business users, consumers and data insights.

Posted on : 31-03-2019 | By : richard.gale | In : Data, Finance


The ultimate way to move beyond trading latency?

A number of power surges and outages have been experienced in the East Grinstead area of the UK in recent months. The utility companies involved have traced the cause to one of three high-capacity feeds into a global investment bank’s data centre facility.

The profits created by the same bank’s London-based Proprietary Trading group have increased tenfold over the same period.

This bank employs 1% of the world’s best post-doctoral theoretical physics graduates to help build its black box trading systems.

Could there be a connection? Wild and unconfirmed rumours have been circulating within the firm that a major breakthrough has been made in removing the problem of latency – the physical limit on the time it takes a signal to travel down a wire, ultimately governed by the speed of light.

For years traders have been trying to reduce execution latency to gain competitive advantage in a highly competitive, fast-moving environment. The focus has moved from seconds to milli- and now microsecond savings.

Many financial services and technology organisations have attempted to solve this problem by reducing data hops and routing, going as far as placing their hardware physically close to the source of data (such as in an exchange’s data centre) to minimise latency, but no one has solved the issue – yet.

It sounds like this bank may have gone one step further. It is known that at the boundary of the speed of light, physics as we know it changes (quantum mechanics is an example, where the time/space continuum becomes ‘fuzzy’). Conventional physics states that travelling faster than the speed of light to see into the future would require infinite energy, and so is not possible.

Discussions with a number of insiders at the firm have resulted in an amazing and almost unbelievable insight. They have managed to build a device which ‘hovers’ over the present and immediate future – little detail is known about it, but it is understood to be based on the previously unproven ‘Alcubierre drive’ principle. This allows the trading system to predict (in reality, observe) the next direction in the market, providing invaluable trading advantage.

The product is still in test mode: trading ahead of data the system has already traded against produces outages, as it then tries to correct the error in the future data, which again changes the data, ad infinitum… The prediction model only allows a small glimpse into the immediate future, which also limits the window of opportunity for trading.

The power requirements for the equipment are so large that it has had to be moved to the data centre environment, where consumption can be more easily hidden (or not, as the power outages showed).

If the bank really does crack this problem then it will have the ultimate trading advantage – the ability to see into the future and trade with ‘inside’ knowledge legally. Unless another bank is doing something similar in the ‘trading arms race’, the bank will quickly become dominant and the other banks may go out of business.

The US Congress has apparently discovered some details of this mechanism and is requesting that the bank disclose details of the project. The bank is understandably reluctant to do this, as it has spent over $80m developing the system and wants to make some return on its investment.

If this system goes into true production mode surely it cannot be long before Financial Regulators outlaw the tool as it will both distort and ultimately destroy the markets.

Of course the project has a codename…. Project Tachyons

No one from the company was available to comment on the accuracy of the claims.

Posted on : 29-03-2019 | By : richard.gale | In : Finance, Uncategorized



Do you believe that your legacy systems are preventing digital transformation?

According to the results of our recent Broadgate Futures Survey, more than half of our clients agreed that digital transformation within their organisation was being hampered by legacy systems. Indeed, no one “strongly disagreed”, confirming the extent of the problem.

Many comments suggested that this was not simply a case of budget constraints; rather, the sheer size, scale and complexity of the transition had deterred organisations, for fear that they were not adequately equipped to deliver successful change.

Legacy systems have a heritage going back many years to the days of the mega mainframes of the 70s and 80s. This was a time when banks were the masters of technological innovation. We saw the birth of ATMs, BACS and international card payments. It was an exciting time of intense modernisation. Many of the core systems that run the finance sector today are the same ones that were built back then. The only problem is that, although these systems were built to last, they were not built for change.

The new millennium brought another significant development with the introduction of the internet, an opportunity the banks could have seized to develop new, simpler, more versatile systems. Instead they adopted a different strategy and modified their existing systems; in their eyes there was no need to reinvent the wheel. They made additions and modifications as and when required. As a result, most financial organisations have evolved over the decades into complex networks, with a myriad of applications and an overloaded IT infrastructure.

The Bank of England itself has recently been severely reprimanded by a Commons Select Committee review, which found the Bank to be drowning in out-of-date processes in dire need of modernisation. Its legacy systems are overly complicated and inefficient, and following the merger with the PRA in 2014 its IT estate comprises duplicated systems and extensive data overload.

Budget, as stated earlier, is not the only factor preventing digital transformation, although there is no doubt that these projects are expensive and extremely time consuming. The complexity of the task and the fear of failure is another reason why companies hold on to their legacy systems. Better the devil you know! Think back to the TSB outage (there were a few…): systems were down for hours and customers were unable to access their accounts following a system upgrade. The incident ultimately led to huge fines from the Financial Conduct Authority and the resignation of the Chief Executive.

For most organisations, abandoning their legacy systems is simply not an option, so they need to find ways to update them in order to connect to digital platforms and plug into new technologies.

Many of our clients believe that it is not the legacy systems themselves which are the barrier, but the inability to access the vast amount of data stored within their infrastructure. It is the data that is the key to digital transformation, so accessing it is a crucial piece of the puzzle.

“It’s more about legacy architecture and lack of active management of data than specifically systems”

By finding a way to unlock the data inside these out-of-date systems, banks can decentralise their data, making it available to the new digital world.

With the creation of such advancements as the cloud and APIs, it is possible to sit an agility layer between the existing legacy systems and newly adopted applications. HSBC has successfully adopted this approach, using an API strategy to expand its digital and mobile services without needing to replace its legacy systems.
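As a sketch of what such an agility layer can look like (not HSBC’s actual implementation), the Flask example below exposes a modern JSON endpoint that, behind the scenes, calls an adapter standing in for a query against the legacy core system; the adapter function and account data are hypothetical placeholders.

```python
# Minimal sketch of an "agility layer": a modern REST API fronting a legacy system.
# Requires Flask. The legacy lookup below is a hypothetical stand-in.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for the only route into the legacy core system,
# e.g. a screen-scrape, MQ message or stored-procedure call.
LEGACY_LEDGER = {"12345678": {"balance": 1042.17, "currency": "GBP"}}

def fetch_balance_from_legacy(account_id: str):
    """Hypothetical adapter that queries the legacy system of record."""
    return LEGACY_LEDGER.get(account_id)

@app.route("/api/v1/accounts/<account_id>/balance")
def account_balance(account_id: str):
    record = fetch_balance_from_legacy(account_id)
    if record is None:
        abort(404)                      # account unknown to the legacy system
    # Digital channels consume clean JSON without ever touching the mainframe.
    return jsonify({"accountId": account_id, **record})

if __name__ == "__main__":
    app.run(port=8080)
```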

Legacy systems are no longer the barrier to digital innovation that they once were. With some creative thinking and the adoption of new technologies legacy can continue to be part of your IT infrastructure in 2019!

https://www.finextra.com/newsarticle/33529/bank-of-england-slammed-over-outdated-it-and-culture

Posted on : 14-03-2019 | By : richard.gale | In : Data, Finance, FinTech, Innovation, Uncategorized



Has the agile product delivery model been too widely adopted?

As a consultancy, we have the benefit of working with many clients across almost all industry verticals. Specifically, over the last 7-8 years we have seen a huge uptake in the shift from traditional project delivery models towards more agile techniques.

The combination of people, process and technology with this delivery model has been hugely beneficial in increasing both the speed of execution and the alignment of business requirements with products. That said, in more recent years we have observed an almost “religious” adoption of agile, often, in our view, at the expense of pragmatism and execution focus. A purist approach to agile, where traditional development is completely replaced in one fell swoop, results in failure for many organisations, especially those that rely on tight controls, rigid structures and cost-benefit analysis.

Despite its advantages, many organisations struggle to successfully transition to agile, leading to an unnecessarily high agile project failure rate. While there are several common causes for this failure rate, one of the top causes—if not the leading cause—is the lack of an agile-ready culture.

This has been evident with our own client discussions which have centred around “organisational culture at odds with agile values” and “lack of business customer or product owner availability” as challenges for adopting and scaling agile.  Agile as a methodology does require a corresponding agile culture to ensure success.  It’s no good committing to implementing in an agile way when the organisation is anything but agile!

Doing Agile v Being Agile

Adopting an agile methodology in an organisation which has not fully embraced agile can still reap results (estimates vary, but a benchmark is around a 20% increase in benefits). If, on the other hand, the firm has truly embraced an agile approach from CEO to receptionist, then the sky is the limit and improvements of 200% plus have been experienced!

Investing in the change management required to build an agile culture is the key to making a successful transition to agile and experiencing all of the competitive advantages it affords. Through this investment, your business leadership, IT leadership and IT teams can align, collaborate and deliver quality solutions for customers, as well as drive organisational transformation—both today and into the future.

There are certain projects where shoehorning them into agile processes just serves to slow down delivery with no benefit. Some of this may come from the increase in devops delivery, but we see it stifling many infrastructure or underpinning projects, which still lend themselves to a more waterfall delivery approach.

The main difference between agile methodologies and waterfall methodologies is the phased approach that waterfall takes (define requirements, freeze requirements, begin coding, move to testing, etc.) as opposed to the iterative approach of agile. However, there are different ways to implement a waterfall methodology, including iterative waterfall, which still practices the phased approach but delivers in smaller release cycles.

Today, more and more teams would say that they are using an agile methodology, when in fact many of them are likely to be using a hybrid model that includes elements of several agile methodologies as well as waterfall.

It is crucial to bring together people, processes and technologies and identify where it makes business sense to implement agile; agile is not a silver bullet. An assessment of the areas where agile would work best is required, which will then guide the transition. Many organisations kick off an agile project without carrying out this assessment and find following this path is just too difficult. A well-defined transitional approach is a prerequisite for success.

We all understand that today’s business units need to be flexible and agile to survive but following an agile delivery model is not always the only solution.

Posted on : 30-01-2019 | By : richard.gale | In : Uncategorized



What will the IT department look like in the future?

We are going through a significant change in how technology services are delivered as we stride further into the latest phase of the Digital Revolution. The internet provided the starting pistol for this phase and now access to new technology, data and services is accelerating at breakneck speed.

More recently, the real enablers of more agile, service-based technology have been virtualisation and orchestration technologies, which allow compute to be tapped into on demand and remove the friction between software and hardware.

The impact of this cannot be overstated. The removal of the need to manually configure and provision new compute environments was a huge step forward, and one which continues with developments in Infrastructure as Code (“IaC”), microservices and serverless technology.
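As an illustration of the Infrastructure as Code idea, the sketch below provisions a virtual server programmatically with AWS’s boto3 SDK rather than by clicking through a console. The AMI ID, region, tag and instance type are placeholder values, and real IaC estates usually sit behind declarative tools such as Terraform or CloudFormation.

```python
# Illustrative Infrastructure-as-Code sketch: compute provisioned on demand
# via the AWS boto3 SDK. All identifiers below are placeholder values.
import boto3

# Assumes AWS credentials are already configured in the environment.
ec2 = boto3.resource("ec2", region_name="eu-west-2")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",        # hypothetical machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-on-demand-compute"}],
    }],
)
print("Launched:", instances[0].id)
```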

However, whilst these technologies continually disrupt the market, the corresponding changes to overall operating models have, in our view, lagged behind (this is particularly true in larger organisations, which have struggled to shift from the old to the new).

If you take a peek into organisation structures today, they often still resemble those of the late 90s, where infrastructure capabilities were organised by specialism: data centre, storage, service management, application support and so on. There have been changes, particularly more recently with the shift to devops and continuous integration and delivery, but there is still a long way to go.

Our recent Technology Futures Survey provided a great insight into how our clients (290 respondents) are responding to the shifting technology services landscape.

“What will your IT department look like in 5-7 years’ time?”

There were no surprises in the large majority of respondents agreeing that the organisation would look different in the near future. The big shift is to a more service-focused, vendor-led technology model, with between 53% and 65% believing that this is the direction of travel.

One surprise was a relatively low consensus on the impact that Artificial Intelligence (“AI”) would have on management of live services, with only 10% saying it would be very likely. However, the providers of technology and services formed a smaller proportion of our respondents (28%) and naturally were more positive about the impact of AI.

The Broadgate view is that the changing shape of digital service delivery is challenging previous models and applying tension to organisations and providers alike. There are two main areas where we see this:

  1. With the shift to cloud based and on-demand services, the need for any provider, whether internal or external, has diminished
  2. Automation, AI and machine learning are developing new capabilities in self-managing technology services

We expect that the technology organisation will shift to focus more on business products and on procuring the best-fit service providers. Central to this are AI and ML which, where truly intelligent (and not just marketing), can create a self-healing and dynamic compute capability with limited human intervention.

Cloud, machine learning and RPA will remove much of the need to manage and develop code

To really understand how the organisation model is shifting, we have to look at the impact that technology is having on the whole supply chain. We’ve long outsourced the delivery of services. However, if we look at the traditional service providers (IBM, DXC, TCS, Cognizant etc.) that in the first instance acted as brokers for these new digital technology innovations, we see that they are increasingly being disintermediated, with provisioning and management now directly in the hands of the consumer.

Companies like Microsoft, Google and Amazon have superior technical expertise and are continuing to expose it directly to the end consumer. Thus, the IT department needs to think less about how to build or procure from a third party, and more about how to build a framework of services which “knits together” a service model that can best meet business needs with a layered, end-to-end approach. This fits perfectly with a more business product-centric approach.

We don’t see in-house technology footprints increasing, with the possible exception of truly data-driven organisations or tech companies themselves.

In our results, the removal of cyber security issues was endorsed by 28% with a further 41% believing that this was a possible outcome. This represents a leap of faith given the current battle that organisations are undertaking to combat data breaches! Broadgate expect that organisations will increasingly shift the management of these security risks to third party providers, with telecommunication carriers also taking more responsibilities over time.

As the results suggest, the commercial and vendor management aspects of the IT department will become more important. This is often a skill which is absent in current companies, so a conscious strategy to develop capability is needed.

Organisations should update their operating model to reflect the changing shape of technology services, with the closer alignment of products and services to technology provision never being as important as it is today.

Indeed, our view is that even if your model serves you well today, by 2022 it is likely to look fairly stale. This is because what your company currently offers to your customers is almost certain to change, which will require fundamental re-engineering across, and around, the entire IT stack.

Posted on : 29-01-2019 | By : john.vincent | In : Cloud, Data, General News, Innovation



The Challenges of Implementing Robotic Process Automation (RPA)

We recently surveyed our clients on their views around the future of technology in the workplace and the changes that they think are likely to shape their future working environment. 

One of the questions identified by many clients as a major challenge was around the adoption of RPA. We asked the question:

“Do you agree that RPA could improve the efficiency of your business?”

Around 65% of the respondents to our survey agreed that RPA could improve the efficiency of their business, but many commented that they were put off by the challenges that needed to be overcome in order for RPA deployment to be a success. 

“The challenge is being able to identify how and where RPA is best deployed, avoiding any detrimental disruption”

In this article we will discuss in more detail the challenges, and what steps can be taken to ensure a more successful outcome. 

The benefits of RPA are:

  • Reduced operating costs
  • Increased productivity
  • Reduced employee workload, freeing up time for higher-value tasks
  • More done in less time!

What Processes are Right for Automation? 

One of the challenges facing many organisations is deciding which processes are good candidates for automation and which to automate first. This line from Bill Gates offers some good advice:

“automation applied to an inefficient operation will magnify the inefficiency”

It follows, therefore, that the first step in any automation journey is reviewing all of your business processes to ensure that they are running as efficiently as possible. You do not want to waste time, money and effort implementing a robot to carry out an inefficient process which will reap no rewards at all.

Another challenge is choosing which process to automate first. In our experience, many clients have earmarked one of their most painful processes as process number one in order to heal the pain. This fails more often than not, because the most painful process is often one of the most difficult to automate. Ideally, you want to pick a straightforward, highly repetitive process which will be easier to automate and will clearly demonstrate the benefits of automation. Buy-in at this stage from all stakeholders is critical if RPA is to be successfully deployed further in the organisation. Management needs to see the efficiency savings, and employees need to see how the robot can help them do their jobs more quickly and free up their time for more interesting work. Employee resistance and onboarding should not be underestimated: keeping workers in the loop and reducing the perceived threat is crucial to RPA success.

Collaboration is Key 

Successful RPA deployment is all about understanding and collaboration; approached carelessly, it could ultimately lead to the failure of the project. In one sense RPA is just like any other piece of software that you will implement, but in another way it’s not: implementation involves close scrutiny of an employee’s job, which can leave the employee feeling threatened that the robot may take over and make them redundant in the process.

IT and the business must work closely together to ensure that process accuracy, cost reduction and customer satisfaction benchmarks are met during implementation. RPA implementation success is both IT- and business-driven, with RPA governance sitting directly in the space between business and IT. Failure to maintain consistent communication between these two sides will mean that project governance is weak and that any obstacles, such as potential integration issues between RPA and existing programs, cannot be dealt with effectively.

Don’t Underestimate Change 

Change management should not be underestimated; the implementation of RPA is a major change for an organisation which needs to be planned for and carefully managed. Consistently working through the change management aspects is critical to making RPA successful. It is important to set realistic expectations and to look at RPA from an enterprise perspective, focusing on the expected results and what will be delivered.

 RPA = Better Business Outcomes 

RPA is a valuable automation asset in a company’s digital road map and can deliver great results if implemented well. However, RPA implementations have often not delivered the returns promised, hampered by the challenges we have discussed. Implementations that give significant consideration to the design phase and recognise the importance of broader change management will see better business outcomes across the end-to-end process. Enterprises looking to embark on the RPA journey should take note, avoid the pitfalls and experience the success that RPA can bring.

Posted on : 25-01-2019 | By : kerry.housley | In : Innovation, Uncategorized



It’s Time to Take Control of Your Supply Chain Security

According to Symantec’s annual threat report, supply chain attacks rose 200% between 2016 and 2017, confirming the trend for attackers to start small, move up the chain and hit the big time!

Attackers are increasingly hijacking software updates as an entry point to target networks further up the supply chain. Nyetya, a global attack, started this way, affecting companies such as FedEx and Maersk and costing them millions.

Although many corporations have wised up to the need to protect their network and their data, have all their suppliers? And their suppliers’ suppliers? All it takes is a single vulnerability in one of your trusted vendors for attackers to gain access to your network, and your own and your customers’ sensitive data could be compromised.

Even if your immediate third parties don’t pose a direct risk, their third parties (your fourth parties) might. It is crucial to gain visibility into the flow of sensitive data among all third and fourth parties, and closely monitor every organization in your supply chain. If you have 100 vendors in your supply chain and 60 of them are using a certain provider for a critical service, what will happen if that critical provider experiences downtime or is breached?

The changing nature of the digital supply chain landscape calls for coordinated, efficient and agile defences. Unless the approach to supply chain risk management moves with the times, we will continue to see an increase in third-party attacks.

Organizations need to fundamentally change the way they approach managing third-party risk, and that means more collaboration and automation of the process, with the adoption of new technology and procedures. It is no longer sufficient simply to add some clauses to your vendor contract stating that everything that applies to your third-party vendor applies to the vendor’s sub-contractors.

Traditionally, vendor management has meant carrying out an assessment during the onboarding process and then perhaps an annual review to see if anything has changed. This assessment is only a point-in-time view against a moving threat environment. What looks secure today may not be secure next week!

The solution is to supplement this assessment by taking an external view of your vendors, using publicly available threat analytics to see what is happening on their networks today. With statistics coming through in real time, you can monitor your suppliers on a continuous basis. It is not possible to prevent every third-party attack in your supply chain, but with up-to-date monitoring, issues can be detected at the earliest possible opportunity, limiting the potential damage to your company’s reputation and your clients’ data.
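One publicly observable signal that can feed this kind of continuous, outside-in monitoring is the state of a supplier’s TLS certificates. The sketch below, using only the Python standard library, reports how many days remain before each (hypothetical) supplier domain’s certificate expires; this is the sort of check a security-ratings service runs at scale alongside many other signals.

```python
# Minimal outside-in supplier check: days until each vendor's TLS certificate expires.
# The supplier domains listed below are hypothetical examples.
import socket
import ssl
from datetime import datetime, timezone

SUPPLIERS = ["vendor-one.example.com", "vendor-two.example.com"]

def days_until_cert_expiry(hostname: str, port: int = 443) -> int:
    """Connect to the host, read its certificate and return the days left before expiry."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")  # e.g. 'Jun 28 12:00:00 2025 GMT'
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

for supplier in SUPPLIERS:
    try:
        remaining = days_until_cert_expiry(supplier)
        status = "OK" if remaining > 30 else "ALERT: certificate close to expiry"
        print(f"{supplier}: {remaining} days remaining - {status}")
    except (OSError, ssl.SSLError) as exc:
        print(f"{supplier}: check failed ({exc})")
```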

Many vendor management tools use security ratings as a way of verifying the security of your suppliers, providing data-driven insights into any vendor’s security performance by continuously analysing and monitoring companies’ cybersecurity from the outside. Security ratings are generated daily, giving organizations continuous visibility into the security posture of key business partners and enabling an organisation to assess all suppliers in the supply chain at the touch of a button. This is a marked difference from the traditional point-in-time risk assessment.

Here at Broadgate we have helped several clients to take back control of their supply chain by implementing the right technology solution; together with the right policies and procedures, the security and efficiency of the vendor management process can be vastly improved.

If you are responsible for cyber security risk management in these times, you are certainly being faced with some overwhelming challenges.  Implementing a vendor risk management program that is well-managed, well-controlled, and well-maintained will mean that you have a more secure supply chain as a result. Companies with more secure third parties will in turn have a lower risk of accruing any financial or reputational damage that would result from a third-party breach. Don’t fret about your supply chain, invest in it and you will reap the rewards!

Posted on : 31-08-2018 | By : richard.gale | In : Uncategorized
