The ultimate way to move beyond trading latency?

Posted on : 29-03-2019 | By : richard.gale | In : Finance, Uncategorized


A number of power surges and outages have been experienced in the East Grinstead area of the UK in recent months. The utility companies involved have traced the cause to one of three high-capacity feeds into a global investment bank’s data centre facility.

The profits generated by the same bank’s London-based proprietary trading group have increased tenfold over the same period.

This bank employs 1% of the world’s best post-doctoral theoretical physics graduates to help build its black box trading systems.

Could there be a connection? Wild and unconfirmed rumours have been circulating within the firm that a major breakthrough has been made in removing the problem of latency – the physical limit on the time it takes a signal to travel down a wire, ultimately governed by the speed of light.

For years traders have been trying to reduce execution latency to gain a competitive advantage in a highly competitive, fast-moving environment. The focus has moved from seconds to milliseconds and now to microsecond savings.

Many financial services and technology organisations have attempted to solve this problem by reducing data hops and routing, and by going as far as placing their hardware physically close to the source of data (such as in an exchange’s data centre) to minimise latency, but no one has solved the issue – yet.

It sounds like this bank may have gone one step further. It is known that at the boundary of the speed of light, physics as we know it changes (quantum mechanics is an example where the time/space continuum becomes ‘fuzzy’). Conventional physics states that travelling faster than the speed of light – and so seeing into the future – would require infinite energy and is therefore not possible.

Conversations with a number of insiders at the firm have resulted in an amazing and almost unbelievable insight. They have managed to build a device which ‘hovers’ over the present and immediate future – little detail is known about it, but it is understood to be based on the previously unproven ‘Alcubierre drive’ principle. This allows the trading system to predict (in reality, observe) the next direction in the market, providing an invaluable trading advantage.

The product is still in test mode, as trading ahead of data the system has already traded against produces outages: it then tries to correct the error in the future data, which again changes the data, ad infinitum… The prediction model only allows a small glimpse into the immediate future, which also limits the window of opportunity for trading.

The power requirements for the equipment are so large that it has had to be moved to the data centre environment, where consumption can be more easily hidden (or not, as the power outages showed).

If the bank really does crack this problem then it will have the ultimate trading advantage – the ability to see into the future and trade with ‘inside’ knowledge legally. Unless another bank is doing something similar in the ‘trading arms race’, the bank will quickly become dominant and the other banks may go out of business.

The US Congress has apparently discovered some details of this mechanism and is requesting that the bank disclose details of the project. The bank is understandably reluctant to do so, as it has spent over $80m developing the technology and wants to make some return on its investment.

If this system goes into true production mode, surely it cannot be long before financial regulators outlaw the tool, as it will both distort and ultimately destroy the markets.

Of course the project has a codename… Project Tachyons.

No one from the company was available to comment on the accuracy of the claims.

Do you believe that your legacy systems are preventing digital transformation?

Posted on : 14-03-2019 | By : richard.gale | In : Data, Finance, FinTech, Innovation, Uncategorized


According to the results of our recent Broadgate Futures Survey, more than half of our clients agreed that digital transformation within their organisation was being hampered by legacy systems. Indeed, no one “strongly disagreed”, confirming the extent of the problem.

Many comments suggested that this was not simply a case of budget constraints; rather, the sheer size, scale and complexity of the transition had deterred organisations, which feared they were not adequately equipped to deliver successful change.

Legacy systems have a heritage going back many years to the days of the mega mainframes of the 70s and 80s. This was a time when banks were the masters of technological innovation. We saw the birth of ATMs, BACS and international card payments. It was an exciting time of intense modernisation. Many of the core systems that run the finance sector today are the same ones that were built back then. The only problem is that, although these systems were built to last, they were not built for change.

The new millennium brought another significant development with the introduction of the internet, an opportunity the banks could have seized to develop new, simpler, more versatile systems. Instead, they adopted a different strategy and modified their existing systems; in their eyes there was no need to reinvent the wheel. They made additions and modifications as and when required. As a result, most financial organisations have evolved over the decades into organisations of complex networks, a myriad of applications and an overloaded IT infrastructure.

The Bank of England itself has recently been severely reprimanded by a Commons Select Committee review, which found the Bank to be drowning in out-of-date processes in dire need of modernisation. Its legacy systems are overly complicated and inefficient; following the merger with the PRA in 2014, its IT estate comprises duplicated systems and extensive data overload.

Budget, as stated earlier, is not the only factor preventing digital transformation, although there is no doubt that these projects are expensive and extremely time-consuming. The complexity of the task and the fear of failure are further reasons why companies hold on to their legacy systems. Better the devil you know! Think back to the TSB outage (there were a few…): systems were down for hours and customers were unable to access their accounts following a system upgrade. The incident ultimately led to huge fines from the Financial Conduct Authority and the resignation of the Chief Executive.

For most organisations abandoning their legacy systems is simply not an option so they need to find ways to update in order to facilitate the connection to digital platforms and plug into new technologies.

Many of our clients believe that it is not the legacy systems themselves which are the barrier, but the inability to access the vast amount of data stored within them. It is the data that is the key to digital transformation, so accessing it is a crucial piece of the puzzle.

“It’s more about legacy architecture and lack of active management of data than specifically systems”

By finding a way to unlock the data inside these out of date systems, banks can decentralise their data making it available to the new digital world.

With advancements such as the cloud and APIs, it is possible to sit an agility layer between the existing legacy systems and newly adopted applications. HSBC has successfully adopted this approach, using an API strategy to expand its digital and mobile services without needing to replace its legacy systems.
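
As a rough illustration of what such an agility layer can look like (a minimal sketch only, not a description of any bank’s actual architecture), the Python snippet below exposes data from a legacy system of record through a small REST facade. The Flask framework is just one possible choice, and the in-memory LEGACY_LEDGER dictionary is a hypothetical stand-in for the real legacy lookup.

```python
# Minimal sketch of an "agility layer": a REST facade that exposes data held
# in a legacy core system to new digital channels. Everything here is
# illustrative; LEGACY_LEDGER stands in for the real legacy interface
# (MQ call, screen-scrape, overnight batch extract, etc.).
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Hypothetical stand-in for the legacy system of record.
LEGACY_LEDGER = {
    "C1001": {"ledger_balance": 2450.75, "ccy": "GBP"},
    "C1002": {"ledger_balance": 120.00, "ccy": "EUR"},
}

@app.route("/api/v1/customers/<customer_id>/balance")
def get_balance(customer_id):
    record = LEGACY_LEDGER.get(customer_id)
    if record is None:
        abort(404)
    # Translate the legacy record into a clean, channel-friendly JSON shape.
    return jsonify({
        "customerId": customer_id,
        "balance": record["ledger_balance"],
        "currency": record["ccy"],
    })

if __name__ == "__main__":
    app.run(port=8080)
```

The point is that new digital applications talk to the clean API, while the messy detail of the legacy system stays hidden behind it and can be modernised, or replaced, later.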

Legacy systems are no longer the barrier to digital innovation that they once were. With some creative thinking and the adoption of new technologies legacy can continue to be part of your IT infrastructure in 2019!

https://www.finextra.com/newsarticle/33529/bank-of-england-slammed-over-outdated-it-and-culture

Has the agile product delivery model been too widely adopted?

Posted on : 30-01-2019 | By : richard.gale | In : Uncategorized


As a consultancy, we have the benefit of working with many clients across almost all industry verticals. Specifically, over the last 7-8 years we have seen a huge uptake in the shift from traditional project delivery models towards more agile techniques.

The combination of people, process and technology with this delivery model has been hugely beneficial in increasing both the speed of execution and the alignment of business requirements with products. That said, in more recent years we have observed an almost “religious-like” adoption of agile, often, in our view, at the expense of pragmatism and execution focus. A purist approach to agile, where traditional development is completely replaced in one fell swoop, results in failure for many organisations, especially those that rely on tight controls, rigid structures and cost-benefit analysis.

Despite its advantages, many organisations struggle to make a successful transition to agile, leading to an unnecessarily high project failure rate. While there are several common causes, one of the top causes, if not the leading one, is the lack of an agile-ready culture.

This has been evident in our own client discussions, which have centred on “organisational culture at odds with agile values” and “lack of business customer or product owner availability” as challenges for adopting and scaling agile. Agile as a methodology does require a corresponding agile culture to ensure success. It’s no good committing to implementing in an agile way when the organisation is anything but agile!

Doing Agile v Being Agile

Adopting an agile methodology in an organisation which has not fully embraced agile can still reap results (estimates vary, but a benchmark is around a 20% increase in benefits). If, on the other hand, the firm has truly embraced an agile approach from CEO to receptionist, then the sky is the limit and improvements of 200% plus have been experienced!

Investing in the change management required to build an agile culture is the key to making a successful transition to agile and experiencing all of the competitive advantages it affords. Through this investment, your business leadership, IT leadership and IT teams can align, collaborate and deliver quality solutions for customers, as well as drive organisational transformation—both today and into the future.

There are certain projects where shoehorning them into agile processes just serves to slow down delivery with no benefit. Some of this may come from the increase in DevOps delivery, but we see it stifling many infrastructure or underpinning projects, which still lend themselves to a more waterfall delivery approach.

The main difference between agile methodologies and waterfall methodologies is the phased approach that waterfall takes (define requirements, freeze requirements, begin coding, move to testing, etc.) as opposed to the iterative approach of agile. However, there are different ways to implement a waterfall methodology, including iterative waterfall, which still practices the phased approach but delivers in smaller release cycles.

Today, more and more teams would say that they are using an agile methodology when, in fact, many of those teams are likely to be using a hybrid model that includes elements of several agile methodologies as well as waterfall.

It is crucial to bring together people, processes and technologies and identify where it makes business sense to implement agile; agile is not a silver bullet. An assessment of the areas where agile would work best is required, which will then guide the transition. Many organisations kick off an agile project without carrying out this assessment and find following this path is just too difficult. A well-defined transitional approach is a prerequisite for success.

We all understand that today’s business units need to be flexible and agile to survive but following an agile delivery model is not always the only solution.

The Challenges of Implementing Robotic Process Automation (RPA)

Posted on : 25-01-2019 | By : kerry.housley | In : Innovation, Uncategorized


We recently surveyed our clients on their views around the future of technology in the workplace and the changes that they think are likely to shape their future working environment. 

One of the areas identified by many clients as a major challenge was the adoption of RPA. We asked the question:

“Do you agree that RPA could improve the efficiency of your business?”

Around 65% of the respondents to our survey agreed that RPA could improve the efficiency of their business, but many commented that they were put off by the challenges that needed to be overcome in order for RPA deployment to be a success. 

“The challenge is being able to identify how and where RPA is best deployed, avoiding any detrimental disruption.”

In this article we will discuss in more detail the challenges, and what steps can be taken to ensure a more successful outcome. 

The benefits of RPA are:

  • Reduced operating costs
  • Increased productivity
  • Reduced employee workload, freeing time for higher-value tasks
  • More done in less time!

What Processes are Right for Automation? 

One of the challenges facing many organisations is deciding which processes are good candidates for automation and which to automate first. This line from Bill Gates offers some good advice:

“Automation applied to an inefficient operation will magnify the inefficiency.”

It follows, therefore, that the first step in any automation journey is reviewing all of your business processes to ensure that they are running as efficiently as possible. You do not want to waste time, money and effort implementing a robot to carry out an inefficient process which will reap no rewards at all.

Another challenge is choosing which process to automate first. In our experience, many clients have earmarked one of their most painful processes as process number one in order to heal the pain. This fails more often than not, because the most painful process is often one of the most difficult to automate. Ideally, you want to pick a straightforward, highly repetitive process which will be easier to automate and whose results clearly show the benefits of automation. Buy-in at this stage from all stakeholders is critical if RPA is to be successfully deployed further in the organisation. Management need to see the efficiency saving, and employees need to see how the robot can help them do their job more quickly and free up their time for more interesting work. Employee resistance and onboarding should not be underestimated: keeping workers in the loop and reducing the perceived threat is crucial to your RPA success.
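
Purely as an illustration of how that initial shortlisting might be made explicit (the criteria, weights and candidate processes below are invented, not a standard model), a simple weighted score can help rank candidates:

```python
# Illustrative only: rank candidate processes for a first RPA pilot by
# favouring high volume/repetitiveness and clear rules, and penalising
# complexity. Criteria, weights and 1-5 scores are hypothetical examples.
candidates = {
    "Invoice data entry":        {"volume": 5, "rules_based": 5, "complexity": 2},
    "Client onboarding checks":  {"volume": 3, "rules_based": 3, "complexity": 4},
    "Month-end reconciliations": {"volume": 4, "rules_based": 4, "complexity": 5},
}

def pilot_score(c):
    # Higher volume and rule clarity help; complexity counts against.
    return 2 * c["volume"] + 2 * c["rules_based"] - 3 * c["complexity"]

ranked = sorted(candidates.items(), key=lambda kv: pilot_score(kv[1]), reverse=True)
for name, c in ranked:
    print(f"{name}: score {pilot_score(c)}")
```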

Collaboration is Key 

Successful RPA deployment is all about understanding and collaboration; if these are not approached carefully, the project could ultimately fail. RPA, in one sense, is just like any other piece of software that you will implement, but in another way it’s not. Implementation involves close scrutiny of an employee’s job, which can leave the employee feeling threatened that the robot may take over and make them redundant in the process.

IT and the business must work closely together to ensure that process accuracy, cost reduction, and customer satisfaction benchmarks are met during implementation. RPA implementation success is both IT- and business-driven, with RPA governance sitting directly in the space between business and IT. Failure to maintain consistent communication between these two sides will mean that project governance is weak and that any obstacles, such as potential integration issues of RPA with existing programs, cannot be dealt with effectively.

Don’t Underestimate Change 

Change management should not be underestimated: the implementation of RPA is a major change in an organisation which needs to be planned for and carefully managed. Consistently working through the change management aspects is critical to making RPA successful. It is important to set realistic expectations and look at RPA from an enterprise perspective, focusing on the expected results and what will be delivered.

 RPA = Better Business Outcomes 

RPA is a valuable automation asset in a company’s digital road map and can deliver great results if implemented well. However, RPA implementations have often not delivered the returns promised, impacted by the challenges we have discussed. Implementations that give significant consideration to the design phase and recognise the importance of broader change management will benefit from better business outcomes across the end-to-end process. Enterprises looking to embark on the RPA journey have the chance to take note, avoid the pitfalls and experience the success that RPA can bring.

It’s Time to Take Control of Your Supply Chain Security

Posted on : 31-08-2018 | By : richard.gale | In : Uncategorized


According to the annual Symantec Threat Report, supply chain attacks rose 200% in the period 2016-2017, confirming the trend for attackers to start small, move up the chain and hit the big time!

Attackers are increasingly hijacking software updates as an entry point to target networks further up the supply chain. Nyetya, a global attack, started this way, affecting companies such as FedEx and Maersk and costing them millions.

Although many corporations have wised up to the need to protect their network and their data, have all their suppliers? And their suppliers’ suppliers? All it takes is a single vulnerability in one of your trusted vendors for an attacker to gain access to your network, and your and your customers’ sensitive data could be compromised.

Even if your immediate third parties don’t pose a direct risk, their third parties (your fourth parties) might. It is crucial to gain visibility into the flow of sensitive data among all third and fourth parties, and closely monitor every organization in your supply chain. If you have 100 vendors in your supply chain and 60 of them are using a certain provider for a critical service, what will happen if that critical provider experiences downtime or is breached?

The changing nature of the digital supply chain landscape calls for coordinated, efficient and agile defences. Unless the approach to supply chain risk management moves with the times, we will continue to see an increase in third-party attacks.

Organizations need to fundamentally change the way they approach managing third-party risk, and that means more collaboration and automation of the process, along with the adoption of new technology and procedures. It is no longer sufficient simply to add some clauses to your vendor contract stating that everything that applies to your third-party vendor also applies to the vendor’s sub-contractors.

Traditionally, vendor management means carrying out an assessment during the onboarding process and then perhaps an annual review to see if anything has changed since the initial review. This assessment is only based on the view at a point in time against a moving threat environment. What looks secure today may not be next week!

The solution to this problem is to supplement the assessment with an external view of your vendors, using publicly available threat analytics to see what is happening on their networks today. With statistics coming through in real time, you can monitor your suppliers on a continuous basis. It is not possible to prevent every third-party attack in your supply chain, but with up-to-date monitoring, issues can be detected at the earliest possible opportunity, limiting the potential damage to your company’s reputation and your clients’ data.

Many vendor supply management tools use security ratings to verify the security of your suppliers, providing data-driven insight into a vendor’s security performance by continuously analysing and monitoring its cybersecurity from the outside. Security ratings are generated daily, giving organisations continuous visibility into the security posture of key business partners and enabling them to assess all suppliers in the supply chain at the touch of a button. This is a marked difference from the traditional point-in-time risk assessment.
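
As a sketch of the idea only (not how any particular ratings product works), continuous monitoring can be thought of as a simple daily review loop over supplier ratings, flagging any vendor that drops below an agreed floor or deteriorates sharply. The supplier names, ratings scale and thresholds below are invented for the example.

```python
# Illustrative only: flag suppliers whose (hypothetical) daily security
# rating falls below a floor or drops sharply against the previous day.
RATING_FLOOR = 650      # example threshold on an illustrative 250-900 scale
MAX_DAILY_DROP = 40     # example alert on sharp deterioration

yesterday = {"Acme Clearing": 720, "Beta Payments": 705, "Cloudy Hosting": 680}
today     = {"Acme Clearing": 715, "Beta Payments": 640, "Cloudy Hosting": 630}

def review(previous, current):
    alerts = []
    for vendor, rating in current.items():
        if rating < RATING_FLOOR:
            alerts.append(f"{vendor}: rating {rating} below floor {RATING_FLOOR}")
        drop = previous.get(vendor, rating) - rating
        if drop > MAX_DAILY_DROP:
            alerts.append(f"{vendor}: dropped {drop} points since yesterday")
    return alerts

for alert in review(yesterday, today):
    print("ALERT:", alert)
```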

Here at Broadgate we have helped several clients take back control of their supply chain by implementing the right technology solution; together with the right policies and procedures, the security and efficiency of the vendor management process can be vastly improved.

If you are responsible for cyber security risk management in these times, you are certainly being faced with some overwhelming challenges.  Implementing a vendor risk management program that is well-managed, well-controlled, and well-maintained will mean that you have a more secure supply chain as a result. Companies with more secure third parties will in turn have a lower risk of accruing any financial or reputational damage that would result from a third-party breach. Don’t fret about your supply chain, invest in it and you will reap the rewards!

How Can Artificial Intelligence Add Value to Cyber Security?

Posted on : 28-07-2018 | By : richard.gale | In : Uncategorized


Cyber security is a major concern for all organisations. A recent EY survey found that cyber security is the top risk for financial services. The cyber threat is ever growing and constantly changing, and it is becoming increasingly difficult to put the right controls and procedures in place to detect potential attacks and guard against them. It is now imperative that we make use of advanced tools and technologies to get ahead of the game.

A major weapon in the race against the cyber attacker is Artificial Intelligence (AI)-powered tooling, which can be used to prevent, detect and remediate potential threats.

Threat detection is a labour-intensive, arduous task, often like looking for a needle in a haystack, and AI can help considerably with the workload.

AI machines are intended to work and react like human beings. They can be trained to process substantial amounts of data and identify trends and patterns. A major cyber security issue has been the lack of skilled individuals, with organisations unable to find staff with the necessary skills; AI and machine learning tools can help close these gaps.

Despite what you’ve seen in the movies, robotic machines are not about to take over the world! Human intelligence is a unique characteristic which a robot does not have (not yet, anyway). Cybersecurity isn’t about man or machine but man and machine. A successful cyber strategy means machine intelligence and human analysts working together.

The machines perform the heavy lifting (data aggregation, pattern recognition, etc.) and provide a manageable number of actionable insights. The human analysts make decisions on how to act. Computers, after all, are extremely good at specific things, such as automating simple tasks and solving complex equations, but they have no passion, creativity, or intuition. Skilled humans, meanwhile, can display all these traits, but can be outperformed by even the most basic of computers when it comes to raw calculating power.

Data has posed perhaps the single greatest challenge in cybersecurity over the past decade. For a human, or even a large team of humans, the amount of data produced daily on a global scale is unthinkable. Add to this the massive number of alerts most organizations see from their SIEM, firewall logs, and user activity, and it’s clear human security analysts are simply unable to operate in isolation. Thankfully, this is where machines excel, automating simple tasks such as processing and classification to ensure analysts are left with a manageable quantity of actionable insights.
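
To illustrate the “machines do the heavy lifting” point, here is a small, assumption-laden sketch using scikit-learn’s IsolationForest to reduce a large volume of alert records to a short list of outliers for an analyst to review. The alert features, numbers and contamination rate are invented; this is not a description of any specific SIEM or vendor product.

```python
# Illustrative triage sketch: let a model surface a manageable number of
# unusual alerts for human review. All features and figures are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend feature vectors for 10,000 alerts:
# [events per minute, distinct destination hosts, bytes out (MB)]
normal = rng.normal(loc=[20, 3, 5], scale=[5, 1, 2], size=(9_980, 3))
odd = rng.normal(loc=[200, 40, 300], scale=[20, 5, 50], size=(20, 3))
alerts = np.vstack([normal, odd])

# Ask the model to isolate roughly the rarest 0.5% of alerts.
model = IsolationForest(contamination=0.005, random_state=0)
labels = model.fit_predict(alerts)  # -1 marks an outlier

suspicious = np.where(labels == -1)[0]
print(f"{len(suspicious)} of {len(alerts)} alerts queued for analyst review")
```

The model does not decide what to do about the fifty or so alerts it surfaces; that judgement stays with the human analyst.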

It’s essential that we respond quickly to security incidents, but we also need to understand enough about an incident to respond intelligently. Machines play a huge role here because they can process a massive amount of incoming data in a tiny fraction of the time it would take even a large group of skilled humans. They can’t make the decision of how to act, but they can provide an analyst with everything they need to do so.

Selecting a new “digitally focused” sourcing partner

Posted on : 18-07-2018 | By : john.vincent | In : Cloud, FinTech, Innovation, Uncategorized


It was interesting to see the recent figures this month from the ISG Index, showing that the traditional outsourcing market in EMEA has rebounded. Figures for the second quarter for commercial outsourcing contracts show a combined annual contract value (ACV) of €3.7Bn. This is up a significant 23% on 2017 and, for the traditional sourcing market, reverses a downward trend which had persisted for the previous four quarters.

This is an interesting change of direction, particularly against a backdrop of economic uncertainty around Brexit and the much “over-indulged” GDPR preparation. It seems that, despite this, rather than hunkering down with a tin hat and stockpiling rations, companies in EMEA have invested in their technology service provision to support agile digital growth for the future. The global number also accelerated, up 31% to a record ACV of €9.9Bn.

Underpinning some of these figures has been a huge acceleration in the As-a-Service market. In the last 2 years the ACV attributed to SaaS and IaaS has almost doubled. This has been fairly consistent across all sectors.

So when selecting a sourcing partner, what should companies consider outside of the usual criteria including size, capability, cultural fit, industry experience, flexibility, cost and so on?

One aspect that is interesting from these figures is the influence that technologies such as cloud based services, automation (including AI) and robotic process automation (RPA) are having both now and in the years to come. Many organisations have used sourcing models to fix costs and benefit from labour arbitrage as a pass-through from suppliers. Indeed, this shift of labour ownership has fuelled incredible growth within some of the service providers. For example, Tata Consultancy Services (TCS) has grown from 45.7k employees in 2005 to 394k in March 2018.

However, having reached this heady number of staff, the technologies mentioned previously are threatening the model of some of these companies. As-a-Service providers such as Microsoft Azure and Amazon AWS have platforms which are carving their way through technology service provision that previously would have been managed by human beings.

In the infrastructure space, commoditisation is well under way. Indeed, we predict that within 3 years the build, configure and manage skills in areas such as Windows and Linux platforms will rarely be in demand. DevOps models, and variants of them, are moving at a rapid pace, with tools to support spinning up platforms on demand for application services now mainstream. Service providers often focus on their technology overlay “value add” in this space, with portals or orchestration products which can manage cloud services. However, the value of these is often questionable compared with direct access or commercial 3rd party products.
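
As a simple illustration of what “spinning up platforms on demand” means in practice (an assumption-level sketch using AWS’s boto3 SDK; the AMI ID, region and tags are placeholders, and credentials are assumed to be configured in the environment):

```python
# Illustrative only: provision a Linux server on demand through an API call,
# the kind of task that previously meant a manual build/configure request.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "on-demand-app-server"}],
    }],
)

print("Launched instance:", response["Instances"][0]["InstanceId"])
```

When a few lines like these sit in a pipeline and run unattended, the traditional “build and configure a server” engagement largely disappears, which is exactly the commoditisation described above.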

Secondly, as we’ve discussed here before, technology advances in RPA, machine learning and AI are transforming service provision. This of course is not just in terms of business applications but also in terms of the underpinning services. This is translating itself into areas such as self-service Bots which can be queried by end users to provide solutions and guidance, or self-learning AI processes which can predict potential system failures before they occur and take preventative actions.

These advances present a challenge to the workforce focused outsource providers.

Given the factors above, and the market shift, it is important that companies take these into account when selecting a technology service provider. Questions to consider are:

  • What are their strategic relationships with cloud providers, and not just at the “corporate” level, but do they have in depth knowledge of the whole technology ecosystem at a low level?
  • Can they demonstrate skills in the orchestration and automation of platforms at an “infrastructure as a code” level?
  • Do they have the capability to deliver process automation through techniques such as bots, can they scale to enterprise level, and where are their RPA alliances?
  • Does the potential partner have domain expertise, and are they open to partnership around new products and shared reward/JV models?

The traditional sourcing engagement models are evolving, which has created new opportunities on both sides. Expect new entrants, without the technical debt and organisational overheads and with a more technology solution focus, to disrupt the market.

Insider Threat – Who is Taking Your Data Home?

Posted on : 25-06-2018 | By : richard.gale | In : Uncategorized


“Employee theft has always been a problem for organisations. Critical information is now more accessible and portable than ever before. So, what used to be an irritation has now become a threat to a company’s very existence.”

Stealing company secrets or having a grudge against a company is nothing new. However, today the rise of the digital age has made it easier to gain access to information from the inside and created a host of vulnerabilities ripe for exploitation.

Organisations can find it difficult to identify such insider threats, or by the time they have recognised them it may be too late and the leak has already happened. This is made ever more difficult to monitor by the increasing complexity of an organisation’s network. The amount of data stored and the number and types of devices connecting to the network make it harder than ever to monitor usage.

Companies have spent big money and devoted a lot of resource to protecting themselves against external threats, building strong defences with firewalls, anti-virus software, mail filters and numerous other controls. But have they left themselves vulnerable from the inside?

Recently, two corporate giants, Coca-Cola and Tesla, fell victim to malicious behaviour. In the case of Coca-Cola, a former employee stored thousands of employees’ personal data on an external hard drive. Electric car giant Tesla was sabotaged by an aggrieved employee who was upset not to have been awarded a promotion; to demonstrate his feelings, he stole highly sensitive data from the manufacturing operating system and sold it on to third parties.

According to a recent survey by Egress Software Technologies, almost a quarter of UK employees have purposely shared business information with people outside their organisation. Clearswift research has found that employees are willing to sell company information for as little as £125, so it doesn’t take much to turn a disgruntled or bored employee into the criminal’s accomplice! Add to this the number of employees tricked by social engineering and spoof emails into causing damage unintentionally, and organisations are faced with a potentially massive security problem inside their own walls.

Guarding against the insider threat is difficult because technology alone cannot solve the issue. This type of threat is more about personality and behaviour, feelings and motivation. There are highly capable tools to track keyboard strokes and data, but these will not identify the individual who was passed over for a promotion, or the individual going through a divorce or financial difficulties; technology alone cannot detect that.

So, what can companies do? There is a fine balance between monitoring employees and allowing them the freedom and responsibility to do their job.  Let’s face it, no one wants to work for an organisation where every move they make is monitored and they feel they are not trusted to behave in the appropriate way.

Where cybercrime is concerned, people can often be the weakest link in the security chain, but with education and training, they can be your greatest asset.

Ongoing training and education programmes are essential in influencing employee behaviour; it only takes one person to click on a phishing email to expose an entire organisation. Companies also need to continue to invest in employee education about cybercrime and the detrimental effect a breach can have on brand, reputation and the bottom line. When assessing personnel, consider how much access they should have, what data they control and influence, and run background checks on new employees before granting physical or logical access to facilities, systems or data. Also, identify which people within the business have significant information system security roles, and ensure the process for documentation is comprehensive and regularly updated.

Once you have set policies and procedures in place, a layer of technology can be added to bring additional security. But, as we said before, technology alone will not address all the issues:

  • Use specialist security software to track files and malware entering/leaving the network. Many tools now have advanced tracking functionality to spot unusual behaviour on a network. Tools such as Darktrace, FireEye and Palo Alto can track unusual network behaviour as well as unexpected user behaviour.
  • Consider tools such as Dtex or Egress deployed on an individual’s PC to monitor behaviour: capturing changes in user patterns (e.g. an employee getting ready to leave the organisation), high-risk patterns of behaviour, or finding what information was lost on a laptop left on a train (a simple sketch of this kind of check follows after this list).
  • Use other monitoring solutions, such as Digital Shadows, to track data that has left the internal boundary and calculate the amount of exposure you have outside the organisation, even tracking data on social media and the “dark web”.
  • Operate a controlled environment – four-eyes checks of files leaving the network to ensure sensitive files are not being sent externally.
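
Purely to illustrate the kind of pattern change these tools look for (not how any named product works), the sketch below flags a user whose daily count of files copied to removable media jumps far above their own recent baseline. The users, counts and thresholds are invented.

```python
# Illustrative only: flag users whose daily count of files copied to
# removable media spikes well above their own recent baseline.
from statistics import mean, stdev

# Hypothetical counts for the last 30 days, per user.
history = {
    "alice": [2, 1, 3, 2, 0, 1, 2] * 4 + [1, 2],
    "bob":   [5, 4, 6, 5, 7, 4, 5] * 4 + [6, 5],
}
today = {"alice": 2, "bob": 180}  # bob suddenly copies 180 files

for user, counts in history.items():
    baseline, spread = mean(counts), stdev(counts)
    if today[user] > baseline + 3 * max(spread, 1):
        print(f"Review: {user} copied {today[user]} files today "
              f"(baseline ~{baseline:.1f} per day)")
```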

With better controls, procedures and policies in place together with technology that can identify unusual activity and misuse, it is possible to capture potential losses and remediate as quickly as possible thereby limiting any damage caused.

 

As always, it’s not just about technology but the people and processes too!

 

kerry.housley@broadgateconsultants.com

Let’s Think Intelligently About AI

Posted on : 30-04-2018 | By : kerry.housley | In : Uncategorized


Currently there is a daily avalanche of artificial intelligence (AI) related news clogging the internet. Almost every new product, service or feature has an AI, ‘machine learning’ or ‘robo-something’ angle to it. So what is so great about AI? What is different about it and how can it improve the way we live and work? We think there has been an over-emphasis on ‘machine learning’ that relies on crunching huge amounts of information via a set of algorithms. The actual ‘intelligence’ part has been overlooked: the unsupervised way humans learn, through observation and by modifying our behaviour based on the results of our actions, is missing. Most ‘AI’ tools today work well but have a very narrow range of abilities, and no ability to think as creatively or as widely as a human (or animal) brain.

Origins

Artificial Intelligence as a concept – that human thought, learning, reasoning and creativity could be replicated in some form of machine – has been around for hundreds of years. AI as an academic practice really grew out of the early computing concepts of Alan Turing, and the first AI research lab was created at Dartmouth College in 1956. The objective seemed simple: create a machine as intelligent as a human being. The original team quickly found they had grossly underestimated the complexity of the task, and progress in AI moved gradually forward over the next 50 years.

Although there are a number of approaches to AI, all generally rely on learning: processing information about the environment, how it changes, and the frequency and type of inputs experienced. This can result in a huge amount of data to be absorbed. The combination of vast amounts of computing power and storage with massive amounts of information (from computer searches and interaction) has enabled AI, sometimes known as machine learning, to gather pace. There are three main types of learning in AI:

  • Reinforcement learning — This is focused on the problem of how an AI tool ought to act in order to maximise the chance of solving a problem. In a particular situation, the machine picks an action or a sequence of actions, and progresses. This is frequently used when teaching machines to play and win chess games. One issue is that in its purest form, reinforcement learning requires an extremely large number of repetitions to achieve a level of success.
  • Supervised learning — The programme is told what the correct answer is for a particular input: here is the image of a kettle; the correct answer is “kettle”. It is called supervised learning because the process of an algorithm learning from the labelled training data-set is similar to showing a picture book to a young child: the adult knows the correct answer and the child makes predictions based on previous examples. This is the most common technique for training neural networks and other machine learning architectures. An example might be: given the descriptions of a large number of houses in your town together with their prices, try to predict the selling price of your own home (see the sketch after this list).
  • Unsupervised learning / predictive learning — Much of what humans and animals learn, they learn in the first hours, days, months and years of their lives in an unsupervised manner: we learn how the world works by observing it and seeing the results of our actions. No one is there to tell us the name and function of every object we perceive. We learn very basic concepts, like the fact that the world is three-dimensional, that objects don’t disappear spontaneously, and that objects that are not supported fall. We do not know how to do this with machines at the moment, at least not at the level that humans and animals can. Our lack of techniques for unsupervised or predictive learning is one of the factors limiting the progress of AI at the moment.
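
To make the supervised learning example above concrete, here is a tiny sketch in the same spirit using scikit-learn’s LinearRegression: it learns from a handful of labelled houses and predicts a price for an unseen one. The features, houses and prices are made up, and a real model would need far more data and care.

```python
# Illustrative supervised learning: learn prices from labelled examples,
# then predict the price of an unseen house. All figures are invented.
from sklearn.linear_model import LinearRegression

# Features per house: [floor area (sq m), bedrooms, distance to station (km)]
X_train = [
    [70, 2, 1.0],
    [95, 3, 0.5],
    [120, 4, 2.0],
    [60, 1, 0.2],
    [150, 5, 3.0],
]
y_train = [250_000, 340_000, 400_000, 230_000, 480_000]  # known selling prices

model = LinearRegression().fit(X_train, y_train)

my_home = [[100, 3, 1.5]]
print(f"Predicted selling price: £{model.predict(my_home)[0]:,.0f}")
```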

How useful is AI?

We are constantly interacting with AI. There are a multitude of programmes working, helping and predicting your next move (or at least trying to). Working out the best route is an obvious one, where Google uses feedback from thousands of other live and historic journeys to route you the most efficient way to work; it then updates its algorithms based on the results from yours. Ad choices and ‘people also liked/went on to buy’ suggestions all assist in some way to make our lives ‘easier’. The way you spend money is predictable, so any unusual behaviour can result in a call from your bank to check a transaction. Weather forecasting uses machine learning (and an enormous amount of processing power combined with historic data) to provide improving short and medium term forecasts.

One of the limitations of current reinforcement and supervised models of learning is that, although we can build a highly intelligent device, it has a very limited focus. The chess computer ‘Deep Blue’ could beat grandmaster human chess players but, unlike them, it cannot drive a car, open a window or describe the beauty of a painting.

What’s next?

So could a machine ever duplicate or move beyond the capabilities of a human brain? The short answer is ‘of course’. Another short answer is ‘never’… Computers and programmes are getting more powerful, sophisticated and consistent each year. The amount of digital data is doubling on a yearly basis and the reach of devices is expanding at extreme pace. What does that mean for us? Who knows is the honest answer. AI and intelligent machines will become a part of all our daily lives, but the creativity of humans should ensure we partner with them and use them to enrich and improve our lives and environment.

‘Deep Learning’ is the latest buzz term in AI. Some researchers explain this as ‘working just like the brain’; a better explanation, from Yann LeCun (Head of AI at Facebook), is ‘machines that learn to represent the world’. So: more general-purpose machine learning tools rather than highly specialised single-purpose ones. We see this as the next likely direction for AI, in the same way, perhaps, that the general-purpose Personal Computer (PC) transformed computing from dedicated single-purpose machines to multi-purpose business tools.

Will Robotic Process Automation be responsible for the next generation of technical debt?

Posted on : 28-03-2018 | By : kerry.housley | In : FinTech, Innovation, Predictions, Uncategorized


All hail the great Bill Gates and his immortal words:

“The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.”

With the Robotic Process Automation (RPA) wave crashing down all about us, and as we all scramble around trying to catch a ride on its efficiency, cost-saving and performance-optimising goodness, we should take a minute to heed Mr Gates’ wise words and remember that poorly designed processes done more efficiently will still be ineffectual. In effect, you’re just getting better at doing things poorly.

Now before we go any further, we should state that we have no doubt about the many benefits of RPA and in our opinion RPA should be taken advantage of and utilised where appropriate.

Now with that said…

RPA lends itself very well to quick fixes and fast savings, which are very tempting to any organisation. However, there are many organisations with years of technical debt built up already through adding quick fixes to fundamental issues in their IT systems. For these organisations, the introduction of RPA (although very fruitful in the short term) will actually add more technological dependencies to the mix. This will increase their technical debt if not maintained effectively. Eventually, this will become unsustainable and very costly to your organisation.

RPA will increase dependencies on other systems, adding subtle complex levels of interoperability, and like any interdependent ecosystem, when one thing alters there is an (often unforeseen) knock-on effect in other areas.

An upgrade that causes a subtle change to a user interface will cause the RPA process to stop working or, worse, the process will keep working but do the wrong thing.
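
To illustrate why, here is a deliberately simplified sketch of a screen-driven bot step written with Selenium. The internal URL and button label are invented; the point is only that the bot is keyed to surface details of the user interface, so a cosmetic change in an upgrade can break it silently.

```python
# Illustrative only: a bot step keyed to the visible label of a button.
# If an upgrade renames "Submit payment" to "Send payment", this locator
# finds nothing (or the wrong element) and the process fails or misfires.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://intranet.example.com/payments")  # hypothetical internal app

driver.find_element(By.XPATH, "//button[text()='Submit payment']").click()
driver.quit()
```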

Consider this; what happens when an RPA process that has been running for a few years needs updating or changing? Will you still have the inherent expert understanding of this particular process at the human level or has that expertise now been lost?

How will we get around these problems?  Well, as with most IT issues, an overworked and understaffed IT department will create a quick workaround to solve the problem, and then move on to the myriad of other technical issues that need their attention. Hey presto… technical debt.

So, what is the answer? Of course, we need to stay competitive and take advantage of this new blend of technologies. It just needs to be a considered decision: you need to go in with your eyes open and understand the mid- and long-term implications.

A big question surrounding RPA is who owns this new technology within organisations? Does it belong to the business side or the IT side and how involved should your CIO or CTO be?

It’s tempting to say that processes are designed by the business side, and because RPA is simply going to replace the human element of an already existing process, this can all be done by the business side; we don’t need to (or want to) involve the CIO in this decision. However, you wouldn’t hire a new employee into your organisation without HR being involved, and the same is true of introducing new tech into your systems. True, RPA is designed to sit outside/on top of your networks and systems, in which case it shouldn’t interfere with your existing network, but at the very least the CIO and IT department should have oversight of RPA being introduced into the organisation. They can then be aware of any issues that may occur as a result of any upgrades or changes to the existing system.

Our advice would be that organisations should initially only implement RPA measures that have been considered by both the CIO and the business side to be directly beneficial to the strategic goals of the company.

Following this, you can then perform a proper opportunity assessment to find the optimum portfolio of processes. Generally, low or medium complexity processes or sub-processes will be the best initial options for RPA, if your assessment shows that the Full Time Equivalent (FTE) savings are worth it, of course. Ultimately, you should be looking for the processes with the best return and the simplest delivery.
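
By way of illustration only, the back-of-the-envelope sketch below shows one common way an FTE saving can be estimated for a candidate process: annual volume multiplied by handling time, converted into full-time equivalents and compared with a rough automation cost. Every figure is an invented assumption rather than a benchmark.

```python
# Illustrative FTE-saving estimate for a single candidate process.
# All inputs are invented assumptions.
annual_volume = 60_000               # cases handled per year
minutes_per_case = 6                 # current manual handling time
automation_rate = 0.8                # share of cases the robot can complete
productive_hours_per_fte = 1_600     # working hours per FTE per year
fully_loaded_cost_per_fte = 45_000   # GBP per year
build_and_run_cost = 60_000          # GBP per year: licences, build, support

hours_saved = annual_volume * automation_rate * minutes_per_case / 60
fte_saved = hours_saved / productive_hours_per_fte
net_saving = fte_saved * fully_loaded_cost_per_fte - build_and_run_cost

print(f"FTE saved: {fte_saved:.2f}")             # 3.00 with these assumptions
print(f"Net annual saving: £{net_saving:,.0f}")  # £75,000 with these assumptions
```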

A final point on software tools and vendors. Like most niche markets of trending technology, RPA is awash with companies offering various software tools. You may have heard of some of the bigger and more reputable names, such as UiPath and Blue Prism. It can be a minefield of offerings, so understanding your needs and selecting an appropriate vendor will be key to making the most of RPA. In order to combat the build-up of technical debt, tools provided by the vendor to enable the maintenance and management of the RPA processes are essential.

For advice on how to begin to introduce RPA into your organisation, vendor selection or help conducting a RPA opportunity assessment, or for help reducing your technical debt please email Richard.gale@broadgateconsultants.com.