Selecting a new “digitally focused” sourcing partner

Posted on : 18-07-2018 | By : john.vincent | In : Cloud, FinTech, Innovation, Uncategorized


It was interesting to see the recent figures this month from the ISG Index, showing that the traditional outsourcing market in EMEA has rebounded. Figures for the second quarter for commercial outsourcing contracts show a combined annual contract value (ACV) of €3.7Bn. This is up a significant 23% on 2017 and, for the traditional sourcing market, reverses a downward trend which had persisted for the previous four quarters.

This is an interesting change of direction, particularly against a backdrop of economic uncertainty around Brexit and the much “over indulged” GDPR preparation. It seems that despite this, rather than hunkering down with a tin hat and stockpiling rations, companies in EMEA have invested in their technology service provision to support agile digital growth for the future. The global number also accelerated, up 31% to a record ACV of €9.9Bn.

Underpinning some of these figures has been a huge acceleration in the As-a-Service market. In the last 2 years the ACV attributed to SaaS and IaaS has almost doubled. This has been fairly consistent across all sectors.

So when selecting a sourcing partner, what should companies consider outside of the usual criteria including size, capability, cultural fit, industry experience, flexibility, cost and so on?

One aspect that is interesting from these figures is the influence that technologies such as cloud based services, automation (including AI) and robotic process automation (RPA) are having both now and in the years to come. Many organisations have used sourcing models to fix costs and benefit from labour arbitrage as a pass-through from suppliers. Indeed, this shift of labour ownership has fuelled incredible growth within some of the service providers. For example, Tata Consultancy Services (TCS) has grown from 45.7k employees in 2005 to 394k in March 2018.

However, having reached this heady number of staff, the technologies mentioned previously are threatening the model of some of these companies. As-a-Service providers such as Microsoft Azure and Amazon AWS now have platforms which are carving their way through technology service provision that previously would have been managed by human beings.

In the infrastructure space commoditisation is well under way. Indeed, we predict that within three years the build, configure and manage skills in areas such as Windows and Linux platforms will rarely be in demand. DevOps models, and variants thereof, are moving at a rapid pace, with tools to support spinning up platforms on demand to support application services now mainstream. Service providers often focus on their technology overlay “value add” in this space, with portals or orchestration products which can manage cloud services. However, the value of these is often questionable compared with direct access or commercial 3rd party products.
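The orchestration “value add” described above ultimately rests on the declarative, desired-state model that infrastructure-as-code tooling popularised: declare the estate you want, diff it against what is actually running, and derive the actions needed to converge. Below is a minimal Python sketch of that reconciliation logic; the host names and platforms are invented for illustration, and real tools such as Terraform perform the same diff against live cloud provider APIs.

```python
# Desired-state reconciliation: the core idea behind infrastructure-as-code.
# Hypothetical estate: host name -> platform. Names are illustrative only.
desired = {"web-1": "linux", "web-2": "linux", "db-1": "windows"}
running = {"web-1": "linux", "legacy-1": "windows"}

def plan(desired, running):
    """Return the ordered create/destroy actions needed to converge."""
    actions = []
    for name, platform in sorted(desired.items()):
        if name not in running:
            actions.append(("create", name, platform))
    for name in sorted(running):
        if name not in desired:
            actions.append(("destroy", name, running[name]))
    return actions

for action in plan(desired, running):
    print(action)
```

Real tooling separates this “plan” step from the “apply” step for exactly this reason: the diff can be reviewed before any platform is spun up or torn down.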

Secondly, as we’ve discussed here before, technology advances in RPA, machine learning and AI are transforming service provision. This is not just in terms of business applications but also the underpinning services. It is translating into areas such as self-service bots which can be queried by end users to provide solutions and guidance, or self-learning AI processes which can predict potential system failures before they occur and take preventative action.
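The predictive-failure idea can be illustrated in its simplest possible form: flag a metric reading that sits far outside its recent history. The sketch below uses a rolling z-score over an invented CPU-load trace; a production system would learn from many correlated signals rather than a single hand-set threshold.

```python
def detect_anomalies(readings, window=5, threshold=4.0):
    """Flag indices whose value sits far outside the recent rolling window."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = sum(history) / window
        variance = sum((x - mean) ** 2 for x in history) / window
        std = variance ** 0.5 or 1e-9  # guard against a perfectly flat history
        if abs(readings[i] - mean) / std > threshold:
            flagged.append(i)
    return flagged

# Invented CPU-load trace: steady around 50%, with one spike at index 9
cpu_load = [50, 51, 49, 50, 52, 50, 49, 51, 50, 95, 51, 50]
print(detect_anomalies(cpu_load))  # the spike at index 9 is flagged
```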

These advances present a challenge to the workforce focused outsource providers.

Given the factors above, and the market shift, it is important that companies take these into account when selecting a technology service provider. Questions to consider are:

  • What are their strategic relationships with cloud providers? Not just at the “corporate” level: do they have in-depth knowledge of the whole technology ecosystem at a low level?
  • Can they demonstrate skills in the orchestration and automation of platforms at an “infrastructure as code” level?
  • Do they have the capability to deliver process automation through techniques such as bots, can they scale to enterprise level, and where are their RPA alliances?
  • Does the potential partner have domain expertise, and is it open to partnership around new products and shared-reward/JV models?

The traditional sourcing engagement models are evolving, which has created new opportunities on both sides. Expect new entrants, without the technical debt and organisational overheads, and with more of a technology solution focus, to disrupt the market.

The Opportunity for Intelligent Process Automation in KYC / AML

Posted on : 28-06-2018 | By : richard.gale | In : compliance, Data, Finance, FinTech, Innovation


Financial services firms have had a preoccupation with meeting the rules and regulations for fighting financial crime for the best part of the past decade. Ever since HSBC received sanctions from both UK and US regulators, many other firms have also been caught short in failing to meet society’s expectations in this space. There have been huge programmes of change and remediation, amounting to tens of billions of any currency you choose, to try to get Anti-Financial Crime (AFC) or Know Your Customer (KYC) / Anti-Money Laundering (AML) policies, risk methodologies, data sources, processes, organisation structures, systems and client populations into shape; at least to be able to meet the expectations of regulators, if not exactly stop financial crime.

The challenge for the industry is that financial crime is a massive and complex problem to solve. It is not just the detection and prevention of money laundering, but also needs to cover terrorist financing, bribery & corruption and tax evasion. Therefore, as the banks, asset managers and insurers have been doing, there is a need to focus upon all elements of the AFC regime, from education to process, and all the other activities in between. Estimates as to the scale of the problem vary, but the consensus is that somewhere between $3 trillion and $5 trillion is introduced into the financial system each year.

However, progress is being made. Harmonisation, clarity of industry standards and more consistency have come from the regulators with initiatives such as the 4th EU AML Directive. The importance of the controls is certainly better understood within financial services firms and by their shareholders. Perhaps what has not yet progressed significantly are the processes of performing client due diligence and monitoring of their subsequent activity. Most would argue that this is down to a number of factors, possibly the greatest challenge being the disparate and inconsistent nature of the data required to support these processes. Data needs to be sourced in many formats from country registries, stock exchanges, documents of incorporation, multiple media sources and so on. Still today many firms have a predominantly manual process to achieve this, even when much of the data is available in digital form. Many still do not automatically ingest data into their workflows, and have poorly defined processes to progress onboarding or monitoring activities. And this is for the regulations as they stand today; in future this burden will increase further, as firms will be expected to take all possible efforts to determine the integrity of their clients, i.e. by establishing linkages to bad actors through data sources not evident in traditional sources such as company registries, for example social media and the dark web.

There have been several advances in recent years with technologies that have enormous potential for supporting the AFC cause. Data vendors have made big improvements in providing a broader and higher quality of data. Aggregation solutions such as Encompass offer services where the constituents of a corporate ownership structure can be assembled, and sanctions & PEP checks undertaken, in seconds rather than the current norm of multiple hours. This works well where the data is available from a reliable electronic source. However, it does not work where there are no, or unreliable, sources of digital data, as is the case for Trusts or in many jurisdictions around the world. Here we quickly get back to the world of paper and PDFs, which still require human horsepower to review and decide upon.

Getting the information in the first instance can be very time consuming, with complex interactions between multiple parties (relationship managers, clients, lawyers, data vendors, compliance teams etc.) and multiple communications channels, i.e. voice, email and chat in its various forms. We also have the challenge of adverse media, where thousands of news stories are generated every day on the corporates and individuals that are the clients of financial firms. The news items can be positive or negative, but reviewing, eliminating or investigating this mountain of data consumes tens of thousands of people each day. The same challenges come with transaction monitoring, where individual firms can have thousands of ‘hits’ every day on ‘unusual’ payment patterns or ‘questionable’ beneficiaries. These also require review, repair, discounting or further investigation, and the clear majority are false positives that can be readily discarded.

What is probably the most interesting opportunity for allowing the industry to see the wood for the trees in this data-heavy world is the maturing of Artificial Intelligence (AI) based, or ‘Intelligent’, solutions. The combination of Natural Language Processing with Machine Learning can help the human find the needles in the haystack, or make sense of unstructured data that would ordinarily require much time to read and record. AI on its own is not a solution, but combined with process management (workflow), digitised multi-channel communications and even robotics it can achieve significant advances. In summary, ‘Intelligent’ processing can address three of the main data challenges within the AFC regimes of financial institutions:

  1. Sourcing the right data – Where data is structured and digitally obtainable it can be readily harvested, but it needs to be integrated into the process flows to be compared, analysed, accepted or rejected as part of a review process. Here AI can be used to perform these comparisons, support analysis and look for patterns of common or disparate data. Where the data is unstructured, i.e. embedded in a document (email / PDF / doc etc.), NLP and machine learning can be used to extract the relevant data and turn the unstructured into structured form for onward processing.
  2. Filtering – With both transaction monitoring and adverse media reviews there is a tsunami of data and events presented to compliance and operations teams for sifting, reviewing, rejecting or further investigation. AI can be extremely effective at performing this sifting and presenting back only relevant results to users. Done correctly this can reduce the burden by more than 90% and, perhaps more importantly, never miss or overlook a case, providing reassurance that all relevant data is being captured.
  3. Intelligent workflows – By using intelligent workflows, processes can be fully automated where simple decision making is supported by AI, removing the need for manual intervention in many tasks and leaving the human to provide value at the complex end of problem solving.
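As an illustration of the filtering step, the sketch below scores adverse-media headlines against a hand-written term list and sets the low scorers aside. The headlines, terms and weights are all invented; in practice the score would come from a trained classifier over far richer features, but the triage shape is the same.

```python
# Hypothetical risk terms and weights; a real system would learn these.
RISK_TERMS = {"fraud": 3, "laundering": 3, "sanctions": 2, "investigation": 1, "fine": 1}

def score(headline):
    """Sum the weights of any risk terms appearing in the headline."""
    words = headline.lower().split()
    return sum(weight for term, weight in RISK_TERMS.items() if term in words)

def triage(headlines, threshold=2):
    """Split headlines into those worth an analyst's time and the noise."""
    relevant, noise = [], []
    for h in headlines:
        (relevant if score(h) >= threshold else noise).append(h)
    return relevant, noise

# Invented adverse-media hits for a fictional client
hits = [
    "Acme Corp wins industry award",
    "Acme Corp under investigation for money laundering",
    "Acme Corp opens new office",
    "Regulator issues fine over sanctions breach",
]
relevant, noise = triage(hits)
print(relevant)
```

Only the two genuinely risk-bearing headlines survive the filter; the other two are discarded without an analyst ever reading them.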

Solutions are now emerging in the industry, such as OPSMATiX, one of the first Intelligent Process Automation (IPA) solutions. Devised by a group of industry business experts, it is a set of technologies that combine to make sense of data across different communication channels, uses AI to turn the unstructured data into structured form, and applies robust workflows to optimally manage the resolution of cases, exceptions and issues. The data vendors, and solution vendors such as Encompass, are also embracing AI techniques and technologies to effectively create ‘smart filters’ that can be used to scour through thousands, if not millions, of pieces of news and other media to discover, or discount, information of interest. This can be achieved in a tiny fraction of the time, and therefore cost, and more importantly with far better accuracy than a human can achieve. The outcome will be to liberate the human from the process: firms can either choose to reduce the costs of their operations, or use people more effectively to investigate and analyse those events, information and clients that may be of genuine concern, rather than deal with the noise.

Only once the process has been made significantly more efficient, and the data brought under control, can financial firms really start to address the insidious business of financial crime. Currently all the effort is still going into meeting the regulations, and not society’s actual demand, which is to combat this global menace. Intelligent process automation should unlock this capability.

 

Guest Author : David Deane, Managing Partner of FIMATIX and CEO of OPSMATiX. David has had a long and illustrious career in global operations and technology leadership with wholesale banks and wealth managers. Before creating FIMATIX and OPSMATiX, he was most recently the Global Head of KYC / AML Operations for a Tier 1 wholesale bank.

david.deane@fimatix.com

Welcoming Robots to the Team

Posted on : 30-05-2018 | By : richard.gale | In : Finance, FinTech, Innovation


Research suggests that the adoption of Robotic Process Automation (RPA) and AI technologies is set to double by 2019. This marks a fundamental change in how organisations work, and the potential impact on employees should not be underestimated.

For many years we have seen robots on the factory floor where manual processes have been replaced by automation. This has drastically changed the nature of manufacturing and has inevitably led to a reduction in these workforces. It is understandable, therefore, that we can hear the trembling voices of city workers shouting, “the robots are coming!”

Robotic software should not be thought of as the enemy but rather as a friendly addition to the IT family. A different approach is needed. If you were replacing an Excel spreadsheet with a software program, an employee would see this as an advantage, as it makes their job quicker and easier to do, and would therefore welcome the change. Looking at RPA in the same way will change the way employees view its implementation and how they feel about it.

There is no doubt that in some cases RPA is intended as a cost saver but organisations that see RPA as simply a cost saving solution will reap the least rewards. For many companies who have already completed successful RPA programmes, the number one priority has been to eliminate repetitive work that employees didn’t want or need to do. Approaching an RPA project in a carefully thought out and strategic manner will provide results that show that RPA and employees can work together.

Successful transformation using RPA relies on an often used but very relevant phrase: “it’s all about the people, process and technology”. You need all three in the equation. It is undeniable that automation is a disruptive technology which will affect employees’ outlook and the way they work. Change management is key in managing these expectations. If robots are to be a part of your organisation, then your employees must be prepared and included.

Perhaps it’s time to demystify RPA and see it for what it really is: just another piece of software! Automation is about making what you do easier to execute, with fewer mistakes and greater flexibility. It is important to demonstrate to your staff that RPA is part of a much wider strategic plan of growth and new opportunities.

It is vital to communicate with staff at every level, explaining the purpose of RPA and what it will mean for them. Ensure everyone understands the implications and the benefits of the transition to automation. Even though activities and relationships within an organisation may change, this does not necessarily mean a change for the worse.

Employees must be involved from the start of the process. Those individuals who have previously performed the tasks to be automated will be your subject matter experts. You will need to train several existing employees in RPA to manage the process going forward. Building an RPA team from current employees will ensure that you have their buy-in, which is crucial if the implementation is to be a success.

With any new software, training is often an afterthought. In the case of RPA, training is more important than ever, ensuring that the robots and employees understand each other and can work efficiently together. Working to train RPA experts internally will result in a value-added proposition for the future when it comes to maintaining or scaling your solution.

When analysing the initial RPA requirements, a great deal of thought must be given to the employees whose tasks are being automated and where their skills can effectively be redeployed. Employee engagement increases when personnel feel that their contribution to the organisation is meaningful.

Consultation and collaboration throughout the entire process will help to ensure a smoother transition where everyone can feel the benefits. Following a successful RPA implementation, share the results with everyone in your organisation. Share the outcomes and what you have learnt, and highlight those employees and teams that have helped along the way.

The robots are coming! They are here to help and at your service!

AI Evolution: Survival of the Smartest

Posted on : 21-05-2018 | By : richard.gale | In : Innovation, Predictions


Artificial intelligence is getting very good at identifying things: let it analyse a million pictures, and it can tell with amazing accuracy which show a child crossing the road. But AI has been hopeless at generating images of people by itself. If it could do that, it would be able to create realistic but synthetic pictures depicting people in various settings, which a self-driving car could use to train itself without ever going out on the road.

The problem is, creating something entirely new requires imagination, and until now that has been a step too far for machine learning.

There is an emerging solution, first conceived by Ian Goodfellow during an academic argument in a bar in 2014. The approach, known as a generative adversarial network, or “GAN”, takes two neural networks (the simplified mathematical models of the human brain that underpin most modern machine learning) and pits them against each other to identify flaws and gaps in the other’s model.

Both networks are trained on the same data set. One, known as the generator, is tasked with creating variations on images it’s already seen, perhaps a picture of a pedestrian with an extra arm. The second, known as the discriminator, is asked to identify whether the example it sees is like the images it has been trained on or a fake produced by the generator: basically, is that three-armed person likely to be real?

Over time, the generator can become so good at producing images that the discriminator can’t spot fakes. Essentially, the generator has been taught to recognize, and then create, realistic-looking images of pedestrians.
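The adversarial game described above can be sketched end to end if both networks are shrunk to single parameters, which keeps the arithmetic visible. In this toy, the “real” data are samples from a normal distribution centred on 4, the generator is just a learned offset added to noise, and the discriminator is a logistic score on a scalar; the generator succeeds when its offset drifts towards the real mean. This is an illustrative reduction in plain Python, not a production GAN.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, x))))

theta = 0.0        # generator parameter: fake sample = theta + noise
w, c = 0.1, 0.0    # discriminator parameters: d(x) = sigmoid(w * x + c)
lr, batch = 0.05, 32

for step in range(2000):
    real = [random.gauss(4, 1) for _ in range(batch)]
    fake = [theta + random.gauss(0, 1) for _ in range(batch)]

    # Discriminator: gradient ascent on log d(real) + log(1 - d(fake))
    grad_w = grad_c = 0.0
    for r, f in zip(real, fake):
        d_r, d_f = sigmoid(w * r + c), sigmoid(w * f + c)
        grad_w += (1 - d_r) * r - d_f * f
        grad_c += (1 - d_r) - d_f
    w += lr * grad_w / batch
    c += lr * grad_c / batch

    # Generator: gradient ascent on log d(fake), i.e. try to fool the critic
    grad_theta = sum((1 - sigmoid(w * f + c)) * w for f in fake)
    theta += lr * grad_theta / batch

print(round(theta, 2))  # theta should have drifted from 0 towards the real mean
```

At equilibrium the fake and real distributions overlap, the discriminator can do no better than guessing, and the generator's offset settles near the real mean, which is the one-parameter version of the generator learning to produce images the discriminator cannot reject.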

The technology has become one of the most promising advances in AI in the past decade, able to help machines produce results that fool even humans.

GANs have been put to use creating realistic-sounding speech and photorealistic fake imagery. In one compelling example, researchers from chipmaker Nvidia primed a GAN with celebrity photographs to create hundreds of credible faces of people who don’t exist. Another research group made not-unconvincing fake paintings that look like the works of van Gogh. Pushed further, GANs can reimagine images in different ways, making a sunny road appear snowy, or turning horses into zebras.

The results aren’t always perfect: GANs can conjure up bicycles with two sets of handlebars, say, or faces with eyebrows in the wrong place. But because the images and sounds are often startlingly realistic, some experts believe there’s a sense in which GANs are beginning to understand the underlying structure of the world they see and hear. And that means AI may gain, along with a sense of imagination, a more independent ability to make sense of what it sees in the world. 

This approach is starting to provide programmed machines with something along the lines of imagination. This, in turn, will make them less reliant on human help to differentiate. It will also help blur the lines between what is real and what is fake. In an age when we are already plagued with ‘fake news’ and doctored pictures, are we ready for seemingly real but constructed images and voices?

The 2018 Broadgate Predictions

Posted on : 19-12-2017 | By : richard.gale | In : Predictions


Battle of the Chiefs

Chief Information Officer 1 –  Chief Digital Officer 0

Digital has been the interloper into the world of IT, originating from the marketing department through the medium of the website and morphing into ecommerce. The result was more budget, and so more power, with the CDiO than the CIO, and the two chiefs have been rubbing along uncomfortably together, neither fully understanding the boundaries between them. 2018 will see the re-emergence of the CIO empire as technology becomes more service based (cloud, SaaS, microservices etc.) and focus returns to delivering high-paced, successful transformational change.

 

Battle of the Algorithms

Quantum 2 – Security 1

All the major tech companies now have virtual quantum computers available (the toolkits, if not the technology). These allow adventurous techies to experiment with quantum concepts. Who knows what the capabilities of quantum computing will be, but through its enormous processing power it will be able to look at every possible combination of events for a given situation at once. That is great for deciding which share to buy or understanding how people interact on Facebook, but it will also have the potential to crack most current encryption mechanisms. That said, it will enable another level of secure access too!

 

Battle of the Search Engines

Voice 2 – Screen 0

OK Google, Alexa, Siri… There’s a great video of Google talking to Alexa on an infinite loop. That’s all fun, but in 2018 voice will start to become a dominant force for search and for general utility. Stopping what you are doing and typing in a command or search will start to feel a little strange and old-fashioned. OK, in the office we may not all start shouting at our computers (well, not more than normal), but around the home, in the car or on our phones it is the obvious way to interact. This trend is already gathering momentum. VR and especially AR will add to this; the main thing holding them back is the fact you look like an idiot with the headset on. Once that is cracked there will be no stopping it.

 

RoboWars – to be continued…

Robots 1 – People 1

AI and ‘robotic process automation’ (RPA) are everywhere. Every services firm worth its salt has process automation plans, and the hype around companies such as Blue Prism is phenomenal. This is all very exciting, and many doomsayers have been predicting the end of most jobs (and some the end of most people!). Yes, automation of processes is here. It’s been here for years; that is what most ERP (aka workflow) systems do. It makes absolute sense to automate mundane processes, and if you can build in a bit of intelligence to deal with slight differences in the pattern then all the better. Will it result in the loss of millions of jobs? Maybe, and probably in the short term, but once again, as every time in the past, technology will replace human endeavour whilst humans are busy building the next creative, innovative wave.

 

The Lightbulb Moment

Internet 1 – Internet of Things 3

Is there anything left which is not internet connected? Two years ago, very few people had any interest in communicating with a lightbulb, apart from flicking a light switch. Now IoT-connected lightbulbs appear to be everywhere, and the trend will grow and grow. The speed at which this is happening is accelerating and the scope of connected devices is expanding beyond belief. Who would have thought we needed a smart hairbrush? This is all fine and will enrich our lives in ways we probably haven’t even thought about yet, but there is a cost. We are allowing these devices to listen, see and control parts of our lives, and the data they gather has value, for both good and bad reasons. There is no ‘culture of security’ for IoT. Many of the devices are cheaply designed and manufactured with no thought towards security or data privacy. We are allowing these devices into our lives and we don’t really know what they know and who knows what they know. This may be a subtler change for 2018 – the securing of ‘the Thing’ – well, let’s hope so!

 

Welcome to our ESports Day

Call Of Duty 2 – Premiership Football 1

Sport is a big business. From Curling to Swimming to Indy Car racing it has a thousand differing forms, millions of participants and billions of armchair viewers. Top class athletes in a popular sport can earn millions of dollars a year both from performing and through product endorsements.

Video games have been popular for years. They started as single or two-player games and are now worldwide multiplayer extravaganzas where you can battle, race or fight against people throughout the world. A number of superstars, or eAthletes, have emerged, first through winning competitions and then through YouTube and similar platforms, where their tournaments are recorded and watched again and again. This business has now broken the $1Bn mark – still way off ‘real’ sport, but it is growing massively and at some point soon will become part of the mainstream.

Let’s think Intelligently about AI.

Posted on : 17-01-2017 | By : richard.gale | In : Uncategorized


Currently there is a daily avalanche of artificial intelligence (AI) related news clogging the internet. Almost every new product, service or feature has an AI, ‘machine learning’ or ‘robo-something’ angle to it. So what is so great about AI? What is different about it, and how can it improve the way we live and work? We think there has been an over-emphasis on ‘machine learning’ that relies on crunching huge amounts of information via a set of algorithms. The actual ‘intelligence’ part has been overlooked: the unsupervised way humans learn, through observation and modifying our behaviour based on the results of our actions, is missing. Most ‘AI’ tools today work well but have a very narrow range of abilities, with no real capacity to think as creatively and widely as a human (or animal) brain.

Origins

Artificial intelligence as a concept has been around for hundreds of years: the idea that human thought, learning, reasoning and creativity could be replicated in some form of machine. AI as an academic practice really grew out of the early computing concepts of Alan Turing, and the first AI research lab was created at Dartmouth College in 1956. The objective seemed simple: create a machine as intelligent as a human being. The original team quickly found they had grossly underestimated the complexity of the task, and progress in AI moved gradually forward over the next 50 years.

Although there are a number of approaches to AI, all generally rely on learning: processing information about the environment, how it changes, and the frequency and type of inputs experienced. This can result in a huge amount of data to be absorbed. The combination of vast amounts of computing power and storage with massive amounts of information (from computer searches and interaction) has enabled AI, sometimes known as machine learning, to gather pace. There are three main types of learning in AI:

  • Reinforcement learning — This is focused on the problem of how an AI tool ought to act in order to maximise the chance of solving a problem. In a particular situation, the machine picks an action or a sequence of actions, and progresses. This is frequently used when teaching machines to play and win chess games. One issue is that in its purest form, reinforcement learning requires an extremely large number of repetitions to achieve a level of success.
  • Supervised learning —  The programme is told what the correct answer is for a particular input: here is the image of a kettle the correct answer is “kettle.” It is called supervised learning because the process of an algorithm learning from the labelled training data-set is similar to showing a picture book to a young child. The adult knows the correct answer and the child makes predictions based on previous examples. This is the most common technique for training neural networks and other machine learning architectures. An example might be: Given the descriptions of a large number of houses in your town together with their prices, try to predict the selling price of your own home.
  • Unsupervised learning / predictive learning — Much of what humans and animals learn, they learn it in the first hours, days, months, and years of their lives in an unsupervised manner: we learn how the world works by observing it and seeing the result of our actions. No one is here to tell us the name and function of every object we perceive. We learn very basic concepts, like the fact that the world is three-dimensional, that objects don’t disappear spontaneously, that objects that are not supported fall. We do not know how to do this with machines at the moment, at least not at the level that humans and animals can. Our lack of AI technique for unsupervised or predictive learning is one of the factors that limits the progress of AI at the moment.
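The house-price example under supervised learning can be made concrete with the simplest supervised learner: ordinary least squares on a single feature. The sizes and prices below are invented and deliberately lie on an exact line (price = 2 × size + 50), so the fitted model can be checked by hand.

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature: y ≈ a * x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Labelled training data: floor area (m2) -> sale price (in thousands)
sizes = [50, 70, 90, 110, 130]
prices = [150, 190, 230, 270, 310]

a, b = fit_line(sizes, prices)
print(round(a * 100 + b))  # predicted price of a 100 m2 home -> 250
```

This is supervised learning in miniature: every training input comes with its correct answer, and the algorithm generalises from those labelled examples to price a home it has never seen.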

How useful is AI?

We are constantly interacting with AI. There is a multitude of programmes working, helping and predicting your next move (or at least trying to). Working out the best route is an obvious one, where Google uses feedback from thousands of other live and historic journeys to route you the most efficient way to work, then updates its algorithms based on the results from yours. Ad choices and ‘people also liked/went on to buy’ suggestions all assist in some way to make our lives ‘easier’. The way you spend money is predictable, so any unusual behaviour can result in a call from your bank to check a transaction. Weather forecasting uses machine learning (and an enormous amount of processing power combined with historic data) to provide improving short and medium term forecasts.

One of the limitations with current reinforcement and supervised models of learning is that, although we can build a highly intelligent device it has very limited focus. The chess computer ‘Deep Blue’ could beat Grand-master human chess players but, unlike them, it cannot drive a car, open a window or describe the beauty of a painting.

What’s next?

So could a machine ever duplicate or move beyond the capabilities of a human brain? The short answer is ‘of course’. Another short answer is ‘never’… Computers and programmes are getting more powerful, sophisticated and consistent each year. The amount of digital data is doubling on a yearly basis and the reach of devices is expanding at extreme pace. What does that mean for us? Who knows, is the honest answer. AI and intelligent machines will become a part of all our daily lives, but the creativity of humans should ensure we partner with them and use them to enrich and improve our lives and environment.

‘Deep Learning’ is the latest buzz term in AI. Some researchers explain this as ‘working just like the brain’; a better explanation, from Yann LeCun (Head of AI at Facebook), is ‘machines that learn to represent the world’. So: more general-purpose machine learning tools rather than highly specialised single-purpose ones. We see this as the next likely direction for AI, in the same way, perhaps, that the general-purpose Personal Computer (PC) transformed computing from dedicated single-purpose to multi-purpose business tools.