Artificial Intelligence – Explaining the Unexplainable

Posted on : 23-09-2019 | By : kerry.housley | In : Finance, FinTech, General News, Innovation

The rise of Artificial Intelligence (AI) is dramatically changing the way businesses operate and provide their services. The acceleration of intelligent automation is enabling companies to operate more efficiently, promote growth, deliver greater customer satisfaction and drive up profits. But what exactly is AI? How does it reach its decisions? How can we be sure it follows all corporate, regulatory and ethical guidelines? Do we need more human control?

Is it time for AI to explain itself? 

The enhancement of human intelligence with AI's speed and precision means a gigantic leap forward for productivity. The ability to feed data into an algorithmic black box and return results in a fraction of the time a human could compute is no longer sci-fi fantasy but now a reality.

However, not everyone talks about AI with such enthusiasm. Critics are concerned that the adoption of AI machines will lead to the decline of the human role rather than freedom and enhancement for workers.

Ian McEwan, in his latest novel Machines Like Me, writes about a world where machines take over in the face of human decline. He questions machine learning, referring to it as

“the triumph of humanism or the angel of death?” 

Whatever your view, we are not staring at the angel of death just yet!  AI has the power to drive a future full of potential and amazing discovery. If we consider carefully all the aspects of AI and its effects, then we can attempt to create a world where AI works for us and not against us. 

Let us move away from the hype and consider in real terms the implications of the shift from humans to machines. What does this really mean? How far does the shift go?  

If we are to operate in a world where we rely on decisions made by software, we must understand how those decisions are calculated in order to have faith in the results.

In the beginning, AI algorithms were relatively simple, as humans learned how to define them. As time has moved on, algorithms have evolved and become more complex. Add machine learning to this, and we have a situation where machines can "learn" behaviour patterns, thereby altering the original algorithm. As humans don't have access to the algorithm's black box, we are no longer in charge of the process.

The danger is that we do not understand what is going on in the black box and can therefore no longer be confident in the results produced.

If we have no idea how the results are calculated, then we have lost trust in the process. Trust is the key element for any business, and indeed for society at large. There is a growing consensus around the need for AI to be more transparent. Companies need to have a greater understanding of their AI machines. Explainable AI is the idea that an AI algorithm should be able to explain how it reached its conclusion in a way that humans can understand. Often, we can determine the outcome but cannot explain how it got there!  

Where that is the case, how can we trust the result to be true, and how can we trust it to be unbiased? The impact is not the same in every case; it depends on whether we are talking about low-impact or high-impact outcomes. For example, an algorithm that decides what time you should eat your breakfast is clearly not as critical as one which determines what medical treatment you should have.
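To make the idea concrete, here is a minimal sketch of what explainability can look like in the simplest possible case: a linear scoring model, where each input's contribution to the final decision can be reported directly. The feature names, weights and patient values below are entirely invented for illustration; real medical or credit models are far more complex, which is exactly where the black-box problem begins.

```python
# A minimal sketch of one form of explainability: for a linear scoring
# model, each feature's contribution to a decision can be reported
# directly. The weights and inputs are hypothetical, not from any
# real system.

def explain_decision(weights, features):
    """Return per-feature contributions and the total score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return contributions, sum(contributions.values())

# Hypothetical treatment-eligibility score
weights = {"age": -0.02, "blood_pressure": 0.05, "prior_treatments": 0.30}
patient = {"age": 60, "blood_pressure": 140, "prior_treatments": 1}

contributions, score = explain_decision(weights, patient)
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {value:+.2f}")
print(f"score: {score:.2f}")
```

A deep neural network offers no such direct read-out, which is why explainable AI is an open research problem rather than a bookkeeping exercise.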

The greater the shift from humans to machines, the greater the need for explainability.

Consensus for more explainable AI is one thing; achieving it is quite another. Governance is imperative, but how can we expect regulators to dig deep into these algorithms to check that they comply, when the technologists themselves don't understand how to do this?

One way forward could be a "by design" approach – i.e., think about the explainable element at the start of the process. It may not be possible to identify each and every step once machine learning is introduced, but a good business process map will help the users define the process steps.
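As a sketch of what "explainable by design" might mean in practice, consider a decision pipeline that records an audit trail as it runs, so the process map and the audit trail line up. The steps below are hypothetical placeholders; the point is that each one logs what it did and why.

```python
# A sketch of the "by design" approach: each step in a decision
# workflow records what it did and why, so the full trail can be
# reviewed later. The steps themselves are invented placeholders.

class TraceablePipeline:
    def __init__(self):
        self.steps = []   # (name, function) pairs
        self.trail = []   # audit records from the last run

    def add_step(self, name, fn):
        self.steps.append((name, fn))
        return self

    def run(self, data):
        self.trail = []
        for name, fn in self.steps:
            data, reason = fn(data)
            self.trail.append({"step": name, "result": data, "reason": reason})
        return data

pipeline = (TraceablePipeline()
            .add_step("normalise", lambda x: (x / 100.0, "scaled to 0-1"))
            .add_step("threshold", lambda x: (x > 0.5, "cut-off at 0.5")))

decision = pipeline.run(72)
for record in pipeline.trail:
    print(record["step"], "->", record["result"], "|", record["reason"])
```

Once machine learning replaces a hand-written step, the "reason" recorded may be less complete, but the surrounding trail still shows a regulator what happened, when, and with what inputs.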

The US government has been concerned about this lack of transparency for some time and has introduced the Algorithmic Accountability Act 2019. The Act looks at automated decision making and will require companies to show how their systems have been designed and built. It only applies to large tech companies with turnover of more than $50m, but it provides a good example that all companies would be wise to follow.

Here in the UK, the Financial Conduct Authority is working very closely with the Alan Turing Institute to ascertain what the role of the regulator should be and how governance can be appropriately introduced.

The question is how explainable and how accurate the explanation needs to be in each case, depending on the risk and the impact.  

With AI moving to ever-increasing levels of complexity, it's crucial to understand how we get to the results in order to trust the outcome. Trust really is the basis of any AI operation. Everyone involved in the process needs to have confidence in the result and know that AI is making the right decision, avoiding manipulation and bias and respecting ethical practices. It is crucial that AI operates within publicly acceptable boundaries.

Explainable AI is the way forward if we want to follow good practice guidelines, enable regulatory control and most importantly build up trust so that the customer always has confidence in the outcome.   

AI is not about delegating to robots, it is about helping people to achieve more precise outcomes more efficiently and more quickly.  

If we are to ensure that AI operates within boundaries that humans expect then we need human oversight at every step. 

What will the IT department look like in the future?

Posted on : 29-01-2019 | By : john.vincent | In : Cloud, Data, General News, Innovation

We are going through a significant change in how technology services are delivered as we stride further into the latest phase of the Digital Revolution. The internet provided the starting pistol for this phase and now access to new technology, data and services is accelerating at breakneck speed.

More recently the real enablers of a more agile and service-based technology have been the introduction of virtualisation and orchestration technologies which allowed for compute to be tapped into on demand and removed the friction between software and hardware.

The impact of this cannot be overstated. The removal of the need to manually configure and provision new compute environments was a huge step forwards, and one which continues with developments in Infrastructure as Code ("IaC"), microservices and serverless technology.

However, whilst these technologies continually disrupt the market, the corresponding changes to the overall operating models have in our view lagged (this is particularly true in larger organisations, which have struggled to shift from the old to the new).

If you take a peek into organisation structures today, they often still resemble those of the late 90s, where infrastructure capabilities were organised by specialism such as data centre, storage, service management, application support etc. There have been changes, specifically more recently with the shift to devops and continuous integration and delivery, but there is still a long way to go.

Our recent Technology Futures Survey provided a great insight into how our clients (290) are responding to the shifting technology services landscape.

“What will your IT department look like in 5-7 years’ time?”

There were no surprises in the large majority of respondents agreeing that the organisation will look different in the near future. The big shift is to a more service-focused, vendor-led technology model, with between 53% and 65% believing that this is the direction of travel.

One surprise was a relatively low consensus on the impact that Artificial Intelligence (“AI”) would have on management of live services, with only 10% saying it would be very likely. However, the providers of technology and services formed a smaller proportion of our respondents (28%) and naturally were more positive about the impact of AI.

The Broadgate view is that the changing shape of digital service delivery is challenging previous models and applying tension to organisations and providers alike. There are two main areas where we see this:

  1. With the shift to cloud based and on-demand services, the need for any provider, whether internal or external, has diminished
  2. Automation, AI and machine learning are developing new capabilities in self-managing technology services

We expect that the technology organisation will shift to focus more on business products and procuring the best fit service providers. Central to this is AI and ML which, where truly intelligent (and not just marketing), can create a self-healing and dynamic compute capability with limited human intervention.

Cloud, machine learning and RPA will remove much of the need to manage and develop code

To really understand how the organisation model is shifting, we have to look at the impact that technology is having on the whole supply chain. We've long outsourced the delivery of services. However, if we look at the traditional service providers (IBM, DXC, TCS, Cognizant etc.) that in the first instance acted as brokers to these new digital technology innovations, we see that they are increasingly being disintermediated, with provisioning and management now directly in the hands of the consumer.

Companies like Microsoft, Google and Amazon have superior technical expertise, and they are continuing to expose this directly to the end consumer. Thus, the IT department needs to think less about how to build or procure from a third party, and more about how to "knit together" a framework of services that can best meet their business needs with a layered, end-to-end approach. This fits perfectly with a more business-product-centric approach.

We don’t see an increase in in-house technology footprints, with maybe the exception of truly data-driven organisations or tech companies themselves.

In our results, the removal of cyber security issues was endorsed by 28%, with a further 41% believing that this was a possible outcome. This represents a leap of faith given the current battle that organisations are undertaking to combat data breaches! Broadgate expects that organisations will increasingly shift the management of these security risks to third-party providers, with telecommunication carriers also taking more responsibility over time.

As the results suggest, the commercial and vendor management aspects of the IT department will become more important. This is often a skill which is absent in current companies, so a conscious strategy to develop capability is needed.

Organisations should update their operating model to reflect the changing shape of technology services, with the closer alignment of products and services to technology provision never being as important as it is today.

Indeed, our view is that even if your model serves you well today, by 2022 it is likely to look fairly stale. This is because what your company currently offers to your customers is almost certain to change, which will require fundamental re-engineering across, and around, the entire IT stack.

The Opportunity for Intelligent Process Automation in KYC / AML

Posted on : 28-06-2018 | By : richard.gale | In : compliance, Data, Finance, FinTech, Innovation

Financial services firms have had a preoccupation with meeting the rules and regulations for fighting financial crime for the best part of the past decade. Ever since HSBC received sanction from both UK and US regulators in 2010, many other firms have also been caught short in failing to meet society's expectations in this space. There have been huge programmes of change and remediation, amounting to tens of billions in any currency you choose, to try to get Anti-Financial Crime (AFC) or Know Your Customer (KYC) / Anti-Money Laundering (AML) policies, risk methodologies, data sources, processes, organisation structures, systems and client populations into shape, at least to be able to meet the expectations of regulators, if not exactly stop financial crime.

The challenge for the industry is that Financial Crime is a massive and complex problem to solve. It is not just the detection and prevention of money laundering, but also needs to cover terrorist financing, bribery & corruption and tax evasion. Therefore, as the Banks, Asset Managers and Insurers have been doing, there is a need to focus upon all elements of the AFC regime, from education to process, and all the other activities in-between. Estimates as to the scale of the problem vary but the consensus is that somewhere between $3-5 trillion is introduced into the financial systems each year.

However, progress is being made. Harmonisation, clarity of industry standards and more consistency have come from the regulators with initiatives such as the 4th EU AML Directive. The importance of the controls is certainly better understood within financial services firms and by their shareholders. Perhaps what has not yet progressed significantly are the processes of performing client due diligence and monitoring their subsequent activity. Most would argue that this is down to a number of factors, possibly the greatest challenge being the disparate and inconsistent nature of the data required to support these processes. Data needs to be sourced in many formats from country registries, stock exchanges, documents of incorporation, multiple media sources etc. Still today, many firms have a predominantly manual process to achieve this, even when much of the data is available in digital form. Many still do not automatically ingest data into their workflows, and have poorly defined processes for progressing onboarding or monitoring activities. That is for the regulations as they stand today; in the future this burden will increase further, as firms will be expected to take all possible efforts to determine the integrity of their clients, i.e. by establishing linkages to bad actors through data sources such as social media and the dark web that are not evident in traditional sources such as company registries.

There have been several advances in recent years in technologies that have enormous potential for supporting the AFC cause. Data vendors have made big improvements in providing broader and higher-quality data. Aggregation solutions such as Encompass offer services where the constituents of a corporate ownership structure can be assembled, and sanctions & PEP checks undertaken, in seconds rather than the current norm of multiple hours. This works well where the data is available from a reliable electronic source. However, it does not work where there are no, or only unreliable, sources of digital data, as is the case for trusts or in many jurisdictions around the world. Here we quickly get back to the world of paper and PDFs, which still require human horsepower to review and decide upon.

Getting the information in the first instance can be very time-consuming, with complex interactions between multiple parties (relationship managers, clients, lawyers, data vendors, compliance teams etc.) and multiple communication channels, i.e. voice, email and chat in its various forms. We also have the challenge of adverse media, where thousands of news stories are generated every day on the corporates and individuals that are the clients of financial firms. The news items can be positive or negative, but it consumes tens of thousands of people to review, eliminate or investigate this mountain of data each day. The same challenges come with transaction monitoring, where individual firms can have thousands of 'hits' every day on 'unusual' payment patterns or 'questionable' beneficiaries. These also require review, repair, discounting or further investigation, the clear majority being false positives that can be readily discarded.

What is probably the most interesting opportunity for allowing the industry to see the wood for the trees in this data-heavy world is the maturing of Artificial Intelligence (AI) based, or 'intelligent', solutions. The combination of Natural Language Processing (NLP) with machine learning can help the human find the needles in the haystack, or make sense of unstructured data that would ordinarily take much time to read and record. AI on its own is not a solution, but combined with process management (workflow), digitised multi-channel communications and even robotics, it can achieve significant advances. In summary, 'intelligent' processing can address three of the main data challenges within the AFC regimes of financial institutions:

  1. Sourcing the right data – Where data is structured and digitally obtainable it can be readily harvested, but it needs to be integrated into the process flows to be compared, analysed, accepted or rejected as part of a review process. Here AI can be used to perform these comparisons, support analysis and look for patterns of common or disparate data. Where the data is unstructured, i.e. embedded in a paper document (email / PDF / doc etc.), then NLP and machine learning can be used to extract the relevant data and turn the unstructured into structured form for onward processing
  2. Filtering – With both transaction monitoring and adverse media reviews, there is a tsunami of data and events presented to Compliance and Operations teams for sifting, reviewing, rejecting or further investigation. The use of AI can be extremely effective at performing this sifting and presenting back only relevant results to users. Done correctly this can reduce the burden by 90+% but, perhaps more importantly, never miss or overlook a case, providing reassurance that relevant data is being captured
  3. Intelligent workflows – Processes can be fully automated where simple decision making is supported by AI, removing the need for manual intervention in many tasks and leaving the human to provide value at the complex end of problem solving
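To illustrate the filtering idea in point 2, here is a deliberately simplified sketch: score each news item against weighted risk terms and surface only those above a threshold. A real adverse-media system would use NLP and machine learning models rather than keyword counts, and the terms, weights and headlines below are invented.

```python
# A simplified stand-in for AI-based adverse-media filtering: score
# each item against weighted financial-crime risk terms and surface
# only those worth a human review. Terms, weights and headlines are
# illustrative only.

RISK_TERMS = {"fraud": 3, "laundering": 3, "sanctions": 2, "bribery": 3, "fine": 1}

def risk_score(text):
    words = text.lower().split()
    return sum(weight for term, weight in RISK_TERMS.items() if term in words)

def filter_adverse_media(items, threshold=3):
    """Return only the items worth a human review, highest risk first."""
    scored = [(risk_score(text), text) for text in items]
    relevant = [(s, t) for s, t in scored if s >= threshold]
    return sorted(relevant, reverse=True)

news = [
    "Acme Corp opens new office in Leeds",
    "Regulator investigates Acme Corp over money laundering claims",
    "Acme Corp pays fine in bribery and sanctions case",
]
for score, headline in filter_adverse_media(news):
    print(score, headline)
```

The gain described in the text comes from replacing the keyword scorer with trained models that understand context, so far fewer genuine hits are missed and far more false positives are discarded automatically.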

Solutions are now emerging in the industry, such as OPSMATiX, one of the first Intelligent Process Automation (IPA) solutions. Devised by a group of industry business experts, it is a set of technologies that combine to make sense of data across different communication channels, use AI to turn the unstructured data into structured form, and apply robust workflows to optimally manage the resolution of cases, exceptions and issues. The data vendors, and solution vendors such as Encompass, are also embracing AI techniques and technologies to effectively create 'smart filters' that can scour through thousands, if not millions, of pieces of news and other media to discover, or discount, information of interest. This can be achieved in a tiny fraction of the time, and therefore cost, and more importantly with far better accuracy than a human can achieve. The outcome will be to liberate the human from the process: firms can either choose to reduce the costs of their operations, or use people more effectively to investigate and analyse those events, information and clients that may be of genuine concern, rather than dealing with the noise.

Only once the process has been made significantly more efficient, and the data brought under control, can financial firms really start to address the insidious business of financial crime. Currently all the effort is still going into meeting the regulations, and not society's actual demand, which is to combat this global menace. Intelligent processing should unlock this capability.

 

Guest Author: David Deane, Managing Partner of FIMATIX and CEO of OPSMATiX. David has had a long and illustrious career in global Operations and Technology leadership with wholesale banks and wealth managers. Before creating FIMATIX and OPSMATiX, he was most recently the Global Head of KYC / AML Operations for a Tier 1 wholesale bank.

david.deane@fimatix.com

Be aware of “AI Washing”

Posted on : 26-01-2018 | By : john.vincent | In : Cloud, Data, General News, Innovation

I checked, and it’s almost five years ago now that we wrote about the journey to cloud and mentioned “cloud washing“, the process by which technology providers were re-positioning previous offerings as “cloud enabled”, “cloud ready” and the like.

Of course, the temptation to do this is natural. After all, if the general public can trigger a 200% increase in share price simply by re-branding your iced tea company to “Long Blockchain“, then why not.

And so we enter another “washing” phase, this time in the form of a surge in Artificial Intelligence (AI) powered technologies. As the enterprise interest in AI and machine learning gathers pace, software vendors are falling over each other to meet the market demands.

Indeed, according to Gartner by 2020;

AI technologies will be virtually pervasive in almost every new software product and service

This is great news and the speed of change is outstanding. However, it does pose some challenges for technology leaders and decision makers as the hype continues.

Firstly, we need to apply the “so what?” test against the claims of AI enablement. The fact that a product has AI capabilities doesn’t automatically propel it to the top of the selection criteria. It needs to be coupled with true business value, rather than simply being a sales and marketing tool.

Whilst that sounds obvious, before you cry “pass me another egg, Vincent”, it does warrant a pause and reflection. Human behaviour, and the pressure to generate business value against a more difficult backdrop, can easily drive a penchant for the latest trend (anyone seen “GDPR compliant” monikers appearing?).

In terms of the bandwagon jumping, Gartner says;

Similar to greenwashing, in which companies exaggerate the environmental-friendliness of their products or practices for business benefit, many technology vendors are now “AI washing” by applying the AI label a little too indiscriminately

The second point is to ask the question “Is this really AI, or automation?”. I’ve sat in a number of vendor presentations through 2017 where I asked exactly that. After much deliberation, pontification and several “well, umms”, we agreed that it was actually the latter we were discussing. Indeed, these terms are often interchanged at will during pitches, which can be somewhat disconcerting.

The thing is, automation doesn’t have the “Blade Runner-esque” cachet of AI, which conjures up the usual visions that the film industry has imprinted on our minds (of course, to counter this we’ve now got Robotic Process Automation!).

So what’s the difference between AI and automation? The basic definitions are:

  • Automation is software that follows pre-programmed ‘rules’.
  • Artificial intelligence is designed to simulate human thinking.
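A toy contrast of the two definitions above: the first function is pure automation (a rule fixed by a programmer), while the second stands in for learning by deriving its rule from labelled examples. Both are deliberately minimal illustrations, not production techniques.

```python
# Toy contrast of the two definitions. The automation rule is fixed by
# a programmer; the "learning" version derives its rule from labelled
# historical data instead. Figures are invented.

def automated_rule(amount):
    """Automation: a pre-programmed rule, fixed at 1000."""
    return amount > 1000

def learn_threshold(examples):
    """Learning: pick the boundary from labelled (amount, flagged) data."""
    flagged = [a for a, label in examples if label]
    ok = [a for a, label in examples if not label]
    return (min(flagged) + max(ok)) / 2  # midpoint between the classes

history = [(200, False), (800, False), (2500, True), (4000, True)]
threshold = learn_threshold(history)

print(automated_rule(1500))   # the hand-coded rule's verdict
print(1500 > threshold)       # the data-derived rule's verdict
```

The same input can get different answers: the hard-coded rule flags 1500, while the rule learned from this particular history does not. Feed the learner different data and its behaviour changes with no reprogramming, which is the essential distinction the two bullet points draw.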

Automation is everywhere and has been an important part of industry for decades. It enables machines to perform repetitive, monotonous tasks, freeing up time for human beings to focus on the activities that require more reasoning, rationale and a personal touch. This drives a more productive and efficient business and personal life.

The difference with automation is that it requires manual configuration and set-up. It is smart, but it has to follow set instructions and workflows.

AI, however, is not developed simply to follow a set of predefined instructions. It is designed to mimic human behaviour: to continuously seek patterns, learn from its data and “experiences”, and determine the appropriate course of action or response based on these parameters. This all comes under the general heading of “machine learning”.

The common “fuel” that drives both automation and AI is data. It is the lifeblood of the organisation, and we now live in an environment where we talk about “data driven” technologies at the centre of the enterprise.

Whilst it’s hard to ignore all the hype around AI it is important for decision makers to think carefully not only in terms of what they want to achieve, but also how to filter out the “AI washing”.

Are we addicted to “Digital”?

Posted on : 28-02-2017 | By : john.vincent | In : Cloud, Data, Innovation, IoT, Uncategorized

There’s no getting away from it. The speed of technology advancement is now a major factor in changing how we interact with the world around us. For the first time, it seems that technology innovation is being applied across every industry to drive growth, increase efficiency and open up new market possibilities, whilst in our daily lives we rely more and more on a connected existence. This is seen in areas such as the increase in wearable tech and the Internet of Things.

But what is the impact on business and society of this technology revolution regarding human interaction?

Firstly, let’s get the “digital” word out on the table. Like cloud before it, the industry seems to have adopted a label on which we can pin everything related to advancement in technology. Whilst technically relating to web, mobile, apps etc., it seems every organisation has a “digital agenda”, likely a Chief Digital Officer, and often a whole department in which some sort of alchemy takes place to create digital “stuff”. Meanwhile, service providers and consultancies sharpen their marketing pencils to ensure we are all enticed by their “digital capabilities”. Did I miss the big analogue computing cut-over in the last few years?

What “digital” does do (I guess) is position the narrative away from just technology to a business led focus, which is a good thing.

So how is technology changing the way that we interact on a human level? Before we move on to the question of technology dependence, let’s look at some other applications.

Artificial Intelligence (AI) is a big theme today. We’ve discussed the growth of AI here before, along with its impact on future jobs. However, one of the interesting areas relating to social interaction is the development of emotionally intelligent AI software. This is most evident in call centres, where some workers can now receive real-time coaching from software which analyses their conversations with customers. During the call the software can recommend changes to style and pace, or warn about the emotional state of the customer.

Clever stuff, and whilst replacing call centre agents with robots is still something that many predict is a way off (if at all), it does offer an insight into the way that humans and AI might interact in the future. By developing AI to understand mental states from facial expressions, vocal nuances, body posture and gesture, software can make decisions such as adapting the way a navigation system works depending on the driver’s mental state (for example, lost or confused), or picking the right moment to sell something based on emotional state. The latter does, however, raise wider ethical issues.

So what about the increase in digital dependency and its social impact? Anyone who has been in close proximity to a “millennial gathering” will have witnessed the sight of them sitting together, heads bowed, thumbs moving at a speed akin to Bradley Cooper’s character in Limitless, punctuated by the odd murmur, comment or interjection. It seems that once we drop in a bit of digital tech and a few apps, we stifle the art of conversation.

In 2014 a programmer called Kevin Holesh developed an app called Moment which measures the time that a user is interacting with a screen (it doesn’t count time on phone calls). The results are interesting: 88% of those that downloaded the app used their phone for more than an hour a day, with the average being three hours. Indeed, over a 24-hour period, the average user checked their phone 39 times. By comparison, just six years earlier in 2008 (before the widespread use of smartphones) people spent just 18 minutes a day on their phone.

It’s the impact on students and the next generation that has raised a few alarm bells. Patricia Greenfield, distinguished professor of psychology and director of the UCLA Children’s Digital Media Center, found in a recent study that college students felt closest (or “bonded”) to their friends when they talked face to face, and most distant from them when they text-messaged. However, the students still most often communicated by text.

“Being able to understand the feelings of other people is extremely important to society,” Greenfield said. “I think we can all see a reduction in that.”

Technology is changing everything about how we interact with each other, how we arrange our lives, what we eat, where and how we travel, how we find a partner, how we exercise etc… It is what makes up the rich fabric of the digitised society and will certainly continue to evolve at a pace. Humans, however, may be going the other way.

Let’s think Intelligently about AI.

Posted on : 17-01-2017 | By : richard.gale | In : Uncategorized

Currently there is a daily avalanche of artificial intelligence (AI) related news clogging the internet. Almost every new product, service or feature has an AI, ‘machine learning’ or ‘robo-something’ angle to it. So what is so great about AI? What is different about it, and how can it improve the way we live and work? We think there has been an over-emphasis on ‘machine learning’ that relies on crunching huge amounts of information via a set of algorithms. The actual ‘intelligence’ part has been overlooked: the unsupervised way humans learn, through observation and by modifying our behaviour based on the results of our actions, is missing. Most ‘AI’ tools today work well but have a very narrow range of abilities, with no ability to think as creatively and widely as a human (or animal) brain.

Origins

Artificial intelligence as a concept has been around for hundreds of years: the idea that human thought, learning, reasoning and creativity could be replicated in some form of machine. AI as an academic practice really grew out of the early computing concepts of Alan Turing, and the first AI research lab was created at Dartmouth College in 1956. The objective seemed simple: create a machine as intelligent as a human being. The original team quickly found they had grossly underestimated the complexity of the task, and progress in AI moved gradually forward over the next 50 years.

Although there are a number of approaches to AI, all generally rely on learning: processing information about the environment, how it changes, and the frequency and type of inputs experienced. This can result in a huge amount of data to be absorbed. The combination of vast amounts of computing power and storage with massive amounts of information (from computer searches and interaction) has enabled AI, sometimes known as machine learning, to gather pace. There are three main types of learning in AI:

  • Reinforcement learning — This is focused on the problem of how an AI tool ought to act in order to maximise the chance of solving a problem. In a particular situation, the machine picks an action or a sequence of actions, and progresses. This is frequently used when teaching machines to play and win chess games. One issue is that in its purest form, reinforcement learning requires an extremely large number of repetitions to achieve a level of success.
  • Supervised learning — The programme is told what the correct answer is for a particular input: here is an image of a kettle; the correct answer is “kettle”. It is called supervised learning because the process of an algorithm learning from a labelled training dataset is similar to showing a picture book to a young child: the adult knows the correct answer and the child makes predictions based on previous examples. This is the most common technique for training neural networks and other machine learning architectures. An example might be: given the descriptions of a large number of houses in your town together with their prices, try to predict the selling price of your own home.
  • Unsupervised learning / predictive learning — Much of what humans and animals learn, they learn it in the first hours, days, months, and years of their lives in an unsupervised manner: we learn how the world works by observing it and seeing the result of our actions. No one is here to tell us the name and function of every object we perceive. We learn very basic concepts, like the fact that the world is three-dimensional, that objects don’t disappear spontaneously, that objects that are not supported fall. We do not know how to do this with machines at the moment, at least not at the level that humans and animals can. Our lack of AI technique for unsupervised or predictive learning is one of the factors that limits the progress of AI at the moment.
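The house-price example given for supervised learning can be sketched in a few lines: fit a straight line (price against floor area) to labelled examples by least squares, then predict an unseen home. All the figures below are invented, and a real model would use many more features than floor area alone.

```python
# The house-price example as a minimal supervised learner: fit a
# straight line (price vs floor area) to labelled examples by ordinary
# least squares, then predict an unseen home. Figures are invented.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Training data: (floor area in m2, sale price in £k)
areas  = [50, 70, 90, 110, 130]
prices = [150, 190, 230, 270, 310]

a, b = fit_line(areas, prices)
print(f"predicted price for 100 m2: £{a * 100 + b:.0f}k")  # → £250k
```

The “supervision” is the labelled prices: the algorithm never invents the notion of price, it only generalises from the answers it was shown, which is exactly the contrast with the unsupervised learning described above.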

How useful is AI?

We are constantly interacting with AI. There is a multitude of programmes working, helping and predicting your next move (or at least trying to). Working out the best route is an obvious one, where Google uses feedback from thousands of other live and historic journeys to route you the most efficient way to work; it then updates its algorithms based on the results from yours. Ad choices and ‘people also liked/went on to buy’ suggestions all assist in some way to make our lives ‘easier’. The way you spend money is predictable, so any unusual behaviour can result in a call from your bank to check a transaction. Weather forecasting uses machine learning (and an enormous amount of processing power combined with historic data) to provide improving short- and medium-term forecasts.

One of the limitations with the current reinforcement and supervised models of learning is that, although we can build a highly intelligent device, it has a very limited focus. The chess computer Deep Blue could beat grandmaster human chess players but, unlike them, it cannot drive a car, open a window or describe the beauty of a painting.

What’s next?

So could a machine ever duplicate or move beyond the capabilities of a human brain? The short answer is ‘of course’. Another short answer is ‘never’… Computers and programmes are getting more powerful, sophisticated and consistent each year. The amount of digital data is doubling on a yearly basis and the reach of devices is expanding at extreme pace. What does that mean for us? Who knows, is the honest answer. AI and intelligent machines will become part of all our daily lives, but the creativity of humans should ensure we partner with them and use them to enrich and improve our lives and environment.

‘Deep Learning‘ is the latest buzz term in AI. Some researchers explain this as ‘working just like the brain’; a better explanation, from Yann LeCun (Head of AI at Facebook), is ‘machines that learn to represent the world’. So: more general-purpose machine learning tools, rather than highly specialised single-purpose ones. We see this as the next likely direction for AI, in the same way, perhaps, that the general-purpose Personal Computer (PC) transformed computing from dedicated single-purpose machines to multi-purpose business tools.