Artificial Intelligence – Explaining the Unexplainable

Posted on : 23-09-2019 | By : kerry.housley | In : Finance, FinTech, General News, Innovation

The rise of Artificial Intelligence (AI) is dramatically changing the way businesses operate and provide their services. The acceleration of intelligent automation is enabling companies to operate more efficiently, promote growth, deliver greater customer satisfaction and drive up profits. But what exactly is AI? How does it reach its decisions? How can we be sure it follows all corporate, regulatory and ethical guidelines? Do we need more human control?

Is it time for AI to explain itself? 

The enhancement of human intelligence with AI’s speed and precision means a gigantic leap forward for productivity. The ability to feed data into an algorithmic black box and get results back in a fraction of the time a human could compute them is no longer sci-fi fantasy but a reality.

However, not everyone talks about AI with such enthusiasm. Critics are concerned that the adoption of AI machines will lead to the decline of the human role rather than freedom and enhancement for workers.

In his latest novel, Machines Like Me, Ian McEwan writes about a world where machines take over in the face of human decline. He questions machine learning, referring to it as

“the triumph of humanism or the angel of death?” 

Whatever your view, we are not staring at the angel of death just yet!  AI has the power to drive a future full of potential and amazing discovery. If we consider carefully all the aspects of AI and its effects, then we can attempt to create a world where AI works for us and not against us. 

Let us move away from the hype and consider in real terms the implications of the shift from humans to machines. What does this really mean? How far does the shift go?  

If we are to operate in a world where we rely on decisions made by software, we must understand how those decisions are calculated in order to have faith in the results.

In the beginning, AI algorithms were relatively simple, as humans learned how to define them. As time has moved on, algorithms have evolved and become more complex. Add machine learning to this and we have machines that can “learn” behaviour patterns, thereby altering the original algorithm. Because humans don’t have access to the algorithm’s black box, we are no longer in charge of the process.

The danger is that we do not understand what is going on inside the black box and therefore can no longer be confident in the results it produces.

If we have no idea how the results are calculated, then we have lost trust in the process. Trust is the key element for any business, and indeed for society at large. There is a growing consensus around the need for AI to be more transparent. Companies need to have a greater understanding of their AI machines. Explainable AI is the idea that an AI algorithm should be able to explain how it reached its conclusion in a way that humans can understand. Often, we can determine the outcome but cannot explain how it got there!  
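To make the idea concrete, here is a minimal sketch in Python of one common way of probing a black-box model: permutation importance, which measures how much the model’s accuracy drops when each input is scrambled, and so hints at which inputs the decision actually relies on. The dataset, feature names and model choice are invented for the example and are not taken from any particular system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Made-up data: four numeric inputs standing in for income, age, debt and tenure.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)  # "approval" driven by income and debt

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # the "black box"

# Permutation importance: scramble one feature at a time and measure how much
# the model's score drops. Large drops mark the inputs the decision relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["income", "age", "debt", "tenure"], result.importances_mean):
    print(f"{name:>7}: {score:.3f}")
```

An approach like this does not open the black box itself, but it does give a human-readable account of which factors mattered, which is the kind of explanation discussed above.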

Where that is the case, how can we trust the result to be true, and how can we trust it to be unbiased? The impact is not the same in every case; it depends on whether we are talking about low-impact or high-impact outcomes. For example, an algorithm that decides what time you should eat your breakfast is clearly not as critical as an algorithm that determines what medical treatment you should have.

The greater the shift from humans to machines, the greater the need for explainability.

Consensus for more explainable AI is one thing; achieving it is quite another. Governance is imperative, but how can we expect regulators to dig deep into these algorithms to check that they comply, when the technologists themselves don’t know how to do this?

One way forward could be a “by design” approach – i.e., think about the explainable element at the start of the process. It may not be possible to identify each and every step once machine learning is introduced, but a good business process map will help users define the process steps, as the sketch below illustrates.
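As an illustration of what “by design” could mean in practice, the following hypothetical Python sketch has every automated decision carry a plain-language trace of the rules it passed through. The rules, thresholds and field names are invented purely for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str
    trace: list[str] = field(default_factory=list)  # human-readable audit trail

def assess_loan(income: float, debt: float, missed_payments: int) -> Decision:
    """Toy decision process that records, in plain language, every step it takes."""
    trace = []
    if missed_payments > 2:
        trace.append(f"declined: {missed_payments} missed payments exceeds limit of 2")
        return Decision("declined", trace)
    trace.append(f"missed payments check passed ({missed_payments} <= 2)")

    ratio = debt / income
    if ratio > 0.4:
        trace.append(f"declined: debt-to-income ratio {ratio:.2f} exceeds 0.40")
        return Decision("declined", trace)
    trace.append(f"debt-to-income ratio {ratio:.2f} within 0.40 limit")

    return Decision("approved", trace)

decision = assess_loan(income=42000, debt=9000, missed_payments=1)
print(decision.outcome)
for step in decision.trace:
    print(" -", step)
```

The point of the pattern is that the explanation is produced alongside the decision rather than reverse-engineered afterwards; a machine-learned component could sit inside such a process, but the surrounding steps would still be mapped and recorded.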

The US government has been concerned about this lack of transparency for some time and has introduced the Algorithmic Accountability Act 2019. The Act looks at automated decision making and will require companies to show how their systems have been designed and built. It only applies to large tech companies with turnover of more than $50 million, but it provides a good example that all companies would be wise to follow.

Here in the UK, the Financial Conduct Authority is working very closely with the Alan Turing Institute to ascertain what the role of the regulator should be and how governance can be appropriately introduced.

The question is how explainable, and how accurate, the explanation needs to be in each case, depending on the risk and the impact.

With AI moving to ever-increasing levels of complexity, it’s crucial to understand how we get to the results in order to trust the outcome. Trust really is the basis of any AI operation. Everyone involved in the process needs to have confidence in the result and know that AI is making the right decision, avoiding manipulation and bias and respecting ethical practices. It is essential that AI operates within publicly acceptable boundaries.

Explainable AI is the way forward if we want to follow good practice guidelines, enable regulatory control and, most importantly, build up trust so that the customer always has confidence in the outcome.

AI is not about delegating to robots, it is about helping people to achieve more precise outcomes more efficiently and more quickly.  

If we are to ensure that AI operates within the boundaries that humans expect, then we need human oversight at every step.