Why are we still getting caught by the ‘Phisher’men?

Posted on : 26-09-2019 | By : kerry.housley | In : Cyber Security, data security, Finance, Innovation

Phishing attacks have been on the increase and have overtaken malware as the most popular cyber attack method. Attackers are often able to convincingly impersonate users and domains, bait victims with fake cloud storage links, engage in social engineering and craft attachments that look like ones commonly used in the organisation.

Criminal scammers are using increasingly sophisticated methods, employing more complex phishing site infrastructures that can be made to look legitimate to the target. These include the use of well-known cloud hosting and document sharing services, established brand names which users believe are secure simply due to name recognition. Microsoft, Amazon and Facebook, for example, are top of the hackers' list. Gone are the days when phishing simply involved the scammer sending a rogue email and tricking the user into clicking on a link!

And while we mostly associate phishing with email, attackers are taking advantage of a wide variety of attack methods to trick their victims. Increasingly, employees are being subjected to targeted phishing attacks directly in their browser via legitimate-looking sites, ads, search results, pop-ups, social media posts, chat apps and instant messages, as well as rogue browser extensions and free web apps.

HTML phishing is a particularly effective means of attack because it can be delivered straight into browsers and apps, bypassing secure email gateways, next-generation antivirus and advanced endpoint protections. These surreptitious methods are capable of evading URL inspection and domain reputation checking.

To make matters worse, the lifespan of a phishing URL has decreased significantly in recent years. To evade detection, phishing gangs can often gather valuable personal information in around 45 minutes. The bad guys know how current technologies are trying to catch them, so they have devised imaginative new strategies to evade detection. For instance, they can change domains and URLs fast enough so the blacklist-based engines cannot keep up. In other cases, malicious URLs might be hosted on compromised sites that have good domain reputations. Once people click on those sites, the attackers have already collected all the data they need within a few minutes and moved on.

Only the largest firms have automated their detection systems to spot potential cyberattacks. Smaller firms are generally relying on manual processes – or no processes at all. This basic lack of protection is a big reason why phishing for data has become the first choice for the bad actors, who are becoming much more sophisticated. In most cases, employees can’t even spot the fakes, and traditional defences that rely on domain reputation and blacklists are not enough.

By the time the security teams have caught up, those attacks are long gone and hosted somewhere else. Of the tens of thousands of new phishing sites that go live each day, the majority are hosted on compromised but otherwise legitimate domains. These sites would pass a domain reputation test, but they are still hosting the malicious pages. Given the fast-moving nature of this threat, financial institutions should adopt a more modern approach to defending their data: protections that can determine the threat level in real time and block the phishing hook before it draws out valuable information. In the meantime, some basic precautions for users include:

  • Always check the spelling of the URLs in email links before you click or enter sensitive information
  • Watch out for URL redirects, where you’re subtly sent to a different website with identical design
  • If you receive an email from a source you know but it seems suspicious, contact that source with a new email, rather than just hitting reply
  • Don’t post personal data, like your birthday, vacation plans, or your address or phone number, publicly on social media
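The first two checks in that list can be partially automated. Below is a minimal sketch in Python, not a production filter: it flags link domains that are a near-miss of a well-known brand, and link text that points somewhere other than its real destination. The KNOWN_BRANDS set, the similarity threshold and the example URLs are all illustrative assumptions.

    # Minimal illustration of two manual phishing checks automated:
    #   1) is the link's domain a near-miss of a well-known brand?
    #   2) does the visible link text name one domain while the real href goes to another?
    from difflib import SequenceMatcher
    from urllib.parse import urlparse

    # Illustrative list only; a real deployment would use a curated brand/domain feed.
    KNOWN_BRANDS = {"microsoft.com", "amazon.com", "facebook.com", "paypal.com"}

    def registered_domain(url):
        """Crude 'last two labels' heuristic, e.g. login.amaz0n-secure.com -> amaz0n-secure.com."""
        host = (urlparse(url).hostname or "").lower()
        parts = host.split(".")
        return ".".join(parts[-2:]) if len(parts) >= 2 else host

    def looks_like_spoof(url, threshold=0.8):
        """Flag domains that are similar to, but not the same as, a known brand."""
        domain = registered_domain(url)
        for brand in KNOWN_BRANDS:
            ratio = SequenceMatcher(None, domain, brand).ratio()
            if domain != brand and ratio >= threshold:
                return True
        return False

    def mismatched_link(display_text, href):
        """Flag links whose visible text names one domain but whose href goes to another."""
        shown = registered_domain(display_text if "://" in display_text else "http://" + display_text)
        return shown != "" and shown != registered_domain(href)

    if __name__ == "__main__":
        print(looks_like_spoof("https://login.rnicrosoft.com/reset"))                    # True: 'rn' mimics 'm'
        print(mismatched_link("www.amazon.com", "http://track.evil.example/redirect"))   # True: text and href differ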

We have started to work with Ironscales, a company which provides protection using machine learning to understand the normal behaviour of users' email interactions. It highlights (and can automatically remove) suspicious emails from the user's inbox before they have time to open them. It cross-references this information with multiple other sources and with the actions of its other clients' SOC analysts. This massively reduces the overhead in dealing with phishing or potential phishing emails and ensures that users are aware of the risks. Good day-to-day examples include the ability to identify that an email has come from a slightly different email address or IP source. The product is being further developed to identify changes in grammar and language to highlight where a legitimate email address from a known person may have been compromised. We really like the ease of use of the technology and the time saved on investigation and resolution.
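We are not privy to how Ironscales implements its checks, so the following is only a toy illustration of the general idea of sender-anomaly detection: compare an incoming From address against addresses the user has previously corresponded with and flag near-misses. The addresses and threshold are invented for the example.

    # Toy illustration of sender-anomaly detection (not Ironscales' implementation):
    # flag a From address that is suspiciously close to, but not identical to,
    # an address the user already knows.
    from difflib import SequenceMatcher

    def is_suspicious_sender(from_address, known_addresses, threshold=0.85):
        sender = from_address.strip().lower()
        if sender in known_addresses:
            return False                      # exact match with an established contact
        return any(SequenceMatcher(None, sender, known).ratio() >= threshold
                   for known in known_addresses)

    known = {"finance.director@acme-corp.com", "it.support@acme-corp.com"}
    print(is_suspicious_sender("finance.director@acme-corps.com", known))   # True: one letter added
    print(is_suspicious_sender("newsletter@events.example.org", known))     # False: not a near-miss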

If you would like to try Ironscales out, then please let us know.

 

Phishing criminals will continue to devise creative new ways of attacking your networks and your employees. Protecting against such attacks means safeguarding those assets with equal amounts of creativity.

Artificial Intelligence – Explaining the Unexplainable

Posted on : 23-09-2019 | By : kerry.housley | In : Finance, FinTech, General News, Innovation

The rise of Artificial Intelligence (AI) is dramatically changing the way businesses operate and provide their services. The acceleration of intelligent automation is enabling companies to operate more efficiently, promote growth, deliver greater customer satisfaction and drive up profits. But what exactly is AI? How does it reach its decisions? How can we be sure it follows all corporate, regulatory and ethical guidelines? Do we need more human control?

Is it time for AI to explain itself? 

The enhancement of human intelligence with AI's speed and precision means a gigantic leap forward for productivity. The ability to feed data into an algorithmic black box and return results in a fraction of the time a human could compute is no longer sci-fi fantasy but a reality.

However, not everyone talks about AI with such enthusiasm. Critics are concerned that the adoption of AI machines will lead to the decline of the human role rather than freedom and enhancement for workers.

Ian McEwan, in his latest novel Machines Like Me, writes about a world where machines take over in the face of human decline. He questions machine learning, referring to it as

“the triumph of humanism or the angel of death?” 

Whatever your view, we are not staring at the angel of death just yet!  AI has the power to drive a future full of potential and amazing discovery. If we consider carefully all the aspects of AI and its effects, then we can attempt to create a world where AI works for us and not against us. 

Let us move away from the hype and consider in real terms the implications of the shift from humans to machines. What does this really mean? How far does the shift go?  

If we are to operate in a world where we rely on decisions made by software, we must understand how those decisions are calculated in order to have faith in the result.

In the beginning, AI algorithms were relatively simple, as humans learned how to define them. As time has moved on, algorithms have evolved and become more complex. Add machine learning to this and we have machines that can “learn” behaviour patterns, thereby altering the original algorithm. As humans do not have access to the algorithm's black box, we are no longer fully in charge of the process.

The danger is that we do not understand what is going on inside the black box and can therefore no longer be confident in the results it produces.

If we have no idea how the results are calculated, then we have lost trust in the process. Trust is the key element for any business, and indeed for society at large. There is a growing consensus around the need for AI to be more transparent. Companies need to have a greater understanding of their AI machines. Explainable AI is the idea that an AI algorithm should be able to explain how it reached its conclusion in a way that humans can understand. Often, we can determine the outcome but cannot explain how it got there!  

Where that is the case, how can we trust the result to be true, and how can we trust it to be unbiased? The impact is not the same in every case; it depends on whether we are talking about low-impact or high-impact outcomes. For example, an algorithm that decides what time you should eat your breakfast is clearly not as critical as an algorithm which determines what medical treatment you should have.
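One widely used, model-agnostic way of producing this kind of explanation is permutation feature importance: shuffle one input at a time and measure how much the model's accuracy degrades. The short Python sketch below, using scikit-learn on a synthetic dataset, is a minimal illustration of the idea rather than a full explainability framework.

    # Minimal illustration of one explainability technique: permutation feature importance.
    # Shuffling a feature that the "black box" relies on heavily hurts accuracy the most,
    # which gives a human-readable ranking of what drove the model's decisions.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a real decisioning dataset (e.g. a credit or treatment decision).
    X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                               n_redundant=0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for idx in sorted(range(X.shape[1]), key=lambda j: result.importances_mean[j], reverse=True):
        print(f"feature_{idx}: importance {result.importances_mean[idx]:.3f}")

In a real decisioning system the ranked features would be translated into business terms (income, claim history, dosage and so on) so that a human reviewer, or a regulator, can see what actually drove the outcome.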

The greater the shift from humans to machines, the greater the need for explainability.

Consensus for more explainable AI is one thing; achieving it is quite another. Governance is imperative, but how can we expect regulators to dig deep into these algorithms to check that they comply, when the technologists themselves don't understand how to do this?

One way forward could be a “by design” approach – i.e. think about the explainable element at the start of the process. It may not be possible to identify each and every step once machine learning is introduced, but a good business process map will help users define the process steps.

The US government has been concerned about this lack of transparency for some time and has introduced the Algorithmic Accountability Act 2019. The Act looks at automated decision making and will require companies to show how their systems have been designed and built. It only applies to large tech companies with turnover of more than $50 million, but it provides a good example that all companies would be wise to follow.

Here in the UK, the Financial Conduct Authority is working very closely with the Alan Turing Institute to ascertain what the role of the regulator should be and how governance can be appropriately introduced.

The question is how explainable and how accurate the explanation needs to be in each case, depending on the risk and the impact.  

With AI moving to ever increasing complexity levels, it is crucial to understand how we get to the results in order to trust the outcome. Trust really is the basis of any AI operation. Everyone involved in the process needs to have confidence in the result and know that AI is making the right decision, avoiding manipulation and bias and respecting ethical practices. It is crucial that the AI operates within publicly acceptable boundaries.

Explainable AI is the way forward if we want to follow good practice guidelines, enable regulatory control and most importantly build up trust so that the customer always has confidence in the outcome.   

AI is not about delegating to robots, it is about helping people to achieve more precise outcomes more efficiently and more quickly.  

If we are to ensure that AI operates within boundaries that humans expect then we need human oversight at every step. 

Are you able to access all the data across your organisation?

Posted on : 31-03-2019 | By : richard.gale | In : Data, Finance

For many years data has been the lifeblood of the organisation and more recently, the value of this commodity has been realised by many companies (see our previous article “Data is like oil”).

Advances in technology, processing power and analytics mean that companies can collect and process data in real time. Most businesses are sitting on vast amounts of data, and those that can harness it effectively can gain a much deeper understanding of their customers and better predict and improve the customer experience.

Our survey revealed that whilst most companies understand the value of their data and the benefits it can bring, many clients revealed a level of frustration in the systems and processes that manage it. Some respondents did qualify that “most of the data” was available, whilst others admitted some was stranded.

 “Data is in legacy silos, our long-term goal is to provide access through a consistent data management framework”

The deficiencies that we also discuss in this newsletter regarding legacy systems are partly responsible for this, although not wholly. This is a particular issue in financial services where many organisations are running on old systems that are too complex and too expensive to replace. Critical company data is trapped in silos, disconnected and incompatible with the rest of the enterprise.

These silos present a huge challenge for many companies. Recalling a comment from one Chief Data Officer at a large institution:

“If I ask a question in more than one place, I usually get more than one answer!”

Data silos are expanding as companies collect too much data and hold onto it for longer than they need to. Big data has been a buzzword for a while now, but it is important that companies distinguish between big data and big bad data! The number of data sources is increasing all the time, so the issue must be addressed if the data is to be used effectively to return business value. The collection of a virtually unlimited amount of data needs to be managed properly to ensure that all data stored has a purpose and can be protected.

Shadow data further exacerbates the issue. This data is unverified, often inaccurate and out of date. Oversharing of this data results in it being stored in areas that are unknown and untraceable, creating yet more data silos hidden from the wider enterprise. This data is then treated as a valid data source, relied upon and used as input into other systems, which can ultimately lead to bad business decisions being made.

The importance of a robust data governance and management strategy cannot be overstated, particularly for those serious about the digital agenda and customer experience. This is also a topic where business and IT leadership must align on the product strategy and the underlying “data plumbing”. It is not just about systems but also about the organisation's attitude to data and its importance in the life of every business process. Companies should implement a data management strategy which encompasses not only the internal platforms and governance but also the presentation layer for business users, consumers and data insights.

The ultimate way to move beyond trading latency?

Posted on : 29-03-2019 | By : richard.gale | In : Finance, Uncategorized

A number of power surges and outages have been experienced in the East Grinstead area of the UK in recent months. Utility companies involved have traced the cause to one of three high-capacity feeds to a global investment bank's data centre facility.

The profits created by the same bank's London-based proprietary trading group have increased tenfold over the same period.

This bank employs 1% of the world's best post-doctoral theoretical physics graduates to help build its black box trading systems.

Could there be a connection? Wild and unconfirmed rumours have been circulating within the firm that a major breakthrough has been made in removing the problem of latency – the physical limit on the time it takes a signal to travel down a wire, ultimately governed by the speed of light.

For years traders have been trying to reduce execution latency to provide competitive advantage in a highly competitive fast moving environment. The focus has moved from seconds to milli and now microsecond savings.

Many Financial Services and technology organisations have attempted to solve this problem by reducing data hops and routing, and by going as far as placing their hardware physically close to the source of data (such as in an Exchange's data centre) to minimise latency, but no one has solved the issue – yet.

It sounds like this bank may have gone one step further. It is known that at the boundary of the speed of light, physics as we know it changes (quantum mechanics is an example, where the time/space continuum becomes ‘fuzzy’). Conventional physics states that travelling faster than the speed of light to see into the future would require infinite energy and so is not possible.

Investigation with a number of insiders at the firm has resulted in an amazing and almost unbelievable insight. They have managed to build a device which ‘hovers’ over the present and immediate future – little detail is known about it but it is understood to be based on the previously unproven ‘Alcubierre drive’ principle. This allows the trading system to predict (in reality observe) the next direction in the market providing invaluable trading advantage.

The product is still in test mode, as the effects of trading ahead of the data they have already traded against are producing outages in the system as it tries to correct the error in the future data, which again changes the data ad infinitum… The prediction model only allows a small glimpse into the immediate future, which also limits the window of opportunity for trading.

The power requirements for the equipment are so large that it has had to be moved to the data centre environment where consumption can be more easily hidden (or not, as the power outages showed).

If the bank does really crack this problem then they will have the ultimate trading advantage – the ability to see into the future and trade with ‘inside’ knowledge legally. Unless another bank is doing similar in the ‘trading arms race’ then the bank will quickly become dominant and the other banks may go out of business.

The US Congress has apparently discovered some details of this mechanism and is requesting that the bank disclose details of the project. The bank is understandably reluctant to do this as it has spent over $80m developing it and wants to make some return on its investment.

If this system goes into true production mode surely it cannot be long before Financial Regulators outlaw the tool as it will both distort and ultimately destroy the markets.

Of course the project has a codename…. Project Tachyons

No one from the company was available to comment on the accuracy of the claims.

Do you believe that your legacy systems are preventing digital transformation?

Posted on : 14-03-2019 | By : richard.gale | In : Data, Finance, FinTech, Innovation, Uncategorized

According to the results of our recent Broadgate Futures Survey, more than half of our clients agreed that digital transformation within their organisation was being hampered by legacy systems. Indeed, no one “strongly disagreed”, confirming the extent of the problem.

Many comments suggested that this was not simply a case of budget constraints; rather, the sheer size, scale and complexity of the transition had deterred organisations, which feared they were not adequately equipped to deliver successful change.

Legacy systems have a heritage going back many years to the days of the mega mainframes of the 1970s and 80s. This was a time when banks were the masters of technological innovation. We saw the birth of ATMs, BACS and international card payments. It was an exciting time of intense modernisation. Many of the core systems that run the finance sector today are the same ones that were built back then. The only problem is that, although these systems were built to last, they were not built for change.

The new millennium brought another significant development with the introduction of the internet, an opportunity the banks could have seized to develop new, simpler, more versatile systems. Instead, they decided to adopt a different strategy and modify their existing systems; in their eyes there was no need to reinvent the wheel. They made additions and modifications as and when required. As a result, most financial organisations have evolved over the decades into complex networks, a myriad of applications and an overloaded IT infrastructure.

The Bank of England itself has recently been severely reprimanded by a Commons Select Committee review, which found the Bank to be drowning in out-of-date processes in dire need of modernisation. Its legacy systems are overly complicated and inefficient; following the merger with the PRA in 2014, its IT estate comprises duplicated systems and extensive data overload.

Budget, as stated earlier, is not the only factor preventing digital transformation, although there is no doubt that these projects are expensive and extremely time consuming. The complexity of the task and the fear of failure is another reason why companies hold on to their legacy systems. Better the devil you know! Think back to the TSB outage (there were a few…), when systems were down for hours and customers were unable to access their accounts following a system upgrade. The incident ultimately led to huge fines from the Financial Conduct Authority and the resignation of the Chief Executive.

For most organisations abandoning their legacy systems is simply not an option so they need to find ways to update in order to facilitate the connection to digital platforms and plug into new technologies.

Many of our clients believe that it is not the legacy systems themselves which are the barrier, but the inability to access the vast amount of data stored in their infrastructure. It is the data that is the key to digital transformation, so accessing it is a crucial piece of the puzzle.

“It’s more about legacy architecture and lack of active management of data than specifically systems”

By finding a way to unlock the data inside these out of date systems, banks can decentralise their data making it available to the new digital world.

With the creation of such advancements as the cloud and APIs, it is possible to sit an agility layer between the existing legacy systems and newly adopted applications. HSBC has successfully adopted this approach and used an API strategy to expand its digital and mobile services without needing to replace its legacy systems.
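The "agility layer" is essentially a thin API facade: new digital channels call a modern, well-documented endpoint, and the facade translates the request into whatever the legacy system understands. The Python sketch below illustrates the pattern in generic terms; the fetch_balance_from_legacy function and its fixed-width record format are hypothetical stand-ins, not a description of HSBC's or any other bank's actual systems.

    # Sketch of an "agility layer": a modern JSON API in front of a legacy core system.
    # fetch_balance_from_legacy is a hypothetical stand-in for a mainframe/flat-file call.
    from flask import Flask, jsonify

    app = Flask(__name__)

    def fetch_balance_from_legacy(account_id):
        # Pretend the legacy core returns a fixed-width record, e.g. "12345678GBP000001050075"
        return f"{account_id:>8}GBP000001050075"

    @app.route("/api/v1/accounts/<account_id>/balance")
    def get_balance(account_id):
        record = fetch_balance_from_legacy(account_id)
        # Translate the legacy record into a clean, versioned JSON contract for digital channels
        return jsonify({
            "accountId": record[:8].strip(),
            "currency": record[8:11],
            "balance": int(record[11:]) / 100.0,   # minor units -> major units
        })

    if __name__ == "__main__":
        app.run(port=8080)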

Legacy systems are no longer the barrier to digital innovation that they once were. With some creative thinking and the adoption of new technologies legacy can continue to be part of your IT infrastructure in 2019!

https://www.finextra.com/newsarticle/33529/bank-of-england-slammed-over-outdated-it-and-culture

M&A – Cyber Security Due Diligence

Posted on : 31-08-2018 | By : richard.gale | In : Cyber Security, data security, Finance

Following the discovery of two data breaches affecting more than 1 billion Yahoo Inc. users, Verizon Communications Inc. reduced its offer to acquire the company by $350 million in 2017. This transaction illustrates how a company's reputation and future are impacted by cybersecurity; failure to investigate these measures during mergers and acquisitions could lead to costly integration, unexpected liability and higher overall enterprise risk.

We can see almost daily the effect a data breach can have with companies losing millions in terms of direct losses, reputational damage and customer loyalty. A hurried or limited cybersecurity vetting process may miss exposures or key indicators of an existing or prior breach.

It is crucial to understand cybersecurity vulnerabilities, the damage that may occur in the event of a breach, and the effectiveness of the infrastructure that the target business has in place. An appropriate evaluation of these areas could significantly impact the value that the acquirer places on the target company and how the deal is structured. It is therefore crucial to perform a security assessment on the to-be-acquired company.

It wasn't that long ago that mergers and acquisitions deals were conducted in a paper-based data room, secured and locked down to only those with permitted access. These days the process has moved on and is now mostly online, with the secure virtual data room being the norm. Awareness of cyber security in the information gathering part of the deal-making process is well established. It is the awareness of, and need to look at, the cyber security of the target company itself that has traditionally been under-emphasised, with attention focused more on the technical and practical job of integrating the merged companies' infrastructure.

Deal makers must assess the cyber risk of an organisation in the same way that they would assess overall financial risk. Due diligence is all about establishing the potential liabilities of the company you are taking on. According to the Verizon Data Breach survey, it takes an average of 206 days to discover a breach; often companies are breached without ever knowing. It is therefore important to look at cyber risk not just in terms of whether they have been breached, but also the likelihood and impact of a breach. An acquisition target that looks good at the time of closing the deal may not look quite so good a few months later.

The main reason for this lack of importance given to the cyber threat is that M&A teams find it hard to quantify cyber risk, particularly given the time pressures involved. A cyber risk assessment at the M&A stage is crucial if the acquiring company wants to protect its investment. The ability to carry out this assessment and to quantify the business impact of a likely cyber breach with a monetary value is invaluable to deal makers. Broadgate's ASSURITY Assessment provides this information in a concise, value-specific way, using business language to measure risks, likelihood and cost of resolution.
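One simplified way to put a monetary value on cyber risk during due diligence is an annualised loss expectancy: the likelihood of each breach scenario multiplied by its estimated impact, summed across scenarios. The figures below are purely illustrative assumptions, and this is not a description of how Broadgate's ASSURITY Assessment calculates its results.

    # Toy annualised-loss-expectancy (ALE) calculation for M&A due diligence.
    # All probabilities and impact figures are illustrative assumptions, not real data.
    scenarios = [
        # (scenario, annual likelihood, estimated impact in GBP)
        ("Customer data breach via legacy web app", 0.15, 4_000_000),
        ("Ransomware on core operations",           0.10, 6_500_000),
        ("Third-party / supplier compromise",       0.20, 1_200_000),
    ]

    total_ale = 0.0
    for name, likelihood, impact in scenarios:
        ale = likelihood * impact
        total_ale += ale
        print(f"{name}: expected annual loss £{ale:,.0f}")

    print(f"Total expected annual loss to factor into the deal price: £{total_ale:,.0f}")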

A cyber security assessment should be part of every M&A due diligence process. If you don't know what you are acquiring in terms of intellectual property and cyber risk, how can you possibly know the true value of what you are acquiring!

 

The Opportunity for Intelligent Process Automation in KYC / AML

Posted on : 28-06-2018 | By : richard.gale | In : compliance, Data, Finance, FinTech, Innovation

Financial services firms have had a preoccupation with meeting the rules and regulations for fighting Financial Crime for the best part of the past decade. Ever since HSBC received sanction from both UK and US regulators in 2010, many other firms have also been caught short in failing to meet society's expectations in this space. There have been huge programmes of change and remediation, amounting to tens of billions of any currency you choose, to try to get Anti-Financial Crime (AFC) or Know Your Customer (KYC) / Anti-Money Laundering (AML) policies, risk methodologies, data sources, processes, organisation structures, systems and client populations into shape, at least to be able to meet the expectations of regulators, if not exactly stop financial crime.

The challenge for the industry is that Financial Crime is a massive and complex problem to solve. It is not just the detection and prevention of money laundering; it also needs to cover terrorist financing, bribery and corruption, and tax evasion. Therefore, as the Banks, Asset Managers and Insurers have been doing, there is a need to focus upon all elements of the AFC regime, from education to process, and all the other activities in between. Estimates as to the scale of the problem vary, but the consensus is that somewhere between $3 trillion and $5 trillion is introduced into the financial system each year.

However, progress is being made. Harmonisation and clarity of industry standards, and more consistency, have come from the regulators with initiatives such as the 4th EU AML Directive. The importance of the controls is certainly better understood within Financial Services firms and by their shareholders. Perhaps what has not yet progressed significantly are the processes of performing client due diligence and monitoring their subsequent activity. Most would argue that this is down to a number of factors, possibly the greatest challenge being the disparate and inconsistent nature of the data required to support these processes. Data needs to be sourced in many formats from country registries, stock exchanges, documents of incorporation, multiple media sources and so on. Still today, many firms have a predominantly manual process to achieve this, even when much of the data is available in digital form. Many still do not automatically ingest data into their workflows and have poorly defined processes to progress onboarding or monitoring activities. This is for the regulations as they stand today; in the future this burden will further increase, as firms will be expected to take all possible efforts to determine the integrity of their clients, for example by establishing linkages to bad actors through data sources such as social media and the dark web that are not evident in traditional sources such as company registries.

There have been several advances in recent years with technologies that have enormous potential for supporting the AFC cause. Data vendors have made big improvements in providing a broader and higher quality of data. Aggregation solutions such as Encompass offer services where the constituents of a corporate ownership structure can be assembled, and sanctions and PEP checks undertaken, in seconds rather than the current norm of multiple hours. This works well where the data is available from a reliable electronic source. However, it does not work where there are no reliable sources of digital data, as is the case for Trusts or in many jurisdictions around the world. Here we quickly get back to the world of paper and PDFs, which still require human horsepower to review and decide upon.

Getting the information in the first instance can be very time consuming, with complex interactions between multiple parties (relationship managers, clients, lawyers, data vendors, compliance teams etc.) and multiple communications channels, i.e. voice, email and chat in its various forms. We also have the challenge of Adverse Media, where thousands of news stories are generated every day on the Corporates and Individuals that are the clients of Financial firms. The news items can be positive or negative, but it consumes the time of tens of thousands of people to review, eliminate or investigate this mountain of data each day. The same challenges come with transaction monitoring, where individual firms can have thousands of ‘hits’ every day on ‘unusual’ payment patterns or ‘questionable’ beneficiaries. These also require review, repair, discounting or further investigation, and the clear majority are false positives that can be readily discarded.

What is probably the most interesting opportunity for allowing the industry to see the wood for the trees in this data-heavy world is the maturing of Artificial Intelligence (AI) based, or ‘Intelligent’, solutions. The combination of Natural Language Processing (NLP) with Machine Learning can help the human find the needles in the haystack, or make sense of unstructured data that would ordinarily require much time to read and record. AI on its own is not a solution, but combined with process management (workflow), digitised multi-channel communications and even Robotics, it can achieve significant advances. In summary, ‘Intelligent’ processing can address three of the main data challenges within financial institutions' AFC regimes:

  1. Sourcing the right data – Where data is structured and digitally obtainable it can be readily harvested, but it needs to be integrated into the process flows to be compared, analysed, accepted or rejected as part of a review process. Here AI can be used to perform these comparisons, support analysis and look for patterns of common or disparate data. Where the data is unstructured, i.e. embedded in a paper document (email / PDF / doc etc.), NLP and machine learning can be used to extract the relevant data and turn the unstructured into structured form for onward processing (a simplified sketch of this kind of extraction and screening follows this list)
  2. Filtering – With both transaction monitoring and adverse media reviews there is a tsunami of data and events presented to Compliance and Operations teams for sifting, reviewing, rejecting or further investigation. The use of AI can be extremely effective at performing this sifting and presenting back only relevant results to users. Done correctly, this can reduce the burden by 90+% but, perhaps more importantly, never miss or overlook a case, providing reassurance that relevant data is being captured
  3. Intelligent workflows – Processes can be fully automated where simple decision making is supported by AI, removing the need for manual intervention in many tasks and leaving the human to provide value at the complex end of problem solving
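As a concrete, simplified illustration of points 1 and 2, the Python sketch below uses spaCy's pre-trained named entity recognition to pull people and organisations out of an unstructured news snippet and screen them against a watch list. The watch list and the article text are invented for the example; a production system would use curated sanctions and PEP feeds and far more sophisticated matching.

    # Simplified illustration of turning unstructured adverse-media text into structured,
    # screenable data. Requires: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")

    # Illustrative watch list only; real screening uses curated sanctions/PEP feeds.
    WATCH_LIST = {"acme holdings", "john doe"}

    article = ("Regulators are investigating Acme Holdings after payments linked to "
               "John Doe were routed through several offshore accounts.")

    doc = nlp(article)
    for ent in doc.ents:
        if ent.label_ in ("PERSON", "ORG"):
            hit = ent.text.lower() in WATCH_LIST
            status = "ESCALATE for review" if hit else "discard (likely false positive)"
            print(f"{ent.label_:6} {ent.text:15} -> {status}")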

Solutions are now emerging in the industry, such as OPSMATiX, one of the first Intelligent Process Automation (IPA) solutions. Devised by a group of industry business experts, it is a set of technologies that combine to make sense of data across different communication channels, uses AI to turn the unstructured data into structured form, and applies robust workflows to optimally manage the resolution of cases, exceptions and issues. The data vendors, and solution vendors such as Encompass, are also embracing AI techniques and technologies to effectively create ‘smart filters’ that can be used to scour through thousands, if not millions, of pieces of news and other media to discover or discount information of interest. This can be achieved in a tiny fraction of the time, and therefore cost, and more importantly with far better accuracy than a human can achieve. The outcome of this will be to liberate the human from the process, and firms can either choose to reduce the costs of their operations or use people more effectively to investigate and analyse those events, information and clients that may be of genuine cause for concern, rather than deal with the noise.

Only once the process has been made significantly more efficient and the data brought under control can financial firms really start to address the insidious business of financial crime. Currently all the effort is still going into meeting the regulations, and not society's actual demand, which is to combat this global menace. Intelligent processing should unlock this capability.

 

Guest Author : David Deane, Managing Partner of FIMATIX and CEO of OPSMATiX. David has had a long and illustrious career within Operations and Technology global leadership with Wholesale Banks and Wealth Managers. Before creating FIMATIX and OPSMATiX, he was recently the Global Head of KYC / AML Operations for a Tier 1 Wholesale Bank.

david.deane@fimatix.com

Welcoming Robots to the Team

Posted on : 30-05-2018 | By : richard.gale | In : Finance, FinTech, Innovation

Research suggests that the adoption of Robotic Process Automation (RPA) and AI technologies is set to double by 2019. This marks a fundamental change in how organisations work, and the potential impact on employees should not be underestimated.

For many years we have seen robots on the factory floor where manual processes have been replaced by automation. This has drastically changed the nature of manufacturing and has inevitably led to a reduction in these workforces.  It is understandable therefore, that we can hear the trembling voices of city workers shouting, “the robots are coming!”

Robotic software should not be thought of as the enemy but rather as a friendly addition to the IT family. A different approach is needed. If you were replacing an Excel spreadsheet with a software program, an employee would see this as an advantage, as it makes their job quicker and easier to do, and would therefore welcome the change. Looking at RPA in the same way will change the way employees view its implementation and how they feel about it.

There is no doubt that in some cases RPA is intended as a cost saver but organisations that see RPA as simply a cost saving solution will reap the least rewards. For many companies who have already completed successful RPA programmes, the number one priority has been to eliminate repetitive work that employees didn’t want or need to do. Approaching an RPA project in a carefully thought out and strategic manner will provide results that show that RPA and employees can work together.

Successful transformation using RPA relies on an often used but very relevant phrase: “it's all about the people, process and technology”. You need all three in the equation. It is undeniable that automation is a disruptive technology which will affect employees' outlook and the way they work. Change management is key to managing these expectations. If robots are to be a part of your organisation, then your employees must be prepared and included.

Perhaps it's time to demystify RPA and see it for what it really is: just another piece of software! Automation is about making what you do easier to execute, with fewer mistakes and greater flexibility. It is important to demonstrate to your staff that RPA is part of a much wider strategic plan of growth and new opportunities.

It is vital to communicate with staff at every level, explaining the purpose of RPA and what it will mean for them. Ensure everyone understands the implications and the benefits of the transition to automation. Even though activities and relationships within an organisation may change, this does not necessarily mean a change for the worse.

Employees must be involved from the start of the process. Those individuals who have previously performed the tasks to be automated will be your subject matter experts. You will need to train several existing employees in RPA to manage the process going forward. Building an RPA team from current employees will ensure that you have their buy-in, which is crucial if the implementation is to be a success.

With any new software, training is often an afterthought. In the case of RPA, training is more important than ever, ensuring that the robots and employees understand each other and can work efficiently together. Working to train RPA experts internally will result in a value-added proposition for the future when it comes to maintaining or scaling your solution.

When analysing the initial RPA requirements, a great deal of thought must be given to the employees who are being replaced and where their skills can be effectively redeployed. Employee engagement increases when personnel feel that their contribution to the organisation is meaningful and widespread.

Consultation and collaboration throughout the entire process will help to ensure a smoother transition where everyone can feel the benefits. Following a successful RPA implementation share the results with everyone in your organisation.  Share the outcomes and what you have learnt, highlight those employees and teams that have helped along the way.

The robots are coming! They are here to help and at your service!

OK Google, Alexa, Hey Siri – The Rise of Voice Control Technology

Posted on : 30-04-2018 | By : kerry.housley | In : Consumer behaviour, Finance, FinTech, Innovation, Predictions

OK Google, Alexa, Hey Siri… all too familiar phrases around the home now, but it was not that long ago that we did not know what a ‘smart phone’ was! Today most people could not live without one. Imagine not being able to check your email, instant message friends or watch a movie whilst on the move. How long will it be before we no longer need a keyboard and talking to your computer is the norm?

The development of voice-activated technology in the home will ultimately revolutionise the way we command and control our computers. Google Home has enabled customers to shop with its partners, pay for the transaction and have goods delivered, all without the touch of a keyboard. How useful could this be if integrated into the office environment? Adding a voice to mundane tasks will enable employees to be more productive and free up time, allowing them to manage their workflow and daily tasks more efficiently.

Voice-based systems have grown more powerful with the use of artificial intelligence, machine learning, cloud-based computing power and highly optimised algorithms. Modern speech recognition systems, combined with pristine text-to-speech voices that are almost indistinguishable from human speech, are ushering in a new era of voice-driven computing. As the technology improves and people become more accustomed to speaking to their devices, digital assistants will change how we interact with and think about technology.

There are many areas of business where this innovative technology will be most effective. Using voice control in customer service will transform the way businesses interact with their customers and improve the customer experience.

Many banks are in the process of introducing voice biometric technology, if they haven't done so already. Voice control enables quick access to telephone banking without the need to remember a password every time you call or log in. There is no need to wade through pages of bank account details or direct debits to make your online payments; instead, a digital assistant makes the payment for you.

Santander has trialled a system that allows customers to make transfers to existing payees on their account by using voice recognition. Customers access the process by speaking into an application on their mobile device.

Insurance companies are also realising the benefits voice control can bring to their customers. HDFC Insurance, an Indian firm, has announced the launch of its AI-enabled chatbot on Amazon's cloud-based voice service, Alexa. It aims to offer 24/7 customer assistance with instant solutions to customer queries, creating an enhanced customer service experience and allowing customers to get easy access to information about their policies simply with the use of voice commands.

It could also help to streamline the claims process where inefficiencies in claims documentation take up insurers’ time and money. Claims processors spend as much as 50% of their day typing reports and documentation; speech recognition could rapidly reduce the time it takes to complete the process. US company Nuance claims that their Dragon Speech Recognition Solution can enable agents to dictate documents three times faster than typing with up to 99% accuracy. They can use simple voice commands to collapse the process further.
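As a simple illustration of the underlying capability (and not of Nuance's product), the open-source SpeechRecognition package for Python can turn a recorded dictation into text in a few lines. The file name below is a placeholder.

    # Minimal dictation-to-text sketch using the open-source SpeechRecognition package
    # (pip install SpeechRecognition). "claim_dictation.wav" is a placeholder file name.
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.AudioFile("claim_dictation.wav") as source:
        audio = recognizer.record(source)          # read the whole recording

    try:
        text = recognizer.recognize_google(audio)  # free web API, fine for a demo
        print("Transcribed claim notes:", text)
    except sr.UnknownValueError:
        print("Speech was unintelligible")
    except sr.RequestError as err:
        print("Recognition service unavailable:", err)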

Retailers too are turning to this technology. With competition so tough on the high street, retailers are always looking for the ultimate customer experience, and many believe that voice control is a great way to achieve this. Imagine a mobile app where you could scan shopping items, then pay using a simple voice command or a selfie as you leave the store. No more queuing at the till.

Luxury department store Liberty is a big advocate of voice control and uses it for its warehouse stock picking. Using headsets, a voice-controlled app issues commands to a central server about which products should be picked. For retailers, voice control is a hit on and off the shop floor.

So, how accurate is voice recognition? Accuracy rates are improving all the time, with researchers commenting that some systems could be better than human transcription. In 1995 the error rate was 43%; today the major vendors claim an error rate of just 5%.

Security is a major factor; users still face verification requiring two-factor authentication with mobile applications. However, as the technology develops there should be less need to confirm an individual's identity before commands can be completed.

As advances are made in artificial intelligence and machine learning, the sky will be the limit for Alexa and her voice control friends. In future, stopping what you are doing and typing in a command or search will start to feel a little strange and old-fashioned.

 

How long will it be before you can pick up your smart phone, talk to your bank and ask it to transfer £50 to a friend? Probably not as far away a prospect as you might think!

How is Alternative Data Giving Investment Managers the Edge?

Posted on : 29-03-2018 | By : richard.gale | In : Consumer behaviour, Data, data security, Finance, FinTech, Innovation

Alternative data (or ‘Alt-Data’) refers to data that is derived from a non-traditional source, covering a whole array of platforms such as social media, newsfeeds, satellite tracking and web traffic. There is a vast amount of data in cyberspace which, until recently, remained untouched. Here we shall look at the role of these unstructured data sets.

Information is the key to the success of any investment manager, and information that can give the investor the edge is by no means a new phenomenon. Traditional financial data, such as stock price history and fundamentals, has been the standard for determining the health of a stock. However, alternative data has the potential to reveal insights about a stock's health before traditional financial data does. This has major implications for investors.

If information is power, then unique information sourced from places not yet sourced is giving those players the edge in a highly competitive market. Given that we're in what we like to call a data revolution, where nearly every move we make can be digitised, tracked and analysed, every company is now a data company. Everyone is both producing and consuming immense amounts of data in the race to make more money. People are well connected on social media platforms and information is available to them in many different forms. Add geographical data into the mix and that's a lot of data about who's doing what and why. Take Twitter: it is a great tool for showing what's happening in the world and what is being talked about. Being able to capture sentiment as well as data is a major advance in the world of data analytics.

Advanced analytical procedures can pull all this data together using machine learning and cognitive computing. Using this technology, we can take the unstructured data and transform it into usable data sets at rapid speed.
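As a toy example of turning unstructured social media chatter into a usable signal, the sketch below scores a handful of invented tweets about a fictitious ticker with a tiny keyword-based sentiment scorer. Real alternative-data pipelines use trained NLP models and far larger lexicons, but the principle of reducing raw text to a numeric signal is the same.

    # Toy sentiment signal from social-media text. Tweets and word lists are invented
    # for illustration; production systems use trained models, not keyword counts.
    POSITIVE = {"beat", "strong", "growth", "upgrade", "record"}
    NEGATIVE = {"miss", "weak", "recall", "downgrade", "lawsuit"}

    tweets = [
        "ACME posts record quarter, strong growth in cloud unit",
        "Analysts downgrade ACME after product recall",
        "ACME earnings beat expectations",
    ]

    def score(text):
        words = set(text.lower().split())
        return len(words & POSITIVE) - len(words & NEGATIVE)

    signal = sum(score(t) for t in tweets) / len(tweets)
    print(f"Average sentiment for ACME across {len(tweets)} tweets: {signal:+.2f}")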

Hedge funds have been the early adopters, and investment managers have now seen the light and are expected to spend $7bn by 2020 on alternative data. All asset managers realise that this data can produce valuable insight and give them the edge in a highly competitive marketplace.

However, it could be said that if all investment managers research data in this way, then that will put them all on the same footing and the competitive advantage is lost. Commentators have suggested that because the data pool is so vast and the combinations and permutations of analysis so complex, it is still highly likely that insights can be uncovered that have not been found by someone else. It all depends on the data scientist and where they decide to look. Far from creating a level playing field, where more readily available information simply leads to greater market efficiency, the impact of the information revolution is the opposite. It is creating hard-to-access pockets for long-term alpha generation for those players with the scale and resources to take advantage of it.

Which leads us to our next point. A huge amount of money and resource is required to research this data, and this will mean only the strong survive. A report last year by S&P found that 80% of asset managers plan to increase their investments in big data over the next 12 months. Only 6% of asset managers argue that it is not important. Where does this leave the 6%?

Leading hedge fund bosses have warned fund managers they will not survive if they ignore the explosion of big data that is changing the way investors beat the markets. They are investing a lot of time and money to develop machine learning in areas of their business where humans can no longer keep up.

There is however one crucial issue which all investors should be aware of, and that is the area of privacy. Do you know where that data originates from? Did the vendor have the right to sell the information in the first place? We have seen this illustrated over the last few weeks with the Facebook “data breach”, where data on Facebook users was passed to Cambridge Analytica without the users' knowledge. This has wiped $100bn off Facebook's value, so we can see the negative impact of using data without the owner's permission.

The key question in the use of alternative data ultimately is, does it add value? Perhaps too early to tell. Watch this space!