Extreme Outsourcing: A Dangerous Sport?

Recently I’ve been thinking about an event I attended in the early 2000s, at which there was a speech that really stuck in my mind. The presenter gave a view on a future model of how companies would source their business operations, specifically the ratio of what would be managed internally against what would be transitioned to external providers (I can’t remember exactly which event it was, but it was in Paris and the keynote was someone you might remember, named Carly Fiorina…).

What I clearly remember is that, at the time, I considered it a fairly extreme view of the potential end game. She asked the attendees:

Can you tell me what you think is the real value of organisations such as Coca Cola, IBM or Disney?

Answer: The brand.

It’s not the manufacturing process, or operations, or technology systems, or distribution, or marketing channels, or, or… Clearly everything that goes into the intellectual property to build the brand/product (such as the innovation and design) is important, but ultimately, how the product is built, delivered and operated offers no intrinsic value to the organisation. In these areas it’s all about efficiency.

In the future, companies like these would be a fraction of their current size in terms of internal staff and operations.

Fast forward to today and perhaps this view is starting to gain some traction… at least to start the journey. For many decades, areas such as technology services have been sourced through external delivery partners. Necessity, fashion and individual preference have all driven CIOs into various sourcing models. Operations leaders have implemented Business Process Outsourcing (BPO) to low-cost locations, as have other functions such as the HR and Finance back offices.

But perhaps there are two more fundamental questions that CEOs of organisations should ask as they survey their business operations:

  • 1) What functions that we own actually differentiate us from our competitors?
  • 2) Can other companies run services better than us?

It is something that rarely gets either asked or answered in a way that is totally objective. That is of course a natural part of the culture, DNA and political landscape of organisations, particularly those that have longevity and legacy in developing internal service models. But it isn’t a question that can be kicked into the long grass anymore.

Despite the green shoots of economic recovery, there are no indications that the business environment is going to return to the heady days of large margins and costs being somewhat “inconsequential”. It’s going to be a very different competitive world, with increased external oversight and challenges/threats to companies, such as through regulation, disruptive business models and innovative new entrants.

We also need to take a step back and ask a third question…

  • 3) If we were building this company today, would we build and run it this way?

Again, a difficult and, some would argue, irrelevant question. Companies have legacy operations and “technical debt” and that’s it… we just need to deal with them over time. The problem is, time may not be available.

In our discussions with clients, we are seeing that this realisation may have dawned. Whilst many companies in recent years have reported significant reductions in staff numbers and costs, are we still just delaying the “death by a thousand cuts”? Some leaders, particularly in technology, have realised not only that running significant internal operations is untenable, but also that a more radical approach should be taken, moving the bar much further up the operating chain towards where the real business value lies.

Old sourcing models looked at drawing the line at functions such as Strategy, Architecture, Engineering, Security, Vendor Management, Change Management and the like. These were considered the valuable organisational assets. Now, I’m not saying that is incorrect, but what has often happened is that these functions have been treated holistically and not broken down into where the real value lies. Indeed, at some organisations we’ve heard of Strategy & Architecture having between 500 and 1,000 staff! (…and these are not technology companies).

Each of these functions needs to be assessed and the three questions asked. If done objectively, then I’m sure a different model would emerge for many companies, with trusted service providers running many of the functions previously thought of as “retained”. It is achievable, sensible and maybe necessary.

On the middle and front office side, the same can be asked. When CEOs look at the revenue-generating front office, whatever the industry, there are key people, processes and IP that make the company successful. However, there are also many areas that it was historically a necessity to run internally but which actually add no differentiating business value (although, of course, they remain essential). If that’s the case, then it makes sense to source them from a specialist provider, where the economies of scale and service challenges (such as “general regulatory requirements”) can be managed without detracting from the core business.

So, if you look at some of the key brands and their staff numbers today, in the tens or hundreds of thousands, it might be only those that focus on core business value and shed the supporting functions that survive tomorrow.

Posted on : 27-09-2019 | By : kerry.housley | In : Uncategorized



Why are we still getting caught by the ‘Phisher’men?

Phishing attacks have been on the increase and have overtaken malware as the most popular cyber attack method. Attackers are often able to convincingly impersonate users and domains, bait victims with fake cloud storage links, engage in social engineering and craft attachments that look like ones commonly used in the organisation.

Criminal scammers are using increasingly sophisticated methods, employing more complex phishing site infrastructures that can be made to look more legitimate to the target. These include the use of well-known cloud hosting and document sharing services – established brand names which users believe are secure simply due to name recognition. For example, Microsoft, Amazon and Facebook are at the top of the hackers’ list. Gone are the days when phishing simply involved the scammer sending a rogue email and tricking the user into clicking on a link!

And while we mostly associate phishing with email, attackers are taking advantage of a wide variety of attack methods to trick their victims. Increasingly, employees are being subjected to targeted phishing attacks directly in their browser, with highly legitimate-looking sites, ads, search results, pop-ups, social media posts, chat apps and instant messages, as well as rogue browser extensions and free web apps.

HTML phishing is a particularly effective means of attack, as it can be delivered straight into browsers and apps, bypassing secure email gateways, next-generation antivirus and advanced endpoint protection. These surreptitious methods are capable of evading URL inspection and domain reputation checking.

To make matters worse, the lifespan of a phishing URL has decreased significantly in recent years. To evade detection, phishing gangs can often gather valuable personal information in around 45 minutes. The bad guys know how current technologies are trying to catch them, so they have devised imaginative new strategies to evade detection. For instance, they can change domains and URLs fast enough so the blacklist-based engines cannot keep up. In other cases, malicious URLs might be hosted on compromised sites that have good domain reputations. Once people click on those sites, the attackers have already collected all the data they need within a few minutes and moved on.

Only the largest firms have automated their detection systems to spot potential cyberattacks. Smaller firms are generally relying on manual processes – or no processes at all. This basic lack of protection is a big reason why phishing for data has become the first choice for the bad actors, who are becoming much more sophisticated. In most cases, employees can’t even spot the fakes, and traditional defences that rely on domain reputation and blacklists are not enough.

By the time the security teams have caught up, those attacks are long gone and hosted somewhere else. Of the tens of thousands of new phishing sites that go live each day, the majority are hosted on compromised but otherwise legitimate domains. These sites would pass a domain reputation test, but they’re still hosting the malicious pages. Due to the fast-paced urgency of this threat, financial institutions should adopt a more modern approach to defending their data, with protections that can determine the threat level in real time and block the phishing hook before it draws out valuable information. In the meantime, some basic precautions still apply:

  • Always check the spelling of the URLs in email links before you click or enter sensitive information (see the sketch after this list)
  • Watch out for URL redirects, where you’re subtly sent to a different website with identical design
  • If you receive an email from a source you know but it seems suspicious, contact that source with a new email, rather than just hitting reply
  • Don’t post personal data, like your birthday, vacation plans, or your address or phone number, publicly on social media
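By way of illustration of the first point above, here is a minimal Python sketch that flags a domain sitting within a couple of edits of a well-known brand – a rough, automated version of “check the spelling of the URL”. The brand list, edit-distance threshold and crude domain extraction are assumptions for the example, not how any particular security product works.

```python
# Toy illustration only: flags URLs whose domain is "one typo away" from a
# well-known brand, the kind of manual check the advice above describes.
from urllib.parse import urlparse

KNOWN_DOMAINS = {"microsoft.com", "amazon.com", "facebook.com"}   # assumed list

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_like_lookalike(url: str) -> bool:
    """True if the URL's registered domain nearly matches a known brand."""
    host = (urlparse(url).hostname or "").lower()
    domain = ".".join(host.split(".")[-2:])              # crude eTLD+1 approximation
    if domain in KNOWN_DOMAINS:
        return False                                     # exact match: not a lookalike
    return any(edit_distance(domain, known) <= 2 for known in KNOWN_DOMAINS)

print(looks_like_lookalike("https://login.micros0ft.com/reset"))   # True
print(looks_like_lookalike("https://www.microsoft.com/account"))   # False
```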

We have started to work with Ironscales, a company which provides protection by utilising machine learning to understand the normal behaviour of users’ email interactions. It highlights (and can automatically remove) suspicious emails in the user’s inbox before they have time to open them. It cross-references this information with multiple other sources and with the actions of SOC analysts at its other clients. This massively reduces the overhead of dealing with phishing or potential phishing emails and ensures that users are aware of the risks. Some great day-to-day examples include the ability to identify that an email has come from a slightly different email address or IP source. The product is being further developed to identify changes in grammar and language, to highlight where a legitimate email address from a known person may have been compromised. We really like the ease of use of the technology and the time saved on investigation & resolution.

If you would like to try Ironscales out, then please let us know.

 

Phishing criminals will continue to devise creative new ways of attacking your networks and your employees. Protecting against such attacks means safeguarding those assets with equal amounts of creativity.

Posted on : 26-09-2019 | By : kerry.housley | In : Cyber Security, data security, Finance, Innovation



Artificial Intelligence – Explaining the Unexplainable

The rise of Artificial Intelligence (AI) is dramatically changing the way businesses operate and provide their services. The acceleration of intelligent automation is enabling companies to operate more efficiently, promote growth, deliver greater customer satisfaction and drive up profits. But what exactly is AI? How does it reach its decisions? How can we be sure it follows all corporate, regulatory and ethical guidelines? Do we need more human control?

Is it time for AI to explain itself? 

The enhancement of human intelligence with AI’s speed and precision means a gigantic leap forward for productivity. The ability to feed data into an algorithmic black box and return results in a fraction of the time a human could compute is no longer sci-fi fantasy but a reality.

However, not everyone talks about AI with such enthusiasm. Critics are concerned that the adoption of AI machines will lead to the decline of the human role rather than freedom and enhancement for workers.

Ian McEwan, in his latest novel Machines Like Me, writes about a world where machines take over in the face of human decline. He questions machine learning, referring to it as

“the triumph of humanism or the angel of death?” 

Whatever your view, we are not staring at the angel of death just yet!  AI has the power to drive a future full of potential and amazing discovery. If we consider carefully all the aspects of AI and its effects, then we can attempt to create a world where AI works for us and not against us. 

Let us move away from the hype and consider in real terms the implications of the shift from humans to machines. What does this really mean? How far does the shift go?  

If we are to operate in a world where we are relying on decisions made by software, we must understand how those decisions are calculated in order to have faith in the results.

In the beginning the AI algorithms were relatively simple, as humans learned how to define them. As time has moved on, algorithms have evolved and become more complex. Add to this machine learning, and we have a situation where machines can “learn” behaviour patterns, thereby altering the original algorithm. As humans don’t have access to the algorithm’s black box, we are no longer in charge of the process.

The danger is that we no longer understand what is going on inside the black box and can therefore no longer be confident in the results it produces.

If we have no idea how the results are calculated, then we have lost trust in the process. Trust is the key element for any business, and indeed for society at large. There is a growing consensus around the need for AI to be more transparent. Companies need to have a greater understanding of their AI machines. Explainable AI is the idea that an AI algorithm should be able to explain how it reached its conclusion in a way that humans can understand. Often, we can determine the outcome but cannot explain how it got there!  
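As a concrete, hedged illustration of what “explaining” an opaque model can mean in practice, the Python sketch below applies permutation importance: scramble one input at a time and see how much the model’s accuracy drops. The model, data and feature count are stand-ins invented for the example.

```python
# Minimal sketch of one explainability technique (permutation importance):
# how much does the model's accuracy drop when we scramble one input feature?
# The "black box" here is a stand-in function; in practice it would be the
# trained model whose decisions need explaining.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 features, only the first two actually drive the outcome.
X = rng.normal(size=(1000, 3))
y = (2 * X[:, 0] - 1 * X[:, 1] > 0).astype(int)

def black_box_predict(X):
    """Stand-in for an opaque model we cannot inspect directly."""
    return (2 * X[:, 0] - 1 * X[:, 1] > 0).astype(int)

baseline = (black_box_predict(X) == y).mean()

for feature in range(X.shape[1]):
    X_shuffled = X.copy()
    X_shuffled[:, feature] = rng.permutation(X_shuffled[:, feature])
    drop = baseline - (black_box_predict(X_shuffled) == y).mean()
    print(f"feature {feature}: accuracy drop when scrambled = {drop:.3f}")
# Features 0 and 1 show large drops; feature 2 shows ~0 - an after-the-fact
# explanation of which inputs the opaque model actually relies on.
```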

Where that is the case, how can we trust the result to be true, and how can we trust it to be unbiased? The impact is not the same in every case: it depends on whether we are talking about low-impact or high-impact outcomes. For example, an algorithm that decides what time you should eat your breakfast is clearly not as critical as an algorithm which determines what medical treatment you should have.

The greater the shift from humans to machines, the greater the need for explainability.

Consensus for more explainable AI is one thing; achieving it is quite another. Governance is imperative, but how can we expect regulators to dig deep into these algorithms to check that they comply, when the technologists themselves don’t understand how to do this?

One way forward could be a “by design” approach – i.e., think about the explainable element at the start of the process. It may not be possible to identify each and every step once machine learning is introduced, but a good business process map will help the users define the process steps.
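To make the “by design” idea concrete, here is a minimal Python sketch of one building block: every automated decision is appended to an audit log together with its inputs, a model identifier and the factors behind it, so a human can later reconstruct why an outcome was reached. The scoring rule, field names and file path are assumptions for the example.

```python
# Sketch of "explainability by design": every automated decision is written to
# an append-only audit log together with its inputs, a model identifier and the
# factors that drove it, so the outcome can be explained and challenged later.
import json
import datetime

MODEL_VERSION = "credit-rules-0.1"   # hypothetical model identifier

def score_applicant(applicant: dict) -> tuple[bool, dict]:
    """Toy decision rule; returns the decision and the factors behind it."""
    factors = {
        "income_ok": applicant["income"] >= 30_000,
        "low_existing_debt": applicant["debt"] < 10_000,
    }
    return all(factors.values()), factors

def decide_and_log(applicant: dict, log_path: str = "decisions.log") -> bool:
    approved, factors = score_applicant(applicant)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "inputs": applicant,
        "decision": "approved" if approved else "declined",
        "factors": factors,            # the explanation a reviewer would read
    }
    with open(log_path, "a") as log:   # append-only decision trail
        log.write(json.dumps(record) + "\n")
    return approved

print(decide_and_log({"income": 45_000, "debt": 2_000}))   # True, and logged
```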

The US government has been concerned about this lack of transparency for some time and has introduced the Algorithmic Accountability Act 2019. The Act looks at automated decision making and would require companies to show how their systems have been designed and built. It only applies to large tech companies with turnover of more than $50 million, but it provides a good example that all companies would be wise to follow.

Here in the UK, the Financial Conduct Authority is working very closely with the Alan Turing Institute to ascertain what the role of the regulator should be and how governance can be appropriately introduced.

The question is how explainable and how accurate the explanation needs to be in each case, depending on the risk and the impact.  

With AI moving to ever-increasing levels of complexity, it’s crucial to understand how we get to the results in order to trust the outcome. Trust really is the basis of any AI operation. Everyone involved in the process needs to have confidence in the result and know that AI is making the right decision, avoiding manipulation and bias, and respecting ethical practices. It is crucial that AI operates within publicly acceptable boundaries.

Explainable AI is the way forward if we want to follow good practice guidelines, enable regulatory control and most importantly build up trust so that the customer always has confidence in the outcome.   

AI is not about delegating to robots, it is about helping people to achieve more precise outcomes more efficiently and more quickly.  

If we are to ensure that AI operates within boundaries that humans expect then we need human oversight at every step. 

Posted on : 23-09-2019 | By : kerry.housley | In : Finance, FinTech, General News, Innovation



AI in Cyber Security – Friend or Foe?

Artificial intelligence has been welcomed by the cyber security industry as an invaluable tool in the fight against cyber crime, but is it a double-edged sword – one that is both a powerful defender and potentially a potent weapon for the cyber criminals?

The same artificial intelligence technologies that are used to power speech recognition and self-driving cars have the capability to be turned to other uses, such as creating viruses that morph faster than antivirus companies can keep up, phishing emails that are indistinguishable from real messages written by humans, and intelligently attacking an organisation’s entire defence infrastructure to find the smallest vulnerability and exploit any gap.

Just like any other technology, AI has both strengths and weaknesses that can be abused when in the wrong hands.  

In the AI-fuelled security wars, the balance of power is currently in the hands of the good guys, but that is undoubtedly set to change.

Until now, attackers have been relying on mass distribution and sloppy security. The danger is that we will start to see more adversaries, especially those that are well funded, leveraging these advanced tools and methods more frequently. It is concerning to know that nation-state attackers like Russia and China have almost unlimited resources to develop these tools and make maximum use of them.

The dark web acts as a clearing house for the cyber criminals where all manner of crypto software is available.  

There are many ways in which hackers seek to benefit from your information, but the biggest reward is the password, which opens up a whole new set of vulnerabilities to exploit. Algorithms can crack millions of passwords within minutes.
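To put a hedged number on that claim, here is a quick Python back-of-the-envelope showing how the brute-force search space grows with password length and character variety. The guess rate is an assumption for the example; real cracking rigs vary enormously with the hash algorithm and hardware.

```python
# Back-of-the-envelope illustration of why weak passwords fall so quickly.
GUESSES_PER_SECOND = 10_000_000_000   # assumed 10 billion guesses/sec

def worst_case_seconds(alphabet_size: int, length: int) -> float:
    """Time to exhaust every combination of the given length."""
    return alphabet_size ** length / GUESSES_PER_SECOND

for desc, alphabet, length in [
    ("8 lowercase letters", 26, 8),
    ("8 mixed-case + digits", 62, 8),
    ("12 mixed-case + digits + symbols", 94, 12),
]:
    secs = worst_case_seconds(alphabet, length)
    print(f"{desc}: {secs:,.0f} seconds ({secs / 86_400 / 365:,.1f} years)")
# 8 lowercase letters survive roughly 21 seconds at this rate; adding length
# and character variety pushes the search space out by many orders of magnitude.
```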

Threat analytics firm Darktrace has seen evidence of malware programs showing signs of contextual awareness in trying to steal data and hold systems to ransom. They know what to look for and how to find it by closely observing the infrastructure, and can then work out the best way to avoid detection. This means the program no longer needs to maintain contact with the hacker through command and control servers or other means, which is usually one of the most effective ways of tracking the perpetrator.

Recently, Microsoft was able to spot an attempted hack of its Azure cloud when the AI in the security system identified a false intrusion from a fake site. Had they been relying on rule-based protocols alone, this would have gone unnoticed. AI’s ability to learn and adapt itself to new threats should dramatically improve the enterprise’s ability to protect itself, even as data and infrastructure push past the traditional firewall into the cloud and the internet of things.

Human effort won’t scale – there are too many threats, too many changes, and too many network interactions. 

As cybercrime becomes more and more technologically advanced, there is no doubt that we will witness the bad guys employing AI in various additional sophisticated scenarios. 

It’s time for cybersecurity managers to make sure they’re doing everything they can to reduce their attack surface as much as possible, put cutting-edge defenses in place, and replace time-consuming cybersecurity tasks with automation. 

We should all be concerned that as we begin to see AI-powered chatbots and extensive influence campaigns weaving through social media, we face the prospect of the internet as a weapon to undermine trust and control public opinion. This is a very worrying situation indeed!

Posted on : 28-06-2019 | By : richard.gale | In : Uncategorized


When a picture tells a 1000 words – An image is not quite what it seems

Steganography is not a new concept: the ancient Greeks and Romans used hidden messages to outsmart their opponents, and thousands of years later nothing has changed. People have always found ways of hiding secrets in a message in such a way that only the intended recipient can understand them. This is different from cryptography: rather than trying to obscure content so it cannot be read by anyone other than the intended recipient, steganography aims to conceal the fact that the content exists in the first place. If you look at two images, one carrying a hidden message and one without, there will be no visible difference. It is a great way of sending secure messages where the sender can be assured of confidentiality and not be concerned about unauthorised viewing in the wrong hands. However, like so many technologies today, steganography can be used for good or for bad. When the bad guys get in on the act we have yet another threat to explore in the cyber landscape!
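As a hedged illustration of how little a hidden payload disturbs its carrier, here is a minimal Python sketch of least-significant-bit (LSB) embedding. The byte buffer standing in for image pixels and the example message are assumptions for the demo, not any particular attacker’s tooling.

```python
# Toy illustration of least-significant-bit (LSB) steganography: the secret
# rides in the lowest bit of each "pixel" byte, so the carrier is visually
# indistinguishable from the original. A real tool would work on an actual
# image file; a plain byte buffer stands in for pixel data here.

def hide(carrier: bytearray, secret: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for secret")
    stego = bytearray(carrier)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0b11111110) | bit   # overwrite lowest bit only
    return stego

def reveal(stego: bytearray, secret_len: int) -> bytes:
    bits = [stego[i] & 1 for i in range(secret_len * 8)]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

pixels = bytearray(range(256)) * 4          # pretend image data
stego = hide(pixels, b"meet at dawn")
print(reveal(stego, len(b"meet at dawn")))  # b'meet at dawn'
# Each carrier byte changes by at most 1, far below anything the eye would see.
```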

Hackers are increasingly using this method to trick internet users and smuggle malicious code past security scanners and firewalls. This code can be hidden in seemingly harmless software and jump out at users when they least expect it. The attackers download the file with the hidden data and extract it for use in the next step of the attack.

Malvertising is one way in which cyber criminals exploit steganography. They buy advertising space on trustworthy websites and post ads which appear legitimate, hiding their harmful code inside. Bad ads can redirect users to malicious websites or install malware on their computers or mobile devices. One of the most concerning aspects of this technique is that users can get infected even if they don’t click on the image; often just loading the image is enough. Earlier this year, millions of Apple Mac users were hit when hackers used advertising campaigns to hide malicious code in ad images to avoid detection. Some very famous names, such as the New York Times and Spotify, have inadvertently displayed these criminal ads, putting their users at risk.

Botnets are another area where hackers use steganography, hiding code in inbound traffic to communicate with and deliver malware to compromised machines. Botnet controllers employ steganography techniques to control target endpoints, hiding commands in plain view – perhaps within images or music files distributed through file sharing or social networking websites. This allows the criminals to surreptitiously issue instructions to their botnets without relying on an ISP to host their infrastructure, minimising the chances of discovery.

It’s not only the cyber criminals who have realised the potential of steganography; the malicious insider is an enthusiast too! Last year a Chinese engineer was able to exfiltrate sensitive information from General Electric by stegging it into images of sunsets. He was only discovered when GE security officials became suspicious of him for an unrelated reason and started to monitor his office computer.

Organisations should be concerned about the rise of steganography from both malicious outsiders and insiders. The battle between the hackers and the security teams is on, and it is one that the hackers are currently winning. There are so many different steganography techniques that it is almost impossible to find one detection solution that can deal with them all. So, until there is a detection solution, it’s the same old advice: always be aware of what you are loading and what you are clicking.

There is an old saying “the camera never lies” but sometimes maybe it does!

Posted on : 28-06-2019 | By : richard.gale | In : Uncategorized


How secure are your RPA Processes?

Robotic Process Automation (RPA) is an emerging technology, with many organisations looking at how they might benefit from automating some, or all, of their business processes. However, in some companies there is a common misconception that letting robots loose on the network could pose a significant security risk, the belief being that robots are far less secure users than their human counterparts.

In reality, a compelling case can be made that robots are inherently more secure than people.

Provided your robots are treated in the same way as their human teammates, i.e. they inherit the security access and profile of the person/role they are programmed to simulate, there is no reason why a robot should be any less secure. In other words, the security policies and access controls suitable for humans should be applied to the software robots in just the same way.

There are many security advantages gained from introducing a robot into your organisation.  

  • Once a robot has been trained to perform a task, it never deviates from the policies, procedures and business rules in place
  • Unlike human users, robots lack curiosity (so they won’t be tempted to open phishing emails), cannot be tricked into revealing information or downloading unauthorised software. 
  • Robots have no motives which could turn them into a disgruntled employee who ignores existing policies and procedures. 

So, we can see that, on the contrary, in many ways the predictable behaviour of the robot makes it your most trusted employee! 

RPA certainly represents an unprecedented level of transformation and disruption to “business as usual” – one that requires careful preparation and planning. But while caution is prudent, many of the security concerns related to RPA implementation are overstated. 

The issue of data security can be broken down into two points: 

  • Data Security 
  • Access Security 

Data security means ensuring that the data being accessed and processed by the robot remains secure and confidential. Access security means that access management for the robots must be properly assigned and reviewed, in the same way as the review and management of existing human user accounts. 

Here are some of the key security points to consider: 

  1. Segregate access to data just as you would for normal users: base access on what the robot actually needs to do, and do not provide domain admin permissions and/or elevated access unless absolutely necessary. 
  2. Passwords should be maintained in a password vault and service accounts’ access should be reviewed periodically (see the sketch after this list). 
  3. Monitor the activity of the robots via a “control room” (e.g. monitoring of logon information and any errors). 
  4. An RPA environment should be strictly controlled via Active Directory integration, which will increase business efficiency as access management is centralised. 
  5. Encryption of credentials. 
  6. Performing independent code audits and reviews, no different than with any other IT environment. 
  7. Robots are programmed using secure programming methods. 
  8. Security testing against policy controls. 
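As flagged in point 2, here is a hedged Python sketch of what “credentials live in a vault, not in the robot” can look like. The VaultClient class, endpoint and secret path are hypothetical placeholders for whichever secrets manager the organisation actually runs, not a specific vendor’s API.

```python
# Minimal sketch of point 2 above: the robot never hardcodes credentials but
# fetches them at runtime from a secrets vault, and never writes them to logs.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rpa-bot")

class VaultClient:
    """Hypothetical vault interface; a real deployment would call the vendor API."""
    def __init__(self, url: str):
        self.url = url

    def get_secret(self, path: str) -> str:
        # Placeholder: in reality this would be an authenticated API call.
        return "s3cr3t-from-vault"

def run_invoice_bot() -> None:
    vault = VaultClient("https://vault.internal.example")    # assumed endpoint
    password = vault.get_secret("rpa/invoice-bot/erp-password")
    log.info("Fetched ERP credential for invoice bot")        # never log the value
    # ... log in to the target application with `password`, do the work ...
    del password                                              # drop the reference when done

run_invoice_bot()
```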

 

All these points must be considered from the outset. This is security by design, which must be embedded in the RPA process from the start. It must be re-emphasised that the security of RPA is not just about protecting access to the data but about securing the data itself. 

Overall, RPA lowers the security-related effort associated with training employees and teaching them security practices (e.g. password management, application of privacy settings etc.) because it ensures a zero-touch environment. By eliminating manual work, automation minimises security risks at a macro level, provided the key controls are implemented at the beginning. 

In addition, an automated environment removes biases, variability and human error. The lack of randomness and variability can increase uniform compliance with company requirements built into the automation’s workflows and tasks. 

Beyond security, the zero-touch environment of RPA also helps mitigate other human-related risks in business operations. An automated environment is free from the biases, prejudices and variability that come with human work and its risk of error. Because of this, RPA delivers less risky, more consistent work with trustworthy data. 

RPA should therefore be implemented wisely, which essentially comes down to choosing a stable RPA product or provider, backed by proper, constant monitoring of security measures. Providing role-based access to confidential data, monitoring access and encrypting data are the most salient means of dealing with security risks. 

Posted on : 17-06-2019 | By : richard.gale | In : Uncategorized


Are you able to access all the data across your organisation?

For many years data has been the lifeblood of the organisation and more recently, the value of this commodity has been realised by many companies (see our previous article “Data is like oil”).

Advances in technology, processing power and analytics mean that companies can collect and process data in real time. Most businesses are sitting on vast amounts of data, and those that can harness it effectively can gain a much deeper understanding of their customers, and better predict and improve their customer experience.

Our survey revealed that whilst most companies understand the value of their data and the benefits it can bring, many clients expressed a level of frustration with the systems and processes that manage it. Some respondents did qualify that “most of the data” was available, whilst others admitted some was stranded.

 “Data is in legacy silos, our long-term goal is to provide access through a consistent data management framework”

The deficiencies that we also discuss in this newsletter regarding legacy systems are partly responsible for this, although not wholly. This is a particular issue in financial services where many organisations are running on old systems that are too complex and too expensive to replace. Critical company data is trapped in silos, disconnected and incompatible with the rest of the enterprise.

These silos present a huge challenge for many companies. Recalling a comment from one Chief Data Officer at a large institution:

“If I ask a question in more than one place, I usually get more than one answer!”

Data silos are expanding as companies collect too much data, which they hold onto for longer than they need to. Big data has been a buzzword for a while now, but it is important that companies distinguish between big data and big bad data! The number of data sources is increasing all the time, so the issue must be addressed if the data is to be used effectively to return business value. Collecting a virtually unlimited amount of data has to be managed properly to ensure that all data stored has a purpose and can be protected.

Shadow data further exacerbates the issue. This data is unverified, often inaccurate and out of date. Oversharing results in it being stored in areas that are unknown and untraceable, creating yet more data silos hidden from the wider enterprise. This data is then treated as a valid source, relied upon and used as input into other systems, which can ultimately lead to bad business decisions being made.

The importance of a robust data governance and management strategy cannot be overstated, particularly for those serious about the digital agenda and customer experience. This is also a topic where business and IT leadership aligning on the product strategy and the underlying “data plumbing” is a must. It is not just about systems but also about the organisation’s attitude to data and its importance in the life of every business process. It is important that companies implement a data management strategy which encompasses not only the internal platforms and governance but also the presentation layer for business users, consumers and data insights.

Posted on : 31-03-2019 | By : richard.gale | In : Data, Finance


The ultimate way to move beyond trading latency?

A number of power surges and outages have been experienced in the East Grinstead area of the UK in recent months. The utility companies involved have traced the cause to one of three high-capacity feeds into a global investment bank’s data centre facility.

The profits created by the same bank’s London-based Proprietary Trading group have increased tenfold over the same period.

This bank employs 1% of the world’s best post-doctoral theoretical physics graduates to help build its black box trading systems.

Could there be a connection? Wild and unconfirmed rumours have been circulating within the firm of a major breakthrough in removing the problem of latency – the physical limit on the time it takes a signal to travel down a wire, ultimately governed by the speed of light.

For years traders have been trying to reduce execution latency to provide competitive advantage in a highly competitive, fast-moving environment. The focus has moved from seconds to milliseconds and now to microsecond savings.

Many financial services and technology organisations have attempted to solve this problem by reducing data hopping and routing, going as far as placing their hardware physically close to the source of data (such as in an exchange’s data centre) to minimise latency, but no one has solved the issue – yet.
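For a sense of why colocation is where the conventional road ends, here is a quick Python back-of-the-envelope on light-in-fibre travel times; the distances and refractive index are rounded assumptions for the example.

```python
# Rough illustration of the physical floor on latency that colocation attacks:
# even at the speed of light in fibre, distance costs real microseconds.
SPEED_OF_LIGHT_VACUUM_KM_S = 299_792          # km/s
FIBRE_REFRACTIVE_INDEX = 1.47                 # light in glass is ~1/1.47 as fast
speed_in_fibre = SPEED_OF_LIGHT_VACUUM_KM_S / FIBRE_REFRACTIVE_INDEX

for route, km in [
    ("cross-colo cage (~0.1 km)", 0.1),
    ("across London (~20 km)", 20),
    ("London to New York (~5,600 km)", 5_600),
]:
    one_way_us = km / speed_in_fibre * 1_000_000
    print(f"{route}: ~{one_way_us:,.0f} microseconds one way")
# ~0.5 us within the data centre versus ~27,000 us across the Atlantic: why
# every serious latency play ends with hardware sitting next to the exchange.
```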

It sounds like this bank may have gone one step further. It is known that at the boundary of the speed of light, physics as we know it changes (quantum mechanics is an example, where the time/space continuum becomes ‘fuzzy’). Conventional physics states that travelling faster than the speed of light, and so seeing into the future, would require infinite energy and is therefore not possible.

Investigation with a number of insiders at the firm has resulted in an amazing and almost unbelievable insight. They have managed to build a device which ‘hovers’ over the present and immediate future. Little detail is known about it, but it is understood to be based on the previously unproven ‘Alcubierre drive’ principle. This allows the trading system to predict (in reality, observe) the next direction of the market, providing invaluable trading advantage.

The product is still in test mode, as the effect of trading ahead of data it has already traded against is producing outages in the system, which then tries to correct the error in the future data, which again changes the data, ad infinitum… The prediction model only allows a small glimpse into the immediate future, which also limits the window of opportunity for trading.

The power requirements of the equipment are so large that it has had to be moved to the data centre environment, where consumption can be more easily hidden (or not, as the power outages showed).

If the bank really does crack this problem then it will have the ultimate trading advantage – the ability to see into the future and trade with ‘inside’ knowledge, legally. Unless another bank is doing something similar in the ‘trading arms race’, the bank will quickly become dominant and the others may go out of business.

The US Congress has apparently discovered some details of this mechanism and is requesting that the bank disclose details of the project. The bank is understandably reluctant to do so, as it has spent over $80m developing the system and wants to make some return on its investment.

If this system goes into true production mode surely it cannot be long before Financial Regulators outlaw the tool as it will both distort and ultimately destroy the markets.

Of course the project has a codename…. Project Tachyons

No one from the company was available to comment on the accuracy of the claims.

Posted on : 29-03-2019 | By : richard.gale | In : Finance, Uncategorized



Do you believe that your legacy systems are preventing digital transformation?

According to the results of our recent Broadgate Futures Survey, more than half of our clients agreed that digital transformation within their organisation was being hampered by legacy systems. Indeed, no one “strongly disagreed”, confirming the extent of the problem.

Many comments suggested that this was not simply a case of budget constraints; rather, the sheer size, scale and complexity of the transition had deterred organisations, for fear that they were not adequately equipped to deliver successful change.

Legacy systems have a heritage going back many years, to the days of the mega mainframes of the 70s and 80s. This was a time when banks were the masters of technological innovation. We saw the birth of ATMs, BACS and international card payments. It was an exciting time of intense modernisation. Many of the core systems that run the finance sector today are the same ones that were built back then. The only problem is that, although these systems were built to last, they were not built for change.

The new millennium brought another significant development with the introduction of the internet, an opportunity the banks could have seized to develop new, simpler, more versatile systems. Instead, they decided to adopt a different strategy and modify their existing systems; in their eyes there was no need to reinvent the wheel. They made additions and modifications as and when required. As a result, most financial organisations have evolved over the decades into complex networks, a myriad of applications and an overloaded IT infrastructure.

The Bank of England itself has recently been severely reprimanded by a Commons Select Committee review, which found the Bank to be drowning in out-of-date processes in dire need of modernisation. Its legacy systems are overly complicated and inefficient; following the merger with the PRA in 2014, its IT estate comprises duplicated systems and extensive data overload.

Budget, as stated earlier, is not the only factor preventing digital transformation, although there is no doubt that these projects are expensive and extremely time consuming. The complexity of the task and the fear of failure is another reason why companies hold on to their legacy systems. Better the devil you know! Think back to the TSB outage (there were a few…): systems were down for hours and customers were unable to access their accounts following a system upgrade. The incident ultimately led to huge fines from the Financial Conduct Authority and the resignation of the Chief Executive.

For most organisations abandoning their legacy systems is simply not an option so they need to find ways to update in order to facilitate the connection to digital platforms and plug into new technologies.

Many of our clients believe that it is not the legacy systems themselves which are the barrier, but the inability to access the vast amount of data stored in their infrastructure. It is the data that is the key to digital transformation, so accessing it is a crucial piece of the puzzle.

“It’s more about legacy architecture and lack of active management of data than specifically systems”

By finding a way to unlock the data inside these out-of-date systems, banks can decentralise their data, making it available to the new digital world.

With the creation of such advancements as the cloud and APIs, it is possible to sit an agility layer between existing legacy systems and newly adopted applications. HSBC has successfully adopted this approach, using an API strategy to expand its digital and mobile services without needing to replace its legacy systems.
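To make the “agility layer” idea concrete, here is a minimal Python sketch of the pattern: legacy data is translated at the edge into JSON that digital channels can consume, leaving the core system untouched. The fixed-width record layout, field names and values are assumptions for the example, not HSBC’s or any vendor’s actual interface.

```python
# Hedged sketch of an "agility layer": a thin adapter that translates records
# from a legacy system into clean JSON for modern digital channels, without
# touching the legacy system itself.
import json

# Assumed legacy layout: 10-char account id, 20-char name, 12-char balance in pence
FIELDS = [("account_id", 10), ("customer_name", 20), ("balance_pence", 12)]

def parse_legacy_record(raw: str) -> dict:
    """Turn one fixed-width legacy record into a structure an API can serve."""
    record, offset = {}, 0
    for name, width in FIELDS:
        record[name] = raw[offset:offset + width].strip()
        offset += width
    record["balance"] = int(record.pop("balance_pence") or 0) / 100  # pounds
    return record

def account_api_response(raw: str) -> str:
    """What the agility layer would hand to a mobile app or partner API."""
    return json.dumps(parse_legacy_record(raw))

legacy_row = "0012345678" + "Jane Smith".ljust(20) + "000000123456"
print(account_api_response(legacy_row))
# {"account_id": "0012345678", "customer_name": "Jane Smith", "balance": 1234.56}
```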

Legacy systems are no longer the barrier to digital innovation that they once were. With some creative thinking and the adoption of new technologies legacy can continue to be part of your IT infrastructure in 2019!

https://www.finextra.com/newsarticle/33529/bank-of-england-slammed-over-outdated-it-and-culture

Posted on : 14-03-2019 | By : richard.gale | In : Data, Finance, FinTech, Innovation, Uncategorized



Has the agile product delivery model been too widely adopted?

As a consultancy, we have the benefit of working with many clients across almost all industry verticals. Specifically, over the last 7-8 years we have seen a huge uptake in the shift from traditional project delivery models towards more agile techniques.

The combination of people, process and technology within this delivery model has been hugely beneficial in increasing both the speed of execution and the alignment of business requirements with products. That said, in more recent years we have observed an almost “religious-like” adoption of agile, often, in our view, at the expense of pragmatism and execution focus. A purist approach to agile, where traditional development is completely replaced in one fell swoop, results in failure for many organisations, especially those that rely on tight controls, rigid structures and cost-benefit analysis.

Despite its advantages, many organisations struggle to successfully transition to agile, leading to an unnecessarily high agile project failure rate. While there are several common causes for this failure rate, one of the top causes—if not the leading cause—is the lack of an agile-ready culture.

This has been evident in our own client discussions, which have centred around “organisational culture at odds with agile values” and “lack of business customer or product owner availability” as challenges for adopting and scaling agile. Agile as a methodology does require a corresponding agile culture to ensure success. It’s no good committing to implementing in an agile way when the organisation is anything but agile!

Doing Agile v Being Agile

Adopting an agile methodology in an organisation which has not fully embraced agile can still reap results (estimates vary, but a benchmark is around a 20% increase in benefits). If, on the other hand, the firm has truly embraced an agile approach from CEO to receptionist, then the sky is the limit and improvements of 200% plus have been experienced!

Investing in the change management required to build an agile culture is the key to making a successful transition to agile and experiencing all of the competitive advantages it affords. Through this investment, your business leadership, IT leadership and IT teams can align, collaborate and deliver quality solutions for customers, as well as drive organisational transformation—both today and into the future.

There are certain projects where shoehorning them into agile processes just serves to slow down delivery with no benefit. Some of this may come from the rise of DevOps delivery, but we see it stifling many infrastructure or underpinning projects, which still lend themselves to a more waterfall delivery approach.

The main difference between agile methodologies and waterfall methodologies is the phased approach that waterfall takes (define requirements, freeze requirements, begin coding, move to testing, etc.) as opposed to the iterative approach of agile. However, there are different ways to implement a waterfall methodology, including iterative waterfall, which still practices the phased approach but delivers in smaller release cycles.

Today, more and more teams would say that they are using an agile methodology, when in fact many of them are likely to be using a hybrid model that includes elements of several agile methodologies as well as waterfall.

It is crucial to bring together people, processes and technologies and identify where it makes business sense to implement agile; agile is not a silver bullet. An assessment of the areas where agile would work best is required, which will then guide the transition. Many organisations kick off an agile project without carrying out this assessment and find that following this path is just too difficult. A well-defined transitional approach is a prerequisite for success.

We all understand that today’s business units need to be flexible and agile to survive but following an agile delivery model is not always the only solution.

Posted on : 30-01-2019 | By : richard.gale | In : Uncategorized

