Extreme Outsourcing: A Dangerous Sport?

Posted on : 27-09-2019 | By : kerry.housley | In : Uncategorized

Recently I've thought about an event I attended in the early 2000s, at which there was a speech that really stuck in my mind. The presenter gave a view on a future model of how companies would source their business operations, specifically the ratio of operations managed internally against those transitioned to external providers (I can't remember exactly which event it was, but it was in Paris and the keynote speaker was someone you might remember, named Carly Fiorina…).

What I clearly remember is that, at the time, I considered it a fairly extreme view of the potential end game. She asked the attendees:

Can you tell me what you think is the real value of organisations such as Coca Cola, IBM or Disney?

Answer: The brand.

It’s not the manufacturing process, or operations, or technology systems, or distribution, or marketing channels, or, or… Clearly everything that goes into the intellectual property to build the brand/product (such as the innovation and design) is important, but ultimately, how the product is built, delivered and operated offers no intrinsic value to the organisation. In these areas it’s all about efficiency.

In the future, companies like these would be a fraction of their current size in terms of internal staff and operations.

Fast forward to today and perhaps this view is starting to gain some traction…at least to start the journey. For many decades, areas such as technology services have been sourced through external delivery partners. Necessity, fashion and individual preference have all driven CIOs into various sourcing models. Operations leaders have implemented Business Process Outsourcing (BPO) to low-cost locations, as have other functions such as the HR and Finance back offices.

But perhaps there are two more fundamental questions that CEOs of organisations should ask as they survey their business operations:

  • 1) What functions that we own actually differentiate us from our competitors?
  • 2) Can other companies run services better than us?

It is something that rarely gets either asked or answered in a way that is totally objective. That is of course a natural part of the culture, DNA and political landscape of organisations, particularly those that have longevity and legacy in developing internal service models. But it isn't a question that can be kicked into the long grass anymore.

Despite the green shoots of economic recovery, there are no indications that the business environment is going to return to the heady days of large margins and costs being somewhat "inconsequential". It's going to be a very different competitive world, with increased external oversight and challenges/threats to companies, such as through regulation, disruptive business models and innovative new entrants.

We also need to take a step back and ask a third question…

  • 3) If we were building this company today, would we build and run it this way?

Again a difficult, and some would argue, irrelevant question. Companies have legacy operations and “technical debt” and that’s it…we just need to deal with it over time. The problem is, time may not be available.

In our discussions with clients, we are seeing that this realisation may have dawned. Whilst many companies in recent years have reported significant reductions in staff numbers and costs, are we still just delaying the "death by a thousand cuts"? Some leaders, particularly in technology, have realised not only that running significant internal operations is untenable, but also that a more radical approach should be taken, moving the bar much further up the operating chain towards where the real business value lies.

Old sourcing models looked at drawing the line at functions such as Strategy, Architecture, Engineering, Security, Vendor Management, Change Management and the like. These were considered the valuable organisational assets. Now, I'm not saying that is incorrect, but what has often happened is that these functions have been treated holistically and not broken down into where the real value lies. Indeed, for some organisations we've heard of Strategy & Architecture having between 500 and 1,000 staff! (…and these are not technology companies).

Each of these functions needs to be assessed and the three questions asked. If done objectively, then I'm sure a different model would emerge for many companies, with trusted service providers running much of what was previously thought of as "retained". It is achievable, sensible and maybe necessary.

On the middle and front office side, the same can be asked. When CEOs look at the revenue-generating front office, whatever the industry, there are key people, processes and IP that make the company successful. However, there are also many areas that historically it was a necessity to run internally but which actually add no business value (although, of course, they remain very important). If that's the case, then it makes sense to source them from a specialist provider where the economies of scale and challenges in terms of service (such as from "general regulatory requirements") can be managed without detracting from the core business.

So, if you look at some of the key brands and their staff numbers today in the tens or hundreds of thousands, it might only be those that focus on key business value and shed the supporting functions that survive tomorrow.

Could You Boost Your Cybersecurity With Blockchain?

Posted on : 28-11-2017 | By : Tom Loxley | In : Blockchain, Cloud, compliance, Cyber Security, Data, data security, DLT, GDPR, Innovation

Securing your data, the smart way

 

The implications of blockchain technology are being felt across many industries; in fact, the disruptive effect it's having on Financial Services is changing the fundamental ways we bank and trade. Its presence is also impacting Defence, Business Services, Logistics, Retail: you name it, the applications are endless, although not all blockchain applications are practical or worth pursuing. Like all things with genuine potential and value, it is accompanied by the buzzwords, trends and fads that also undermine it, as many try to jump on the bandwagon and cash in on the hype.

However, one area where tangible progress is being made and where blockchain technology can add real value is in the domain of cybersecurity and in particular data security.

Your personal information and data are valuable, and therefore worth stealing and worth protecting, and many criminals are working hard to exploit this. In the late 90s data collection began to ramp up with the popularity of the internet, and now the hoarding of our personal and professional data has reached fever pitch. We live in the age of information, and information is power. It translates directly to value in the digital world.

However, some organisations, public sector and private sector alike, have dealt with our information in such a flippant and negligent way that they don't even know what they hold, how much they have, or where and how it is stored.

Lists of our information are emailed to multiple people on spreadsheets, downloaded and saved on to desktops, copied, chopped, pasted, formatted into different document types, uploaded on to cloud storage systems and then duplicated in CRMs (customer relationship management systems), and so on…are you lost yet? Well, so is your information.

This negligence doesn't happen with any malice or negative intent but simply through a lack of awareness and a lack of process or procedure around data governance (or a failure to implement what process and procedure do exist).

Human nature dictates that we take the easiest route. Combine this with deadlines needing to be met and a reluctance to delete anything in case we need it later, and we end up with information being continually copied, replicated and stored in every nook and cranny of hard drives, networks and clouds until we don't know what is where anymore. As if this wasn't bad enough, it also makes this information nearly impossible to secure.

In fact, for most, it’s just easier to buy more space in your cloud or buy a bigger hard drive than it is to maintain a clean, data-efficient network.

Big budgets aren't the key to securing data either. Equifax is still hurting from an immense cybersecurity breach earlier this year, during which cybercriminals accessed the personal data of approximately 143 million U.S. Equifax consumers. Equifax isn't the only one; if I were able to list all the serious data breaches over the last year or two you'd end up both scarred by and bored with the sheer volume. The scale is hard to comprehend: the amounts of money criminals have ransomed out of companies and individuals, the amount of data stolen, and the number of companies that have been breached are all huge and growing.

So it’s no surprise that anything in the tech world that can vastly aid cybersecurity and in particular securing information is going to be in pretty high demand.

Enter blockchain technology

 

The beauty of a blockchain is that it kills two birds with one stone, controlled security and order.

Blockchains provide immense benefits when it comes to securing our data (the blockchain technology that underpins the cryptocurrency Bitcoin has never been breached since its inception over 8 years ago).

Blockchains store their data on an immutable record, which means that once the data is stored it's not going anywhere. Each block (or piece of information) is cryptographically chained to the next block in chronological order. Multiple copies of the blockchain are distributed across a number of computers (or nodes), and if an attempted change is made anywhere on the blockchain, all the nodes become aware of it.

For a new block of data to be added, there must be a consensus amongst the other nodes (on a private blockchain the number of nodes is up to you). This means that once information is stored on the blockchain, in order to change or steal it you would have to reverse engineer near-unbreakable cryptography (perhaps hundreds of times, depending on how many other blocks of information were stored after it), and then do that on every other node that holds a copy of the blockchain.
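To make the chaining idea concrete, here is a minimal illustrative sketch (in Python, and not a description of how Bitcoin, Gospel or any particular product is actually implemented) of blocks that each embed the hash of their predecessor, so that altering any earlier block is immediately detectable:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents (which include the previous block's hash)
    payload = json.dumps({k: v for k, v in block.items() if k != "hash"}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, data):
    # Each new block records the hash of the block before it
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain):
    # Recompute every hash; a tampered block (or broken link) is reported
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False, i
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False, i
    return True, None

chain = []
add_block(chain, "customer list v1")
add_block(chain, "customer list v2")
print(verify(chain))            # (True, None)

chain[0]["data"] = "tampered"   # simulate an unauthorised change
print(verify(chain))            # (False, 0) - the change is detected
```

In a real distributed ledger, each node would run this kind of verification independently and refuse blocks that the majority does not agree on; the sketch only shows why rewriting history is so expensive.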

That means that when you store information on a blockchain, it is all transparently monitored and recorded. Another benefit of using blockchains for data security is that private blockchains are permissioned, so accountability and responsibility are enforced by definition, and in my experience when people become accountable for what they do they tend to care a lot more about how they do it.

One company that has taken the initiative in this space is Gospel Technology. Gospel Technology has taken the security of data a step further than simply storing information on a blockchain; they have added another clever layer of security that enables the safe transfer of information to those who do not have access to the blockchain. This makes it well suited to dealing with third parties, or those within organisations who don't hold permissioned access to the blockchain but need certain files.

One of the issues with blockchains is the user interface. It’s not always pretty or intuitive but Gospel has also taken care of this with a simple and elegant platform that makes data security easy for the end user.  The company describes their product Gospel® as an enterprise-grade security platform, underpinned by blockchain, that enables data to be accessed and tracked with absolute trust and security.

The applications for Gospel are many and it seems that in the current environment this kind of solution is a growing requirement for organisations across many industries, especially with the new regulatory implications of GDPR coming to the fore and the financial penalties for breaching it.

From our point of view as a consultancy in the cyber security space, we see the genuine concern and need for clarity, understanding and assurance among our clients and the organisations that we speak to on a daily basis. The realisation that data and cyber security is now something that can't be taken lightly has begun to hit home. The issue for most businesses is that there are so many solutions out there it's hard to know what to choose, and so many threats that trying to stay on top of them without dedicated staff is nearly impossible. However, the good news is that there are good quality solutions out there, and with a little effort and guidance, and a considered approach to your organisation's security, you can turn back the tide on data security and protect your organisation well.

Preparing for the emerging upturn

Posted on : 31-07-2014 | By : jo.rose | In : Innovation

In this article, we consider the timing of the upturn and the implications for the type of change activity to be planned for and some key principles that help ensure success.

I thought I would start with a few sobering statistical observations: Post the 2007-2008 crash, the Gross Value Added by Financial Services and Insurance to the UK economy fell to below that of manufacturing and retail. Further, since GDP bottomed out in 2009, Financial Services output has generally failed to keep track with both GDP growth and other services output. Unsurprisingly, as the crash unfolded, participants in the industry decimated their change budgets as part of wider belt tightening measures.

More recently, official statistics point to the downturn being well and truly over.  However, are the effects of the crash still in evidence in respect of change investment confidence?

We would assert that many companies within the Financial Services industry remain uncertain about their medium term economic prospects as evidenced by a focus on investment in non-discretionary change, such as regulatory requirements, M&A integration, Business-As-Usual maintenance and on smaller scale efficiency or client improvements.

We appear to be sauntering, tentatively, up to a turning point in the industry. The latest CBI/PwC Financial Services survey clearly shows optimism, employment and business volumes up across the sector, despite headwinds regarding banking capacity, regulatory and risk management investment, consumer distrust of debt, and increased competition both from new entrants and as a result of the internet.

As a result there will likely be some relaxation of change budgets, but from a much shrunken base. In the face of increased demand but probably only modestly increased budget, the requirement to invest wisely is more pressing than ever. This means doing only the right things and doing them well.

With a view to the future and a likely phase of modest economic growth, what additional considerations are there for change investment?

With reference to the latest CBI / PwC Financial Services survey, competition has emerged as the primary expected constraint on business growth over the next year. The business environment remains uncertain and so revenue growth will be both difficult to quantify and near impossible to ensure. This suggests investment priorities should be towards reducing the cost base where the impact upon the P&L will be easier to quantify and more realistic to achieve.

Nevertheless, the upturn means that investment in new sources of revenue will need to be made. However, given the business case uncertainty involved, it will be essential that any investments have measurable targets, are aligned with clear strategic goals and have strong sponsorship and accountability.

We would always advocate two broad principles to underpin any organisation’s change programme:

(1)       Sort out the macro agenda first. There is no point ensuring you are super efficient if you’re actually doing the wrong thing. It’s critical that the strategic agenda of the organisation can be traced down to every aspect of the change portfolio, however technical the change may appear at first glance.

(2)       Every aspect of the change portfolio should be owned by an internal customer. Those responsible for delivery aren’t great sponsors because they inevitably have to second-guess the needs of the business or function they are serving. Even overtly technical changes such as network resilience or data architecture improvements really should be owned by the business functions that rely on them.

In our experience, most of the good behaviours you need from an organisation derive from these two principles of operation, but few organisations fully observe them. Taking the time to consider business ambitions over, say, 3 years and to gain the buy-in to every aspect of the change portfolio really pays dividends in the longer term.

In summary, an inoculation plan for uncertain times should include:

  1. Define Strategic Plan: Ensuring the organisation has a clear sense of growth ambitions and business targets to act as focus for all organisational activity. This means having a clear understanding of how strategic targets will be achieved within every department and ensuring those departments have the process and infrastructure capacity to meet anticipated demand.
  2. Go for efficiency: Targeting operational efficiency as a means to maximising profitability. Given increased competition, asset growth by organic means might be difficult to achieve, so operational efficiency, whether through outsourcing, industry collaboration, reducing the product mix or process improvement, should be on the agenda.
  3. Ensure delivery effectiveness: Having determined what market segments to target, speed to market, as determined by product development processes and infrastructure implementation, needs to be optimal. Are contributing departments integrated and working as required?

Although easier said than done, these things can all be achieved through a combination of strategic planning, capability improvement and robust policy implementation.

Thanks to Graham Dash at Luminosity Services for this viewpoint.

If you would like to debate or add to any of the points raised in this article, feel free to get in touch through any of the communications channels below.

Email: graham.dash@citylsl.com

LinkedIn: graham dash

 

Graham is a Director of Luminosity Services Ltd, founded in 2009 to provide specialist consulting services to asset and wealth management companies, investment banks and the organisations that service them. Our leadership team, comprising Graham Dash and Syd Wilkinson, has more than five decades' collective experience in change management, most of it in leadership positions in tier 1 banks, asset managers and consultancies. Our specialisation is "change": the ability to plan, manage and deliver business improvements. Please visit our website for more details.

Cloud as an “Innovation Enabler”

Posted on : 30-06-2014 | By : john.vincent | In : Cloud

It seems that most people we come across in our daily activities now agree that cloud computing is a key disrupter to "traditional" technology service delivery. We no longer start conversations with "let's define what we mean by cloud computing" or "cloud means different things to different people", or have to ensure all documents include descriptions of public, private and hybrid cloud as laid out by NIST (the National Institute of Standards and Technology).

People get cloud now. Of course, there are still naysayers and those that raise the security, compliance or regulatory card, but those voices are now becoming fainter (Indeed, if you look closer you’ll often find that what it actually stems from is a cultural fear, such as loss of control).

If we look at the evolution and adoption of cloud technology, it has predominantly been focused around two business drivers, efficiency and agility. The first of these took some time just from an economic business case perspective. As with most new technologies or ideologies, economies of scale create the tipping point for accelerating adoption but we have now reached the point where the pure cost benefits of on-demand infrastructure are compelling when compared to the internally managed alternative.

The agility angle requires more of a shift in the operating model and mindset for technology organisations. CIOs are generally used to owning and managing infrastructure in “tranches” – deploying additional compute capability for new applications or removing it for consolidation, rationalisation and changes in business strategy.

What cloud technologies provide is the capability for matching demand and supply of compute resource without step changes. To deliver this, however, requires improved forecasting, provisioning and monitoring processes within the technology organisation.
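As a rough illustration of that demand-and-supply matching, here is a hedged sketch of the kind of provisioning rule that sits behind elastic capacity; the thresholds and function names are invented for the example rather than taken from any particular cloud provider's API.

```python
def desired_instances(current, cpu_utilisation, target=0.60, min_n=2, max_n=20):
    """Move the fleet size so that average CPU utilisation heads towards the target.

    A simple proportional rule: a hot fleet grows, an idle fleet shrinks.
    Real autoscalers layer forecasting, cooldowns and step policies on top of this.
    """
    if cpu_utilisation <= 0:
        return min_n
    wanted = round(current * cpu_utilisation / target)
    return max(min_n, min(max_n, wanted))

print(desired_instances(4, 0.90))   # 6  -> running hot, scale up
print(desired_instances(10, 0.20))  # 3  -> mostly idle, scale down
```

The point is less the arithmetic than the operating model behind it: the forecasting, provisioning and monitoring processes that feed such a rule are what most technology organisations still need to build.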

So that’s where most organisations have positioned the cloud. However, what about using cloud to drive business innovation?

A recent McKinsey study on Cloud and Innovation made the following point:

The problem in many cases is that adopting cloud technologies is an IT initiative, which means that cloud solutions are all around improving IT and IT productivity. But that’s not where growth is going to come from. . . Incremental investments in productivity don’t drive growth. . . Investments need to go into innovation and disruptive business models . . . Unless companies are asking themselves how to use the cloud to disrupt their own business models or someone else’s, then adopting the cloud is just another IT project.

This observation encapsulates the current situation well: we often see cloud in the category of "another IT project". We saw something similar with the whole "Big Data" hype (not that we like that label) in recent years, when some IT organisations were building capabilities with products like Hadoop without really knowing what the business objectives or value were. Sound familiar?

Building further on this, we see the problem with driving innovation through cloud based technology as two-fold.

Firstly, many organisations still struggle to foster innovation, whether within the company boundaries or via external ventures and partnerships. We have written about this in previous articles (here as related to innovation in banks). Although things are developing, with companies building "Digital Business Units" as completely separate entities (staffed with both business and IT stakeholders), or sponsoring/funding start-up programmes, it is still too slow. Sadly, innovation is too often just an objective on a performance appraisal which was "achieved" through something fairly uninspiring.

The second point is that using cloud technology as an enabler of innovation requires a high degree of abstraction between the current and future state. It needs people to work together who understand and can shape:

  • The current value of a business and history from a people, process, asset and customer perspective
  • How cloud technology can innovate and underpin new digital channels, such as mobile, social, payments, the internet-of-things and the like
  • How to change the mindset of peer C-level executives to embrace the "art of the possible" and to take decisions that will bring a step change in the company's client services

The challenge facing many organisations is that the shift to innovative cloud-based services, which connect clients, services, data and devices on a potentially huge scale, is not supported by traditional technology architectures. It jars with the old, tried and tested way of designing technology infrastructure within defined boundaries.

However, if organisations do not adapt and innovate then the real threat comes from those companies who know nothing more than “innovating in the cloud”. They started there and use it not only as an efficiency and agility tool but to deliver new and disruptive cloud based business services. To compete, traditional organisations will need to evolve their cloud based innovation.

Is it time for Joint Shared Services?

Posted on : 29-11-2013 | By : john.vincent | In : Innovation

Last month we wrote about how the rate of technology change is outpacing the internal IT departments of organisations. It certainly seems that the "squeeze" is on, with cloud and external providers offering more agile compute services at the infrastructure level (now at an on-demand cost which can compete), and business consumers procuring what they need, when they need it and, of course, where they need it through Software as a Service (SaaS) providers.

Two years ago the ability for CIOs to raise the virtual "Red Card" at these external forces through risk, compliance, data security, cost and the like still existed, particularly in areas such as financial services (although we constantly heard anecdotes of technology services being bought on credit cards in the front office and expensed back). However, today it is more a case of working out how to protect digital assets and company reputation from the increased decentralisation of technology governance (business/end-user empowerment), whilst continuing to deliver operational services against a backdrop of having to justify value.

So, whilst this move of technology governance to the corporate edges continues, the question is “What approach should organisations take to sourcing their underpinning infrastructure commodity services?”

We have seen decades of ebb and flow in the sourcing of technology services….outsourcing, off-shoring, near-shoring, right-shoring (we may have finally run out of prefixes…), managed services and the like. Internally, organisations have coupled this operating model with shared service functions such as Finance, Human Resources and Operations to deliver further efficiencies. What is less prevalent, however, is collaboration between client organisations.

Large service providers have shown, through economies of scale, the benefits of running client technology platforms. However, whatever your position on outsourcing technology, many would argue that the clients themselves do not benefit fully from these efficiencies. This is of course natural where there is a fragmented delivery chain and limited client-side collaboration. So, is the time right to extend the shared service model and create shared services, or joint ventures, between peer organisations?

If you take the infrastructure layer then we think…YES. As we said in our previous article, where is the business (or more importantly brand) value in having technicians crafting infrastructure services? There are pockets/exceptions, but typically the "compute plumbing" supporting business applications does not drive competitive advantage. However, in today's fast-moving landscape it is very easy to erode value through rigid or elongated timescales for service provisioning.

The pace of change is clearly illustrated by the transformed data centre market. Back in 2005/2006, many large corporate CIOs were scrambling to purchase their own data centres as space and power became scarce. Fast forward to today and many of those same organisations are sitting with surplus capacity.

In the space of a few years, driven by the revolution in virtualisation and cloud computing, it would now seem a bad strategy to build and manage your own data centre facility.

The question to ask is how organisations can collaborate to source their compute requirements together for mutual benefit. For back office processing there have been "carve outs", collaborations or joint ventures, such as in the investment management and insurance markets. Leading on from this, there is no reason why peer organisations couldn't combine to create an SPV/JV for their underlying infrastructure requirements. This has the potential to bring many benefits, including:

  • Increased market leverage for commodity service pricing
  • Reduced fixed overheads and move from Capex to Opex
  • Improved standards and policies in areas such as security and risk management (through collective influence)
  • Increased agility and time to market
  • Enhanced technology innovation 
  • Improved focus on core business competencies

There are many others (and no doubt many counter-arguments, which we are happy to receive…)

So what stops organisations proceeding? Well, most of all we are talking about a cultural shift which, if driven from the technology organisation itself (the CIO), is unlikely to get much traction. This level of change is not something that can be technology driven. This needs to be a top down, business led discussion.

It also doesn't apply only to technology. Many years ago (I think the late 90s) I attended a conference where the speaker talked about measuring real company value and how organisations would, over time, "jettison" those operations that didn't contribute to the customer proposition. What is left in the final end game? In the extreme example it is simply those creating the Strategy and Brand alone, with everything else sourced from the market. When you think about it, it does make sense.

Every year previously we have produced our predictions for the coming 12 months. We don't see this happening in that timeframe, but at least opening up the discussion should be on CEOs' "to-do" lists in 2014…

The next Banking crisis? Too entangled to fail…

Posted on : 30-10-2013 | By : richard.gale | In : Finance

Many miles of newsprint (and billions of pixels) have been generated discussing the reasons for the near collapse of the financial system in 2008. One of the main reasons cited was that each of the 'mega' banks had such a large influence on the market that they were too big to fail; a crash of one could destroy the entire banking universe.

Although the underlying issues still exist (there remains a small number of huge banking organisations), vast amounts of time and legislation have been focused on reducing the risks of these banks by forcing them to hoard capital to reduce the external impact of failure. An unintended consequence of this has been that banks are less likely to lend, constricting firms' ability to grow and so slowing the recovery, but that's a different story.

We think the focus on capital provisions and risk management, although positive, does not address the fundamental issues. The banking system is so interlinked and entwined that one part failing can still bring the whole system down.

Huge volumes of capital are being moved around on a daily basis and there are trillions of dollars 'in flight' at any one time. Most of this is passing between banks or divisions of banks. One of the reasons for the collapse of the UK part of Lehman's was that it sent billions of dollars (used to settle the next day's obligations) back to New York each night. On the morning of 15th September 2008 the money did not come back from the US and the company shut down. The intraday flow of capital is one of the potential failure points within the current system.

Money goes from one trading organisation to another in return for shares, bonds, derivatives or FX, but the process is not instant; there are usually other organisations involved, and the money and/or securities are often in the possession of different organisations at different points in that process.

This "Counterparty Risk" is now one of the areas that banks and regulators are focusing on. What would happen if a bank performing an FX transaction on behalf of a hedge fund stopped trading? Where would the money go? Who would own it and, as importantly, how long would it take for the true owner to get it back? The other side of the transaction would still be in flight, so where would the shares/bonds go? Assessing the risk of a counterparty defaulting whilst ensuring the trading business continues is a finely balanced tightrope walk for banks and other trading firms.

So how do organisations and governments protect against this potential ‘deadly embrace’?

Know your counterparty: this has always been important and is a standard part of any due diligence for trading organisations. What is just as important is to:

Know the route and the intermediaries involved: companies need as much knowledge of the flow of money, collateral and securities as they do of the end points. How are the transactions being routed, and who holds the trade at any point in time? Some of these flows will only pause for seconds with one firm, but there is always a risk of breakdown or failure of an organisation, so 'knowing the flow' is as important as knowing the client (a simple sketch of what this might look like follows these points).

Know the regulations: of course trading organisations spend time understanding the regulatory framework, but in cross-border transactions especially there can be gaps, overlaps and multiple interpretations, with each country or trade body reading the rules differently. Highlighting these and having a clear understanding of the impact and process ahead of an issue is vital.

Understand the impact of timing and time zones: trade flows generally run 24 hours a day, but markets are not always open in all regions, so money or securities can get held up in unexpected places. Again, making sure there are processes in place to overcome these snags and delays along the way is critical.
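As a purely hypothetical sketch of 'knowing the flow', the example below (with an invented data model and invented figures) totals the value currently sitting with each intermediary in a settlement chain, so a firm can see where its in-flight exposure is concentrated at a point in time:

```python
from collections import defaultdict

# Hypothetical in-flight legs: (trade id, current holder, asset, value in GBP)
in_flight = [
    ("T1", "Bank A (settlement agent)", "cash",  25_000_000),
    ("T1", "Custodian X",               "bonds", 24_800_000),
    ("T2", "Bank B (FX intermediary)",  "cash",  10_000_000),
    ("T3", "Bank A (settlement agent)", "cash",   5_500_000),
]

def exposure_by_holder(legs):
    # Total the value currently held by each intermediary in the chain
    totals = defaultdict(int)
    for _trade, holder, _asset, value in legs:
        totals[holder] += value
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

for holder, total in exposure_by_holder(in_flight).items():
    print(f"{holder}: £{total:,}")
# Bank A (settlement agent): £30,500,000 <- the largest concentration of in-flight value
```

A real implementation would of course draw on live settlement and custody feeds rather than a static list, but the principle is the same: exposure should be visible per intermediary, not just per counterparty.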

Trading is getting more complex, more international, more regulated and faster. All these present different challenges to trading firms and their IT departments. We have seen some exciting and innovative projects with some of our clients and we are looking forward to helping others with the implementation of systems and processes to keep the trading wheels oiled…

Self-Diagnosing and Self-Healing Systems

Posted on : 27-03-2013 | By : richard.gale | In : Innovation

Medical internet sites are leading the charge on self-diagnosis, working through a set of symptoms to produce a number of likely outcomes. In the automotive and aeronautical industries the concept of voting-based systems for 'mission critical' decisions is well established (the Airbus has three sets of applications performing the same function, developed and tested by separate teams; 99.9% of the time they all make the same decision correctly, but if there is a dispute the majority 'wins').
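As a hedged sketch of that voting idea (the implementations here are trivial stand-ins, not avionics code), three independently built versions of the same calculation run side by side and the majority answer wins, with any dispute flagged:

```python
from collections import Counter

def vote(implementations, inputs):
    """Run independently built implementations of the same function and take the majority answer."""
    results = [impl(*inputs) for impl in implementations]
    winner, count = Counter(results).most_common(1)[0]
    if count < len(results):
        print(f"dispute: {results} -> majority decision is {winner}")
    return winner

# Three stand-in implementations of the same calculation, "built by separate teams"
impl_a = lambda x, y: x + y
impl_b = lambda x, y: y + x
impl_c = lambda x, y: x + y + 1   # a faulty build, outvoted by the other two

print(vote([impl_a, impl_b, impl_c], (2, 3)))   # logs a dispute, returns 5
```

The value comes from the independence of the implementations: a shared bug defeats the vote, which is why the teams, designs and test suites are kept separate.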

Many business systems rely on an army of people to change, fix, tune and oil the huge number of systems, applications and processes that reside in organisations. Why have the ideas used in other disciplines not been transferred to general business?

We think that there is a lot of long-term potential, but there is a long way to go and the reasons are as follows:

Homogeneity – most systems use similar components or software, but business complexities result in very diverse implementations, and one firm's trade processing flow will look very different from another's. So the cost of producing a generic solution for understanding issues and resolving them automatically currently outweighs the benefits. One major bank we know has identified that 70% of its risk systems across investment banking and corporate have the same functions, but is not going to consolidate due to a combination of politics, strategic focus and potential regulatory impacts. If it did have the desire (and nerve!) to do this, it would be a perfect opportunity to build some simple feedback and decision-making abilities into the applications (we think, anyway…)

Impact – although large, business systems do not generally have the same impact or coverage. A medical self-diagnosis system requires human interaction but will also be broadly the same for seven billion people; if an Airbus A320 crashes due to systems failure, the number of people directly impacted is low, but the effect on the manufacturer, the airline and air travel generally is very visible and high.

Desire – most of these systems are 'good enough' and it is accepted practice to utilise a large team to support an application. Organisations look for efficiencies through standardisation, scaling, outsourcing and generally using lower-cost staff. Organisations benchmark themselves against their peers, and if similar organisations are doing things a similar way then the desire for radical change can be reduced.

Risk – or fear of the unknown. There has been a great deal of research and experimentation with self-diagnosis/healing in electronic control systems, but the field is still young in the business applications space. Being an early mover could result in a very expensive failure, so risk-averse CIOs are unlikely to step up to this challenge without one of their peers going first.

Knowledge – this is, perhaps, the deciding factor in the use of self-diagnosis. The electronic systems that control planes, although immensely complicated, usually have only a small number of potential outcomes; financial systems, with multiple forms of inputs, transformations, calculations, manual overrides, legacy and diverse systems, can have an almost infinite number of outcomes or issues. No trading system is fully tested before it goes live, as the complexity of the testing process would mean the system would be obsolete before it was signed off. Couple that with a 10-year-old accounting engine written by 100 people (95 of whom have left the company), a bought-in messaging system and an outsourced settlement function, and it is little surprise that the inventive, creative minds of experienced human resources are needed to identify and resolve the myriad issues emerging from the infrastructure.

So, for the short term at least, we think the armies of support staff across IT and business support are here to stay. But as technology continues to move forward, we think there is a great opportunity for organisations to make a step change in their support models and start building self-diagnosis and correction into their applications. The results, in terms of operational efficiencies and reduced costs from errors and manual intervention, could be enormous.
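To give a flavour of what "building in self-diagnosis and correction" could look like at the application level, here is a minimal, assumed-simple sketch: a component exposes a health check, and a supervising loop restarts it after repeated failures. A real system would add escalation to humans for anything the automation cannot classify.

```python
import time

class Component:
    """Stand-in for an application component that can report its own health."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def health_check(self):
        return self.healthy

    def restart(self):
        print(f"[self-heal] restarting {self.name}")
        self.healthy = True

def supervise(component, max_failures=3, interval_s=1.0, cycles=10):
    # Diagnose on a fixed interval; apply the automated correction after repeated failures
    failures = 0
    for _ in range(cycles):
        if component.health_check():
            failures = 0
        else:
            failures += 1
            print(f"[diagnose] {component.name} unhealthy ({failures}/{max_failures})")
            if failures >= max_failures:
                component.restart()
                failures = 0
        time.sleep(interval_s)

svc = Component("trade-feed-adapter")
svc.healthy = False                 # simulate a fault
supervise(svc, interval_s=0.01, cycles=5)
```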

 

 


Too big to fail…or too big to succeed?

Posted on : 30-11-2012 | By : jo.rose | In : Finance

In a recent blog we touched on what the future might hold for retail banking, including some of the challenges that have been played out in the open around brand and reputation, plus how potential new entrants could disrupt the traditional business model.

This month we thought we’d explore a little more the “legacy” that is inherent within the larger financial services organisations, specifically from the angle of the infrastructure cost burden that banks carry with them.

Most firms realised a while ago that their ratio of back-office to front-office expenses had become imbalanced and needed to be addressed. Years of growth, increases in business applications, product complexity and acquisition have added layers of cost. Of course, application and infrastructure consolidation is mooted along the way, but is often not executed.

Consequently, cost savings initiatives and operational efficiency have been part of the objectives of technology departments for many years (10 plus for some). Operating models have been modified, organisations have been de-layered, contracts consolidated, software and hardware standardised, development work best-shored, operations departments moved to Eastern Europe, infrastructure outsourced, contingent workforce rates reduced etc…

This is all good stuff, but one question remains (particularly if you’ve been a front office, revenue generating, observer during this period)…Are we going to run out of time?

It’s a serious question and one which we think is valid to ask against the new technology landscape. Let’s explore a few (maybe uncomfortable) points.

  • Could (or should) more have been done at a faster pace over the years? Financial services technology departments have always been a cost centre/enabler, and therefore demonstrating tangible benefits through driving operating efficiencies has been a good way (sometimes the only way) to demonstrate value. So with technology leaders building careers around this, it's good to "hold some cards in the hand" for next year's bonus, right?
  • A second point is around the size and scale of the infrastructures within the big banks. With this complexity building up over many years is it just too big to fix or make competitive? Technology leaders will measure against their peers in terms of delivery and efficiency, but is that actually the problem? The oil tanker analogy is often used and yes, there’s a lot of hull to turn, but should we not be issuing the abandon ship order in favour of a more nimble vessel?
  • Lastly, what about operational resourcing? Whilst sourcing models have changed and target ratios of perm to contract have been tinkered with, how often is the question of really matching demand to supply looked into? Or of measuring productivity and shifting unused resource capacity based on peaks in demand? It's not easy, but perhaps more could be done. Indeed, pointing to reductions in workforce as a measure of driving down costs often detracts from the still substantial size of the remaining organisation.

Again, it's a difficult problem to solve, but the risk is that there is a positive answer to the question of running out of time. Have we simply been delaying the inevitable…a death by a thousand cuts?

Part of the problem is that, for a large percentage of retail customers, what they now need in terms of banking has changed significantly. We know that online/digital is where most customers want to transact, and demand for the more complex branch-based services and advice is declining.

Some banks are adjusting their service portfolios to address this, such as Swedbank, which recently announced it was completely hiving off its digital banking operations into a separate business entity. A recent Telegraph article also talked about the importance of improving online services in order to retain customers, with a third citing frustration with the current offerings. And then there's the new generation, who will most likely have a completely fresh set of banking needs.

So there's a lot that needs to be done to retain customers, stay competitive and tackle the legacy, while at the same time keeping an eye out for new entrants with leaner online services and no "baggage".

But it's not all doom and gloom. Ironically, it could be that the ability to absorb the infrastructure requirements of increased banking regulation might actually deter competition…

Don’t forget to keep a handle on your “Technical Debt”

Posted on : 31-10-2012 | By : john.vincent | In : Finance

Whilst catching up recently with a CIO after a technology briefing, I asked what topics he would like to see covered in future events. He responded fairly quickly with "How to measure Technical Debt within the organisation".

This is an interesting topic. Whilst many use risk methodologies to quantify potential impacts, the monitoring and measurement of technical debt is typically given less focus. So first of all, what do we mean by “technical debt”?

Typically it refers to incomplete or avoided changes to an evolving software architecture/development which amount to a debt that needs to be paid off at some point in the future. Basically it is the gap between:

  • Making a change perfectly – preserving architectural integrity, standards and testing etc…
  • Making a change that works – ensuring it is functionally correct, implemented quickly with as few resources as possible

Debt can build up through lack of documentation, lack of coordination of parallel development resulting in multiple code branches, poor code quality, design deficiencies, time pressures for new functionality etc., which means "interest payments" are needed to remediate the situation at some point in the future.

However, we also believe that the term can be applied to other areas of technology, such as infrastructure, organisation and processes. Operational systems need to be Fixed, Enhanced and Adapted…as do the processes and resources supporting them. They require constant modification in order to remain fit for purpose…and of course all this uses up scarce resource, which is much more difficult to allocate in terms of ROI. Could some of the more public technology outages of 2012 be related to a build-up of technical debt?

Measuring technical debt interest is difficult (there are several theories and computational algorithms out there if you’re interested).  However, taken simply, it is the cost of fixing the shortcuts taken during the release cycle to bring all components up to the same level, be that coding, documentation, infrastructure, design or whatever…
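One simple heuristic, which several code-quality tools approximate, is a "technical debt ratio": the estimated effort to remediate known shortcuts as a fraction of the effort it took to build the component. The components and figures below are invented purely to show the arithmetic:

```python
def technical_debt_ratio(remediation_days, development_days):
    """Estimated effort to fix known shortcuts, as a fraction of the effort to build the component."""
    return remediation_days / development_days

# Hypothetical components with estimated effort in person-days
components = {
    "pricing engine":   {"remediation_days": 120, "development_days": 800},
    "client reporting": {"remediation_days": 45,  "development_days": 150},
    "settlement feed":  {"remediation_days": 10,  "development_days": 400},
}

for name, c in components.items():
    ratio = technical_debt_ratio(**c)
    flag = "review" if ratio > 0.20 else "ok"
    print(f"{name}: debt ratio {ratio:.0%} ({flag})")
# client reporting comes out at 30%: a small system, but heavily "in debt" relative to its size
```

The numbers themselves matter less than the trend and the comparison between components; the ratio simply makes the "interest payments" visible enough to be discussed alongside new feature demand.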

Note: it is generally accepted that a level of technical debt is unavoidable…indeed, in some cases building up a degree of technical debt can be good in terms of time to market or to elicit quick feedback…but this shouldn't be confused with "cutting corners" and should always allow for the ability to pay back in the future.

However, it isn't simple. In the development world there are platforms such as CAST which can measure architectural flaws and allow organisations to estimate technical debt from the results. However, there are many other factors, as we touched on earlier, not least the subjective elements which cannot be mechanised and require a more creative approach; technical debt is not a science.

That said there are certain elements that should be monitored, tracked and in some cases measured. Some of these to keep an eye on are:

  • Time to market for new features. If routine enhancements or new features that appear simple turn out to be complex, or cause seemingly unrelated components to break, that is a sign of too much technical debt. Unwieldy systems and software, or the feeling that support around them is a "black art", are key indicators of under-investment in development rigour/maintenance.
  • Loss of stakeholder engagement. Easy to spot…we’ve all seen or been aware of systems which gradually lose interest/sponsorship, either from a development or support perspective. This can occur when staff move on, business revenues switch (such as Investment Banking vs Retail and the respective underlying systems), or cost efficiency pressures shift priorities elsewhere.
  • System performance degradation. Obvious maybe, but can sometimes get overlooked and not differentiated from capacity management. Potential impacts of technical debt can be found by conducting load testing, monitoring memory utilization, disk reads and writes, CPU usage, network activity, thread creation, etc. and measuring results/trends over time.
  • Communication breakdowns become more frequent. If people are forgetting to share information or keep the team informed of events that impact the project, they clearly have more important things on their minds.
  • Cost Efficiency Pressures. We are living through a difficult period in terms of declining revenues and subsequently increased focus on spend. Whilst discretionary budgets are cut to only mandatory projects, organisations still need to ensure that whilst seeking increased efficiency in underlying applications/infrastructure, shortcuts are not taken which store up future interest payment burdens which ultimately translate into stability issues.
  • Organisation. Likewise, staff are currently being asked to work harder than ever. However, if teams supporting certain applications are consistently in “fire fighting” mode then something isn’t right. An occasional burst of extra hours to meet a critical deadline is fine, or to deal with sporadic issues…if it becomes a trend it will lead to an increase in operational errors.
  • Monitoring team metrics. Even when teams measure their performance using something like velocity or earned value, they often ignore deteriorating performance. If you see dropping productivity and missed deadlines, it is likely only to get worse.

The point is that keeping a handle on technical debt across all areas of the operating model is very important, even more so when economic pressures can encourage skipping a few payments…

Technology process and control frameworks – Time to Modernise

Posted on : 26-07-2012 | By : john.vincent | In : General News

It seems that the cuts are never-ending. Only last week we heard of more reductions in staff headcount at a number of financial institutions, as organisations continue to react to an erosion of business revenues and consequently further pressure on costs. The absolute number of City jobs lost is difficult to obtain (even with a reasonable tolerance margin), but regardless, it's a lot.

Of course, removing people isn’t the only pressure. Infrastructure costs continue to be re-evaluated. Do we really need what would have been a previously routine upgrade to that front office trading system? What about the global finance system that we’ve always desired? New data centre anyone?…Maybe not this decade.

But whilst we (arguably) “return to reality” in the world of technology infrastructure and application services, what about the realignment of human capital? Despite a few well publicised missives during the cuts in technology resources, the reductions have gone largely unnoticed (unless you are unfortunate enough to be on the receiving end). But, like all of financial services, it does have the feel of a “numbers game”. Times are good, then Buy IT…times are bad, Sell IT.

Of course, there has been significant delayering of technology organisations, a removal of non-critical or discretionary/change functions and a focus on only what is designated mandatory (i.e. regulatory change). What next? We’ve touched previously on areas such as cloud service delivery and outsourcing, so let’s leave that for now. No, in order to both address the inevitable future efficiency demands, and also to build a platform for growth, technology organisations need to revisit the fundamental inner workings of their DNA.

Over the years we have all been schooled in the need for good control frameworks and processes, such as ITIL, COBIT and CMMI. The table below lists some of these and the historical timeline.

However, technology organisations can often go way too far down the path of adopting such processes, creating over-complication, reduced clarity and unwarranted resource overhead (and therefore cost). In March of last year we wrote an article called A Framework for Success, in which we explored the need for a Quality Management Framework rather than an all-encompassing and embedded methodology.

The problem is that in some ways a "cottage industry" has developed around the whole control framework piece. In particular, some organisations and individuals have almost turned frameworks such as ITIL into a religion. To be clear, we do support ITIL, and Service Management will play a big part as the delivery of technology services shifts to multiple execution venues in the coming years (see ITSM and the Cloud).

However, we strongly believe that over the last 10-15 years what we have seen is an over-complication and over-adoption of some of these methodologies, leaving the original purpose and value to the organisation way behind.

It is fairly straightforward…the main issue is that there is a direct correlation between the level of process adoption/maturity and the size of the organisation needed to support it. This has spawned a new breed of roles, responsibilities and job titles such as Environment Manager, Release Manager, Service Introduction Manager, Service Protection Manager etc… By doing this, an operating model of increased complexity and interdependence is created which, if left to grow organically, can become unwieldy, cumbersome and create processing inefficiencies. If you have gone down this path, ask yourself: is the value to the organisation clear in terms of business service, accountability and the number of FTEs supporting it?

As we have stated in our previous articles, organisations should apply techniques such as LEAN (or, to be honest, simple practicality) to streamline and remove wastage in the implementation of these control frameworks, be that process, people or technology. This is a tough job…particularly given the cultural and emotional issues of dismantling what is often somewhere between a "favourite son" and a "safety net".

However, if implemented properly, pragmatically and with a degree of realism, it will not only drive short-term efficiencies but also provide much-needed alignment for future service delivery when the upturn in demand returns.