Investment Management – what’s left to outsource.

Posted on : 30-09-2014 | By : richard.gale | In : Finance



Many Investment Management (IM) firms have outsourced significant business functions: settlement, collateral management, accounting departments have been ‘lifted out’ of a significant number of IM companies and are being run as a service by a smaller number of specialised financial services organisations.

We think the next phase of outsourcing covers the middle office and some of the front office functions, as IM firms focus on their ability to out-perform, reduce time to market for new products and cut costs. Regulation is a key driver: the complexity of dealing with constant regulatory change is increasing costs and constraining IM firms’ ability to move into new, more profitable markets. New investment themes such as liability-driven investing, and securities such as OTC derivatives, are much more widely utilised in investment firms than, say, 5 years ago. There is also the avalanche of regulation in-flight (AIFMD, Dodd-Frank, MiFIR & Solvency II to name a few) to enforce reporting and risk management. This makes operational activities such as collateral management much more complex than transacting in conventional securities.

A few months back we discussed the future of middle office outsourcing with Maha Khan Phillips in Best Execution magazine and we want to expand on those thoughts here.

Another trend we see is the Investment Banking industry starting to look at outsourcing its non-value-add functions to reduce costs and streamline its business areas. These firms are being impacted in a similar way to IM firms at the turn of the century, in terms of reduced income and a focus on cost reduction.

Outsourcing history and developments

The first phase of outsourcing was often a simple ‘lift-out’ where the back office was separated as a whole – people, systems and processes – with a line drawn across the organisation splitting the remaining front/middle office from the outsourced back office. This was driven by a number of factors, but cost reduction and the drive for better returns were core.

As an approach the lift-out worked, and enabled the IM organisation to focus on its core business of investing money. Over time, as the industry has matured, the limitations of this approach have become clear. The ability to be responsive to new business requirements can be reduced: flexibility in the operating model to react to changes such as business focus, new asset classes and volume variations is often slowed by the split between organisations. The outsourcers will have a number of clients with differing requirements and a limited ability to change, which can impact speed of delivery.

These factors have led to some operational challenges and frictions between client and supplier, which in turn have led to a reassessment of the services and the relationship. The client has a number of choices available and, as the earlier contracts mature, firms are identifying this period as an opportunity to review the current state against alternative strategies. The choices are broadly:

  1. Insource. Undo the lift-out and bring services back in-house. Some organisations have done this with varying degrees of success, but the original rationale for outsourcing, and the business case underpinning it, need to be closely examined.
  2. Migrate to a new outsourcer. This is potentially one of the more complex solutions, but also an opportunity to re-engineer the business. Often there are complex interactions between client and supplier that exist because of the way the outsource deal was constructed historically. This ‘web’ of interfaces, processes and procedures will need to be cleaned up and logically split in order to migrate. The complexity of moving from one (client) organisation to an outsource supplier also rises to a new level when migrating between suppliers.
  3. Stay with the existing supplier and work together to improve service, relationship and capabilities.
  4. A combination of the above, not excluding outsourcing more functions of the client firm.

Assuming the client does not strategically wish to insource the functions, then one of the most important activities is to grow the client/supplier relationship into an aligned partnership. This is the time when the parties need to work together to construct a roadmap towards a more efficient, cost-effective and flexible model that delivers optimised services and the capacity to grow.

This trend is gathering pace as firms look to ‘smarter’ outsourcing, which bundles up groups of functions and lets someone else look after the day-to-day management whilst the firm enjoys a consistent service and pricing. Significant middle office functions are in scope, including what are traditionally seen as front office capabilities such as deal execution and compliance monitoring.

Interestingly, the buy-side has led the way on outsourcing. Investment banks have previously been too busy ‘running’ to keep up – growing new business areas – and have been wary of outsourcing as a brake on their flexibility and ability to expand. The focus has been on IT infrastructure, testing & development, and creating ‘captives’ in lower-cost locations for operations. Now that cost and regulatory pressures are proving a heavy burden, banks are spending more time and energy looking into outsourcing their non-proprietary functions. We think this is one of the trend areas for the next few years.

This is an updated version of our article first published in 2012. The thoughts are still very relevant and we wanted to share them again.

Calculating Value at Risk using R

Posted on : 30-09-2014 | By : richard.gale | In : Data




My recent article focused on using R to perform some basic exploratory data analysis.

The focus of this article will be to highlight some packages aimed at financial analytics (TTR, quantmod and PerformanceAnalytics) and to build an interactive UI using the Shiny package.

For this article we will focus on Value at Risk, a common market risk measure developed by JP Morgan and most recently criticized by Nassim Taleb.

Historical Simulation – Methodology

For the first part of this article I will walk through the methodology of calculating VaR for a single stock using the historical simulation method (as opposed to the Monte Carlo or parametric methods).

VaR allows a risk manager to make a statement about a maximum loss over a specified horizon at a certain confidence level.

Here, VaR will be the Value at Risk for a one-day horizon at a 95% confidence level.

Briefly, the method is: retrieve a returns timeseries over a specified period (usually 501 days), sort it, and take a specific quantile – that quantile is the Value at Risk for the position.

Note, however, that this applies only to a single stock; I will cover multiple stocks in a later article. Normally a portfolio will include not only multiple stocks, but forwards, futures and other derivative positions.
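Before turning to R, the algorithm above can be sketched language-agnostically. Here is a minimal Python illustration (numpy assumed) using made-up normally distributed returns in place of a real price history:

```python
import numpy as np

# Hypothetical daily simple returns standing in for the ~500-day
# history (values here are made up purely for illustration)
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0, scale=0.013, size=500)

# Historical simulation: take the (1 - confidence) quantile of the
# empirical return distribution
confidence = 0.95
var_95 = np.quantile(returns, 1 - confidence)

print(f"1-day 95% VaR: {var_95:.2%}")  # a loss threshold, so negative
```

The same logic, against real market data, is what the R code below implements.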

In R, we would proceed as follows.

##pre-requisite packages
library(quantmod)
library(PerformanceAnalytics)

With the packages loaded we can now run through the algorithm:

X <- c(0.95)
stock <- c("AA") ##American Airlines
## define the historical timeseries
begin <- Sys.Date() - 501
end <- Sys.Date()
## first use of quantmod to get the ticker and populate our dataset
## with the timeseries of Adjusted closing price
tickers <- getSymbols(stock, from = begin, to = end, auto.assign = TRUE)
dataset <- Ad(get(tickers[1]))
## now we need to convert the closing prices into a daily returns
## timeseries - we will use the PerformanceAnalytics package
returns_AA <- Return.calculate(dataset, method = c("simple"))
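The “simple” method used by Return.calculate is just the period-over-period percentage change. For readers outside R, a quick Python equivalent (toy prices, numpy assumed):

```python
import numpy as np

# Toy adjusted closing prices standing in for the downloaded series
prices = np.array([100.0, 102.0, 99.96, 101.0])

# Simple returns: r_t = P_t / P_{t-1} - 1 (the first observation has
# no prior price, so the returns series is one element shorter)
returns = prices[1:] / prices[:-1] - 1

print(np.round(returns, 4))  # approximately [0.02, -0.02, 0.0104]
```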

We now have the dataset and can start to do some elementary plotting, firstly the returns timeseries to have a quick look:



Now, we’ll convert the timeseries into a sorted list and apply the quantile function

##convert to matrix datatype as zoo datatypes can't be sorted, then sort ascending
returns_AA.m <- as.matrix(returns_AA)
sorted <- returns_AA.m[order(returns_AA.m[,1])]
##calculate the 5th percentile,
##na.rm=TRUE tells the function to ignore NA values (not available values)
100*round(quantile(sorted, c(1-X), na.rm=TRUE), 4)
## 5%
## -2.14

This shows us that the 5% one-day value at risk for a position in American Airlines is -2.14%; that is, for a $100 position, roughly once every 20 trading days you would expect to lose more than $2.14.
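As a quick sanity check on that interpretation, the arithmetic can be expressed directly (a Python sketch using the figures from the example above):

```python
confidence = 0.95
var_pct = -2.14 / 100   # the 5% one-day VaR from the example above
position = 100.0        # position size in dollars

# Dollar VaR is just the position scaled by the loss quantile
dollar_var = position * abs(var_pct)

# At 95% confidence, exceedances are expected on 5% of days,
# i.e. about once every 20 trading days
exceed_every = 1 / (1 - confidence)

print(f"Expect a loss worse than ${dollar_var:.2f} about once "
      f"every {exceed_every:.0f} trading days")
```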

Building a UI

A worthwhile guide to using Shiny is available on the Shiny website.

In essence, we will need to define two files in one directory: server.R and ui.R.

We’ll start with the UI code; note that I have used the “Telephones by Region” example from the Shiny gallery as a template.

The basic requirements are:

  1. A drop-down box to choose the stock.
  2. A function that plots a histogram of the returns time-series and shows the VaR as a quantile on the histogram.
library(shiny)
library(sqldf)
##get the dataset for the drop-down box,
##we'll use the TTR package for downloading a vector of stocks,
##and load this into the variable SYMs
suppressWarnings(SYMs <- TTR::stockSymbols())
##use the handy sqldf package to query dataframes using SQL syntax
##we'll focus on Banking stocks on the NYSE.
SYMs <- sqldf("select Symbol from SYMs where Exchange='NYSE' and Industry like '%Banks%'")
# Define the overall UI, shamelessly stolen from the shiny gallery
shinyUI(
  # Use a fluid Bootstrap layout
  fluidPage(
    # Give the page a title
    titlePanel("NYSE Banking Stocks - VaR Calculator"),
    # Generate a row with a sidebar, calling the sidebar "Instrument"
    # and populating the choices with the vector SYMs
    sidebarLayout(
      sidebarPanel(
        selectInput("Instrument", "Instrument:", choices = SYMs)
      ),
      # Create a spot for the histogram
      mainPanel(
        plotOutput("VaRPlot")
      )
    )
  )
)

With the UI layout defined, we can now define the functions in the server.R code:

library(shiny)
library(quantmod)
library(PerformanceAnalytics)

shinyServer(function(input, output) {
  # Fill in the spot we created in ui.R using renderPlot
  output$VaRPlot <- renderPlot({
    ##use the code shown above to get the data for the chosen
    ##instrument captured in input$Instrument
    begin <- Sys.Date() - 501
    end <- Sys.Date()
    tickers <- getSymbols(input$Instrument, from = begin, to = end,
                          auto.assign = TRUE)
    dataset <- Ad(get(tickers[1]))
    dataset <- dataset[, 1]
    returns <- Return.calculate(dataset, method = c("simple"))
    ##use the PerformanceAnalytics package to create the histogram
    ##and add the 95% VaR using the add.risk method
    chart.Histogram(returns, methods = c("add.risk"))
  })
})

In RStudio, you will then see the “Run App” button; clicking it will run your new and Shiny app.


Guest author: Damian Spendel – Damian has spent his professional life bringing value to organisations with new technology. He is currently working for a global bank helping them implement big data technologies. You can contact Damian at


Extreme Outsourcing: Should companies just keep the tip of the iceberg?

Posted on : 30-09-2014 | By : john.vincent | In : General News



Recently I’ve been thinking about an event I attended in the early 2000s, at which one speech really stuck in my mind. The presenter gave a view on a future model of how companies would source their business operations – specifically, the ratio of internally managed functions against those that would be transitioned to external providers (I can’t remember exactly which event, but it was in Paris and the keynote speaker was someone you might remember, named Carly Fiorina…).

What I clearly remember was what, at the time, I considered a fairly extreme view of the potential end game. He asked the attendees:

Can you tell me what you think is the real value of organisations such as Coca Cola, IBM or Disney?

Answer: The brand.

It’s not the manufacturing process, or operations, or technology systems, or distribution, or marketing channels, or, or… Clearly everything that goes into the intellectual property to build the brand/product (such as the innovation and design) is important, but ultimately, how the product is built, delivered and operated offers no intrinsic value to the organisation. In these areas it’s all about efficiency.

In the future, companies like these would be a fraction of their current size in terms of internal staff and operations.

Fast forward to today and perhaps this view is starting to gain some traction…at least to start the journey. For many decades, areas such as technology services have been sourced through external delivery partners. Necessity, fashion and individual preference have all driven CIOs into various sourcing models. Operations leaders have implemented Business Process Outsourcing (BPO) to low-cost locations, as have other functions such as the HR and Finance back offices.

But perhaps there are two more fundamental questions that CEOs and their organisations should ask as they survey their business operations:

  1. What functions that we own actually differentiate us from our competitors?
  2. Can other companies run services better than us?

It is something that rarely gets asked or answered in a way that is totally objective. That is of course a natural part of the culture, DNA and political landscape of organisations, particularly those with longevity and legacy in developing internal service models. But it isn’t a question that can be kicked into the long grass anymore.

Despite the green shoots of economic recovery, there are no indications that the business environment is going to return to the heady days of large margins and costs being somewhat “inconsequential”. It’s going to be a very different competitive world, with increased external oversight and challenges/threats to companies, such as through regulation, disruptive business models and innovative new entrants.

We also need to take a step back and ask a third question…

  3. If we were building this company today, would we build and run it this way?

Again a difficult, and some would argue, irrelevant question. Companies have legacy operations and “technical debt” and that’s it…we just need to deal with it over time. The problem is, time may not be available.

In our discussions with clients, we are seeing that this realisation may have dawned. Whilst many companies in recent years have reported significant reductions in staff numbers and costs, are we still just delaying the “death by a thousand cuts”? Some leaders, particularly in technology, have realised not only that running significant operations in-house is untenable, but also that a more radical approach should be taken to move the bar much further up the operating chain, towards where the real business value lies.

Old sourcing models drew the line at functions such as Strategy, Architecture, Engineering, Security, Vendor Management, Change Management and the like. These were considered the valuable organisational assets. Now, I’m not saying that is incorrect, but what has often happened is that these functions have been treated holistically and not broken down into where the real value lies. Indeed, for some organisations we’ve heard of Strategy & Architecture functions with between 500-1,000 staff! (…and these are not technology companies).

Each of these functions needs to be assessed and the three questions asked. If done objectively, then I’m sure a different model would emerge for many companies, with trusted service providers running many of the functions previously thought of as “retained”. It is achievable, sensible and maybe necessary.

On the middle and front office side, the same can be asked. When CEOs look at the revenue-generating front office, whatever the industry, there are key people, processes and IP that make the company successful. However, there are also many areas where it was historically a necessity to run functions internally, but which actually add no differentiating business value (although they are, of course, still critical). If that’s the case, then it makes sense to source them from a specialist provider where the economies of scale and service challenges (such as “general regulatory requirements”) can be managed without detracting from the core business.

So, looking at some of the key brands with staff numbers today in the tens or hundreds of thousands, it may be only those that focus on core business value, and shed the supporting functions, that survive tomorrow.