Has technology outpaced internal IT departments?

Posted on : 31-10-2013 | By : john.vincent | In : Data


In technology we love to put a box around something, or define it in a clear and concise way. Indeed, it makes a lot of sense in many technical disciplines to do this, such as architecture, development, processes, policies, infrastructure and so on. We talk about “the stack”, or “technology towers”, or “Reference Architectures”…it provides a common language for us to define our compute needs. “This switch operates at layer 3 versus layer 2” etc…

In the same way we put our technology human capital into nice, neat boxes. Simple, repeatable stuff: 1) Open up PowerPoint…2) Insert SmartArt…3) Hierarchy-Organisation Chart…and away we go. CIO, next level… Head of Infrastructure, Head of Operations, CTO, Head of Applications, Head of Networks, Architecture, COO (can’t have enough of those)…

The general taxonomy of technology organisations has barely changed since the mid-1980s and actually, until maybe the last five or so years, this has been fine. Whilst technology has evolved, it has done so “within the boxes”. We have gone through shifts in operating model and approach, from mainframe to distributed and back again, but the desktop, data, storage, server, mid-range and other services have remained, and with them the support organisations around them.

However, things are somewhat different now. The pace of change through Consumerisation, Commoditisation and Cloud (the 3Cs) has redefined the way that businesses engage with and capitalise on technology in work and home lives. At the forefront, it comes down to three main business drivers:

  • Increased Agility – access to applications and service provisioning should be as close to instantaneous as the laws of physics will allow
  • Increased Mobility – the ability to access applications anywhere, on any device at any time
  • Increased Visibility – a rich data and application environment to improve business intelligence and decision making

To the end user, everything else is just noise. Security, availability, DR, performance, big data analytics…this just gets sorted. Apple does it. Amazon does it, therefore my IT organisation should be the same. In fact better.

So, how does the traditional IT organisation fit with the new paradigm? Well, the 3Cs certainly provide significant challenges. The issue is that something previously contained within a silo is now breaking down the barriers. Today’s compute requirements are “fluid” in nature and don’t fit well with the previous operating models. Data, once centralised, contained and controlled, is now moving to the organisational edges. Applications need to be accessible through multiple channels and deployed quickly. Resources need to scale up (and down) to meet, and more importantly match, business consumption.

How does the organisation react to these challenges? Does it still fit neatly into a “stack” or silo? Probably not. How many people, processes and departments does the service pass through in order to provision, operate and control? Many, in most cases. Can we apply our well-constructed ITIL processes and an SLA? No. Can we scale quickly for new business requirements from a people perspective? Unlikely…

So what is the impact? Well, it wasn’t that long ago that CIOs spent much of their time declaring war on Shadow IT departments within business functions. With “Alex Ferguson-like” vigour they either moved them into the central technology organisation or squeezed them out, through cost or service risk.

However, it seems that the Shadow IT trend is back. Is this a reaction to the incumbent organisation being unable to provide the requisite level of service? Probably.

I guess the question we should ask is whether a decentralised model, giving more autonomy to business users for certain functions, is actually where we should be heading anyway. Even within IT departments, the split between ownership, definition and execution of services has evolved through global standards and regional/local service deployment. Now perhaps it’s time to go further and really align business and technology service delivery, with a much smaller central organisation controlling the important stuff: security, architecture, underpinning services (like networks), vendor management and disaster recovery.

And then there’s the question of who actually needs to run the underlying technology “compute”. The cloud naysayers are still there, although the script is starting to wear a bit thin. There are very few sacred cows…can internal teams really compete long term? The forward-thinking are laying out a clear roadmap with targets for cloud/on-demand consumption.

The old saying of “we are a [insert business vertical], not an IT company” is truer today than ever. It may just be that it took the 3Cs to force the change.

The next Banking crisis? Too entangled to fail…

Posted on : 30-10-2013 | By : richard.gale | In : Finance


Many miles of newsprint (and billions of pixels) have been generated discussing the reasons for the near collapse of the financial system in 2008. One of the main reasons cited was that each of the ‘mega’ banks had such a large influence on the market that they were too big to fail: a crash of one could destroy the entire banking universe.

Although the underlying issues still exist (there remain a small number of huge banking organisations), vast amounts of time and legislation have been focused on reducing the risks these banks pose by forcing them to hoard capital, so reducing the external impact of a failure. An unintended consequence has been that banks are less likely to lend, constricting firms’ ability to grow and so slowing the recovery, but that’s a different story.

We think the focus on capital provisions and risk management, although positive, does not address the fundamental issues. The banking system is so interlinked and entwined that one part failing can still bring the whole system down.

Huge volumes of capital are being moved around on a daily basis and there are trillions of dollars ‘in flight’ at any one time. Most of this is passing between banks or divisions of banks. One of the reasons for the collapse of the UK part of Lehman was that it sent billions of dollars (used to settle the next day’s obligations) back to New York each night. On the morning of 15th September 2008 the money did not come back from the US and the company shut down. The intraday flow of capital is one of the potential failure points with the current systems.

Money moves from one trading organisation to another in return for shares, bonds, derivatives or FX, but the process is not instant; there are usually other organisations involved, and the money and/or securities are often in the possession of different organisations at different points in that process.

This “Counterparty Risk” is now one of the areas that banks and regulators are focusing on. What would happen if a bank performing an FX transaction on behalf of a hedge fund stopped trading? Where would the money go? Who would own it and, just as importantly, how long would it take for the true owner to get it back? The other side of the transaction would still be in flight, so where would the shares or bonds go? Assessing the risk of a counterparty defaulting whilst ensuring the trading business continues is a finely balanced tightrope walk for banks and other trading firms.

So how do organisations and governments protect against this potential ‘deadly embrace’?

Know your counterparty; this has always been important and is a standard part of any due diligence for trading organisations. What is just as important is to;

Know the route and the intermediaries involved; companies need as much knowledge of the flow of money, collateral and securities as they do of the end points. How are the transactions being routed, and who holds the trade at any point in time? Some of these flows will only pause for seconds with one firm, but there is always a risk of breakdown or failure of an organisation, so ‘knowing the flow’ is as important as knowing the client.
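The custody-chain idea above can be sketched as a toy model: record who holds what at each hop of a settlement flow, so that the exposure to any one intermediary failing can be read off directly. All of the names, assets and figures below are invented for illustration; a real system would track this from settlement messages in real time.

```python
from dataclasses import dataclass, field

@dataclass
class Hop:
    """One intermediary holding cash or securities during settlement."""
    holder: str    # organisation currently in possession
    asset: str     # e.g. "USD cash", "UK gilts"
    amount: float  # value held at this hop

@dataclass
class TradeFlow:
    trade_id: str
    hops: list = field(default_factory=list)

    def exposure_to(self, failed_party: str) -> float:
        """Total value in the possession of a given (failed) intermediary."""
        return sum(h.amount for h in self.hops if h.holder == failed_party)

# Hypothetical FX settlement: hedge fund -> executing bank -> correspondent bank
flow = TradeFlow("FX-0001", [
    Hop("ExecutingBank", "USD cash", 50_000_000),
    Hop("CorrespondentBank", "EUR cash", 37_000_000),
])

# Value stranded if the executing bank stops trading mid-settlement
print(flow.exposure_to("ExecutingBank"))
```

The point of even a simple model like this is that ‘knowing the flow’ becomes a query rather than a scramble: when a counterparty fails, the firm can immediately list every in-flight trade where that party currently holds cash or securities.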

Know the regulations; of course trading organisations spend time understanding the regulatory framework, but in cross-border transactions especially there can be gaps, overlaps and multiple interpretations of these regulations, with each country or trade body having a different interpretation of the rules. Highlighting these and having a clear understanding of the impact and process ahead of an issue is vital.

Understand the impact of timing and time zones; trade flows generally can run 24 hours a day, but markets are not always open in all regions, so money or securities can get held up in unexpected places. Again, making sure there are processes in place to overcome these snags and delays along the way is critical.

Trading is getting more complex, more international, more regulated and faster. All these present different challenges to trading firms and their IT departments. We have seen some exciting and innovative projects with some of our clients and we are looking forward to helping others with the implementation of systems and processes to keep the trading wheels oiled…

What is the true price of BYOD?

Posted on : 29-10-2013 | By : jo.rose | In : Innovation


“Nature is a mutable cloud which is always and never the same.”  Ralph Waldo Emerson

Our failure to enter into good commercial agreements in the past has hampered our chances of attaining the full value offered by new systems and technologies. The mutable clouds that stream towards us at increasing speeds offer greater potential; yet the commercial challenges are always the same. What are some of these commercial challenges posed by newer technologies? What can you do about them?

Let us consider an example: the trend for CIOs to adopt a Bring Your Own Device [BYOD] policy. Once the concerns about security, data privacy and access have been addressed, a BYOD policy is very attractive to both the user community and the CIO. However, a BYOD policy also starts the timer ticking on a cluster of time bombs: what software suppliers will do about business use of personal software.

Managing software audits properly has always been a difficult task. Many organisations over-deployed software within their environments or allowed software to be used in ways that were not covered by licences or enterprise agreements. How much more difficult does this become where business work is delivered using personal devices? How can the organisation track and report the use of personal devices? Will there be a single personal device used per employee or is business looking at individual instances for desktop, laptop and mobile devices?

One possible approach is for the business to tell the software supplier to pursue staff directly for inappropriately using their home edition software. Staff attitude surveys towards IT might well dip after such an event and the liability will likely return to the business corporation because that is where the benefit lies.

A second solution would be to let the issue drift until the software provider initiates an audit and then cut a deal. Most organisations took this approach to past software compliance liabilities. Given the difficulty of proving the right usage statistics from a BYOD policy, there needs to be plenty of space for a bigger number in the ‘Amount’ box of the settlement cheque.

The best approach is to review software agreements proactively. Pay particular attention to applications and data. You may be lucky and find that some of your agreements are based on headcount. Never, ever surrender a headcount clause. Where you do not have a headcount agreement with a software supplier you can try asking for one, although a new headcount agreement is now likely to be prohibitively expensive with an incumbent supplier.

Assuming you do not have headcount clauses, when you review your software agreements the thought process should be something like this….

  • Can the supplier demonstrate that employees have used personal software to deliver business needs?
  • If the supplier can demonstrate this, we may have a liability.
  • Can we provide accurate statistics for how many instances/devices/employees are involved?
  • If we cannot provide accurate inventory then the liability might end up being a multiple of the number of employees, contractors, consultants and suppliers that work on our behalf.
  • We should be able to reduce the liability if we formulate a commercial stratagem that the supplier will accept.
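The asymmetry in that thought process can be made concrete with a little arithmetic. The sketch below compares a liability based on accurate inventory against one where the supplier assumes a multiple of headcount; the per-licence fee and the devices-per-employee multiplier are purely illustrative assumptions, not real audit rates.

```python
from typing import Optional

def estimate_liability(employees: int,
                       tracked_instances: Optional[int],
                       licence_fee: float,
                       devices_per_employee: float = 2.5) -> float:
    """Rough upper bound on a software audit liability.

    With an accurate inventory, liability scales with tracked instances.
    Without one, the supplier may assume every employee runs the software
    on several personal devices (all figures here are invented examples).
    """
    if tracked_instances is not None:
        return tracked_instances * licence_fee
    return employees * devices_per_employee * licence_fee

# With inventory: 1,200 known instances at 150 per licence
print(estimate_liability(1000, 1200, 150.0))   # 180000.0
# Without inventory: supplier assumes 2.5 devices per head
print(estimate_liability(1000, None, 150.0))   # 375000.0
```

Even under these made-up numbers, the inability to provide accurate statistics roughly doubles the exposure, which is exactly the “bigger number in the ‘Amount’ box” risk described above.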

So is this just scare mongering? US President Obama set up the Office of the Intellectual Property Enforcement Coordinator [IPEC] in 2010 and has significantly expanded enforcement powers in the US. Through negotiations with the European Union, G8 members and G20 members, the US continues to extend its Copyright and Intellectual Property models to the UK and other developed countries as part of a campaign to ‘Fight Worldwide Counterfeiting’, IP Theft and Copyright Infringement. Software suppliers coordinated their common interests through a trade body called the Business Software Alliance [BSA], founded in 1988. Since 2008, software suppliers have seen major reductions in their income because businesses cut back spend on new development projects. Suppliers replaced their lost development income with penalties for non-compliance gathered through more widespread software audits. Most of these are gathered in out of court settlements that are not widely reported.

There is some good news. The BSA and software suppliers focus much of their energy on countries where they see high levels of piracy and the UK is not one of those. The suppliers themselves are also generally amenable to working with businesses to find solutions where software costs remain reasonable. Once you have spotted a problem, work with your commercial or legal teams to formulate a stratagem and bring this in good faith to the supplier.

Do not take too long. Pressures on software firms’ revenues increase as their old products lose market share to new platforms like Android, new applications like Prezi and new productivity tools. The commercial solutions remain the same but the new clouds roll in faster.

 

Many thanks to Sean Pepper for contributing to this article – Sean is an interim manager and consultant with experience of leading Vendor Management and Procurement activities at major banks.

For any questions or more information, please contact: jo.rose@broadgateconsultants.com