Agile. Is it the new name for in-sourcing?

Posted on : 30-01-2015 | By : richard.gale | In : Innovation

Business, IT and clothing are all similar in that they both lead and follow fashions & trends.

Looking at IT specifically there is a trend to commoditise and outsource as much as possible to concentrate on the core ‘business’ of growing a business. As we all know this has many advantages for the bottom line and keeps the board happy as there is a certainty of service & cost, headcount is down and the CIO has something to talk about in the exec meetings.

At the coalface the story is often a different one: users grow increasingly frustrated with the SLA-driven service, business initiatives are strangled by cumbersome change processes, and support often rests in the hands of a dwindling number of IT staff with deep experience of the applications and the organisation.

So a key question is: how do you satisfy the upward-looking cost/headcount/service mentality whilst keeping the ability to support and change the business in a dynamic, fulfilling way?

Agile is a hot topic in most IT and business departments. It emerged from several methodologies of the 1990s, with roots back to the '60s, and has taken hold as a way of delivering change quickly to a rapidly changing business landscape.

At its core Agile relies on:

  • Individuals & interaction – over process and tools
  • Customer communication & collaboration in the creation process – over agreeing scope/deliverables up front
  • Reactive to changing demands and environment – over a blinkered adherence to a plan

The basis of Agile, though, is a highly skilled, articulate, business- and technology-aware project team that is close to, and includes, the business. In theory this is not the opposite of an outsourced, commodity-driven approach, but in reality the outcome often is.

When we started working on projects in investment organisations in the early ‘90s most IT departments were small, focused on a specific part of the business and the team often sat next to the trader, accountant or fund manager. Projects were formal but the day to day interaction, prototyping, ideas and information gathering could be very informal with a mutual trust and respect between the participants. The development cycle was often lengthy but any proposed changes and enhancements could be story boarded and walked through on paper to ensure the end result would be close to the requirement.

In the front office, programmers would sit next to the dealers, and system changes and tweaks would be delivered almost in real time to react to a change in trading conditions or new opportunities (this is still the case in the more esoteric trading world, where the split between trader and programmer is very blurry). This world, although unstructured, is not that far from Agile today.

Our thinking is that businesses & IT departments are increasingly using Agile not only for its approach to delivering projects but also, perhaps unconsciously, as a method of bypassing the constraints of the outsourced IT model: experienced, skilled, articulate, geographically close people who can think through and around business problems are starting to move otherwise stalled projects forward, enabling the business to develop & grow.

The danger, of course, is that as it becomes more fashionable Agile will go mainstream (some organisations have already built offshore Agile teams) and then become 'last year's model', or obsolete. We have no doubt that a new, improved 'next big thing' will come along to supplant it.

 

Broadgate Big Data Dictionary

Posted on : 28-10-2014 | By : richard.gale | In : Data

A couple of years back we were getting to grips with big data and thought it would be worthwhile putting a couple of articles together to help explain what the fuss was all about. Big Data is still here and its adoption is growing, so we thought it worth updating and re-publishing. Let us know what you think.

We have been interested in Big Data concepts and technology for a while. There is a great deal of interest and discussion with our clients and associates on the subject of obtaining additional knowledge & value from data.

As with most emerging ideas there are different interpretations and meanings for some of the terms and technologies (including the thinking that ‘big data’ isn’t new at all but just a new name for existing methods and techniques).

With this in mind we thought it would be useful to put together a few terms and definitions that people have asked us about recently to help frame Big Data.

We would really like to get feedback, useful articles & different views on these to help build a more definitive library of Big Data resources.

Analytics 

Big Data Analytics is the processing and searching through large volumes of unstructured and structured data to find hidden patterns and value. The results can be used to further scientific or commercial research, identify customer spending habits or find exceptions in financial, telemetric or risk data to indicate hidden issues or fraudulent activity.

Big Data Analytics is often carried out with software tools designed to sift and analyse large amounts of diverse information produced at enormous velocity. Statistical techniques for predictive analysis and data mining are used to search the data and build algorithms.
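
As a simple illustration of the exception-finding described above, the sketch below (our own, not tied to any particular product) flags transactions that sit more than three standard deviations from the mean; the sample data and threshold are assumptions made purely for the example.

```python
import statistics

def flag_exceptions(transactions, threshold=3.0):
    """Return transactions more than `threshold` standard deviations from the mean."""
    values = [t["amount"] for t in transactions]
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [t for t in transactions
            if stdev and abs(t["amount"] - mean) / stdev > threshold]

# Hypothetical sample data - in practice this would stream from a much larger store.
payments = [{"id": i, "amount": 100 + (i % 7)} for i in range(1000)]
payments.append({"id": 9999, "amount": 250000})   # the anomaly we hope to surface

print(flag_exceptions(payments))   # -> [{'id': 9999, 'amount': 250000}]
```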

Big Data

The term Big Data describes amounts of data that are too big for conventional data management systems to handle. The volume, velocity and variety of the data overwhelm databases and storage, with the result that data is either discarded or cannot be analysed and mined for value.

Gartner has coined the term ‘Extreme Information Processing’ to describe Big Data – we think that’s a pretty good term to describe the limits of capability of existing infrastructure.

There has always been "big data" in the sense that data volumes have always exceeded the ability of systems to process them. The tool sets to store, analyse and make sense of the data generally lag behind the quantity and diversity of information sources.

The actual amounts and types of data that qualify as Big Data are constantly being redefined as database and hardware manufacturers move those limits forward.

Several technologies have emerged to manage the Big Data challenge. Hadoop has become a favourite tool to store and manage the data, traditional database manufacturers have extended their products to deal with the volumes, variety and velocity and new database firms such as ParAccel, Sand & Vectorwise have emerged offering ultra-fast columnar data management systems. Some firms, such as Hadapt, have a hybrid solution utilising tools from both the relational and unstructured world with an intelligent query optimiser and loader which places data in the optimum storage engine.

Business Intelligence

The term Business Intelligence (BI) has been around for a long time, and the growth of data, and then Big Data, has focused more attention on this space. The essence of BI is to obtain value from data to help build business benefits. Big Data itself could be seen as BI: a set of applications, techniques and technologies that are applied to an entity's data to help produce insight and value from it.

There are a multitude of products that help build Business Intelligence solutions – ranging from the humble Excel to sophisticated (aka expensive) solutions requiring complex and extensive infrastructure to support. In the last few years a number of user friendly tools such as Qlikview and Tableau have emerged allowing tech-savvy business people to exploit and re-cut their data without the need for input from the IT department.

Data Science

This is, perhaps, the most exciting area of Big Data. This is where the Big Value is extracted from the data. One of our data scientist friends described it as follows: "Big Data is the plumbing and Data Science is the value driver…"

Data Science is a mixture of scientific research techniques, advanced programming and statistical skills (or hacking), philosophical thinking (perhaps previously known as 'thinking outside the box') and business insight. Basically, it is being able to think of new or different questions to ask, translate them into a machine-based format, process them, interpret the results and then ask new questions based on the results of the previous set…

A diagram by blogger Drew Conway describes some of the skills needed, which maybe explains the lack of skills in this space!

 

In addition, Pete Warden (creator of the Data Science Toolkit) and others have raised caution over the term Data Science ("anything that needs science in the name is not a real science") but confirm the need for a definition of what Data Scientists do.

Database

Databases can generally be divided into structured and unstructured.

Structured databases are the traditional relational database management systems such as Oracle, DB2 and SQL Server. They are fantastic at organising large volumes of transactional and other data, can load and query the data at speed, and enforce integrity in the transactional process to ensure data quality.

Unstructured databases are technologies that can deal with any form of data thrown at them and distribute it across a highly scalable platform. Hadoop is a good example, and a number of firms now produce, package and support this open-source product.
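
As a small, hedged illustration of the transactional integrity mentioned above, the snippet below uses Python's built-in sqlite3 module (a stand-in for a full RDBMS such as Oracle or SQL Server) to show an atomic transfer that is rolled back if any statement fails; the table and amounts are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

try:
    with conn:                              # opens a transaction; commits or rolls back
        conn.execute("UPDATE accounts SET balance = balance - 70 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 70 WHERE name = 'bob'")
        # If either UPDATE raised an error, both would be rolled back together -
        # the 'integrity in the transactional process' described above.
except sqlite3.Error as exc:
    print("transfer failed and was rolled back:", exc)

print(dict(conn.execute("SELECT name, balance FROM accounts")))  # {'alice': 30, 'bob': 120}
```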

Feedback Loops

Feedback loops are systems where the output from the system is fed back into it to adjust or improve the system's processing. Feedback loops exist widely in nature and in engineering systems. Think of an oven: heat is applied to warm it to a specific temperature, which is measured by a thermostat; once the correct temperature is reached the thermostat tells the heating element to shut down, until feedback from the thermostat says it is getting too cold and it turns on again… and so on.

Feedback loops are an essential part of extracting value from Big Data. Building in feedback and then incorporating Machine Learning methods starts to allow systems to become semi-autonomous; this frees the Data Scientists to focus on new and more complex questions whilst testing and tweaking the feedback from their previous systems.
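
A minimal sketch of the oven/thermostat loop described above, written in Python purely for illustration (the temperatures, heating and cooling rates are made-up numbers):

```python
def thermostat_loop(target=180.0, start=150.0, steps=25):
    """Simple on/off feedback loop: the measured output (temperature)
    is fed back in to decide whether the heating element runs."""
    temperature, heating = start, True
    for step in range(steps):
        temperature += 5.0 if heating else -2.0      # heat up or cool down
        if temperature >= target:                    # feedback: too hot -> turn off
            heating = False
        elif temperature < target - 5.0:             # feedback: too cold -> turn on
            heating = True
        print(f"step {step:2d}: {temperature:6.1f}C  heater {'on' if heating else 'off'}")

thermostat_loop()
```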

Hadoop

Hadoop is one of the key technologies to support the storage and processing of Big Data. Hadoop was inspired by Google's distributed Google File System and MapReduce processing tools. It is an open-source product under the Apache banner but, like Linux, is distributed by a number of commercial vendors that add support, consultancy and advice on top of the product.

Hadoop is a framework for running applications on large clusters of commodity hardware. The Hadoop framework transparently provides applications both reliability and data motion. Hadoop implements a computational paradigm named map/reduce, where the application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both map/reduce and the distributed file system are designed so that node failures are automatically handled by the framework.

So Hadoop could almost be seen as a (big) bucket into which you can throw any form and quantity of data; it organises the data, knows where it resides, and can retrieve and process it. It also accepts that there may be holes in the bucket and uses additional resources to patch itself up – all in all, a very clever bucket!

Hadoop runs on a scheduling basis: when a question is asked it breaks the query up, shoots the pieces out to different parts of the distributed network in parallel, then waits for and collates the answers.
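
By way of a hedged example, Hadoop's Streaming interface lets you write those "small fragments of work" as plain scripts that read from stdin and write key/tab/value lines to stdout. The word-count mapper and reducer below are a common minimal sketch; the input/output paths, script name and the location of the streaming jar are assumptions that vary by distribution.

```python
#!/usr/bin/env python3
"""Word count in the Hadoop Streaming style: Hadoop pipes each input split
through the mapper, sorts by key, then pipes the sorted stream through the reducer.

On a cluster (jar path and exact options vary by install), something like:
  hadoop jar hadoop-streaming.jar -files wordcount.py \
      -mapper 'wordcount.py map' -reducer 'wordcount.py reduce' \
      -input /data/in -output /data/out

Locally you can simulate the same flow with:
  cat sample.txt | python3 wordcount.py map | sort | python3 wordcount.py reduce
"""
import sys
from itertools import groupby

def mapper(stream):
    for line in stream:
        for word in line.split():
            print(f"{word.lower()}\t1")          # emit key<TAB>value pairs

def reducer(stream):
    pairs = (line.rstrip("\n").split("\t") for line in stream)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)
```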

Hive

Hive provides a high-level, simple, SQL-type language to enable processing of, and access to, data stored in Hadoop files. Hive can provide analytical and business intelligence capability on top of Hadoop. Hive queries are translated into a set of MapReduce jobs that run against the data. The technology is used by many large technology firms, including Facebook and Last.FM. The latency and batch-related limitations of MapReduce are present in Hive too, but the language allows non-Java programmers to access and manipulate large data sets in Hadoop.

Machine Learning

Machine learning is one of the most exciting concepts in the world of data. The idea is not new at all, but the focus on feedback loops of information and on algorithms that take actions and change depending on the data, without manual intervention, could improve numerous business functions. The aim is to find new or previously unknown patterns & linkages between data items to obtain additional value and insight. An example of machine learning in action is Netflix, which is constantly trying to improve its movie recommendation system based on a user's previous viewing, their characteristics, and the behaviour of other customers with a similar set of attributes.
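
As a toy, hedged illustration of the "customers with a similar set of attributes" idea (nothing to do with Netflix's actual system), the sketch below recommends titles by finding the most similar user in a small, invented ratings table:

```python
from math import sqrt

# Invented ratings: user -> {title: score out of 5}
ratings = {
    "ann":  {"Alien": 5, "Heat": 4, "Up": 1},
    "bob":  {"Alien": 4, "Heat": 5, "Blade Runner": 4},
    "cara": {"Up": 5, "Frozen": 4, "Heat": 1},
}

def similarity(a, b):
    """Cosine similarity over the titles both users have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[t] * b[t] for t in shared)
    return dot / (sqrt(sum(a[t] ** 2 for t in shared)) *
                  sqrt(sum(b[t] ** 2 for t in shared)))

def recommend(user):
    """Suggest titles the most similar other user liked but `user` has not seen."""
    others = [(similarity(ratings[user], r), name)
              for name, r in ratings.items() if name != user]
    _, nearest = max(others)
    return [t for t, score in ratings[nearest].items()
            if t not in ratings[user] and score >= 4]

print(recommend("ann"))   # ['Blade Runner'] - each new rating changes future suggestions
```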

MapReduce

MapReduce is a framework for processing large amounts of data across a large number of nodes or machines.

MapReduce diagram (courtesy of Google): http://code.google.com/edu/parallel/img/mrfigure.png

MapReduce works by splitting (or mapping) a request into multiple separate tasks to be performed on many nodes of the system, and then collating and summarising (or reducing) the results into the output.

MapReduce is based on the Java language and is the basis of a number of the higher-level tools (Hive, Pig) used to access and manipulate large data sets.

Google (amongst others) developed and uses this technology to process large amounts of data (such as documents and web pages trawled by its web-crawling robots). It allows the complexity of parallel processing, data location and distribution, and also system failures, to be hidden or abstracted from the requester running the query.
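
To make the map/shuffle/reduce flow concrete, here is a single-process sketch in Python (our own illustration, not Google's or Hadoop's code) that counts page visits per URL from some invented log lines; on a real cluster the map and reduce calls would run in parallel on different nodes.

```python
from collections import defaultdict

log_lines = [                      # invented web-server log fragments
    "10.0.0.1 GET /home",
    "10.0.0.2 GET /products",
    "10.0.0.3 GET /home",
    "10.0.0.1 GET /basket",
]

def map_phase(line):
    """Map: turn one input record into (key, value) pairs."""
    _, _, url = line.split()
    return [(url, 1)]

def shuffle(pairs):
    """Shuffle: group all values by key (done by the framework between phases)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Reduce: collapse each key's values into a summary result."""
    return key, sum(values)

mapped = [pair for line in log_lines for pair in map_phase(line)]
results = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(results)    # {'/home': 2, '/products': 1, '/basket': 1}
```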

MPP

MPP stands for massively parallel processing, and it is the concept that gives the ability to process the volume (and velocity and variety) of data flowing through systems. Chip processing capabilities are always increasing, but to cope with the even faster-growing amounts of data, processing needs to be split across multiple engines. Technology that can split requests into equal(ish) chunks of work, manage the processing and then join the results has been difficult to develop. MPP can be centralised, with a cluster of chips or machines in a single or closely coupled cluster, or distributed, where the power of many distributed machines is used (think of 'idle' desktop PCs being used overnight as an example). Hadoop utilises many distributed systems for data storage and processing and also has fault tolerance built in, which enables processing to continue despite the loss of some of those machines.
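
A small, hedged sketch of the "split into equal(ish) chunks, process in parallel, join the results" pattern, using Python's multiprocessing pool on one machine purely to illustrate the idea (a real MPP database or Hadoop cluster does the equivalent across many machines):

```python
from multiprocessing import Pool

def chunk(data, parts):
    """Split the workload into roughly equal chunks, one per worker."""
    size = max(1, len(data) // parts)
    return [data[i:i + size] for i in range(0, len(data), size)]

def process_chunk(values):
    """The per-node work: here just a toy aggregation over one chunk."""
    return sum(v * v for v in values)

if __name__ == "__main__":
    data = list(range(1_000_000))              # invented workload
    with Pool(processes=4) as pool:            # four parallel 'engines'
        partials = pool.map(process_chunk, chunk(data, 4))
    print(sum(partials))                       # join/collate the partial results
```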

NoSQL

NoSQL really means 'not only SQL'. It is the term used for database management systems that do not conform to the traditional RDBMS model (transaction-oriented data management systems based on ACID principles). These systems were developed by technology companies in response to the challenges raised by very high volumes of data; Amazon, Google and Yahoo built NoSQL systems to cope with the tidal wave of data generated by their users.

Pig

Apache Pig is a platform for analysing huge data sets. It has a high-level language called Pig Latin, combined with a data management infrastructure that allows high levels of parallel processing. Again, like Hive, Pig Latin is compiled into MapReduce requests. Pig is also flexible, so users can add functions and processing for their own specific needs.

Real Time

The challenges in processing the "V"s of big data (volume, velocity and variety) have meant that some requirements have been compromised. In the case of Hadoop and MapReduce the compromise has been the interactive, or instant, availability of results. MapReduce is batch oriented in the sense that requests are sent for processing, scheduled to run and the output then summarised. This works fine for the original purposes, but the demand to be more real-time or interactive is growing: with a 'traditional' database or application, users expect the results to be available instantly or pretty close to instant. Google and others are developing more interactive interfaces to Hadoop. Google has Dremel (the inspiration for the open-source Apache Drill) and Twitter has released Storm. We see this as one of the most interesting areas of development in the Big Data space at the moment.

 

Over the next few months we have some guest contributors penning their thoughts on the future for big data, analytics and data science.  Also don’t miss Tim Seears’s (TheBigDataPartnership) article on maximising value from your data “Feedback Loops” published here in June 2012.

For the technically minded, Damian Spendel also published some worked examples using the 'R' language on Data Analysis and Value at Risk calculations.

These are our thoughts on the products and technologies – we would welcome any challenges or corrections and will work them into the articles.

 

Is it possible to prevent those IT Failures?

Posted on : 30-05-2014 | By : richard.gale | In : Cyber Security

Last month we counted down our Top 10 Technology Disasters. Here are some of our tips on project planning  which may help avoid failure in the future.

Objectives

What is the project trying to achieve? The objectives should be clear, and everyone involved in the project, including the recipients of the solution, needs to know what they are. Having unclear or unstated goals will not only hurt the chances of success but also make it unclear what 'success' looks like if it occurs.

Value 

The value of the project to the organisation needs to be known and ‘obvious’. Too many projects start without this basic condition.

If the organisation is no better off after the project has been completed then there is little point starting it. Better off can be defined in many ways – business advantage/growth, cost savings/efficiency, internal/external push (e.g. something will break or an auditor or regulator requires it to be done).

Projects are too often initiated for unclear or obscure reasons, ranging from "we have some budget to spend on something", through "we would like to play with this new technology and need a project to enable us to do this", to "we've started so we'll finish" when the business has changed or moved on to other priorities.

Having a clear understanding of the value of the work and a method of measuring success through and after the project has delivered should be a fundamental part of any change process.

Scale

Large projects are difficult. Some projects need to be large – there would be little point building half of London's Crossrail tunnels – but large projects seem more likely to fail (or at least get more publicity when they do). Complexity rises exponentially as projects grow, due to the rise in connectivity of the risks, issues, logistics and number of people involved.

Breaking projects down into manageable pieces increases the likelihood of successful outcomes. The pieces do, though, need to be woven into an overall programme or framework to ensure the sum of the parts ends up equalling the whole.

Duration

In a similar vein to Scale above, projects with an extended duration are less likely to achieve full value. Businesses are not static; they change over time, and their objectives and goals change with them. The longer a project runs, the more likely it is that what the business requires now is not what is being delivered.

As outlined above, some projects are so large that they will run for multiple years. If they do, clear milestones need to be set on a much shorter timescale to avoid a loss of control (in terms of scope, time and cost). Regular review points should also be built into lengthy projects to reconfirm that business objectives are still being met – ultimately, that the change is still required…

Accountability

Nothing new here, but someone with both an interest in the success and the seniority to ensure acceptance should be accountable for the success of the project. If the key stakeholder is not engaged in terms of ownership and driving the project along to completion, then the chances of a successful outcome are greatly diminished.

Empowerment

The other side of Accountability is Empowerment. Successful projects need empowered teams that understand the objectives of the project and their important part within it, and are able to make decisions to guide it to completion. Projects with a top-down or command-and-control philosophy may succeed, but the person making all the decisions needs to be right all the time. Teams go into reactive or 'follow without questioning' modes of operating, which increases the likelihood that the wrong decision will be made and accepted, resulting in project failure.

In conclusion: make sure the project goals are clear, make sure it is adding value to the business, keep it short, ensure senior leadership buy-in and ensure the team can make the right decisions! If only it were this easy…

The aggregation of marginal gains – what can we learn from the sport of cycling?

Posted on : 30-09-2013 | By : richard.gale | In : General News

Sir David Brailsford is the major driver behind a revolution in the fortunes of British cycling. The UK is now one of the most successful cycling nations, with two successive Tour de France winners from Team Sky, a team that was put together barely four years ago. Fifteen years ago British cycling was languishing in the lower divisions; now it is riding high in the world rankings.

One of the most interesting techniques Brailsford has applied to cycle coaching is the "aggregation of marginal gains": the sum of analysing & making many small changes to an environment or training plan. Many examples have been quoted, such as heating bib shorts before use to keep the muscles warm, wiping tyres down with alcohol before the start of races to clean grit off, and employing a chef to provide optimised meals for the riders.

One specific example of this is the Team Sky bus. Every competitor has a bus but, before Brailsford and his team, none had thought about it in the same way. Team Sky started from scratch and built theirs out to provide the perfect environment to support the riders on the tours. Every part of the riders' routine was analysed and an environment was then designed to meet their needs perfectly. Riders need lots of clean, dry kit; they need lots of nutritious, interesting food; and they need somewhere private to discuss the day's events and plan for the next one. So the bus included washing machines (muffled of course), meeting rooms, a kitchen and sleeping areas customised for the riders.

The attention to detail (and an almost unlimited budget) showed through when two brand new Volvo coaches were torn apart and 9,000 man hours of kitting out took place. This process involved the coaches, riders and other staff, with continuous feedback refining the result into an additional pair of team members. Initially the rival teams dismissed the buses, nicknamed "Death Stars", as just another bus (albeit an expensive one – they ended up costing around £750k each), but as Sky's daily results on the tours jumped up the leader boards they came to learn from and respect the thought processes involved.

So what lessons can we learn from the Sky approach? The techniques they are using have been borrowed from business, but it is the consistent application of them that makes them work so well.

GB cycling & the Sky team have a similar philosophy based on the following core principles:

Setting ambitious goals

From a standing start in 2010, Brailsford said Team Sky would win the Tour de France within five years. This was seen as ludicrous by the cycling establishment. He disrupted conventional thinking by applying scientific methods to the sport and, with Bradley Wiggins' victory in 2012, it actually took them three years.

We think this 'shooting for the stars' ambition can work just as well for business. Aiming for what could be done, not what is being done, changes the way people think within companies and, given the right environment, support and drive, that ambition does create winning organisations.

Focus on the end result

What is important? All around there are noise, interference and distractions, so keeping the 'blinkers' on and aiming for the end-game is critical. That said, blindly ignoring feedback or responses around you can be fatal too, so ensuring you are aiming for the right end result is just as important.

Teamwork & Ensuring the whole team has one vision

All organisations have teams. Team GB & Sky have ensured the right mix of individuals form a team with a common, shared goal. This is something which is part directed, part in-built and always reinforced. Everyone understands the obligations and rewards of having the single winning vision.

Analyse everything

Data is everything, and unlocking its hidden value is another key to the team's success. Everyone in the team understands the value of capturing as much information as possible, and that data is analysed and replayed in as near to real time as possible. The Sky team sometimes forgo the glory of the 'hands free' roll over the finishing line in order to punch the completion message into their bike computers.

Control & Discipline

There is a poster at the entrance to the team bus with the team rules, re-emphasising the importance of the vision and goals of the team. It does not spell out the penalties for infringement, but a number of people have left the team after breaching the rules either during or before their stint with Sky.

Grow the person

This is the aim of most businesses, but both GB and Sky aim to get inside their team members' heads to understand their motivations, desires and ambitions. This energy is then focused in such a way as to build and improve the team whilst maximising the personal objectives of the person.

Plan and plan flexibility

Team GB & Sky management and riders spend a large amount of their time planning for every eventuality, including differing weather conditions, team strengths, rivals' changing strategies and any other factors that can influence the race. They then produce the strategic plan for the race, the day, the hour or the hill. The important piece is that any changing circumstances are fed into the plan to modify it, or indeed create a new plan, as required. The plan is strong enough to hold up and work, but flexible enough to change and still be a success.

 

All these attributes can be applied to most business areas, and it is the ability to plan and refine every detail which has provided British cycling and Sky with their continued success. Small, continuous improvements bring marginal gains to both sport and business teams.

What is also critical is that the strategy or ‘big picture’ is going in the right direction. There is no point bringing the right pillow if the bus is parked in the wrong town.

 

 

Broadgate Predicts 2013 – Survey Results

Posted on : 27-03-2013 | By : jo.rose | In : Data, Finance, General News, Innovation, IoT

In January we surveyed our clients, colleagues and partners on our predictions for 2013. We are pleased to report that we now have the results, the highlights of which are included below.

Key Messages

Infrastructure as a Service, Cloud and a shift to Data Centre & Hosted Services scored the highest, underlining the move from on-premise to a more utility-based compute model.

Strategies to rationalise apps, infrastructure and organisations remain high on the priority list. However, removing the technology burden built up over many years is proving difficult.

Many commented on the current financial constraints within organisations and their impact on the predictions in terms of technology advancement.

Response Breakdown

Of the total responses received, the vast majority concurred with the predictions for 2013. A total of 78% either “Agreed” or “Strongly Agreed” (broadly in line with the 2012 survey).

Ranking

Ranked from highest scoring to lowest, the continued growth in Infrastructure as a Service had the top overall ranking with 91% agreement, and Crowd-funding the least with 53%.

Respondents

We sent our predictions out to over 700 of our clients and associates. Unlike our previous year's survey, we wanted to get feedback from all levels and functions, so alongside CIOs, COOs and technology leaders we also surveyed SMEs on both the buy and sell side of service delivery organisations.

We would like to thank all respondents for their input and particularly for the many that provided additional insight and commentary.

If you would like a copy of the full report, please email jo.rose@broadgateconsultants.com.

Broadgate Predicts 2013 – Preview

Posted on : 29-01-2013 | By : john.vincent | In : Innovation

Last month we published our 2013 Technology Predictions and asked our readers to give us their view through a short survey. We have had a great response… so much so that we are keeping it open for two more weeks.

However, we thought we would share a few of the findings so far, prior to us producing the final report.

Current Ranking

As we stand, the predictions that generated the most agreement are:

  1. Infrastructure Services Continue to Commoditise
  2. Samsung/Android gain more ground over Apple
  3. Data Centre/Hosting providers continue to grow

Some interesting commentary against these:

Many companies have come to terms with the security and regulatory issues concerning commoditisation and cloud services, although they still choose to build in-house for now. It will take some significant time for IaaS to address the legacy infrastructure burden.

On the Apple debate, respondents agreed enough to place it 2nd but differed a lot in terms of how this will develop… there is a feeling that Apple is struggling to continue to innovate ahead of the market and that consumers are wiser now, together with a cost pressure that, if relieved, would cause users to stay with them.

Regarding Data Centres, the importance of cloud and managed services continues to drive expansion. Within heavily regulated industries such as Financial Services there continues to be a desire to build rather than buy, but respondents questioned for how long. Having your own DC is not a competitive advantage.

At the other end of the scale, the predictions that respondents disagreed with most were:

  • Instant Returns on Investment required (followed closely by)
  • More Rationalisation of IT Organisations

Again, a pick of some of the additional comments:

Whilst there still exists demand for long term and large corporate technology initiatives, the stance is starting to change somewhat towards more agile, focused investments. Unfortunately, legacy issues and organisational culture continue to block progress.

Whilst the market conditions and technology evolution are facilitating a reduction in workforce, respondents cited other, equal forces working to counteract this, in areas such as risk and control, plus offshore operations delivering less value than expected.

Please continue to send us your thoughts before we close!

Interestingly the largest number of No Comments (40%) came against the prediction that “Crowd-funding services continue to gain market share”…maybe an article for February.

Broadgate Predicts – Survey Results

Posted on : 26-01-2012 | By : jo.rose | In : Data, General News

Last month we published 10 Technology Predictions for 2012. We asked readers to send us their views and also distributed a survey to over 400 clients and associates.

Over 120 people responded, made up of CIOs, COOs, Procurement, Technology Change Managers and Subject Matter Experts across industries, on both the buy and sell side.

Of the responses received, 82% either "Agreed" or "Strongly Agreed" with the predictions. In total we received 1,203 answers to the questions, plus numerous additional comments.

The responses provided a great insight into the key strategy areas for the coming year. Some common themes were:

  1. Cloud Computing and the continued Commoditisation of IT scored highest in general agreement.
  2. Social Media and Cloud Computing generated the highest number of comments and continue to polarise opinion on their maturity and place, particularly within Financial Services.
  3. Many commented on the current financial constraints within organisations and the impact on the predictions. These were both positive in terms of driving efficiency and negative around funding any change.

If you would like to contribute or obtain a copy of the full report please contact jo.rose@broadgateconsultants.com.