Are you able to access all the data across your organisation?

For many years, data has been the lifeblood of the organisation, and more recently many companies have realised the value of this commodity (see our previous article “Data is like oil”).

Advances in technology, processing power and analytics mean that companies can collect and process data in real time. Most businesses are sitting on vast amounts of data, and those that can harness it effectively gain a much deeper understanding of their customers, allowing them to better predict behaviour and improve the customer experience.

Our survey revealed that whilst most companies understand the value of their data and the benefits it can bring, many clients expressed a level of frustration with the systems and processes that manage it. Some respondents did qualify that “most of the data” was available, whilst others admitted some was stranded.

 “Data is in legacy silos, our long-term goal is to provide access through a consistent data management framework”

The deficiencies regarding legacy systems that we also discuss in this newsletter are partly, though not wholly, responsible for this. It is a particular issue in financial services, where many organisations are running on old systems that are too complex and too expensive to replace. Critical company data is trapped in silos, disconnected and incompatible with the rest of the enterprise.

These silos present a huge challenge for many companies. Recalling a comment from one Chief Data Officer at a large institution:

“If I ask a question in more than one place, I usually get more than one answer!”

Data silos are expanding as companies collect too much data and hold onto it for longer than they need to. Big data has been a buzzword for a while now, but it is important that companies distinguish between big data and big bad data! The number of data sources is increasing all the time, so the issue must be addressed if the data is to be used effectively to return business value. The collection of a virtually unlimited amount of data needs to be managed properly to ensure that everything stored has a purpose and can be protected.

Shadow data further exacerbates the issue. This data is unverified, often inaccurate and out of date. Oversharing results in it being stored in locations that are unknown and untraceable, creating yet more data silos hidden from the wider enterprise. Worse, it is often treated as a valid data source, relied upon and used as input into other systems, which can ultimately lead to bad business decisions.

The importance of a robust data governance and management strategy cannot be overstated, particularly for those serious about the digital agenda and customer experience. This is also a topic where business and IT leadership must align on the product strategy and the underlying “data plumbing”. It is not just about systems but also about the organisation’s attitude to data and its importance in the life of every business process. Companies should implement a data management strategy that encompasses not only the internal platforms and governance but also the presentation layer for business users, consumers and data insights.

Posted on : 31-03-2019 | By : richard.gale | In : Data, Finance


The ultimate way to move beyond trading latency?

A number of power surges and outages have been experienced in the East Grinstead area of the UK in recent months. The utility companies involved have traced the cause to one of three high-capacity feeds to a global investment bank’s data centre facility.

The profits created by the same bank’s London-based Proprietary Trading group have increased tenfold over the same period.

This bank employs 1% of the world’s best post-doctoral theoretical physics graduates to help build its black-box trading systems.

Could there be a connection? Wild and unconfirmed rumours have been circulating within the firm that a major breakthrough has been made in removing the problem of latency (the physical limit on the time it takes a signal to travel down a wire, ultimately governed by the speed of light).

For years traders have been trying to reduce execution latency to provide competitive advantage in a highly competitive, fast-moving environment. The focus has moved from seconds to milli- and now microsecond savings.

Many financial services and technology organisations have attempted to solve this problem by reducing data hops, optimising routing, and even placing their hardware physically close to the source of data (such as in an exchange’s data centre) to minimise latency, but no one has solved the issue – yet.

It sounds like this bank may have gone one step further. It is known that at the boundary of the speed of light, physics as we know it changes (quantum mechanics is an example, where the time/space continuum becomes ‘fuzzy’). Conventional physics states that travelling faster than the speed of light, and so seeing into the future, would require infinite energy and is therefore not possible.

Investigation with a number of insiders at the firm has resulted in an amazing and almost unbelievable insight. They have managed to build a device which ‘hovers’ over the present and immediate future. Little detail is known about it, but it is understood to be based on the previously unproven ‘Alcubierre drive’ principle. This allows the trading system to predict (in reality, observe) the next direction in the market, providing invaluable trading advantage.

The product is still in test mode: trading ahead of data the system has already traded against is producing outages, as it then tries to correct the error in the future data, which again changes the data, ad infinitum… The prediction model only allows a small glimpse into the immediate future, which also limits the window of opportunity for trading.

The power requirements for the equipment are so large that it has had to be moved to the data centre environment, where consumption can be more easily hidden (or not, as the power outages showed).

If the bank really does crack this problem then it will have the ultimate trading advantage – the ability to see into the future and trade with ‘inside’ knowledge legally. Unless another bank is doing something similar in the ‘trading arms race’, it will quickly become dominant and the other banks may go out of business.

The US Congress has apparently discovered some details of this mechanism and is requesting that the bank disclose details of the project. The bank is understandably reluctant to do so, as it has spent over $80m on development and wants to make some return on its investment.

If this system goes into true production mode, surely it cannot be long before financial regulators outlaw the tool, as it will both distort and ultimately destroy the markets.

Of course the project has a codename…. Project Tachyons

No one from the company was available to comment on the accuracy of the claims.

Posted on : 29-03-2019 | By : richard.gale | In : Finance, Uncategorized


Do you believe that your legacy systems are preventing digital transformation?

According to the results of our recent Broadgate Futures Survey, more than half of our clients agreed that digital transformation within their organisation was being hampered by legacy systems. Indeed, no one “strongly disagreed”, confirming the extent of the problem.

Many comments suggested that this was not simply a case of budget constraints; rather, the sheer size, scale and complexity of the transition had deterred organisations, which feared they were not adequately equipped to deliver successful change.

Legacy systems have a heritage going back many years, to the days of the mega mainframes of the 1970s and 1980s. This was a time when banks were the masters of technological innovation: we saw the birth of ATMs, BACS and international card payments. It was an exciting time of intense modernisation. Many of the core systems that run the finance sector today are the same ones that were built back then. The only problem is that, although these systems were built to last, they were not built for change.

The new millennium brought another significant development with the introduction of the internet, an opportunity the banks could have seized to develop new, simpler, more versatile systems. Instead, they adopted a different strategy and modified their existing systems; in their eyes there was no need to reinvent the wheel. They made additions and modifications as and when required. As a result, most financial organisations have evolved over the decades into organisations with complex networks, a myriad of applications and an overloaded IT infrastructure.

The Bank of England itself has recently been severely reprimanded by a Commons Select Committee review, which found the Bank to be drowning in out-of-date processes in dire need of modernisation. Its legacy systems are overly complicated and inefficient; following the merger with the PRA in 2014, its IT estate comprises duplicated systems and extensive data overload.

Budget, as stated earlier, is not the only factor preventing digital transformation, although there is no doubt that these projects are expensive and extremely time consuming. The complexity of the task and the fear of failure are other reasons why companies hold on to their legacy systems. Better the devil you know! Think back to the TSB outage (there were a few…): following a system upgrade, systems were down for hours and customers were unable to access their accounts. The incident ultimately led to huge fines from the Financial Conduct Authority and the resignation of the Chief Executive.

For most organisations, abandoning their legacy systems is simply not an option, so they need to find ways to update them in order to connect to digital platforms and plug into new technologies.

Many of our clients believe that it is not the legacy systems themselves which are the barrier, but the inability to access the vast amount of data stored within their infrastructure. It is the data that is the key to digital transformation, so accessing it is a crucial piece of the puzzle.

“It’s more about legacy architecture and lack of active management of data than specifically systems”

By finding a way to unlock the data inside these out-of-date systems, banks can decentralise their data, making it available to the new digital world.

With advancements such as cloud and APIs, it is possible to sit an agility layer between existing legacy systems and newly adopted applications. HSBC has successfully adopted this approach, using an API strategy to expand its digital and mobile services without needing to replace its legacy systems.
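
As a rough illustration of what such an agility layer can look like, the sketch below exposes a clean REST endpoint in front of a legacy record store. It is a minimal Python/Flask example; the account data and field names are invented stand-ins for whatever mainframe adapter an organisation actually has, not a description of HSBC’s implementation.

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for the legacy back end: in reality this would be an adapter to a
# mainframe (MQ, batch file, screen-scrape). The field names are invented.
LEGACY_RECORDS = {
    "12345678": {"ACCT_NO": "12345678", "CUR_BAL": 1042.17, "CCY": "GBP"},
}

def fetch_from_legacy(account_id):
    return LEGACY_RECORDS.get(account_id)

@app.route("/api/v1/accounts/<account_id>")
def get_account(account_id):
    record = fetch_from_legacy(account_id)
    if record is None:
        abort(404)
    # Translate the legacy record into a clean JSON shape for digital channels.
    return jsonify({"id": record["ACCT_NO"],
                    "balance": record["CUR_BAL"],
                    "currency": record["CCY"]})

if __name__ == "__main__":
    app.run(port=8080)
```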

Legacy systems are no longer the barrier to digital innovation that they once were. With some creative thinking and the adoption of new technologies, legacy can continue to be part of your IT infrastructure in 2019!

https://www.finextra.com/newsarticle/33529/bank-of-england-slammed-over-outdated-it-and-culture

Posted on : 14-03-2019 | By : richard.gale | In : Data, Finance, FinTech, Innovation, Uncategorized


Has the agile product delivery model been too widely adopted?

As a consultancy, we have the benefit of working with many clients across almost all industry verticals. Specifically, over the last 7-8 years we have seen a huge uptake in the shift from traditional project delivery models towards more agile techniques.

The combination of people, process and technology with this delivery model has been hugely beneficial in increasing both the speed of execution and the alignment of business requirements with products. That said, in more recent years we have observed an almost “religious” adoption of agile, often, in our view, at the expense of pragmatism and execution focus. A purist approach to agile, where traditional development is completely replaced in one fell swoop, results in failure for many organisations, especially those that rely on tight controls, rigid structures and cost-benefit analysis.

Despite its advantages, many organisations struggle to successfully transition to agile, leading to an unnecessarily high agile project failure rate. While there are several common causes, one of the top ones, if not the leading cause, is the lack of an agile-ready culture.

This has been evident in our own client discussions, which have centred on “organisational culture at odds with agile values” and “lack of business customer or product owner availability” as challenges for adopting and scaling agile. Agile as a methodology does require a corresponding agile culture to ensure success. It’s no good committing to implementing in an agile way when the organisation is anything but agile!

Doing Agile v Being Agile

Adopting an agile methodology in an organisation which has not fully embraced agile can still reap results (estimates vary, but a benchmark is around a 20% increase in benefits). If, on the other hand, the firm has truly embraced an agile approach from CEO to receptionist, then the sky is the limit and improvements of 200% or more have been experienced!

Investing in the change management required to build an agile culture is the key to making a successful transition to agile and experiencing all of the competitive advantages it affords. Through this investment, your business leadership, IT leadership and IT teams can align, collaborate and deliver quality solutions for customers, as well as drive organisational transformation, both today and into the future.

There are certain projects where shoehorning them into agile processes just serves to slow down delivery with no benefit. Some of this may come from the increase in DevOps delivery, but we see it stifling many infrastructure or underpinning projects, which still lend themselves to a more waterfall delivery approach.

The main difference between agile methodologies and waterfall methodologies is the phased approach that waterfall takes (define requirements, freeze requirements, begin coding, move to testing, etc.) as opposed to the iterative approach of agile. However, there are different ways to implement a waterfall methodology, including iterative waterfall, which still practices the phased approach but delivers in smaller release cycles.

Today, more and more teams would say that they are using an agile methodology, when in fact many of those teams are likely to be using a hybrid model that includes elements of several agile methodologies as well as waterfall.

It is crucial to bring together people, processes and technologies and identify where it makes business sense to implement agile; agile is not a silver bullet. An assessment of the areas where agile would work best is required, which will then guide the transition. Many organisations kick off an agile project without carrying out this assessment and find that the path is just too difficult. A well-defined transitional approach is a prerequisite for success.

We all understand that today’s business units need to be flexible and agile to survive but following an agile delivery model is not always the only solution.

Posted on : 30-01-2019 | By : richard.gale | In : Uncategorized


What will the IT department look like in the future?

We are going through a significant change in how technology services are delivered as we stride further into the latest phase of the Digital Revolution. The internet provided the starting pistol for this phase and now access to new technology, data and services is accelerating at breakneck speed.

More recently, the real enablers of a more agile, service-based technology have been virtualisation and orchestration technologies, which allow compute to be tapped into on demand and remove the friction between software and hardware.

The impact of this cannot be overstated. The removal of the need to manually configure and provision new compute environments was a huge step forward, and one which continues with developments in Infrastructure as Code (“IaC”), microservices and serverless technology.
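
To make the IaC idea concrete, the toy sketch below declares desired infrastructure as data and reconciles a pretend inventory towards it idempotently. Real tools such as Terraform or Ansible work on this declare-and-reconcile principle; the resource names and the in-memory “inventory” here are invented for illustration.

```python
# Desired state is declared as data; an idempotent "apply" reconciles the
# environment towards it. Running apply() twice produces no further changes.
DESIRED = {
    "web-01": {"size": "small", "image": "ubuntu-22.04"},
    "web-02": {"size": "small", "image": "ubuntu-22.04"},
    "db-01":  {"size": "large", "image": "postgres-15"},
}

current = {"web-01": {"size": "small", "image": "ubuntu-20.04"}}  # pretend inventory

def apply(desired, current):
    """Create, update or destroy servers so that current matches desired."""
    for name, spec in desired.items():
        if name not in current:
            print(f"create {name} {spec}")
        elif current[name] != spec:
            print(f"update {name} -> {spec}")
    for name in set(current) - set(desired):
        print(f"destroy {name}")
    return dict(desired)

current = apply(DESIRED, current)
current = apply(DESIRED, current)   # second run: nothing to do
```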

However, whilst these technologies continually disrupt the market, the corresponding changes to overall operating models have, in our view, lagged (this is particularly true in larger organisations, which have struggled to shift from the old to the new).

If you take a peek into organisation structures today, they often still resemble those of the late 90s, where infrastructure capabilities were organised by specialism: data centre, storage, service management, application support and so on. There have been changes, particularly more recently with the shift to DevOps and continuous integration and delivery, but there is still a long way to go.

Our recent Technology Futures Survey provided great insight into how our clients (290 respondents) are responding to the shifting technology services landscape.

“What will your IT department look like in 5-7 years’ time?”

There were no surprises in the large majority of respondents agreeing that the organisation would look different in the near future. The big shift is to a more service-focused, vendor-led technology model, with between 53% and 65% believing that this is the direction of travel.

One surprise was a relatively low consensus on the impact that Artificial Intelligence (“AI”) would have on the management of live services, with only 10% saying it was very likely. However, the providers of technology and services formed a smaller proportion of our respondents (28%) and were naturally more positive about the impact of AI.

The Broadgate view is that the changing shape of digital service delivery is challenging previous models and applying tension to organisations and providers alike. There are two main areas where we see this:

  1. With the shift to cloud based and on-demand services, the need for any provider, whether internal or external, has diminished
  2. Automation, AI and machine learning are developing new capabilities in self-managing technology services

We expect that the technology organisation will shift to focus more on business products and procuring the best-fit service providers. Central to this are AI and ML which, where truly intelligent (and not just marketing), can create a self-healing and dynamic compute capability with limited human intervention, as the sketch below illustrates.
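
A minimal sketch of that self-healing idea is shown below: a control loop watches a health signal and remediates without human intervention. The check_health() and restart_service() functions are hypothetical stand-ins for a real monitoring platform and orchestrator call, not any specific product’s API.

```python
import random
import time

def check_health(service):
    # Stand-in: in practice this would query a monitoring/AIOps platform.
    return random.random() > 0.2   # roughly 80% of checks report healthy

def restart_service(service):
    # Stand-in for an orchestrator action (e.g. restarting a container/VM).
    print(f"[remediation] restarting {service}")

def control_loop(service, failure_threshold=3, interval_s=1, cycles=20):
    failures = 0
    for _ in range(cycles):
        if check_health(service):
            failures = 0
        else:
            failures += 1
            if failures >= failure_threshold:   # avoid flapping on one-off blips
                restart_service(service)
                failures = 0
        time.sleep(interval_s)

control_loop("payments-api")
```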

Cloud, machine learning and RPA will remove much of the need to manage and develop code

To really understand how the organisation model is shifting, we have to look at the impact that technology is having on the whole supply chain. We’ve long outsourced the delivery of services. However, if we look at the traditional service providers (IBM, DXC, TCS, Cognizant, etc.) that initially acted as brokers to these new digital technology innovations, we see that they are increasingly being disintermediated, with provisioning and management now directly in the hands of the consumer.

Companies like Microsoft, Google and Amazon have superior technical expertise, and they are continuing to expose their services directly to the end consumer. Thus, the IT department needs to think less about whether to build or procure from a third party and more about how to build a framework of services which “knits together” a service model that best meets its business needs with a layered, end-to-end approach. This fits perfectly with a more business-product-centric approach.

We don’t see in-house technology footprints increasing, with the possible exception of truly data-driven organisations or tech companies themselves.

In our results, the removal of cyber security issues was endorsed by 28%, with a further 41% believing that this was a possible outcome. This represents a leap of faith given the current battle that organisations are undertaking to combat data breaches! Broadgate expects that organisations will increasingly shift the management of these security risks to third-party providers, with telecommunications carriers also taking on more responsibility over time.

As the results suggest, the commercial and vendor management aspects of the IT department will become more important. This is often a skill that is absent in companies today, so a conscious strategy to develop the capability is needed.

Organisations should update their operating model to reflect the changing shape of technology services; the close alignment of products and services with technology provision has never been as important as it is today.

Indeed, our view is that even if your model serves you well today, by 2022 it is likely to look fairly stale. This is because what your company currently offers to your customers is almost certain to change, which will require fundamental re-engineering across, and around, the entire IT stack.

Posted on : 29-01-2019 | By : john.vincent | In : Cloud, Data, General News, Innovation


The Challenges of Implementing Robotic Process Automation (RPA)

We recently surveyed our clients on their views around the future of technology in the workplace and the changes that they think are likely to shape their future working environment. 

One of the areas identified by many clients as a major challenge was the adoption of RPA. We asked the question:

“Do you agree that RPA could improve the efficiency of your business?”

Around 65% of the respondents to our survey agreed that RPA could improve the efficiency of their business, but many commented that they were put off by the challenges that needed to be overcome in order for RPA deployment to be a success. 

“The challenge is being able to identify how and where RPA is best deployed, avoiding any detrimental disruption”

In this article we will discuss in more detail the challenges, and what steps can be taken to ensure a more successful outcome. 

The benefits of RPA are:

  • Reduced operating costs
  • Increased productivity
  • Reduced employee workload, freeing time for higher-value tasks
  • More done in less time!

What Processes are Right for Automation? 

One of the challenges facing many organisations is deciding which processes are good candidates for automation and which to automate first. This line from Bill Gates offers some good advice:

“automation applied to an inefficient operation will magnify the inefficiency”

It follows, therefore, that the first step in any automation journey is reviewing all of your business processes to ensure that they are running as efficiently as possible. You do not want to waste time, money and effort implementing a robot to carry out an inefficient process which will reap no rewards at all.

Another challenge is choosing which process to automate first. In our experience, many clients have earmarked one of their most painful processes as process number one, in order to heal the pain. This fails more often than not, because the most painful process is often one of the most difficult to automate. Ideally, you want to pick a straightforward, highly repetitive process which will be easier to automate, with simple results clearly showing the benefits of automation (a rough illustration of this kind of prioritisation follows below). Buy-in at this stage from all stakeholders is critical if RPA is to be successfully deployed further in the organisation. Management needs to see the efficiency savings, and employees need to see how the robot can help them do their jobs more quickly and free up their time for more interesting work. Employee resistance and onboarding should not be underestimated. Keeping workers in the loop and reducing the perceived threat is crucial to your RPA success.
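
One rough way to make that selection more objective is to score candidate processes on volume, rule-based logic, stability and exception rate. The sketch below is illustrative only (the criteria, weights and example processes are invented, not a Broadgate methodology), but it shows why a high-volume, rule-based process tends to beat the most “painful” one as a first candidate.

```python
# Score candidate processes for automation: favour volume, rule-based logic
# and stability; penalise exceptions. All figures are invented examples.
CANDIDATES = [
    {"name": "invoice data entry", "volume_per_month": 4000, "rule_based": 0.9,
     "stability": 0.8, "exception_rate": 0.05},
    {"name": "client onboarding",  "volume_per_month": 300,  "rule_based": 0.5,
     "stability": 0.4, "exception_rate": 0.30},
]

def automation_score(p):
    volume_factor = min(p["volume_per_month"] / 1000, 5)   # cap the volume effect
    return volume_factor * p["rule_based"] * p["stability"] * (1 - p["exception_rate"])

for p in sorted(CANDIDATES, key=automation_score, reverse=True):
    print(f'{p["name"]:<20} score={automation_score(p):.2f}')
```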

Collaboration is Key 

Successful RPA deployment is all about understanding and collaboration; without these, the project can ultimately fail. In one sense, RPA is just like any other piece of software that you will implement, but in another way it’s not. Implementation involves close scrutiny of an employee’s job, and the employee may feel threatened that the robot will take over and leave them redundant in the process.

IT and the business must work closely together to ensure that process accuracy, cost reduction and customer satisfaction benchmarks are met during implementation. RPA implementation success is both IT- and business-driven, with RPA governance sitting directly in the space between business and IT. Failure to maintain consistent communication between these two sides will mean that project governance is weak and that obstacles, such as potential integration issues of RPA with existing programs, cannot be dealt with effectively.

Don’t Underestimate Change 

Change management should not be underestimated: the implementation of RPA is a major change for an organisation which needs to be planned for and carefully managed. Consistently working through the change management aspects is critical to making RPA successful. It is important to set realistic expectations and to look at RPA from an enterprise perspective, focusing on the expected results and what will be delivered.

 RPA = Better Business Outcomes 

RPA is a valuable automation asset in a company’s digital road map and can deliver great results if implemented well. However, RPA implementations have often not delivered the returns promised, impacted by the challenges we have discussed. Implementations that give significant consideration to the design phase and recognise the importance of broader change management will benefit from better business outcomes across the end-to-end process. Enterprises looking to embark on the RPA journey can take note, avoid the pitfalls and experience the success that RPA can bring.

Posted on : 25-01-2019 | By : kerry.housley | In : Innovation, Uncategorized


It’s Time to Take Control of Your Supply Chain Security

According to the Annual Symantec Threat Report, supply chain attacks rose 200% between 2016 and 2017, confirming the trend for attackers to start small, move up the chain and hit the big time!

Attackers are increasingly hijacking software updates as an entry point to target networks further up the supply chain. Nyetya, a global attack, started this way, affecting companies such as FedEx and Maersk and costing them millions.

Although many corporations have wised up to the need to protect their network and their data, have all their suppliers? And their suppliers’ suppliers? All it takes is a single vulnerability in one of your trusted vendors to gain access to your network, and your and your customers’ sensitive data could be compromised.

Even if your immediate third parties don’t pose a direct risk, their third parties (your fourth parties) might. It is crucial to gain visibility into the flow of sensitive data among all third and fourth parties, and closely monitor every organization in your supply chain. If you have 100 vendors in your supply chain and 60 of them are using a certain provider for a critical service, what will happen if that critical provider experiences downtime or is breached?
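
That concentration question can be made visible with a very simple dependency map. The sketch below (vendor and provider names are invented) counts how many of your third parties rely on each fourth party and flags any provider above a chosen threshold.

```python
from collections import defaultdict

# third-party vendor -> the providers (fourth parties) it depends on (invented)
VENDOR_DEPENDENCIES = {
    "PayrollCo":   ["CloudHost A", "EmailRelay X"],
    "CRM Vendor":  ["CloudHost A"],
    "Analytics Y": ["CloudHost A", "CDN Z"],
    "HR SaaS":     ["CloudHost B"],
}

usage = defaultdict(list)
for vendor, providers in VENDOR_DEPENDENCIES.items():
    for provider in providers:
        usage[provider].append(vendor)

THRESHOLD = 0.5   # flag providers relied on by more than half of your vendors
total = len(VENDOR_DEPENDENCIES)
for provider, vendors in sorted(usage.items(), key=lambda kv: -len(kv[1])):
    share = len(vendors) / total
    flag = "  <-- concentration risk" if share > THRESHOLD else ""
    print(f"{provider:<12} used by {len(vendors)}/{total} vendors ({share:.0%}){flag}")
```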

The changing nature of the digital supply chain landscape calls for coordinated, efficient and agile defences. Unless the approach to supply chain risk management moves with the times, we will continue to see an increase in third-party attacks.

Organizations need to fundamentally change the way they approach managing third-party risk, and that means more collaboration and automation of the process, with the adoption of new technology and procedures. It is no longer sufficient simply to add some clauses to your vendor contract stating that everything that applies to your third-party vendor also applies to the vendor’s sub-contractors.

Traditionally, vendor management has meant carrying out an assessment during the onboarding process and then perhaps an annual review to see if anything has changed since the initial assessment. This is based only on a point-in-time view against a moving threat environment. What looks secure today may not be next week!

The solution to this problem is to supplement the assessment by taking an external view of your vendors, using publicly available threat analytics to see what is happening on their networks today. With statistics coming through in real time, you can monitor your suppliers on a continuous basis. It is not possible to prevent every third-party attack in your supply chain, but with up-to-date monitoring, issues can be detected at the earliest possible opportunity, limiting the potential damage to your company’s reputation and your clients’ data.

Many vendor supply management tools use security ratings as a way of verifying the security of your suppliers, providing data-driven insights into each vendor’s security performance by continuously analysing and monitoring companies’ cybersecurity from the outside. Security ratings are generated daily, giving organizations continuous visibility into the security posture of key business partners. By using security ratings, an organisation can assess all suppliers in the supply chain at the touch of a button. This is a marked difference from the traditional point-in-time risk assessment.
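
A hedged sketch of how such continuous monitoring can be wired up is shown below. There is no standard ratings API, so fetch_rating() is a hypothetical stand-in for whichever provider’s feed an organisation uses; the point is the alerting rule, which fires on a sudden drop or on falling below an absolute floor.

```python
import random

def fetch_rating(vendor):
    # Hypothetical stand-in for a ratings provider's feed (0-100 score).
    return random.randint(55, 95)

BASELINE = {"PayrollCo": 88, "CRM Vendor": 74, "Analytics Y": 91}  # last known scores
DROP_ALERT = 10     # alert on a sudden drop versus the last known score
FLOOR_ALERT = 65    # alert whenever a vendor falls below an absolute floor

for vendor, previous in BASELINE.items():
    score = fetch_rating(vendor)
    if previous - score >= DROP_ALERT or score < FLOOR_ALERT:
        print(f"ALERT: {vendor} rating {previous} -> {score}, trigger a review")
    BASELINE[vendor] = score   # roll the baseline forward for the next cycle
```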

Here at Broadgate we have helped several clients take back control of their supply chain by implementing the right technology solution; together with the right policies and procedures, the security and efficiency of the vendor management process can be vastly improved.

If you are responsible for cyber security risk management in these times, you are certainly facing some overwhelming challenges. Implementing a vendor risk management program that is well-managed, well-controlled and well-maintained will mean that you have a more secure supply chain as a result. Companies with more secure third parties will in turn have a lower risk of accruing the financial or reputational damage that would result from a third-party breach. Don’t fret about your supply chain; invest in it and you will reap the rewards!

Posted on : 31-08-2018 | By : richard.gale | In : Uncategorized


M&A – Cyber Security Due Diligence

Following the discovery of two data breaches affecting more than 1 billion Yahoo Inc. users, Verizon Communications Inc. reduced its offer to acquire the company by $350 million in 2017. This transaction illustrates how a company’s reputation and future are impacted by cybersecurity; failure to investigate these measures during mergers and acquisitions could lead to costly integration, unexpected liability and higher overall enterprise risk.

We can see almost daily the effect a data breach can have, with companies losing millions in terms of direct losses, reputational damage and customer loyalty. A hurried or limited cybersecurity vetting process may miss exposures or key indicators of an existing or prior breach.

It is crucial to understand cybersecurity vulnerabilities, the damage that may occur in the event of a breach, and the effectiveness of the infrastructure that the target business has in place. An appropriate evaluation of these areas could significantly impact the value that the acquirer places on the target company and how the deal is structured. It is therefore crucial to perform a security assessment on the to-be-acquired company.

It wasn’t that long ago that mergers and acquisitions deals were conducted in a paper-based room, secured and locked down to only those with permitted access. These days the process has moved on and is now mostly online, with the secure virtual data room being the norm. Awareness of cyber security in the information-gathering part of the deal-making process is well established. It is the need to look at the cyber security of the target company itself that has traditionally been under-emphasised, with attention focused more on the technical and practical job of integrating the merged companies’ infrastructure.

Deal makers must assess the cyber risk of an organisation in the same way that they would assess overall financial risk. Due diligence is all about establishing the potential liabilities of the company you are taking on. According to the Verizon Data Breach survey, it takes an average of 206 days to discover a breach; often companies are breached without ever knowing. It is therefore important to look at cyber risk not just in terms of whether they have been breached, but also the likelihood and impact of a breach. An acquisition target that looks good at the time of closing the deal may not look quite so good a few months later.

The main reason for this lack of attention to the cyber threat is that M&A teams find it hard to quantify cyber risk, particularly given the time pressures involved. A cyber risk assessment at the M&A stage is crucial if the acquiring company wants to protect its investment. The ability to carry out this assessment and to quantify the business impact of a likely cyber breach with a monetary value is invaluable to deal makers. Broadgate’s ASSURITY Assessment provides this information in a concise, value-specific way, using business language to measure risks, likelihood and cost of resolution.

A cyber security assessment should be part of every M&A due diligence process. If you don’t know what you are acquiring in terms of intellectual property and cyber risk, how can you possibly know the true value of what you are acquiring?

 

Posted on : 31-08-2018 | By : richard.gale | In : Cyber Security, data security, Finance


Application Performance Management (APM) – Monitor Every Critical Swipe, Tap and Click

Customers expect your business application to perform consistently and reliably at all times and for good reason. Many have built their own business systems based on the reliability of your application. This reliability target is your Service Level Objective (SLO), the measurable characteristics of a Service Level Agreement (SLA) between a service provider and its customer.

The SLO sets target values and expectations for how your service(s) will perform over time. It includes Service Level Indicators (SLIs), quantitative measures of key aspects of the level of service, which may include measurements of availability, frequency, response time, quality, throughput and so on.

If your application goes down for longer than the SLO dictates, fair warning: all hell may break loose, and you may receive frantic pages from customers trying to figure out what’s going on. Furthermore, a breach of your SLO error budget (the rate at which service level objectives can be missed) could have serious financial implications as defined in the SLA.
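
To make the error-budget arithmetic concrete: a 99.9% availability SLO over a 30-day window allows roughly 43 minutes of downtime, and burning through that faster than planned is what breaches the SLO. The figures in the worked example below are illustrative.

```python
# Worked example of an availability error budget over a 30-day window.
SLO = 0.999                       # availability target (99.9%)
PERIOD_MINUTES = 30 * 24 * 60     # 30-day window in minutes

error_budget_minutes = (1 - SLO) * PERIOD_MINUTES   # ~43.2 minutes
downtime_so_far = 25                                # minutes recorded this month

remaining = error_budget_minutes - downtime_so_far
print(f"Error budget: {error_budget_minutes:.1f} min")
print(f"Consumed: {downtime_so_far} min, remaining: {remaining:.1f} min")
print(f"Budget burned: {downtime_so_far / error_budget_minutes:.0%}")
```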

Developers are always eager to release new features and functionality. But these upgrades don’t always turn out as expected, and this can result in an SLO violation. Deployments and system upgrades will be needed, but anytime you make changes to applications, you introduce the potential for instability.

There are two companies currently leading the way in business service monitoring: New Relic and AppDynamics. AppDynamics has been named a leader in the Gartner Magic Quadrant for APM for the last six years. Its suite of application and business performance monitoring solutions ensures that every part of even the most complex, multi-cloud environments (from software to infrastructure to business outcomes) is highly visible, optimized and primed to drive growth. The need for such a monitoring tool is evidenced by the large number of tier-one banks which have taken it on board.

AppDynamics is a tool which enables you to track the numerous metrics for your SLI. You can choose which metrics to monitor, with additional tools that can deliver deeper insights into areas such as End User Monitoring, Business IQ and Browser Synthetic Monitoring.

The application can be broken down into the following components:

  • APM: Say your application relies heavily on APIs and automation. Start with a few APIs you want to monitor and ask, “Which of these APIs, if it fails, will impact my application or affect revenue?” These calls usually have a very demanding SLO.
  • End User Monitoring: EUM is the best way to truly understand the customer experience because it automatically captures key metrics, including end-user response time, network requests, crashes, errors, page load details and so on.
  • Business iQ: Monitoring your application is not just about reviewing performance data. Business iQ helps expose application performance from a business perspective, showing whether your app is generating revenue as forecast or experiencing a high abandon rate due to degraded performance.
  • Browser Synthetic Monitoring: While EUM shows the full user experience, sometimes it’s hard to know if an issue is caused by the application or the user. Generating synthetic traffic will allow you to differentiate between the two.

There is an SRE dashboard where you can view your KPIs:

  • SLO violation duration graph, response time (99th percentile) and load for your critical API calls
  • Error rate
  • Database response time
  • End-user response time (99th percentile)
  • Requests per minute
  • Availability
  • Session duration

SLI, SLO, SLA and error budget aren’t just fancy terms. They’re critical to determining if your system is reliable, available or even useful to your users. You should be able to measure these metrics and tie them to your business objectives, as the ultimate goal of your application is to provide value to your customers.
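
As a rough illustration of how a few of these indicators can be derived from raw request data, the sketch below computes p99 response time, error rate and a success-ratio availability figure. The sample data and the nearest-rank percentile method are illustrative, not AppDynamics functionality.

```python
import math
import random

# (response_time_ms, succeeded) for a batch of requests -- invented sample data
requests_log = [(random.gauss(120, 40), random.random() > 0.002) for _ in range(10_000)]

def percentile(values, pct):
    """Nearest-rank percentile."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies = [latency for latency, _ in requests_log]
failures = sum(1 for _, ok in requests_log if not ok)
error_rate = failures / len(requests_log)

print(f"p99 response time: {percentile(latencies, 99):.0f} ms")
print(f"error rate: {error_rate:.3%}")
print(f"availability (success ratio): {1 - error_rate:.3%}")
```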

Posted on : 30-08-2018 | By : richard.gale | In : App, Consumer behaviour, Innovation


How Can Artificial Intelligence Add Value to Cyber Security?

Cyber security is a major concern for all organisations. A recent EY survey found that cyber security is the top risk for financial services. The cyber threat is ever growing and constantly changing, and it is becoming increasingly difficult to put the right controls and procedures in place to detect potential attacks and guard against them. It is now imperative that we make use of advanced tools and technologies to get ahead of the game.

A major weapon in the race against the cyber attacker is Artificial Intelligence (AI)-powered tooling, which can be used to prevent, detect and remediate potential threats.

Threat detection is a labour-intensive, arduous task, often like looking for a needle in a haystack, and AI can help considerably with the workload.

AI machines are intended to work and react like human beings. They can be trained to process substantial amounts of data and identify trends and patterns. A major cyber security issue has been the lack of skilled individuals, with organisations unable to find staff with the necessary skills; AI and machine learning tools can help close these gaps.

Despite what you’ve seen in the movies, robotic machines are not about to take over the world! Human intelligence is a unique characteristic which a robot does not have (not yet, anyway). Cybersecurity isn’t about man or machine but man and machine. A successful cyber strategy means machine intelligence and human analysts working together.

The machines perform the heavy lifting (data aggregation, pattern recognition, etc.) and provide a manageable number of actionable insights. The human analysts make decisions on how to act. Computers, after all, are extremely good at specific things, such as automating simple tasks and solving complex equations, but they have no passion, creativity, or intuition. Skilled humans, meanwhile, can display all these traits, but can be outperformed by even the most basic of computers when it comes to raw calculating power.

Data has posed perhaps the single greatest challenge in cybersecurity over the past decade. For a human, or even a large team of humans, the amount of data produced daily on a global scale is unthinkable. Add to this the massive number of alerts most organizations see from their SIEM, firewall logs, and user activity, and it’s clear human security analysts are simply unable to operate in isolation. Thankfully, this is where machines excel, automating simple tasks such as processing and classification to ensure analysts are left with a manageable quantity of actionable insights.

It’s essential that we respond quickly to security incidents, but we also need to understand enough about an incident to respond intelligently. Machines play a huge role here because they can process a massive amount of incoming data in a tiny fraction of the time it would take even a large group of skilled humans. They can’t make the decision of how to act, but they can provide an analyst with everything they need to do so.
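
As a simple illustration of that division of labour, the sketch below scores a stream of hourly login counts against a statistical baseline and escalates only the extreme outliers to a human analyst. It uses a plain z-score on invented data rather than any specific vendor product.

```python
from statistics import mean, stdev

# logins per hour for a user over the past day (invented baseline data)
baseline = [3, 4, 2, 5, 3, 4, 3, 2, 4, 5, 3, 4, 2, 3, 4, 3, 5, 4, 3, 2, 4, 3, 5, 4]
mu, sigma = mean(baseline), stdev(baseline)

new_observations = {"08:00": 4, "09:00": 5, "10:00": 42}   # 42 is the suspicious one

for hour, count in new_observations.items():
    z = (count - mu) / sigma
    if abs(z) > 3:   # flag only extreme deviations for a human to review
        print(f"{hour}: {count} logins (z={z:.1f}) -> escalate to analyst")
    else:
        print(f"{hour}: {count} logins (z={z:.1f}) -> normal")
```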

Posted on : 28-07-2018 | By : richard.gale | In : Uncategorized
