The title of the report raises an obvious question: Is application performance management becoming obsolete? The answer may be unclear at this point. But Cappelli speculates that APM suites will evolve to offer more analytics-based data as APM and analytics converge over the next five years:
"The end-user experience will continue to grow in importance. I'm beginning to get some calls from end-users looking at the CMDBs or asset management databases or trouble ticketing systems and asking if they can use analytics in these areas … That is a new development that takes us beyond performance and availability."
According to Cappelli, there are three key reasons for this evolution:
1) Application complexity and interdependence
Cappelli pointed out that the way we monitor applications will change as they become increasingly complex and interdependent:
"Non-analytic APM tools will continue to generate more and more data, [and] in order to understand what is going on in the application, deeper and deeper analysis capabilities are required."
APM vendors need to be conscious of the law of unintended consequences when delivering and presenting APM data: it would be easy to overwhelm an operations user with detail. That makes it essential to structure how data is presented and how much of it is retained, and to use visualization so users can quickly understand and act on APM information.
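To make that concrete, here is a minimal sketch of one such roll-up: raw per-request latencies are collapsed into per-minute percentile summaries before anything reaches a dashboard. The sample data, window size, and percentiles are illustrative assumptions we chose for the example, not figures from the report.

```python
from collections import defaultdict

# Illustrative raw samples an APM agent might emit: (timestamp_s, latency_ms).
samples = [
    (1.2, 34.0), (2.7, 41.5), (14.8, 39.9),
    (61.3, 980.0), (62.0, 38.2), (95.4, 40.1),
    (125.9, 36.7), (150.2, 44.3),
]

WINDOW = 60  # seconds; roll per-request detail up into one-minute summaries

buckets = defaultdict(list)
for ts, latency_ms in samples:
    buckets[int(ts // WINDOW)].append(latency_ms)

# One summary row per window replaces hundreds of raw rows in practice,
# giving operators a few percentiles to scan instead of a flood of points.
for window_idx in sorted(buckets):
    latencies = sorted(buckets[window_idx])
    p50 = latencies[len(latencies) // 2]
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    print(f"minute {window_idx}: n={len(latencies)} p50={p50:.1f}ms p95={p95:.1f}ms")
```

The point of the design is retention: the raw samples can age out quickly while the compact summaries are kept, which addresses both the data-volume and the presentation problem at once.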
2) Automation of monitoring
These complexities and interdependencies mean that it won't be humanly possible, let alone cost-effective, to oversee everything that's going on, Cappelli said.
"Root cause analysis … now [is] not really done with analytics tools [but] by looking at the topology map. As the application topologies become a lot more complex, you are not going to be able to just look at a map on the screen and find the root cause. You are going to have to apply some sort of automated algorithm that will identify what could be the cause."
Our answer to this problem is a set of automated algorithms that transform low-level network and application information into higher-level application and business-transaction objects. The transformation maps observed activity onto a set of ideal models and highlights where slowdowns or failures occurred, preserving key data and metadata at each layer to aid root cause determination.
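To be clear, the sketch below is not our production algorithm, just an illustration of the general pattern: raw hop-level events are grouped into transaction objects, then compared against an assumed set of per-tier latency budgets standing in for the "ideal models" above. All names, tiers, and budgets are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative low-level events: each hop a network monitor observed,
# keyed by a correlation id. Field names are assumptions for this example.
@dataclass
class HopEvent:
    txn_id: str
    tier: str          # e.g. "web", "app", "db"
    latency_ms: float

@dataclass
class Transaction:
    txn_id: str
    hops: dict = field(default_factory=dict)   # tier -> observed latency

# A stand-in "ideal model": expected per-tier latency budgets.
IDEAL_BUDGET_MS = {"web": 50.0, "app": 120.0, "db": 80.0}

def assemble(events):
    """Group raw hop events into higher-level transaction objects."""
    txns = {}
    for e in events:
        txn = txns.setdefault(e.txn_id, Transaction(e.txn_id))
        txn.hops[e.tier] = txn.hops.get(e.tier, 0.0) + e.latency_ms
    return txns

def diagnose(txn):
    """Compare observed hops to the ideal model; flag tiers over budget."""
    return [tier for tier, ms in txn.hops.items()
            if ms > IDEAL_BUDGET_MS.get(tier, float("inf"))]

events = [
    HopEvent("t1", "web", 42.0), HopEvent("t1", "app", 310.0),
    HopEvent("t1", "db", 65.0), HopEvent("t2", "web", 38.0),
]
for txn in assemble(events).values():
    slow = diagnose(txn)
    print(txn.txn_id, "slow tiers:" if slow else "within budget", slow or "")
```

The diagnosis falls out of the structure: once activity is expressed as transaction objects rather than packets, pointing at the tier that blew its budget is a lookup, not a hunt across a topology map.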
3) Proactive strategies for addressing problems
As IT organizations place increased emphasis on end-user experience, predictive analytics will become a key component of application performance monitoring. After all, end users do not care why an app is not working; they're just frustrated that it doesn't work. So it makes sense to address potential glitches proactively, something analytics is well suited to do. Said Cappelli:
"Until now most of the burden of the application performance monitoring tools has been in the area of retroactive determination of the root cause of the problem. We are seeing, with the people we talk to, more and more focus on getting out ahead of problems before they occur. And in order to do that kind of predictive action, you need some kind of analytics tools."
We've long predicted that analytics will be part of the APM equation. Our latest version of INETCO Insight can analyze the performance characteristics of every user, device, and component of a distributed application, and it can feed this information to the higher-level IT operations analytics platforms Cappelli described:
"I think you'll see IBM, HP, BMC, CA, Compuware rolling out generic IT operations analytics platforms, if they are in multiple enterprise management areas. If they are an APM specialist, like a Compuware, their analytic platforms will tend to be more specific to APM."
We see this combination — APM and IT operations analytics — as extremely powerful. I'll provide a detailed use case of this in my next blog.