What is Application Performance Management (APM)?
APM refers to a broad range of technologies used to quantify the performance and resource utilization (memory, CPU, etc.) of applications, pinpoint the causes of performance problems, and trace transactions across the IT infrastructure.
Today, APM often involves a full suite of tools, rather than a single point solution. Whatever the methodology, the end goal of APM is to provide administrators with the necessary information to optimize their application's performance, conserve computing resources, provide better end-user experiences, and derive business intelligence. Successfully executed application performance management strategies can pay big dividends for companies that rely on their applications to drive revenue or gain competitive advantage.
Evolution of APM
The term APM has been a moving target. Years ago, many IT Ops teams conducted their performance analysis and troubleshooting with offline analysis of packet captures. Over time, spurred by Gartner's APM Magic Quadrant and improvements in byte-code instrumentation, the term has come to refer more to agent-based tools and platforms that run continuously in production instead of only in test and QA. Many APM solutions now run in the cloud, further simplifying their deployment and management.
To muddy things, the term Application-Aware Network Performance Monitoring has also entered the lexicon. Tools using this moniker approach application performance monitoring from a network perspective, but the overarching vision is the same: understand the performance of your applications and network, so you can make data-driven decisions when troubleshooting, optimizing, and managing them.
Why Use APM Tools?
APM tools have traditionally been employed to pinpoint what's broken in an application delivery chain. When a user's desktop client is behaving abnormally, or server requests or database queries are taking forever, an APM tool may be able to help the administrator understand the source of the performance issue so they can fix it. Organizations with more mature IT teams will find that, in addition to reactively fixing performance problems, APM practices can also be used proactively to address issues and even provide the business with new insights.
Data Sources for Application Performance Management
The effectiveness of any APM practice depends entirely on the source of your data. Traditional APM tools use an agent-based approach to provide performance data. Agents are custom pieces of software, installed on the host system to be monitored, that report application performance metrics back to a central collector, where they are presented in a UI accessible to the administrator. While APM tools provide useful information about the execution of software applications, the agents also consume computing resources on the very systems they are monitoring. Also, agents cannot be deployed on every system.
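As a rough illustration, the agent pattern described above can be sketched in a few lines of Python. Everything here is simplified and hypothetical: a real agent samples far more than process CPU and memory, and ships its metrics to a collector over the network rather than printing them.

```python
import json
import os
import resource
import time

def collect_metrics():
    """Sample resource usage for the current process, as a minimal
    stand-in for what an APM agent gathers on a monitored host."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "timestamp": time.time(),
        "pid": os.getpid(),
        "cpu_user_seconds": usage.ru_utime,
        "cpu_system_seconds": usage.ru_stime,
        "max_rss_kb": usage.ru_maxrss,  # peak resident set size (KB on Linux)
    }

def report(metrics):
    """A real agent would POST this to its collector endpoint; here we
    just serialize it, since the collector is hypothetical."""
    return json.dumps(metrics)

if __name__ == "__main__":
    print(report(collect_metrics()))
```

A production agent would run this loop continuously on a timer, which is exactly why the overhead concern above matters: the sampling itself costs CPU on the monitored host.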
Ultimately, you get the best visibility into your infrastructure by using a combination of solutions that specialize in analyzing various sources of data. This approach lends itself to the Open IT Operations Analytics (ITOA) architecture that is gaining popularity in data-driven organizations. This architecture depends on the correlation of four different sources of data that can provide the most complete, holistic picture of an IT environment, which allows for more effective management of application performance than ever before. The four sources are:
Wire Data: Observed, objective data set (Read: What is Wire Data?)
Machine Data: Self-reported data set
Agent Data: Host-instrumented data set
Probe Data: Synthetic data set
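Of the four, probe data is perhaps the easiest to illustrate: it is generated by scripted, synthetic transactions rather than real user traffic. A minimal sketch, where the probe name and the check it runs are purely illustrative:

```python
import time

def synthetic_probe(name, check):
    """Run one scripted transaction (`check`, a callable returning True
    on success) and time it -- the essence of synthetic probe data."""
    start = time.monotonic()
    try:
        ok = bool(check())
    except Exception:
        ok = False  # a failed transaction is still a valid data point
    return {
        "probe": name,
        "success": ok,
        "latency_seconds": time.monotonic() - start,
    }

# Example with a stand-in check; a real probe might log in to an
# application or fetch a key page on a fixed schedule.
result = synthetic_probe("login-page", lambda: True)
```

Because probes run on a schedule regardless of user activity, they can detect outages during quiet hours, something the observed and self-reported data sets cannot do.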
An APM strategy that taps into multiple sources of data provides the most complete IT visibility. Rather than considering APM a separate initiative only for a particular application or set of applications, it is better to incorporate a vision for APM into a more holistic ITOA strategy. This data-driven approach will enable you to correlate all of the data types across all the tiers of your infrastructure and derive meaningful insights from your data, including but not limited to application performance.
To learn more, check out our APM Vendor Comparison.
Application Performance Monitoring vs. Network Performance Monitoring
It is impossible to talk about APM without also talking about Network Performance Monitoring (NPM). The two are sometimes perceived as competing product types, but the truth is that each of them provides a unique and vital component of a successful IT Operations Analytics practice. APM provides visibility into the internal workings of an application, including what comes in, what goes out, and how data is processed within the app. NPM looks at communications between applications, databases, load balancers, and other network components. Products that merge both capabilities are sometimes referred to as Application-Aware Network Performance Monitoring tools.
When you have both APM and NPM, you get insight not only into what's happening inside your applications, but also into the data in motion on your network, right down to the packet payload level.
By combining what agent-based APM sees inside your applications with wire data that captures all communications on the network, you get a new level of visibility and insight into your environment.
The Importance of Cross-Tier Correlation
Cross-tier correlation is the hallmark of successful application performance monitoring. Performance problems can be caused by anything from faulty routers to misconfigured databases to inefficient application code, and being able to correlate an unexpected slowdown in the application with misbehavior in a completely different layer of the IT environment is what defines a great monitoring platform.
This is where wire data starts to look like the real hero of APM. Since every application and device on a network uses the wire to communicate, wire data is a vital source of both broad and deep visibility into network and application performance.
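To make the idea concrete, cross-tier correlation can be reduced to its simplest form: finding the moments when an application-latency spike coincides with misbehavior in another tier. The metric names, thresholds, and data below are illustrative only.

```python
def correlate_spikes(app_latency_ms, other_tier_metric,
                     latency_threshold, metric_threshold):
    """Return the timestamps (shared keys of both aligned time series)
    where the application was slow AND another tier misbehaved at once."""
    return sorted(
        t for t, latency in app_latency_ms.items()
        if latency > latency_threshold
        and other_tier_metric.get(t, 0) > metric_threshold
    )

# Illustrative data: per-minute app latency (ms) vs. TCP retransmits
# observed on the wire between the app and database tiers.
app = {0: 48, 60: 910, 120: 52, 180: 875}
retransmits = {0: 1, 60: 37, 120: 2, 180: 3}

suspects = correlate_spikes(app, retransmits, 500, 20)  # [60]
```

Note that the spike at t=180 is *not* flagged: the app was slow, but the network tier looked healthy, which points the investigation elsewhere (perhaps the database or the application code). Real platforms do this across many tiers and metrics simultaneously, but the underlying logic is the same.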
APM in Virtualized & Software-Defined Environments
As software-defined and virtualized environments become mainstream, application performance management is getting harder. Virtualized environments may split application workloads across virtual machines that can be spun up and spun down or moved across the datacenter, making it incredibly difficult to pinpoint the source of performance problems if you are only looking at resource utilization. In such cases, an Open ITOA Architecture that combines multiple sources of data becomes the only viable way to get visibility into performance across every tier of the application delivery chain.