Recently I saw a post on CIO.com, written a lifetime ago in internet years (2013), about information superiority. The term was new to me, but the concept is as old as can be.
The idea is fairly simple: use information to your benefit. Information asymmetries exist all over, and can be a huge help or a huge liability. "Information superiority" is essentially this: putting information into action in ways that benefit you, your teams, and your organization. The idea applies in various contexts including Information Security, IT Operations, and Business Intelligence.
In other words: inform your decisions with data. If this practice is adopted methodically, it can have profound implications for an organization and the teams that keep it going.
I was having a philosophical discussion with some very smart co-workers, and we converged on a model to help express the implications of data-driven ops. But it's predicated on a few assumptions:
- Data is intrinsically valuable, but often difficult to get at.
- More data is good. Less data is bad. The more people that can use the data effectively, the better.
- Data's value decays over time. Ignored too long, it becomes stale, and can even become a hindrance.
So a single unit of data, put to use across various parts of an operations team, is much more valuable if four teams can benefit from it as opposed to one.
How to Achieve Information Superiority: Examine your Data Consumption Model
A large amount of data is useful only if you can get to it quickly, with as little effort as possible, and share it with the maximum number of users. Here at ExtraHop we represent the consumption model with an equation we've dubbed the Data Value equation:
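Spelled out in rough form (a reconstruction based on how each term is described below: velocity and the number of users drive value up, while friction drags it down):

Data Value = (Velocity × Number of Users) / Friction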
Let's unpack the meaning of each of these terms.
Velocity: This is the speed at which a high volume of relevant data can be obtained. It goes back to a question ancient military strategists (think Sun Tzu) understood well: how quickly can you collect the data you need?
Friction: There are two aspects to this:
- How easy is it to access high-velocity data?
- Once data has been collected, how quickly and with how little effort can it be transformed into useful information?
For example, is the data easily accessible from one location, or must it be collected and aggregated across multiple, disparate locations? Do accessing and transforming the data require a lot of manual effort? Is the result open to interpretation? Does it take hours to sift through everything you've collected to make it useful? Friction is where most organizations struggle today, as information overload has limited the overall usefulness of what they measure and track.
Number of Users: The number of people the information reaches is what gives this equation its power, and is why the number of users appears as a multiplier. Enterprises need to put insights in context for numerous roles, not impose a barrier to entry with complex workflows that require specialization. Imagine the power of making every soldier as capable as a general when it comes to decision making, or better yet, what if the information could provide unique value to each particular role?
This equation largely explains the "war room" that is so common in IT Operations. The sole purpose of a war room is to coordinate information across teams. One of the killer hidden costs in operations is coordination cost: different teams often use different data and speak very different languages. This is the dark side of information asymmetry. The cheaper that coordination becomes, the better, particularly when every team works from the same set of data.
Putting This Equation to Work
Consider Velocity at Scale: In a modern data center, traffic is commonly measured in hundreds of Gbps, and it only continues to grow. Even at 40 Gbps, you're looking at roughly half a petabyte of data per day that must be analyzed. Traditional methods of analysis just don't cut it: the data footprint, the time required to sift through it, and the complexity of maintaining such a massive data store all scale faster than anyone can handle.
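A quick back-of-the-envelope check of that figure (a sketch assuming sustained line-rate traffic, which real links rarely hit around the clock):

```python
# Daily data volume on a link sustained at 40 Gbps.
GBPS = 40                            # link rate, gigabits per second
bytes_per_second = GBPS * 1e9 / 8    # 5e9 bytes/s
seconds_per_day = 24 * 60 * 60       # 86,400 s
petabytes_per_day = bytes_per_second * seconds_per_day / 1e15
print(f"{petabytes_per_day:.2f} PB/day")  # → 0.43 PB/day
```

Which lands right at the "about half a petabyte a day" figure above, and that's a single 40 Gbps link, not the hundreds of Gbps a modern data center can carry.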
For this reason, ExtraHop does the analysis on the front end using stream analytics, taking advantage of in-memory, parallel processing. This is the only way to scale and keep the lookback that is necessary to provide the answers teams need. The idea here is simple: do the analysis as the events occur, then store the results.
Consider Friction in the Workflow: What is an acceptable duration for an outage? How long can you wait before being alerted to a security incident? How many upset customers can you tolerate? There isn't a good answer to any of these questions beyond minimizing the impact as much as possible. So why are teams still tackling these problems with decades-old tools and workflows that take hours, sometimes days? Because their existing toolset doesn't offer a better way.
ExtraHop is revolutionizing these workflows by allowing IT operations teams to go from a global view, to an individual transaction, down to the root cause in clicks and minutes instead of hours of grueling manual work. After all, even the best data source isn't much use if extracting insights from it takes too long to address the issue at hand.
Consider Empowering Every User: Which teams should benefit from data? Is there benefit in all teams working from the same dataset? It seems obvious that all of operations can benefit from better access to data, but traditional monitoring models require a proliferation of tools that balkanize IT staffs. Teams lucky enough to get a place in the budget for their monitoring tool face a new problem: working across silos and arguing about whose data is right. Teams that lost out altogether just have to trust what everyone else is seeing.
ExtraHop believes that in the complex and intertwined world of IT, only those with access to timely data will be successful. Moreover, it isn't enough to check the box around your own data silo, as most problems span departmental and technology boundaries. For that reason, ExtraHop turns the network into the ultimate source of cross-tier correlated data, benefiting every role.
The notion of information superiority isn't new, but in the modern age it begins with high volumes of trustworthy, easily accessible, shareable data. The Data Value equation is a way to measure your information superiority (or lack thereof), assess your data's value, and uncover the challenges you face today. It's also the framework we've used to deliver a platform aligned with those needs.