In the past decade, processing speed and storage capacity have increased, and their prices have decreased, so rapidly that every couple of years we need to reassess what is feasible and what is economical with current technology. This trend is especially true for the x86 platform, which has maintained a price-performance ratio that is difficult to beat. When I started working at F5 Networks, there were many competing load-balancing and traffic-management products on the market. Most of them included specialized hardware, such as ASICs and network processors. In contrast, BIG-IP used x86 processors and other off-the-shelf server components. This platform was a bit of a sore spot for us. Competitors mocked the BIG-IP, saying it was just a PC. We focused on developing a rich feature set and squeezing out every bit of available performance. In two years, the BIG-IP platform was twice as fast. The ASICs and network processors remained the same. In another two years, the BIG-IP platform doubled again. Nowadays, the market is flooded with networking and security appliances based on the x86 platform.
WAN optimization devices are a more recent example of innovation that was enabled by gains in processing speed and storage capacity.
Eight years ago, WAN optimization did not exist. Today, it's a separate category within IT infrastructure. What happened? Certainly there were innovations from companies such as Peribit Networks and Riverbed Technology, but I think these innovations were inevitable. For WAN optimization, two things occurred. First, it became feasible to perform binary sequence reduction at network speeds. Second, the systems became inexpensive enough that it's often more cost-effective to place WAN optimization devices at two endpoints than to purchase additional bandwidth or upgrade the network protocols in the middle.
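The text names binary sequence reduction but doesn't describe how these devices implement it. As a rough illustration of the underlying idea (not any particular vendor's implementation), here is a minimal sketch of dictionary-style duplicate suppression: a rolling hash cuts the byte stream into content-defined chunks, and chunks the peer has already seen are replaced with short references. All names and parameters here are hypothetical.

```python
import hashlib

WINDOW = 16    # bytes in the rolling-hash window
MASK = 0x3FF   # cut a chunk when the low 10 bits are zero (~1 KB average)

def chunks(data: bytes):
    """Yield content-defined chunks: cut wherever a simple additive
    rolling hash of the last WINDOW bytes hits the boundary mask."""
    start = 0
    h = 0
    for i, b in enumerate(data):
        h += b
        if i >= WINDOW:
            h -= data[i - WINDOW]
        if (h & MASK) == 0 and i + 1 - start >= WINDOW:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]  # final partial chunk

def deduplicate(data: bytes, store: dict):
    """Replace chunks already present in `store` with short hash keys.
    Returns a list of (is_reference, payload) pairs a sender would emit."""
    out = []
    for c in chunks(data):
        key = hashlib.sha256(c).digest()[:8]
        if key in store:
            out.append((True, key))   # 8-byte reference instead of the chunk
        else:
            store[key] = c
            out.append((False, c))    # literal chunk, remembered for next time
    return out
```

Because chunk boundaries are derived from the content rather than fixed offsets, repeated data is recognized even when it shifts position in the stream; the second time the same bytes cross the link, only the short references need to be sent.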
Now, consider the network- and application-performance management space. These products provide visibility and assist with performance tuning, troubleshooting, and capacity planning. There is a hodgepodge of tools and technologies that have been applied to these problems for decades, including packet sniffers, agents, synthetic transactions, SNMP, and NetFlow. In general, these tools provide either a high-level telescopic or a low-level microscopic view of the network. The problems are getting harder, and the tools have not kept up. Applications are becoming more distributed; as a result, complexity is increasing and visibility is decreasing. Intelligent network devices are blurring the lines between the network and the application. The widespread adoption of technologies such as server virtualization and network-attached storage is creating new management challenges.
We've spent the past two years developing the ExtraHop Application Delivery Assurance system to address these challenges and to alleviate the pain points of troubleshooting, performance tuning, and capacity planning. Recent gains in processor speed and storage capacity have enabled us to develop a solution that was dismissed as infeasible just a few years ago. The ExtraHop system scales to the speed of the network, monitoring tens of thousands of transactions simultaneously and in real time. It performs full-stream reassembly and content inspection for a number of specific application-level protocols, recording health and performance metrics that include transaction-timing information, methods, status codes, and errors. I think that innovation and new technology for application delivery assurance are long overdue, and I'm confident that we've built a great product. I hope you agree!
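The ExtraHop system's internals aren't described here, but the general idea of full-stream reassembly can be illustrated in a few lines: a passive monitor must put TCP segments back into sequence-number order before it can inspect the payload as the application saw it. This toy sketch (hypothetical class and names, not the product's code) buffers out-of-order segments and releases contiguous bytes:

```python
class StreamReassembler:
    """Toy TCP-style reassembler: buffer out-of-order segments keyed by
    sequence number and release bytes only once they are contiguous."""

    def __init__(self, initial_seq: int = 0):
        self.next_seq = initial_seq   # next byte offset we expect
        self.pending = {}             # seq -> payload, held out of order
        self.assembled = bytearray()  # in-order stream, ready for inspection

    def add_segment(self, seq: int, payload: bytes):
        self.pending[seq] = payload
        # Drain every segment that is now contiguous with the stream.
        while self.next_seq in self.pending:
            data = self.pending.pop(self.next_seq)
            self.assembled.extend(data)
            self.next_seq += len(data)
```

Once the stream is reassembled, content inspection (parsing an HTTP request line for its method, or a response for its status code) can run over `assembled` exactly as a server or client would see it, which is what makes transaction-level metrics possible from passive network data.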