Gary Read, the CEO of Boundary, said the company aims to use its second-by-second view of networking stats to understand and indicate the health of an overall application, much as Splunk uses machine log data to do the same. And given how much network health can affect overall application health, especially now that most people access their software as a service or build apps that rely on a variety of API calls, using network information to gauge performance can serve as an early warning system for application failure.
That’s one of the things Read and I spoke about a few months ago, when he explained what Boundary and real-time network monitoring offer customers so far. For the most part, the service helps customers see more data and get it faster because it checks the network and delivers updates every second, but that volume of rapid information can also help customers save money if they are building on platforms such as Amazon Web Services or Google Compute Engine.
Users of the service say that just by tracking network flows, they can quickly see whether they’ve built their applications in a way that will cost them a lot. The key is that Boundary assesses information every second, which suits a cloud-based world where demand for services can fluctuate on a second-by-second basis. Three months after launching its application monitoring service, Boundary is tracking 20 billion records per day. That’s a lot of data, and Read hopes those records will only continue to grow.