Cedexis Fusion gathers system, cloud data to speed content delivery

Cedexis, the company behind the Openmix load balancing service, is drilling down into customers’ infrastructure with Cedexis Fusion, an API that integrates with popular New Relic and AppDynamics application performance software. That integration should give Cedexis a deeper look into how customers’ servers and applications are running. And because Fusion also ties into the Akamai, Level3, Edgecast and ChinaCache content delivery networks (CDNs), as well as SoftLayer’s server management data, it should let the company take load balancing across clouds to a new level.

Big companies — and Cedexis’s customers include EuroDisney, Hermes and Nissan — need to make sure their e-commerce sites run smoothly, that pages load fast and that content gets delivered optimally around the globe. A service that can quickly flag when an application or server is approaching overutilization and automatically redeploy would be very valuable. The new data from inside customer shops augments data Cedexis already gleans from Radar, a crowdsourced service that collects data about cloud and CDN performance around the world.

“Radar collects data from outside all the various clouds … [and] Fusion gives us the inside-out view that you’d normally get from a server vendor or monitoring provider,” Cedexis CMO Rob Malnati said in an interview. The company said Fusion can also tap into Catchpoint, Keynote and Gomez to detect slowing e-commerce processes and sniff out cloud outages early, using data from Amazon, Rackspace, SoftLayer and other cloud service providers.

If it works as advertised, Fusion could help alleviate operational headaches for enterprise customers.


3 Responses to “Cedexis Fusion gathers system, cloud data to speed content delivery”

  1. Perhaps the most interesting thing about this is the crowdsourced performance data. This is a novel use (not just crowdsourced reviews/logos/boring/etc) and means they can potentially detect problems early on, and route around them. Being able to do that at the load balancer level makes it faster to react. Shame it’s a very enterprisey offering though. I could see someone like Cloudflare doing something similar, and beating them to the mass market.

  2. William Louth

    btw Google engineering recently discussed how ineffective CPU utilization is with regard to load balancing

    “An alternative to the tied-request and hedged-request schemes is to probe remote queues first, then submit the request to the least-loaded server. It can be beneficial but is less effective than submitting work to two queues simultaneously for three main reasons: load levels can change between probe and request time; request service times can be difficult to estimate due to underlying system and hardware variability; and clients can create temporary hot spots by all clients picking the same (least-loaded) server at the same time.”
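    The hedged-request scheme the quote contrasts with probing can be sketched roughly as follows: issue the same request to two replicas and keep whichever answers first. This is a simplified illustration, not Google’s implementation — the paper actually delays the second copy (e.g. until the 95th-percentile expected latency has passed) to limit extra load, and the replica names and latencies here are made up:

    ```python
    import asyncio
    import random

    async def query_replica(name: str, payload: str) -> str:
        # Stand-in for a real backend call; latency varies per replica,
        # which is the source of the tail-latency problem.
        await asyncio.sleep(random.uniform(0.01, 0.2))
        return f"{name}:{payload}"

    async def hedged_request(payload: str, replicas: list[str]) -> str:
        # Send the same request to two replicas at once and take the
        # first response, cancelling the slower copy.
        tasks = [asyncio.create_task(query_replica(r, payload))
                 for r in replicas[:2]]
        done, pending = await asyncio.wait(
            tasks, return_when=asyncio.FIRST_COMPLETED)
        for task in pending:
            task.cancel()
        return done.pop().result()

    result = asyncio.run(
        hedged_request("GET /index", ["replica-a", "replica-b"]))
    ```

    Compared with probing the least-loaded queue first, this avoids all three failure modes in the quote: no stale load estimate, no service-time guessing, and no herd converging on one “least-loaded” server.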