6 Comments

MySpace, the largest social network in the world, went dark this past weekend thanks to a large-scale blackout in downtown Los Angeles, according to company officials. The outage, which left nearly 80 million users without access to their pages for a few hours, has galvanized MySpace’s corporate parent, Fox Interactive Media, into spending more and building redundancy for the fast-growing network.

“The weekend was brutal, and caused some issues,” FIM president Ross Levinsohn said in an interview this morning. (Read: The Sly Fox.) He explained that MySpace has two data centers in the Los Angeles area: one in downtown LA that is operated by InterNAP, and another in El Segundo that is owned by Equinix.

The problem was in the downtown building. A switch meltdown caused blackouts downtown and shut down the building, only to be followed by an air-conditioning outage. “Suddenly our servers heated up and (nearly) melted,” Levinsohn added. “We are looking to add more data centers, preferably on the East Coast, to make the system even more redundant.”

Levinsohn admitted that it was going to take time, since these are complex issues. He did say that the company is sparing no expense to keep MySpace performing at its best. Maybe the blackout served as a wake-up call for News Corp.: the multinational that Murdoch built is no longer just a media giant, it is a technology company, and it needs to spend like one.

  1. I found it a little humorous that a site that large didn’t have decentralized data centers. But then again, MySpace has never been a bastion of technical expertise.

  2. Jesse Kopelman Wednesday, July 26, 2006

    Why are people running data centers out of a place with high real estate costs, in a state with long-term, chronic power issues? You’d think everyone would be putting their data centers in the Rust Belt.

  3. Decentralised, redundant data centers are possible, but they are expensive, especially with the load MySpace will put on them.
    But maybe they just decided that, given their demographic and requirements, it would simply be too expensive.
    And that is setting aside the trouble they already seem to have just keeping up with growth while building everything once, let alone twice.

  4. jesse

    I agree with you on this, and when I asked them there was a lot of talk about Virginia and other data center destinations, etc.

  5. If they indeed want true redundancy, that means keeping matching clusters of databases replicated across the US with very little lag, which adds real complexity to the equation. Simply doing a site A/B setup with network hardware, geoIP, global load balancing and web servers is a no-brainer, but when you throw in massive databases whose changes must be replicated across the country within seconds, having duplicate sites becomes much more of a challenge.

    I do agree that moving out of LA would be their best bet, and Equinix has a great (and massive!) facility in Ashburn, Virginia that would suit their needs (FYI: you’ll likely need LX SFPs, not SX, in some of those Ashburn facilities). They’ve had power outages in LA before that did indeed take down the MySpace site, and one would have hoped they learned their lesson after that happened, but here we are.

    Getting the space is not the big deal for MySpace: cages and cabinets are pennies compared with the monthly recurring price for bandwidth. Think about it: they pay for CDN service from Limelight, they have their streaming music hosted at VitalStream, and who knows how many transit providers and peering arrangements they have. To get all of that set up in parity means renegotiating their transit, CDN and peering agreements all over again, and deciding how much headcount to put at the second site, or whether to simply use expensive remote hands and contractors for installation and maintenance so far from the mothership. It’s going to be a fun game of math for the MySpace folks, but I’m sure their operations team is more than up to the task.

  6. They’re already in a bunch of other Equinix sites. The issue isn’t the network or the servers; it’s the software. Getting their software to work in a distributed fashion is apparently quite difficult. Freakout, they’ve already done all the math and have negotiated all the deals. It’s just a matter of coding: most coders don’t think about the need for a distributed app, and when they do, they don’t consider greater-than-LAN latencies. How many coders have you ever met who thought about writing an app that performs well with 80 ms of latency separating two plesiochronous databases that are constantly replicating? (One way to cope with that lag is sketched after the comments.)

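On the point raised in comment 6, here is a minimal sketch, in Python, of one common way an application can tolerate replication lag between two distant data centers: send a user's reads back to the primary for a short window after that user writes, and let everyone else read the nearby replica. This is a hypothetical illustration, not anything MySpace has described; the names (ReplicaRouter, the lag budget, the execute() interface) stand in for whatever database client is actually in use.

    import time

    # Hypothetical sketch: keep an app usable when its two data centers are
    # separated by roughly 80 ms and the remote replica lags the primary.
    # The connection objects and their execute() method are placeholders.

    REPLICATION_LAG_BUDGET = 2.0  # seconds we assume a write may need to reach the replica


    class ReplicaRouter:
        """Send reads to the nearby replica, except right after a user writes,
        when that user is pinned to the primary so they can read their own writes."""

        def __init__(self, primary, replica):
            self.primary = primary   # connection to the authoritative (remote) site
            self.replica = replica   # connection to the local, possibly stale copy
            self._last_write = {}    # user_id -> monotonic timestamp of last write

        def write(self, user_id, query, params=()):
            # All writes go to the primary; remember when this user last wrote.
            self._last_write[user_id] = time.monotonic()
            return self.primary.execute(query, params)

        def read(self, user_id, query, params=()):
            # If the user wrote recently, the replica may not have caught up yet,
            # so pay the cross-country round trip and read from the primary instead.
            since_write = time.monotonic() - self._last_write.get(user_id, float("-inf"))
            target = self.primary if since_write < REPLICATION_LAG_BUDGET else self.replica
            return target.execute(query, params)

The point of the sketch is only that lag and staleness have to be designed for explicitly; a system of MySpace's size would also need cache invalidation, conflict handling and failover layered on top.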
