


Myspace’s gradual decline and a recent blog post have me wondering about the flip side of rapid scaling. Todd Hoff at High Scalability wrote recently about scaleogenic environments, “where a combination of factors can make an application need to scale and scale quickly.” There are plenty of application-development and infrastructure strategies for accommodating this need to scale (and Hoff details them regularly), but at the infrastructure level, at least, there isn’t much talk outside cloud computing circles about being able to scale back down. I have to wonder what social-media sites do with their expansive infrastructures once they no longer need them to meet high demand. They can’t just scale them back, can they?

If all goes according to plan, investing in infrastructure is just a necessary cost for social media sites to keep up with steady influxes of new features and new users. But, as the ever-worsening Myspace situation illustrates, things don’t always go according to plan, and popularity can be fleeting. Another current example, not quite so dire, might be Yahoo, although pretty much every site runs the risk of a fall from grace at the hands of a new entrant and fickle consumers. At that point, it’s fair to ask whether infrastructure is an asset or an albatross. The issue, it would seem, is whether the money coming in justifies the rent, power and personnel costs of operating the infrastructure at full capacity, and whether these sites even have a choice.

Facebook is nowhere near having to address this problem, but its infrastructure costs provide a great example of how much money might be involved. The company is paying an estimated $50 million annually to lease data center space, a price that doesn’t include the cost of its servers. Additionally, it’s in the process of building an approximately $200 million data center in Prineville, Ore., and a $450 million data center in rural North Carolina. There are a lot of checks going to a lot of banks, landlords and employees every month.
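To put those figures in rough perspective, here’s a back-of-envelope sketch of what the fixed yearly bill might look like. The lease and construction numbers come from the estimates above; the 10-year amortization period is purely an illustrative assumption, and this ignores servers, power and personnel entirely:

```python
# Back-of-envelope yearly infrastructure cost using the estimates above.
# The amortization period is an illustrative assumption, not a reported figure.

lease_per_year = 50_000_000          # estimated annual data-center lease
builds = [200_000_000, 450_000_000]  # Prineville, Ore. and North Carolina builds
amortization_years = 10              # assumption: construction written off over a decade

amortized_builds = sum(builds) / amortization_years
yearly_fixed_cost = lease_per_year + amortized_builds

print(f"Amortized builds: ${amortized_builds:,.0f}/yr")   # $65,000,000/yr
print(f"Fixed cost:       ${yearly_fixed_cost:,.0f}/yr")  # $115,000,000/yr
```

Even on these rough numbers, that’s a nine-figure annual commitment before a single server is racked.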

My sense is that sites don’t really have the option, either technically or logistically, to scale down as traffic dwindles. Pulling large numbers of servers and other gear out of a complex, running system doesn’t seem like a wise way to keep it up and running. And whatever users are still around will want access to their data, so those petabytes of storage likely need to keep spinning. The problem, of course, is that unless income recovers, the cost of keeping that infrastructure operational could become crippling.
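The keep-the-disks-spinning point can be made concrete with another hedged sketch. Every parameter here is an assumption of mine, not a figure from any site: a hypothetical 10 PB archive on 2 TB drives (plausible circa-2011 hardware), roughly 8 W per spinning drive, and electricity at $0.10/kWh:

```python
# Rough power cost of keeping idle user data online.
# All parameters are illustrative assumptions.

petabytes = 10           # assumed archive size
tb_per_drive = 2         # assumed drive capacity
watts_per_drive = 8      # assumed draw of a spinning drive
dollars_per_kwh = 0.10   # assumed electricity price

drives = petabytes * 1000 / tb_per_drive             # 5,000 drives
kwh_per_year = drives * watts_per_drive * 24 * 365 / 1000
cost_per_year = kwh_per_year * dollars_per_kwh

print(f"{drives:,.0f} drives, ~${cost_per_year:,.0f}/yr in drive electricity alone")
```

The raw drive power is modest on its own; the crippling part is everything wrapped around it: cooling, redundancy, the servers fronting the storage, and the people keeping it all running.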

It’s very exciting to watch web sites and applications take off in popularity, and to watch their infrastructure footprints grow along with them, but maybe that’s just half the picture. Maybe we – and they – need to think about contingency plans for scaling down, too. I’d love to get your thoughts on this: When they fall on hard times, do sites running massive infrastructures need to keep them going and pray for a resurgence, or is there some way to get OPEX back in line with actual income? Can cloud computing be a legitimate option for large sites, or will the performance gains from customized infrastructure always trump possible savings down the road?

Image courtesy of Flickr user Torkildr.


  1. Honestly, I think it’s probably not such a big deal. I’m not sure any of these sites saw a huge drop-off in traffic. It was more gradual, and hardware requirements could scale down organically in the natural cycle of replacement.

