
Summary:

Many of the Web 2.0 companies that I meet in my job as a venture capitalist lack even the most basic understanding of Internet operations. They’d better figure it out — and fast — because not doing so will only cost them money down the road.

I have a major problem with many of the Web 2.0 companies that I meet in my job as a venture capitalist: They lack even the most basic understanding of Internet operations.

I realize that the Web 2.0 community generally views Internet operations and network engineering as router-hugging relics of the past century desperately clutching to their cryptic, SSH-enabled command line interfaces, but I have recently been reminded by some of my friends working on Web 2.0 applications that Internet operations can actually have a major impact on this century’s application performance and operating costs.

So all you agile programmers working on Ruby-on-Rails, Python and AJAX, pay attention: If you want more people to think your application loads faster than Google and do not want to pay more to those ancient phone companies providing your connectivity, learn about your host. It’s called the Internet.

As my first case in point, I was recently contacted by a friend working at a Web 2.0 company that had just launched its application. They were getting pretty good traction and adoption, adding around a thousand unique users per day, but just as the buzz was starting to build, a distributed denial-of-service (DDOS) attack arrived. The attack was deliberate, malicious and completely crushed their site. This was not an extortion-type DDOS attack (where the attacker contacts the site and extorts money in exchange for not taking it offline); it was an extraordinarily harmful attack on site performance that rendered the site virtually unusable, with pages taking a decidedly non-Google-esque three minutes or so to load.

No one at my friend’s company had a clue as to how to stop the DDOS attack. The basics of securing a Web 2.0 application against attacks arriving over its host system — the Internet — were completely lacking. With the help of some other friends, ones who combat DDOS attacks on a daily basis, we were able to configure the company’s routers and firewalls to drop inbound ICMP echo requests, block inbound high-port UDP packets and enable SYN cookies. We also contacted the upstream ISP and enabled some IP address blocking. These steps, along with a few more tricks, were enough to thwart the attack until my friend’s company could find an Internet operations consultant to come on board and configure their systems with the latest DDOS prevention software and configurations.
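
For anyone whose eyes glazed over at that list, here is roughly what those knobs look like in a Cisco IOS-style configuration. It is illustrative only: the interface name and ACL number are invented, and a real filter would be tuned to the application’s actual traffic.

    ! Hypothetical example: the interface name and ACL number are invented.
    ! Drop inbound ICMP echo requests and unsolicited high-port UDP at the
    ! edge; everything else is passed through to the firewall as before.
    access-list 110 deny   icmp any any echo
    access-list 110 deny   udp  any any gt 1023
    access-list 110 permit ip   any any
    !
    interface GigabitEthernet0/0
     description Link to upstream ISP
     ip access-group 110 in
    !
    ! SYN cookies are a host-side setting; on a Linux web server that is
    ! the sysctl net.ipv4.tcp_syncookies = 1.

None of this makes a site bulletproof; it simply cuts down on the junk traffic the servers have to absorb while the upstream ISP blocks the attacking sources.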

Unfortunately, the poor site performance was not missed by the blogosphere. The application has suffered from a stream of bad publicity; it has also missed a major window of opportunity for user adoption, which has dropped significantly since the DDOS attack and shows no sign of recovering. So if the previous paragraph read like alphabet soup to everyone at your Web 2.0 company, it’s high time you start looking for a router-hugger, or soon your site will be loading as slowly as AOL over a 19.2 Kbps modem.

Another friend of mine was helping to run Internet operations for a Web 2.0 company with a sizable amount of traffic — about half a gigabit per second. They were running this traffic over a single gigabit Ethernet link to an upstream ISP run by an ancient phone company providing them connectivity to their host, the Internet. As their traffic steadily increased, they consulted the ISP and ordered a second gigabit Ethernet connection.

Traffic increased steadily and almost linearly until it reached about 800 megabits per second, at which point it peaked, refusing to rise above a gigabit. The Web 2.0 company began to worry that either their application was limited in its performance or that users were suddenly using it differently.

On a hunch, my friend called me up and asked that I take a look at their Internet operations and configurations. Without going into a wealth of detail, the problem was that while my friend’s company had two routers, each with a gigabit Ethernet link to their ISP, the BGP routing configuration was done horribly wrong and resulted in all traffic using a single gigabit Ethernet link, never both at the same time. (For those interested, both gigabit Ethernet links went to the same upstream eBGP router at the ISP, which meant that the exact same AS-Path lengths, MEDs and local preferences were being sent to my friend’s routers for all prefixes, so BGP picked the eBGP peer with the lowest IP address for all prefixes and traffic.) Fortunately, a temporary fix was relatively easy (I configured each router to accept only half of the prefixes from its upstream eBGP peer), and we then worked with the ISP to give my friend some real routing diversity.
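
For the router-huggers following along, here is a rough sketch of that temporary fix in Cisco IOS-style configuration. It is illustrative only: the AS numbers, peer address and prefix-list name are invented, and the idea is simply that each router accepts only half of the IPv4 prefix space from its eBGP peer, forcing outbound traffic across both links.

    ! Hypothetical sketch: the AS numbers, peer address and prefix-list name
    ! are invented. Router A accepts only the lower half of the IPv4 space
    ! from its eBGP peer.
    router bgp 64512
     neighbor 192.0.2.1 remote-as 64500
     neighbor 192.0.2.1 prefix-list LOWER-HALF in
    !
    ip prefix-list LOWER-HALF permit 0.0.0.0/1 le 32
    !
    ! Router B mirrors this with 128.0.0.0/1 le 32, so each GigE link carries
    ! roughly half of the prefixes and therefore roughly half of the traffic.
    ! The two routers are also interconnected and share a default route via
    ! their IGP, so neither loses reachability if its own eBGP session drops.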

The traffic to my friend’s Web 2.0 company is back on a linear climb – in fact it jumped to over a gigabit as soon as I was done configuring the routers. While the company now has its redundancy and connectivity worked out, it did pay its ancient phone company ISP for over four months for a second link that was essentially worthless. I will leave that negotiation up to them, but I’m fairly sure the response from the ISP will be something like, “We installed the link and provided connectivity, sorry if you could not use it properly. Please go pound sand and thank you for your business.” Only by using some cryptic command line interface was I able to enable their Internet operations to scale with their application and get the company some value for the money they were spending on connectivity.

Web 2.0 companies need to get a better understanding of the host that runs their business, the Internet. If they can’t, they need to find someone who does, preferably someone they bring in at inception. Failing to do so will inevitably cost these companies users, performance and money.

By Allan Leinwand

Comments

  1. Why bother? Isn’t Amazon EC2/S3 around?

    Emil

  2. “it’s high time you start looking for a router-hugger, or soon your site will be loading as slowly as AOL over a 19.2 Kbps modem.”

    great line.

  3. @Emil – thank you for articulating my point in two words :) If you think relying on Amazon’s outsourced infrastructure will enable you to build a highly scalable Web2.0 application without any knowledge of Internet operations, then I predict your business will encounter Internet operations issues that cost you more money than you realize as you scale. Don’t get me wrong – Amazon runs a good operation – but a lack of understanding of the host infrastructure that your business relies on to make money is going to be an issue. What happens when your employees sitting in an office in Indiana have connectivity issues connecting to Amazon’s service when you launch your service this month?

    @Jon – thanks.

  4. allan, unfortunately not every startup has the co-author of a cisco router book on board with real tech chops still working (even the most tech savvy vc’s i’ve seen haven’t touched code in at least 10 years, sorry)…but per emil, this is where the opportunity gets real for amazon, cloudfs and others – of course my real question for you is what are you gonna do with that first company that floundered and lost traction after the access debacle?

    there’s no cure for this stuff. startups in this space would be well served to spend a little more time following the activities of the IETF and examine how they’re thinking about these problems in a more vendor-neutral way…

  5. @dave – I’m not an investor in either company that I mentioned. So, for the company that floundered, I did my best to get them a consultant in a timely manner and it’s now up to them. Also, please don’t get me wrong – I think that moving services to the cloud can be the right way to go for some Web2.0 applications, but when you don’t understand the basics of the technology that allows your business to operate, well….

    Has Web2.0 really killed the network engineer as I wrote about last year? http://gigaom.com/2007/04/10/web-20-death-of-the-network-engineer/

  6. On a far smaller scale, I just discovered a small startup (non-IT – medical devices, to be specific) which I support has been paying $140 per month for 2Mbps DSL. Expensive? Well, it would be: most of that money was actually for the bundled webhosting, which they had never used or indeed known they had!

    Allan: Absolutely. Emil, maybe your startup scales well, so that as your traffic builds up you can ratchet up through 10, 100 Amazon servers – but sooner or later, it’ll come back to bite you, either when you hit a bottleneck you hadn’t spotted or when someone else more efficient comes along and eats your lunch with a quarter of your costs for the same service!

    I’ve always felt that trying to build any kind of Internet service without understanding the structure you’re building on is a bad idea. There are things you should bear in mind which you simply won’t understand otherwise – as in this case: why two separate peer links to a single ISP, rather than dual-homing (connecting to two ISPs) or a simple bonded link between the two routers? Maybe in this particular case there were good reasons for this particular setup, but the company should have had someone thinking this sort of thing through before spending lots of money committing to one option!

    I’ve seen painfully slow solutions built on what should be a lightning-fast CDN, thanks to poor implementation – and far faster sites on one small server on the far side of the planet with a well-tuned setup. You can all guess which one cost more – and yet it’s the other one which provided the better experience for end-users!

  7. There’s a meta issue at work here and it has to do with the kinds of activities that are typically recognized and rewarded in technology companies, particularly startups. Product releases and (in some cases) sales are everything. Tactical execution, on the other hand, isn’t recognized much at all — not by investors, executives, or users.

    It is quite unsurprising that this would happen in a world in which most VCs will only fund companies founded by kids who are only a few years out of college with little or no operational experience in running a web site.

  8. @James – thanks for the comments – thankfully the extra costs were limited to $140/month. When you’re talking about a GigE link to a Tier-1 ISP, you’re into a few orders of magnitude more of expense.

    @Jeffrey = I don’t think that we’re not one of those VCs :) http://www.panoramacapital.com/portfolio.shtml

  9. @Jeffrey – Grrr….that should have read: I don’t think we’re one of those VCs :)

  10. Allan,

    I’m glad you’ve realized your post about the “death of the network engineer” was highly exaggerated (and totally ridiculous).

    This post, however, is great. :-)

  11. A copy of every Cisco book won’t help you. True router CLI huggers are important – but expensive. Are you going to hire one for a few hours work per month? A few minutes with a Cisco text will not make even the most die hard coder a BGP expert (let alone bonding links)

    That should be the outsourced domain of the ISP or other 3rd party. If the ISP was a little more aware – that would be a service that they would provide. They have the staff and 24×7 operations to manage it properly – rather than just shipping a box for the customer to plug in.

  12. Allan — fair enough, I look forward to pitching you my next startup idea. :)

  13. Daniel Golding Wednesday, May 7, 2008

    This is why you should use a hosting company. The idea of any web 2.0 startup hosting internally on a T-1, DSL, or Ethernet loop, is ludicrous. You need multihomed, reliable, and scalable bandwidth. The point about EC2/S3 in the first comment is certainly simplistic, but there is a certain truth in the idea that this should not be the web 2.0 company’s problem.

    Most reasonably sized managed hosting firms have crack network engineering teams that understand BGP and Internet architecture quite well. They order Internet transit in 10 gigabit chunks and the largest also peer at Internet Exchange Points (IXPs). The idea that “network engineering is dead” is foolish – network engineering is alive and kicking. It’s just that network engineering has become professionalized and is no longer the realm of “Jim, the Sysadmin, who knows Cisco” – Jim never really knew “Cisco”, and he always did a marginal job. Real network engineers work for carriers, hosting companies, and CDNs, as well as large financials.

    There is no way on earth that you’ll get real understanding of Internet architecture at Web 2.0 firms. I appreciate the sentiment – it is important – but leaving the underlying infrastructure to hosting providers is the way to go.

  14. Well… to put things simply: I don’t see the point of this article. Yes, while it is true that your friends faced problems connecting to their ISP’s backbone, how many people actually run their own servers unless the scale justifies it? (In other words, isn’t it really dumb to do that?)

    You’ll more likely go with a VPS provider, and God willing, upgrade to some blade at Rackspace someday; both of which are *managed* (so you don’t have to configure BGP and punch holes in firewalls)

    And, if the scale does justify it; you might run your own servers. I presume any company would hire the “router huggers” they should.

    The Internet (and computing in general) has grown because of a clean separation of concerns. (To the router hugger: think of the layered TCP/IP architecture.)

    To the others: why stop at hugging a router? Isn’t electricity a part of “the host infrastructure that your business relies on”? Why not learn all about lead-acid battery I-V characteristics? They will be useful once power to your server room fails!

  15. Your description of how you thwarted a DDOS attack doesn’t make much sense.

    As a router-hugging relic from days gone by, I know that if you had an actual DDOS attack, no amount of filtering of UDP high port numbered traffic, SYN-cookie detection, or ICMP filtering would have helped you.

    Distributed denial of service is just that: hundreds, if not thousands, of hosts hitting your server at the same time. Eventually, the host falls down from load, and blocking at the router doesn’t help much because there are too many hosts hitting the small pipe feeding your site. You have to take the blocking upstream to your provider and try to block there, where you’ve got a better chance at mitigating the load.

    It sounds more like you had a basic DOS attack combined with a poor configuration and misconceived security.

    I do appreciate your article, though. Too many people are reliant on Amazon to save them, or think that by setting up a single server in co-lo they’re going to be able to scale.

    Also, one last thought: You’ve said: Fortunately, a temporary solution was relatively easy (I configured each router to only take half of the prefixes from each upstream eBGP peer) and worked with the ISP to give my friend some real routing diversity.

    You’re partially right about this, but there are tricks for load balancing BGP that can be used.

  16. [...] in at inception. Failing to do so will inevitably cost these companies users, performance and money.read more | digg [...]

  17. @David Ulevitch – thanks. I did think my post last year was somewhat facetious and was struck by how many took it literally ;)

    @elliotross & Daniel Golding – I agree that outsourcing your hosting makes sense – but you still need to understand how infrastructure works. In my second example, the 2 GigE links could have been from a colo cage at a hosting provider cross-connected to the ISP via a switch – and have the same result.

    @Jeffrey – I look forward to hearing about your startup :)

  18. @John – I do think this was a DDOS attack as there were multiple source IPs. If you really want to know more details, let’s chat offline. On the BGP solution – I was waiting for the route-huggers to give me alternatives – I picked the most expedient fix given my lack of faith in the competence of the upstream ISP ;)

  19. Great post. It’s about time more people recognized the importance of a good hosting company. I used to run a hosting company and I could regale you with countless stories of complete ignorance on the part of customers… but what it typically comes down to is this:

    “I know [insert some programming language here], so I don’t need to pay for managed services.”

    “Managed services are too expensive; I don’t need them because my friend Bob knows this stuff.” (a parallel to the first quote)

    So you sell these folks an unmanaged dedicated server, and then of course you get the screaming OH MY GOD MY RAID JUST FAILED AND THEN BOTH DRIVES DIED AND I DIDN’T HAVE ANY BACKUP AND MY ENTIRE SITE IS DOWN AND I AM LOSING THOUSANDS OF DOLLARS. AREN’T YOU SUPPOSED TO DO BACKUPS AND MONITOR MY DRIVES FOR ME??????

    Um, no. I’m sorry to hear that, but it is an UNMANAGED server…

    We lost customers with problems like that on a regular basis. Usually they went to another unmanaged host — setting the clock for when it would happen again.

    The first comment is an excellent example of the sort of customers we got on a regular basis. Even Amazon has limits (200Mbit transfer limit per instance, for one.) Knowledgeable about tech. Think they know a lot about hosting. In reality, have no idea how to manage a server, keep it up to date, keep it from getting hacked, handle a DDoS, check that the RAID is operational, or run simple backups.

    It got so bad at one point that I was seriously considering throwing in the towel on unmanaged services and going all super-high-end-managed (like Rackspace was smart enough to do.)

    I am really glad I am out of the industry. I am much more sane now!

    -Erica

  20. I like the article – thanks Allan.

    I sometimes feel like we router-huggers are a bit like highway maintenance people – no-one cares when it is all working. What people don’t realise is the level of maintenance going on in the background to keep them and the services they use online 24/7.

    “I configured each router to only take half of the
    prefixes from each upstream eBGP peer”

    Was just thinking about this: I appreciate it is a temporary solution, but wouldn’t it be better to configure eBGP multihop (TTL=3) and peer between loopback addresses?

    As it is currently configured, your friend will lose half of his IP space if one of the links goes down, won’t he?

    Alternatively, advertise half up each gig link, and the whole block up each link as well. The provider will then route on longest prefix match when both links are up, but use the shorter prefix when one link is down.

    Best regards, Andrew

  21. All of the above said – they’ve got a gigabit of outbound traffic, yet they’re using a single router and homed to only one provider edge (PE) router?

    Sounds like they need more diversity than that – two datacentres, two routers, two providers would be my prescription…

  22. D’oh.

    Just noticed the “two routers” part of your article. Still, I guess my last suggestion (advertising half the IP space plus the whole IP space up each) still works…

  23. We have been lucky enough to slip under the radar and never get hit by a DDOS, even though we have 300,000 users a day. Is there a good outsourcing contact to handle DDOS attacks?

  24. @Andrew Mulheirn – Thanks for the comments. No, he won’t lose half of the Internet if an eBGP peer goes down, as the routers are interconnected and share a default route via their IGP. More details offline if you’re interested :)

    @SteveR – contact me offline and I’ll provide you with a few resources – they won’t be cheap….

  25. If Amazon, Google, Sun et al cannot figure this stuff out, then I doubt a little, underfunded start-up can. And if Amazon, Google, Sun et al cannot figure this stuff out, then there must be a great opportunity for entrepreneurs to add products/services to those hosting ecosystems to satisfy the real hunger to deal with “plumbing” as a totally outsourced variable cost. Sorry, hiring lots of infrastructure guys internally seems like a retrograde step to me.

  26. @Alan: You sound like one can’t do a startup unless they know at least half the scary words you just threw around. I’m sure startups can figure out infrastructure later. This is where people come in to optimize it.

    No way 2 guys in garage should bother about that. The company you’re writing about was probably too slow to resolve their issues.

  27. @bernardlunn – Agreed – don’t hire lots of infrastructure folks. But don’t expect to scale your Web2.0 application dramatically without using the services of someone who understands Internet operations.

    @Emil – Of course you can do a startup without learning scary words like BGP, DDOS and SYN Cookies! But once you get out of the garage and want to make money, you need to understand Internet operations (or at least have someone around who does).

  28. Great post.

    My questions are simple:

    (1) How does a startup with limited capital find one of those “router-hugging relics of the past century desperately clutching to their cryptic, SSH-enabled command line interfaces” willing to come on board as an advisor until the money shows up?

    (2) Where is a good source to find quality managed hosting service providers?

    Thanks

  29. Very cool post, probably the single most interesting thing to hit my RSS reader this week. Thanks Mr. Leinwand. :)

  30. @David Mullings — You talk to me.

  31. I agree with you on principle, but I think you take the idea a bit too far. For instance, I find it poor judgement for a start-up to run servers in their basement and deal directly with an ISP in the first place; the only time you would need to truly bone up on your router, bandwidth, etc. knowledge is if you’re doing that. My company uses unmanaged hosting, but the servers are still connected through a very high-quality, time-tested network staffed by people who have far more knowledge of networks than I ever wish to have. I need to know how to properly secure, back up, maintain, and set up web servers, but beyond that… the vast font of low-level networking knowledge is left to the experts.

  32. Great post. And I like some of the comments too!

  33. Major sites that I’ve worked on can easily log millions of attacks per IP address PER DAY. Depending largely on the number of attacking source hosts (or their shadows from, say, a SMURF attack), it can add up fast. My point is that the incumbents have to sustain themselves against these attacks 24×7. So why should it be any different for the poor lowly 2.0 startups — they need to secure their applications just as much as the incumbents do. I agree completely that every company that relies on the Internet as the PLATFORM for their product (not to sell the product — but to RUN on) should have in-house expertise or closely held advisors who can provide guidance, especially on matters of security.

  34. Finding router huggers — it’s easy. You PAY for them, just like you have to pay for everything else in business. It’s not free. Turns out that the strongest network engineers are also past developers. Not really amazing, since routing started its life as an application — written by telecom engineers…

    Seriously — your board and advisors should help you find one key Internet architecture, security, and engineering person to advise your company. You will need to do some combination of compensation including one or more of stock/equity, consulting, or both.

    And I agree with some of the other posters that to keep costs down and your focus tight, you do not necessarily need to hire full time staff (depending on what you want to do). I will say this. The right person can help with foundation architecture work as well. Building network constructions into the software architecture could save you a lot in the long run.

    -Victor

  35. [...] an interesting post at GigaOM: Web 2.0, Please Meet Your Host, the Internet. It’s a good read, though could be shorter, but a few things struck me after reading it. I [...]

  36. It seems like a lot of these companies are being driven by overly high expectations of striking it rich without a solid foundation in the evolution of the Internet over the past 15-20 years. It’s like 2000-2001 didn’t even happen. I read a lot about Web 2.0 because I’m interested in how technology changes affect society, and most of the stories are not about technology at all but about how to make a profit off of Internet development.

    I don’t have a problem with people trying to make a living or even getting wealthy off of innovation as the profit motivation has driven a lot of internet development. But I think a lot of these folks have completely unrealistic expectations about striking it rich and I think another crash is coming in about 9 months or so unless new enterprises have a firmer footing and grounding in the cycles of internet growth and slowdowns. Their ambitions seem disconnected from the goals and abilities of developers and programmers who actually create the programs they are trying to make money off of. JMHO.

  37. [...] It’s the network, dummy Posted on 8 May 2008 (Thursday) by smp In the GigaOm blog today, Allen Leinwand puts up a monstrous wake-up call to all the hip and cool Web 2.0 companies out there: Your apps run across the Internet [here]. [...]

  38. We have an elegant network-based defense against DDoS and other hostile attacks.

  39. Stacey Higginbotham Thursday, May 8, 2008

    Allan, perhaps in this column there’s the germ of a business idea. Much like there are several virtual CFO companies, perhaps a network engineer SWAT team business would do well for startups. Like everything 2.0, it would be “on-demand.”

  40. It’s very hard to take this article seriously when it uses gross exaggeration to make a point. To say that Web2.0 companies “lack even the most basic understanding” is an offensive exaggeration. This statement seems particularly dumb since you follow it up with a description of how a team of specialists was required to resolve the problem.
    It would be enough to say “routers are too often overlooked…” to make a point. No need to slap us all in the face to get our attention.

  41. Stacey’s post is a great idea, but somewhat misses the point. The vast majority of humans taking advantage of infrastructures (not just communications, but power-grid, etc.) dwell blithely and happily at Layer 7, thinking that problems arising at anything under the application can be solved through liberal applications of pixie dust and magical incantations. Not to sound too much like a survivalist, but it behooves all of us to understand the basic operations of all underlying infrastructures, preparing for the day when things fall apart, and we have to go back to configuring our own routers and manually setting the f-stops for our own non-digital cameras. Maybe even slide-rule education would be useful. Pixie dust just doesn’t cut it.

  42. [...] users. Web 2,0 sites don’t seem to take the time to optimize queries or their site properly.read more | digg [...]

  43. [...] some years ago, during the dot COM boom, dot COM sites were popping up for every business…read more | digg story Related posts:Facebook Hyperactivity is [...]

  44. [...] in at inception. Failing to do so will inevitably cost these companies users, performance and money.read more | digg story Posted in observations — by db on [...]

  45. [...] Web 2.0, Please Meet Your Host, the Internet – GigaOM [...]

  46. I cannot believe this made it on Digg.
    What a waste of time by people who think they know what is going on.
    If you are having a hard time logging into Facebook try something else.

  47. Emil — wrong, to be honest. Infrastructure concerns should be thought about from the outset of a project. Not scaling properly can kill a project (or its budget, later on) so it should be taken into account from the ground up.

    The other half of it, bad code, could kill any worthwhile project as well. Some survive nonetheless, but taking both into consideration upfront will save tonnes of hassle! :)

    On another note, there’s a wide range of inexpensive hosting providers (some scale well, too) on http://www.hostjury.com :)

  48. [...] sul tema dell’importanza dell’infrastruttura nel successo di un servizio web. In un articolo su gigaom un venture capitalist riflette sul fatto che l’errore maggiore che possano fare le [...]

  49. I’ve seen quite a few of these situations, and ultimately a lot of it has to do with developers ignoring the inevitability that their site will expand beyond one server and a couple of users. The Amazon services don’t save you from this, and I think it would be foolish to rely on that as one’s scaling plan. Even some managed hosting providers fail at providing load balancing, so if you’re starting to make real money it pays to make a minor capital investment into getting colo and some folks who know how to do things.

  50. Web 3.0 is about to open!

    http://www.permaid.com

    Web Post Office has cured most of current email/web problems.

  51. [...] as it might be, even from its most ardent admirers. Network engineer-turned venture capitalist Allan Leinwand explains: I have a major problem with many of the Web 2.0 companies that I meet in my job as a venture [...]

  52. [...] in at inception. Failing to do so will inevitably cost these companies users, performance and money.read more | digg story addthis_url = [...]

  53. Allan,

    This may be my favourite post of the year.

    I would argue that just as computers are giving way to computing, routers have given way to routing. I have no desire to hug a router, or even a CLI for that matter. But I have a tremendous respect for people who understand routing.

    The issue here is abstraction. You can’t function in AJAX, within HTML, within HTTP, served by Ruby, run on EC2, sitting atop a hypervisor, and so on, without losing touch with the fundamentals. This Internet thing is still about packets getting where they’re going, regardless of how shiny the buttons in those packets are.

    You also touch on a related point — scalability used to mean computing and bandwidth. Now it means social backlash, comment spam, community management, and so on. We can’t just register .com, .net, and .org any more; now we need to make sure the Twitter, Drop.io, Myspace page, and Facebook names aren’t taken too.

    Great piece.

  54. inthemission Thursday, May 8, 2008

    The article was pretty good, the comments are hysterical. Anyone who thinks that we’ll just put something on Rackspace, or we’ll just hire some consultant to tell us what’s wrong and everything will be alright is really fooling themselves.

    How do you know what your consultant is telling you isn’t a complete bunch of garbage? “Gee, I tried starting my car and it sounds like the engine is trying to start but then it kinda winds down and stops”. “Well, you obviously need new brakes sir!” “Oh, ok, here’s my credit card”.

    Relying on someone else’s expertise without knowing anything about the subject matter is a good method for parting you with your money. Most Web 2.0 startups don’t have much of the green stuff to waste.

    Good sysadmins or network engineers are essential for keeping your site from getting hacked, overwhelmed, horribly underutilized etc.

    You think your sales rep at the hosting company is going to say “Why are you running CUPS on your hosted server? Maybe you should tune your box before you pay us more money for another machine”!??!

    Just because someone runs a company that does managed hosting doesn’t mean everyone who works there is smart or good at what they do, either. I called IBM once to say the ISP I was working for was having trouble with the SMTP connection to a certain bank of their MX hosts. The response of the guy on the other end? “Have you checked your POP settings sir?”

    It’s up to you to be an educated consumer of the services you are buying. If you don’t have time to get educated, then hire someone who is educated (sysadmin/network engineer/what have you). Running an Ubuntu box at home is not the same as being a professional sysadmin, trust me. Plugging in your DSL modem or even FIOS connection, for Pete’s sake, doesn’t make you a network engineer.

    Amazon S3 went down for us about 3 weeks ago, by the way. So much for that theory.

    Thanks for the great post Allan. Kinda sorry I got to DI a few months after you left.

  55. @Allan: Yeah – I see what you mean about them not losing the internet because of the default route and interconnection, but I was more thinking of the Internet losing half of *them*.

    More details offline. I’m probably boring everyone with my router-hugging antics. ;)

  56. [...] in at inception. Failing to do so will inevitably cost these companies users, performance and money.read more | digg [...]

  57. Sorry folks about being out of pocket today. Back to the conversation:

    @David Mullings – (1) I think you can attract good router-huggers to a startup just as you attract good early employees. Either pay them well or sell them on your ideas. (2) There are lots of good managed hosting providers – ask your peers or traceroute to your competition :)

    @Kevan – thanks, I’m flattered.

    @Tom – In principle I agree with you, but most folks don’t know that they even need to seek out some source of the vast font of low-level networking knowledge.

    @Victor – great points.

    @Antoinette – great point on the macro economic environment.

    @gerry – great news. Maybe you can share with others how you got this accomplished?

    @Stacey – Interesting idea. Back about 5-7 years ago there were consulting shops that did this, but I have not heard much about this business in some time. Maybe we need a Web2.0 app that dispatches router-huggers on demand? :)

    @Cham – Fair enough and point well taken. That being said, I don’t think I am grossly exaggerating given some of the conversations I’ve had lately.

    @Loring – thanks for the comments.

    @Tom – agreed, but you need to get someone involved who understands more than just ping, pipe and power if you’re really going to make any serious amount of money.

    @Alistar – thanks very much! I’m off to register a few domain names now :)

    @inthemission – thanks, always great to hear from a DI alum.

    @Andrew – way too geeky, but in this case the same prefix was announced out both upstream eBGP peers. You’re not boring me :)

  58. dave

    setting up a couple of routers in this sort of configuration is not exactly fracking rocket science needing a Quad CCIE

    sounds like their CTO was not doing his/her job

  59. Geoff Lisk Friday, May 9, 2008

    Being a self-described router hugger I’m a little biased here, but the message is simple – ignore infrastructure at your own peril. I would venture that most readers of this article do not know how (or have the tools) to replace the brakes and tires on their automobiles themselves but know that they need to be maintained and replaced. So what do you do? Hire a professional on a contract basis with the tools and skills to maintain the brakes and tires on your auto. Any web 2.0 (whatever that is anyway) company that ignores the fundamental infrastructure concerns underpinning its shiny new concept is at best reckless, at worst incompetent or even downright fraudulent. Extending the automobile analogy one final time, you wouldn’t say “a mechanic is too expensive and hard to find so I’m going to take my chances,” so why would you do the same with your business?

    Allan, thanks again for another great article!
    -Geoff

  60. @Geoff Lisk – Thanks and good points! You’re right, some folks don’t think they need a mechanic….

  61. [...] “Web 2.0, Please Meet Your Host, the Internet” I have a major problem with many of the Web 2.0 companies that I meet in my job as a venture capitalist: They lack even the most basic understanding of Internet operations. [...]

  62. Good article. It’s true that some Web 2.0 companies are so overwhelmed by their idea that they forget the fundamentals.

  63. Geoff / Allan

    And don’t forget that a car’s engine, in its most basic form, has 3 moving parts.

  64. [...] there remains an immense and growing corporate market, Intranets running telecomm services and Web 2.0 companies offer that [...]

  65. [...] Web 2.0, Please Meet Your Host, the Internet – GigaOM – So all you agile programmers working on Ruby-on-Rails, Python and AJAX, pay attention: If you want more people to think your application loads faster than Google, learn about your host. It’s called the Internet [...]

  66. [...] an interesting post at GigaOM: Web 2.0, Please Meet Your Host, the Internet. It’s a good read, though could be shorter, but a few things struck me after reading it. I [...]

  67. [...] Web 2.0, Please Meet Your Host, the Internet – GigaOM (tags: article web2.0 internet infrastructure) [...]

  68. [...] in at inception. Failing to do so will inevitably cost these companies users, performance and money.read more | digg [...]

  69. Web 2.0 companies – create yourselves a new host!

    http://diversity.net.nz/on-telcos-and-disintermediation/2008/05/15/

  70. [...] prepare for the traffic that hopefully will arrive when you launch a new destination or service. As GigaOm points out, malicious types who are envious of others success can attempt to bring down upcoming [...]

  71. [...] SaaS providers underestimate their reliance on Telcos (and I’m not the only one – see this from Gigaom and the unreasonablemen.net). I believe that Telcos don’t really understand the internet that [...]

  72. Many of these internet operations problems are caused by the two web 2.0 business models which are:

    “I’ll create a site that goes viral creating massive traffic and user base so I can sell CPMs for piddling amounts”

    or

    “I’ll create a site that goes viral creating massive traffic and user base which makes no money, but the traffic and user base are so impressive that maybe some sucker will buy a percentage so I can say it’s worth x billions”

    If their business models didn’t rely on massive volumes of traffic then this wouldn’t be such an issue.

    Many of these web 2.0 companies have models that mean the more traffic they have the more money they lose.

    If lots of people like it, use it, but it’s free, and it relies on contributions from ‘investors’ to continue, it’s not a business, it’s a charity – like Twitter.

  73. [...] websites take good architecture, reliable infrastructure, and constant vigilance. Allen Lewind wrote that “Failing to do so will inevitably cost these companies users, performance and [...]

  74. [...] sul tema dell’importanza dell’infrastruttura nel successo di un servizio web. In un articolo su gigaom un venture capitalist riflette sul fatto che l’errore maggiore che possano fare le [...]

  75. @ Allan.

    We created Pingsta ICE – http://www.pingsta.com/ice_intro – for this very purpose. To provide startups, SMBs, large enterprises and service providers alike {yes, even Amazon is welcome ;-) } with Network-Intelligence-as-a-Service. This way, companies will simply plug into the ICE platform and tap into Pingsta’s coalesced Internetwork expertise on-demand – like a utility on a pay-as-you-go basis.

    Cheers,
    Peter
    Pingsta Founder & CEO

  76. [...] Leinwand at Giga Omni Media posted an article on small companies that learned this the hard [...]

  77. I completely agree with the author’s idea regarding Web 2.0. All tools require specialists to work with them.

