Web 2.0 & Death of the Network Engineer

I was recently meeting with a Web 2.0 company discussing their network infrastructure plans. As I started asking questions about their racks of servers, their storage area network (SAN), their plans for routing, load-balancing and network security, the CTO of the company stopped me and made a bold statement.

He said, “The Internet is like electricity. We plug into it and all of the things that you mention are already there for us. We don’t spend any time at all on network or server infrastructure plans.”

To this CTO, knowing the details of his network and server infrastructure was like knowing the details of the local utility electricity grid – not required. Is this a bad thing, or proof that networking technologies have succeeded?

I guess I am old school, but I recall in the not-so-distant past that every startup needed a plan for their network and server infrastructure and even knew the details of their service provider's network – are they using OSPF and BGP? What is the latency across the local peering point? Who are their upstream network peers? How are their firewalls and load-balancers configured? What blocks of IP addresses have they been assigned and how are they routed?
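
Answering even one of those questions meant getting your hands dirty. As a minimal sketch (the hostname and port below are placeholders, not anyone's real infrastructure), checking latency to a peering point was often nothing fancier than timing a TCP handshake:

```python
# Minimal sketch: estimate round-trip latency to a host by timing a TCP
# handshake. The host and port below are placeholders for illustration.
import socket
import time

def tcp_handshake_ms(host, port=80, timeout=2.0):
    start = time.time()
    # create_connection completes the three-way handshake, so the elapsed
    # time approximates one network round trip plus connection setup.
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.time() - start) * 1000.0

if __name__ == "__main__":
    print("%.1f ms to www.example.com" % tcp_handshake_ms("www.example.com"))
```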

Some companies, like InterNAP and Level 3, have businesses that emphasize their network optimization and network architectures. I don’t know of any electricity optimization companies and I don’t have any idea of the architectures they have built.

My roots are in network engineering and I have spent a good part of my career building network devices and global IP-based networks and services. I’ve spent years studying routing protocols, quality of service algorithms, security mechanisms to prevent DDoS attacks and have every field of the IPv4 packet header memorized.

When the CTO of a Web 2.0 company does not know how a router or switch works (or even what layer of the OSI model they operate on), I tend to cringe a bit.

I guess I’m reluctant to admit that my technical depth in networking has been abstracted into irrelevance in the Web 2.0 world of social networking, mash-ups, RSS and AJAX. I know that a well-architected network can have a dramatic effect on application performance – but maybe on today’s high-speed Internet it does not matter. It might be that network engineers are not relevant for today’s Internet in the same way that software optimization engineers are seemingly not relevant for Microsoft applications.

On the other hand, I see the current state of the Internet as the ultimate success of these networking technologies. You can deploy a wildly successful Web 2.0 application that serves millions of users and never know how a router, switch or load-balancer works. Even network security and firewalls, which were making headline news not more than a few years ago, are now treated as routine. The success of these networking devices and technologies has enabled them to become part of the technology landscape that exists for all to use as they see fit, similar to the microprocessor or electricity.

In your opinion, has the Internet reached a level of abstraction similar to electricity? Do you use the infrastructure that is given to you by your local Internet service provider or a specialized hosting facility like Amazon without questioning how it is architected and designed?

In my role as a venture capitalist, the answers to these questions will help me determine whether startups that build optimized networking devices, improve network security, virtualize storage, and so forth are still needed in today’s market.

Allan Leinwand is a venture partner with Panorama Capital and founder of Vyatta. He was also the CTO of Digital Island.

101 Comments

routerguy

For the small to medium company, a competent ISP and hosting company is probably all that’s necessary. But as video and other high-bandwidth applications become more prevalent, design and optimization become more important. The point at which it becomes necessary and/or cost-effective to optimize has moved much higher on the complexity scale, as networking devices have gotten more intelligent and bandwidth has gotten cheaper.

Tom Mornini

I think part of this is the emergence of application hosting companies, which you’ve written about recently.

http://gigaom.com/2007/02/26/engineyard/

Many of our customers DO know about this stuff from past ventures, but are satisfied that we know as much or more than they do and are happy to pass the load onto someone else.

Allan Leinwand

John Furrier – you said “the innovation will come from mastery of network theory” for web2.0 companies. Interesting thought! If you’re working on a web2.0 company that leverages network theory, that would be very interesting to me….

Allan Leinwand

Hi folks,

Thanks for all of the great comments! I’m currently on business travel and have not had as much time to reply here as I would have liked.

I truly appreciate both sides of this discussion, although my network engineering roots still make me lean towards wanting a CTO who understands the details of their network infrastructure. There still seems to be some science in tuning your router, load-balancer, firewall, and so forth to make web2.0 applications perform and scale well. That being said, I know startups are using Amazon’s EC2 and S3 with some good success. And I know of more than a few web2.0 companies that plug their rack of servers into an xSP-provided Ethernet port and that’s it.

And no, tomo, I did not invest in the company that sparked this discussion. Maybe if the CTO changed roles to VP of Marketing, I might change my mind :)

I’ll try to get back here and respond to more comments after my cross-country flight.

Phil

Really interesting on both sides – I think the CTO you talked to is a result of what Web 2.0 companies are. True Web 2.0 companies, I think, are media companies, not technology companies. Conde Nast worries about the paper they are printing on, but they worry more about what they are printing.

Carlos

We see this every day – in fact, that is often how we get involved. The programmers, CEO or CTO don’t know what they need, or how much better something can run with the proper network setup.

We have had the privilege of helping a few Web 2.0 companies on their way to success. Most of them started in the same situation this CTO describes. Everything changed when they started to grow and needed to scale: once they began moving large amounts of internal traffic, they upgraded their network. The same happened when they wanted to deploy a complex database solution like a cluster.

As always, this depends a lot on the size of the Web 2.0 company, but at some point you will always need a network engineer.

Eran Shir

It’s like Jeff Bezos saying:
“At Amazon, no, we don’t care about shipping”. At the end of the day, if you’re a web start-up you ship packets. And if packets get lost, fail to route properly, and you get ongoing retries that clog your queue (be it the db, Tomcat or whatever), then your users will have a poor experience. This is especially true on the web, where you can get traffic bursts very easily.
Besides, current Internet research projects that constantly monitor the Internet (e.g. http://www.netdimes.org) teach us that the low-level infrastructure is still miles away from being as reliable as the electric grid or the water supply network.
At the end of the day, God is always in the details.

Mark

I’m in the same place as the CTO. I was most recently the most knowledgeable developer in a small agency, and was promised an eventual Director role. I’m a developer and a strategy guy who inherited a hosting environment in a nearby colo. I hated it. I can master and do a lot of things, including server admin, but my eyes would glaze over every time our network consultant started talking hardware. If the path wasn’t pre-selected, I would’ve recommended farming everything out to RackSpace.

I’ve got reliable hardware at home, reliable Macs, a dedicated virtual server at MediaTemple, and know enough about scalability to make smart recommendations to my clients. What more do I need?

gz

Interesting post and conversation. I’m also biased by an engineering background, but I just founded a startup and we’ve been considering scalability and performance since day one, even though our architecture does include Amazon S3 and EC2. True, it is some degree of wasted time if we never have to scale, but it would be much more costly and inefficient to try to redesign/rebuild for scale and performance down the road, with live customers to support in the meantime, than to understand what it will take from the start. This doesn’t mean initially building a massively scalable architecture, but it does mean putting a foundation in place that can eventually scale. I agree this isn’t nearly as hard as it was even 5 years ago, but I don’t think we can yet plug in and hope for the best.

John

CTO 2.0?

“I don’t need to know any of the details, it just works!” How many times did we hear this kind of nonsense during the dot.com boom?

As many of the comments above have pointed out, any CTO worth his salary would understand why this stuff is not simply plug & play. The network is not quite that simple yet (although apparently investors are).

Vito

I think this is like the popular question: which came first, the chicken or the egg?
I’m working as COO for the network department of my company. What I see is that Web 2.0, with its applications, IS the net. When you take a look at Cisco and some other network vendors, you can see that the good old boxes are not just routers anymore. Today we have boxes with a lot of functions/applications; virtualization (NAS/SAN) is only one further example. Take a look at what Cisco is working on!
What would you say Google is? Is it a network application, an ISP, an ASP?
So I would say (as a proud networker) that the next generation of CEOs will come from the network department. ;-)

LukeD

I have to agree with what most people have already said – if you are a CTO and you are treating the hosting for your web2.0 app as that much of a non-issue, either you have no ambition for your product to become widely used, or you really don’t understand enough to be calling yourself a CTO.

Right now, I’m working as lead dev for a stealthed web2.0 startup (ohgnoes! not another one!), and one of the biggest things on my mind is “how do we write this so that it’s going to scale well, and what is the architecture (in terms of hardware, software and networking) that’s going to help us do that?”. If you aren’t at least asking these questions (and you need to know what they are), your startup isn’t going to go far in the long run.

Peter Secor

This is definitely an interesting and topical subject. I’m currently running a small startup and we treat our network, databases, and servers as a combined service when doing low-usage prototypes, some parts of beta-testing, and some low-usage production services. This allows us to test and design functionality independently of deployment design, and in the beginning stages of a project it allows for very quick iterations through feature-sets. It also keeps us from wasting time on premature optimization of services before their functionality is fleshed out.

However, once we start looking at production servers, appropriate network and server design still matters because it significantly affects our operational cost/user. Having come from a network management company (Micromuse), I’ve seen the operational impact of network, database, and server design (good and bad) when scaling. Interestingly, some services such as Amazon S3 and some types of hosted servers are configurable and scalable enough to give a bit more leeway when initially deploying a product and I look forward to taking advantage of these services in the future.

Simon Leyland

Allan, great post with a real curve ball at the end.

I think you’ve seen from the responses that a web2.0 startup can survive with a very simple internet provision but will certainly need a plan to deal with growth – wasn’t Twitter down most of last week due to traffic loads?

As for your final paragraph, that battle is a very different one and is aimed squarely at Cisco’s enterprise market. They have a virtual monopoly over the enterprise router market because 10-15 years ago they built a box that integrated all the different protocols such as IP/IPX/DECnet/AppleTalk/DLSw and simplified the network for an enterprise.

Time has moved on and I think the enterprise market is looking for a new router that integrates all the latest developments such as VoIP, QoS, security, VLANs etc. Cisco may say they have a product, but I think there is a real opportunity for a startup to develop such a router – especially now as companies are looking to refresh and move to IP VPN networks.

joost

very cool post.

i guess it depends on who you are. if you are walmart you care about the details of your utility network in the same way that if you are youtube or flickr you care about the details of your network infrastructure.

however, if you just open a new retail boutique you would be crazy to care about the details of your utility providers, b/c you have more important things to care about that will make or break your business.

when things go well and you open, say, your tenth store, you may start looking into how you can save on utilities and make sure everything is reliable.

but all that said, i do think that generally speaking in-depth knowledge of network infrastructure matters a lot less today than it mattered 5 years ago for a web start-up. but i may be biased b/c we have been running a web start-up on a t-mobile hotspot network for almost a year already :)

Ed Byrne

Coming from a company that hosts many web 2.0 and SaaS applications, I know all too well the CTO who thinks hosting is a utility like electricity.

I believe that statement is true at the low end – but not when hosting a serious application. An example that comes to mind is a client who wanted full redundancy in their architecture, with everything fully load-balanced. Sounds standard – we will spec and build. Of course our network engineering team asked all the usual questions – but when it came down to going live, we discovered the application needed layer 7 load balancing, not the layer 4 our devices provide. A layer 7 load balancer is at least four times more expensive.

This is clearly a case where the client’s lack of understanding is going to have an impact on their deployment time as well as a financial impact.
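
A minimal sketch of the distinction Ed describes – not his actual devices, and the backend pool here is invented – is that a layer 4 balancer decides from connection-level facts alone, while a layer 7 balancer must parse the HTTP request first, which is what makes the hardware pricier:

```python
# Illustrative sketch only (hypothetical backend pool, not any real device):
# a layer 4 balancer picks a backend from connection-level facts such as the
# client IP; a layer 7 balancer parses the HTTP request first, e.g. to route
# on the URL path, which is why layer 7 devices do more work and cost more.
import hashlib

BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # invented pool

def pick_backend_l4(client_ip):
    # Layer 4: no knowledge of the payload; hash the client IP for stickiness.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

def pick_backend_l7(client_ip, http_request):
    # Layer 7: inspect the request line, e.g. "GET /api/users HTTP/1.1",
    # and pin application traffic to a dedicated backend.
    path = http_request.split()[1]
    if path.startswith("/api"):
        return BACKENDS[0]
    return pick_backend_l4(client_ip)

print(pick_backend_l4("203.0.113.7"))
print(pick_backend_l7("203.0.113.7", "GET /api/users HTTP/1.1"))
```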

John Furrier

Great post. The net has delivered a backbone of utility that has been amazing for startups. My prediction for successful web 2.0 companies: the innovation will come from mastery of network theory.

tomo

Allan,

I hope you didn’t invest in that company.

How could a CTO not have network and server infrastructure plans? He or she must not have been around in the ’90s, when boatloads of $$ were invested in the MSPs/ASPs of web1.0. So what if Amazon, Salesforce.com, hosted app providers and/or SaaS providers and their customers now actually have access to infrastructure which supports their requirements – you can’t just solve one of the biggest problems of the last ten years, network and infrastructure scaling, and walk away. Amazon doesn’t have 100% uptime. It and everyone else will go down. Don’t you need at least n+1 redundancy to be legit these days? What, is the CFO of that company developing its redundancy plans?

Even if you can plug in to electricity, network connections, computing/processing on demand, etc., you still need to know what you’re plugging into and whether it’s right for you.

Gaurav Chawla

Interesting. I don’t think the Internet is there yet. If the CTO of a web 2.0 startup doesn’t know what the OSI stack is or how her/his network works, that can be fine, as long as s/he is well aware of how important the network is to the business. Also, s/he had better have a strong VP or director of operations with a good network engineer on the IT team if the network matters.

Don MacAskill

For a brand-new startup that doesn’t know if it has a hit or a dud on its hands, it can certainly find out first and worry about scaling later.

But ask any so-called Web 2.0 company with scale, and they’ll tell you figuring out the server- and network-level scaling is both vital and difficult. Leaving it to someone else who doesn’t intimately know the details of your app is very risky and error-prone.

I’d say that a scenario like you described sounds like nirvana. I’d love to be there. But currently, it’s just a dream.

Oh, and you’ll also find that “electricity” is no longer a faceless utility but a very real problem for a popular web app. Power and heat densities (which is really another way of saying power and power :) are something we spend a lot of time thinking about and dealing with.

James

These days you can add a month’s use of a dedicated server for the cost of an hour’s worth of labor. Focus on what differentiates your product. In some businesses that’s the network infrastructure, but most startups are not in that business these days. That doesn’t mean you should ignore the design of your network or not have a plan to scale, but doing so at the expense of developing your core product is premature optimization. Make it work first, have a plan to scale, and focus on that plan when the time comes, not before. There’s a tipping point when it’s smart to optimize your application instead of doubling the number of servers needed to run it. Both extreme denial of the need to understand network optimization and extreme focus on network optimization are silly moves.
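
As a rough illustration of the tipping point James describes – every number below is invented for the sake of the arithmetic:

```python
# Toy model of the optimize-vs-add-servers tipping point; all figures are
# invented assumptions, not from the original comment.
SERVER_MONTH_COST = 150.0    # assumed monthly cost of one more server
ENGINEER_HOUR_COST = 150.0   # assumed loaded hourly rate for an engineer
OPTIMIZATION_HOURS = 80      # assumed one-time effort to optimize the app

def months_to_break_even(servers_saved):
    # Optimization pays off once the saved server-months cover the
    # one-time engineering effort.
    return (ENGINEER_HOUR_COST * OPTIMIZATION_HOURS) / (SERVER_MONTH_COST * servers_saved)

print(months_to_break_even(4))   # 20.0 months: just keep adding servers
print(months_to_break_even(40))  # 2.0 months: time to optimize
```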

Dan

It all depends on what scale they’re operating at. I had a small Web 2.0 startup that was acquired by a much larger company. When we were operating with ~1M pageviews of basic web stuff, it was all a service.

Now that I’m working on services that are constantly pumping out many Gb/s of data, it’s entirely different. I still mostly deal with the software architecture, but I certainly do talk to our ops folks on a regular basis. I make sure I know what’s up with everything from our various POPs and fiber to Netscalers and Netapps.

Beyond a few special deals we have, do I care about routing tables and the like? Not really; our providers handle most of that. It’s still useful knowledge, though, so we can make back-of-the-envelope guesses when designing new services.
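
That kind of back-of-the-envelope guess is simple arithmetic; here is a minimal sketch with invented numbers, not Dan’s real traffic figures:

```python
# Back-of-the-envelope egress estimate; both inputs are invented for
# illustration only.
requests_per_sec = 2000            # assumed peak request rate
avg_response_bytes = 250 * 1024    # assumed 250 KB average response

bits_per_sec = requests_per_sec * avg_response_bytes * 8
print("Peak egress: %.2f Gb/s" % (bits_per_sec / 1e9))  # ~4.10 Gb/s
```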

Is this CTO implicitly saying he doesn’t think he’ll get significant usage?

Jim

Has the internet reached a level of abstraction comparable to plugging into an electrical socket?

If one were a “CTO”, I would hope the answer is no. A CTO really should be aware of not only the application his/her company is providing but also the pipes that connect that application to the outside world. Does this mean knowledge of things down at the packet level? No. But if something goes wrong, I would hope that the CTO knows where to start troubleshooting. This includes attacks (security), performance (servers/bandwidth), and overall availability (all of the above).

Titles are thrown around a little too easily in this day and age. However, for large-scale deployments, knowing what is going on from the application down to the IP packets is critical to making sure one’s application is on a stable infrastructure.

Jesse Kopelman

The problem with electricity is that maybe we take it a little too much for granted. Lack of choice, ever-increasing costs, rolling blackouts – anyone? Is this really what we want for the Internet?

Mike D.

I tend to agree with you about what lies outside the rack being largely irrelevant to most upstart web application/service providers, but I wouldn’t go so far as to throw load balancers, switches, and other technologies clearly within the control of the company into that mix.

Do I care about the internals of how my co-lo communicates with the outside world? Probably not. But do I care how my Netscaler distributes traffic between all of my own servers? Hell yes. Some people may do without a traditional load balancer in favor of round-robin DNS (WordPress.com and Bloglines do this), but even in that case, you need to know what it does and how it works.
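
A minimal sketch of what round-robin DNS amounts to (the hostname below is a placeholder): the zone publishes several A records for one name, each lookup returns the list (typically rotated), and clients scatter themselves across the servers with no balancer in the path.

```python
# Sketch: resolve a name that publishes multiple A records. Round-robin DNS
# relies on the resolver rotating this list between lookups, so naive
# clients that take the first entry end up spread across the pool.
# The hostname is a placeholder, not a real round-robin deployment.
import socket

name, aliases, addresses = socket.gethostbyname_ex("www.example.com")
print(addresses)       # e.g. ['192.0.2.10', '192.0.2.11', '192.0.2.12']
server = addresses[0]  # a naive client just takes the first entry
print("Connecting to", server)
```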

The other option, of course, is running everything on one machine, which a low-volume company can get away with, but you almost don’t even need a CTO for that sort of company. Or you can use managed hosting, which is extremely expensive.

I guess my point is, I would separate pools of knowledge into “outside the rack” and “inside the rack”. Does outside the rack knowledge matter a lot less now? Sure. Inside the rack? Probably not.

Sean

The internet is like electricity – you need to understand it well once your needs scale. The same is true for heating, hardware failure rates and disk throughput. Most companies will never get to the critical mass where these are even an issue (it’s amazing what you can do with even just one server these days). However, in my experience, once you do reach that point, good network engineers are hard to find.

namhanoi

If the company is at the concept-building stage, the CTO may focus more on the application technology, like J2EE or PHP or Rails. Once they have beta users and really have to care about operations, networking knowledge is a must.

John Koetsier

Interesting post. I tend to agree with the CTO … if they’re plugging into something like Amazon’s server services.

If not, however, they better know something about the mechanics of how their solution will be delivered. Even a trucking company cares about the road conditions, which routes are faster, and so on.

Isaac

Internet business is highly layered today, and Web 2.0 qualifies as an application layer that really doesn’t need to worry much about the infrastructure. These companies can start from a commodity hosting plan as their “electricity”. However, as they grow, with a rising number of users and the social connections joining those users, they have to pay more attention to peak network load – like the problem Friendster had two years ago.
