
Summary:

For the Internet of Things to achieve its full potential, Alex Salkever of Joyent believes that operators must fundamentally change the way they build and run clouds. In particular, they need to update the decades-old infrastructure technology and create more flexible APIs.


We are in the early stages of the Internet of Things, the much-anticipated era when all manner of devices can talk to each other and to intermediary services. But for this era to achieve its full potential, operators must fundamentally change the way they build and run clouds. Why? Machine-to-machine (M2M) interactions are far less failure tolerant than machine-to-human interactions. Yes, it sucks when your Netflix subscription goes dark in a big cloud outage, and it’s bad when your cloud provider loses user data. But it’s far worse when a fleet of trucks can no longer report its whereabouts to a central control system designed to regulate how long drivers can stay on the road without resting, or when all the lights in your building go out and the HVAC system dies on a hot day, all because of a cloud outage.

The current cloud infrastructure could crumble under the weight of the data

In the very near future, everything from banks of elevators to cell phones to city buses will either be subject to IP-connected control systems or use IP networks to report back critical information. IP addressability will become nearly ubiquitous. The sheer volume of data flowing through IP networks will mushroom. In a dedicated or co-located hardware world, that increase would result in prohibitively expensive hardware requirements. Thus, the cloud becomes the only viable option to affordably connect, track and manage the new Internet of Things.

In this new role, the cloud will have to step up its game to accommodate more exacting demands. The storage infrastructure and file systems that back up and form the backbone of the cloud are archaic, dating back 20 years. These systems may be familiar and comfortable for infrastructure providers. But over time, block-storage architectures that cannot provide instant (copy-on-write) snapshots of machine images will continue to be prone to all sorts of failures. Those failures will grow more pronounced in the M2M world, where a five-second failure could result in the loss of many millions of dollars’ worth of time-specific information.
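
The copy-on-write idea is easiest to see in a toy model. Below is a minimal conceptual sketch in Python (not any vendor's storage code, and not the system the article describes): a snapshot records references to the existing blocks rather than copying them, so taking it is effectively instant, and later writes to the live volume leave the snapshot's view intact.

    # Conceptual sketch only: why a copy-on-write snapshot is effectively
    # instant. The snapshot records references to existing blocks instead of
    # copying data; the live volume diverges only when a block is overwritten.

    class CowVolume:
        def __init__(self, blocks):
            self.blocks = list(blocks)      # references to block payloads

        def snapshot(self):
            # Copies references, not data, so it completes in a flash
            return list(self.blocks)

        def write(self, index, data):
            # Replace the reference in the live volume; any earlier snapshot
            # still points at the old block, so history is preserved.
            self.blocks[index] = data

    vol = CowVolume([b"boot", b"config", b"data"])
    snap = vol.snapshot()        # instant: nothing copied
    vol.write(2, b"data-v2")     # live volume changes
    assert snap[2] == b"data"    # snapshot still sees the original block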

API keys will need to be more flexible

The current API key infrastructure of the cloud cannot easily handle the sorts of critical and highly secure information flows required for true M2M communications. This architecture of public keys, for the most part, relies on third-party authorization schemes that make it very easy for bad actors to perpetrate a “man-in-the-middle” attack. (It’s for precisely this reason that we at Joyent designed our APIs to accept SSH keys, removing the middleman and locking down an API far more tightly.) These secure APIs not only need better hooks for user-specified authentication schemes (from SSH to LDAP to less secure mechanisms like OAuth), but they also need to be far more flexible and fast in order to support a much higher volume of transactions. That flexibility, in turn, is critical to mitigating the growing latency risks for mobile connectivity as IP-enabled devices proliferate wildly on mobile networks in the new era of the Internet of Things.
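
As a rough illustration of the request-signing approach (the header format, key path, and key id below are assumptions made for the sketch, not any particular provider's API), a client can prove possession of a local private key, the same kind used for SSH, instead of sending a shared secret with every call.

    # Hedged sketch: sign the Date header of an API request with a local
    # RSA private key (the kind commonly used as an SSH key), so no shared
    # secret ever travels over the wire. Names and formats are illustrative.
    import base64
    from email.utils import formatdate

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def signed_headers(key_path, key_id):
        with open(key_path, "rb") as f:
            private_key = serialization.load_pem_private_key(f.read(), password=None)

        date = formatdate(usegmt=True)          # signing the date limits replay
        signature = private_key.sign(
            ("date: " + date).encode(),
            padding.PKCS1v15(),
            hashes.SHA256(),
        )
        return {
            "Date": date,
            "Authorization": 'Signature keyId="%s",algorithm="rsa-sha256",signature="%s"'
                             % (key_id, base64.b64encode(signature).decode()),
        }

    # Hypothetical usage (paths, key id, and endpoint are invented):
    # headers = signed_headers("/home/me/.ssh/api_key.pem", "/myaccount/keys/laptop")
    # requests.get("https://api.example.com/machines", headers=headers)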

Let’s be clear: right now no one is putting truly mission-critical or “bet your life” applications in the cloud. But in the coming era of the Internet of Things, that is a near-guaranteed eventuality, whether through intentional or unintentional actions. As we build out the Internet of Things and slowly ease it first onto private clouds and later onto public clouds, we have no choice but to improve the core of the cloud or risk catastrophic consequences from failures. Because on the Internet of Things, no one can blame a failure on user error and simply ask a hotel air conditioner, an airplane, or a bank of traffic lights to restart their virtual server on the fly and reset their machine image.

Alex Salkever is Director of Product Marketing at Joyent Cloud (@Joyent). He was formerly a technology editor at BusinessWeek.com.

Image courtesy of Flickr user karindalziel.

  1. Andrew Zolnai Sunday, October 9, 2011

    You meant: long HAUL drivers? Great piece!

  2. (My comments may only be as unfair as the article, but imo fear-mongering won’t help)

    It may not be impossible to imagine an aircraft flight plan being on the cloud, and the aircraft failing to get the flight plan because the cloud was overloaded with people wanting to play back a hot episode of a fav soap or what have you. But is this the “internet of things” and “cloud computing” that we want, or is this merely a stretch of the imagination?

    I expect that many (types of) “Cloud Service Providers” will emerge, and each will need to specify their Quality of Service (QoS), in much the same way as telcos have had to do. I can very well imagine that telcos faced the same challenge a long while ago, when, for example, railway signals were automated. Banks have gone through this route earlier too, moving away from manual to automated systems. In general, networked systems (which we now take for granted) have all had to meet resilience requirements. The Internet of Things is likely to take this further, but we are not starting from scratch.

  3. Joao de Oliveira Monday, October 10, 2011

    Economies of scale can be attractive propositions, and certainly the cloud offers one such economy. Its principal attraction is its economic model, which is revolutionary: pay-per-use is only made possible when scale exists to the extent that the question is no longer how much capacity you require but how much capacity you want at any one time. However, its Achilles’ heel is that the cost of failure to the general economy is much greater than in a conventional set-up, where the maximum damage that can be done to the general economy is the damage that failure has on a specific client. When all clients rely on the same backbone infrastructure, a failure there will affect them all equally. This raises the stakes to a much greater degree. For this reason cloud adoption ought to be a government-regulated industry, as the risks of failure promise to have a huge impact on national governments once adoption reaches serious levels.

    1. Joao, I have to respond to your reasoning that this should be a government-regulated industry. If you ‘truly’ look at governments, not just here in the United States but overseas as well, I, for one, have valid concerns about the ability of government, any government, to provide the type of leadership where creative technologies can be put into a competitive environment where consumers, such as businesses, individuals and governments, can select which technology works best. Our current internet ‘may’ have been better if the government had controlled everything from the beginning, but I doubt it. I believe that ‘cloud’ technology developed by individuals working in a commercial, competitive world will be better, since those that ‘depend’ on secure systems will gravitate to the technology that works best.

      1. Joao de Oliveira Monday, October 10, 2011

        Thanks Luke. This is a thorny issue because it raises the question as to the role of government. The move towards de-regulation has resulted in some recent spectacular failures, in particular in the financial services industry. As I see it, when a national threat exists then government has a responsibility to step in and regulate. The recent collapse of the banking industry represented a national threat to many governments; it arose through lack of government control, and in the end the taxpayer was made to carry the can.

        My question is whether broad-based cloud adoption could place too much power into the hands of too few, with potentially disastrous consequences. Does this not represent an instance of putting all of one’s eggs into too few baskets? If my bank and my insurance company and my medical aid are all on the same cloud and it fails, would this not mean a failure of services from all three providers?

    2. Joao, it goes to each individual and company. You are correct in that government ‘must’ be involved to a degree. I’m just not sure I want them writing ‘all’ of the rules.

      The lack of government oversight in the financial industry, particularly in the housing sector, was brought on by ‘government’ that wanted the financial sector to expand housing opportunities to a broader base, so they re-wrote the rules. Why? Political gain for the ruling party! It didn’t matter what kind of financial capability the individual borrowers may have had at the time of borrowing, in addition to people stretching mortgages out to 30 and 40 years when ARM rates could dramatically change (God, I wish the government would ‘outlaw’ Adjustable Rate Mortgages). This opened up a lot of opportunities for companies to expand home borrowing even though red flags were out there. In this situation, all three (government, business and individuals) were at fault. I know this will sound awful, but I have no sympathy for individuals who purchased homes with ARMs and long-term mortgages who couldn’t afford the homes in the first place.

      The same could be said for your example concerning cloud failure of your banking, insurance and medical aid. Each has to keep their eye on ‘how’ mission critical information is maintained. Whether the ‘cloud’ is regulated by government or industry, there are real potential problems that must be addressed. But in the end, it is the consumers, like you and me, that will suffer the consequences of ‘cloud’ failures.

      For one, I still receive bank statements and all financial dealings by mail as I simply do not trust having everything in electronic form or ‘virtual clouds’. I hate that it is not ‘environmentally friendly’ but that is my land-based ‘physical cloud’.

      I have worked in the computer industry most of my life so I know enough to understand, you can’t put all of your trust into bits and pieces of electricity. :)

  4. Let’s face it. Do we ‘really’ want to put ‘our’ faith in central ‘cloud’ controls? Sure, distributed controls are wasteful to a degree but the failure of one ‘normally’ isn’t catastrophic. With the security breaches at ‘all’ levels of business and government, I have ‘real’ concerns on our ability to provide a ‘totally’ secure system. It’s the nature of things that anything man-made is prone to failure. I do see where ‘clouds’ can be a valid business and organizational tool but this potential future of everything being controlled in ‘cloud’ technology gives me the ‘willies’, to coin an old phrase from my Grandmother. FYI, I’ve been in the computer industry since 1972 so I do know a thing or two about it. :)

  5. This whole cloud scenario reminds me of a decade or so ago when “thin clients” were the talk of the town…

    As we continue to learn from the cloud, I’m sure we’ll figure out what works and what doesn’t, but for the carriers this is a huge opportunity to take advantage of what content is delivered on their respective networks, and models for “data” will become ones that the carriers can monetize based on what they serve. Right now, data plans

  6. quicloud, LLC Monday, October 10, 2011

    “Right now no one is putting truly mission critical or “bet your life” applications in the cloud”. Huh? Netflix, Reddit, Foursquare, Hootsuite, Heroku and many others have all or most of their apps hosted at Cloud providers. Most of our clients put *everything* in the Cloud. For a business starting today, hosting it all in the Cloud is the only thing that makes sense.

    What is totally missed by this article is that these outages you speak of or allude to are, 99% of the time, just due to bad design. These sites would’ve gone down on their OWN infrastructure as well, because they did not design for High Availability.

    The Cloud makes HA a whole lot easier than non-cloud, but it’s just like any other tool: you have to both understand it and use it properly.
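
    A minimal sketch of that idea, with invented endpoints (none of these are real services): run the same service in independent zones and let the client move on when one of them is down.

        # Illustrative only: spread a service across independent failure
        # domains and fail over on error, instead of trusting one zone.
        import urllib.request
        import urllib.error

        ENDPOINTS = [
            "https://us-east.example.com/status",   # primary zone
            "https://us-west.example.com/status",   # independent zone
            "https://eu-west.example.com/status",   # different region/provider
        ]

        def fetch_with_failover():
            last_error = None
            for endpoint in ENDPOINTS:
                try:
                    with urllib.request.urlopen(endpoint, timeout=2) as resp:
                        return resp.read()
                except (urllib.error.URLError, OSError) as exc:
                    last_error = exc        # this zone is down; try the next
            raise RuntimeError("all endpoints failed: %s" % last_error)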

    1. Alex Salkever Monday, October 10, 2011

      “Bet your life” means that if this app fails, people will / could / might die. No one is doing that sort of thing right now. Are people putting mission-critical production apps in the cloud? Sure, of course. But the downside there is only economic. With M2M, it becomes different, because inevitably the devices that control our world (and not just our e-commerce) have some control over our lives.

  7. Um, Amazon has been running its mission-critical order processing infrastructure on its cloud for several years now. Precisely because the cloud is more fault-tolerant and scalable…

  8. George Birbilis Tuesday, October 11, 2011

    Come on, no talk about denial-of-service, etc.?

  9. tip for the gigaom/cloud crowd:

    http://owncloud.org/announcement

    owncloud 2.0 – built into opensuse 12.1:

    https://features.opensuse.org/312726

