There’s a famous adage in programming that goes “Nothing is more dangerous than an idea when it is the only one you have”. Thanks to Moore’s Law, the cost of a 32-bit microprocessor is on par with that of an 8-bit one. Given this cost parity, engineers tasked with creating a connected product often start the design process by deciding which Linux distribution to use and which 32-bit chip to use it on. The unquestioned assumption here is that bigger is better. But is embedding a general-purpose computer into every household device the only way to build the internet of things?
By their very nature, general-purpose computers are extremely complex. Even the most lightweight Linux embedded system contains hundreds of thousands of lines of code. It’s impossible to root out every potential critical failing in these systems, which is why we occasionally need to reboot our laptops and smartphones.
Although inconvenient, we’re generally OK with this tradeoff. But are we prepared to make the same tradeoff for household devices? Is it acceptable for a refrigerator to hang? How about a smoke detector? On the business side, a single customer support call can wipe out a manufacturer’s profit on that unit. Complexity in this case is unacceptable for both consumer and business.
Communication protocols are another crucial design decision that many engineers fail to examine critically. The architecture of the internet doesn’t translate well to the world of things.
Think about the behavior of our most common internet applications: media streaming, web browsing, P2P transfers, financial transactions, email. All involve long, continuous streams of large amounts of data, sent in real time with high accuracy requirements. Our most celebrated internet protocols (TCP/IP, HTTP, SSL, etc.) are celebrated precisely because they handle these forms of activity well. Yet connected things behave entirely differently.
We’re talking millions, possibly billions, of machines sending small bursts of (relatively) low-priority data 24/7. Take TCP and SSL: if you’re shipping 10,000 units or fewer, there’s no problem. But if you’re shipping over 100,000 units a year, a connected-product architecture built on TCP and SSL becomes prohibitively expensive at the data center level. A number of high-profile projects by some of the top names in electronics and computing have quietly stalled because of this misstep.
A new internet for the internet of things
What would you do if you had the chance to reinvent the internet to support the connection of low-cost, simple-to-use things? To reduce device complexity, you first have to push as much complexity as possible out of the device and into the cloud.
For example, pretend you are building a smart thermostat akin to Nest, which learns your routines over time and creates custom-tailored schedules from them. You’ve got two options: store the behavior data and the learning algorithms in the device, or store them in the cloud. You can only go so far with the first approach; sooner or later you will be constrained by storage, processing power or cost. Furthermore, you’ll need to charge $250 for the device, which makes it hard to break out as more than a niche product.
Alternatively, you can put the computation in the cloud, and then use the data feed between your product and the cloud to emulate the same behavior. Whenever the user locally or remotely adjusts the temperature, that event is stored in the cloud. Once the cloud has enough data, it constructs a schedule for that user and sends each individual temperature update down to the device at the appropriate time. For users, the end result is the same, yet the second approach is dramatically simpler for manufacturers and enables them to sell the product for $100 or less (right on par with non-connected thermostats).
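To make the division of labor concrete, here is a minimal sketch of what the cloud side of that loop might look like. All names (`CloudScheduleLearner`, `record_adjustment`, `build_schedule`) are hypothetical illustrations, not Arrayent’s or Nest’s actual design: the cloud logs each adjustment event and, once it has enough data, derives a per-hour schedule to push back to a very simple device.

```python
from collections import defaultdict

class CloudScheduleLearner:
    """Hypothetical sketch of the cloud-side approach: store each
    user adjustment as an event, then derive a schedule from the
    accumulated behavior data -- the device itself stays dumb."""

    def __init__(self, min_events=3):
        self.min_events = min_events
        self.events = defaultdict(list)  # hour of day -> list of set temps

    def record_adjustment(self, hour, temp_f):
        # Every local or remote temperature adjustment lands here.
        self.events[hour].append(temp_f)

    def build_schedule(self):
        # Once enough events exist, average each hour's settings into a
        # schedule; the cloud then sends each temperature update down to
        # the device at the appropriate time.
        total = sum(len(temps) for temps in self.events.values())
        if total < self.min_events:
            return None  # not enough behavior data yet
        return {hour: sum(temps) / len(temps)
                for hour, temps in self.events.items()}

learner = CloudScheduleLearner()
learner.record_adjustment(7, 70)   # user warms the house at 7 a.m.
learner.record_adjustment(7, 72)
learner.record_adjustment(23, 64)  # and cools it at bedtime
schedule = learner.build_schedule()
```

The device never stores history or runs a learning algorithm; it only reports adjustments upstream and applies whatever setpoint the cloud sends down.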
The sky’s the limit with the cloud-based approach. What’s to stop you from integrating that thermostat into a demand-response smart grid? Or even doing away with the expensive display and scheduling controls on the chassis, replacing them with simple “warmer” and “cooler” buttons? The local device could then return to being a simple temperature sensor and switch, harking back to the classic Honeywell T87 mercury-switch thermostat, which was not only cheap but easy to use. Now we can make the T87 smart, delivering a true set-and-forget experience and saving you a few dollars in the process.
Simple things need simple protocols
As for networking complexity, sending data over UDP (encrypting messages with 128-bit AES) rather than TCP keeps the cloud data center scalable and efficient. TCP requires a complicated session-setup procedure before any data can be sent between device and server (useful for web browsing or file downloads, but completely unnecessary for connected devices), whereas UDP lets devices simply send and receive data unannounced, dramatically reducing the data center footprint required by millions of connected devices.
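The contrast is easy to see in code. In this sketch a “device” reports a reading to a “data center” socket over UDP: there is no connect call and no session state on either side, just a datagram addressed to the server. The payload and addresses here are illustrative; a real deployment would also encrypt the payload (e.g. with 128-bit AES) before sending, which is omitted for brevity.

```python
import socket

# "Data center" side: a single UDP socket can field datagrams from
# any number of devices, with no per-device connection state.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))       # OS picks a free port
server_addr = server.getsockname()

# "Device" side: no handshake, no session setup -- just send.
device = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
device.sendto(b"temp=71F", server_addr)  # hypothetical sensor payload

payload, device_addr = server.recvfrom(1024)

device.close()
server.close()
```

With TCP, the equivalent exchange would require a listening socket, an accepted connection, and kernel state held for every one of those millions of devices; here the server holds state for none of them.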
The downside of cloud connectivity, of course, is the ongoing operating cost of power, bandwidth, equipment and tech support in the data center. The path to minimizing that cost is a linearly scalable communication-switch and database software architecture.
Such a modern cloud communications architecture dramatically reduces the cost per connected device, from dollars per month (e.g. cellular M2M) down to pennies per month. That extremely low operating cost makes Facebook-like and Google-like business models practical for connected-device manufacturers. In other words, the convenience consumers get from connected products can translate into more profit for the brands that make them.
In the end, the internet of things isn’t about cramming computers into places where they don’t belong, but about doing more with a mere eight bits than anyone ever thought possible.
Shane Dyer is president and founder of Arrayent, a company that builds platforms for the internet of things. This essay is part of a package exploring the topics that will be discussed at GigaOM’s Mobilize conference Oct. 16 and 17.