
Summary:

In discussions about the Internet of Things, the lines between the concepts of “security,” “privacy,” “data integrity” and “liability” quickly get blurred. To discuss these issues seriously, we must separate them out and clarify the most important issues that need to be addressed for each.

Security and privacy are a constant in every Internet of Things conference. In public institutions, security and privacy could be ranked as the number one concern. People are simply not comfortable with the idea of having 50 billion connected devices posing 50 billion potential threats.

But I’ve found that talks that start with the words security and privacy are soon blended with the concepts of data integrity and liability. I’d like to drill down into these concepts to clarify what each of them mean for the IoT and open the discussion on the most important issues.

Security

There was a time when having a Linux laptop meant being virus-free. Now you find news about malware like the Linux.Darlloz worm, which can infect home routers, security cameras, and other consumer devices connected to the Internet.

While encryption and authentication algorithms and protocols exist, they are usually expensive in terms of power and performance. Many IoT appliances and devices run Linux or other open-source code that may not be regularly updated and can therefore be vulnerable. Thus, the biggest challenge is not finding new security methods, but making sure that the new ultra-low-power devices connected to the Internet can be run and patched efficiently.

Privacy

I first heard about the Green Button project, an application that gives utility customers access to their energy usage information, at Connectivity Week in Santa Clara. I was terrified. The idea of opening your data to third-party companies triggered all my privacy alarms. What if someone used that data to track your habits to rob your house or kidnap your kids? I raised my hand at the conference session and the speaker listened to me patiently. After asking if I came from Europe, he simply asked, “Are any of those threats enabled because of this kind of initiative, or would they exist anyway?”

Maybe this particular example cannot be generalized, but we have to admit the speaker has a point. The Green Button project did not create the danger, and it isn’t promoting the use of personal information outside of tracking energy use. But we have to consider that privacy is also a cultural issue. In Europe the public utilities don’t have permission to disclose information on their subscribers. There, to protect your privacy, your phone bill doesn’t list the full phone numbers you called during the month. This data is not universally considered to be public information.

In the U.S. and in Europe, people don’t like being tracked in a shopping mall just to receive customized ads, but if data sharing gives them a benefit they value, they are all in. Waze users share their location in exchange for traffic congestion information. After all, in a world where everyone is already sharing their updates on Facebook and Twitter, our ideas about privacy will never be the same. The question is all about the trade-off, the quid pro quo.

Integrity

Many devices mean many data sources, some of them individuals, so we have to ask: to what extent can someone introduce spurious data? When citizens shared radiation levels in Japan after the Fukushima disaster, the crowd-sourced network served as both data source and control, because the more people contribute data, the sooner anomalies can be detected. This type of community sharing is not new; it is the same use case as a well-proven system we trust every day: Wikipedia.
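The self-correcting effect described above can be sketched with a simple robust statistic: the more readings the crowd contributes, the more clearly a spurious value stands out from the consensus. The example below is only an illustration (the function name, threshold, and sample values are mine, not from any project mentioned here); it flags readings whose modified z-score, based on the median absolute deviation, is unusually large.

```python
import statistics

def flag_anomalies(readings, threshold=3.5):
    """Flag readings far from the crowd consensus using a robust
    median/MAD score. Illustrative sketch, not a production method."""
    median = statistics.median(readings)
    mad = statistics.median(abs(r - median) for r in readings)
    if mad == 0:
        # All readings agree exactly; nothing stands out.
        return [False] * len(readings)
    # 0.6745 scales the MAD so the score is comparable to a z-score.
    return [abs(0.6745 * (r - median) / mad) > threshold for r in readings]

# With many contributors, one spurious radiation reading is obvious.
crowd = [0.12, 0.11, 0.13, 0.12, 0.10, 0.11, 9.50, 0.12]
print(flag_anomalies(crowd))
# → [False, False, False, False, False, False, True, False]
```

The median is used instead of the mean precisely because a single bad contributor should not be able to shift the consensus, which mirrors how a larger crowd makes injected data easier, not harder, to detect.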

When it comes to Smart Cities, one of the first questions our city council customers raise is how they can prevent unauthorized personnel from injecting bad data into the network. This is not surprising: compared to other IoT projects, Smart Cities potentially deal with the largest amount of data.

Liability

In the aftermath of Hurricane Sandy, there was a lot of discussion at an Internet of Things conference in Washington, DC, about using the IoT in disasters, such as in flood warning systems. Someone in the audience asked: if the sensor data were wrong, who would be liable? The sensor manufacturer, the device company, or the government?

This was the most pointed question on the topic I have ever heard. It’s too bad nobody in the room dared to answer; that silence showed how long a road we still have ahead of us.

At a recent IoT workshop held by the U.S. Federal Trade Commission, not even Vint Cerf, one of the fathers of the Internet, would venture an opinion on how best to legislate liability. Who is going to decide what factors should be taken into account? Who is liable in the case of error? There was a similar liability debate in the early days of the Internet: was the ISP or network provider responsible for malicious content on a site?

A gradual resolution

Privacy and security are not trivial subjects, but I know people who swore 20 years ago they wouldn’t carry a cell phone because it invaded their privacy and who now would never go anywhere without one in their pocket. Utility won over privacy. Again.

Along the same lines, we will get used to including security procedures in our routine, just as we have learned to update our computers by following an update wizard.

As the Internet of Things is built out and gains ground, we may even find new ways to implement privacy and security. The good thing is that this is now clearly a societal matter and governments will be forced to regulate these issues at last.

Alicia Asin is the co-founder and CEO of Libelium, a hardware provider for wireless sensor networks used in Smart Cities and Internet of Things projects. You can follow her on Twitter: @aliciaasin

  1. Thoughtful post that is greatly appreciated.

    When it comes to privacy, however, I don’t think individuals can make an informed determination of whether the benefits of a new technology outweigh its risks if they don’t fully understand those risks (the quid pro quo).

    Most users of these services don’t know who will have access to their information, for how long, or for what purposes. Many also don’t have an understanding of the power of metadata and algorithms (NSA notwithstanding), and how seemingly unimportant bits of information can be combined to create very detailed pictures of us and our behavior. While some companies do provide general information about the data they collect, I would submit that most simply focus on all the great benefits that their new whiz-bang service will offer.

    Without a sensible, unhyped source of information educating the consumer on the possible pitfalls of sharing so much of their data, I’m afraid most of us will continue to make decisions that we may one day regret.

