
Why the internet of things gives us a second chance to define digital trust and privacy

As we connect more things to the web and share more data, there hasn’t been a coherent effort to build trust and privacy into the internet of things. This is why a conversation I had with ARM’s CTO Mike Muller last month was so refreshing. He made a distinction between trust, privacy and security and explained what our inability to address trust and privacy might mean for the internet of things.

It’s a topic we’ll also explore with his boss, ARM CEO Simon Segars, at our Mobilize event in San Francisco in October. We’ll also have a security and regulatory panel, moderated by Adrian Turner of Mocana, discussing these issues. But first, let’s lay the groundwork, starting with trust and privacy.

Data is powerful, so who can you trust with it?

“We should think about trust as who has access to your data and what they can do with it,” Muller said, in explaining the best way to approach security and the internet of things. “For example, I’ll know where you bought something, when you bought it, how often and who did you tweet about it.” A simple transaction can gather a chain of data that can now be linked across a variety of locations and platforms.

“When you put the long tail of lots of bits of information and big data analytics associated with today’s applications we can discern a lot. And people are not thinking it through. … I think it’s the responsibility of the industry that, as people connect, to make them socially aware of what’s happening with their data and the methods that are in place to make connections between disparate sets of data. In the web that didn’t happen, and the sense of lost privacy proliferated and it’s all out there. People are trying to claw that back and implement privacy after the fact.”
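Muller’s point about connecting disparate data sets is easy to make concrete. Here is a minimal sketch — the records, field names and values are all invented for illustration — of how two unrelated data sets that share a single identifier can be joined into a richer profile than either holds on its own:

```python
# Illustrative only: two "disparate" data sets that happen to share one key.
purchases = [
    {"email": "alice@example.com", "item": "running shoes", "store": "downtown"},
]
tweets = [
    {"email": "alice@example.com", "text": "Loving my new shoes!"},
]

def link_records(purchases, tweets):
    """Join purchase history and social activity on a shared identifier."""
    by_email = {}
    for p in purchases:
        by_email.setdefault(p["email"], {"purchases": [], "tweets": []})
        by_email[p["email"]]["purchases"].append(p["item"])
    for t in tweets:
        if t["email"] in by_email:
            by_email[t["email"]]["tweets"].append(t["text"])
    return by_email

profile = link_records(purchases, tweets)
print(profile["alice@example.com"])
```

A real data broker would join on far weaker signals (device IDs, location, browsing habits), but the mechanics are no more complicated than this.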

And what troubles Muller is that today, there’s nothing that supports trust and privacy in the infrastructure associated with the internet of things. And of course, less focus on privacy can also impact security, making it easier to see when someone is vulnerable or upping the amount of damage someone can do.

But instead of a nuanced discussion of the topic, we tend to use security, trust and privacy interchangeably and wring our hands about the dangers of connectivity. That’s got to stop, so let’s break up these concepts and begin a discussion about how to tackle each problem.

Defining trust

Trust is the easiest to define and the hardest to implement. It relies on both transparency and consistent behavior. So if you announce that you are changing your terms of service to broadcast data that was once kept private, after promising users that you would never do that, you are being transparent but inconsistent.

That’s the equivalent of a friend telling you they are going to screw you over. It’s transparent, but it violates trust nonetheless, because screwing people over is inconsistent with being a friend.

When it comes to connected devices and apps, trust is probably most easily gained by explaining what you do with people’s data: what you share and with whom. It might also extend to promises about interoperability and supporting different platforms. Implicitly, trust with connected devices also means you will respect people’s privacy and follow best security practices. So let’s get to those.

Defining privacy

Privacy is more a construct of place as opposed to something associated with a specific device. So a connected camera on a public street is different from a connected camera inside your home. It’s easy to say that people shouldn’t be able to just grab a feed from inside your home — either from a malicious hack or the government (or a business) doing a random data scrape. But when it comes to newer connected devices like wearables it gets even more murky: Consider that something like a smart meter can share information about the user to someone who knows what to look for.
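The smart-meter example is worth spelling out. This sketch uses entirely invented hourly readings and an arbitrary threshold, but it shows how little analysis an observer needs to guess when a home is occupied from electricity data alone:

```python
# Hypothetical hourly meter readings in kWh for one day (invented data).
readings = [0.2, 0.2, 0.2, 0.2, 0.2, 0.3, 1.5, 2.0,
            0.3, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
            0.3, 1.8, 2.2, 2.5, 1.9, 1.0, 0.4, 0.2]

# Usage well above the overnight baseline hints that someone is home and
# active; the threshold here is an arbitrary illustration, not a real model.
BASELINE_KWH = 0.5

occupied_hours = [hour for hour, kwh in enumerate(readings)
                  if kwh > BASELINE_KWH]
print(occupied_hours)  # the hours that look "occupied" to an observer
```

A morning spike and an evening spike are enough to infer a commuter’s schedule — no camera or microphone required.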

So when thinking about the internet of things and privacy, it’s probably useful to start with thinking about the data the device generates. Muller offered a good analogy, comparing the connected individual to a music publisher.

“You have to think of yourself as Universal Music,” he said. “Your data is your catalog and that catalog is valuable in a variety of formats.” Of course, that implies the user is in control of their data, which is true for some places but not all. It also means the user needs to take responsibility for understanding what happens with their data and the function and implications of the devices they bring inside their houses.

Privacy isn’t a given, and it must be protected

There’s also the problem of your privacy being violated outright, as was the case with Google sniffing out Wi-Fi passwords or the Renew system grabbing the MAC addresses off the phones of passersby in London in order to show them more relevant ads.
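The Renew case boils down to remarkably little code. This sketch (the MAC addresses are invented) shows how a sensor that merely logs the Wi-Fi MAC addresses it overhears can recognize repeat passersby without ever collecting a name or account:

```python
from collections import Counter

# Invented MAC addresses "seen" by a street-side sensor over several days.
sightings = [
    "aa:bb:cc:11:22:33",
    "de:ad:be:ef:00:01",
    "aa:bb:cc:11:22:33",
    "aa:bb:cc:11:22:33",
]

# Counting sightings per device is enough to profile repeat visitors,
# even though nothing personally identifying is ever asked for.
visits = Counter(sightings)
repeat_visitors = {mac: n for mac, n in visits.items() if n > 1}
print(repeat_visitors)  # {'aa:bb:cc:11:22:33': 3}
```

The privacy harm comes not from any single observation but from the aggregation — which is exactly why this belongs in a discussion of infrastructure, not just device security.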

Protecting privacy when everything is connected will require laws that punish violations of people’s privacy and draw lines that companies and governments can’t step over; but it will also require vigilance from users. To get this right, users should read the agreements they click through when they connect a device, and companies should make those agreements — especially around data sharing — transparent, in a way that inspires trust.

Governments and companies need to think about updating laws for a connected age and set criteria for how different types of data are transported and shared. Health data might still need HIPAA-level regulation, but looser standards might suffice for connected thermostats.

Either way, we need to stop freaking out about the dangers of connected devices and start having productive discussions about implementing trust and privacy before the internet of things goes the way of the web: wonderful, free and a total wild west when it comes to privacy.

9 Responses to “Why the internet of things gives us a second chance to define digital trust and privacy”

  1. W. David Stephenson

    Hi. I just blogged about this post:

    My big takeaway from Muller’s remarks is that IoT companies have an affirmative requirement not only to engineer in protections, but to actively earn users’ trust by communicating forthrightly about how the company collects user data and will use it. It’s not enough to include an agate-type legal disclaimer: they must prominently display their privacy policies right on the home page and make opt-in the default; the customer must control their data.

    I’ll moderate a panel on these critical issues at the IoT Summit in DC on October 1st. Hope you can attend!

  2. Stacey, very thoughtful piece, and I disagree with John Selden’s point that this is just a hyped term. I couldn’t agree more with the point that we have a chance to learn from the painful lessons of the web and define trust and privacy differently for the sensors that will vastly outnumber (by 25X, if Gartner is to be believed) the number of human users. If we have privacy issues with 2B users, imagine what 50B will do.

    It’s like we struggled as a small town, and now we’re going to be a city with far more information and even less human governance than we see today. This is a bigger issue than people are thinking about.

  3. Jon Neiditz

    One simple reason that the internet of things in the home is a good candidate for privacy protection is because US laws and norms do protect the privacy of the physical space of the home, unlike any place in cyberspace. See, e.g., Doc Searls’ latest Lakoff-inspired thoughts on digital privacy as a metaphor from the physical world, and

  4. MedicalQuack

    Furthers the point that we need an IT infrastructure set up to license and excise-tax ALL data sellers, banks and companies. It’s like the wild west out there now, and I’ve had my little campaign and a bunch of blog posts and emails to the FTC, as any privacy laws and bills are doomed to fail when there is no IT infrastructure path built to regulate them.

    We keep getting low-tech band-aids for high-tech problems because, as an example, 84% of the Senate still does all their work on paper instead of taking advantage of the digital capabilities the Senate has…the digital illiterates who won’t even help themselves get back a tool that would help them, the Office of Technology Assessment…they need it now more than ever.

    Same thing with the Consumer Protection agency…Cordray has to learn, and we have to wait and hope he learns something.

    • “84% of the Senate still does all their work on paper”
      Where do you get such a ridiculous number like that from? I worked in the Senate 10 years ago and we used computers for nearly all of our work – back then! Yes, you will very often find Senate staff going through paper copies of legislation but that is simply because it is easier to keep a desk reference to major laws that you look at multiple times a day.

      Please leave erroneous stats out of your arguments. Remember, 64% of all statistics are made up.

  5. John Selden

    Sorry to be a downer, but if I see one more title on this site with “internet of things” in it, I think I’ll scream. Seriously … scream.

    P.S. A few months ago, I would have said the same thing about “big data.”

    • It doesn’t bring me down. Unfortunately, like big data, the internet of things is describing a technological shift, which means it’s the shorthand for a set of technologies, changes in cost-structure and emerging business opportunities brought about by the first two. So ideally writers will accurately describe which aspects of that shift they are writing about and get specific, which should ease your consternation.