How the internet’s engineers are fighting mass surveillance


The Internet Engineering Task Force has played down suggestions that the NSA is weakening the security of the internet through the IETF's own standardization processes, and insists that the open nature of those processes will result in better online privacy for all.

After the Snowden documents dropped in mid-2013, the IETF said it was going to do something about mass surveillance. After all, the internet technology standards body is one of the groups best placed to do so – and a year and a half after the NSA contractor lifted the lid on the activities of the NSA and its international partners, it looks like real progress is being made.

Here’s a rundown on why the IETF is confident that the NSA can’t derail those efforts — and what exactly it is that the group is doing to enhance online security.

Defensive stance

The IETF doesn’t have members as such, only participants from a huge variety of companies and other organizations that have an interest in the way the internet develops. Adoption of its standards is voluntary and as a result sometimes patchy, but they are used – this is a key forum for the standardization of WebRTC and the internet of things, for example, and the place where the IPv6 communications protocol was born. And security is now a very high priority across many of these disparate strands.

With trust in the internet having been severely shaken by Snowden’s revelations, the battle is back on. In May this year, the IETF published a “best practice” document stating baldly that “pervasive monitoring is an attack.” Stephen Farrell, one of the document’s co-authors and one of the two IETF Security Area Directors, explained to me that this new stance meant focusing on embedding security in a variety of different projects that the IETF is working on.

As IETF chair Jari Arkko put it:

I think a lot of the emphasis today is on trying to make security a little more widely deployed, not just for special banking applications or websites where you provide your credit card number, but as a more general tool that is used for all communications, because we are communicating in insecure environments in many cases — cafeteria hotspots and whatever else.

On Sunday, Germany’s Der Spiegel published details of some of the efforts by the NSA and its partners – such as British signals intelligence agency GCHQ — to bypass internet security mechanisms, in some cases by trying to weaken encryption standards. The piece stated that NSA agents go to IETF meetings “to gather information but presumably also to influence the discussions there,” referring in particular to a GCHQ Wiki page that included a write-up of an IETF gathering in San Diego some years ago.

The report mentioned discussions around the formulation of emerging tools relating to the Session Initiation Protocol (SIP) used in internet telephony, specifically the GRUU extension and the SPEERMINT peering architecture, adding: “Additionally, new session policy extensions may improve our ability to passively target two sides communications by the incorporation of detailed called information being included with XML imbedded [sic] in SIP messages.”

Encryption

“The IETF meeting trip report mentioned in [the] Spiegel article reads like any boring old trip report, but is of course a bit spooky in that context,” Farrell told me by email (the piece came out after my initial interviews with Farrell and Arkko). “Hopefully intelligence agencies will someday realise that their efforts would be far better spent on improving internet security and privacy. In the meantime, their pervasive monitoring goals are part of the adversary model the IETF considers when developing protocols.”

Those open processes were apparently enough, around a year ago, to ensure the failure of a campaign to oust an NSA employee from the panel of an IETF working group that deals with cryptographic security. So, if its processes are to be trusted, what exactly can we expect from the IETF when it comes to combating mass surveillance by such agencies?

Fundamental rethink

Snowden’s revelations prompted a fundamental rethink within the IETF about what kind of security the internet should be aiming for overall. Specifically, the IETF is in the process of formalizing a concept called “opportunistic security” whereby — even if full end-to-end security isn’t practical for whatever reason — some security is now officially recognized as being better than nothing.

“One thing the IETF did wrong in the past is we tried to get you to be either ‘no security’ or ‘really fantastic security’,” Farrell explained. “Typically, until recently you had no choice but to run either no crypto or the full gold-plated stuff, and this slowed down the deployment of cryptographic security mechanisms. The idea of opportunistic security design is that, each time you make a connection, you’re willing to get the best security that you can for that connection.”

So, for example, a provider of a certain service may decide to turn on encryption even if they can’t authenticate the client device. As Farrell put it, these “in-between states are well defined now.”
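To make the idea concrete, here is a rough sketch of that pattern in Python. It is not anything the IETF specifies; it simply illustrates preferring authenticated encryption and settling for an unauthenticated encrypted channel when certificate verification fails. The host name and port are placeholders.

```python
import socket
import ssl

def opportunistic_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    """Best effort: authenticated TLS if possible, unauthenticated TLS otherwise."""
    try:
        # Preferred outcome: an encrypted channel plus certificate verification.
        ctx = ssl.create_default_context()
        return ctx.wrap_socket(socket.create_connection((host, port), timeout=10),
                               server_hostname=host)
    except ssl.SSLError:
        # The "in-between state": the peer could not be authenticated, but an
        # unauthenticated encrypted channel still blunts passive monitoring.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False       # skip name checking...
        ctx.verify_mode = ssl.CERT_NONE  # ...and certificate validation
        return ctx.wrap_socket(socket.create_connection((host, port), timeout=10),
                               server_hostname=host)
```

In practice an application would also want a cleartext fallback and a way to record which level it actually got; the point is simply that the choice is no longer all-or-nothing.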

He noted how web giants such as Facebook and Google have stepped up mail-server-to-mail-server encryption in the wake of Snowden. Facebook sends a lot of emails to its users and, according to Farrell, 90 percent of those are now encrypted between servers. Google has also done a lot of work to send encrypted mail to more providers. “This doesn’t prevent targeted attacks – man-in-the-middle is still possible in a lot of cases, but you can at least get halfway,” he said, adding that this may be enough to dampen pervasive surveillance.


Farrell noted:

My personal belief is that, if you get halfway, it’s much easier to get the second half. I’ve seen really large mail domains turn on the crypto, and some say they can’t see a change in CPU use. Now the next step is getting good certificates in place, getting good administration. It’s easier than going from zero to the end.
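The mechanism those mail providers lean on is SMTP's STARTTLS extension, which upgrades a plaintext connection to an encrypted one when the receiving server offers it. Here is a rough sketch of that opportunistic behavior using Python's standard smtplib; the mail server and addresses are placeholders, and real server-to-server deployments often tolerate unverified certificates rather than insisting on a valid one as this does.

```python
import smtplib
import ssl

def deliver(message: str, sender: str, recipient: str, mx_host: str) -> None:
    """Send a message, encrypting the hop when the receiving server allows it."""
    with smtplib.SMTP(mx_host, 25, timeout=30) as smtp:
        smtp.ehlo()
        if smtp.has_extn("starttls"):
            # The receiving server advertises STARTTLS: upgrade the session.
            smtp.starttls(context=ssl.create_default_context())
            smtp.ehlo()
        # If STARTTLS is not offered, the mail still goes out in cleartext.
        smtp.sendmail(sender, recipient, message)
```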

One experimental draft that Farrell is working on would see opportunistic security added to the Multiprotocol Label Switching (MPLS) transport mechanism used in core telecommunications networks, “just above the fiber.” This is some way off happening, if indeed it works out at all – it’s dealing with extremely high bitrates and would require implementation in hardware. But, as Farrell noted, it shows how the IETF is working on adding encryption to all layers of the stack.

“The MPLS issue will probably take years before we see progress, but when we do see progress it will have significant impact quickly,” he said. “One reason I understand people are interested in this is because it might be a direct mitigation for some of the fiber-tapping cases that have been reported. Even partial deployment could be quite significant.”

New versions

HTTP/2, currently being finalized by the IETF's HTTP working group, is on the way, and it will support the padding of traffic so as to make it harder for spies to draw inferences from packet size. This will mean adding a few bytes here and there, which may have an impact on latency if badly executed, so that's a challenge for both the IETF and the standard's implementers.
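HTTP/2 reserves an eight-bit pad-length field in DATA and HEADERS frames, so a sender can attach up to 255 bytes of zero padding; how much to add is left to implementers. The snippet below is a minimal illustration of one possible policy, rounding payloads up to a fixed bucket size; the bucket size is an arbitrary choice for illustration, not anything the spec mandates.

```python
def pad_to_bucket(payload: bytes, bucket: int = 256) -> bytes:
    """Round a frame payload up to the next multiple of `bucket` bytes.

    HTTP/2's Pad Length field is eight bits, so at most 255 bytes of
    (all-zero) padding can be attached to a single frame.
    """
    pad_len = min((-len(payload)) % bucket, 255)
    return payload + bytes(pad_len)  # bytes(n) is n zero bytes, as the spec requires

# A 70-byte response and a 200-byte response both leave the sender as 256 bytes,
# so their true sizes are harder to tell apart on the wire.
print(len(pad_to_bucket(b"x" * 70)), len(pad_to_bucket(b"x" * 200)))
```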

The IETF is also officially killing off RC4, a cipher used in the Transport Layer Security (TLS) protocol that supposedly provides the security behind the “https” you see denoting secure connections in web addresses. RC4 is now known to be vulnerable to attack. (For that matter, TLS’s security is also up for debate – Sunday’s Spiegel article suggested the NSA and GCHQ were able to decrypt TLS sessions by stealing their keys.)
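Implementers don't have to wait for the formal prohibition to drop the cipher. In Python's ssl module, for instance, an application can refuse RC4 outright; recent Python and OpenSSL builds already exclude it by default, so the explicit cipher string below just makes the policy visible.

```python
import ssl

# A client context that refuses RC4 (among other weak options).
ctx = ssl.create_default_context()
ctx.set_ciphers("HIGH:!RC4:!aNULL:!eNULL:!MD5")

# Any connection wrapped with this context will fail to negotiate rather
# than fall back to an RC4 cipher suite.
```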

Farrell noted that TLS 1.3 should be fully baked sometime in 2015; it will be faster and therefore more attractive to implement, and will incorporate heftier changes than previous iterations did. One planned change involves turning on encryption earlier in the “handshake” process, where the client and server exchange keys, so as to counter monitoring of the handshake's contents.

Meanwhile, a separate working group is trying to develop a new DNS Private Exchange (DPRIVE) mechanism to make DNS transactions – where someone enters a web address and a Domain Name System server translates it to a machine-friendly IP address – more private.
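One direction the DPRIVE work points toward is simply running DNS over a TLS connection instead of bare UDP. The sketch below shows roughly what that looks like on the wire, assuming a resolver that accepts TLS-wrapped DNS on port 853; the resolver address is only an example, and the hand-rolled query is deliberately minimal.

```python
import socket
import ssl
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in wire format (QTYPE 1 = A record, class IN)."""
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # ID, RD flag, 1 question
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.rstrip(".").split("."))
    return header + qname + b"\x00" + struct.pack("!HH", qtype, 1)

RESOLVER = "1.1.1.1"  # placeholder: any resolver offering DNS over TLS on port 853

query = build_query("example.com")
ctx = ssl.create_default_context()
with socket.create_connection((RESOLVER, 853), timeout=10) as raw:
    with ctx.wrap_socket(raw, server_hostname=RESOLVER) as tls:
        # Unlike plain UDP DNS, each message is prefixed with its two-byte length.
        tls.sendall(struct.pack("!H", len(query)) + query)
        resp_len = struct.unpack("!H", tls.recv(2))[0]
        print(tls.recv(resp_len)[:12].hex())  # header of the (now encrypted) reply
```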

The DNS case highlights one of the key problems the IETF must wrestle with: encrypting traffic can make it harder to perform network management operations that operators are used to carrying out. Carriers would find it harder to do load balancing if all DNS activity were secured, and, as Arkko pointed out, end-to-end encryption would interfere with things like caching. These problems are not easily overcome.

“We have to have some real thought go into this and understand what the trade-offs are,” Arkko said. “That is largely the debate we are having now.”

2 Comments

Ed Wood

Great, but it seems to me that a far greater threat than the NSA is to be found in the ability of international hackers to hack into and disrupt our electrical grids and other critical infrastructure. Let's protect the critical things first!

Richard Bennett

The most interesting part of the article is the last two grafs. Internet fundamentalists believe they’ve got an end-to-end architecture, so just bolt on end-to-end encryption and you’re done. But the reality is that the Internet is a collection of middleboxes such as NATs, CDNs, and caches, so E2E encryption pretty much slows it to a crawl at best and breaks lots of apps at worst.

This is going to be fun to watch.
