Summary:

In this part of our special report on reinventing the internet, we look at the internet as a shared global resource — in a perfect world, that would mean international cooperation to keep it safe and secure.

In a perfect world, the internet would be a secure place to conduct personal and business affairs. Of course, as we know ever more painfully, it is not.

There are many reasons for that, from the mundane (there are bugs) to the existential (internet business models are built around surveillance and insecurity). Here are a few principles that I reckon would be inherent to the perfect internet and that could, with foresight and international cooperation, help brighten the future of our own imperfect version.

The fundamental idea here is that personal online security benefits everyone; well, almost everyone. It’s certainly not compatible with the NSA-style idea that personal insecurity protects the wider populace. Putting these measures in place wouldn’t be easy, and it would be unpopular in some quarters, but I think it would certainly be worth trying.

Responsible disclosure

Everyone shares one internet, with tools and technologies just as relevant in Kansas as they are in Kowloon. The discovery of software and hardware flaws may give spy agencies a handy tool for surveillance or attack, as long as no-one else knows about them. However, there is no way for these agencies to be sure that their rivals, or private-sector criminals, are not also aware of and exploiting these weaponized vulnerabilities. Lack of disclosure therefore threatens the citizens and businesses of the agency's own country, posing risks that outweigh any benefit.

There should be mandatory disclosure, within a short set period of time (a week at most), of any flaws discovered. Subsequent public disclosure would rarely be immediate, because the relevant vendors should have time to fix the problem, but strict limitations should avoid the sort of situation alleged to have taken place with the NSA and Heartbleed. The U.S.’s CERT policy, which puts a 45-day outer limit on public disclosure, could serve as a model.
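
To make the timing concrete, here is a rough sketch of how such deadlines might be computed. The one-week vendor window and the 45-day public cap come from the figures above; the function and field names are my own invention, not part of any actual policy.

```python
from datetime import date, timedelta

# Hypothetical limits modeled on the figures above: vendors alerted
# within a week of discovery, public disclosure within 45 days
# (the CERT outer limit).
VENDOR_NOTIFY_LIMIT = timedelta(days=7)
PUBLIC_DISCLOSURE_LIMIT = timedelta(days=45)

def disclosure_deadlines(discovered: date) -> dict:
    """Return the key dates for a flaw discovered on the given date."""
    return {
        "notify_vendor_by": discovered + VENDOR_NOTIFY_LIMIT,
        "publish_by": discovered + PUBLIC_DISCLOSURE_LIMIT,
    }

deadlines = disclosure_deadlines(date(2014, 4, 7))
print(deadlines["notify_vendor_by"])  # 2014-04-14
print(deadlines["publish_by"])        # 2014-05-22
```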

A neutral body such as the ITU should administer the disclosure scheme, monitoring compliance around the initial quiet-tap-on-the-shoulder stage and ensuring the transparency of subsequent public disclosures. The ITU already coordinates international “cybersecurity” efforts, and the UN agency is as good a place as any to tackle disclosure coordination.

Similarly, companies operating internationally over the internet should be both able and encouraged to disclose any significant breach of their systems through a centralized platform. National regulators are already trying to institute this sort of thing, and their international collaboration should be stepped up to ensure timeliness.

Realistically, enforcement of all this would have to be handled at a national level, with a series of global treaties trying to achieve a sufficient degree of legislative harmonization. In this sense, online security should be treated like the climate and nukes – everyone has their own interests, but must reckon with the fact that the fallout is shared.

Audit everything

Any reputable vendor should regularly and thoroughly audit its own code anyway (cough cough), but where there is no particular vendor of a broadly-used web technology — as in the case of OpenSSL and certain other open source and community projects — there should be centralized and neutral sponsorship for an open and transparent quality assurance scheme.

This scheme should be funded by all countries and administered by the ITU or perhaps a standards-setting body like the IETF or the W3C. It should not be expensive, particularly when taking into consideration the public costs of dealing with attacks.

Such technology often gains widespread use because it is free and battle-hardened. However, its community-derived nature sometimes results in code sprawl, which makes auditing harder and therefore less frequent. The fact that open source code can be audited by anyone does not mean it is, and this omission, when it occurs, is largely a function of resource scarcity. The simplest solution would be to pay people to quickly find flaws and fix them, for the good of everyone.

There would need to be close coordination between this auditing body and the communities behind these open-source products, who would ideally end up keeping a healthy eye on one another. Such is the potential of open code.

Encrypt everything

On a technical level, the big lesson of the Snowden affair is that everything should be encrypted where possible. Of course, this clashes with many current business models that are predicated on data-mining without the subject’s ongoing and active knowledge and consent.

However, security breeds trust, and no-one should underestimate the impact that widespread mistrust of the internet would have on its future growth. Pervasive encryption also runs counter to the wishes of many governments, but again they should bear in mind the security of their citizens and businesses against outside attackers.

The IETF’s HTTP Working Group is already trying to ensure that open web use will become encrypted by default. The IETF and others are also now focused on improving the usability of online security — utterly necessary if it is to become widely deployed — and on encouraging standards-setters to think about security from the start. This is an excellent start, as are the “privacy by design” principles.
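
As a toy illustration of what “encrypted by default” looks like at the server end, here is a minimal Python sketch of a web server that speaks only HTTPS and sends an HSTS header telling browsers to insist on encryption on every future visit. The certificate file names are placeholders, and a real deployment would need far more care.

```python
import http.server
import ssl

class SecureHandler(http.server.SimpleHTTPRequestHandler):
    def end_headers(self):
        # HSTS: instruct browsers to refuse plain HTTP to this host
        # for the next year.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        super().end_headers()

httpd = http.server.HTTPServer(("", 8443), SecureHandler)

# Placeholder certificate and key files; a real server would use a
# certificate signed by a trusted CA.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("cert.pem", "key.pem")
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

httpd.serve_forever()
```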

Web firms such as Google have responded to the NSA revelations by shielding the data passing into and between their data centers, but they still don’t provide true end-to-end encryption, because their business models require them to be able to snoop on what their customers are doing. They are also highly resistant to any barriers or strictures that could impede their data flow. The problem is that if Google (for example) can see it, Google can offer it up to a law enforcement or intelligence agency that asks for it. You may say that’s fine, but it really depends on who we’re talking about, and in which country.
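
The distinction is easy to see in code. With true end-to-end encryption the key never leaves the user’s device, so the provider stores only ciphertext and has nothing readable to hand over. Here is a minimal sketch using the third-party Python cryptography package, with a hypothetical upload step:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key is generated and kept on the user's device;
# the provider never sees it.
key = Fernet.generate_key()

message = b"meet me at noon"
ciphertext = Fernet(key).encrypt(message)

# upload(ciphertext)  # hypothetical: the provider stores bytes it
#                     # cannot read, so a subpoena yields only noise.

# Only a device holding the key can recover the plaintext.
assert Fernet(key).decrypt(ciphertext) == message
```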

This is where, in the interests of privacy and security, strong new data protection laws should step in – not necessarily to limit data flow, because that’s often practically impossible, but to make sure that users know exactly what is happening with their data. The companies themselves are moving in that direction to an extent, with their defiant decision to tell people when their data has been subpoenaed, but it also has to become clearer to people that (for example) what they say in their private Gmail emails will inform the content of the ads they see later.

This educational information should not be buried in mountains of small-print terms and conditions that no normal person reads, but up-front and in simple-to-understand terms, like nutritional information on a can of soda. And, in line with the new privacy rules Europe is bringing in, users should have to opt into having their data processed, based on a more informed perspective.

The difference between opting in and out is vast. Shifting from an opt-out to an opt-in model would certainly add friction to sign-up and update processes, and it would require a standardized template that people broadly understand, but it’s the only honest way to process people’s data.
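
In data-model terms, opting in simply means the default answer is no. Here is an illustrative sketch of what a consent record under such a template might look like; the field and function names are mine, not drawn from any actual regulation.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                          # e.g. "scan email to target ads"
    granted: bool = False                 # opt-in: silence means no
    granted_at: Optional[datetime] = None

def may_process(record: ConsentRecord) -> bool:
    # Processing is allowed only after an explicit, informed "yes";
    # the absence of a decision blocks it.
    return record.granted and record.granted_at is not None

record = ConsentRecord("u123", "scan email to target ads")
assert not may_process(record)  # nothing is harvested by default
```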

Again, honesty and empowerment breed trust and sustainable growth – and because this shift would discourage current monetization models, it may lead to new models that don’t potentially serve up people’s innermost thoughts to spies, cops and hackers on a silver platter. It would effectively minimize the data people generate to what they as individuals find necessary.

Not all the elements of the new European data protection laws are sensible, incidentally, in particular the right for anyone to have data about them deleted. While the European legislation does include exceptions for public interest and journalism, and while the concept is desirable, it is incredibly difficult to enforce in practical terms. Data gets copied and sent all over the place, and keeping track of this is impossible without instituting a metadata framework that would utterly destroy privacy.

Privacy-friendly principles and evolutionary rules

A crucial aspect of the ideal security and privacy framework would be its acceptance of the fact that times change. While certain principles will likely remain applicable into the distant future, some practical aspects of policy that seem well calibrated right now, neither too narrow nor too broad, may look hopelessly out of date a decade down the line.

Therefore, data protection rules in particular should be revisited at least once a decade, with serious revisions probably being necessary every 15 years at most, to take into account the evolution of both technological advances and the resulting shifts in societal attitudes. If done right, this regular updating would also help regulators address the various loopholes that are found in any regulatory framework.

The core principles should ideally be enshrined in a global internet bill of rights, respected by countries and translated into national law as closely as possible. And here’s the overarching principle that should set the tone for the rest: the rights people enjoy offline should apply just as much online.

Just as governments should not have the right to arbitrarily keep and interrogate records of every person’s movements on the streets, they should not be able to do the same with virtual movements. Just as people should be free to hold a private conversation in a room without being bugged – outside of an active and targeted investigation – the same should apply online.

In general, mass surveillance should be outlawed. That it is the most efficient way to target individuals does not lessen its intrusion on everyone else whose data is collected. Even in non-totalitarian regimes, mass surveillance creates a framework that over-centralizes power and threatens democracy.

Many states are enthusiastically embracing mass surveillance, though, and they will continue to do so. Corporations will always want to know everything about you. The internet, as it stands now, is a giant spying machine. But that’s where most of the other steps listed above, already essential for personal and collective security, could prove so useful in the context of freedom.

The principles and mechanics of privacy and security inevitably converge. If you create truly secure systems and follow necessary data protection principles, you will frustrate not only the criminal hacker and foreign attacker, but also the spy and the over-inquisitive corporation.

In this scenario, almost everyone’s a winner. Where general security is compromised in the name of expediency and convenience, however, everyone will remain vulnerable.

Images from Lukiyanova Natalia/Shutterstock, Kamira/Shutterstock, Wavebreak Media/Thinkstock, BoscorelliArt/Thinkstock and F. Schmidt/Shutterstock. Banner image adapted from Hong Li/Thinkstock. Logos adapted from The Noun Project: Castor and Pollux, Antsey Design, Mister Pixel and Bjorn Andersson.

Comments

  1. Evelyn de Souza Wednesday, May 7, 2014

    Can’t wait for further installments! On the notion of informed consent, I think this also needs to apply to information that is used for third-party marketing. You have me inspired to start petitioning for a global bill of data privacy rights!

  2. Rufo Guerreschi Wednesday, May 7, 2014

    The likelihood that large quantities of mostly maliciously inserted critical vulnerabilities exist in the most widely used hardware components and their firmware is high enough that the international standards body you suggest should require verifiability and extreme actual verification, not only of all software but also of all firmware in critical hardware components, as well as oversight of the manufacturing, shipping and assembly processes of such components (similar to, but exceeding, the DoD Trusted Foundry Program).
    It can be done for single- or double-digit millions for a single extreme device platform, and we are doing it in an open way with the FSF and other global partners: User Verifiable Social Telematics.
