One reason we form governments is to protect our communities. At the same time, our economy and human rights depend on private and encrypted online services. How do we move forward when these two agendas clash?
What’s prompting this post
Following this week’s explosion at Manchester Arena, we in the UK are struggling to come to terms with the loss of children, the unsettling reminders of our vulnerability, and, in stark contrast, our communities coming together in the aftermath.
We are having the to-be-expected conversations about why this happened, what we can learn, and how we protect ourselves. We are reexamining what we expect of our government. It’s part of how we heal as a country, how we pick ourselves back up.
Some of the discussion inevitably turns to encryption and how terror plots are organised — in the UK, abroad; face to face, over the internet. Quickly we run into the encryption question: end-to-end encrypted services can’t be decrypted in between the users’ devices, which makes it difficult for authorities to identify a conspiracy.
Home Secretary Amber Rudd outlined the problem in her comments after the Westminster attack:
“It used to be that people would steam open envelopes, or just listen in on phones, when they wanted to find out what people were doing, legally, through warrantry — but in this situation we need to make sure that our intelligence services have the ability to get into situations like encrypted WhatsApp.”
We have seen this conversation come up again and again, during the debates over the Investigatory Powers Act in 2015 and the (ultimately dropped) Communications Data Bill of 2012. It also resurfaced a few weeks ago, after the Westminster attack.
It feels like a discussion at a stalemate; I’m seeing government asking for the problem to be solved, and technologists rolling their eyes at the implication that “government wants to outlaw maths.”
Having been on both sides of this discussion, I want to explain the miscommunications I see happening and outline the (few) options I think we have to proceed.
The source of the conflict
There are two conflicting pressures pushing us towards this impasse.
Problem 1: The democracy problem
In the UK, we ask our government (and pay our taxes for it) to keep us safe. We expect it to feature in every party manifesto on which we elect the next government. We authorise it through a large share of our government’s budget. We, often through our press, get actively upset when our government doesn’t keep us safe, and we launch inquiries and hold leaders accountable when they fail.
Our police and national security machinery are constantly trying to keep up with the changing ways criminals act. The rise in end-to-end encryption on messaging services has complicated their jobs — and when they hear us asking to be kept safe, they have pointed to this as an obstacle.
So they’re asking us as the tech industry to “fix it”. If we don’t, they can’t do their jobs properly — which is what we, as citizens, have asked them to do.
Problem 2: The technology problem
In a completely different vein, we — the tech community — are building an internet on which our society and economy can flourish. We are fighting a whole industry of criminals who are trying to undermine this — as we all know, we need to protect ourselves against phishing, malware, unauthorised intrusions, man-in-the-middle attacks and more. Our infrastructure is vulnerable in a lot of ways. As I’m fond of repeating, we initially designed the protocols in the internet and web stacks to optimise for sharing — we are only now retrofitting security onto them.
The most effective way we’ve found to build trust in the internet and web, to maintain the privacy of our users, and to protect the integrity of the financial payments and transactions that run across it — is end-to-end encryption. This is currently our strongest guarantee that the data you receive is what the sender intended — and that no one has intercepted it or altered it in transit. (For more information on this, I highly recommend Keys under Doormats by Harold Abelson, Ross Anderson, Bruce Schneier et al.)
As an industry, we haven’t found ways to build back doors that only let the “good guys” in. At this point, it is very likely impossible. We, the W3C Technical Architecture Group, wrote about it in our finding, End-to-End Encryption and the Web:
“It is impossible to build systems that can securely support ‘exceptional access’ capabilities without breaking the trust guarantees of the web platform. Introducing such capabilities imposes known risks that far outweigh any hypothetical benefits.”
This is true not just for the web but also for internet services, further down the stack.
With our heads in this space, it can be easy to hear “we do not believe that there should be a safe space for terrorists to be able to communicate online” as “We want to accept the risks of malicious attackers getting our data, in order for criminal and terrorism investigations to be more successful.” Or, indeed, “we understand what is at risk; we just want a magic solution.” It feels like we’re being asked to do the impossible. And with our existing architectures, most of us can’t imagine a way in which it might become possible.
Where do we go from here?
Two ways forward come to my mind — and I’ll admit, neither is easy.
Option 1: We attack the technical problem
…which is a Hard Problem.
Most messaging platforms are using one of these encryption architectures:
- Clear text. All messages are unencrypted. The messaging service can read them, anyone along the network path can see them, and anyone who captures the packets can know what you’re saying.
Examples: basic email, IRC.
- TLS-protected messaging. Messages are encrypted in transit between each user and the service, but the service itself can read them.
Examples: Facebook Messenger, Twitter Direct Messages, Google Hangouts chat messages.
- End-to-end encrypted messaging. Messages are encrypted at one end and passed through the internet (usually to the messaging service and on to the user — or peer-to-peer from one user to another) in an unreadable state. They are only decrypted at the other end.
It’s true we can’t “steam open these envelopes”; sending an end-to-end encrypted message is the equivalent of sending an unbreakable box to which only your recipient has the key.
Examples: WhatsApp, Signal.
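To make the third architecture concrete, here is a deliberately minimal sketch of the end-to-end property, using a one-time pad (XOR) as a stand-in cipher. This is a toy, not a real messaging protocol — production systems use authenticated key exchange and modern ciphers — but the property it demonstrates is the same: the service relaying the message never holds the key, so it can only see noise.

```python
import os

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding key byte (one-time pad)."""
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = os.urandom(len(message))  # shared only by the two endpoints, never by the server

ciphertext = encrypt(key, message)           # this is all the relaying service sees
assert ciphertext != message                 # the middle of the network sees only noise
assert decrypt(key, ciphertext) == message   # only a key holder can recover the text
```

A warrant served on the service in the middle can only yield the ciphertext; without the endpoints’ key, there is nothing to read.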
Messages sent through clear-text and TLS-protected messaging are generally accessible to security services and criminal investigations: they are stored on a server, and whoever operates that server can be presented with a warrant. But as we move to more end-to-end encrypted messaging (and as the threats to users increase, making such encryption more valuable), more messages become inaccessible to investigations.
I don’t see us moving backwards towards less protection. If we are to keep the web and internet a strong part of our daily lives, we will need to maintain — and continually strengthen — this protection.
To solve this criminal investigation problem, we would have to create a fourth architectural option for messaging — one that only allows the “good guys” access in between the sender and receiver.
I won’t say it will never happen. At one point, both secure key exchange and verifiable integrity across a database must’ve seemed impossible, and then we came up with asymmetric encryption and blockchains. So I haven’t given up hope. But I will say that thus far, we have no solutions for this, nor am I aware of any progress toward creating one. Finding a way to do this — if it is possible — will take a big shift in how we build messaging services… and will probably require us to rebuild everything we’re currently using.
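For a sense of why “seemed impossible, then wasn’t” is plausible, consider secure key exchange itself: before Diffie–Hellman, agreeing on a secret over a public channel looked like a contradiction in terms. Here is a sketch with textbook-sized numbers (real deployments use primes of 2048+ bits or elliptic curves; these tiny values are purely illustrative):

```python
# Toy Diffie-Hellman key exchange. Values are textbook-small for illustration only.
p, g = 23, 5           # public: prime modulus and generator

a = 6                  # Alice's private value (kept secret)
b = 15                 # Bob's private value (kept secret)

A = pow(g, a, p)       # Alice publishes A
B = pow(g, b, p)       # Bob publishes B

# Each side combines the other's public value with its own secret:
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)

assert alice_secret == bob_secret == 2  # same shared key, never sent over the wire
```

An eavesdropper sees p, g, A and B, yet (for large enough numbers) cannot feasibly recover the shared secret. That such a scheme exists at all was once unintuitive — which is the only reason I hedge on saying “good-guys-only” access is forever impossible.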
(To say nothing of how we determine who is a “good guy”… This is also a Hard Problem. Our internet — and messaging services — are international; not every government enjoys the same level of trust as the UK’s does.)
There is lots of research and exploration to do here.
Option 2: We attack the democratic problem
…by agreeing that our government will not be able to anticipate attacks planned using end-to-end messaging services. This may mean a lower level of protection for us as citizens. If we were to accept that certain communications were outside the reach of our security services and police, then we must understand that they are only able to work in other ways. And we must be patient as they reallocate resources and expand their other approaches.
(We would also have to expect the press to adapt, and to not go after democratically elected leaders for failing to anticipate attacks planned in this way.)
This option is probably easier for non-technical people to understand… but it must be hard to accept when you’re mourning the loss of a family member or have yourself been the victim of an attack.
Proceeding with either option requires a lot of collaboration and discussion between all of us: both the tech community and policymakers in government. (And a lot of others, too.)
These are ubiquitous challenges; unfortunately, the crimes we’re trying to prevent can affect any of us, and the technologies we’re building are tools for all of us. We all have a stake in their outcomes. This is important enough and the problems are complicated enough that we need to listen and support each other in the years ahead. This will take a while, but — as we’ve seen — the issue will keep coming up. The more we talk about it, the easier it will be to converge on a workable solution.