I have something to say about that…

Where terrorists go to chat; government and the end-to-end encryption problem


One reason we form governments is to protect our communities. At the same time, our economy and human rights depend on private and encrypted online services. How do we move forward when these two agendas clash?


What’s prompting this post

Following this week’s bombing at Manchester Arena, we in the UK are struggling to come to terms with the loss of children, the unsettling reminder of our vulnerability, and the stark contrast of our communities coming together in the aftermath.

We are having the to-be-expected conversations about why this happened, what we can learn, and how we protect ourselves. We are reexamining what we expect of our government. It’s part of how we heal as a country, how we pick ourselves back up.

Some of the discussion inevitably turns to encryption and how terror plots are organised — in the UK and abroad, face to face and over the internet. Quickly we run into the encryption question: end-to-end encrypted messages can’t be decrypted anywhere between the users’ devices, which makes it difficult for authorities to identify a conspiracy.
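To make the "can't be decrypted in between" point concrete, here is a minimal toy sketch of how end-to-end encryption works in principle. This is illustrative only, not a real protocol (real services such as WhatsApp use the Signal protocol, with far larger key sizes and authenticated ratcheting); the prime, the XOR stream cipher, and the variable names below are simplifications of my own. The key property it shows: the two endpoints derive a shared secret from publicly exchanged values, so the service in the middle relays only ciphertext and never possesses the key needed to decrypt it.

```python
# Toy end-to-end encryption sketch (NOT secure, for illustration only).
import hashlib
import secrets

P = 4294967291  # small toy prime (2**32 - 5); real protocols use ~2048-bit groups
G = 5

def keypair():
    # Each user generates a private value and publishes g^priv mod p.
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    # Diffie-Hellman: both sides compute the same g^(ab) mod p,
    # then hash it down to a symmetric key.
    return hashlib.sha256(str(pow(their_pub, my_priv, P)).encode()).digest()

def xor_cipher(key, data):
    # Toy stream cipher: XOR the data with a SHA-256-derived keystream.
    # XOR is its own inverse, so the same function encrypts and decrypts.
    stream = hashlib.sha256(key + b"stream").digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Alice and Bob exchange only their public values; the server relays them.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

key_a = shared_key(a_priv, b_pub)
key_b = shared_key(b_priv, a_pub)
assert key_a == key_b  # both ends hold the same key; the relay does not

ciphertext = xor_cipher(key_a, b"meet at noon")  # all the server ever sees
plaintext = xor_cipher(key_b, ciphertext)        # recoverable only at the endpoint
```

The private values never leave the devices, so even a warrant served on the service operator yields only the ciphertext — which is exactly the property the policy debate is about.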

Home Secretary Amber Rudd outlined the problem in her comments after the Westminster attack:

“It used to be that people would steam open envelopes, or just listen in on phones, when they wanted to find out what people were doing, legally, through warrantry — but in this situation we need to make sure that our intelligence services have the ability to get into situations like encrypted WhatsApp.”

We have seen this conversation come up again and again, during the debates over the Investigatory Powers Act in 2015 and the (ultimately dropped) Communications Data Bill of 2012. It also resurfaced a few weeks ago, after the Westminster attack.

It feels like a discussion at a stalemate: I’m seeing government asking for the problem to be solved, and technologists rolling their eyes at the implication that “government wants to outlaw maths.”

Having been on both sides of this discussion, I want to explain the miscommunications I see happening and outline the (few) options I think we have to proceed.

The source of the conflict

There are two conflicting pressures pushing us towards this impasse.

Problem 1: The democracy problem

In the UK, we ask (and pay our taxes for) our government to keep us safe. We expect it to feature in every party manifesto on which we elect the next government. We fund it through a large share of the government’s budget. We, often through our press, actively get upset when our government doesn’t keep us safe, and we launch inquiries and hold leaders accountable when they fail.

Our police and national security machinery are constantly trying to keep up with the changing ways criminals act. The rise in end-to-end encryption on messaging services has complicated their jobs — and when they hear us asking to be kept safe, they have pointed to this as an obstacle.

So they’re asking us in the tech industry to “fix it”. If we don’t, they can’t do their jobs properly — which is what we, as citizens, have asked them to do.

Problem 2: The technology problem

In a completely different vein, we — the tech community — are building an internet on which our society and economy can flourish. We are fighting a whole industry of criminals who are trying to undermine this — as we all know, we need to protect ourselves against phishing, malware, unauthorised intrusions, man-in-the-middle attacks… Our infrastructure is vulnerable in a lot of ways. As I’m fond of repeating, we initially designed the protocols in the internet and web stacks to optimise for sharing — we’re only recently retrofitting security onto them.
