Who decides what we can say online? The question plagues Silicon Valley boardrooms, government committees and philosophy departments alike: should individuals have the final say on what happens to content they post online? Should lawyers? Countries? Algorithms? The one point of agreement is that the status quo isn’t working. Hate and violence online continue to threaten people daily, while decisions about whose speech is permitted are made by biased algorithms and through inscrutable corporate structures. Governments around the world are deciding, for better or for worse, that they need that power back.
The UK is the latest country to unveil its plan to take back the reins. The online safety bill demands that social media platforms put systems in place to remove and reduce content that threatens the safety of users, with huge fines if they fail to comply. The bill’s supporters hail it as a victory for democratic freedoms, shifting the power over speech even slightly away from tech companies. Its critics say the opposite: that any increase in government involvement in regulating speech is fundamentally illiberal. Paradoxically, both share at their heart a similar worry: that our online lives are being shaped, curated and overseen by institutions far removed from us as individuals.
Under the proposals, platforms will have ‘a duty to have regard to the importance of protecting users’ right to freedom of expression within the law’ – a crucial backstop against over-moderation of speech. But this doesn’t altogether alleviate the worry. Another duty, ‘to operate a service using systems and processes designed to ensure that the importance of the free expression of content of democratic importance is taken into account’, is a curious addition. Either the first duty is inadequate to protect freedom of expression – worrying in itself – or the second duty will protect speech that would otherwise be legitimately restricted – for instance, because it’s particularly harmful.
And what counts as this ‘democratically important’ content is anyone’s guess. We’ve been offered multiple definitions – from the universal ‘intending to contribute to democratic political debate’ to the narrow ‘promoting or opposing government policy…or a political party’ to the contentious ‘areas of political controversy’ or ‘campaigning on a live political issue’. So: cat videos, unlikely. Anything else? Possibly.
It’s good that the bill recognises that speech in online spaces is a crucial part of today’s democratic discourse. But the terms used in discussion around this bill (controversy, live political issue, robust debate) sound ominously similar to the government’s other ‘free speech’ agenda: the conjured conspiracy of campus censorship and new laws leaving universities facing fines for allowing no-platforming of ‘controversial’ speakers – a proposal that looks set to protect the ability of contrarians to say offensive and harmful things while curbing the ability of marginalised groups to object.
All this speaks to the dread phantom that is the ‘marketplace of ideas’: the well-worn theory which holds, despite decades of evidence against it, that in a free and open public square of human opinion the best ideas will flourish, while rational challenge will send bad ideas scurrying away.
This vision is, and always has been, a fiction. Not everyone is included in the public square: not everyone is welcomed; not everyone is safe. The safety bill promises an online space, safe for everyone, where political discussion runs free – but also one where what counts as ‘political discussion’ and what counts as ‘safe’ is decided by those already in positions of power. These two promises cannot simply be reconciled. Some people and some views have always been privileged in the marketplace of ideas, regardless of any merit – and that includes political views and political speech which can be harmful at best, and downright dangerous at worst.
Protections for democratically important content, without any corresponding duty to act on democratic harms, skew the balance. The research piles up: violent and harmful – even if legal – speech in online spaces drives out women, particularly women of colour; public figures from journalists to politicians are at constant risk of online hate and violence; LGBT+ people face harassment and abuse every day. Much of this will inevitably look like, or be disguised or defended as, protected ‘political debate’ – the same fight we are seeing around universities, where marginalised groups speak out about violence they face and are told they are ‘silencing debate’.
The other spectre in the corner is disinformation. Once feared as an existential threat to democracy, it has now all but vanished from the online safety proposals. Yet we know that disinformation latches on to whatever the ‘live political issue’ of the day happens to be, spreading hate, fear and uncertainty. If anything is going to fit squarely in the bracket of ‘appears to be specifically intended to contribute to democratic political debate’, you can be sure disinformation campaigns will evolve to look exactly like that. And in online spaces, this violence, abuse and disinformation can be multiplied, amplified and piled on to extraordinary levels. How could we ever ‘robustly scrutinise alternative viewpoints’ in such an environment?
There are legitimate worries about a government requiring platforms to take down speech. There are also legitimate worries about a government requiring platforms to take no action against certain kinds of harmful speech. The narrow explanation of ‘democratically important’ has been criticised for potentially offering more protection to politicians’ speech than to that of the general public. Across the pond, we’ve seen years of Republicans falsely claiming that right-wing voices are ‘censored’ by tech firms and threatening them with legislative consequences; in India, meanwhile, Twitter’s offices were recently raided after the company added ‘manipulated media’ labels to tweets posted by members of the ruling party.
For the UK to be able to claim that it is truly putting forward a liberal democratic plan for the internet – one that protects fundamental freedoms for all while also protecting people from harm – it will have to address the substance of these questions sooner rather than later, not just bypass them with vague definitions. Otherwise, we risk introducing a Cheshire Cat of a regime: one that simultaneously has too many teeth, and none at all.