This article originally appeared in Digital Education, my free newsletter.
The British government wants to make technology companies more responsible for policing online harms. It's one of those ideas that sounds great in theory but is fraught with difficulties.
I wrote about this a couple of years ago, in 2019: see below. I haven't yet had time to see how far the new bill departs from the original white paper.
(For those who don't know, new legislation comes about, very broadly speaking, like this: (1) a white paper sets out the proposals; (2) after debate and discussion, a bill is published, which is like a draft Act of Parliament; and (3) after more debate in Parliament, the bill passes into law as an Act of Parliament (or not).)
This is the end of free speech online
My article from 2019:
Opinion piece: The Proposed Online Harms Legislation
Introduction
Most of us would like to see companies, especially the “big tech” ones, held to account. Hardly a day goes by without our reading about dreadful content being shared online and “going viral”, while the platforms on which this happens come across as very slow to act. So when the Government published its Online Harms White Paper back in April 2019 and held an online consultation (the responses are being analysed at the moment), it seemed in principle a welcome development. In a nutshell, the government wants to force companies to take responsibility for the content they allow people to share online. A regulator would be appointed to make sure that companies abide by the relevant codes of practice. But, as the saying goes, the devil is in the detail, and there are quite a few details that need to be ironed out.
The first concern is the sheer scope of the “harms” that the legislation would cover. Here is an extract from the summary fact sheet (http://bit.ly/SecEdHarmsFactSheet):
Harms with a clear legal definition
Includes:
Child sexual abuse and exploitation
Extreme pornography
Revenge pornography
Hate crime
Sexting of indecent images by under 18s
Harms with a less clear legal definition
Includes:
Cyberbullying and trolling
Disinformation
Intimidation
Advocacy of self-harm
(For the full list, see the fact sheet referred to above.)
This is like suggesting an Act of Parliament that would cover every type of crime. While all these activities are connected by the fact that they take place online (though not exclusively so), it might be less unwieldy to deal with them individually or at least in smaller groupings. It would also be useful to see if existing legislation could be used, or amended, to cover some of these.
Another issue is that these proposals appear to make otherwise legal activity illegal by the back door. Indeed, at a conference about fake news in April 2019, Sarah Connolly, Director of Security and Online Harms at the DCMS, said that companies would be held to account for tackling behaviours that may not be illegal but are clearly highly damaging, such as disinformation. How would “damaging” be defined, let alone “highly damaging”?
Or take “disinformation”. According to Connolly, the Government distinguishes between misinformation and disinformation: the former is where someone unknowingly makes statements which are incorrect, whereas the latter is deliberate. This sounds like a useful distinction until you try to apply it in practice. To prove disinformation, it seems to me, you would need to prove intent, and that sounds like a job for a court of law rather than a regulator.
The way the legislation would be enforced is that organisations would need to monitor all the user-generated activity on their sites, decide whether someone has contravened the codes of practice, and deal with the offending content accordingly. Thus the organisation would have to judge whether someone is guilty of misinformation or disinformation -- assuming the content moderators themselves knew what the objective truth was.
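To see why such moderation is so hard to automate at the scale the platforms operate on, consider what a filter can actually do. Here is a minimal, purely hypothetical sketch in Python (the phrases, labels and function are all invented for illustration; nothing here comes from the White Paper): it can match wording, but the one thing that distinguishes misinformation from disinformation, namely intent, is invisible to it.

```python
# Purely illustrative sketch: a naive keyword-based moderation check.
# The phrases and labels are invented for this example.

SUSPECT_PHRASES = {
    "miracle cure": "possible health disinformation",
    "rigged election": "possible political disinformation",
}

def flag_post(text: str) -> list[str]:
    """Return the labels a post might fall under.

    A filter like this can match wording, but it cannot tell whether
    the author believed the claim (misinformation) or was deliberately
    deceiving (disinformation): intent is invisible to a string match.
    """
    lowered = text.lower()
    return [label for phrase, label in SUSPECT_PHRASES.items()
            if phrase in lowered]

if __name__ == "__main__":
    # A sincere question and a deliberate lie get exactly the same flag.
    print(flag_post("Is this miracle cure genuine? Asking for my mum."))
    print(flag_post("Buy now! This miracle cure is 100% proven to work!"))
```

The point of the sketch is not that platforms would use anything quite so crude, but that whatever they use will face the same gap: the codes of practice turn on questions of intent and truth that no automated check, and arguably no moderator, can settle.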
Another problem area is hate crime. According to the Crown Prosecution Service:
“The term 'hate crime' can be used to describe a range of criminal behaviour where the perpetrator is motivated by hostility or demonstrates hostility towards the victim's disability, race, religion, sexual orientation or transgender identity.” (http://bit.ly/SecEdHate).
So if someone stated in a forum that they don’t like transgender people, the moderator would (presumably) have to decide whether or not that constituted a hate crime.
Don’t get me wrong: I find comments like that as objectionable as the next person, and would welcome more civility and consideration online. But that statement gets to the nub of what is wrong with the White Paper: it seems to be an attempt to legislate decent behaviour.
We should not underestimate the difficulties that such a broad-brush approach would create for moderators. If a moderator banned someone for making a statement like the one just given, could that person complain to the Regulator that their right to freedom of speech had been breached?
We should also consider the unintended consequences. The legislation would apply to start-ups, small and medium-sized enterprises, and other organisations such as charities. Would that include schools? The Government promises that it will:
“ensure a risk-based and proportionate approach. We will minimise excessive burdens, particularly on small businesses and civil society organisations.”
This gives little cause for comfort. The VAT rulings on digital goods had a hugely disproportionate effect on one-person businesses, and many schools have had a difficult time grappling with the complexities of the GDPR.
Moreover, how will ordinary users be affected? There are already American websites that don't allow their content to be shown in the EU because of the privacy laws (especially the so-called Cookie Law). Will some file-sharing websites reach the point where they think the (potential) costs of providing their services to the EU and the UK outweigh the benefits? If so, how would schools cope if one of their content providers metaphorically voted with its feet because of the difficulty of complying with such a sprawling and ill-defined law?
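It is worth noting how little engineering such regional blocking takes, which is partly why it is a plausible response for a provider weighing up compliance costs. Below is a minimal, hypothetical sketch in Python: in real deployments the country lookup would come from a GeoIP database or a CDN rule, so the lookup table here is invented (using documentation-only IP addresses) purely so the example runs.

```python
# Illustrative sketch of geo-blocking, of the kind some American sites
# already use for EU visitors. The lookup table is invented; a real
# site would use a GeoIP database or a CDN rule instead.

GEO_TABLE = {"203.0.113.5": "US", "198.51.100.7": "FR"}  # invented IPs
BLOCKED_REGIONS = {"FR", "DE", "GB", "IE"}               # example codes only

def country_of(ip_address: str) -> str:
    """Stand-in for a real GeoIP lookup."""
    return GEO_TABLE.get(ip_address, "UNKNOWN")

def handle_request(ip_address: str) -> tuple[int, str]:
    """Return an HTTP status code and body for a visitor."""
    if country_of(ip_address) in BLOCKED_REGIONS:
        # 451 Unavailable For Legal Reasons is the status code some
        # sites actually send in this situation.
        return 451, "Sorry, this content is not available in your region."
    return 200, "Welcome!"

if __name__ == "__main__":
    print(handle_request("203.0.113.5"))   # (200, 'Welcome!')
    print(handle_request("198.51.100.7"))  # (451, 'Sorry, ...')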
All of this could be academic, though. The White Paper includes this:
"ISP blocking. Internet Service Provider (ISP) blocking of non-compliant websites or apps – essentially blocking companies’ platforms from being accessible in the UK – could be an enforcement option of last resort."
Will the tech companies care? Perhaps they will if it seriously impacts their finances, but the problem is that those of us who use their services sensibly and legitimately, including schools, teachers and students, will be the ones who really suffer. It's a bit like punishing the whole class as a way of disciplining one unruly child.
To see the proposals, please visit http://bit.ly/SecEdHarms.