The House of Lords Communications and Digital Committee has launched a new inquiry, which will examine whether freedom of expression is under threat online and, if so, how modern technology might be used to help protect it. The inquiry will also consider how such freedoms should be balanced against other rights.
Lately, much of the talk in news and political circles has centred on how the Government and Ofcom could clamp down on “harmful” online content (e.g. hate speech, bullying, terrorism, conspiracy theories, etc.), such as by imposing new rules or restrictions on internet content providers (websites, social media, etc.) and access providers (broadband and mobile) through the much-delayed Online Harms Bill (here and here).
The Government have been keen to stress that they want to keep people safe online (especially children) and that any changes must be made in a proportionate way, ensuring that freedom of expression is upheld and promoted online, and that the value of a free and independent press is preserved. In reality, achieving such a balance is incredibly difficult.
Trying to police the natural public expression of negative human thought, while balancing that against free speech, remains fraught with difficulty, particularly if it results in websites introducing automated filtering systems (manually moderating all content on major platforms is impossible). Such systems are notorious for being overzealous and often fail to understand context (e.g. people joking about blowing up a city vs. actual terrorist discussions).
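The context problem described above is easy to demonstrate. The following is a minimal sketch of a naive keyword filter (the banned-term list and example messages are purely illustrative assumptions, not any real platform's rules), showing how it treats a joke and a genuine threat identically:

```python
# A minimal sketch of why naive keyword-based moderation misses context.
# The banned-term list and messages below are illustrative assumptions,
# not any real platform's moderation rules.
BANNED_TERMS = {"blow up", "bomb"}


def naive_filter(message: str) -> bool:
    """Return True if the message would be flagged by the keyword filter."""
    text = message.lower()
    return any(term in text for term in BANNED_TERMS)


joke = "Traffic was so bad I wanted to blow up the whole city!"
threat = "We will blow up the city centre at noon."

# Both messages are flagged identically; the filter cannot tell intent apart.
print(naive_filter(joke), naive_filter(threat))
```

Real systems use far more sophisticated classifiers than this, but the underlying weakness (matching surface patterns rather than understanding intent) is the same one the article points to.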
Meanwhile a lot of social media today seems to have become worryingly tolerant of hate speech and dangerous conspiracy theories (e.g. anti-vaxxers), which can do real harm. In some cases politicians and celebrities have even helped to amplify such things. But once again, tackling this could be difficult without also impacting those who criticise the same topics, as well as satire, the right to cause offence, political speech and so forth.
Lord Gilbert of Panteg, Chair of the Committee, said:
“In recent years there have been a growing number of controversies relating to the use of the right to freedom of expression online. We hope to hear views from all sides of the debate on the roles that platforms and the state should play in protecting or curtailing what users can say online.”
The committee have thus launched a new Call for Evidence, which invites written contributions to its inquiry by Friday 15th January 2021. Some of the questions being posed by the committee include:
How should good digital citizenship be promoted?
Should online platforms be under a legal duty to protect freedom of expression?
To what extent should users be allowed anonymity online?
How can content moderation systems be improved?
Would strengthening competition regulation of dominant online platforms help to make them more responsive to users’ views about content and moderation?
No doubt they’ll say… “Nothing to see here, move along”.
I can see it now: government officials writing up excuses from the big book of excuses as to why censorship of negative impulses and thoughts is a good thing.
Oi you got a loicense for that tweet son ? Nah FoS never existed in the UK to begin with.
I don’t think manual moderation is successful; it’s wide open to abuse and to others’ personal opinions. It seems you can be as polite as you want on a forum, but if you post an opinion opposite to others and people complain, you’ll be banned. That seems to be the status quo with certain big websites I know. They would rather just ban you than upset other members, even if you’re not breaching the rules. Moderators are jumped-up Hitlers in some cases and the power goes to their heads. Plus, they are usually volunteering to do it part time, so they aren’t trained in unbiased, conscious thinking.
I think manual moderation is an area that they should be looking at.
The article above touches on the problem of practicality with that. Take Facebook with its 2.5-3 billion active users and then consider this against the company’s 45,000 full-time employees. There’s no way you can do effective manual moderation across an active user base of that size.
Information websites simply don’t make enough money to employ enough staff to resolve that problem via the manual method, not even close. A key problem here is the very nature of the economics that surround how websites exist and work (e.g. a single person could set up an information website that ends up catering for 100,000 members, but which may only make enough money to employ its sole creator, or might not make any money at all).
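The scale mismatch in the reply above can be made concrete with a quick back-of-the-envelope calculation, using the figures quoted there (which are only ballpark numbers, and assume, unrealistically, that every employee does nothing but moderation):

```python
# Back-of-the-envelope check of the moderation-scale argument.
# Figures are the ballpark numbers quoted in the comment, not audited data.
active_users = 2_500_000_000   # lower end of the quoted 2.5-3 billion
employees = 45_000             # quoted full-time headcount

# How many active users each employee would need to cover.
users_per_employee = active_users // employees
print(f"{users_per_employee:,} users per employee")
```

Even on these generous assumptions, each employee would be responsible for well over fifty thousand active users, which is the practical impossibility the article and the reply are pointing at.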
This is a tough one. Freedom of expression includes the right to talk nonsense. Freedom of religion arguably amounts to the same thing. These things are accepted as fundamental rights. And yet it also seems to be agreed that the nonsense talked by (say) anti-vaxxers needs to be banned.
What about moon-landing-deniers and flat-earthers? Religious cults? Is “harmless” nonsense OK, but not “dangerous” nonsense? Who decides which is which?
Not to mention that if the government suppresses the speech of conspiracy theorists, in their eyes that will provide them with hard proof of what they believed all along.
It is already illegal, rightly so, to incite crimes, spread child pornography etc. It is quite a different matter to express opinions even if some find them offensive or misguided.
For example, I know people who claim 5G is dangerous. I have told them I think they are wrong but IMO they are entitled to express their opinion. Were they to indulge in or incite criminal damage or violence, then that would invite prosecution.
With respect, you’re a bit naive about how the system works. For example, violent BLM rioters won’t ever be prosecuted because they are working with/for the system, as part of the DNC political machine (which is all out in the open).
Yet here we are, with ‘reasonable’ types on a telecoms industry discussion forum, suggesting that people who don’t believe in the moon landing are potentially dangerous. Complete face-palm all around at some of the clueless posts I’m reading here.
There seems to be a loophole with regard to the right to free speech of people who participate in online citizen science projects (mainly non-students/laypersons).
Scientists deleted posts and banned users from the discussion forum of an online citizen science platform, which is operated by a university in the UK.
Mark, your article is a brilliant demonstration in itself of the problem. Having well-informed reservations about certain vaccines is NOT dangerous or harmful. It’s a legitimate opinion (see the long list of successful lawsuits against Merck and Pfizer, for example). Also, the idea that technology experts or scientists are magically impartial and always occupying a ‘sensible middle ground’ is a joke and has been discredited time and time again. They can and do very often get things wrong. Your tone is a bit smug when it comes to these sorts of articles, Mark. It gives off the ‘I know best because I have more technical expertise than thou’ kind of vibe, which is annoying.