Ofcom has today published the “first edition” of its new UK online safety codes of practice for “tech firms” (e.g. social media) and smaller websites under the government’s Online Safety Act (OSA). Providers now have 3 months to ensure they’re able to tackle “illegal harms” (i.e. illegal content), such as terror, hate, fraud, child sexual abuse and assisting or encouraging suicide.
On the surface, it all sounds sensible and well-intentioned. After all, it’s widely understood, and few could disagree, that the old model of self-regulation has struggled to keep pace with the changing online world, which has allowed far too much “harmful” content to slip through a fairly weak net.
The new Act essentially responds to this by placing new safety duties on social media firms, search engines, messaging, gaming and dating apps, and pornography and file-sharing sites of all sizes. Failing to comply with these rules could be extremely costly: “We have the power to fine companies up to £18m or 10% of their qualifying worldwide revenue – whichever is greater,” said Ofcom.
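To put that penalty ceiling in concrete terms, here’s a minimal sketch of the calculation (purely illustrative; “qualifying worldwide revenue” has a more involved definition in the Act than a single annual figure, and the function name is mine):

```python
def max_osa_fine_gbp(qualifying_worldwide_revenue_gbp: float) -> float:
    """Illustrative ceiling on an Online Safety Act fine: the greater of
    £18m or 10% of qualifying worldwide revenue (simplified assumption)."""
    return max(18_000_000, 0.10 * qualifying_worldwide_revenue_gbp)

# Example: a firm with £1bn of qualifying worldwide revenue
print(f"£{max_osa_fine_gbp(1_000_000_000):,.0f}")  # £100,000,000
```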
In “very serious cases”, Ofcom can also apply for a court order to have broadband ISPs and mobile operators block a website or service in the UK.
Types of harmful content
The Online Safety Act lists over 130 ‘priority offences’, and tech firms must assess and mitigate the risk of these occurring on their platforms. The priority offences can be split into the following categories:
➤ Terrorism
➤ Harassment, stalking, threats and abuse offences
➤ Coercive and controlling behaviour
➤ Hate offences
➤ Intimate image abuse
➤ Extreme pornography
➤ Child sexual exploitation and abuse
➤ Sexual exploitation of adults
➤ Unlawful immigration
➤ Human trafficking
➤ Fraud and financial offences
➤ Proceeds of crime
➤ Assisting or encouraging suicide
➤ Drugs and psychoactive substances
➤ Weapons offences (knives, firearms, and other weapons)
➤ Foreign interference
➤ Animal welfare
However, striking the right balance between freedom of expression, individual privacy and outright censorship is difficult, particularly when attempting to police the common and highly subjective public expression of negative human thought. Not to mention complex issues of context (e.g. people joking about blowing up a city vs actual terrorists), parody and political speech. Humans often get it wrong, and automated filtering systems are even worse. But only time will tell whether the pros of the new approach are enough to outweigh the potential cons (e.g. overblocking of legal content that is mischaracterised).
Who the rules apply to
All in-scope services with a significant number of UK users, or targeting the UK market, are covered by the new rules, regardless of where they are based.
The rules apply to services that are made available over the internet (or ‘online services’). This might be a website, app or another type of platform. If you or your business provides an online service, then the rules might apply to you.
Specifically, the rules cover services where:
- people may encounter content (like images, videos, messages or comments) that has been generated, uploaded or shared by other users. Among other things, this includes private messaging, and services that allow users to upload, generate or share pornographic content. The Act calls these ‘user-to-user services’;
- people can search other websites or databases (‘search services’); or
- you or your business publish or display pornographic content.
To give a few examples, a ‘user-to-user’ service could be:
- a social media site or app;
- a photo- or video-sharing service;
- a chat or instant messaging service, like a dating app; or
- an online or mobile gaming service.
The rules apply to organisations big and small, from large and well-resourced companies to very small ‘micro-businesses’. They also apply to individuals who run an online service.
It doesn’t matter where you or your business is based. The new rules will apply to you (or your business) if the service you provide has a significant number of users in the UK, or if the UK is a target market.
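As a rough illustration of how those scoping tests combine – a service needs a UK link (a significant UK user base or the UK as a target market) plus at least one of the covered service types above – here is a simplified sketch (not legal guidance; the field names are illustrative and the real tests in the Act are more nuanced):

```python
from dataclasses import dataclass

@dataclass
class OnlineService:
    hosts_user_generated_content: bool    # 'user-to-user' service
    is_search_service: bool                # lets people search other sites/databases
    publishes_pornographic_content: bool
    significant_uk_user_base: bool
    targets_uk_market: bool

def potentially_in_scope(s: OnlineService) -> bool:
    """Simplified sketch of the scoping criteria described above."""
    uk_link = s.significant_uk_user_base or s.targets_uk_market
    covered_service_type = (s.hosts_user_generated_content
                            or s.is_search_service
                            or s.publishes_pornographic_content)
    return uk_link and covered_service_type

# Example: a small forum with UK users that hosts user comments
forum = OnlineService(True, False, False, True, False)
print(potentially_in_scope(forum))  # True
```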
The first step in implementing all this sees Ofcom giving in-scope providers three months to complete “illegal harms risk assessments”. Every site and app in scope of the new laws thus has from today until 16th March 2025 to complete an assessment to understand the risks illegal content poses to children and adults on their platform.
Subject to their codes completing the Parliamentary process by the above date, from 17th March 2025, sites and apps will then need to start implementing safety measures to mitigate those risks (e.g. effective moderation that can identify and remove “harmful” content), and Ofcom’s codes set out measures they can take. Some of these measures apply to all sites and apps, and others to larger or riskier platforms.
Dame Melanie Dawes, Ofcom’s CEO, said:
“For too long, sites and apps have been unregulated, unaccountable and unwilling to prioritise people’s safety over profits. That changes from today.
The safety spotlight is now firmly on tech firms and it’s time for them to act. We’ll be watching the industry closely to ensure firms match up to the strict safety standards set for them under our first codes and guidance, with further requirements to follow swiftly in the first half of next year.
Those that come up short can expect Ofcom to use the full extent of our enforcement powers against them.”
Peter Kyle MP, UK Technology Secretary, said:
“This government is determined to build a safer online world, where people can access its immense benefits and opportunities without being exposed to a lawless environment of harmful content.
Today we have taken a significant step on this journey. Ofcom’s illegal content codes are a material step change in online safety meaning that from March, platforms will have to proactively take down terrorist material, child and intimate image abuse, and a host of other illegal content, bridging the gap between the laws which protect us in the offline and the online world. If platforms fail to step up the regulator has my backing to use its full powers, including issuing fines and asking the courts to block access to sites.
These laws mark a fundamental re-set in society’s expectations of technology companies. I expect them to deliver and will be watching closely to make sure they do.”
The Act also enables Ofcom, where they “decide it is necessary and proportionate”, to make a provider use (or in some cases develop) a specific technology (this must be accredited by Ofcom or someone they appoint) to tackle child sexual abuse or terrorism content on their sites and apps. The regulator are consulting today on parts of the framework that will underpin this power.
Otherwise, the first set of codes and guidance sets up the enforceable regime, although Ofcom are already working towards an additional consultation on further code measures in Spring 2025, with proposals expected in several additional areas.
And today’s codes and guidance are part of a much wider package of protections, with more consultations and duties still to come into force.
The heart of the new Act and Ofcom’s codes is absolutely in the right place, even if the road to hell is paved with good intentions. The internet can be a haven for some of the most vile hate, bullying, racism, child abuse, terrorism and more. Whole communities have even sprung up around these topics, and hostile governments often exploit them.
Suffice to say, the desire to rid the online world of such things is more than understandable – particularly for those who have suffered the most. In keeping with that, it’s easy to see why the new laws have been able to attract so much support from the wider electorate and cross-party MPs. But the potential problem is not with that goal; it’s with the overly broad and feverishly complex sledgehammer approach to achieving it.
The wrongful assumption seems to be that all sites will already have the necessary development skills, budget, knowledge, legal experience and time to implement everything. But what may be viable for bigger sites is not workable for everybody else, especially smaller sites that lack the necessary pieces to stand any realistic chance of properly implementing such complex rules (e.g. Ofcom’s risk assessment guide alone is 84 pages long). More support should be provided for them.
Some sites may thus respond to all this, and the risk of increased legal liability, by seeking to restrict speech through the removal of user-to-user services or the imposition of much more aggressive automated filtering systems, which raises the risk of excessive overblocking (i.e. censorship via the back door of extreme liability).
However, the new rules will also give users an avenue of appeal for any removed content, which must be reinstated if found to have been wrongfully removed. But not all third-party systems work that way, and this risks putting sites that allow user-generated content (millions of them) into a bit of a damned-if-they-do, damned-if-they-don’t position. The risk of an intolerable level of liability and legal complexity should not be underestimated in all this.
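To make that “remove, appeal, reinstate” obligation concrete, a site hosting user-generated content would need at least some record-keeping along these lines (a hypothetical sketch; the class and field names are mine, not drawn from Ofcom’s codes):

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class CaseStatus(Enum):
    REMOVED = auto()
    APPEALED = auto()
    REINSTATED = auto()
    APPEAL_REJECTED = auto()

@dataclass
class ModerationCase:
    content_id: str
    removal_reason: str                        # e.g. suspected priority offence
    status: CaseStatus = CaseStatus.REMOVED
    history: list[str] = field(default_factory=list)

    def appeal(self) -> None:
        self.status = CaseStatus.APPEALED
        self.history.append("user appealed the removal")

    def resolve_appeal(self, wrongfully_removed: bool) -> None:
        # Content found to have been wrongfully removed must be reinstated.
        self.status = (CaseStatus.REINSTATED if wrongfully_removed
                       else CaseStatus.APPEAL_REJECTED)
        self.history.append(f"appeal resolved: {self.status.name}")
```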
Ofcom has said they will “offer to help providers comply with these new duties”, which at present mostly seems to consist of various complex documents that, in some cases, require a degree in regulatory waffle and law to fully comprehend. But they do plan to introduce a new tool in early 2025 to help providers check how to comply with their illegal content duties, and there’s another tool for checking if the rules apply to you.
The regulator also said they were “gearing up to take early enforcement action against any platforms that ultimately fall short”, which is likely to cause most concern for the big social media sites, particularly those that have become a bit lax of late in terms of moderation (fingers tend to point toward ‘X’). Suffice to say that there are still a lot of unknowns with the new law and the next few years may be a bit bumpy.
Ofcom’s First Edition Codes of Practice and Guidance
https://www.ofcom.org.uk/../statement-protecting-people-from-illegal-harms-online/
This is pure dystopia. It fools the average person by saying it wants to tackle bad things, but anyone can see that in Germany criticizing the government can lead to insane fines and even jail time, like the guy who called a fat politician fat and was then ordered to pay €100,000! There’s another similar high-profile case where a person called another German politician an imbecile and was severely punished, as if it were China.
X does not have a lax moderation problem; it simply allows all lawful speech, which is how it should be. Unfortunately, in most countries only the political elite have the fundamental human right to free speech, sod everyone else. They even get these platforms to delete the accounts of elected officials, like in Brazil.
Personally, I can’t wait for this awful government to collapse and the Ofcommunists to disband. It’s astonishing how much damage they are causing.
For what it’s worth, the Online Safety Act was passed under the previous government.
However, I share your sentiment that the act is quite aggressive, and I’m disappointed the current government hasn’t softened it somewhat.
Fair assessment… if not for the fact that Labour has agreed with all major policy, including also pushing for this. You will not find anyone in government fighting for actual human rights like free speech; if anything, you can find many wanting people to be able to access LESS information, especially on certain topics.
To add to my post, many of the things this abomination wants to tackle are already illegal under the law. Making something illegal more illegal either means they want to create the illusion they are doing something, or that they want to pass something more nefarious. This is obviously the second.
‘X does not have a lax moderation problem, it simply allows all lawful speech…’
Have you considered a career in comedy? This is hilarious.
Don’t get me wrong, the law itself is horrendous, but claiming that site allows ‘all lawful speech’ is laughable. While I’m not a fan of the terms, I’m pretty sure describing someone as ‘cis’ or ‘cisgender’ isn’t illegal but isn’t permitted there, to give just one example. Plenty more are trivial to find.
Nothing wrong with the site’s rules being set as their management see fit but claiming they allow all lawful speech is, as mentioned, laughable.
You can laugh at whatever you believe, but what you are “pretty sure” about is absolutely false. Slurs are not banned.
John: the rules of X are online for anyone to read and certainly don’t fit with your description.
https://help.x.com/en/rules-and-policies/x-rules
To be clear, I’m not laughing at what I believe; I’m just a messenger of X’s policies here, so no belief required. I’m laughing at what you believe, as it’s so patently absurd.
Here’s a post from 16 minutes ago using that slur, Mr Absurd: https://x.com/big1oser/status/1868946878123196496
Twitter absolutely has a moderation problem. It was bad enough before Musk bought it and decimated the moderation teams; it’s far worse now.
Twitter doesn’t have lawful free speech; as mentioned already, you can’t say cisgender or talk about Zionism.
There’s literal evidence posted, but people will still keep lying that their favorite cultist word is banned, when it clearly is not.
Previous Twitter would ban according to the Democrats’ will, to the point of banning sitting presidents, but you people will pretend it was fine because you just want to live in echo chambers.
Currently, X is the most balanced social media platform out there.
Instagram better start waking up then, the things I’ve seen on there would give normal people nightmares
Animal abuse and child exploitation are rife on the platform; they’ll probably end up removing it from the UK.
The same applies to YouTube; they’ve been exposed multiple times by outlets such as the BBC for allowing children to come to harm and be exploited, and very little has been done. These are massive platforms with huge budgets that should allow for proper content moderation and safeguarding.
I wasn’t aware of the requirement to offer a mandatory appeals and re-instatement process for over-moderated content. Over-moderation is going to explode in order to mitigate liability under this law. The biggest boys might be able to resource that process, but I would think somewhat smaller sites, with a large enough number of users that they rely on probability modelling for moderation, might find it unviable. I think a lot of sites will choose to simply not offer user-generated content in the UK. And this is just the start. Ofcom are threatening much worse in the coming months, including age assurance and moderation requirements for legal but, in the censor’s opinion, harmful content.
They won’t ban X because (1) it is actually used as a diplomatic tool by many politicians around the world, and to reach those politicians and any other influential individuals, and (2) it is by far the platform with the most reach. This goes especially for Keir Stalin, whose almost 2 million X followers absolutely dwarf his followings on every other platform combined.
In France they actually got Rumble to stop operating there because Rumble refused to delete accounts that hadn’t broken any laws. In France they also arrested the Telegram CEO. They won’t be able to pull the same tricks in the UK because the ice is already super thin and they need the influence. The fact that record numbers of people are cancelling the BBC and abandoning propaganda from the likes of the Guardian means that they cannot afford to lose X.
Hi Gary.
I’m a simpleton STEM post-grad, so apologies for not understanding this, but if you’d explain why record numbers of people presumably cancelling their TV licence and not reading the likes of the Guardian makes X invaluable, that’d be great.
I can’t really see much overlap between people cancelling their TV licence, not reading the Guardian, etc, and people who would be influenced by ‘Keir Stalin’ regardless of the platform but would welcome some insight. Doesn’t look like his 2 million followers have had much impact on you given that name.
I’m not sure what metrics you’re using but am fairly confident most diplomacy is conducted behind closed doors not via X, including even Donald Trump. The social media posts enthuse some people and get others excited but the actual business is done in private. I’m also not convinced that X followers translate to reach and multiple recent studies indicate that the X algorithm alongside those of other platforms ensure they don’t.
This is the weakest appeal-to-authority fallacy I’ve ever seen. Not only does it not have anything to do with the subject, but he believes being a post-grad actually gives him any relevance.
There is obvious influence to be gained or lost on all platforms. People switching off from one side makes the other much more valuable. It’s incredibly obvious.
It wasn’t an appeal to authority, John, but self-deprecation, which may go some way towards explaining why it seemed so weak as one. That said, if this is indicative of either your comprehension or tendency to confirmation bias, it explains a few things.
They won’t ban it but there is a growing appetite towards abandoning it. Many private and public sector organisations already have or are in the process of either doing so, or substantially reducing their activity on there in favour of other platforms or channels because Twitter now offers less genuine reach and engagement with their target audiences.
X is literally at all-time record user numbers and minutes per user, as well as ranking #1 among apps on the Apple and Google stores. It is more influential than ever before and exposed many wrongdoings in the US, which actually led to real political change. That, and podcasts like Joe Rogan’s.
There are “certain” activists, like the Guardian and police accounts, leaving X because they hate having their lies Community Noted, simple as that. They are not missed. Their articles guilt-tripping people for having pets are still widely mocked.
> use of AI to tackle illegal harms, including CSAM;
Oh, nothing will definitely go wrong with this. No siree! Especially in a country that believes lines and shapes drawn in a certain way by artists are “child pornography.”