UK lawmakers push for Online Safety Bill to have a tighter focus on illegal content

Natasha Lomas
A UK parliamentary committee that’s spent almost half a year scrutinizing the government’s populist yet controversial plan to regulate Internet services by applying a child safety-focused framing to content moderation has today published its report on the draft legislation — offering a series of recommendations to further tighten legal requirements on platforms.

Ministers will have two months to respond to the committee’s report.

The committee broadly welcomes the government’s push to go beyond industry self-regulation by enforcing compliance with a set of rules intended to hold tech giants accountable for the content they spread and monetize — including via a series of codes of practice and with the media regulator, Ofcom, given a major new oversight and enforcement role over Internet content.


In a statement accompanying the report, the joint committee on the draft Online Safety Bill’s chair, Damian Collins, said: “The Committee were unanimous in their conclusion that we need to call time on the Wild West online. What’s illegal offline should be regulated online. For too long, big tech has gotten away with being the land of the lawless. A lack of regulation online has left too many people vulnerable to abuse, fraud, violence and in some cases even loss of life.

“The Committee has set out recommendations to bring more offences clearly within the scope of the Online Safety Bill, give Ofcom the power in law to set minimum safety standards for the services they will regulate, and to take enforcement action against companies if they don’t comply.

“The era of self-regulation for big tech has come to an end. The companies are clearly responsible for services they have designed and profit from, and need to be held to account for the decisions they make.”

Our report on the #OnlineSafetyBill is out.

5 months. 50 witnesses. 200 submissions. 190 pages.

Read our full cross-party, cross-Chamber report https://t.co/xzBjer9JRK

Read a summary of our unanimous recommendations https://t.co/ebUVFtg1Wg pic.twitter.com/eYkS9EyHaC

— Joint Committee on the Draft Online Safety Bill (@OnlineSafetyCom) December 14, 2021


The committee backs the overarching premise that what’s illegal offline should be illegal online — but it’s concerned that the bill, as drafted, will fall short of delivering on that, warning in a summary of its recommendations: “A law aimed at online safety that does not require companies to act on, for example, misogynistic abuse or stirring up hatred against disabled people would not be credible. Leaving such abuse unregulated would itself be deeply damaging to freedom of speech online.”

To ensure the legislation is doing what’s claimed on the tin (aka, making platforms accountable for major safety issues), the committee wants Ofcom to be “required to issue a binding Code of Practice to assist providers in identifying, reporting on and acting on illegal content, in addition to those on terrorism and child sexual exploitation and abuse content”.

Here MPs and peers are pushing for the bill to take a more comprehensive approach to tackling illegal content in contested areas such as hate speech, arguing that regulatory guidance from a public body will “provide an additional safeguard for freedom of expression in how providers fulfil this requirement”.

In earlier iterations the legislative plan was given the government shorthand “Online Harms” — and the draft continues to target a very broad array of content for regulation, from stuff that’s already explicitly illegal (such as terrorism or child sexual abuse material) to unpleasant but (currently) legal content such as certain types of abuse or content that celebrates self-harm.

Critics have therefore warned that the bill poses huge risks to free speech and freedom of expression online as platforms will face the threat of massive fines (and even criminal liability for execs) for failing to comply with an inherently subjective concept of ‘harm’ baked into UK law.

To simplify compliance and avoid the risk of major sanctions, platforms may simply opt to purge challenging content entirely (or take other disproportionate measures), rather than risk being accused of exposing children to inappropriate or harmful content. The committee is therefore trying to find a way to ensure a public interest interpretation (i.e. of what content should be regulated) in order to shrink the risks the bill poses to democratic freedoms.

Despite the bill attracting huge controversy on the digital rights and speech front, where critics argue it will introduce a new form of censorship, there is broad, cross-party parliamentary support for regulating tech giants. So — in theory — the government can expect few problems getting the legislation through parliament.

This is hardly surprising. Internet giants like Facebook have spent years torching goodwill with lawmakers all over the world (and especially in the UK); and are widely deemed to have failed to self-regulate given a never-ending parade of content scandals — from data misuse for opaque voter targeting (Cambridge Analytica); to the hate and abuse directed at people on platforms like Twitter (UK footballers have, for example, recently been campaigning against racist abuse on social media); to suicide and self-harm content circulating on Instagram — all of which has been compounded by recent revelations from Facebook whistleblower Frances Haugen, including the disclosure of internal research suggesting Instagram can be toxic for teens.

All of which is reflected in a pithy opener the committee pens to summarize its report: “Self-regulation of online services has failed.”

“The Online Safety Bill is a key step forward for democratic societies to bring accountability and responsibility to the internet,” it goes on, adding: “Our recommendations strengthen two core principles of responsible internet governance: that online services should be held accountable for the design and operation of their systems; and that regulation should be governed by a democratic legislature and an independent regulator — not Silicon Valley.

“We want the Online Safety Bill to be easy to understand for service providers and the public alike. We want it to have clear objectives, that lead into precise duties on the providers, with robust powers for the regulator to act when the platforms fail to meet those legal and regulatory requirements.”



The committee is suggesting the creation of a series of new criminal offences in relation to what it describes as “harmful online activities” — such as “encouraging serious self-harm”; cyberflashing (aka the sending of unsolicited nudes); and communications intended to stir up hatred against those with protected characteristics — with parliamentarians endorsing recommendations by the Law Commission to modernise communications offences and hate crime laws to take account of an age of algorithmic amplification.

So the committee is pushing for the (too) subjective notion of ‘harmful’ content to be tightened to material that’s explicitly defined in law as illegal — to avoid the risk of tech companies being left to interpret overly fuzzy rules themselves at the expense of hard-won democratic freedoms. If the government picks up on that suggestion it would be a major improvement.

In another intervention, the committee has revived the thorny issue of age checks for accessing porn websites (preventing kids from accessing adult content online is something the UK has been trying, and failing, to figure out how to do for over a decade) by suggesting: “All statutory requirements on user-to-user services, for both adults and children, should also apply to Internet Society Services likely to be accessed by children, as defined by the Age Appropriate Design Code”; and arguing that the change would “ensure all pornographic websites would have to prevent children from accessing their content”.

Back in 2019 the government quietly dropped an...