FILE – Britain’s Secretary of State for Digital, Culture, Media and Sport Michelle Donelan leaves after a cabinet meeting at 10 Downing Street in London, Tuesday, Oct. 18, 2022. (AP Photo/Kin Cheung, File)

LONDON (AP) — The British government has abandoned a plan to force tech firms to remove internet content that is harmful but legal, after the proposal drew strong criticism from lawmakers and civil liberties groups.

The U.K. on Tuesday defended its decision to water down the Online Safety Bill, an ambitious but controversial attempt to crack down on online racism, sexual abuse, bullying, fraud and other harmful material.

Similar efforts are underway in the European Union and the United States, but the U.K.’s was one of the most sweeping. In its original form, the bill gave regulators wide-ranging powers to sanction digital and social media companies like Google, Facebook, Twitter and TikTok.

Critics had expressed concern that a requirement for the biggest platforms to remove “legal but harmful” content could lead to censorship and undermine free speech.

The Conservative government of Prime Minister Rishi Sunak, who took office last month, has now dropped that part of the bill, saying it could “over-criminalize” online content. The government hopes the change will be enough to get the bill, which has languished in Parliament for 18 months, passed by mid-2023.

Digital Secretary Michelle Donelan said the change removed the risk that “tech firms or future governments could use the laws as a license to censor legitimate views.”

“It was a creation of a quasi-legal category between illegal and legal,” she told Sky News. “That’s not what a government should be doing. It’s confusing. It would create a different kind of set of rules online to offline in the legal sphere.”

Instead, the bill says companies must set out clear terms of service and stick to them. Companies will be free to allow adults to post and see offensive or harmful material, as long as it is not illegal. But platforms that pledge to ban racist, homophobic or other offensive content and then fail to live up to that promise can be fined up to 10% of their annual turnover.

The legislation also requires firms to help people avoid seeing content that is legal but may be harmful — such as the glorification of eating disorders, misogyny and some other forms of abuse — through warnings, content moderation or other means.

Companies also will have to show how they enforce user age limits designed to keep children from seeing harmful material.

The bill still criminalizes some online activity, including cyberflashing — sending someone unwanted explicit images — and epilepsy trolling, sending flashing images that can trigger seizures. It also makes it an offense to assist or encourage self-harm, a step that follows a campaign by the family of Molly Russell, a 14-year-old who ended her life in 2017 after viewing self-harm and suicide content online.

Her father, Ian Russell, said he was relieved the stalled bill was moving forward at last. But he said it was “very hard to understand” why protections against harmful material had been watered down.

Donelan stressed that “legal but harmful” material would be permitted only for adults, and that children would still be protected.

“The content that Molly Russell saw will not be allowed as a result of this bill,” she said.
