In Poland, an initiative announced in late December could see social media companies fined roughly two million dollars if they remove content or block an account when that content does not break Polish law.
“In the event of removal or blockage, a complaint can be sent to the platform, which will have 24 hours to consider it. Within 48 hours of the decision, the user will be able to file a petition to the court for the return of access. The court will consider complaints within seven days of receipt,” Poland In reported.
Zbigniew Ziobro, the Polish Justice Minister, stated:
Often, the victims of tendencies for ideological censorship are also representatives of various groups operating in Poland, whose content is removed or blocked, just because they express views and refer to values that are unacceptable from the point of view of communities… with an ever-stronger influence on the functioning of social media.
We realize that it is not an easy topic; we realize that on the internet there should also be a sphere of guarantees for everybody who feels slandered, a sphere of limitation of various content which may carry with it a negative impact on the sphere of other people’s freedom. But we would like to propose such tools that will enable both one side and the other to call for the decision of a body that will be able to adjudicate whether content appearing on such and such a social media account really violates personal rights, whether it can be eliminated, or whether there is censorship.
In mid-December, reports indicated that the U.K.’s media regulator, Ofcom, would be given the power to enforce new rules regarding censorship. As TechCrunch reported on December 14:
Under the plans announced today, the government said Ofcom will be able to levy fines of up to 10% of a company’s annual global turnover (or £18 million, whichever is higher) on those that are deemed to have failed in their duty of care to protect impressionable eyeballs from being exposed to illegal material — such as child sexual abuse, terrorist material or suicide-promoting content.
“The online safety ‘duty of care’ rules are intended to cover not just social media giants like Facebook but a wide range of internet services — from dating apps and search engines to online marketplaces, video sharing platforms and instant messaging tools, as well as consumer cloud storage and even video games that allow relevant user interaction,” TechCrunch added.
The British government stated:
The new regulations will apply to any company in the world hosting user-generated content online accessible by people in the UK or enabling them to privately or publicly interact with others online. … These companies will need to assess the risk of legal content or activity on their services with “a reasonably foreseeable risk of causing significant physical or psychological harm to adults.” They will then need to make clear what type of “legal but harmful” content is acceptable on their platforms in their terms and conditions and enforce this transparently and consistently.