Facebook may have already restricted half a million Hungarians, while the platforms' content and account moderation rules remain practically opaque, a recent study commissioned by the National Media and Infocommunications Authority (NMHH) has revealed.
The study analyzed the practices and policies of Facebook and YouTube to see how the major platform providers restrict users. It points out that the European Union's Digital Services Act (DSA) now requires all online platforms to report on their moderation decisions. This has revealed that hundreds of thousands of restrictive decisions are made every day, and millions every month, typically with the help of artificial intelligence (AI) and usually without review by human moderators.
These decisions are usually not explained or justified to users, leaving them in the dark about how the restriction algorithms work.
Although the affected parties can always challenge the actions of platform providers, their complaints are usually also handled by AI. Meaningful complaint handling and the restoration of restricted content or accounts are therefore becoming increasingly rare, the study's authors added.
Platform policies are also difficult for users to understand: they are not set out in a single, transparent document, and the rules for smaller sub-areas often group violations together in an illogical way, the study detailed, adding that overly broad wording is also frequent.
YouTube, for instance, lays down in its terms and conditions that it can remove not only content that is explicitly illegal or against the rules, but also anything that “may cause harm” to the platform. Service providers mostly moderate content identified as spam or accounts deemed to be fake, but they also make hundreds of thousands of restrictive decisions on posts in two categories that have a significant impact on freedom of expression: hate speech and disinformation.
The study also examined the restrictive practices of the relevant platform providers towards Hungarian users, looking separately at bans and shadow bans. In the former cases, the service provider sends a notice of moderation. The latter means that the service provider applies restrictions without the user’s knowledge, for instance by not displaying a post in other users’ news feeds.
The giant platforms do not provide country-by-country data on their moderation practices, but according to a representative questionnaire survey by the National University of Public Service, almost 15% of Hungarians surveyed in 2024 (roughly half a million people) have experienced content deletion or restriction by their service provider, or even a ban on their account, a five percent increase compared to previous years.
In addition, half of the affected users have already been subject to a restriction more than once, and a quarter have had their accounts suspended by their service provider. A third of them have explicitly asked for a restriction to be lifted, but in 2024 only a tenth of restricted content was restored by the platform.
In a survey published in 2020, the proportion was much higher, with service providers restoring around a fifth of suspended content; the decline may be mainly due to platform providers increasingly relying on artificial intelligence to handle complaints, the authors wrote.
Via MTI, Featured image: Pixabay