Moderating Facebook: Kenyan terrorists’ posts aren’t being removed


Facebook News Feeds are personalised based on users’ social networks and interests, but a recent analysis of terrorist activity in East Africa finds that country and language have a bigger influence than expected.

The Institute for Strategic Dialogue (ISD) argues that the platform’s moderation algorithms struggle to detect hate speech and violence in non-English posts.

Two terrorist groups active in Kenya, al-Shabaab and the Islamic State, exploit these content-moderation gaps to upload recruitment propaganda and graphic videos.

The ISD Report

The ISD report states: “Language moderation gaps play into the hands of governments conducting human rights abuses or spreading hate speech.”

The study cites 30 public Facebook pages run by extremist groups that sow scepticism about democracy and the government. The most active al-Shabaab and Islamic State profiles advocate election violence and an East African caliphate.
Meta, the parent company of Facebook and Instagram, has a team monitoring platform abuses during Kenya’s 2022 elections. Yet many Kenyans communicate in Arabic, Somali, and Swahili rather than English.


A video of a Somali man being shot in the back of the head, posted on al-Shabaab’s official page, was shared by five users. Any content-moderation system operating in the region should recognise the video’s al-Shabaab branding. In total, 445 users shared unofficial, official, and creative content in Arabic, Kiswahili, and Somali promoting al-Shabaab and the Islamic State.

In 2019, Meta ran an internal experiment, creating a dummy account in India, The Washington Post reported. Facebook employees were stunned by the soft-core porn, hate speech, and “staggering number of dead bodies” shown to a new user in India, according to a memo in the Facebook Papers.

By contrast, the algorithm suggested harmless content to a new US user, highlighting how differently the platform behaves from country to country.

Content-moderation gaps can also push users toward violent extremist groups. From 2016 to 2018, Myanmar’s military organised a Facebook hate-speech campaign against the largely Muslim Rohingya.

Meta’s Response

Those posts fuelled the Rohingya genocide, which led to thousands of deaths and a global refugee crisis, The Guardian reported. In 2021, Rohingya refugees in the US and UK sued Facebook for $150 billion over the hate speech.

Meta says it invests in content-moderation systems globally. “We don’t allow terrorist organisations on Facebook and remove praise, support, and representation when we find it,” a Meta spokeswoman said. “These antagonistic groups are constantly changing their techniques to evade platforms, researchers, and law enforcement. To keep our systems ahead, we invest in people, technology, and partnerships.”
Moderating the world’s largest social media site is a huge undertaking, but the report argues that identifying content-moderation gaps is a key first step, including taking stock of the languages and images Meta’s systems do not recognise. Second, it proposes improving the detection and removal of terrorist-specific content, especially in high-risk countries.

The ISD advises against relying on Meta alone. An independent content-moderation body, it argues, would help internet companies uncover gaps in moderation standards and understand their role in the regional ecosystem, beyond any one group or platform. Twitter and YouTube also host this harmful activity, the report found.

