Meta’s rollback on fact-checking and its implications for feminist digital justice advocacy in Africa
By: Tricia Gloria Nabaye and Bobina Zulfa
Meta’s Free Speech Pivot: A Double-Edged Sword
Meta, Facebook’s parent company, announced that it will eliminate traditional fact-checking and relax its content policies. The move is part of a pivot toward a new emphasis on free speech; however, it will also allow more political content into users’ feeds. Mark Zuckerberg said Meta had reached a point where “there were too many mistakes and too much censorship and the fact-checkers have been too politically biased and have destroyed more trust than they have created.” Meta says it will now rely on community notes and user input, similar to the system used by X, formerly Twitter.
However, fact-checking is not censorship, especially in an ever-evolving digital information ecosystem with ever more people online. Without it, misinformation and disinformation will spread even more rampantly, stripped of the safety nets that content moderation offers. Africa in particular continues to grapple with deeply rooted misinformation tied to local politics, traditional beliefs, or cultural practices, falsehoods that demand context-aware, human-led interventions rather than automated systems alone. In other words, some truths require significant human effort, investigation, or expertise to confirm, which may not be feasible at the scale of platforms like Meta or across entire populations, making content moderators and fact-checkers all the more necessary. Demographics that are less present on such platforms are also at greater risk of being unable to distinguish their truths from falsities, because they are outnumbered. This is most likely to play out on contentious issues across the conservative-to-liberal spectrum.
Overall, Meta’s position risks creating a digital environment that is more hostile to African women and other disenfranchised groups, undermining years of progress in feminist advocacy for safer, more equitable online spaces. Community notes will not sufficiently curb the spread of fake news and mis/disinformation, and it will become harder to keep platform users, especially women and girls, safe and protected from violence.
While Meta’s announcement centres on the United States of America, the impact of this decision on other regions, especially in the Global Majority world, may eventually prove far more far-reaching, given their dependence on Meta’s platforms. In Africa, for instance, Facebook is estimated to account for 20% of all internet traffic from the continent, an indicator of the continent’s reliance on its information ecosystem and, consequently, of what is at stake in the rollback of fact-checking in content moderation.
The Feminist Perspective: How Meta’s Decision Endangers African Women
Essentially, Meta’s rollback on fact-checking is not just a technical decision; it is a feminist issue for Africa, for several reasons. First is the power asymmetry between billionaire platform controllers such as Mark Zuckerberg and their users from the region, who are at the mercy of how critical matters such as ‘truth’ are defined, with little room for reaching a compromise. With platforms acting as self-designated arbiters of truth, an information ecosystem already muddled with unrestrained action, untruths and manipulation is likely only to worsen.
This plays out against a background of minimal transparency, as Meta barely shares data showing how content moderation, misinformation, and disinformation operate in the African context. Many of the findings to this end have come from private investigations by journalists, researchers, and other curious parties, who must also bypass increasingly tight thresholds of access to information from the platforms. Here, the question emerges: what does a minority power-holder shaping “the truth” for a majority of people likely lead to, if not tyranny and undemocratic rule?
Added to this absolute brokering of what constitutes facts is the empowerment of harmful narratives, especially extremist, anti-women views ranging from biological essentialism and rape myths to the objectification of women and opposition to reproductive health. The perpetuation of these and other marginalising viewpoints under the guise of free speech undoes efforts to create a free and equitable world for all people. A clear example is the ever-growing male-supremacist ‘manosphere’, where misogynist ideas thrive, as well as the pushback against Diversity, Equity and Inclusion programmes through sexist and racist rhetoric by powerful figures, including X’s Elon Musk, who platforms these narratives.
In addition, the persistent digital divide in Africa, compounded by high levels of digital illiteracy, creates a dangerous context for Meta’s rollback of fact-checkers and content moderators. Women, particularly in last-mile communities, are disproportionately affected by digital illiteracy and a lack of access to critical digital skills. Without fact-checkers and content moderators, they are more vulnerable to consuming and believing misinformation, which often perpetuates harmful gender stereotypes and norms. Platforms like Meta, with significant influence on information ecosystems, have a responsibility to bridge gaps in digital literacy rather than exacerbate these vulnerabilities.
Collectively, these challenges reinforce systemic inequalities that further endanger African women and other vulnerable groups through gendered disinformation, targeted online attacks and hate speech, ultimately undermining the promise of the internet as a space for empowerment and equality. As Africa’s digital spaces continue to evolve, with platforms such as Facebook, Instagram, and Threads becoming ever more pivotal for connection, advocacy and expression, a great deal is at stake for African women leaders, activists and journalists given their role in this ecosystem. A widened pool of propaganda, hate speech and fake news, especially as enabled by advancements in generative AI, cuts into their work by undermining their credibility and directing violence towards them. This ultimately drives some public figures to resign from public life, hindering the causes they advocate for and leaving their constituents without effective representation.
Legal and Labor Advocacy: African Content Moderators vs. Big Tech
It should also be noted that Meta faces a legal complaint in Kenya from content moderators, who cite a lack of psychological support and inhumane working conditions, among other labour grievances. This is not the first time that content moderators in Africa have sought to hold Big Tech to account. The formation of the African Content Moderators Union by 150 African artificial intelligence (AI) workers, which seeks to secure better working conditions for content moderators, micro-workers and data labellers, was an act of historic defiance against Big Tech, with one of its organisers, Kauna Malgwi, named one of TIME’s 100 most influential people in AI.
Legal actions, like the one in Kenya, set critical precedents for Big Tech’s responsibility and accountability to meet international labour standards. Such advocacy amplifies African perspectives in global discussions about ethical AI development, deployment and platform governance, shifting the narrative and norm from exploitation to empowerment. Holding Big Tech accountable is pivotal to ensuring a safer, more equitable, and user-focused digital experience for Africans. The intersection of digital rights advocacy and user safety reveals systemic gaps in how these platforms operate in Africa, disproportionately impacting both workers and users.
In addition, African users of these platforms continue to grapple with language and lexicon gaps that limit how adequately Meta’s automated systems detect racial slurs, gender profiling and culturally derogatory language, which may otherwise go unchecked. Meta’s lax moderation could amplify such attacks, making platforms less safe for women. Without active content regulation, the burden of reporting and managing abuse shifts to victims, exacerbating emotional distress and silencing voices. Leaving harmful content in African languages largely unchecked will perpetuate systemic neglect of local contexts and deepen harms and inequalities.
The Way Forward: Charting Africa’s Digital Justice Future
Strengthening Legal Frameworks: Holding Big Tech Accountable
African governments must establish and enforce robust laws compelling Big Tech to meet globally accepted standards of transparency and accountability. These regulations should require clear, publicly accessible reporting mechanisms for platform content moderation policies, algorithmic decision-making processes, and data usage practices. Non-compliance must carry penalties to ensure genuine corporate responsibility. Furthermore, policies should mandate platforms to implement accessible systems for reporting online harm. These mechanisms must enable users to report abuse, misinformation, or harassment and receive timely responses. Importantly, they should be localised to provide support in regional languages and offer culturally sensitive responses.
In conjunction with regulatory oversight, civil society organisations should collaborate with governments to monitor Big Tech’s adherence to these frameworks and advocate for stronger protections where gaps exist. This includes lobbying for inclusive digital policies and leading public campaigns to underscore the importance of platform accountability. Transparency is critical — platforms must disclose the workings of their algorithms to allow public scrutiny and identify biases that may harm users, particularly women and marginalised groups. Governments and civil society should champion independent audits of systems and algorithms to ensure fairness and equity, paving the way for safer digital platforms.
Amplifying African Voices: Inclusion in Global AI and Digital Governance
Global advocacy must also prioritise the inclusion of African governments, civil society, and feminist voices in global AI and digital governance policy-making and forums. African stakeholders should have a central role in decision-making to ensure international policies reflect regional needs and realities. These realities include linguistic diversity, cultural contexts, and infrastructure challenges. Additionally, governance frameworks must address algorithmic biases and surveillance risks that disproportionately affect African communities.
Feminist Advocacy: Building Coalitions for Digital Justice
Feminist organising for digital justice remains crucial. Building Feminist Digital Coalitions can amplify advocacy for safer digital spaces across regions. These coalitions should press for policies that tackle gendered disinformation, online harassment, and algorithms that disproportionately harm women. Intersectional advocacy must recognise that factors such as race, class, sexuality, and disability intensify gendered harms online. Addressing these vulnerabilities requires approaches that consider the diverse needs of women and gender-diverse individuals.
Feminist networks should also advocate for platform design improvements that prioritise inclusivity and safety. This includes advancing content moderation practices that protect women and creating tools to prevent harassment. Capacity-building initiatives and educational programmes are equally vital for the communities most likely to be affected by current platform design. These initiatives should empower women and marginalised groups by equipping them with knowledge of their digital rights, enabling them to navigate and report online harms. Training in identifying algorithmic bias and misinformation can further foster critical engagement with digital content.
Humanising Digital Harms: Elevating Lived Experiences for Change
Through storytelling and advocacy, the lived experiences of women impacted by gendered digital harms can be elevated. Sharing these narratives humanises the impact of platform inaction and generates pressure for meaningful change.
Finally, advocating for decolonised knowledge is critical to Africa’s digital experience. Governance frameworks must respect and integrate Africa’s intellectual contributions to the framing of a digital economy that meaningfully centres African people and their engagement with that space. This includes promoting African languages and indigenous knowledge in AI training datasets while ensuring fair compensation and recognition for these contributions. Partnerships between African governments, research institutions, and Big Tech must focus on creating AI solutions tailored to Africa’s unique needs. These collaborations should uphold ethical data-sharing principles, with safeguards against exploitation.
Tricia Gloria Nabaye is the Advocacy and Engagement Coordinator at Pollicy, and Bobina Zulfa is a Data and Digital Rights Researcher at Pollicy.