Manufactured Narratives: Astroturfing, Gendered Disinformation, and Their Impact on Technology-Facilitated Violence

Astroturfing and gendered disinformation distort public discourse and fuel technology-facilitated violence against women. This article examines how these tactics operate on digital platforms and calls for stronger platform accountability, transparent content moderation, and improved digital literacy.

Author: Aparna Bhatnagar

Introduction

On September 22, 2024, the United Nations General Assembly adopted the Global Digital Compact under the ‘Pact for the Future’ initiative. The Compact recognizes technology-facilitated violations, abuse, and discrimination against women as part of broader systemic gender-based inequality. These acts, designed to discredit and incite further harm, underscore the urgent need to address the digital dimensions of gender discrimination. One critical concept within this context is ‘Astroturfing,’ which originally referred to an “organized activity intended to create a false impression of a widespread, spontaneously arising movement.”

This dynamic is particularly evident in the experiences of women in public life, who increasingly face the intersection of astroturfing, misinformation, and misogyny. In December 2024, Hollywood actress Blake Lively filed a lawsuit against actor and director Justin Baldoni, alleging that his PR firm had orchestrated a smear campaign against her in retaliation for her sexual harassment claims. The case offers a striking example of astroturfing—where coordinated efforts manufacture public sentiment under the guise of organic discourse. While later developments have added complexity to the narrative, the initial media frenzy and the way the story was framed underscore how easily perception can be shaped, reinforcing the broader conversation around reputation management and digital influence.

By simulating grassroots movements, disseminating sexist disinformation, and exploiting technological systems, these strategies perpetuate patriarchal control in digital and physical spaces, and contribute to Technology-Facilitated Gender-Based Violence (TFGBV). While astroturfing often operates in a legal gray area, its ethical implications demand greater accountability, platform transparency, and the integration of feminist perspectives into digital governance. This article examines these intersecting issues and their implications for digital rights, gender equity, and the ethical regulation of online spaces.

The Intersection of Technology, Gender, and Power

[Note to the reader: Disinformation, misinformation, and astroturfing are distinct concepts with nuanced differences, each carrying significant regulatory implications. While their distinctions are elaborated upon in later sections, the terms may be used interchangeably in the preliminary discussion for simplicity.]

Astroturfing, as the term suggests, refers to fake grassroots activism. It is not limited to the online space: it also occurs offline, where individuals are incentivized to disrupt processions, stage protests, sign petitions, or push propaganda and false narratives. In 2018, Meta (then Facebook) introduced the term “coordinated inauthentic behavior” (CIB) to describe organized efforts by groups to mislead others about their identity or objectives. By fabricating the illusion of widespread public sentiment where little to none actually exists, astroturfing manipulates perceptions and distorts public discourse.

This distortion is particularly harmful when it intersects with existing social hierarchies, as it amplifies power imbalances and reinforces systemic discrimination. The interplay between technology, gender, and power reveals a deeply entrenched asymmetry in access, representation, and control within digital spaces. A study by the Economist Intelligence Unit (EIU) found that online violence, particularly in the form of misinformation and defamation, is alarmingly prevalent, with 67% of cases involving the spread of rumors and slander to damage a woman’s reputation. Astroturfing weaponizes coordinated disinformation and harassment to undermine women’s presence and agency in digital spaces. Experts at the Oxford Internet Institute highlight that astroturfing exploits cognitive biases, making it increasingly difficult to differentiate genuine public reactions from orchestrated campaigns—particularly in politics, entertainment, and corporate spheres.

Beyond individual actors, the power dynamics of digital ecosystems are shaped by the algorithmic systems that govern content dissemination and visibility. In Algorithms of Oppression, Safiya Umoja Noble argues that these systems are far from neutral; they encode biases that disproportionately disadvantage women and marginalized communities by reinforcing harmful stereotypes and amplifying discriminatory narratives. Women, especially those in positions of influence or advocacy, face compounded risks as their digital identities are targeted to delegitimize their credibility and agency. By employing tactics like character assassination, doxxing, and the dissemination of sexist tropes, these campaigns exacerbate the already hostile environment many women face online.

Regulatory Framework for Addressing Gendered Disinformation

Effective regulation of gendered disinformation necessitates the establishment of a clear and precise definition. Although academic literature lacks a universally accepted definition, Edda Humprecht’s conceptualization of disinformation as “the intentional or knowingly false dissemination of statements for strategic purposes or social influence” serves as a valuable starting point. Misinformation, by contrast, refers to false information spread unintentionally, often resulting from a genuine mistake by individuals unaware of its inaccuracy. Astroturfing differs from both of these phenomena because it entails selective use of facts to construct a misleading narrative. A recent example is the reaction to the death of an individual whose personal struggles were evident, yet whose words were selectively amplified to serve a broader narrative. While their experience deserved empathy, it was instead sensationalized, with certain groups using it to push misogynistic rhetoric. Statements were cherry-picked to fit this agenda, fueling reactions like, “if you have no empathy for victimized men, then you deserve toxic and revengeful men.” One report noted that it culminated in a “peculiar kind of brotherhood that awakens only when male pain can be transformed into ammunition against women’s rights”. This underscores how sensationalized and strategically framed narratives can escalate societal tensions and produce real-world consequences.

Legal and Policy Measures

Indian law addresses certain forms of disinformation, primarily focusing on content that incites violence, disrupts public order, or harms religious sentiments and individual reputations. Section 299 of the Bharatiya Nyaya Sanhita, 2023 (Section 295A of the erstwhile Indian Penal Code) penalizes deliberate and malicious acts intended to outrage religious feelings through false statements. Sections 356(1) and 356(2) of the BNS (Sections 499 and 500 of the IPC respectively) deal with defamation, criminalizing the spread of false information that damages an individual’s reputation. Section 353 of the BNS (Section 505 of the IPC) prohibits the publication or circulation of false statements or rumors that could incite violence or create public disorder. Additionally, Section 192 of the BNS (corresponding to Section 153 of the IPC), along with Section 54 of the Disaster Management Act, specifically targets disinformation that leads to unrest. While these provisions serve as important safeguards, they do not comprehensively regulate fake news or disinformation that manipulates public perception without directly provoking violence. This legal gap makes it particularly challenging to address astroturfing, which distorts public discourse without necessarily crossing the legal threshold of incitement.

Indian jurisprudence has recognized the right to accurate information, the right to be exposed to diverse perspectives, and the right to uninterrupted internet access as essential to the fundamental right to informed decision-making under the framework of free speech. To balance regulation with the protection of free speech, some scholars advocate for the application of the “actual malice” standard. While the Court in the landmark case of New York Times Co. v. Sullivan upheld this standard to protect the free and robust exchange of ideas essential to democracy, it may not be an effective tool for addressing the nuanced and insidious nature of astroturfing. The ‘actual malice’ standard requires proving that a statement was made with knowledge of its falsity or with reckless disregard for the truth. However, astroturfing is not about spreading outright falsehoods—it involves coordinated efforts to create the illusion of widespread, organic support or opposition to an issue, even when such sentiment is artificially manufactured. This distinction makes proving actual malice particularly difficult, as much of astroturfing operates in a legal gray area where individual statements may be technically true but are deliberately framed to mislead and misrepresent. Defamation and libel laws offer limited recourse since they primarily address harm to an individual’s reputation rather than the broader societal damage caused by coordinated propaganda. Gendered astroturfing, for example, often does not defame an individual in the legal sense but instead works to undermine entire groups by reinforcing misogynistic narratives and discrediting women’s voices in public discourse. Ultimately, the difficulty in pinpointing liability—whether it rests with the orchestrators, platforms, or those amplifying the narrative—underscores the inadequacy of existing legal frameworks in addressing astroturfing.

Given the difficulty of regulating astroturfing through existing legal frameworks, a broader approach that includes policy interventions and platform accountability becomes essential. While the introduction of specialized legislation or a dedicated legal provision may seem like a solution, it presents its own challenges, as defining and proving astroturfing in legal terms remains complex and could risk overreach or misuse.

Platform Accountability: Platforms must adopt clear and enforceable policies to identify and remove gendered disinformation effectively. Transparent content moderation systems should be implemented, with regular publication of transparency reports detailing the actions taken against gendered disinformation, including the number of complaints addressed and content removed. Strengthened intermediary liability laws could also hold platforms accountable for hosting and failing to mitigate gendered disinformation, fostering proactive measures to curb its proliferation while safeguarding freedom of expression. For instance, WhatsApp has introduced a feature that flags messages forwarded multiple times, alerting users to potential virality. Similar mechanisms could be implemented on platforms like Twitter/X. Additionally, platforms could offer built-in tools for users to reverse-search images or videos to check if content has been altered or taken out of context. The Loughborough University-based Everyday Misinformation Project identifies five key principles for addressing misinformation: warnings should extend beyond simple labels; design should introduce friction that encourages reflection; media campaigns should raise awareness; interventions should account for the context in which content is used; and technological measures should be integrated with social initiatives to empower users. These principles can serve as a foundation for developing platform policies that promote algorithmic accountability.
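To make the idea of friction concrete, the sketch below models, in simplified Python, how a platform might label heavily forwarded content and require an explicit confirmation before it is re-shared. This is an illustrative assumption only: the threshold, labels, and function names are hypothetical and do not describe WhatsApp's or any other platform's actual implementation.

from dataclasses import dataclass

# Hypothetical cut-off for treating a message as "forwarded many times".
HIGHLY_FORWARDED_THRESHOLD = 5

@dataclass
class Message:
    text: str
    forward_count: int = 0  # how many times this message has already been forwarded

def forward_label(msg: Message) -> str | None:
    """Return a user-facing virality label for the message, if any."""
    if msg.forward_count >= HIGHLY_FORWARDED_THRESHOLD:
        return "Forwarded many times"
    if msg.forward_count > 0:
        return "Forwarded"
    return None

def forward_message(msg: Message, user_confirms: bool) -> Message | None:
    """Introduce friction: heavily forwarded content needs explicit confirmation."""
    if forward_label(msg) == "Forwarded many times" and not user_confirms:
        return None  # the client would prompt the user to pause before sharing
    return Message(text=msg.text, forward_count=msg.forward_count + 1)

The point of such a design is not to block sharing outright but to insert a moment of reflection precisely where virality is most likely.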

Mechanisms for Reporting and Redressal: Accessible and user-friendly reporting mechanisms are vital for addressing gendered disinformation. Platforms must develop intuitive tools that allow users to report instances of disinformation easily, ensuring these tools are multilingual and inclusive. Regulations should require platforms to resolve complaints within strict timelines to provide timely relief to affected individuals. For instance, in August 2021, Twitter (now X) introduced a feature enabling users to report posts flagged as “misleading,” particularly concerning political content. This mechanism complemented existing categories such as “Hate,” “Abuse & Harassment,” and “Violent Speech.” However, in 2023, X discontinued the political misinformation reporting feature, drawing significant criticism. The platform, however, retained its “Community Notes” feature, which allows users to collaboratively add context to posts, aiming to surface credible information and enhance content transparency. In addition to this, comprehensive victim support services, such as helplines, counselling, and legal aid, should be established to assist those targeted by gendered disinformation, fostering a safer digital environment.
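As a rough illustration of what a regulated reporting workflow could track, the hypothetical sketch below models a complaint record using the reporting categories mentioned above, a language field to support multilingual reporting, and a check against a resolution deadline. The fifteen-day window is a placeholder assumption, not a figure drawn from any specific regulation or platform policy.

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum

class ReportCategory(Enum):
    MISLEADING = "Misleading"
    HATE = "Hate"
    ABUSE_AND_HARASSMENT = "Abuse & Harassment"
    VIOLENT_SPEECH = "Violent Speech"

# Placeholder resolution window; an actual timeline would be set by regulation or policy.
RESOLUTION_WINDOW = timedelta(days=15)

@dataclass
class Report:
    post_id: str
    category: ReportCategory
    language: str  # reporting tools should be multilingual and inclusive
    filed_at: datetime = field(default_factory=datetime.utcnow)
    resolved_at: datetime | None = None

    def is_overdue(self, now: datetime) -> bool:
        """A complaint breaches the resolution window if it is still open past the deadline."""
        return self.resolved_at is None and now > self.filed_at + RESOLUTION_WINDOW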

Digital Literacy: Promoting digital literacy and public awareness is a critical component of any regulatory framework. Collaborative efforts with educational institutions and civil society organizations can enhance critical thinking, media literacy, and awareness of gendered disinformation. Public awareness campaigns should aim to educate users about identifying and reporting gendered disinformation, empowering them to navigate the digital space more safely. This dual approach of education and awareness ensures a well-informed and resilient user base.

Conclusion

Astroturfing thrives in a space where misinformation is not always outright false but carefully curated to distort reality. By blurring the line between organic public sentiment and manufactured influence, it manipulates discourse while evading legal scrutiny. At the heart of this issue lies the economic model of digital platforms, where engagement—regardless of accuracy—translates into profit. Sensationalism, outrage, and controversy drive clicks, pushing harmful narratives into mainstream discourse while reinforcing biases and deepening societal divisions. Algorithmic amplification further entrenches these distortions, creating echo chambers that insulate users from diverse perspectives and critical discourse. When gendered disinformation is monetized and weaponized, it not only misrepresents reality but also silences voices, marginalizes women, and fuels technology-facilitated abuse.

Addressing this requires a shift in both policy and platform accountability. Given the significant role that social media platforms and search engines play in shaping public opinion, regulating disinformation is essential—not just as a means of protecting truth, but also to preserve an individual’s mental and decisional autonomy. Strengthening transparency in content moderation, refining platform policies to detect and curb gendered astroturfing, and fostering digital literacy are crucial steps in mitigating its impact. Furthermore, holding platforms accountable—whether through intermediary liability, oversight bodies, or consumer-driven advocacy—can create incentives to prioritize truth over virality.
