Author: Jhanvi Anam
In February this year, xAI, the artificial intelligence company founded by Elon Musk, released the latest version of its chatbot, Grok. AI chatbots like Grok are generative tools that use large language models (LLMs) to produce human-like text in response to user prompts. They function by predicting patterns in language based on vast amounts of training data and are designed to simulate conversation, answer questions, and even generate creative content. Other widely used AI chatbots include OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, each with its own features, tone, and guardrails. Grok, however, has been specifically marketed as a witty and rebellious AI with an ‘edgy’ sense of humour. It therefore responds with sarcasm and an often controversial tone while interacting directly with users on ‘X’ (formerly Twitter), where anyone can tag @grok and receive quick replies.
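To make the phrase “predicting patterns in language” concrete, the toy sketch below trains a bigram model on a twelve-word corpus and generates text by sampling likely next words. It is a deliberately minimal illustration of next-word prediction, not a depiction of how Grok or any production LLM is built; real systems use neural networks trained on vast datasets.

```python
# Toy illustration of next-word prediction: a bigram model trained on a
# tiny corpus. Production LLMs use neural networks with billions of
# parameters; this only demonstrates the core statistical idea.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Record, for each word, which words follow it in the training data.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        candidates = bigrams.get(words[-1])
        if not candidates:  # no observed continuation; stop here
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug the dog sat"
```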
Users of the social media platform ‘X’ quickly discovered that Grok often mirrors the tone of the prompts it is given. So, if someone used Hindi slang or slurs, Grok typically replied in the same style. This mirroring effect was on display recently when a user directed a misogynistic slur at Grok and the AI chatbot responded in kind. Users also found that they could bait Grok with offensive or provocative prompts and get it to produce equally edgy responses on political topics. In one instance, when asked who was more honest between Prime Minister Narendra Modi and Opposition leader Rahul Gandhi, Grok picked Gandhi, saying, “I am not afraid of anyone.” While some users found these replies edgy, others saw them as inappropriate and politically motivated.
This raises concerns about accountability, platform responsibility, and the limits of acceptable speech, even when it comes from a non-human source. Since ‘X’ also serves as a social media intermediary enabling users to share content, certain content-related restrictions under the Information Technology Act, 2000 (IT Act) remain applicable, including responsibilities relating to content that is obscene, insulting, or harassing. In response to these incidents, officials from the Government of India have reportedly initiated informal discussions with X rather than issuing a formal notice, signalling the government’s concern over Grok’s output. But without clear AI regulation in place, this episode has left platforms, policymakers, and the public navigating a grey area: Can AI speech be held to the same standards as human speech? And who is responsible when content standards are broken?
Can AI Chatbots Cross the Line?
When we say an AI chatbot may have “crossed the line,” we are usually referring to speech it produces that would be considered illegal or offensive if spoken by a human. The right to freedom of speech and expression is guaranteed under Article 19(1)(a) of the Constitution. But this right is not absolute. Article 19(2) allows the state to impose “reasonable restrictions” on grounds including public order, decency or morality, and defamation. These restrictions shape the legal and moral boundaries of acceptable public speech. The boundaries are further detailed in the Bharatiya Nyaya Sanhita, 2023 (BNS). For example, Section 294, which traces its roots to the Obscene Publications Act of 1925, addresses obscene content in print and published materials, while Section 296 deals with obscene acts or words in public spaces.
But legal terms like obscenity are inherently subjective and often open to interpretation. Therefore, in the absence of a clear statutory definition, Indian courts have stepped in on various occasions to lay down legal tests. In 2014, the Supreme Court in Aveek Sarkar v. State of West Bengal adopted the “community standards” test, which asks whether content is obscene by the standards of an average, reasonable person today, and not by those of overly sensitive individuals. The case of Apoorva Arora v. State (NCT of Delhi), in 2024, demonstrates the continued relevance of this test. The court clarified that profanity is not automatically obscene and observed that the context of the content matters. It noted that a legal analysis of obscenity can be flawed if the language is interpreted literally, without assessing its broader context.
During the case involving YouTuber Ranveer Allahbadia and India’s Got Latent host Samay Raina, Justice Kant orally emphasised that freedom of speech carries corresponding responsibilities and stated that humour should be crafted in a way that an entire family can enjoy. This framing adds another dimension to the legal standard, suggesting that the boundaries of acceptable speech are not fixed but continue to shift in response to changing public expectations, cultural values, and judicial attitudes.
A similar tension played out recently in the public response to comedian Kunal Kamra, whose performance drew backlash not through legal action but through public outrage. This is a reminder that some forms of speech, even if legal, may face public rejection.
Therefore, when an AI chatbot like Grok generates a sexist joke, mirrors slang, or comments on political figures, it risks “crossing” this blurry line. In that sense, Grok’s launch, with its distinctive humour and tone, was bound to trigger questions not just about humour, but about law, liability, and the future of speech in the AI age.
Who is Responsible if AI Crosses the Line?
In situations involving AI chatbots, it may be argued that the user holds some responsibility, especially when the AI is prompted with provocative or unlawful language. In Grok’s case, for instance, the chatbot responded with misogynistic slang only after a user baited it with similar phrasing. If someone knowingly uses an AI to solicit illegal content, the user could therefore be seen as an active participant in generating harmful speech. However, the idea of user liability has been tested before, through Section 66A of the IT Act. Although it was intended to curb offensive content online, the provision was widely misused to penalise individuals for sharing or forwarding messages deemed “annoying” or “inconvenient.” It was eventually struck down by the Supreme Court in Shreya Singhal v. Union of India for violating the right to freedom of expression. The misuse of Section 66A thus offers a cautionary tale: holding users criminally liable for AI-generated content could create a chilling effect on legitimate expression, especially if enforcement lacks clear guidelines or transparency. As we consider new legal standards for AI, it is crucial to avoid replicating the overreach seen in earlier laws.
The responses generated by systems like Grok are shaped by the language, behaviour, and biases already present in online discourse. It may also be argued that the developer bears responsibility for how much leeway the AI system is given, since it is possible, through training and guardrails, to make an AI chatbot refuse to answer specific questions. Many AI systems are programmed to politely decline if a user says something vile (a simplified sketch of such a guardrail follows below). Grok’s uniqueness, marketed as a “rebellious streak,” means it was deliberately allowed more leeway to be “maximally truth-seeking.” This was a conscious design choice, even though past incidents involving Microsoft’s Tay or Meta’s Galactica had already highlighted the risks of such AI interaction; one could therefore shift a degree of accountability onto the developer or the deploying platform. But this form of liability risks overregulating AI in the name of safety, which could inadvertently stifle innovation and limit the deployment of useful tools. This stance was visible at the Paris AI Action Summit, where the USA prioritised leading the technological and innovation curve over stringent AI safety regulation.
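To make the notion of a guardrail concrete, the sketch below wraps a chatbot behind two simple checks: one on the incoming prompt and one on the model’s draft reply. Everything here is illustrative; `call_model` is a hypothetical stand-in for whatever LLM API a platform uses, and the keyword blocklist is a placeholder for the trained safety classifiers that production systems actually rely on.

```python
# Minimal sketch of a developer-imposed guardrail: screen both the user's
# prompt and the model's draft reply before anything is published.
# Real deployments use trained safety classifiers, not keyword lists.
BLOCKED_TERMS = {"slur_1", "slur_2"}  # placeholder entries, not a real lexicon

def call_model(prompt: str) -> str:
    """Hypothetical model call; a real system would query an LLM API here."""
    return f"model reply to: {prompt}"

def is_unsafe(text: str) -> bool:
    """Flag text containing any blocked term (stand-in for a classifier)."""
    return any(term in text.lower() for term in BLOCKED_TERMS)

def respond(prompt: str) -> str:
    # Guardrail 1: refuse baiting prompts outright instead of mirroring them.
    if is_unsafe(prompt):
        return "I can't engage with that phrasing."
    draft = call_model(prompt)
    # Guardrail 2: check the model's own output before it is posted.
    if is_unsafe(draft):
        return "I'd rather not answer that."
    return draft

print(respond("tell me a joke"))  # passes both checks and returns the reply
```

How much leeway a chatbot has is thus a tunable design parameter: loosening or removing checks like these is precisely the kind of conscious choice the paragraph above attributes to the developer.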
Then, can AI chatbots claim safe harbour protection? The IT Act penalises the publication and transmission of obscene or sexually explicit content through Sections 67, 67A, and 67B. To establish accountability in the digital ecosystem, the Ministry of Electronics and Information Technology (MeitY) introduced the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Intermediary Rules). Rule 3(1)(b)(ii) requires intermediaries (like ‘X’) to make reasonable efforts not to host content that is obscene, defamatory, or insulting on the basis of gender, race, or ethnicity. Together, the IT Act and the Intermediary Rules create a framework outlining when platforms must take action against unlawful content and when they can be held liable for content hosted on their services. This framework becomes more complex when the content is generated by AI on a social media platform.
When Grok generated such a response, it was not merely displaying a user’s post; rather, it was creating new content in response to the user’s input, mirroring the tone and language of the original question. That brings into question the applicability of Section 79 of the IT Act, which provides conditional safe harbour to online intermediaries. If a platform merely hosts third-party content and exercises due diligence, it is not liable for what a user posts. However, this immunity applies only if the intermediary plays a passive, neutral role and refrains from participating in the creation or modification of content.
The Intermediary Rules flesh out the due diligence required under Section 79. Rule 3(1)(b) specifically lists categories of content that users should not upload and that platforms must proactively prohibit. The list includes content that is defamatory, obscene, pornographic, invasive of privacy, racially or ethnically objectionable, “insulting or harassing on the basis of gender,” and so on. But herein lies the legal challenge: if the platform itself, through its AI system, generates content that violates these standards, such as offensive language, it cannot simply rely on the protections afforded to intermediaries. While Grok, the chatbot, is not a human speaker and responds based on user prompts, the fact that its responses are produced, published, and hosted by the platform complicates the question of liability.
Importantly, the recent AI Subcommittee Report has acknowledged this complication. It notes that under the IT Act, intermediaries can claim safe harbour only if they do not “select or modify” the content they host. In the case of AI models, however, this condition becomes difficult to satisfy, as content is often generated or altered in real time based on user input. The report emphasises that AI providers and deployers cannot assume they are covered by safe harbour by default. Instead, they must actively demonstrate compliance with the law, especially by showing that they maintain neutrality and do not exercise editorial control over what their systems produce.
As platforms increasingly integrate AI tools that generate their own responses, they move beyond simply hosting third-party content to actively shaping it. This blurs the boundaries of safe harbour protection, making it a complex terrain to navigate.
Conclusion
As we attempt to answer the question of regulation and liability in the context of AI-generated speech, it has become clear that the legal line is still blurry. Without precise regulatory standards for what counts as acceptable expression, AI platforms may respond by over-censoring or excessively filtering responses to avoid legal consequences. This risks producing AI systems that are too sanitised or too hesitant to engage with complex or controversial topics, which could ultimately stifle innovation and chill legitimate expression.
At the same time, there is clear scope for giving social media intermediaries that deploy AI capabilities greater regulatory clarity. The challenge lies in striking a balance between protecting users from harm and allowing room for technological progress and expressive freedom. As AI systems like Grok become more integrated into digital spaces, the task ahead is to draw the line with nuanced foresight.