Author: Aayushi Gupta
Introduction
The year 2025 has already proven to be eventful in the field of Artificial Intelligence (AI). This year has been marked by developments such as the release of the Union Ministry of Electronics and Information Technology’s (MeitY) Subcommittee Report on ‘AI Governance and Guidelines Development’, the launch of the Chinese-origin ‘DeepSeek R1’ open-source AI, and the new United States administration’s focus on furthering AI innovation while limiting regulation. These parallel developments highlight AI as a technology that every nation is eager to harness. While AI presents significant opportunities for advancement and innovation, it also brings substantial risks such as algorithmic bias, privacy concerns, and cyberattacks. This makes its effective governance an imperative. On the central question of governing AI, contemporary perspectives were offered at the AI Action Summit held in Paris in February 2025. Some nations advocated minimal oversight to foster innovation, whereas others stressed the need for ethical, safe, and trustworthy AI systems. These competing stances highlight the current global tug-of-war over the regulation of AI.
Key Regulatory Approaches and Shifts in 2025
Regulatory approaches to AI are fragmented across regions, as highlighted at the Paris AI Summit. The European Union (EU) has adopted a pro-regulatory stance with its AI Act, which applies a risk-based framework. However, concerns have been raised that overly stringent regulation might impede innovation. Hence, the opposing view suggests that innovation should lead, with regulation following later—a strategy pursued by countries such as the United States. A third approach, one that seeks a balance between innovation and oversight, is being considered in countries such as India and South Korea.
In India, the regulatory framework was outlined in MeitY’s Subcommittee Report on AI Governance and Guidelines Development, released in January 2025. This report proposes a balance between fostering innovation and imposing necessary regulations—tailored to India’s stage of technological development and socioeconomic context.
To place this in a global perspective, the election of President Donald Trump marked a departure from the cautious regulatory path of the previous administration, which had introduced measures such as the Blueprint for an AI Bill of Rights and the 2023 Executive Order on Safe, Secure, and Trustworthy Development and Use of AI. The new administration revoked these measures and issued a new directive on ‘Removing Barriers to American Leadership in AI’, aimed at accelerating AI development while reducing regulatory constraints. The announcement of the $500 billion Stargate Project further signals a strong commitment to a pro-innovation agenda.
Rapid technological advances—exemplified by DeepSeek R1—have also intensified the global AI arms race, particularly between the United States and China. This development challenges the longstanding assumption of American dominance and underscores the need for nations to build their technological capabilities. Recent United States export restrictions—including the 2022 de facto ban on the sale of cutting-edge AI chip hardware to China and the 2024 measures targeting advanced computing items and semiconductors—appear to have been less effective than expected. DeepSeek’s breakthrough, reminiscent of ChatGPT’s impact in 2022, is expected to accelerate the commercial uptake of AI while calling into question previous investment assumptions by demonstrating that advanced AI models may require less infrastructure and capital than once believed.
A similar ‘all gas, no brakes’ approach was evident at the Paris Summit. Both the United States and the United Kingdom refrained from signing a declaration calling for an open, inclusive, and ethical framework for AI development. Vice President JD Vance argued that excessive oversight—akin to the EU’s approach—could stifle innovation. In the United Kingdom, rapid AI advancement was underscored by renaming the AI Safety Institute to the AI Security Institute, a move aimed at addressing immediate security concerns while fuelling economic growth through innovation. This reluctance to impose stringent regulatory measures reflects a belief that innovation and regulation are mutually exclusive. Yet, while a light-touch approach may benefit technological progress, businesses still require regulatory certainty and technical standards to thrive, as insufficient oversight risks both potential harm and eroded public trust.
In contrast to these deregulation efforts, nations in the Global South are working towards a measured regulatory approach. Following the EU’s lead, South Korea passed its AI Basic Act in December 2024, aiming to protect citizens’ rights while ensuring sound AI development. Similarly, India’s Subcommittee Report emphasizes that regulatory measures should not hinder progress. At the Paris Summit, India stressed the need for governance that aligns with its technological capabilities and socioeconomic context. Initiatives such as launching an indigenous AI model development project and expanding computing capacity further illustrate India’s balanced stance.
Beyond Western countries, the influence of major AI companies is evident. For example, Sam Altman’s visit to India and OpenAI’s commitment to support the IndiaAI initiative, including investments in the developer community, highlight an open-arms approach to AI. This positions India as an attractive destination for setting up tech infrastructure and markets. Such investments can spur economic growth, allowing countries to build their technological capabilities.
Way Forward
The AI Action Summit in Paris underscored a stark divide between global AI approaches. Much like the competing ideologies of the Cold War, leading nations will continue to push the frontiers of AI for strategic advantage, while also developing robust frameworks to minimize genuine harm.
As AI adoption continues to grow, the varying degrees of regulation pursued by different jurisdictions—ranging from stringent to minimal—will shape discourse. National approaches must be informed by an understanding of AI’s evolving nature, balancing its potential for growth with the risks it poses. Ultimately, fostering innovation while ensuring safe, ethical, and trustworthy AI may be the best way forward. Amid these complexities, India finds itself at a critical juncture in assessing its governance stance. Its recent efforts—illustrated by proactive GPU procurement and accelerated attempts to develop an indigenous foundational model—are laudable. Recognizing that regulating AI is inherently complex, India could adopt a balanced strategy by capitalizing on its data-rich digital environment, bolstering its research and development ecosystem, and leveraging international cooperation—exemplified by the TRUST initiative for US-India AI collaboration and the India-France Declaration on AI. Such a strategy may help secure a competitive position in the AI race while maintaining precautions that ensure safe and ethical deployment.