On February 17, 2026, a profound paradox played out in New Delhi. At the Bharat Mandapam, representatives from 88 countries were signing the "New Delhi Declaration," pledging to build a human-centric, trusted AI for global governance. Simultaneously, just a few kilometers away, a Bench of the Supreme Court led by Chief Justice of India (CJI) Surya Kant was issuing a warning of an entirely different nature. The Court flagged an "alarming" trend of lawyers filing petitions citing cases like Mercy vs Mankind—a landmark authority that possesses the fatal flaw of being entirely non-existent.
As the legal profession grapples with a staggering backlog of over 5.4 crore pending cases, Artificial Intelligence is migrating from administrative automation to the core of the adjudicatory function. While India positions itself as a global leader in AI ethics, its courtrooms are already confronting the first systemic failures of agentic AI: the rise of "ghost law." We are at a crossroads where the drive for efficiency threatens to collide with the foundational integrity of the rule of law.
The phenomenon of "hallucinated" judgments has transitioned from a technical quirk to a documented courtroom reality. During the February 17 hearing, the Bench—including Justices BV Nagarathna and Joymalya Bagchi—recalled instances where petitions cited not just non-existent cases, but also invented quotations within real judgments.
These "ghost citations" occur because generative AI models prioritize linguistic fluency over factual accuracy. Designed as predictive engines, they assemble party names and volume numbers into formats that appear authentic because they are statistically probable in a legal context, not because they exist in a verified database. This creates a critical threat to judicial legitimacy, shifting the court's focus from high-level reasoning to clerical verification.
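Part of the clerical verification burden described above can be automated before a filing ever reaches a judge. The sketch below is purely illustrative: the citation index, the entries in it, and the regular expression are assumptions for demonstration, not any court's actual system, and a real deployment would query an authoritative reporter database rather than a hard-coded set.

```python
import re

# Hypothetical index of verified citations; placeholder entries only.
# A real system would query an authoritative reporter's database.
VERIFIED_CITATIONS = {
    "(2013) 4 SCC 97",
    "(2019) 1 SCC 416",
}

# Loose pattern for Supreme Court Cases (SCC) style citations,
# e.g. "(2013) 14 SCC 95".
SCC_PATTERN = re.compile(r"\(\d{4}\)\s\d+\sSCC\s\d+")

def screen_filing(text: str) -> list[str]:
    """Return citations that look authentic but are absent from the index."""
    found = SCC_PATTERN.findall(text)
    return [c for c in found if c not in VERIFIED_CITATIONS]

petition = "Reliance is placed on Subramani v. M. Natarajan (2013) 14 SCC 95."
print(screen_filing(petition))  # flags the statistically plausible but unverified citation
```

The point of the sketch is the one made in the text: a fabricated citation is formatted so plausibly that only a membership check against a verified source, not a format check, can expose it.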
“There was a case of Mercy vs Mankind which does not exist... [this creates] an additional burden on the part of the judges who must now independently verify basic citations before engaging with a petition's merits.”
The judiciary has signaled that the "good faith" defense for AI errors is reaching its expiration date. In August 2025, an additional junior civil judge in Vijayawada dismissed objections in a property dispute by citing four Supreme Court judgments—including the fictitious Subramani v. M. Natarajan (2013) 14 SCC 95. The judge admitted to using an AI tool for research and failing to verify the results, a mistake the Andhra Pradesh High Court initially excused as a "good faith" error.
However, on February 27, 2026, the Supreme Court took a far sterner view. Staying the proceedings, a Bench of Justices P.S. Narasimha and Alok Aradhe declared that a decision founded on fake AI judgments is not a mere "error in reasoning" but constitutes professional misconduct. By invoking "considerable institutional concern," the apex court has made it clear: when an algorithm compromises the integrity of the adjudicatory process, the legal consequence falls squarely on the human operator.
The integration of AI into Indian e-governance, decried by some as "Algorithms of Oppression," risks creating a "digital poorhouse." While intended to streamline welfare, these systems often mirror and amplify historical prejudices, and the human cost of this automated exclusion is not abstract.
Despite the technical allure of AI, the judiciary maintains that justice remains a "profoundly human enterprise." It has been argued that while technology can highlight inconsistencies in a statement with machine-like precision, it lacks the moral weight required for adjudication. AI operates on probabilities and patterns; it cannot perceive the atmospheric nuances of a courtroom or the weight of human anguish.
“Artificial intelligence may assist in researching authorities, generating drafts, or highlighting inconsistencies, but it cannot perceive the tremor in a witness's voice, the anguish behind a petition, or the moral weight of a decision.”
The current state of AI adoption in Indian courts is fragile and ad hoc. According to a UNESCO survey, a staggering 44% of judicial operators worldwide have already used AI tools for work-related tasks, often through "Shadow AI": the unauthorized use of unvetted, free tools by staff and judicial officers. This unsanctioned, unaudited use leaves the system exposed on multiple fronts.
To move from ad hoc experimentation to responsible governance, the DAKSH/UNDP "AI for Justice" report proposes a reproducible framework for assessing judicial AI tools.
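The report's actual criteria are not reproduced here. Purely as a hypothetical illustration of what a reproducible, pass/fail assessment might look like in practice, a tool could be audited against a fixed checklist before any official use; every criterion and name below is an assumption for demonstration, not the DAKSH/UNDP framework itself.

```python
from dataclasses import dataclass

# Hypothetical assessment criteria, not the DAKSH/UNDP framework itself.
CRITERIA = [
    "outputs are traceable to verified sources",
    "tool has been approved for official use",
    "human review is required before filing",
    "confidential data never leaves court systems",
]

@dataclass
class Assessment:
    tool: str
    results: dict[str, bool]

    def approved(self) -> bool:
        # A tool passes only if every criterion is satisfied;
        # a missing result counts as a failure.
        return all(self.results.get(c, False) for c in CRITERIA)

audit = Assessment(
    tool="generic-drafting-assistant",
    results={c: (c != "tool has been approved for official use") for c in CRITERIA},
)
print(audit.approved())  # False: one criterion fails
```

Encoding the checklist this way makes the assessment reproducible in the report's sense: two auditors running the same checks on the same tool reach the same verdict.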
Artificial Intelligence is an inevitable force in the legal sector, but the rise of "ghost law" and the "digital poorhouse" proves that technology cannot be left to govern itself. As we implement the aspirations of the New Delhi Declaration, the judiciary must ensure that technology serves as a guide while humans remain the governors.
As we digitize our halls of justice, the bridge we build to efficiency must not become a detour around justice itself.

Anita
A Bangalore-based legal professional