The Rise of “Ghost Law”: AI, Fake Citations, and Judicial Risk in India




1. Introduction: The Gavel and the Algorithm

On February 17, 2026, a profound paradox played out in New Delhi. At the Bharat Mandapam, representatives from 88 countries were signing the "New Delhi Declaration," pledging to build a human-centric, trusted AI for global governance. Simultaneously, just a few kilometers away, a Bench of the Supreme Court led by Chief Justice of India (CJI) Surya Kant was issuing a warning of an entirely different nature. The Court flagged an "alarming" trend of lawyers filing petitions citing cases like Mercy vs Mankind—a landmark authority that possesses the fatal flaw of being entirely non-existent.

As the legal profession grapples with a staggering backlog of over 5.4 crore pending cases, Artificial Intelligence is migrating from administrative automation to the core of the adjudicatory function. While India positions itself as a global leader in AI ethics, its courtrooms are already confronting the first systemic failures of generative AI: the rise of "ghost law." We are at a crossroads where the drive for efficiency threatens to collide with the foundational integrity of the rule of law.

2. Takeaway 1: "Mercy vs Mankind" and the Rise of Ghost Citations

The phenomenon of "hallucinated" judgments has transitioned from a technical quirk to a documented courtroom reality. During the February 17 hearing, the Bench—including Justices BV Nagarathna and Joymalya Bagchi—recalled instances where petitions cited not just non-existent cases, but also invented quotations within real judgments.

These "ghost citations" occur because generative AI models prioritize linguistic fluency over factual accuracy. Designed as predictive engines, they assemble party names and volume numbers into formats that appear authentic because they are statistically probable in a legal context, not because they exist in a verified database. This creates a critical threat to judicial legitimacy, shifting the court's focus from high-level reasoning to clerical verification.

"There was a case of Mercy vs Mankind which does not exist... [this creates] an additional burden on the part of the judges who must now independently verify basic citations before engaging with a petition's merits."  
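To make that verification burden concrete, consider how an existence check differs from a format check. The sketch below, a minimal illustration in Python, shows why a hallucinated citation is dangerous: it can be perfectly well-formed (statistically plausible) yet absent from any verified index. The regex, the VERIFIED_CITATIONS set, and the function name are hypothetical assumptions for illustration, not a description of any existing court system.

    import re

    # Hypothetical verified index; in practice this would be an official
    # repository of reported judgments, not a hard-coded set.
    VERIFIED_CITATIONS = {
        "(2013) 1 SCC 1",  # placeholder entry for illustration only
    }

    # SCC-style reporter citations follow the shape "(YYYY) VOL SCC PAGE".
    SCC_PATTERN = re.compile(r"\(\d{4}\)\s+\d+\s+SCC\s+\d+")

    def find_unverified_citations(petition_text: str) -> list[str]:
        """Return citations that are well-formed but absent from the index.

        A hallucinated citation typically passes the format check (it looks
        statistically probable) yet fails the existence check.
        """
        cited = SCC_PATTERN.findall(petition_text)
        return [c for c in cited if c not in VERIFIED_CITATIONS]

    if __name__ == "__main__":
        sample = "Reliance is placed on Subramani v. M. Natarajan (2013) 14 SCC 95."
        print(find_unverified_citations(sample))  # ['(2013) 14 SCC 95']

The existence check is mechanical for a machine; what the Bench objected to is that judges are now forced to perform it manually before ever reaching a petition's merits.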

3. Takeaway 2: When an AI Error Becomes Judicial "Misconduct"

The judiciary has signaled that the "good faith" defense for AI errors is reaching its expiration date. In August 2025, an Additional Junior Civil Judge in Vijayawada dismissed objections in a property dispute by citing four Supreme Court judgments, including the fictitious Subramani v. M. Natarajan (2013) 14 SCC 95. The judge admitted to using an AI tool for research and failing to verify the results, a mistake the Andhra Pradesh High Court initially excused as a "good faith" error.

However, on February 27, 2026, the Supreme Court took a far sterner view. Staying the proceedings, a Bench of Justices P.S. Narasimha and Alok Aradhe declared that a decision founded on fake, AI-generated judgments is not a mere "error in reasoning" but an act of professional misconduct. Voicing "considerable institutional concern," the apex court made it clear: when an algorithm compromises the integrity of the adjudicatory process, the legal consequence falls squarely on the human operator.

4. Takeaway 3: The "Digital Poorhouse" and the Bias in the Machine

The integration of AI into Indian e-governance risks creating a "digital poorhouse," with welfare systems functioning as what critics have called "Algorithms of Oppression." While intended to streamline welfare delivery, these systems often mirror and amplify historical prejudices, and their human cost is anything but abstract.

Specific risks of this automated exclusion include:

  • Biased Welfare Schemes: Algorithmic screening meant to prevent "leakage" often excludes the most vulnerable, such as Adivasi and Dalit communities, due to digital identity errors or unrepresentative datasets that favor urban populations.
  • Biased Criminal Justice: Predictive policing and Facial Recognition Technology (FRT) rely on skewed historical data that over-represents marginalized communities as "high-risk." This erodes the presumption of innocence and subjects historically targeted groups to invasive surveillance.

5. Takeaway 4: Why AI Cannot Perceive "the Tremor in a Witness’s Voice"

Despite the technical allure of AI, the judiciary maintains that justice remains a "profoundly human enterprise." It has been argued that while technology can highlight inconsistencies in a statement with machine-like precision, it lacks the moral weight required for adjudication. AI operates on probabilities and patterns; it cannot perceive the atmospheric nuances of a courtroom or the weight of human anguish.

“Artificial intelligence may assist in researching authorities, generating drafts, or highlighting inconsistencies, but it cannot perceive the tremor in a witness's voice, the anguish behind a petition, or the moral weight of a decision.” 

6. Takeaway 5: The "Shadow AI" and "Perpetual Pilot" Trap

The current state of AI adoption in Indian courts is fragile and ad hoc. According to a UNESCO survey, 44% of judicial operators worldwide have already used AI tools for work-related tasks, often through "Shadow AI": the unauthorized use of unvetted, free tools by staff and judicial officers.

This creates a system vulnerable to two major traps:

  • Perpetual Pilots: Many courts remain stuck in experimental phases without clear success metrics or exit criteria, leading to prolonged use of technology without evidence of its impact on efficiency.
  • The Fragility of Individual Champions: AI adoption is currently driven by "tech-savvy" individual judges. When these champions are transferred or retire, the lack of an institutional "technical cadre" or stable scaffolding often causes the entire digital infrastructure of that specific court to collapse.

7. The Blueprint: A 4-Step Framework for the Future

To move from ad hoc experimentation to responsible governance, the DAKSH/UNDP "AI for Justice" report proposes a reproducible assessment framework:

  1. Institutional Readiness Assessment: Evaluating the "human infrastructure"—checking if the court has the technical cadre and human resources to manage AI before adoption.
  2. Risk Assessment: Identifying potential harms at the use-case level, distinguishing between a low-risk translation tool and a high-risk bail-recommendation tool.
  3. Technical Assessment: Rigorously examining vendor security, data governance protocols, and the transparency of the "black box" logic.
  4. Ongoing Assessment: Implementing continuous monitoring of real-world impacts to track success metrics and emergent risks after deployment.
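One way to read the framework is as a gating structure in which deployment is the output of completed assessments rather than the starting point. The Python sketch below models that reading; the four stage names follow the report, but the class, field names, and deployment rule are illustrative assumptions, not the report's specification.

    from dataclasses import dataclass, field
    from enum import Enum

    class Risk(Enum):
        LOW = "low"    # e.g., a translation tool
        HIGH = "high"  # e.g., a bail-recommendation tool

    @dataclass
    class AssessmentRecord:
        """Hypothetical record tracking a court AI tool through the four stages."""
        use_case: str
        risk: Risk                          # Stage 2: use-case risk assessment
        institutional_ready: bool = False   # Stage 1: human infrastructure in place
        technical_cleared: bool = False     # Stage 3: vendor security, data governance
        monitoring_log: list[str] = field(default_factory=list)  # Stage 4: ongoing review

        def may_deploy(self) -> bool:
            # High-risk tools require both institutional readiness and technical
            # clearance; even low-risk tools require a ready institution.
            if self.risk is Risk.HIGH:
                return self.institutional_ready and self.technical_cleared
            return self.institutional_ready

Under this reading, a bail-recommendation tool (Risk.HIGH) would remain undeployable until both the readiness and technical stages are recorded as complete, while the monitoring log keeps the assessment alive after deployment.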

8. Conclusion: Toward an Intelligent—But Human—Gavel

Artificial Intelligence is an inevitable force in the legal sector, but the rise of "ghost law" and the "digital poorhouse" proves that technology cannot be left to govern itself. As we implement the aspirations of the New Delhi Declaration, the judiciary must ensure that technology serves as a guide while humans remain the governors.

As we digitize our halls of justice, the goal should be a bridge to efficiency, not a shortcut past the rule of law.



Note: AI was used for research and for collating data points for this article.

Anita
A Bangalore-based legal professional