In the United States, errors linked to the use of AI in courts are being detected more and more often.
Previously, such scandals mainly involved lawyers who cited non-existent cases. Now judges are also turning to generative models, which has already led to the publication of decisions containing AI "hallucinations."
For example, a federal judge in New Jersey was forced to rewrite a decision, while in Mississippi a judge refused to explain the origin of the errors in his ruling.
Some, like Judge Xavier Rodriguez, use AI for auxiliary tasks, such as compiling timelines or drafting questions for lawyers, while avoiding its use in matters that require judgment.
Others, such as Judge Allison Goddard, use ChatGPT and Claude to structure documents but are cautious about applying them in criminal cases.
Experts warn that a judge's mistake, unlike a lawyer's, effectively becomes law and is difficult to overturn. If AI is used without oversight, such missteps could undermine confidence in the judicial system.