
In two recent cases, one in Georgia and one in New Jersey, it appears that hallucinations from generative artificial intelligence have actually made their way into judicial decisions. In the first, an appellate court noted that the trial judge in a Georgia divorce proceeding had cited fictitious cases, supplied by one of the litigants, in the final decree. In the second, in federal court in New Jersey, Judge Neals withdrew an order after it was brought to his attention that some of the quotations in the opinion, purportedly drawn from actual cases, could not be found in the referenced opinions. He has not indicated whether generative AI was to blame, but it seems highly likely.

While we have seen case after case in which lawyers and pro se litigants have tried to pass off fictitious cases as genuine authority, these are the first (known) cases in which courts appear to have incorporated such sources into their own work product. And when they do, the fictitious cases, like a sort of legal Pinocchio, are transmogrified into actual legal authority.

Judges certainly have tools at their disposal to punish lawyers and litigants who make baseless claims, but those tools are generic and make no explicit reference to the use of generative AI. At the same time, in an effort to make clear the risks of using these tools, the misuse of which can carry serious punishment, some judges and court systems have adopted specific rules and standing orders that address the improper use of generative AI, or ban it outright. Perhaps it is time to adopt some version of these rules in a more comprehensive fashion. I detail these standing orders in a recent piece published in the North Carolina Journal of Law & Technology, which can be accessed here.