Fri Oct 31 05:38:34 UTC 2025: Summary:
Judges worldwide are increasingly encountering legal briefs generated by AI that contain errors, such as citations to nonexistent cases. A French data scientist, Damien Charlotin, has cataloged nearly 500 such instances in the past six months. This trend highlights the risks of relying on AI in professional settings, where employers are eager to hire AI-proficient workers. While AI can assist with tasks, it’s prone to mistakes and raises privacy concerns. Experts advise users to treat AI as an assistant, not a substitute, and to verify its outputs. They also caution against sharing confidential information with AI tools and suggest seeking AI training. Ultimately, they stress the importance of learning to use AI responsibly and effectively, despite its pitfalls.
News Article:
AI-Generated Legal Blunders on the Rise, Courts Warn of “Hallucinations”
NEW YORK – October 31, 2025 – Courts across the globe are grappling with a surge in legal filings riddled with inaccuracies generated by artificial intelligence, raising concerns about the reliability of AI in professional settings. A French data scientist and lawyer, Damien Charlotin, has identified at least 490 instances in the last six months where AI-generated legal briefs contained “hallucinations” – false or misleading information, including citations to nonexistent cases.
This alarming trend serves as a cautionary tale for businesses embracing AI, as many employers actively seek to hire workers skilled in using the technology. While AI offers potential benefits for tasks like research and report drafting, it is also prone to errors and poses privacy risks.
Charlotin, a senior research fellow at HEC Paris, created a database to track cases in which judges ruled that generative AI produced fabricated case law and false quotes. The majority of rulings are from U.S. cases in which plaintiffs represented themselves without an attorney, he said. While most judges issued warnings about the errors, some levied fines.
Even high-profile companies have fallen victim to AI’s shortcomings. A federal judge in Colorado ruled that a lawyer for MyPillow Inc. filed a brief containing nearly 30 defective citations in a defamation case against the company and its founder, Michael Lindell.
Experts are urging users to approach AI with caution. Maria Flynn, CEO of Jobs for the Future, advises treating AI as an assistant that augments workflow, rather than a substitute for human judgment. She and others stress the importance of verifying AI-generated outputs and protecting confidential information when using AI tools.
Furthermore, access to AI training, whether through employer-provided programs or free online resources, is considered crucial for learning to work with tools such as ChatGPT and Microsoft Copilot.
“The largest potential pitfall in learning to use AI is not learning to use it at all,” Flynn said. “We’re all going to need to become fluent in AI, and taking the early steps of building your familiarity, your literacy, your comfort with the tool is going to be critically important.”