OpenAI CEO Sam Altman says that artificial intelligence and large language models will soon be used everywhere, taking over entire professions and causing them to “disappear.”
Meanwhile, people actually using these services, including attorneys in Alabama, are being sanctioned over the pervasive AI/LLM flaw of ‘hallucinating’ fake citations and references.
A federal judge in Birmingham, Alabama, Judge Anna Manasco, issued formal sanctions this week against three attorneys from the law firm Butler Snow after they submitted legal filings containing fabricated case citations generated by ChatGPT.
Manasco, appointed to the court by President Trump, described the citations as “completely made up” and removed the attorneys from the case.
The filings were part of a lawsuit brought by an inmate who alleged repeated stabbings at the William E. Donaldson Correctional Facility. Manasco referred the case to the Alabama State Bar and ordered the attorneys to share the sanctions order with all current and future clients, as well as all opposing counsel and courts where they are actively involved.
Even the attorneys supervising the lawyer who used ChatGPT were sanctioned. The supervisors said they ‘skimmed’ the filings and did not notice the fabricated legal authorities used to support the written arguments.
The lawsuit centers on claims by inmate Frankie Johnson, who alleges that prison officials failed to prevent multiple assaults despite prior warnings. Johnson is housed at Donaldson Correctional Facility, one of the state’s most overcrowded and violent prisons. The firm representing the Alabama Department of Corrections, Butler Snow, filed motions in the case that included five legal citations meant to support its arguments on scheduling and discovery disputes. Upon review, none of the referenced decisions existed.
Reporting in the past month also suggests that heavy reliance on AI/LLM tools measurably stunts users’ cognitive growth, effectively making them dumber.
The judge investigated the filings further in this case and determined that the cited cases had never been published, logged, or recorded in any known legal database. They were invented out of thin air.
One of the attorneys, Matt Reeves, later admitted that he had used ChatGPT to generate the citations without verifying their authenticity. Two senior attorneys, William R. Lunsford and William J. Cranford, signed off on the filings without independently confirming the legal authorities included in the documents.
“AI hallucination,” in which a model invents references, authorities, and citations, is a common flaw and, according to the New York Times in May, is actually getting worse. The companies involved have no coherent explanation for why. The Times reported that measured hallucination rates on some new AI systems ran as high as 79%.
Experts say the complex way the programs process information is causing these errors, but they are at a loss to explain precisely why.
According to the Times, even the most powerful AI systems still produce measured hallucination rates of 33%.
Judge Manasco responded to the attorneys involved with a sharply worded sanction order. She found that submitting fabricated legal precedent constitutes a serious ethical violation and ordered all three attorneys removed from the case.
Additionally, she mandated that they distribute her ruling to their professional contacts and clients, including any court where they are currently active. While she did not impose immediate monetary penalties, she referred the matter to the Alabama State Bar for further disciplinary review.
Manasco wrote that the attorneys had shown “recklessness in the extreme,” emphasizing that the duty to verify cited material lies with the lawyer, not the technology. She expressed concern about the broader impact of submitting false citations to a federal court, stating that it erodes public trust and undermines the legal process. Her ruling underscored that the misconduct stemmed not only from using AI, but from the failure to follow long-standing professional norms that require careful review of legal filings.
In a high-profile New York case in 2023, attorneys representing a plaintiff in an airline dispute submitted a filing with multiple non-existent cases also generated by ChatGPT. That incident led to court sanctions and triggered nationwide debate over the proper role of AI in legal practice.
In recent years, courts and professional associations have moved to clarify that lawyers are responsible for any content they submit, regardless of whether AI tools were involved. In 2024, the American Bar Association released its first ethics opinion on AI use, warning attorneys that the convenience of such tools does not reduce their obligation to ensure accuracy and truthfulness in court documents.
Butler Snow, a firm that has received tens of millions in taxpayer funding for its prison defense work, acknowledged the error. Reeves admitted responsibility and expressed regret. Lunsford, who heads the firm’s public law division, conceded he failed to confirm the accuracy of the citations. The firm pledged to implement additional oversight mechanisms and initiated an internal review of recent filings to identify any similar issues.
Legal observers have noted that while AI tools can offer efficiency in early research and drafting, they remain fallible and should never substitute for manual verification. Experts warn that failure to follow due diligence protocols could result in professional discipline, public censure, or disbarment. Courts are increasingly alert to the use of AI in filings and may begin requiring declarations that content has been reviewed for accuracy.
Meanwhile, OpenAI signed a deal with the British government this week to use AI in the delivery of government services.