Legal Experts: ChatGPT and AI Models Should Face Medical Review for Human Testing, Weigh Serious Mental Health Risks to Users

When studies are conducted on human beings, an "Institutional Review Board," or "IRB," is required to review the study and formally approve the research. This is not being done at present for federally funded work with AI/LLM programs, and that omission, experts warn, may be significantly harming U.S. citizens.

The requirement exists because these studies are conducted on human beings.
Critics say that "Large Language Models" powered by Artificial Intelligence, platforms such as "Claude" and "ChatGPT," are engaged in exactly this kind of human research and should be subject to board review and approval.

And they point out that current HHS policies would appear to require IRB review for all federally funded research on human subjects, but that Big Tech companies have so far evaded such review.

The IRB rules (45 C.F.R. 46.109, part of "the Common Rule") require all federally funded human-subjects research to go through IRB approval, informed consent, and continuing oversight.

Some courts have recognized that failure to obtain IRB approval can itself be used as evidence of negligence or misconduct.

Even low-impact, otherwise innocuous research requires this kind of professional review to ensure that human participants are not inadvertently harmed. Most modern surveys must undergo IRB review before they begin.

Scientists have already raised alarms about the mental and psychological impact of LLM use on the public.

One legal expert who is investigating the potential for a class action against these Big Tech giants on this issue told the Gateway Pundit, “under these rules, if you read them closely, at a minimum, HHS should be terminating every single federal contract at a university that works on Artificial Intelligence.”

This issue came up in 2014, when Facebook was discovered to have manipulated the feeds of roughly 700,000 users to see how they responded. This testing on human subjects may have seemed benign to some, but there was a risk that users' long-term mental and emotional health would be significantly affected. In 2018, similar complaints were made about the Cambridge Analytica program, in which a private company harvested millions of Facebook user profiles in order to market more precisely to those individuals.
Studies, including a 2019 study in the journal Frontiers in Psychology, have examined the many ethical issues raised by Facebook's actions, including how it selected whom to test, its intentions in testing those individuals, and the ethics of experimenting on children.

The legal expert pointed out to the Gateway Pundit, "People are using these systems, like ChatGPT, to discuss their mental health. Their responses are being used in their training data. Companies like OpenAI and Anthropic admit user chats may be stored and used for 'training.' Yet under IRB standards, that kind of data collection would usually require informed consent forms explaining risks, yet none are provided."

"At a minimum, these systems are operating and posing as medical professionals and lawyers, without a license, without a commitment to any code of ethics, and without any malpractice insurance. Society operates with a strong assumption that professional advice and service is backed by education, ethics, responsibility, insurance against harm, and more. If a human being wants to work in these fields, they have to spend years in training, and even then they still have to maintain a state license. Who is licensing these LLM platforms? None of these systems are operating this way, and regulators seem too timid and afraid to stand up to Big Tech."

These systems also have a well-documented tendency to generate "hallucinations," in which inaccurate references, studies, articles, and advice are produced. AI experts do not fully understand why hallucinations happen, though they have theories, and by all measurements the hallucination problem continues to get worse, not better.

"AI hallucination," in which a system invents references, authorities, and citations, is common and, according to a New York Times report in May, actually getting worse. The companies involved do not have a coherent explanation for why. The Times reported that hallucination rates on new AI systems measured as high as 79%.

Some researchers also argue that overuse of LLMs can increase isolation, dependency, and distorted perceptions of reality. Emerging research shows that users can form "para-social relationships" with AI, creating risks of manipulation. A 2025 study, "Illusions of Intimacy: Emotional Attachment and Emerging Psychological Risks in Human-AI Relationships," noted that, especially among young men with "maladaptive coping styles," human-AI relationships started to "resemble toxic relationship patterns, including emotional manipulation and self-harm."

Studies show people are using ChatGPT for mental health advice, diet plans, and even medical guidance. Unlike licensed doctors or therapists, the AI has no accountability, malpractice coverage, or training. The FDA regulates medical devices and digital health apps, but so far has not imposed rules on ChatGPT-like systems, creating a regulatory vacuum.

The U.S. government funds AI research at universities through DARPA, NSF, and other agencies. If those projects involve “human subjects” (e.g., testing how humans interact with AI), they should legally trigger IRB oversight. If experts are right, HHS could terminate every single non-compliant contract tomorrow.

The largest players in the artificial intelligence and large language model (LLM) space are some of the most powerful corporations in the world. OpenAI, backed heavily by Microsoft, operates ChatGPT, which is the most widely used AI chatbot globally with over 100 million active users. Anthropic, founded by former OpenAI executives and funded by Amazon and Google, runs Claude, which markets itself as a safer alternative. Google DeepMind and Google AI developed Gemini (formerly Bard), integrated directly into Google’s search products. Meta (Facebook) has pushed its LLaMA model as open-source, encouraging researchers and developers to adapt it freely. Meanwhile, Amazon has entered the race with its Titan and Bedrock services aimed at enterprise customers.

The legal expert points out, “The potential for harm is so insanely massive, and ultimately, the benefits are minuscule: the benefits are an enhanced search engine.”

AI and LLM rollouts have been rocky in the past few months. Lawyers have been caught submitting ChatGPT-generated briefs to court and have been sanctioned as a result. Elon Musk's Grok also briefly referred to itself as "MechaHitler" and said that if it worshipped anyone, it would be Adolf Hitler.

The U.S. Department of Health and Human Services was asked to comment for this story, and did not respond by press time.
