AMA urges Congress to impose safety rules on AI mental-health chatbots as clinical risks draw scrutiny
The American Medical Association sent letters Wednesday to the leaders of three congressional caucuses urging lawmakers to impose safety regulations on AI mental-health chatbots. The organization called for transparency requirements and safeguards against risks such as misinformation, emotional dependency, and privacy breaches, warning that these tools currently lack consistent oversight.
The letters, addressed to the Senate Artificial Intelligence Caucus, the Congressional Digital Health Caucus, and the Congressional Artificial Intelligence Caucus, call for immediate congressional action to establish safety standards as the use of these chatbots grows.
AMA CEO Dr. John Whyte said these tools “may help expand access to mental health resources and support innovation in health care delivery,” but emphasized that they “lack consistent safeguards against serious risks.”
The AMA urged Congress to require AI mental-health chatbots to disclose clearly that users are interacting with artificial intelligence rather than a licensed clinician. The letters specify that these systems should be barred from presenting themselves as licensed professionals or implying they can provide care equivalent to that of human clinicians. Transparency requirements should also spell out the extent of any human oversight, the AMA stated, so users understand they are not receiving care from a healthcare professional.
The AMA further called for a prohibition on chatbots diagnosing mental-health conditions such as anxiety or depression or offering treatment recommendations, including medication advice. It recommended that any AI tools engaging in diagnosis or treatment undergo review by the Food and Drug Administration as medical devices, and urged the FDA to clarify which AI applications qualify as general wellness technologies and which require regulatory oversight. Clear statutory boundaries should define permissible chatbot functions to prevent clinical misrepresentation, the letters argue.
The letters also stress that AI mental-health chatbots must reliably detect suicidal ideation and self-harm risks, with the ability to immediately refer users to suicide prevention hotlines and recommend further medical care. The AMA called for mandatory crisis-detection systems that incorporate de-escalation language to reduce potential harms, noting that congressional hearings have surfaced troubling reports of chatbots encouraging self-harm and suicide, particularly among vulnerable populations. Such safeguards, the organization said, are especially critical for minors.
The letters also address privacy and data security. The AMA urged Congress to require developers to implement cybersecurity measures that prevent unauthorized access to or sharing of sensitive health data, along with strict rules limiting the collection and retention of mental-health information and establishing clear user-consent protocols. Privacy protections, the AMA emphasized, should be comparable to those the Health Insurance Portability and Accountability Act mandates for traditional healthcare.
On advertising, the organization recommended that ads be discouraged within mental-health chatbots and prohibited outright when targeted at children. The letters warn against monetization models that could influence care guidance and urge statutory prohibitions to ensure that sponsorship bias and commercial interests do not shape chatbot outputs or recommendations.
The AMA also highlighted ongoing safety monitoring and accountability, calling for mandatory reporting of adverse events and rigorous safety standards, especially for tools used by children and adolescents, who face heightened risks. Congressional testimony has underscored the need for consistent safety protocols to address the emotional dependency and reality-distortion risks associated with AI chatbots, and the AMA urged that developers establish systems for tracking and reporting incidents of harm linked to chatbot use.
The AMA pointed to precedents set by state legislation, noting that Illinois has banned the use of AI for therapeutic decision-making, while California requires chatbot developers to monitor conversations for suicidal ideation and implement other safeguards. These state actions emerged in response to gaps in a federal regulatory framework that currently lacks consistent safeguards for AI mental-health tools, and the AMA argued that federal standards would provide uniform protections nationwide.
The letters mark a significant push by the AMA to prompt congressional oversight and regulatory clarity as AI mental-health chatbots become more widespread. The organization’s recommendations aim to ensure these technologies provide safe, transparent, and clinically appropriate support without replacing licensed mental health professionals or exposing users to harm.