AMA Issues Policy Framework to Combat AI Deepfakes Impersonating Physicians
The American Medical Association on Tuesday issued a comprehensive policy framework to combat AI-generated deepfakes impersonating physicians. According to AMA officials, the framework aims to modernize identity protections, close legal gaps, and address the growing risk of deceptive AI content that threatens patient safety and professional integrity.
Developed by the AMA Center for Digital Health and AI, the framework establishes enforceable protections against unauthorized deepfakes of physicians, AMA officials said. It specifically targets the escalating risk posed by manipulated images, videos, and audio that can be used to impersonate medical professionals.
At its core, the framework prohibits deceptive medical impersonation absent clear, informed consent.
According to AMA representatives, deepfakes have been maliciously employed to impersonate doctors and disseminate false endorsements of unproven treatments, posing significant threats to individual patients and the healthcare system at large. The association noted that such impersonation scams undermine patient-physician relationships and erode public confidence in evidence-based care, increasing the risk of medical harm through deception.
The framework explicitly prohibits any AI-generated or altered content that impersonates a physician to mislead patients by falsely conveying endorsement or authorship, AMA officials said. It further establishes that failure to clearly disclose synthetic content will be considered evidence of deception. Together, these provisions set enforceable legal standards to deter unauthorized impersonation.
Consent requirements are a key component of the policy. The AMA stipulates that the use of a physician’s identity in AI-created or manipulated content requires separate, explicit opt-in consent that is never implied or bundled within general agreements. Consent must specify the intended use, audience, purpose, and duration, and must be revocable if circumstances change, according to AMA officials. This informed authorization is intended to protect physician identity rights while allowing legitimate uses of synthetic content.
To ensure transparency, the framework requires mandatory labeling of all AI-generated or altered depictions of physicians in plain language. Digital watermarks must be embedded on all synthetic physician content, and patients must be proactively notified before any interaction with synthetic professionals, the AMA said. These labeling and disclosure requirements aim to prevent patient deception and facilitate informed decision-making.
The AMA also calls for shared responsibility among platforms, hospitals, and AI vendors to prevent impersonation. The framework requires rapid takedown mechanisms for unauthorized deepfakes and conspicuous labeling across all distribution platforms, and it prohibits the unauthorized use of health professional titles in AI content. The association emphasizes a multi-stakeholder approach to addressing deepfake threats throughout the healthcare ecosystem.
Enforcement provisions include designation of a federal agency with explicit authority to investigate violations of deepfake impersonation rules. The framework mandates preservation of relevant records and audit logs to support enforcement actions. It also calls for mechanisms to compel cooperation from violators, according to AMA officials. The association is collaborating with lawmakers, regulators, and industry partners to implement the framework and ensure regulatory oversight.
The AMA’s announcement follows growing concerns over the misuse of AI technologies in healthcare communications. The association said the framework represents a proactive step to safeguard patient safety, uphold professional integrity, and maintain public trust amid rapid advances in synthetic media. The AMA plans to continue working with federal authorities and industry stakeholders to refine and enforce these policies as AI technology evolves.