When the American Medical Association launched its Center for Digital Health and AI last month, it joined a growing list of organizations seeking to shape the governance of artificial intelligence in American healthcare. The AMA’s announcement came as the Trump administration rejected the Biden-era embrace of government-affiliated coalitions and signaled a preference for industry-led innovation in AI. The field is increasingly crowded with contenders—coalitions, technology alliances, medical societies, and accreditation bodies—all vying to define standards for AI in both clinical practice and healthcare administration.

The AMA has been deliberate in defining AI as “augmentative intelligence,” stressing that technology should assist rather than supplant human providers. Industry leaders and policymakers generally share that perspective. At a hearing in September, Rep. Morgan Griffith (R-VA), Chair of the House Health Subcommittee, observed that “AI applications can be hugely beneficial to patients and providers, but they are to assist – and not replace – the clinical workforce.” His remarks reflected a growing consensus among policymakers that realizing the value of AI in medicine depends on maintaining human oversight and control. This principle has been echoed by industry leaders such as Clover Health CEO Andrew Toy, who testified that “AI should never be used to deny care or replace physicians. It should be used to empower physicians, helping doctors identify diseases earlier, personalize treatments, and spend more time with their patients.”

Yet, with so many parties at the table, it remains unclear who will ultimately define the guardrails and set the standards for AI governance in healthcare.

Machines mimic intelligence, but trust lags

This summer marked 70 years since a group of researchers at Dartmouth College proposed a two-month project “to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” What began as a theoretical exercise has become integral to nearly every sector of modern life. In medicine, machine learning now encompasses training on imaging data, parsing electronic records, recommending treatment paths, and predicting hospital readmissions. While the science has advanced beyond what the Dartmouth group may have dared to imagine, the policies and institutions meant to guide it have struggled to keep pace. In healthcare, the question is no longer whether machines can mimic human intelligence but who decides where and how that ability is applied.

That uncertainty has fueled what Stanford University professor Michelle M. Mello has described as a “foundational trust deficit” in healthcare AI. “The key problem isn’t that there isn’t a lot of innovation,” she explained, “it’s that uptake of new innovations is low.” Mello urged lawmakers to strengthen governance requirements for hospitals and insurers, noting that “most healthcare organizations and health insurers do little vetting of AI tools before they put them into use, and often no meaningful monitoring of their impact afterward.” The result, she said, is a persistent lack of confidence that has become one of the central challenges in determining how—and by whom—AI should be governed.

Filling the governance void

A growing number of organizations have sought to answer the question of who should establish standards. In 2021, the Consumer Technology Association (CTA), North America’s largest technology trade association, published The Use of Artificial Intelligence in Health Care: Trustworthiness (ANSI/CTA-2090), developed with input from America’s Health Insurance Plans and AdvaMed. The standard identified what CTA called “the core requirements and baseline for AI solutions in health care to be deemed as trustworthy,” emphasizing transparency, fairness, data quality, and human oversight.

Another major initiative, the Trustworthy & Responsible AI Network (TRAIN), brought together health systems including the Cleveland Clinic, Duke Health, Mass General Brigham, and Vanderbilt University Medical Center, along with Microsoft and other partners. TRAIN operates shared clinical and technical testing environments and offers what it describes as “governance as a service,” helping hospitals assess, monitor, and validate AI tools in real-world use.

Perhaps the most visible effort to fill the governance void has been the Coalition for Health AI (CHAI). Formed in 2022 as a cross-sector alliance of academic medical centers, technology companies, and healthcare organizations—including Amazon, Google, Microsoft, Mayo Clinic, and CVS Health—CHAI quickly positioned itself as a convener of stakeholders ready to establish guardrails for the use of AI in healthcare.

Rather than wait for government regulators, CHAI embraced the tech sector’s “move quickly” ethos in developing consensus standards and best practices for “trustworthy AI.” In April 2023, it released its Blueprint for Trustworthy AI Implementation—a 24-page framework with recommendations to enhance transparency, safety, and fairness in AI tools. CHAI co-founder Dr. Brian Anderson described the blueprint as an effort to align AI standards so that “patients and clinicians [can] better evaluate the algorithms that may be contributing to their care.”

In addition to leveraging the expertise of the nation’s largest technology firms, CHAI enjoyed the tacit endorsement of the Biden administration. Notably, senior federal officials participated directly: Micky Tripathi, then National Coordinator for Health IT, and Troy Tazbaz, director of the FDA’s Digital Health Center of Excellence, both served at one point as non-voting “federal liaisons” on CHAI’s board.

Their involvement signaled the Biden administration’s embrace of CHAI as a partner in shaping national AI policy. CHAI cultivated ties across HHS agencies; observers from the Food and Drug Administration, the National Institutes of Health, the Centers for Medicare & Medicaid Services, and the White House Office of Science and Technology Policy engaged with the group.

Even as Tripathi stepped down from CHAI’s board upon taking an official HHS AI role in late 2023, he emphasized that his withdrawal was “not a reflection at all on CHAI or their mission,” underscoring continued support for the effort.

CHAI partnered with the Joint Commission in June, and soon after the two organizations released Responsible Use of AI in Healthcare, a framework calling on healthcare organizations to implement formal governance structures for AI oversight.

The collaboration suggests that CHAI’s recommendations could be leveraged as an enforcement hook: hospitals could incorporate the AI guidelines into their Joint Commission accreditation process, lending weight to what are nominally voluntary standards. CHAI has also outlined plans for “playbooks” on safe AI implementation and a voluntary certification program, potentially creating a new layer of accountability akin to the Joint Commission’s role in patient safety.

By the end of 2024, CHAI’s roster included tech giants and leading research hospitals, and its recommendations were on track to become de facto benchmarks for AI in healthcare. Although critics questioned outsourcing AI oversight to a consortium funded by industry heavyweights, supporters argued that CHAI’s broad coalition and agile standard-setting would accelerate the adoption of trustworthy AI.

However, the momentum CHAI experienced at the close of the Biden administration faltered when President Trump returned to office with a different view of AI governance.

A new administration ushers in a new approach

As Applied Policy has previously reported, President Trump’s second term brought a shift in federal AI policy. In his first week back in office, he rescinded a Biden-era executive order on AI and signed a new one directing agencies to “remove barriers to American leadership in AI.” The Administration’s AI Action Plan, released in July 2025, encourages agency experimentation and identified healthcare as a priority for AI adoption. Officials described AI as an engine of national competitiveness that should be governed by “innovation and accountability, not bureaucracy.”

To the surprise of many, senior officials at the Department of Health and Human Services (HHS) began actively distancing the government from CHAI. Deputy Secretary Jim O’Neill has lambasted the coalition, stating that the Trump administration does not recognize it as “a regulator or a pseudo-regulator” and that “they don’t speak for us.” In an op-ed in the Washington Examiner, O’Neill joined FDA Commissioner Marty Makary in accusing the Biden administration of having effectively outsourced regulatory authority to CHAI and its “Big Tech backers.” Sharing a link to the piece on X, HHS Secretary Robert F. Kennedy Jr. warned that the coalition risked becoming “a regulatory cartel.”

Last month, several outlets reported that Amazon had quietly withdrawn from CHAI, a move some interpreted as reflecting both political sensitivity and strategic repositioning. Amazon has not commented publicly on its departure, but its absence has been widely noted given that it was among CHAI’s founders and played an early role in the development of AI infrastructure for health systems.

CHAI’s Brian Anderson has sought to defuse the conflict. In a letter to coalition members, he emphasized that the coalition “has no regulatory authority” and reiterated its willingness to engage with HHS and other federal agencies. “We are eager and willing to meet with leadership,” Anderson wrote, “and to continue operating in a nonpartisan space where policy leaders and regulators can easily engage with private-sector clinicians and technologists—learning about real-world use cases, emerging technologies, and best practices.” The statement underscores CHAI’s intent to remain a neutral convener amid a shifting political landscape.

Looking ahead

CHAI’s effort to clarify its role comes as the federal government refines its own. In September, the Office of Science and Technology Policy issued a Request for Information on Regulatory Reform on Artificial Intelligence, noting that “realization of the benefits from AI applications cannot be done through complete deregulation, but requires policy frameworks—both regulatory and non-regulatory.”

That the AMA has chosen to enter an already crowded field suggests that voices seeking to define what “trustworthy AI” means in practice will only continue to multiply. Each is likely to advance its own vision of how innovation should be guided, governed, and ultimately judged. How those efforts converge or collide will shape the boundaries of oversight in the years ahead.