Proposed legislation paves the way for AI to prescribe drugs

Proposed legislation introduced in the U.S. House of Representatives would allow artificial intelligence and machine learning technologies to autonomously prescribe FDA-approved drugs.
The bill, H.R. 238, introduced on Jan. 7, would amend the Federal Food, Drug, and Cosmetic Act (FFDCA) to “clarify that artificial intelligence and machine learning technologies can qualify as a practitioner eligible to prescribe drugs if authorized by the State involved and approved, cleared, or authorized by the Food and Drug Administration, and for other purposes.”
The bill, titled the Prescription of Drugs by Artificial Intelligence or Machine Learning Technologies and to be cited as the “Healthy Technology Act of 2025” if enacted, was sponsored by Rep. David Schweikert (R-Ariz.).
If enacted, the bill would amend section 503(b) of the FFDCA so that the term “practitioner licensed by law to administer such drug” includes artificial intelligence and machine learning technologies that are authorized pursuant to a statute of the state involved and approved, cleared or authorized under sections 510(k), 513, 515 or 564 of the act.
The bill was referred to the Committee on Energy and Commerce.
THE LARGER TREND
AI can accurately perform tasks such as “radiological detection of lung nodules in imaging and other applications already approved for clinical use,” according to a study available through the NIH National Library of Medicine.
AI is being used in various ways within healthcare, including for ambient documentation, accelerating drug discovery, reducing administrative burden and more.
Still, some experts believe AI is not ready to be used in many aspects of care.
“Recent studies indicate that Generative Pre-trained Transformer 4 with Vision (GPT-4V) outperforms human physicians in medical challenge tasks. However, these evaluations primarily focused on the accuracy of multi-choice questions alone,” according to researchers at the National Institutes of Health.
The researchers found, however, that GPT-4V frequently offered flawed rationales for arriving at the correct final answer, particularly in image comprehension.
“Our findings emphasize the necessity for further in-depth evaluations of its rationales before integrating such multimodal AI models into clinical workflows,” the researchers wrote.
In an interview with HIMSS TV, Harjinder Sandhu, CTO of health platforms and solutions at Microsoft, described the high-value and high-risk use cases for AI in healthcare.
“If the AI system hallucinates information, makes up information about that patient or omits important information, that can lead to catastrophic consequences for that patient,” Sandhu said.