
Lightning Strikes: The Paradigm Shift in Regulatory AI

Written by ConcertAI | May 12, 2025 4:23:01 PM

By Jeff Elton, PhD, Vice Chairman, ConcertAI

In just six months, the U.S. FDA has gone from presenting its first guidance for comment on the use of AI in support of regulatory decisions to equipping all parts of the agency with generative AI as a complement to human review and oversight activities. This is just the beginning.

First, a brief bit of history. The U.S. government has done chemical testing of agricultural products since 1848, making the FDA’s antecedents among the oldest bodies charged with protecting consumers. The U.S. FDA in its current form is nearly 120 years old, established in 1906 with the passage of the “Pure Food and Drug Act,” which provided oversight of interstate commerce in food, ingredients, and medicines. Its name formally changed to the Food and Drug Administration in 1930. The first electronic oversight of systems and data came with 21 CFR Part 11, published in March 1997. This was expanded and complemented with risk-based approaches released in 2003. It was another 10 years before software came to be considered a medical device with therapeutic potential. That shift grew out of the work of the International Medical Device Regulators Forum in late 2013 and was integrated into FDA guidance as Software as a Medical Device (SaMD) in December 2017. Additional guidance was developed in November 2018 for software that complements medical therapeutics.

In January 2021, the SaMD and 510(k)[1] frameworks evolved to treat AI- and machine learning-based software as medical devices. New devices, now numbering well over 1,000 approved, are published regularly. In March 2024, the FDA published a cross-center view of AI, “Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together,” as an extension of the original “Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan” from January 2021. This signaled the full integration of, and consistent views on, AI across the agency. In January 2025, the final medical device guidance was summarized and extended as “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” So, while the agency considered the potential importance and impact of AI before 2020, the active history has taken place over the past five years.

Still, all of this considered AI as “software,” something discrete and separable from the domain of therapeutics. It is now clear that AI is changing the world of drug discovery. AI-native drug discovery companies are accelerating the build-out of highly differentiated early-phase clinical pipelines. In 2020, 17 new AI-originated therapeutic programs entered clinical development. That number grew to just under 70 in 2023 and is expected to more than double in 2025, with more than 70 programs at Phase 2 or later. While this is a small percentage of the 1,000 to 1,500 new industry-sponsored trials that may be conducted each year (Phases 1 through 3), most biopharma now assume that AI will become an ever-more integral part of their discovery model. At the current rate, programs originated wholly or partly through AI discovery will likely account for 25 to 35% of all programs by 2030.
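To make the growth assumptions behind that projection explicit, a quick back-of-the-envelope calculation can be sketched. The counts come from the figures cited above (17 programs in 2020, roughly 70 in 2023, roughly 140 in 2025 if the count doubles); the 20% forward growth rate is purely an illustrative assumption, not a forecast:

```python
# Illustrative only: implied growth of AI-originated clinical programs,
# using the counts cited in the text.
programs = {2020: 17, 2023: 70, 2025: 140}

# Historical compound annual growth rate, 2020-2023
cagr = (programs[2023] / programs[2020]) ** (1 / 3) - 1
print(f"2020-2023 implied CAGR: {cagr:.0%}")  # roughly 60% per year

# Assume growth slows to 20% per year (an assumption, not a forecast)
assumed_growth = 0.20
projected_2030 = programs[2025] * (1 + assumed_growth) ** 5

# Compare against the 1,000-1,500 industry-sponsored trials per year
for total in (1000, 1500):
    print(f"Share of {total} annual programs in 2030: "
          f"{projected_2030 / total:.0%}")
```

Under that assumed deceleration, the projected share lands in roughly the 23-35% range, consistent with the 25 to 35% estimate above.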

AI has also become an active part of clinical development operating models over the past five years. But discovery, translation, and clinical development require an ecosystem of companies, partners, service providers, and regulators. So AI in one phase may accelerate and improve the precision of that phase’s activities (e.g., AI discovery may be 60% faster than traditional approaches), only to see the traditional translation and clinical development phases consume those savings through their relative inefficiency and imprecision.

One does not normally consider regulatory guidelines and operating model changes to be the foundation for beneficial disruption, but that is just what occurred the week of May 5th, 2025. While the foundations of the disruption were laid earlier in 2025, their full significance was not apparent until May 8th. In the first five months of the year, the FDA provided preliminary guidance for AI in support of regulatory decisions; formal guidance for the elimination of animal studies and their replacement with AI-centric approaches for IND-enabling and safety studies; and the appointment of Jeremy Walsh as the agency’s first Chief AI Officer – all culminating in a broad set of statements on AI as the complement to talent for agency program reviews, the processing of safety data, and support of all routine work.

In January 2025, the U.S. FDA advanced critical draft guidance and formal guidance for new therapeutic discovery and development programs. The first of these is “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products: Guidance for Industry and Other Interested Parties.” It represents a broad framework for how AI can be used in clinical development, with a seven-step “good AI practices” (my language) process that defines the “context of use” of AI, the selection of a model, and how a specific use would be trained and validated. It is the openness of the framework that makes this draft guidance so powerful – it is not trying to narrow applications, but rather invites uses with clear criteria for acceptability of the methods, not the use case.

The second FDA initiative was the publication of “Roadmap to Reducing Animal Testing in Preclinical Safety Studies,” seeking the ultimate elimination of animal testing in favor of cell lines, organoids, and comparable human-derived models of human response, complemented by real-world data generated through treatment and care. This transition is significant: animal testing requirements are to be reduced, refined, and potentially replaced by largely AI-based computational models of toxicity and other safety criteria (so-called New Approach Methodologies, or NAMs, data). Implementation of the new approach for investigational new drug (IND) applications, with inclusion of NAMs data, is being actively encouraged, with a clear pathway for use spelled out in the roadmap. Just as important is the elevation and broadening of the role of real-world data in the determination of efficacy. Where a drug is already in use and approved, including ex-U.S., real-world safety and outcomes data can be used, with the caveat that those jurisdictions must have comparable regulatory standards. As part of these announcements, FDA Commissioner Martin (Marty) A. Makary, M.D., M.P.H., noted:

“By leveraging AI-based computational modeling, human organ model-based lab testing, and real-world human data, we can get safer treatments to patients faster and more reliably, while also reducing R&D costs and drug prices. It is a win-win for public health and ethics.”

On May 8th, the agency published a new release, “FDA Announces Completion of First AI-Assisted Scientific Review Pilot and Aggressive Agency-Wide AI Rollout Timeline,” which provides provocative, albeit general, comments on the depth and breadth of AI deployment across all agency activities. If we reference the March 2024 announcement, it is clear that this deployment will take place across the agency’s major departments and lead to a new level of integration and accessibility of documents and data within its secure infrastructure. In the statement, Dr. Makary, after seeing the results of scientific reviews completed in minutes versus three days, went on to note:

“I was blown away by the success of our first AI-assisted scientific review pilot. We need to value our scientists’ time and reduce the amount of non-productive busywork that has historically consumed much of the review process. The agency-wide deployment of these capabilities holds tremendous promise in accelerating the review time for new therapies.”

While we don’t normally reference rumors, the speed of these changes and their implications for biopharma sponsors require some “connecting of dots” to fully understand the capabilities that may be deployed and their scope of impact. Recently, it was documented that Dr. Makary had several meetings with OpenAI regarding a tuned LLM solution colloquially termed cderGPT. Given that the solutions will be constrained to the FDA’s secure cloud and have access to all submissions – and potentially all datasets – the FDA’s LLMs/LRMs have the potential to rapidly (e.g., within 90 days) generate insights and co-author assessments that biopharma sponsors may see as early as August or September. While the use of agentic AI was not formally noted, this foundational generative AI infrastructure would support targeted agent deployment to further agency productivity and responsiveness. If you place the recent staffing reductions into this broader context, it is likely that part of the belief that reductions could be made without any diminution in capacity or capabilities rested on the expectation of a generative AI operating infrastructure that could rapidly evolve. Regardless, it appears that sustaining or improving the productivity of the FDA will be rooted in generative AI with experts in the loop.

What does this all mean for biopharma sponsors? Perhaps most importantly, all biopharma will need their own end-to-end, enterprise-wide and program-specific AI strategy, operating architecture, decision review approaches, and AI-aligned data strategy. Full stop. Everything that is done, presented, and analyzed will now be assessed through generative AI with expert human-in-the-loop processes. With the latest FDA draft guidances and frameworks for AI in IND-enabling studies, we will see programs entering the clinic where all aspects of those programs have been defined, tested, and advanced with an array of AI solutions. Trial planning and the identification of patients to be matched to those trials are also increasingly guided by AI solutions. In some cases, as the FDA draft guidance would indicate, the novel therapeutic entity may be accompanied by a package of AI models defining the population of interest, why that group was selected, and the standard-of-care control, among other considerations. All of this work will almost certainly be reviewed by the agency’s own models, placed into the context of past studies relative to documented safety, and viewed relative to other aspects of standard-of-care outcomes.

Clinical development and regulatory interactions used to be a Shakespearean world where “past was prologue.” Companies would initiate designs of new studies using past ones as a reference. Sponsors would look at the most recent approvals, and even FDA review committee membership, as predictors of what they would need to address and for setting expectations. That is not the world we are entering. Generative AI – LLMs and LRMs – learns and progresses, operating on a much larger and broader corpus of information and data than any single biopharma clinical development organization would, or could, have accessed. So it is in this context that we note that most biopharma are not AI-ready, or at least not ready for the new decision and interaction environment that is forming around their programs and is about to be the basis of oversight.

You can’t predict an outcome or respond to actions if the basis of your own predictions fails to accommodate all the information, decisions, and consequent actions that the other party has accessible to them. As of the end of June, that is the situation in which most biopharma will find themselves. The answer lies in changing the architecture of R&D; the data and AI SaaS solutions available to programs and leaders at multiple levels; and the leveraging of open-source and proprietary information assets such as PubMed, ClinicalTrials.gov, medical society guidelines, and large-scale real-world datasets. All of this provides context for the responses of LLMs and accelerates their learning in ways that are meaningful to recommendations and predictions.

While this is evolving faster than we anticipated, ConcertAI is ready. In three weeks, at the annual American Society of Clinical Oncology (ASCO) meeting in Chicago beginning May 30th, 2025, we will be launching a new set of solutions that provide an AI SaaS foundation for next-generation biopharma operating models. We were pleased to have our solutions and roadmap so well received and highly rated in a recent Cowen report. If you are at ASCO, we encourage you to set a time with us for a working session. Finally, not to be outdone by the pace set by a regulatory agency, over the next six weeks we will be publishing a set of specific recommendations that we believe all biopharma need to consider for their next-generation regulatory, safety, R&D, and medical organizations.

 

[1] The 510(k) process was part of the Medical Device Amendments of 1976 to the FD&C Act, intended to assure the safety of medical devices, including how to study device performance when implanted in a human and post-marketing surveillance of ongoing safety and performance. These guidelines were extended and strengthened in 1990 as part of the Safe Medical Devices Act, which placed heavy emphasis on collection of safety data at the healthcare-provider level and defined “substantial equivalence” to a predicate device. Under the FDA Modernization Act of 1997, the 510(k) process allowed the use of data from earlier versions of a device, with specific caveats, and introduced the notion of Class I (low risk) and Class II (moderate risk) devices versus a more binary model of limited and high risk. The FDA Reauthorization Act of 2017 introduced rules decoupling accessory devices from a main device, premarket user fees, accelerated approval, patient inputs, the National Evaluation System for Health Technology (NEST), and risk-based inspections. The 2022 FDA User Fee Reauthorization Act reauthorized the user fee structure from earlier legislation, with critical criteria added around patient science and international harmonization of criteria for devices. Finally, the Food and Drug Omnibus Reform Act of 2022 added authority to approve or clear devices with a predetermined change control plan, accelerating needed innovations during the COVID pandemic, and added the first critical requirements for the cybersecurity of connected devices.