Lightning Strikes: AI and the FDA – Elsa Goes Live and AI Gains “Common Sense”

By Jeff Elton, PhD, Vice Chairman, ConcertAI 

Four weeks ago, we posted an initial point of view on some of the implications of the FDA implementing enterprise-wide generative AI capabilities. We’d now like to develop a few of those themes in more detail. The U.S. FDA beat its own audacious timeline for implementing its new generative AI solution for pre-processing applications and other submissions, launching “Elsa” on June 2, 2025 – almost a month ahead of its target.

Here are the details as posted by the FDA on LinkedIn:

“Today, the FDA launched Elsa, a generative AI tool designed to help employees—from scientific reviewers to investigators—work more efficiently. This innovative tool modernizes agency functions and leverages AI capabilities to better serve the American people.

➤ Accelerate clinical protocol reviews
➤ Shorten the time needed for scientific evaluations
➤ Identify high-priority inspection targets
➤ Perform faster label comparisons
➤ Summarize adverse events to support safety profile assessments
➤ Generate code to help develop databases for nonclinical applications”

The language is carefully framed to emphasize that there is a “human in the loop,” with the technology assuring the depth and completeness of the information available to a reviewer. To illustrate value and intent, FDA Commissioner Dr. Marty Makary noted at the Jefferies Healthcare Conference keynote panel this past week that the agency’s generative AI solution reduced review activities from “days” to 15 minutes for one reviewer.

Dr. Makary made a few other observations at the conference that may underlie the rapid move to AI as a core part of FDA workflows. First, he said the volume of documentation being provided to the agency has been increasing in absolute terms, as has the number of questions reviewers ask of sponsors. This has created more complex processes without necessarily improving outcomes. He noted a goal of making the FDA’s decision-making as much a gold standard as its science. Here is where Elsa fits in: achieving world-class processes and better decisions (his language was more “common sense”) at higher efficiency requires that the world’s top talent use generative and agentic AI solutions.

Much of the reception to the recent U.S. FDA announcements would make it appear that the agency’s use of AI is new, the deployment unconsidered, and the decision primarily cost- and resource-motivated. To assume this would be to question the motivation, competence, and efficacy of the agency’s review and oversight processes. Rather, we’d encourage the assumption that the FDA’s adoption of AI has been following the arc of – or perhaps been slightly more advanced than – most biopharma. In the same way that most pharma and biopharma companies believe AI will be integral to their drug discovery and clinical development processes, the same is true in the regulatory realm. Stated differently, generative AI is a critical tool that can assure the broadest set of insights is brought to bear, lower the administrative burden on staff, and ultimately act as augmented intelligence and quality assurance – in essence, an essential part of the agency’s future armamentarium.

During the period of 2022 through 2024, there were numerous internal initiatives and external review meetings discussing the different regulatory applications of AI and its use in support of the agency’s own assessments, surveillance, and analyses. In September of 2022, the FDA posted a compilation of 26 areas wherein it had used AI as part of its own work and workflows. In March of 2024, the FDA published “Artificial Intelligence and Medical Products,” focused on AI as a component of, or the basis for, a medical product, drawing on workshops held the prior year. Another often-cited public session, “Artificial Intelligence in Drug & Biological Product Development,” was held on August 6, 2024, as a collaboration between the FDA and the Clinical Trials Transformation Initiative (CTTI). Introducing that session, Patrizia Cavazzoni, M.D., the Director of the FDA’s Center for Drug Evaluation and Research (CDER), noted that the FDA had received more than 300 applications with AI elements. To advance this work, the FDA has been assessing the reliability of the data, projecting the data needed (including work on synthetic data), defining how to optimize AI models for explainability and trust, and advancing themes for the AI-capable enterprise (e.g., cross-functional teams for AI-based systems; rules for developing, testing, and implementing AI-based systems). Just prior to the release of preliminary guidance on AI in support of regulatory decisions, a summary perspective appeared in the Journal of the American Medical Association on January 21, 2025, authored by Haider Warraich, Troy Tazbaz, and Robert Califf, all of whom were at the FDA when the article was written and submitted in late 2024. More recently, the FDA has indicated that a wide range of IND-enabling studies can move toward broader use of AI, replacing animal studies and legacy approaches that have been in place for over four decades.

On June 2nd, the FDA defined its future focus as the application of AI to: (1) “Accelerate clinical protocol reviews,” (2) “Shorten the time needed for scientific evaluations,” (3) “Perform faster label comparisons,” and (4) “Summarize adverse events to support safety profile assessments.” These are critical activities for all new INDs, NDAs, post-approval studies, and pharmacovigilance activities. Based on the data the FDA has presented, its experience is at least on par with that of any other healthcare or life science organization – if not somewhat ahead (e.g., the recent authorization of an AI platform that predicts breast cancer risk five years into the future, a true public health contribution). The most robust assumption is that the agency has the expertise and technological savvy to broaden AI’s use in support of its internal workflows. In fact, it would not serve the agency’s broader mission of assuring public safety and access to meaningful medical innovations if it did not.

At the panel this past week, Dr. Makary made two other statements of significance to this topic. The first is that the well-publicized staff separations did not include any scientific reviewers (or key members of the data science team). The second is that PDUFA (Prescription Drug User Fee Act) review goals, which are supported by user fees, require 90% on-time performance, with the fee levels and performance standards set by Congress. So Elsa is a focused system for the agency’s most important work, to be used by its most important staff, within a system that has multiple points of oversight and accountability. This is exactly where and how AI can deliver exceptional and differential value.

Going forward, all biopharma sponsors and manufacturers should assume that AI and expert human evaluations will occur in concert, with an ever-growing interdependence. This is a profound change with equally profound implications – read on.

Generative AI solutions are trained on a broad corpus of information, are often then tuned to specific areas of use or domains, and are then exposed to ever-increasing material and user interactions that accelerate the LLM’s learning and performance. It is difficult to assess the overlap between the agency’s information access and that of pharma, but most data and training assets are either publicly or commercially available. What LLMs and LRMs are exceptional at is understanding context and placing the right weight on key terms in order to execute analyses. Any new document or program can be assessed, compared, and presented in a manner that makes an expert much more effective, consistent, and productive. Since this is the model being used by the agency, it also needs to be followed by biopharma sponsors and manufacturers. Any other posture would leave a company unprepared for a wide range of consultative discussions or decision sessions on trial designs, trial outcomes, or matters of relative efficacy and safety.
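To make the term-weighting idea concrete, here is a minimal, illustrative sketch – a toy TF-IDF comparison, not how Elsa or any production LLM actually works – showing why weighting distinctive terms lets a system judge that a new document is closer to one reference than another. The document snippets and function names are invented for illustration:

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphanumeric characters.
    return [t for t in "".join(c.lower() if c.isalnum() else " " for c in text).split() if t]

def tfidf_vectors(docs):
    # Simple TF-IDF weights: terms that appear in many documents
    # carry less weight than terms that distinguish a document.
    tokenized = [Counter(tokenize(d)) for d in docs]
    n = len(docs)
    df = Counter()
    for counts in tokenized:
        for term in counts:
            df[term] += 1
    vectors = []
    for counts in tokenized:
        total = sum(counts.values())
        vectors.append({
            term: (count / total) * math.log((1 + n) / (1 + df[term]))
            for term, count in counts.items()
        })
    return vectors

def cosine(u, v):
    # Cosine similarity between two sparse term-weight vectors.
    dot = sum(w * v.get(term, 0.0) for term, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical document snippets, for illustration only.
reference = "Randomized controlled trial of drug X with overall survival endpoint"
candidate = "Randomized trial of drug X measuring overall survival"
unrelated = "Manufacturing process validation for sterile fill-finish lines"

vecs = tfidf_vectors([reference, candidate, unrelated])
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # True: the related protocol scores higher
```

An LLM goes far beyond this kind of surface-term matching – it weights terms in context rather than by raw frequency – but the underlying intuition, that distinctive language drives comparison, is the same.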

The experience of pharma and biopharma with the latest generative AI LLMs and LRMs is that a wide array of tasks can be performed at a level comparable or superior to previous approaches of humans working with legacy SaaS tools. Two of the earliest areas to be automated were scientific writing and document management for regulatory submissions. Given the rather standardized elements of each document, training and fine-tuning are relatively straightforward tasks. Most large pharma and biopharma companies are also embracing AI for new chemical entity and biological therapeutic design and optimization. In more complex and nuanced areas – such as the design of a trial, designation of the standard-of-care control population, and definition of the cohort for the active treatment arm of an experimental therapeutic – the use of AI is increasing significantly but remains relatively low and not yet integral to pharma’s core scientific and management processes. All of this is about to change. In our next commentary on this topic, we will provide examples from our own CARAai™ platform and develop a range of future models for pharma.

The most popular and familiar Elsa is from the Disney movie Frozen. You may recall that Elsa was born with powers she didn’t fully comprehend and was reluctant to use. But as the movie and its sequel progress, she becomes a highly confident, competent, and powerful leader. It’s an interesting metaphor for Elsa as an LLM – her experiences, users, and influences will also bring an evolution of character and capabilities. I hope it happens with a singular purpose of advancing and protecting the interests of patients – that would be AI for Good, empowering an Elsa for good.

Our next blog will be “Elsa, meet Cara – how LLM-to-LLM interactions will complement expert human-to-human interactions in key regulatory processes.” We’ll explore how advanced LLMs with expert humans in the loop are going to be foundational to pre-regulatory analyses and regulatory submissions. We will also explore the future of a multi-LLM world across sponsors and agencies.