Lightning Strikes: Why Your Company Needs an “AI Action Plan”

By Jeff Elton, PhD, Vice Chairman, ConcertAI 

On July 23rd, the White House issued its AI Action Plan, the first of many such releases this administration intends to issue. It establishes a high-level set of priorities and a framework for how AI infrastructure, capabilities, and AI-enabled industries might evolve. While this plan is advisory, those that follow are anticipated to carry requirements. Across all sectors, the plan should be read as applying both to the producers of AI solutions and to the organizations that will use them. For healthcare and life sciences companies, change will come in several areas, with an accelerating pace of new policies, funding changes, and deployments.

In past perspectives published here, we noted the inevitability and potential positives of the FDA’s transition to an AI-enabled infrastructure and augmented decision support capabilities. With the administration’s AI Action Plan being directed to all agencies and oversight bodies, more aspects of U.S. enterprises will become AI-enabled, or even AI-founded, with a special emphasis on science-driven businesses.

With that in mind, I believe it’s worth noting that it is easier to change the direction of your AI program and its complementary operating-model changes if you are already in motion — building out your AI capabilities, aligning key talent, and effecting operating-model changes to capture value. This helps avoid what I call “AI static friction,” where few AI innovations can advance because legacy ways of working resist changes that appear unvalidated, discontinuous with an organization’s experience, or orthogonal to past sources of expertise and competence. By contrast, a plan guided by AI principles, clarity of goals, and an openness to iterative refinement shifts the organization into a regime of “AI kinetic friction,” lowering the energy and effort required to advance AI innovations, pivot as the context changes, and secure operating-model benefits.

In its summary assessment of the AI Action Plan, TD Cowen sees clear linkages in intent and momentum between the plan and the FDA’s recently announced AI initiatives. These span new approach methodologies (NAMs) that replace traditional animal safety testing and other legacy surrogates for human response; use of the generative AI platform Elsa for protocol assessments and other analyses supporting the agency’s regulatory decisions; and creation of an AI sandbox that allows sponsors and technology developers to test AI systems in exchange for sharing their data and results.

In the AI Action Plan, there are multiple elements of significance for life science companies and healthcare providers.

AI-enabled science will be emphasized and expected. A close reading defines AI-enabled science as a process where hypotheses and experimental designs are informed by “unbiased AI” and a variety of advanced AI computational approaches. It further calls for automated, cloud-enabled labs that allow for large-scale experimentation dwarfing what traditional methods allowed — read this as large-scale simulations that test and validate model predictions, complemented by small-scale experimental confirmation.

Data quantity, data quality, and AI models are inseparable, with regulatory standards being established to define “high-quality” data. Special reference is made to the genomic data used to train underlying biological models — extending beyond human biology, as the administration opens access to federal lands with encouragement to conduct next-generation genetic sequencing of all forms of life. It is a modest extension from the language of the AI Action Plan to presume that future research and therapeutics submitted for approval will be a composite of AI-generated hypotheses, AI-informed experimental plans, and AI experiments conducted across multiple biological models to establish greater confidence in causal relationships and predicted outcomes.

While much has been made of “eliminating bias” in AI, a simpler view is that all models need to come from what is defined as high-quality and highly representative data, with results that are consistent with observed and documented features of nature or populations. So, while there was a good deal of media-directed and sensationalized commentary on the AI Action Plan on its announcement day, the plan itself is well framed and stands as a positive manifesto for what is needed to assure scientific progress and national competitiveness.

So, what should be in your AI Action Plan? We suggest the following elements:

  1. Position AI as a fundamental part of your R&D enterprise, on the same level as medicinal chemistry or translational sciences. AI transcends modalities, organ systems, etc., and it needs to operate at the level of a fundamental science or scientific group.
  2. Enterprise scientific and clinical data should be accessible across the R&D enterprise. For too long, life science R&D has been a collective set of micro-specialized and compartmentalized functions. AI demands a new data access paradigm that will be at odds with legacy risk and information security postures.
  3. The legacy dependence on small, proprietary data and legacy public data needs to give way to data partnerships with broad access, large scale, and high recency. Data has historically been a protected asset, with access tightly controlled and publications strategically planned. These legacy sources and operating models run entirely counter to an AI-first enterprise. Moreover, the likelihood of blind spots or bias, limited reproducibility, and insufficient biological depth will only increase with the emergence of more powerful AI models and tools. It is unlikely that any one biopharma can create the data assets required to advance AI-driven work — which creates an opportunity to forge new partnerships to accelerate new areas of biomedical innovation.
  4. Research, clinical programs, and regulatory submissions need to have an AI plan as a standard part of their design and documented outcomes. NAMs and related approaches will become ever-more standard year-over-year. Publications and regulatory submissions will increasingly include sections covering the AI-augmented hypotheses, an AI research plan, and AI-model outcomes that complement traditional approaches. This is not a future state but one that is underway and accelerating.
  5. AI lets you know what they will know and what they will ask. Sean Connery’s character in “The Untouchables” famously chides an assailant for bringing a knife to a gunfight. There is a parallel now in scientific research and regulatory submissions: You need to understand what Elsa will see in your protocol and study design, how Elsa might assess your safety data, and how Elsa might evaluate the study endpoints for a specific disease or cohort. In the regulatory process of years past, the past was prologue (yes, mixing Shakespeare’s “The Tempest” with “The Untouchables”): assessment committee members overlapped, and a new program was compared against the previous two or three. That is not the process going forward. Now, the LLMs and generative AI from which Elsa is derived will have a broader view of a disease, treatment objectives, safety requirements, and overarching patient care goals. If you can’t produce, understand, and meet that view, you will be inadequately prepared.
  6. Open the aperture as to the source of your highest-impact innovations. AI is not just about doing traditional things faster and more efficiently; rather, it enables a new level of innovation where questions, patterns, and relationships can be discovered and explored in new ways, at scale, and at pace. In many cases, the product that emerges may be an AI model itself, or a composite of a therapeutic plus an associated model for selecting the patients who will benefit most — or for whom treatment should be avoided.
  7. AI collaborations will transform as early-stage companies gain access to on-demand infrastructure and super-scale datasets. Large pharma’s biopharma partnerships and collaborations need to realign to an AI-centric R&D environment — for example, by aiding an early-stage company’s access to on-demand AI computational infrastructure and super-scale datasets. The preferred pharma partner will syndicate this as a package, aligned with a revised milestone funding process that could be accelerated by years.
  8. Define your AI talent and talent development approach. As the AI-enabled enterprise evolves, two parallel groups will need leadership’s focus. The first is more modestly skilled and trained individuals who, with AI augmentation, can work at the “limit of their licensure” or beyond. For them, learning how to work with AI, intentionally and mindfully, will be critical. The second group will be among the most highly skilled in the enterprise, and likely the industry, who will be AI-augmented and have direct control over multiple agents or super-agents (e.g., agents with a span of oversight over other agents). This group will likely produce future senior executives and innovation leaders, broadly aiding in building an AI-first culture.
  9. Use AI to raise the bar on decision quality and speed. In the future, the best decisions will be made by AI-augmented human experts where the objective will be new levels of insight, clarity, and confidence. This is not about asking “What did ChatGPT or Grok say?” but rather “What insights did the series of deeply disease-specific LLMs and agents provide for the complex relationships surrounding this patient group’s exceptional response to this novel therapeutic?”
  10. Reset time and productivity expectations. AI is not time- or day-bound. It can execute to the limit of whatever computational infrastructure it is allowed to access. Lifecycles and generations are a fraction of a corporate fiscal year. As such, time-to-insight and time-to-decision expectations can be entirely reset. So too, value realized will increase as narrow improvements in outcomes, access, and reimbursement are supported by AI-enabled approaches and decisions.

As some of you reading this are doubtless aware, ConcertAI is an AI company whose origins included large-scale, multi-modal data. Over the last four years, these data have increased fourfold and now include more than 8 million patients in 49 U.S. states. Two years ago, we started the third-generation architecture of the company with the understanding that LLMs, generative AI, agentic AI, and other approaches were going to be foundational to biomedical research and clinical decision augmentation. We have worked with NVIDIA and other companies who are deeply committed to advancing AI with a minimum of bias, high transparency, and standard-setting trust. Our AI Action Plan calls for accelerated partnerships to deepen and broaden our data, making it accessible to leading biomedical and biopharmaceutical researchers, within the highest performing AI platform available — CARAai™. All of this is in service of transforming translational science, clinical development, and clinical care to assure and accelerate medical innovations, delivering the best possible options and outcomes to patients. We can never go backwards, and we persistently challenge ourselves with how fast we can responsibly move forward.

Every week brings a greater level of AI innovation, more advanced and capable models, and deeper pushes to evolve the traditional life science enterprise into an AI-enabled or AI-first one. The new federal AI Action Plan sets the expectation, and in certain circumstances the requirement, that advanced AI will guide hypothesis development and experimental methods, inform outcomes, and support decisions. Companies that adapt their AI Action Plan to meet that requirement will succeed in this ever-accelerating field. Those that don’t risk falling ever further behind — and potentially being noncompliant with new mandatory requirements.