Jeff Elton:
Welcome to the ConcertAI podcast, the first in our series for 2025. I have the great pleasure of being here today with Ariel Berger from Evidera, and Ariel is a multi-domain expert in RWD and RWE evidence generation. Today we're going to focus on that, but with a particular slant toward regulatory application of RWD. Some of this could have an oncology focus, some of this may go beyond the world of oncology, and we'll try to keep it as up to date as possible with what current practices seem to enable. We know that the regulatory environment is a constantly changing one in this area as well.
So first, before we get going, Ariel, welcome.
Ariel Berger:
Thanks for having me. Happy to be here.
Jeff Elton:
And if you would please, I know you've got a number of decades on your side of this and a pretty long history of working in the field. Maybe you can give us some of your own background.
Ariel Berger:
No, sure. Happy to do it. So I've been in the field about 26, 27 years. After initially starting in epidemiology and biostatistics, I went into consulting. And before there was such a thing as real-world data or real-world evidence, we used to do a lot of things called database studies, chart reviews, surveys and the like. But I've definitely been here for a lot, from the time when mortality data was commonly available in retrospective databases to now, when you typically have to go through tokenization and other linkages to get it, as well as the whole explosion in technology.
And it's been very exciting to see the field grow and evolve. And to your point, how the FDA and other regulatory agencies have come to the view that real-world data and real-world evidence have a place in their decision-making. And it's been very exciting to see how all of these tools have had to rise to the bar of what's acceptable from a regulatory perspective.
Jeff Elton:
Yeah, well that's great. And I know that now that we have Rob Califf in there, at least for the time being, he's been a big [inaudible 00:01:57] fan of the use of RWD and felt its use was even a responsibility. And I look forward to hearing more from you as we get into this.
So first, maybe just a little bit of context here, because several months back we started getting some formal guidance. That guidance came after a relatively long comment period that had different parts of the industry providing their feedback and different groups coming through that. So first, how has the environment changed around application of RWD for regulatory questions or regulatory intent? And then as you think about this, have there been characteristics of either the disease, the study, the patient population that seem to be a better fit for this than others?
Ariel Berger:
It's a big question. The most recent change I think that's on my mind is this new pathway for approval, and that's this use of one well-controlled, well-designed trial, plus something called "confirmatory evidence". And I think you could drive a real-world evidence bus through those quotation marks. Although again, it depends on the indication to your point, et cetera. Even three, four years ago, the hot new topic was I think external control arms. And that's something that's been explored I think mostly for oncology populations, rare diseases, other instances where the benefit far outweighs the risk or it's unethical to randomize.
And then one other one I'll touch on just very briefly is this concept of tokenization, both to expand how much data one can collect while easing the burden on sites and patients and providers, but to also expand I think what one can look at with the data you're collecting. And even this idea of combining regulatory and payer or HTA acceptability with the ability to do trial-based economic assessments, et cetera, a lot less theoretical modeling and much more what actually happens. So it's an exciting spot to be in.
Then one other thing I'd add though, Jeff, to what you're talking about with patient populations or diseases: I'd also add indications. Because I think anytime you get into a second, a third, a fourth indication, once you're already on the market for something, at least in the U.S., it becomes much easier to make an RWE-based argument because of off-label use, or the ability to examine off-label use, because once a product is available it can be prescribed for anything. And it's a very interesting place to be in given the data that's available today.
Jeff Elton:
So let's carve that one out particularly, because I think that's going to be a pretty interesting one to explore in itself, particularly in areas like oncology, where we do find roughly 50% of uses being outside the approved label, but within standards of practice where it's considered permitted, reimbursed, et cetera. As you go back through, it sounds like you're going to put a little bit more around each of those particular areas; maybe also talk about the difference between U.S. and ex-U.S. authorities as you see it being relevant, because European authorities have been pretty active in this.
Ariel Berger:
I'll speak a little more about the U.S., because that's what I tend to do more of. We're a big organization; we tend to parse ourselves out one way or the other. But I think in general, having the ability to pivot to real-world data, to look at even the indication you want to start in, the marketplace for example, to see how crowded it is and whether it makes sense for development purposes, there's a lot of ways the data and the evidence can complement the trial. That's as opposed to when you get actively into how you can best support your trial through extra collection of information, through tokenization or other ways to link the patient, and ultimately using these data for a whole host of purposes, comparators, et cetera, to get that okay, and then the launch and commercialization.
Jeff Elton:
Okay, so let's maybe start with your first one, which was external control arm. And external control arm has been a construct that's been around and there have been approvals with a real-world data comparator, control or comparator kind of put together. It probably at least to some of my understanding may have been one of the first areas where real regulatory fundamental approval decisions were made where real-world data was a critical part of the ultimate decision there. Is that a correct understanding?
Ariel Berger:
You can't hear me nodding, so I'll say yes, exactly.
Jeff Elton:
Of course. And not every program is appropriate to use an external control as opposed to a randomized control. So where do you see this being used more and how has it evolved and how does the agency look at it now?
Ariel Berger:
I'll tell you what the official purview is, and I'll give you my opinion alongside it, because officially, I think the agency is a bit, I'll say, tepid. They're not cold to it, but you need the right circumstances. And again, I think that's mainly rare tumors or rare diseases. I think it's areas where it's unethical or impractical. Consider, for example, CAR T: to have a sham CAR T, or to randomize someone to a placebo arm. And then instances where the benefits are thought to far exceed the risks.
And I think it's because, and just to keep my epidemiologic hat on for a second, it's hard for a group of folks, and that's I think most regulators, to move past this idea of exposure not being randomly allocated, with potential confounding, channeling bias, et cetera. Even if you can design something very well, and I think that one can design something increasingly and exceedingly well in these instances, there's still this reluctance, I think, to break away fully from the randomized double-blind construct. Even though, and this is now me talking personally, once you let the horse out of the barn, it's hard to bring it back. And if it's okay for X, Y, and Z, why isn't it okay for A, B, C, D, E, F and G?
And I know a lot of folks are looking to replicate trials using external control arms in broader indications to see if they get the same answer. Because if you're getting the same answer, not necessarily the same numbers, but the same answer, the same directionality, there's no reason why you can't speed development across the board. It just doesn't make sense to me-
Jeff Elton:
Which is super important. Everybody, including the agency, wants to speed development and have high-value new medicines available to patients. So if I go back to your statement, you used the word tepid, and I'm assuming tepid comes from the fact that there is a gold standard, the randomized controlled design. However, if this is a very small rare cancer with an evidenced non-response to the currently available standard-of-care treatments, which seems to meet your criteria, it wouldn't be ethical to randomize at that point. Is that the sweet spot or the [inaudible 00:08:34]?
Ariel Berger:
For now, I think that's the sweet spot, and I think it's going to continue to be: those rare diseases, those rare tumors, those rare conditions. Or instances where it dovetails with, like I said, a place where a sham or a placebo is unethical or impractical. That tends to be where they say yes.
Jeff Elton:
And so if I go back and say you don't just neatly fall in and say this is going to just become an external control arm study, this is probably I'm sure a consultative process with the agency, is there a form of almost baseline analysis and contextualization of the program that's helpful to a sponsor as they're thinking about whether this is a good candidate? And if I want to actually have this conversation with the agency, how do I set that conversation up with the right foundation?
Ariel Berger:
That's a great question. I think the answer is yes. It starts with really understanding your disease, being able to point to the prevalence, being able to point to the incidence, having good data to back up sort of the natural history of the condition. And for folks to understand, it's less than 1% of the population. It's very severe, there's not a good prognosis. You can really do a lot with data to make that argument.
And what we tell our clients, above and beyond anything else: with your data, with your lit reviews, with everything you know about the condition, have those conversations early and often with the agency. They'll never, I think, tell you yes. They will tell you no, or at least, this isn't the direction we would think you would want to go in. But the more you can back your arguments with solid information, the more solid the footing you're on.
And exactly to your point, those are the conditions where you can point to the data and say, look how few people are out there, look how many problems we have identifying these patients and getting them into trials. And then to double that, it becomes very impractical against what you, I think, accurately called out as the drive of everyone, which is to get the right medicines into the hands of the right people as quickly as possible.
Jeff Elton:
Absolutely. So there's the sponsor themselves being able to make a determination: is this even worth having the conversation? There's the consultative conversation that may not give a distinct direction, but at least a lean in terms of receptivity or non-receptivity. And then there's the actual study design itself. When you think about an external control arm, as you're providing an active treatment arm to the population, what determines high quality? Or if you were going to lay out the considerations for an external control arm that increase the likelihood of ultimate receptivity to that study package, what are its characteristics?
Ariel Berger:
There's an old story where someone says, you think you're so smart, can you teach me the whole Torah while I stand on one leg? So the teacher stood up on one leg and said, don't treat others how you wouldn't want to be treated; the rest are just details. And I think here I can boil it down to something very similar, and that's mirroring. The better you can mirror that trial design, through sample selection, outcome measures, your covariate selection and identification, even the frequency of assessments, although that's the hardest, I think, to really mirror, the more likely you are to succeed. Assuming it's the right condition and everyone is okay with those parameters, the data sets that can mirror your trial design most closely are the ones worth pursuing.
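As an illustrative aside, the mirroring Ariel describes can be sketched as applying a trial's eligibility criteria to a real-world cohort so the external comparator resembles the enrolled population. This is a minimal, hypothetical example; the field names and cutoffs are invented for illustration, not drawn from any actual trial protocol:

```python
# Hypothetical trial eligibility criteria applied to a real-world cohort.
# In practice, outcome definitions, covariates, and assessment windows
# would also need to be aligned with the trial design.
def mirrors_trial(patient: dict) -> bool:
    return (
        patient["age"] >= 18              # adult population
        and patient["ecog"] <= 1          # good performance status
        and patient["prior_lines"] >= 2   # at least two prior lines of therapy
        and not patient["active_cns_mets"]  # exclude active CNS metastases
    )

# Toy real-world cohort (made-up records for illustration).
rwd_cohort = [
    {"id": 1, "age": 64, "ecog": 1, "prior_lines": 2, "active_cns_mets": False},
    {"id": 2, "age": 71, "ecog": 3, "prior_lines": 1, "active_cns_mets": False},
    {"id": 3, "age": 55, "ecog": 0, "prior_lines": 3, "active_cns_mets": True},
]

# Only patients who would have qualified for the trial enter the external control.
external_control = [p for p in rwd_cohort if mirrors_trial(p)]
print([p["id"] for p in external_control])  # → [1]
```

Real external control arms would go further, with baseline covariate balancing (e.g., propensity-score methods) on top of this eligibility mirroring.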
Jeff Elton:
Yeah, I actually really like the way you framed it. The image you just projected has the same rigor, or at least approaches the same rigor, that a normal RCT design would have, or at least that the active treatment population would.
Ariel Berger:
I'd say the data can support that same rigor.
Jeff Elton:
Yeah, that's pretty exceptional, and that can actually be good guidance. It tells you what the processes need to look like: the documentation of those processes, the preparation of the data, the relatively contemporaneous collection of those data, the same generalizability of the cohort that you're putting together. All those characteristics need to come together in the same way.
Ariel Berger:
Yeah, you have to have it. And I think the biggest challenge that I've seen from sponsors is they're developing treatments for new mutations, and those are mutations that are then hard to find in the data. So I think that's where they're struggling now: they can find the tumor, and there's a lot of detail on covariates, outcomes, et cetera, but can you find that right subgroup?
Jeff Elton:
So this may ask you to speculate a little. Think about the Inflation Reduction Act: it's changed pipelines, it's changed the priority areas of what people are taking into clinical development, and some of these programs seem to be getting narrower and more specific, in ways that lower the risks they see coming later if a broader-population product is going to be subject to greater negotiation down the line. Do you believe that may afford more opportunities to reasonably consider external control arms as part of the design?
Ariel Berger:
To me, it's still a little too new to weigh in solidly on one end or the other. I think what it informs is really understanding your population earlier and having that data strategy, where if you have an asset that's active in oncology, you maybe have to think five, six years down the road instead of two in terms of where you should start development, for precisely those reasons. There are external pressures, as opposed to just whether the thing is going to work. And so I think it's more about understanding the lay of the land and where you can hit home runs now that the field has changed a bit. But I still think it's too new to be that definitive just yet.
Jeff Elton:
Okay, good to know. So I want to go back to your second topic, which was tokenization. And you had a lot of energy in your voice as you made that statement. So maybe first, give it some definition. I've heard of post-trial tokenization as a way of doing ongoing follow-up surveillance, but I was thinking I was hearing maybe a broader application.
Ariel Berger:
Sure. And one thing I didn't mention in my introduction, and I should just in the interest of full disclosure and fairness, is that my company Evidera is owned by a CRO called PPD. And so for me, tokenization, and let me start now with the definition you've asked me for, is the ability to link a person, specifically their name, who they actually are, age, gender, et cetera, with all the information that's been collected on them in various secondary sources, across the globe when you think about it. It's not just healthcare claims, it's not just the EMR data; it's consumer buying habits, it's mortality data. You can get oncology data, you can get whatever has been collected in credit scores, in whatever databases it's living in. You link it with that unique identifier, or token, that's tied to that subject's PHI, or protected health information. And then you have to purchase the data, but then you get all of those variables on the person.
So for us, a lot of times it's about reducing site burden, patient burden, provider burden. By tokenizing and having the ability to collect all the patient's EMR data and claims data, all the diagnoses, vital signs, lab values, et cetera, and the dollars that are paid for care, or just the cost of care I should say, because we can do it in countries other than the U.S., you reduce the need to collect information directly from the person. That frees up the site, and frees up the patient, to focus only on what you absolutely must have: your PROs, your quality-of-life information, et cetera.
And your tokenization can build out all of your covariates, or most of them, shall we say, because some can be very hard to get through secondary data alone; all or most of your outcomes; and even some of your exposure, occasionally. So it really has the ability to reduce a lot of things and make things much more efficient, kind of like external control arms do as well.
But to your point, it most definitely enables long-term follow-up. I mean, we have a client now who is doing an evaluation of a screening tool, and they want to do all their outcomes by tokenizing, so they can find all those cases through linkages to EMR data and claims data and never have to bother the patient or the site. So it's a very powerful way to collect a lot of necessary information and really expand into a more 360-degree view of the patient than was traditionally enabled through typical data collection methods.
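To make the mechanics concrete, here is a deliberately simplified sketch of deterministic tokenization: identifying fields are normalized and run through a keyed hash, so records from different secondary sources can be joined on the resulting token without the data holders exchanging the underlying PHI. Commercial tokenization services use more sophisticated, proprietary matching; every name, field, and value below is hypothetical:

```python
import hashlib
import hmac

# Secret key held by the tokenization service, not by the data holders.
SECRET_KEY = b"hypothetical-tokenization-key"

def make_token(first_name: str, last_name: str, dob: str) -> str:
    """Normalize identifiers, then hash them with a keyed HMAC into a token."""
    normalized = f"{first_name.strip().lower()}|{last_name.strip().lower()}|{dob}"
    return hmac.new(SECRET_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# Two secondary sources hold different facts about the same (fictional) person.
emr_records = [
    {"token": make_token("Ada", "Lovelace", "1815-12-10"), "hba1c": 6.9},
]
claims_records = [
    {"token": make_token("ada", "Lovelace ", "1815-12-10"), "paid_usd": 1250},
]

# Linkage happens on the token alone: normalization makes the minor
# formatting differences ("Ada" vs "ada") hash to the same value.
linked = [
    {**e, **c}
    for e in emr_records
    for c in claims_records
    if e["token"] == c["token"]
]
print(linked)  # one linked record combining EMR and claims variables
```

The point of the keyed hash is that the token is useless without the secret, so linkage can happen at an honest broker without names or dates of birth changing hands.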
Jeff Elton:
So let me take it back to the beginning; we're going to break it down by phase. What I was hearing in particular when you said reduction of site burden is that some of the tokenized, and now federated, data that could come through the tokenization and linking could be part of screening the patient for study eligibility. Was I hearing that correctly, or is it around developing the study data package through more automation tools?
Ariel Berger:
I'm going to lean more on the latter, because the patient has to give permission, right? So if you don't have the permission, it's not part of the token. You have to consent the patient.
Jeff Elton:
So I have a consented subject at that point.
Ariel Berger:
Exactly.
Jeff Elton:
That's critical. So for you, the starting point is: I have a consented study subject who is now allowing me to integrate those data sources.
Ariel Berger:
Correct. I can still mine them for other reasons, but I can't link them to the person until they give me permission to do it.
Jeff Elton:
Okay. And I know in some of the recent FDA guidance they've even outlined multiple data sources, claims data among them, that are now allowed as part of that. But what I'm hearing you say is that data may come from the electronic medical record, from medical claims, or from financial databases where I want to be able to speak to social determinants and other characteristics. All of those would then be federated, definitively linked through a token technology, and could be part of the data package and submission.
Ariel Berger:
Could be. Yeah, you may not need the indirect cost data, for example. It depends on what you're studying, how important it is. Social determinants of health I think are becoming increasingly important. And that's one that you may have to worry about or think about how much you want to collect directly from the patient versus what's available.
But still, I think your claims and your EMR are really the backbone of any tokenization effort, because especially for trials, the EMR data gives you the deep clinical information you're going to need on labs, tumor information, whatever. But to be able to do economic assessments of those trials and to substitute out those economic models, which people sometimes have, I think, a tough time believing, so to speak, because of all the assumptions, and to instead link right to the claims: this was actually paid, this is how many times they went to the doctor, this is what it cost, this is what happened. And the ability to gather that detail through a CRF is, I think, impossible.
Jeff Elton:
So maybe take us through what types of studies where have you seen this used? And if just maybe even your own professional assessment, where do you think it would have the greatest utility?
Ariel Berger:
For tokenization specifically?
Jeff Elton:
Yes.
Ariel Berger:
Well, first and foremost, I think those economic assessments of trials. They're not really as regulatory-facing as they are payer-facing, but every payer is asking for them, so I think they're very important. They can also give you, I think, a head start with respect to safety evaluations, post-authorization safety studies, because a lot of those big signals, in terms of admissions, et cetera, can be picked up most easily in claims data, if that's the signal you're looking for.
But I think the other way I would think about tokenization is to enable long-term registries with or without PROs. You can build an entire secondary registry where you have a very light touch just to get the permission and away you go. We just finished actually working on the design of one in mental health where almost everything is coming from the EMR and claims data. Everything for the comparator arm is coming through secondary data alone, all linked back to the characteristics of the registry participants, but incredibly light touch on the patients themselves. Just a few PROs for a couple of months and then two, three years of data, all enabled through secondary sources enabled through tokenization.
Jeff Elton:
So I'm assuming that would give me some larger scale on the patient population being studied, or in this case the study subjects, and the cost is probably much more efficient to pull together. So I can actually do a little bit of both, which may in turn give more confidence in the ultimate analysis that I'm actually going to conduct, whether it's safety or outcomes or whatever else would be part of it.
Ariel Berger:
Yeah, I mean, I think you get a little less variability, you get a larger sample. And to your point, it's much more efficient, a lot less money. And even if you had to refresh the data, which these registries will require every six months or every year, it's still less expensive than bespoke data collection each time.
Jeff Elton:
So do you think, if I look at a category like ADCs, which are becoming a very active modality within certain solid tumor cancer studies, they're complex. They have different profiles, both in terms of outcomes and safety, than the individual entities they're deploying. We've seen 150 of them withdrawn somewhere during the course of a trial. And even later on, they do present with different toxicities and different adverse events that are sometimes controllable but still need to be monitored. Is a category like that a good candidate for ongoing safety surveillance almost built into the fundamental design, since it seems to be a characteristic of the class?
Ariel Berger:
I would think yes, and I would give an even broader answer. I think for any of these newly approved molecules, nobody knows, right? It's been studied in the fewest patients possible for the least amount of time required, because it's a very expensive proposition and it's very time-consuming. But to be able to just refresh a dataset and continue to evaluate, and to see what happens after two years of exposure, five years of exposure, 10 years of exposure, it's invaluable. And to your earlier point, it can be done relatively efficiently through the act of a single consent up front and then following those patients kind of passively, letting those existing systems collect the information and querying it on a regularly scheduled basis. It makes perfect sense.
Jeff Elton:
So I'm almost hearing you say that building this into the framework of the study design, building it into what has been consented, also affords you the greatest flexibility to make sure that the parameters you're collecting, you've already got the consent to collect. Because oftentimes, and again, these are experimental designs and these are innovative therapies that for the most part have not been in humans before, that would seem to be a prudent path for a lot of companies to consider.
Ariel Berger:
I'd go one step further, Jeff. I agree a hundred percent, let me start there. But I think it's about having a data strategy from inception. As soon as the thing comes off the bench into first-in-human trials, you've got to start thinking: where am I getting data from? And it's not all going to be clinical. And how can I best complement, support and expand on what I'm collecting clinically, in a way that makes the most sense for the development, but also ultimately the commercialization, of the product?
Jeff Elton:
That's great. So I want to go back to your number three category from the beginning, one you said could be potentially quite exciting and have very high utility, which is: I have an approved medicine, but I'm now doing an indication expansion. And in fact, for some of those indications I may be expanding into, there may already be data around their use, because of the community of practitioners. I tend to do most of my work in cancer care, and practitioners there tend to be very, very active in investigator-initiated and other studies, and you see uses that are outside the parameters of the narrowly defined approved label. But you made a comment that sounded like real-world data has an application as we start to look at those different and expanded indications around which data already exists. Say more about that and what your thoughts are?
Ariel Berger:
Sure, so there's a couple of things. One I've directly experienced; in others I've been peripherally involved, let's just put it that way. Working on an asset for neuropathic pain that ultimately got expanded to fibromyalgia, we were able to mine a lot of claims data to demonstrate, for example, the magnitude of what at the time was off-label use, but also the minimal projected budget impact of a new indication, because so many people were already on it for the new indication. So it's a way to, I think, mitigate some fears that a new indication is going to lead to a deluge of patients, because you may already be treating a lot of those patients. It gives you a much better sense of what that expected impact could be.
But I think you also have a tremendous potential head start in picking up outcomes: the patient type that gets treated with this thing, how they're experiencing the product, how the product is manifesting in them in terms of outcomes of interest. And all of that can be done before you even think about clinical studies. It will help inform your design; it may even enable you to petition for registries instead of clinical studies, for example. Or potentially this new pathway I mentioned, of one well-designed study plus confirmatory evidence, which could come right through RWE, real-world evidence.
So there's a huge, I think, potential, at least where off-label use is allowed, and that's not every country, to be fair, to see how the product is actually working in the indication you're thinking about. The real-world data could even push you into the next indication, quite frankly, if you see what providers are already moving on and thinking about. So there's so much that could enable a second, a third, a fourth indication, again relatively efficiently, before you even think about clinical studies, all through real-world data and real-world evidence as experience with the product accumulates.
Jeff Elton:
So as I'm listening to you: in the world of RWD, there are very large biopharma organizations that oftentimes have a wealth of internal data assets available to them, their own data science teams, epidemiological teams, et cetera. But then the majority, more than half of the pipeline of the industry, is biopharma, and a good portion of biopharma doesn't have the luxury of having all of these same assets available on demand inside.
So what you just described sounds like you wouldn't want to approach the design of a study for any follow-on indication without the benefit of taking a look at some of these data, whether it serves as guidance to yourself in the design of the study, or as a background package for why you picked the target population you picked and why you excluded whom you excluded, and everything else. Or whether I can bypass that altogether and create an alternative registry-based approach that could actually constitute part of the evidence. But all of that requires access to knowledge and expertise.
So how do you find people both being able to access that in the right way and go through those deliberations?
Ariel Berger:
It's not one answer. To your point, you have your large companies that tend to do some of this stuff; they have the resources to do a lot of it. Other people reach out because they need the help. I mean, this is something that we do every day, and I'm sure you do a lot of it yourself: you keep your finger on the pulse of what's going on. You do have to commission some work, to at least take an exploratory look, because spending a little bit up front may actually revamp your entire thinking and lead to a much more efficient design. The problem is when you don't want to spend a little bit up front, you get a long way down the road, and you realize you could have done it differently, or should have done it differently, should have pivoted a different way.
Everyone, I think, is interested in the most efficient way to bring something to the market. And I think this is one where, if it's a second indication, an expanded label, et cetera, your initial thought should be: I'm not doing anything until I see some data, just because I can. And a database analysis, no matter how elaborate, is at this stage nowhere near what the cost of a trial, or even a trial design, would be. Better to spend a little up front in terms of resources, time, et cetera, to make sure you're pointing in the right direction. At least you're arming yourself with information that will enable you to argue for the path of least resistance, let's put it that way.
Jeff Elton:
Yeah, I would entirely agree with you. I think the RWD component, even if you add it all together, is going to be less than 10% of the total. And if it influences the design and if it influences the outcome or the size, the duration, you're going to have a positive return on investment.
Ariel Berger:
Or it may obviate. It may even obviate the need for the trial.
Jeff Elton:
Or even obviate, even better.
Ariel Berger:
It achieves a very positive return.
Jeff Elton:
But we're still talking about a 10-, 20-, 30-times kind of return on that up-front investment. And to your point, probably the highest-growth, highest-new-demand area is actually clinical development organizations, both for first-in-human early phase as well as for some of the later-phase and follow-on studies. That is by far the biggest, and in fact, as we grow our breadth of data and lower its latency, it's primarily because of those demands.
Ariel Berger:
Well, I'll go one further with you, and you may have something on this, because I know you run a technology company; here, we do a lot of machine learning, a lot of data science. And I'd be very interested to hear your thoughts on how the data could even point to the next indication. Could you query a database? And could you have machine learning or AI say, well, it's been great here; we think now, based on these attributes in this patient population, here's your next big indication?
Jeff Elton:
So yes, we agree with that. And I think one of the reasons we have ourselves put together a series of data partnerships and federated different data sources, to get access to a full exome, transcriptome, and other portions of the clinical data, is that you want to understand who's responding, why are they responding, why are they not responding? And then you begin to get insights into who else may be a beneficiary of this.
Ariel Berger:
Exactly.
Jeff Elton:
And so these very narrowly framed hypotheses that have very strong form, data, and support behind them, that's actually what we think is a real opportunity. And way back, before I was at ConcertAI, in the McKinsey and Novartis parts of my life, I never liked this idea that we talked about attrition as the model for the pipeline of the industry. Attrition means it's survival, it means we can't predict. It's not an engineering approach, it's not really scientifically driven. The hypothesis is scientific, but the idea that we actually tolerate the failure rates as opposed to trying to predict success and have a foundation for that success, that's something we're spending a lot of time working on.
And that's also something I'm hearing from you: no matter what the phase is, and even post-approval for those expansions, having that data informing where you're going, and how those data constructs, put together, might even become the basis for moving a program forward in a more efficient way.
Ariel Berger:
Oh yeah. No, and I'd expand that too, I know we're not talking as much about it, but I mentioned it earlier, it's a key component of commercialization.
Jeff Elton:
Absolutely.
Ariel Berger:
Once the thing is approved, it doesn't mean everyone should get it, and everyone's looking for guidance because the trials aren't big enough. So understanding better the profile of a responder, a complete responder, a partial responder, and conversely, the profile of someone who's going to have a tough time tolerating the agent, is going to help you treat the patient. Even if you decide to go with the product, you'll know better how to support that patient while they're on the product.
Jeff Elton:
Who to avoid, or who to ask some really stringent questions about before you treat them, maybe because there are no other options. But on the other hand, to your point, and this is your tokenization question, if I can create that registry model and demonstrate the value, and value versus a range of comparables, that's a much stronger position to be in, whether it's government or private payer authorities or whatever the case may be.
Ariel Berger:
A hundred percent. But it's all rooted in a data strategy, and a data strategy that can ideally support through development, regulatory submission, but ultimately commercialization as well.
Jeff Elton:
So we're at the beginning of a new year, 2025.
Ariel Berger:
Very cold beginning.
Jeff Elton:
Very cold here in Cambridge, Massachusetts, in the ConcertAI podcast studio. It is a cold January; it is meeting its promise as a New England January. So maybe as you take a look out, and I know our field changes very rapidly, but as you take a look over the next two to three years, what are you personally most excited about in your part of the field? What are you most looking forward to?
Ariel Berger:
I'm really excited about the continued march of technology and how it's integrating into everything. And I'm not just talking about ChatGPT or generative AI; even tokenization, which was just a pipe dream a few years ago, has now opened the doors to a lot of very interesting thinking and ways to move things forward. And I just see that the more we continue down this road with things like 23andMe and everyone's personal information and health, the more ability you'll have to pull more and more of those patient data forward, to link them, to harmonize them, and to really get a better understanding of who these people are, and how who they are impacts the decisions they make and how they respond or not to the various things that are tried to treat them.
So I think, number one, that's really something to me. It's just been launching straight up, and it doesn't seem to be slowing down anytime soon. So I think that's pretty cool, and I think it's going to enable a lot of really neat stuff.
And then as a corollary to that, and this is more professionally what I see from what we do, it's more and more this integration of clinical and commercial. We talked about it throughout today: having that strategy upfront, and making sure that strategy is comprehensive, because again, that leads to more efficiencies. I can clearly make an argument to tokenize a trial, and that's going to address the regulator's safety concerns post-authorization, as well as the payer and health technology assessment agencies' ability to look at a financial assessment of the trial, for example, or even to look long-term and think about converting a clinical study to a pragmatic study post-launch. Having everything already tokenized, you're able to demonstrate not only is it good science to approve this drug, but it's good business for the payer and it's good health for the population.
And so much of that I think has to start early, in making sure you have that integrated planning, that comprehensive planning, and then you've got to revisit it when you go for your second indication, et cetera. But to do it right up front with all these tools at your disposal, I mean it's always challenging, but I don't think we've had a better runway for drug development and ultimately drug expansion than we do now. It almost makes me forget that it's 20 degrees outside because it's very exciting to be able [inaudible 00:35:36].
Jeff Elton:
What I'm particularly excited about as I'm listening to you, and I hadn't really thought about it in these particular terms, is this idea that you could actually have a life cycle management strategy. The fact is, we think life cycle management oftentimes gets done once you have confidence that a drug is going to launch. But actually, if I'm listening to you, some of the optionality of what should be integrated could be thought through much, much earlier. Even though these are like branching logic, if you will, and they remain options, if you don't do it upfront you lose that downstream, because it wasn't part of the consent or it wasn't part of this.
And so in a way, we may need to reconceptualize a little bit what the end-to-end process and its pieces and components look like. In fact, that's probably something we should come back to, and it could be its own focal area, although this one I think we should probably put into written form rather than podcast form.
So you said a number of things, and I want to give you the last word here. You talked about patient consent earlier on, and you also talked about burden and a few other things, and I think this is where your connection to the CRO side comes together. All the data and all the information: these patients are under the care of healthcare providers, there are research teams there, and then you have biopharma sponsors. So we're in an ecosystem, and some of the way we're talking about integrating data provides insights that are difficult even for the providers themselves to produce, just because accessing and manipulating and analyzing their own data sometimes isn't easy. That's not what EMRs were designed to do, et cetera.
So maybe as you're giving your final couple of key messages, what would you do to encourage research sites to be receptive, welcoming, and maybe even suggesting to sponsors? And what would you suggest to sponsors?
Ariel Berger:
Well, no one ever says no to a nice check, I can say that right away. But I think the bigger-picture thing is, and I've seen some of this myself, especially in a registry format: if you have your registry hat on, or as we've talked, if your clinical development strategy is such that you'll be hitting sites often and repeatedly through phase one, phase two, phase three, et cetera, the more one can automate and the more one can reduce that burden upfront, the more amenable the sites are to participating. I think they all are interested, or many are interested, in research in theory, but when it comes to actually doing it, it becomes a drag. It becomes very difficult to keep doing the same thing; a patient doesn't show up on time, it screws up your workflow, et cetera.
But I see a lot of potential in this idea of benchmarking, both for the patient, but also the site. I think the docs are going to want to understand how am I doing relative to my colleagues? And part of it I think is competitive pressure. We've all seen those advertisements, best provider in New England, best provider in Boston, best provider here. And to be able to say, my outcomes are better than the average, it's a nice thing from a quality perspective, or at least the perception of quality perspective.
But I think a lot of people, and I'm going to include practitioners, I'm going to include patients, they're in the dark. I know my reality and I know what I think is good, but I don't know, right? There's no national conference where every doctor in every indication gets together and shares everything, and they don't have time to read all the time. So to have it all, again, at the terminal, to be able to demonstrate here's how you're doing versus the whole group of sites that are in here, the clinicians who are in here, I think it could drive care in a good way. I think it would make docs a bit more interested. It would definitely make their patients very interested, I think, to log in and see how they're doing relative to other people.
But I think it's just that sharing of information for better outcomes. In addition to financial compensation, which is let's just say it's got to happen, the technology enables it more now than ever before, but it has to be part of those conversations.
Jeff Elton:
Yeah, no, that makes sense.
Ariel Berger:
Same with the patient. If the patient can check out their outcomes, even more than a newsletter, I think that's going to drive the patient too. Now, you may have a critique that it's no longer real-world clinical practice because they're getting amplified and juiced by the data. But at the same time, if we're all looking at this stuff, then it becomes [inaudible 00:40:04]-
Jeff Elton:
It's what evidence generation is.
Ariel Berger:
Exactly. And if you're worried about evidence-based medicine taking 10, 15 years to ripple through clinical sites, this is a potential way to expedite that.
Jeff Elton:
Totally agree. And I'm almost hearing that as a message for the sponsor: sponsors can present themselves in a different fashion, present different capabilities, be a "partner" as opposed to just a provider of a [inaudible 00:40:29] site.
Ariel Berger:
It's less transactional. More of a partnership model.
Jeff Elton:
And actually even across the sites, there's a little bit of a network effect that now you can introduce that wasn't there before. And so this is synergistic both for the care enterprise, as well as for the research enterprise in terms of taking that out there.
Ariel Berger:
And I think it will move, and this has been my pet topic for a long, long time, into the realm of these pragmatic studies. Because we have all these medications for all these indications, and no one understands what the best outcomes are, how much we should be paying, what the right pathway is. And these pragmatic studies can really help us understand, from a policy perspective, how we should treat populations in the most efficient way. And the data are there; it's just that people have to say yes to participate.
Jeff Elton:
Well, it's probably another conversation, but we always thought that the consent for care should almost include a consent for pragmatic studies built right into the backbone. Because if you can automate the running of the pragmatics and the registries, et cetera, I would agree with you, I think there's tremendous value. And it puts those data to work in a way that will have utility for treatment decisions and other things going forward. So I think that's a good thing to keep a lens on.
Ariel Berger:
From this studio, Jeff, out into the country.
Jeff Elton:
There we go. Well, there's always more topics at the end of the day.
Well, Ariel Berger, thank you so much for agreeing to be here today. Really appreciate the session, enjoyed your comments. And as always, I learned things and gained new perspectives. Thank you everyone for listening to the ConcertAI Podcast. Wherever you are, good morning, good afternoon, and good night.