The ConcertAI Podcast | Real-World Data in Oncology: Navigating the New Frontiers feat. Jennifer Rider

Real-World Data in Oncology: Navigating the New Frontiers

Jeff Elton:

Welcome to the ConcertAI Podcast, season two. In this episode I'm joined by Jennifer Rider of ConcertAI's RWE services area. Jennifer has a long history in academia, having been at Harvard Medical School, the Harvard School of Public Health, and the Boston University School of Public Health. She's a ConcertAI returnee and, from that perspective, very special. But she's also spent a considerable amount of time on regulatory applications of RWD and RWE.

The FDA guidance came out only a few short months ago. It's now a field that is accelerating, and the applications are also diversifying. With that, we really want to [inaudible 00:00:49] the field: the state of the field, how we think the field's going to evolve, and where we think both sponsors and regulatory bodies are going to see the real value of regulatory RWE.

Jen, I'm going to let you do a bit of an introduction of yourself, because I think you've just had a great career, well before you were here at ConcertAI. Tell us a little more about that, and then we'll start going into some of the questions.

Jennifer Rider:

Sure, thanks, Jeff. Thanks for having me. My name's Jennifer Rider. I'm a cancer epidemiologist. I did my doctoral training at the Harvard T.H. Chan School of Public Health in Boston and stuck around academia for quite a long time, at Harvard and then at the Boston University School of Public Health. My research was primarily in the area of prostate cancer.

Jeff Elton:

When you left academia, you first came to ConcertAI. You had a long academic career, and you'd done a lot of what at one point probably wasn't even called real world data research: just health-derived data, secondary-use data for research. How did you think about leaving the context of academia and coming into another, more research-centric environment?

Jennifer Rider:

Yeah, it’s a good question. It felt like a big leap at the time. I had spent more than a decade in an academic environment. I was determined to stay within the field of oncology, and that was certainly one of the things that attracted me to ConcertAI. Then you’re right. It wasn’t the entire field of real world data. It was new nomenclature at the time, and I didn’t have any examples of other cancer epidemiologists who I had trained with who had made that move, but ultimately it was a great decision.

Jeff Elton:

Well, I would agree, it was a great decision, and lots of people were beneficiaries of it. One of the reasons I was very excited about having Jennifer on the podcast, aside from the fact that she's here at ConcertAI, is her work in the field. Prostate cancer itself now has a whole range of new treatment approaches, and in fact urological cancers generally have been going through, it's hard to even call it a renaissance, but a period in which treatment strategies have changed and evolved considerably. So it's an area of quite intense research and methodological development. Your own background as a scientist, plus the nature of the field you're in, I think has exceptional relevance.

Jennifer Rider:

Yeah, I couldn’t agree more. I think for the urological cancers, there’s so much promise for real world data. I began studying prostate cancer because there were so many interesting clinical issues happening at the time, and that continues to be the case.

Jeff Elton:

You were one of the ConcertAI originals and came here very early in our company history. You went to another RWE-associated firm, which we hold in high regard, and came back. So you've seen a bit of the arc and evolution of the field of real world data, and sometimes people talk more about real world evidence, because it's the application of the data to questions of significance. If you were to summarize how the field has changed and matured over the last three to five years, how would you characterize that?

Jennifer Rider:

Yeah, well, I think the first thing is that the availability of so many rich data sources has drawn a lot of talent to the field. Many people training in biostatistics or epidemiology find themselves in this world, unlike when I graduated. All of that talent has really led to improvements in methodology. Just as an example, comparative effectiveness studies are now being envisioned more through a target trial framework, which is relatively new within the last decade but has really changed the field. Then of course the other way it's changed is that we now have more formal guidance from regulatory agencies on how RWD can be used to generate evidence.

Jeff Elton:

I'd like to spend a little more time on that, because this was also several years in the making before the guidance came out: there was a comment period on the guidance, and then it became formalized. You have Rob Califf now back there, who has been quite outspoken in his sense of the value of applying RWD to a range of regulatory questions.

I think both the work you did at ConcertAI and the work you did in between also had some regulatory implications. Maybe explain a little bit: what are regulatory applications of real world data, versus the non-regulatory uses of real world data that we would have seen go through peer-reviewed journal formats and things of that nature? How would you distinguish those two categories?

Jennifer Rider:

Sure. I think even within the category of regulatory real world data, you could probably think of a couple of different categories. The highest bar would be real world data that's going to contribute to the substantial evidence standard for effectiveness. There, only certain situations would qualify, and they really don't have to do with the data but rather with the actual disease indication and whether clinical trials are ethical or feasible for that particular question. But then there are other applications where real world data could be used to support regulatory decision making in a supportive role. For instance, a well-conducted natural history study could be really important and provide context for findings from a clinical trial.

Then there are the specific data-related issues: making sure the data has the coverage required to answer the question, and that there's proper verification. All of those things are also important when we're doing studies intended for peer-reviewed publication, but there's more room there to interpret the findings in the context of any limitations. You could conduct a study that was a valuable addition to the field and still had some limitations in terms of data, but there's just not the same appetite for that when we're talking about regulatory use cases.

Jeff Elton:

So if I'm a sponsor, and here I mean a pharmaceutical company's clinical development organization, when would I typically start thinking about where real world data may have the most utility?

Jennifer Rider:

Early. It's interesting: when you read the FDA guidance documents, a very common theme is to engage the FDA early if you are considering using real world data to support decision-making. In fact, not just engaging early, but also allowing room for input and discussion around how that data would be used. Historically, maybe it was tacked on at the end, and we now have a number of examples, even within oncology, where an external control arm was submitted as confirmatory evidence but was not actually considered because of concerns about differences in the patient populations or how outcomes were measured in the real world sample.

Jeff Elton:

So we have our, what I'll call more heritage, RCT way of working, but real world data may begin to provide utility in a lot of ways, and I want to come back to some of your observations. We almost have a parallel process now that needs to be thought of as its own process with its own engagement model. As opposed to, and I'll be a little flippant about this, "a bolt-on at the very-

Jennifer Rider:

That’s right.

Jeff Elton:

… tail end," that seems to reinforce some parts of the storyline that developed as a consequence of the RCT pace of the process.

Jennifer Rider:

That’s exactly right, yes.

Jeff Elton:

So if I go back, there are other, what I'll call FDA objectives. It appears that some of the documents put forward a less formalized mandate, but part of it is: can the trial population itself be more representative of the ultimate population that may be treated with an approved therapeutic? Because sometimes the trial may have been run only at large urban academic medical centers or things of that nature. So there have been some other objectives, whether around different ethnic and racial groups or sometimes perhaps even economic groups, which are a little harder to define.

But if there are subgroups that may be disproportionately negatively affected by a disease, are they represented in the trial data itself such that there could be sub-analyses? Given those multiple layers of goals that both sponsors and the agency tend to have, is there a way to start bringing real world data into that dialogue, to start doing framing very, very early in the process, even as the study design is evolving?

Jennifer Rider:

Yeah. Absolutely. I think just going back to the example of a natural history study, this is something that could be done very, very early during the drug development phase. It requires starting early because they can take some time, but then it’s really essential for interpreting the results of the trial and understanding if the trial control group represents what would’ve happened to patients who were untreated in a larger, more general population.

Jeff Elton:

When you talk about the data itself, one of the features that gives confidence in the results of registrational trials is the randomization component, right?

Jennifer Rider:

Yes.

Jeff Elton:

And the nature of the control. We don't give up controls easily, even if it is an external control, and sometimes it's even complemented by different factors. So when we start thinking about real world data, which comes with all sorts of expressions like missingness, messiness, and other characteristics, how do we start thinking about that? You made the comment earlier, even in the opening, that we now have much broader data availability-

Jennifer Rider:

Yes-

Jeff Elton:

And I’m assuming broader means more sites, more areas, greater representativeness in that particular data.

Jennifer Rider:

That is absolutely true. In terms of depth and breadth, those data sources are growing. And you're right, that gives us an ability to make sure that patients receiving certain treatments are more comparable. We can't ensure that comparability if we're not collecting and measuring the variables that might vary between the treatment groups.

Jeff Elton:

So now I have this substrate of data that's coming from more settings, and by that I mean academic NCI-designated centers, maybe regional hospital systems, and community providers; oncology has its own layout of where care takes place. So now you have greater breadth and depth. But as you start looking at some of these trial designs, they select ever more narrow populations. So even if you start off with greater breadth and depth, it reduces down to very small numbers very quickly.

How do you handle questions of representativeness, given that the trial population is very narrow and you're selecting down very narrowly to make sure that ultimate population truly is representative? Because what we're looking for, I'm presuming, is representativeness back to the standard of care population: when I'm comparing this medicine against the other standard of care treatments and looking for evidence of therapeutic benefit or a safety profile, it's against that particular standard.

Jennifer Rider:

Yeah. This is really where some of the key value of real world data comes in. It’s understanding how these drugs perform in much broader populations, and that may not happen directly as part of the drug approval process, but still critically important for understanding which patients benefit most from certain treatments.

Jeff Elton:

So I’m assuming from what you’re saying… Let’s say it’s a standard RCT design, you do have a control population, but it sounds like you’re actually even implying that the real world data could in fact inform what the appropriate control population would be.

Jennifer Rider:

That’s exactly right. I think that having say a natural history study or study of long-term outcomes of patients receiving the standard of care can help contextualize the results from the trial in a really important way. And so it doesn’t remove the need for that trial control group necessarily, but just provides a different data point and some clues as to how this drug is going to perform out in the real world.

Jeff Elton:

And oftentimes we hold real world data to the standard of RCT data, where each cell tends to be complete and there are source data verification processes that go through to assure we have a matrix of different variables with no missing areas or errors or things of that nature.

But are there ways to turn that paradigm around a little and provide more confidence in the RCT data itself based on what the standard of care data looks like? Meaning, if I'm looking at the population that ultimately participated in the trial, does knowing where those study subjects may have arrived from, relative to what the standard of care looks like, aid my confidence in the interpretation of those data a bit more?

Jennifer Rider:

Yes. I keep harping on the natural history study, but as an example, if you can demonstrate that there's not a lot of variability in outcomes across different patient populations, then you can be more confident that the results you identify in your very targeted population are going to translate to the broader population. So that's just another example of how real world data can contextualize your findings.

Jeff Elton:

So if we take a look at it, every year, depending on the year, there could be anywhere between nine and 15 programs that get breakthrough designation. Usually three quarters to seven eighths of those are oncology-related.

Jennifer Rider:

Yes.

Jeff Elton:

At least that seems to be what recent history reflects. Those programs come with a requirement for ongoing study, because they were accelerated and certain efficiencies were allowed given the potential value of those therapeutics for the ultimate patient-

Jennifer Rider:

That’s right-

Jeff Elton:

Population. Where do you see real world data contributing to some of those post-approval, breakthrough designation program requirements? The nature of the required research allows a range of different research types, and that's a discussion with the agency. But it's also an area where the agency has come back and indicated that less than a third of the programs that received that designation completed that research in the requisite time, and that it was going to come back and encourage compliance.

Jennifer Rider:

Yes. I have also heard that statistic and it’s quite interesting. This is another area where thinking about this early and having the real world study be part of the plan from the outset is really critically important. And from my understanding of how things are moving, I think that that will be potentially required more often.

Jeff Elton:

One of the areas that has evolved a lot over the last three years, certainly for us, where we've been trying to be very active, and where our biopharma innovator partners have sometimes really brought us along, is this notion of multiple modalities being linked together. Sometimes this uses some classical biostatistical approaches, but you're also starting to see different analytic tools and methodologies come together.

And I was at a conference recently where Rob Califf was speaking, and then I was reading something that paraphrased and quoted what was said there. He also made reference to multimodal data, and he must've used the word causal multiple times. My interpretation, and again this didn't come from a direct conversation with him, was that causal meant the conclusiveness of association, that this causes that, deterministic linkages as opposed to correlative ones. And again, this is a fast-moving area, and the top of the agency is not always the way the entire agency moves. But these are trends that seem to be picking up momentum. How do you see that influencing and changing the world of RWD?

Jennifer Rider:

Well, we might've been at the same meeting, because I recall that as well.

Jeff Elton:

Highly likely, actually. There aren’t that many meetings.

Jennifer Rider:

But I think you're right. This idea of causal methodologies really starts with asking causal questions. What we're talking about there is the who, what, when, where, and how of the question: making sure that when you're framing your research question, you're thinking about all of those things, and also about who would not be included in that study.

We know that important differences in results between randomized trials and real world or observational studies can come simply from differences in the question we're asking. So I think the target trial framework helps address this, making sure that we're designing and analyzing our study in a way that is consistent with the trial we would've wanted to conduct.

Jeff Elton:

We have a partner we work with that you've probably not yet had the pleasure of interacting with: Caris Life Sciences. Some of the molecular data they collect on solid tumors represents a whole exome and whole transcriptome, and sometimes also includes whole slide images, which in other language is called digital pathology. And one of the features… Again, these are all early analyses, but some have now progressed to the point where it's part of their interpreted report, where you can find features in the transcriptome about a patient's most likely positive response to a particular treatment.

In this case, it was colorectal cancer and FOLFOX versus FOLFIRI: different patients would have a higher likelihood of response depending on the order of those two regimens, and that response was predictable. I say that because, given the nature of what's in some of these data sets, you can discriminate not just who responded or didn't respond, but actually why. And again, this is early, but it will scale rapidly. How does that start to change how we frame a design, or a question, or a pre hoc or post hoc analysis?

Jennifer Rider:

Yeah, it's so interesting, and I think a great example of why people like me would stick around in the academic world: to be able to answer questions like this. And now we can do it with data that is not only accessible but oftentimes retrospective. We don't have to prospectively design studies to look at-

Jeff Elton:

Super important point-

Jennifer Rider:

Novel biomarkers. The time to discovery is shortened substantially. So this is really one of the things I’m most excited about over the next few years.

Jeff Elton:

So, as I'm listening to this, I like your way of putting it: historically we would've said this required prospective validation, and in fact it would've been framed as a hypothesis, whereas now we can move it through post hoc, retrospectively. But do you see those retrospective analyses laying out a framework of insights, with a level of confidence around them, that might even allow us to accelerate some of the prospective questions?

Jennifer Rider:

Absolutely. And I think this is the type of work that’s now informing trial design.

Jeff Elton:

Super cool. Yeah.

Jennifer Rider:

Very cool.

Jeff Elton:

Yeah, I want to go back to the nature of studies themselves, because this has always been a bit of a conundrum for me. If you take US-based biopharma, and it varies a little by company, it could be that for their phase 2b/3 studies anywhere from 50 to 75%, sometimes up to 80%, of the study subjects may be ex-US. If you look at earlier stages, again varying a bit by sponsor, a phase 1b/2a is more often US-centric, and those studies are obviously smaller too. And when you think about real world data, some of the work we've done with US patient data for real world data purposes has sometimes been used with ex-US authorities as well.

Jennifer Rider:

Correct.

Jeff Elton:

And that was always explained by the sponsor, even back to us: actually gaining access to data sets ex-US has always been challenging to-

Jennifer Rider:

Absolutely-

Jeff Elton:

Kind of bring that together. Some of this may be your own exploration and something that we do, but as we think about applying real world data, and most of it's probably to phase 2b and 3 studies where substantial investment is placed, when you have US data in a global study, some of that US data will have application for a US regulatory authority and for ex-US regulators. How do you see that evolving over time? Is that changing? Is it still super complex to get ex-US data, or are there ways we can give more confidence in the US data such that its application can have higher receptivity in those other jurisdictions?

Jennifer Rider:

Yeah, my impression is that it is still complex getting access to ex-US data, and oftentimes the best path is still through an academic partnership. For instance, I spent a couple of years living in Sweden and spent a lot of time working with Swedish registry data. I think that's a great example of something an academic institution can do that otherwise might be a roadblock. But what we really need to know is the extent to which any differences in those populations are going to impact our estimates of treatment effectiveness.

And I do think there are some indirect ways to get at that. I have heard some really interesting talks about the use of synthetic data, actually, for this purpose where you could potentially predict what would happen in patients with a different set of characteristics and background risk. So I’m not sure that that’s exactly ready for prime time yet, but I think it’s a really interesting idea, using all of the data that is available to us.

Jeff Elton:

It's actually a very interesting idea, because it's almost like you could start doing different health-system versions of a digital twin.

Jennifer Rider:

That’s right.

Jeff Elton:

Because in the United States, we know that depending on which agent somebody may have been exposed to in earlier lines of care, resistance may have built up. But not all agents are available in all parts of the world.

Jennifer Rider:

Absolutely.

Jeff Elton:

So you may in fact have very different patterns, and not everyone here had the same pattern of exposure to different therapeutics as part of that standard of care. But going into that subset of data, you almost could begin to create mirror data sets of the patient population in that other system.

Jennifer Rider:

Yeah. And tease those things apart.

Jeff Elton:

That's a very interesting idea, something we should actually follow up on, and we'd invite any listeners who have a great idea about that to do the same. All right. Every headline, everywhere we go, at least in the flow of things that get pushed to me electronically every single morning, there's everything about predictive AI, generative AI, et cetera. And obviously we're a company that has AI in its name, and we've been involved in the field since 2017.

And recently, people have shown me, or I've tripped over myself, tools that do protocol authoring, so I don't even need to write the protocol anymore. There are data assessment tools that are semi-automated, all the way to something that literally drafted a manuscript that would be "ready for review before submission" to a peer-reviewed journal.

Now, we could all say, you've got your PhD, we all had our research phases of our lives, and it's a little more subtle than that: not necessarily everything retrospective can inform everything prospective. But all that aside, it's also very clear that the speed with which we can move through highly redundant tasks and vast amounts of data, and present it in increasingly sophisticated ways, is profound. When you see this, how do you think the field will change? To be super flippant about it, and I've talked to all sorts of peer colleagues who are worried even about their kids' careers, do we think that cancer epidemiologists and health economists will go the way of medical coders and people who write job descriptions, which I think are almost all GPT-drafted? People say the accounting industry will see staff accountants affected, et cetera, or even coding; people are predicting the same for paralegals. Where do you see these things positively beginning to shape and change how this field operates?

Jennifer Rider:

I think you touched on it. There is absolutely going to be a change in efficiency in terms of gathering and synthesizing information. As a cancer epidemiologist, I'm not too worried about my job security in the near term, and that's because the work requires a mix of methodologic expertise and deep subject matter expertise. So I don't see us disappearing as a discipline for a while. But another example is the synthetic data I mentioned. Is that going to be regulatory grade anytime soon? Probably not. But it could absolutely be hypothesis generating, and it could be used to de-risk certain real world studies. There's a huge amount of potential there.

Jeff Elton:

So historically, and I think this was actually run out of the Harvard School of Public Health, there was the Sentinel program, which used claims data from most of the major payers in the country. It was funded by the US FDA, but through a center that I think was based at the Harvard School of Public Health and administered there.

And its value was that you could take a signal, or a variety of things that may have come through, and do an interrogation and investigation.

Jennifer Rider:

That’s right.

Jeff Elton:

But with generative tools, if you think about the scope and scale of the data that can be accessed, do you see some of these backdrops of our healthcare system having different tools and infrastructure that can almost be a beneficial, not a malevolent, surveillance layer, and bring forward much more quickly, with a fact base, something that could then take more formal interrogation?

Jennifer Rider:

Absolutely. I think for process and quality improvements, there's just huge potential, because we could gather and synthesize that information much, much more quickly. So I'm excited to see how that plays out.

Jeff Elton:

So in the next three to five years, certainly within our organization and some of the ones we partner and work with, you'll be able to advance some of the areas you believe will add value to the field. What are you most excited about for the next two to three years?

Jennifer Rider:

I am really excited about the idea of more comprehensive, truly multimodal data that allows us to address use cases that really have not been possible up until now and improve the quality of our methodology. I think the field is becoming increasingly interdisciplinary, and that’s something that really excites me. There’s just always an opportunity to learn from people in other fields.

Jeff Elton:

So I have to ask, even as kind of a hard question here, because we're the same but also a very different organization in many respects: what drew you back here?

Jennifer Rider:

Yeah, I think it had a lot to do with the oncology focus. I was able to do a lot of work in oncology in my prior role, but it became clear to me that this is really where I want to be and where I feel like I can have the greatest impact. I adore my ConcertAI colleagues and they were a big draw coming back as well. And then ConcertAI also just has a really interesting culture. There are tons of opportunities here for people who want to take them on, and that works for me.

Jeff Elton:

Well, it will work for us too, so thank you. I think the field, and certainly our partners and others, will benefit from having you back, and I'm sure we'll benefit from the experiences you had outside of ConcertAI. We're always trying to be informed by things that put a different lens on what we need to be thinking about and doing. This field changes so much, and if you don't challenge yourself to make sure you're working on the right things at the right time with the right impact, then you don't have the relevance you need.

Jennifer Rider:

Absolutely.

Jeff Elton:

So, Jennifer Rider, Jen, thanks so much for being here today. I'm so happy to have you as our first podcast of 2024, and I look forward to the opportunity to do another one. We'll see what happens between now and ASCO, and we'll see if we can have another one by the time we get there. So thanks so much.

Jennifer Rider:

Great idea. Thanks Jeff.

Jeff Elton:

Jennifer, thank you again so much for being part of the ConcertAI Podcast and kicking off our second season. This is a super important area. The fact that real world data has advanced to the point of evidence generation with an array of different regulatory applications is going to change both the field of RCT studies and how RWD is thought of. And the very fact that the FDA itself is spending so much time on it further punctuates that. So again, thank you for your observations and thank you for being part of the podcast.

Want to learn more about ConcertAI’s initiatives in this or any other area? Please visit us at www.concertai.com. I know you always have a choice as to how you spend your time. We really appreciate you listening to the podcast. So wherever you are, good morning, good afternoon, and good night.