Two J&J MedTech leaders shared advice to help medical device developers use real-world evidence (RWE) in FDA submissions.
Real-world evidence (RWE) took a big step forward recently when the FDA approved expanded indications for Johnson & Johnson MedTech ablation catheters.

For the first time, the federal medical device safety regulator approved a label expansion based on RWE from a retrospective study of health records documenting off-label use by physicians.
“The clinical evidence used to support the expansion of indications was based solely on an analysis of a dataset comprised of electronic health records from two hospital systems,” the agency said. “The FDA worked closely with the study sponsor to ensure that the RWE resulting from the analysis was both relevant and reliable.”
Two J&J MedTech executives who led the project — Anthony “Tony” Hong, VP of preclinical and clinical research and medical affairs for the cardiovascular and specialty solutions group, and Paul Coplan, VP and global head of medical device epidemiology — discussed their experience to help other device developers use RWE in the same way.

The study was a test case for the National Evaluation System for Health Technology (NEST), which coordinated the FDA and J&J MedTech project to evaluate the use of RWE in regulatory decisions.
J&J MedTech won approval of one expanded indication after a six-month FDA review period, though this included a stop in the review clock for Mercy Health and Mayo Clinic, the research partners in the study, to reanalyze their data to address FDA’s additional data requests. It would normally take nine to 12 months for a traditional premarket approval (PMA) process, including investigational device exemption (IDE) review.
J&J MedTech’s Biosense Webster ThermoCool SmartTouch ablation catheter was already approved for paroxysmal atrial fibrillation (AFib), but cardiologists were also using the device to treat persistent AFib. The J&J MedTech team compared their device against another ablation catheter already approved for persistent AFib — the ThermoCool SmartTouch SF (STSF) — in an analysis of records for 1,450 patients treated with either of the devices at Mayo Clinic and Mercy Health.
RWE also secured expanded indications for both catheters and other devices to be used for ablation to treat AFib without fluoroscopy, sparing patients and physicians from potential X-ray radiation exposure.
“There’s a lot of interest in using RWE more,” Coplan said. “The FDA has been particularly interested because there are often gaps in evidence, and sometimes I think that they see a medical need to get the label to be more consistent with what’s happening in clinical practice or what could be done to benefit patients. But they can’t get the evidence because it’s just too much of a hurdle to do a large clinical trial. So in that case, the RWE can actually help FDA, informing its public health role to provide patients and physicians with information that can guide the good practice of medicine.”

The following quotes have been lightly edited for space and clarity.
Which devices are good candidates for RWE?
Hong: “We’re looking at how physicians are using existing products that don’t have a pediatric label, for example, and understanding how that usage is safe and effective. This can be done for Class II devices — and even for Class III PMA devices. 510(k) Class II devices seem to be the most obvious. You don’t need to run a large, multicenter 500-patient study, but that doesn’t mean that the evidence isn’t needed from a product adoption perspective. So certainly for Class II devices, as the product is rolled out and it is being used in the real world, gathering that data is very powerful for demonstrating the utility of that device. With Class III, you’re already running large IDE trials to demonstrate the safety, effectiveness and efficiency — but with RWE, I always say if I ran a 500-patient IDE but can gather 2,000 patients’ worth of data in a real-world setting that corroborates it, the data becomes more powerful. And it is that usage and standard of care that ultimately will change guidelines.”
Coplan: “For the 510(k) pathway, getting approval using clinical or nonclinical data has a lower hurdle with the FDA than reimbursement approval by the U.S. Centers for Medicare & Medicaid Services (CMS). In contrast, for the PMA pathway, getting approval using clinical or nonclinical data has a high hurdle with the FDA, and that data tends to be sufficient for reimbursement approvals by CMS. Often, the RWE is most useful where the hurdle is highest: for devices using the 510(k) pathway, RWE is useful for CMS reimbursement decisions, and for devices using the PMA pathway, RWE is useful for FDA approvals.”
Make sure to define your goal
Hong: “You’ve got to start with the end in mind. What is it that you are really trying to achieve and why? That finish line allowed us to think about the evidentiary hurdles and the evidence channels that we can use. We can run company-sponsored IDE studies, but we started pushing the boundaries in terms of leveraging real-world evidence and asking how the usage of a product in the real world corroborates the IDEs that we run, and is that an alternative channel of evidence that can be used to cross that finish line? That’s when we started partnering with Paul and his team and saying, ‘Hey, let’s think way outside the box here.’ We knew that physicians are using our product for persistent cases — that’s not on-label. For STSF, we actually ran a persistent study and got the label, but given the fact that ST is similar to STSF, is there a way in which we can do this? Data is king, and it is the power of the data that’s going to help convince the FDA that the RWE is valid in making label changes. It’s going to be very important to think about that finish line and what type of data you need and why.”

Coplan: “The evidence strategy needs to be developed for the key questions or key themes that need to be demonstrated for the product to get approved by FDA and approved by CMS and approved globally. Once the key research questions are identified, then the question is do you need to use clinical data or can you use RWE? If it would be feasible to use RWE because the product has been used in the real world, then the next step is to assess the data sources that capture product usage and have the data quality to meet the rigor that FDA requires. Once the sample size availability and data quality questions have been addressed, then the next step is to identify the best study methodology to address the research question using that particular data source. That’s the logical flow I use to get to a fit-for-purpose study using RWE.”
How to pick your RWE partners
Hong: “Internally, it goes back to the alignment on what that finish line is and what internal capabilities and functions need to be a part of this. And it’s not just clinical and medical — you have epidemiology, you have health economics and market access, and all of these groups need to be aligned. Internal alignment and partnership are absolutely critical.”
Coplan: “We submitted a proposal to the coordinating center of NEST. NEST has a network of about 21 partners that are probably the best institutions for medical device RWE research. The initial selection of NEST’s partners came through involvement in PCORnet (the National Patient-Centered Clinical Research Network). When we submitted our proposal to NEST, NEST then approached the 21 partners and asked who was interested in partnering on the study we had proposed. One of the criteria that the research partners had to affirm in order to be a partner was that they had to have use of that medical device in their healthcare system, because if they didn’t there wouldn’t be any relevant data.”
Ensuring RWE data integrity
Coplan: “We got some excellent research partnerships with very good academic researchers and well-established databases that were fit-for-purpose for our research. The first step was to validate each of the data sets according to a number of criteria that FDA required. Some of them were data quality steps, and then we submitted that to FDA, and they had additional requests for data quality validation. The feasibility and validation work was done through NEST funding and support, which I think is important because that data quality and validation work takes quite a bit of time. Once we finished the data quality work and the feasibility work, we went to the actual hypothesis testing stage of the study, all the time staying in touch with FDA.”
Coplan: “A key part of success in RWE is making sure you have good algorithms to identify your study outcomes, your cohort identification, your safety endpoints. Say if you’re doing a registry study, you typically can collect the data elements that you want more proactively from within the study. If you’re using a retrospective analysis of secondary data — data that was collected in the conduct of clinical practice, information that was either used for billing purposes or electronic health records to help with the coding for reimbursement, or so the next time the patient comes in there’s a clear record of what the problem was and how it was treated — you rely a lot on the codes. The first thing is you really have to understand the codes and spend time on the codes to make sure that you have the right ones — we have ICD-9 codes, ICD-10 codes and soon ICD-11 codes — you have to be familiar with all of those codes. The next thing we did was a chart review for 10 or 15 people. We had a specific code. One of the endpoints was stroke. So then we identify people with stroke: 15 at Mercy, 15 at Mayo Clinic, and then do a chart review to see if they really did have a stroke. If not, what was the problem? Then we’d compute a positive predictive value to compare. With stroke, it had a positive predictive value around 50%, which is too low. Normally you need around 80%. The problem was that a lot of people who had a code for stroke had previously had a stroke and were coming in for post-stroke care. So we then had to figure out, through a series of codes, how to differentiate between a past stroke and a current stroke.”
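The chart-review check Coplan describes reduces to a simple calculation: of the charts flagged by a diagnosis code, what fraction were confirmed on review? A minimal sketch of that arithmetic follows; the confirmed counts here are illustrative placeholders, not the study’s actual figures.

```python
def positive_predictive_value(confirmed: int, reviewed: int) -> float:
    """PPV = code-flagged cases confirmed on chart review / all cases reviewed."""
    if reviewed == 0:
        raise ValueError("no charts were reviewed")
    return confirmed / reviewed

# Hypothetical tallies: 15 stroke-coded charts reviewed at each site,
# with clinicians confirming only a subset as true current strokes.
review = {"Mercy": (8, 15), "Mayo Clinic": (7, 15)}

confirmed_total = sum(confirmed for confirmed, _ in review.values())
reviewed_total = sum(reviewed for _, reviewed in review.values())

ppv = positive_predictive_value(confirmed_total, reviewed_total)
print(f"Pooled PPV: {ppv:.0%}")  # 15/30 confirmed -> 50%, below the ~80% typically needed
```

A pooled PPV near 50%, as in the article’s stroke example, signals that the code alone is not fit for purpose and the algorithm needs refinement (here, separating past strokes from current ones) before hypothesis testing.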
Hong: “The consistency of data is absolutely critical. Garbage in gives you garbage out. When you are gathering RWE, every physician’s usage of a device can be very different. As we think about the partners we’re working with and the data that’s out there, understanding how the data was collected and what was collected really ensures that your analysis is going to be meaningful. Otherwise, the heterogeneity of the data will be such that it’s going to be very difficult to make heads or tails out of it.”