This year, Putnam participated in the World EPA Congress in Amsterdam and identified areas of interest to medical affairs, including evolving thinking about integrated evidence plans (IEPs) and new frameworks, methods, and tools in development for health technology assessment (HTA) of AI. We also presented our own thoughts on leveraging large language models (LLMs) for real-world evidence (RWE), summarized here.1
Integrated Evidence Plans (IEPs)
Why are IEPs more important today? Three reasons stand out: 1) generating evidence has become more efficient; 2) evidence generation must address the needs of all stakeholders (policy makers, payors, HCPs, etc.) and support key strategic objectives in an efficient and timely way; and 3) growing acceptance of RWE for regulatory decisions increases opportunities and value creation.
Unlike randomized controlled clinical trials, which are designed to support regulatory approval, RWE offers a return on investment (ROI) that may vary across markets and requires careful assessment. For smaller countries, IEPs can ensure representation by clustering data according to specific archetypes such as region or ethnicity. Upskilling local teams and involving them early in the IEP process ensures that RWE meets local needs efficiently.
There are financial challenges associated with IEPs, such as budget constraints and the risk of frontloading capital investment. These challenges need to be addressed with a focus on the ultimate impact of evidence generation. Pressure-testing plans with external stakeholders can support important trade-off discussions. Opportunities to improve IEPs include leveraging social listening, generative analytics, and natural language processing (NLP) tools (e.g., AI applied to patient-reported outcomes (PROs) through NLP and text data mining, AI-based risk-prediction tools for health outcomes, and predictive extrapolation of overall survival (OS) data), as well as external input to better meet the needs of various stakeholders.
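To make one of these methods concrete, the sketch below shows a minimal parametric extrapolation of OS data in Python using the open-source lifelines library. The data, the 24-month follow-up window, and the choice of a Weibull model are illustrative assumptions only, not a description of any specific tool discussed at the congress.

```python
# Minimal sketch: parametric extrapolation of overall survival (OS) beyond
# the observed follow-up window. All data below are synthetic and the
# Weibull model choice is an illustrative assumption only.
import numpy as np
from lifelines import KaplanMeierFitter, WeibullFitter

rng = np.random.default_rng(42)

# Synthetic trial data: survival times (months) with administrative
# censoring at 24 months of follow-up.
true_times = rng.weibull(1.3, size=300) * 20          # latent event times
observed = np.minimum(true_times, 24.0)               # censor at 24 months
event_occurred = (true_times <= 24.0).astype(int)     # 1 = death observed

# Non-parametric estimate within the observed window.
kmf = KaplanMeierFitter()
kmf.fit(observed, event_observed=event_occurred, label="Kaplan-Meier (observed)")

# Parametric Weibull fit used to extrapolate beyond the trial horizon.
wf = WeibullFitter()
wf.fit(observed, event_observed=event_occurred)

# Extrapolated survival probabilities at time points past follow-up.
horizon = np.array([12, 24, 36, 48, 60])  # months
extrapolated = wf.survival_function_at_times(horizon)
print(extrapolated.round(3))
```

In practice, an HTA-grade extrapolation would fit several parametric families and compare them (e.g., via AIC and visual fit against the Kaplan-Meier curve), but the fit-then-extrapolate pattern shown here is the core idea.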
Monitoring the progress of IEPs and assessing the impact of RWE on identified evidence gaps is essential. Engaging stakeholders early, ideally 3-4 years before launch, is key to understanding their data needs and supporting early education opportunities, and the IEP should be continually reassessed as stakeholder insights evolve.
HTA and AI
AI-Mind Project
The HTA for AI Unit of the Advanced Graduate School of Health Economics and Management (ALTEMS), in collaboration with Radboud University Medical Center, is working on a standardized framework for HTA of AI-supported technologies. The unit's objective is to validate this framework and establish a shared language to enhance data collection on AI solutions. Concurrently, the AI-Mind project is using AI to prevent dementia by creating innovative tools for early prediction and risk assessment, aiming to lessen the overall impact of the disease.
EDiHTA
EDiHTA represents the first European Digital HTA framework, collaboratively developed by stakeholders across the EU Health Ecosystem. It aims to enhance healthcare quality and delivery by leveraging Digital Health Technologies (DHTs) that gather real-world data critical for decision-making. EDiHTA’s objective is to facilitate the evaluation of various DHTs, such as telemedicine, mobile applications, and AI, across different Technology Readiness Levels, territorial levels (national, regional, local), and perspectives (e.g., payer, society, hospital). The framework will undergo testing in five leading European hospitals, engaging with European DHT developers in an open pilot program.
AI Trial Reporting
The healthcare sector is increasingly focused on the transparency and explainability of AI models. With more AI models being tested, there is a need for a standardized approach to AI evidence reporting. Several tools have been developed to provide guidance on what should be reported, including the MI-CLAIM checklist2, CONSORT-AI3, and the DECIDE-AI reporting guideline4. In particular, DECIDE-AI outlines the essential items that should be reported in early-stage clinical studies of AI-based decision support tools, ensuring clarity and consistency in how AI advances are communicated within healthcare.
We look forward to upcoming opportunities to participate in industry events and to learn from other industry experts. Discover more about our expertise across Putnam here.
References
1. https://www.putassoc.com/insights/leveraging-large-language-models-with-rwe-to-generate-more-comprehensive-insights-in-medical-affairs/
2. Norgeot B, et al. Minimum information about clinical AI modeling: the MI-CLAIM checklist. Nature Medicine. 2020;26:1318-1330.
3. AI in Medicine: A Systematic Review of Guidelines on Reporting and Interpreting Studies.
4. Vasey B, et al. Reporting guideline for the early-stage clinical evaluation of decision support systems driven by AI: DECIDE-AI. Nature Medicine. 2022;28:924-933.