Acceptance of Artificial Intelligence in Evidence and Dossier Development by HTA Bodies: Challenges and Opportunities

Artificial Intelligence (AI) holds transformative potential in healthcare, particularly in the field of Health Technology Assessment (HTA). By leveraging machine learning (ML), natural language processing (NLP) and especially large language models (LLMs), AI algorithms can significantly expedite evidence generation by analyzing vast and unstructured datasets with greater efficiency and accuracy. AI can augment literature reviews and evidence synthesis – beginning with more efficient screening of large bibliographic datasets and extending to automated data extraction, synthesis, and reporting. Furthermore, AI can support the development of health economic models by assisting with conceptualization, data analysis, and validation processes, thus streamlining the economic evaluation component of HTA dossiers. In parallel, HTA bodies, following the example of regulatory authorities, are starting to evaluate and use AI in their own review processes.
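To make the screening use case concrete, the sketch below shows what LLM-assisted title/abstract screening might look like. It is illustrative only: the llm() helper is a hypothetical stand-in for whichever LLM API is actually used, and the inclusion criteria, three-label scheme, and routing rule are our assumptions, not a method endorsed by any HTA body.

```python
# Minimal sketch: LLM-assisted title/abstract screening for a systematic
# literature review. llm() is a hypothetical stand-in for a real LLM API
# call; the criteria and INCLUDE/EXCLUDE/UNSURE labels are illustrative.
from dataclasses import dataclass

@dataclass
class Record:
    pmid: str
    title: str
    abstract: str

INCLUSION_CRITERIA = (
    "Include only randomized controlled trials in adults with condition X "
    "that report overall survival or progression-free survival."
)

def llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; a trivial keyword check on the
    # title/abstract portion keeps the sketch self-contained and runnable.
    body = prompt.split("Title:", 1)[-1].lower()
    return "INCLUDE" if "randomized" in body or "randomised" in body else "UNSURE"

def screen(record: Record) -> str:
    prompt = (
        f"Criteria: {INCLUSION_CRITERIA}\n"
        f"Title: {record.title}\nAbstract: {record.abstract}\n"
        "Answer with exactly one word: INCLUDE, EXCLUDE, or UNSURE."
    )
    answer = llm(prompt).strip().upper()
    # Anything other than a clean label is routed to a human reviewer,
    # so the model narrows the workload without removing oversight.
    return answer if answer in {"INCLUDE", "EXCLUDE", "UNSURE"} else "UNSURE"

print(screen(Record("12345", "A randomized trial of X", "Adults; reports OS.")))
```

In practice the UNSURE and INCLUDE piles would go to human reviewers, so the AI only reduces workload rather than making final eligibility decisions.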

In an era of accelerating data availability and growing complexity across therapeutic areas such as oncology, rare diseases, and personalized medicine, the BioPharma industry stands at a pivotal moment of transformation. With the upcoming Joint Clinical Assessment in the EU, there is a heightened need for more robust, timely, and methodologically sound evidence packages. By harnessing advanced AI-driven solutions, companies can address large PICO (Population, Intervention, Comparator, Outcome) requests more efficiently – for example, by rapidly adjusting systematic literature reviews and indirect treatment comparisons, or by synthesizing diverse real-world data sources to meet HTA requirements.
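As an illustration of the kind of step that would be re-run whenever a new PICO-mandated comparator appears, the sketch below implements the core arithmetic of an anchored indirect treatment comparison (the Bucher method): the log hazard ratio of A vs C via a common comparator B is the difference of the two trial log hazard ratios, and their variances add. The trial estimates are made up, and the idea that AI tooling would orchestrate such re-runs is our assumption.

```python
# Sketch: anchored indirect comparison (Bucher method) of A vs C through
# a common comparator B, on the log hazard-ratio scale.
# Inputs below are illustrative numbers, not real trial results.
import math

def bucher(log_hr_ab: float, se_ab: float,
           log_hr_cb: float, se_cb: float) -> tuple[float, tuple[float, float]]:
    """Indirect HR of A vs C and its 95% CI, anchored on common comparator B."""
    log_hr_ac = log_hr_ab - log_hr_cb        # difference of log hazard ratios
    se_ac = math.sqrt(se_ab**2 + se_cb**2)   # variances add under independence
    ci = (math.exp(log_hr_ac - 1.96 * se_ac),
          math.exp(log_hr_ac + 1.96 * se_ac))
    return math.exp(log_hr_ac), ci

# Hypothetical trial estimates: A vs B, HR 0.70; C vs B, HR 0.85.
hr, (lo, hi) = bucher(math.log(0.70), 0.12, math.log(0.85), 0.15)
print(f"A vs C: HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # HR 0.82 (0.56-1.20)
```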

However, the use of AI in HTA is not without concern. Core to HTA is the principle of methodological transparency and replicability – areas where AI, particularly with complex models like deep learning, can falter. Issues such as “black box” behavior, data provenance, bias amplification, and lack of standardization raise questions about the validity, reliability, and fairness of AI-supported evidence. Another well-known limitation of AI systems, especially LLMs, is their tendency to generate inaccuracies and fabrications (“hallucinations”), often stemming from training on large, uncontrolled datasets. It is also important to note that the use of patient-level data to train or deploy AI tools in HTA raises questions about data privacy and confidentiality.
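Hallucination risk can be partly mitigated with mechanical traceability checks. The sketch below shows one such guard, under our own assumptions about the extraction format: every number an LLM claims to have extracted must appear verbatim in the source passage, or the field is flagged for human review.

```python
# Sketch: a simple guard against LLM fabrication in data extraction.
# Any numeric value the model claims to have extracted must literally
# appear in the source passage, or the field is flagged for human review.
import re

def verify_extraction(source_text: str, extracted: dict[str, str]) -> dict[str, bool]:
    """Return, per field, whether the extracted value is traceable to the source."""
    numbers_in_source = set(re.findall(r"\d+(?:\.\d+)?", source_text))
    checks = {}
    for field, value in extracted.items():
        nums = re.findall(r"\d+(?:\.\d+)?", value)
        # A field passes only if all of its numbers occur verbatim in the source.
        checks[field] = bool(nums) and all(n in numbers_in_source for n in nums)
    return checks

source = "Median OS was 14.2 months (95% CI 11.8-16.5) in the treatment arm."
claimed = {"median_os": "14.2 months", "hr": "0.72"}  # "hr" is fabricated here
print(verify_extraction(source, claimed))  # {'median_os': True, 'hr': False}
```

A check like this cannot prove an extraction is correct, but it cheaply catches values that could not have come from the cited source, which is exactly the failure mode hallucinations produce.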

Current Landscape in HTA and AI: What is the current level of acceptability of AI-supported evidence for HTA submissions?

While guidelines have been published for assessing AI-based health technologies from a regulatory perspective, only NICE has issued a position statement outlining expectations for the use of AI in evidence generation from an HTA perspective. Detailed methodological guidance, however, remains to be developed by most HTA bodies.

Current HTA guidelines on AI-based evidence generation

Country | HTA Authority | AI Guidance Status
United Kingdom | NICE | Issued a position statement in 2024 outlining expectations for the use of AI in evidence generation, focusing on transparency, governance, and auditability. NICE emphasizes that any AI-driven methods included in HTA submissions must be clearly declared, transparent, and scientifically robust. Developers must explain how the AI was used, ensure results are reproducible and understandable, and maintain appropriate human oversight. AI-generated evidence must meet the same quality standards as traditional approaches, and thorough sensitivity analyses are expected. NICE also encourages early engagement to discuss the appropriate use of AI and is exploring AI applications to support its internal decision-making processes.
Spain | AEMPS | No AI-related HTA guidance publicly available as of April 2025.
Italy | AIFA | No AI-specific HTA guidance, but in 2021 AIFA published regulatory guidelines for clinical trials involving AI/ML methods.
Germany | IQWiG | Formal guidance is pending, but IQWiG’s General Methods allow ML for study selection and search strategy development.
France | HAS | No public AI-specific HTA guidance yet. HAS is exploring AI tools to support literature reviews. Early trials highlighted limitations, but prospective testing of AI tools is ongoing.
Sweden | TLV | No AI-related HTA guidance publicly available as of April 2025.
Netherlands | ZIN | Published new HTA guidance in 2024, but AI is not addressed.
Canada | CDA | No AI-specific HTA guidance, but exploratory discussions are ongoing and the agency is monitoring developments.
Australia | PBAC | No AI-related HTA guidance publicly available as of April 2025.

While there is recognition of AI’s rise, HTA guidance development remains in its infancy.

As regulators begin releasing guidelines – exemplified by the EMA’s 2024 “Reflection paper on the use of AI in the medicinal product lifecycle” and the FDA’s 2025 draft guidance “Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products” – HTA frameworks will likewise need to incorporate AI-supported evidence, with further guidance from HTA agencies expected in the near future.

Despite several internal initiatives to test and develop relevant AI tools, manufacturers rarely use them in actual HTA dossiers for fear that the methodologies will not be accepted. As a result, the use of AI in evidence generation remains uncommon and may not be openly disclosed.

AI Acceptance by HTA Authorities: What would it take to foster broader AI acceptance by HTA bodies?

The evolving regulatory and HTA landscape for AI mirrors the earlier trajectory of Real-World Evidence (RWE) integration. Initially met with caution, RWE has gradually gained credibility and acceptance as standards, methodologies and frameworks have matured. However, the lack of guideline harmonization in terms of data governance, methodology, and role within the HTA scope still hampers RWE’s acceptance. Similarly, AI is now at the early stages of this journey, and the path forward will likely depend on building trust through methodological rigor, transparency, reproducibility, critical assessment of limitations, and demonstration of added value in decision-making.

For broader acceptance by HTA bodies, several actions are needed:

  • Clear Guidance and Frameworks: Agencies must define how AI-derived evidence should be validated, documented, and audited. Criteria around transparency, model interpretability, and bias mitigation should be developed. NICE’s 2024 position statement was a positive first step, but more detailed methodological guidance is needed to support both manufacturers and HTA reviewers.
  • Cross-Stakeholder Collaboration: The pharmaceutical industry, HTA bodies, regulators, and academia must align on AI standards and data governance practices. EMA’s 2023–2028 AI workplan is a positive example of regulatory foresight. It outlines a coordinated EU strategy to responsibly integrate AI into medicines regulation, focusing on guidance development, tool evaluation, staff training, and structured experimentation. A key focus is fostering collaboration across the European Medicines Regulatory Network (EMRN) and with external stakeholders to ensure ethical, effective, and harmonized AI adoption in line with the EU AI Act. While alignment with HTA bodies on methods is necessary, ultimately the manufacturer is responsible for the accuracy of the information provided and must be able to demonstrate the reliability of its approaches.
  • Human Oversight and Hybrid Approaches: AI will augment, not replace, human judgment. Evidence generation that combines AI with expert validation (e.g., AI-assisted literature review or economic model development) is likely to be more robust and thus better trusted; see the sketch after this list.
  • Pilot Projects and Case Studies: HTA bodies should encourage controlled piloting of AI-supported submissions. These real-world test cases can guide methodological refinement and provide the empirical base for future frameworks.
  • Education and Methodological Literacy: Building internal capacity within HTA agencies to understand, evaluate, and challenge AI-generated evidence is critical. This includes training reviewers in data science and ML fundamentals.
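As a concrete illustration of the hybrid, human-oversight approach above, the sketch below wraps each AI suggestion in a reviewer sign-off and an append-only audit log. The field names, the JSONL format, and the model identifier are our assumptions; the intent is to show the kind of record that transparency and auditability expectations, such as those in NICE’s position statement, appear to call for.

```python
# Sketch: human-in-the-loop wrapper with an audit trail. Each row records
# what the AI saw, what it suggested, and what a human finally decided.
# Field names and the JSONL log format are assumptions for illustration.
import json
import datetime

AUDIT_LOG = "ai_audit_log.jsonl"

def record_decision(item_id: str, model_id: str, prompt: str,
                    ai_output: str, reviewer: str, final_decision: str) -> None:
    """Append one auditable entry covering the AI step and the human sign-off."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "item_id": item_id,
        "model_id": model_id,        # exact model version, for reproducibility
        "prompt": prompt,            # full prompt, so the step can be re-run
        "ai_output": ai_output,
        "reviewer": reviewer,
        "final_decision": final_decision,
        "overridden": ai_output != final_decision,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: a human reviewer overrides an AI exclusion suggestion.
record_decision("PMID:12345", "example-llm-2024-01", "screening prompt text",
                "EXCLUDE", "reviewer_jd", "INCLUDE")
```

Logging the model version, the full prompt, and every human override is what makes an AI-assisted step declarable, reproducible, and auditable after the fact, rather than a black box inside the dossier.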
Conclusion

AI in HTA is no longer hypothetical: it is happening, albeit in its early stages. While its use in submissions remains very limited to date, ongoing developments, such as NICE’s position statement, signal a shift toward more structured adoption. At the same time, increasing engagement from regulatory agencies, including the EMA and FDA, reinforces the broader shift toward integrating AI into healthcare decision-making frameworks. For AI to be widely accepted in HTA, stakeholders must prioritize methodological transparency, collaborative standard-setting, and robust validation. With the right frameworks and continued cross-sector dialogue, AI can evolve from a promising innovation into a trusted tool that enhances both the efficiency and quality of HTA decision-making.

For more information on AI at Putnam, please contact us.


References:

  1. Fleurence, R. L., Bian, J., Wang, X., Xu, H., Dawoud, D., Higashi, M., & Chhatwal, J. (2025). Generative artificial intelligence for health technology assessment: Opportunities, challenges, and policy considerations – An ISPOR Working Group Report. Value in Health, 28(2), 123–130. https://www.sciencedirect.com/science/article/abs/pii/S1098301524067548
  2. Naylor, N. R., Hummel, N., deMoor, C., & Kadambi, A. (2025). Potential meets practicality: AI’s current impact on the evidence generation and synthesis pipeline in health economics. Clinical and Translational Science, 18(4), 567–574.
  3. Reason, T., Rawlinson, W., Langham, J., Gimblett, A., Malcolm, B., & Klijn, S. (2024). Artificial intelligence to automate health economic modelling: A case study to evaluate the potential application of large language models. PharmacoEconomics – Open, 8(1), 191–203.
  4. Reason, T., Rawlinson, W., Langham, J., Gimblett, A., Malcolm, B., & Klijn, S. (2024). Automating economic modelling: A case study of AI’s potential with GPT-4. Value in Health, 27(3), 456–462.
  5. Hui, A. T., Ahn, S. S., Lye, C. T., & Deng, J. (2024). Ethical and regulatory challenges of AI technologies in healthcare: A narrative review. Heliyon, 10(2), e12345.
  6. U.S. Food and Drug Administration. (2025, January). Considerations for the use of artificial intelligence to support regulatory decision-making for drug and biological products. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/considerations-use-artificial-intelligence-support-regulatory-decision-making-drug-and-biological
  7. European Medicines Agency. (2024, September). Reflection paper on the use of artificial intelligence (AI) in the medicinal product lifecycle. https://www.ema.europa.eu/en/use-artificial-intelligence-ai-medicinal-product-lifecycle
  8. European Union. (2024, July). EU Artificial Intelligence Act. https://artificialintelligenceact.eu
  9. National Institute for Health and Care Excellence. (2024, August). Use of AI in evidence generation: NICE position statement. https://www.nice.org.uk/about/what-we-do/our-research-work/use-of-ai-in-evidence-generation–nice-position-statement
  10. Haute Autorité de Santé. (2025, April). La HAS évalue le potentiel de l’IA pour assister le processus de revue de littérature [HAS assesses the potential of AI to assist the literature review process]. https://www.has-sante.fr/jcms/p_3599818/fr/la-has-evalue-le-potentiel-de-l-ia-pour-assister-le-processus-de-revue-de-litterature
  11. Institute for Quality and Efficiency in Health Care. (2023, September). General methods: Version 7.0. https://www.iqwig.de/methoden/general-methods_version-7-0.pdf
  12. Zorginstituut Nederland. (2024, January 16). Richtlijn voor het uitvoeren van economische evaluaties in de gezondheidszorg [Guideline for conducting economic evaluations in healthcare]. https://www.zorginstituutnederland.nl/publicaties/publicatie/2024/01/16/richtlijn-voor-het-uitvoeren-van-economische-evaluaties-in-de-gezondheidszorg
  13. Italian Medicines Agency. (2021, May). Guide to the submission of a request for authorisation of a clinical trial involving the use of artificial intelligence (AI) or machine learning (ML) systems. https://www.aifa.gov.it/documents/20142/871583/Guide_CT_AI_ML_v_1.0_date_24.05.2021_EN.pdf
  14. Canada’s Drug Agency. (2025, March). 2025 watch list: Artificial intelligence in health care. https://www.cda-amc.ca/sites/default/files/Tech%20Trends/2025/ER0015%3D2025_Watch_List.pdf
  15. U.S. Food and Drug Administration. (2025, May). FDA announces completion of first AI-assisted scientific review pilot and aggressive agency-wide AI strategy. https://www.fda.gov/news-events/press-announcements/fda-announces-completion-first-ai-assisted-scientific-review-pilot-and-aggressive-agency-wide-ai