Artificial Intelligence (AI) holds transformative potential in healthcare, particularly in the field of Health Technology Assessment (HTA). By leveraging machine learning (ML), natural language processing (NLP), and especially large language models (LLMs), AI algorithms can significantly expedite evidence generation by analyzing vast and unstructured datasets with greater efficiency and accuracy. AI can augment literature reviews and evidence synthesis – beginning with more efficient screening of large bibliographic datasets and extending to automated data extraction, synthesis, and reporting. Furthermore, AI can support the development of health economic models by assisting with conceptualization, data analysis, and validation processes, thus streamlining the economic evaluation component of HTA dossiers. In parallel, HTA bodies, following the example of regulatory authorities, are themselves beginning to evaluate and use AI in their reviews.
In an era of accelerating data availability and growing complexity across therapeutic areas such as oncology, rare diseases, and personalized medicine, the BioPharma industry stands at a pivotal moment of transformation. With the upcoming Joint Clinical Assessment in the EU, there is a heightened need for more robust, timely, and methodologically sound evidence packages. By harnessing advanced AI-driven solutions, companies can more efficiently address large PICO (Population, Intervention, Comparator, Outcome) requests – for example, by rapidly adjusting systematic literature reviews and indirect treatment comparisons, and by synthesizing diverse real-world data sources to address HTA requirements more efficiently.
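To make the PICO-driven screening step concrete, the sketch below shows a deliberately simplified, hypothetical abstract-screening routine in Python. It uses plain keyword matching against a PICO frame purely for illustration; a real AI-assisted pipeline would typically rely on an LLM or a trained classifier, and all names here (`PICOFrame`, `screen_abstract`) are invented for this example, not part of any actual HTA tool.

```python
# Illustrative sketch only: keyword-based PICO screening of an abstract.
# A production system would use an LLM or trained classifier instead.
from dataclasses import dataclass


@dataclass
class PICOFrame:
    """Search terms for each PICO element (hypothetical structure)."""
    population: list[str]
    intervention: list[str]
    comparator: list[str]
    outcome: list[str]


def screen_abstract(abstract: str, pico: PICOFrame) -> dict:
    """Flag which PICO elements an abstract mentions (case-insensitive)."""
    text = abstract.lower()
    hits = {
        "population": any(t.lower() in text for t in pico.population),
        "intervention": any(t.lower() in text for t in pico.intervention),
        "comparator": any(t.lower() in text for t in pico.comparator),
        "outcome": any(t.lower() in text for t in pico.outcome),
    }
    # Only forward to full-text review when every PICO element is matched.
    hits["include_for_full_text"] = all(hits.values())
    return hits


pico = PICOFrame(
    population=["metastatic melanoma"],
    intervention=["pembrolizumab"],
    comparator=["ipilimumab"],
    outcome=["overall survival"],
)
abstract = ("In patients with metastatic melanoma, pembrolizumab improved "
            "overall survival compared with ipilimumab.")
result = screen_abstract(abstract, pico)
```

Even in this toy form, the design point stands: screening decisions are explicit and auditable (each PICO element is flagged separately), which is the kind of transparency HTA bodies are likely to expect from AI-assisted review steps.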
However, the use of AI in HTA is not without concern. Core to HTA is the principle of methodological transparency and replicability – areas where AI, particularly with complex models like deep learning, can falter. Issues such as “black box” behavior, data provenance, bias amplification, and lack of standardization raise questions about the validity, reliability, and fairness of AI-supported evidence. Another well-known limitation of AI, especially LLMs, is the tendency to generate inaccuracies and fabrications (“hallucinations”), often stemming from training on large, uncontrolled datasets. It is also important to note that the use of patient-level data to train or deploy AI tools in HTA raises questions about data privacy and confidentiality.
While guidelines have been published for the assessment of AI-based healthcare technologies from a regulatory perspective, only NICE has issued a position statement outlining expectations for the use of AI in evidence generation from an HTA perspective. Detailed methodological guidance, however, remains to be developed by most HTA bodies.
| Country | HTA Authority | AI Guidance Status |
| --- | --- | --- |
| United Kingdom | NICE | Issued a position statement in 2024 outlining expectations for the use of AI in evidence generation, focusing on transparency, governance, and auditability. NICE emphasizes that any AI-driven methods included in HTA submissions must be clearly declared, transparent, and scientifically robust. Developers must explain how the AI was used, ensure results are reproducible and understandable, and maintain appropriate human oversight. AI-generated evidence must meet the same quality standards as traditional approaches, and thorough sensitivity analyses are expected. NICE also encourages early engagement to discuss appropriate use and is exploring AI applications to support its internal decision-making processes. |
| Spain | AEMPS | No AI-related HTA guidance publicly available as of April 2025. |
| Italy | AIFA | No AI-specific HTA guidance, but in 2021 AIFA published regulatory guidelines for clinical trials involving AI/ML methods. |
| Germany | IQWiG | Formal guidance is pending, but IQWiG’s General Methods allow ML for study selection and search strategy development. |
| France | HAS | No public AI-specific HTA guidance yet. HAS is exploring AI tools to support literature reviews. Early trials highlighted limitations, but prospective testing of AI tools is ongoing. |
| Sweden | TLV | No AI-related HTA guidance publicly available as of April 2025. |
| Netherlands | ZIN | Published new HTA guidance in 2024, but AI is not addressed. |
| Canada | CDA | No AI-specific HTA guidance, but exploratory discussions are ongoing, and the agency is monitoring developments. |
| Australia | PBAC | No AI-related HTA guidance publicly available as of April 2025. |
While there is recognition of AI’s rise, HTA guidance development remains in its infancy.
As regulators begin releasing guidelines – exemplified by the EMA’s 2024 “Reflection paper on the use of AI in the medicinal product lifecycle” and the FDA’s 2025 draft guidance “Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products” – HTA frameworks will likewise need to incorporate AI-supported evidence, with further guidance from HTA agencies expected in the near future.
Despite several internal initiatives to test and develop relevant AI tools, these tools are rarely used in actual HTA dossiers, largely for fear that the methodologies will not be accepted. The use of AI in evidence generation remains uncommon and may not be openly disclosed by manufacturers.
The evolving regulatory and HTA landscape for AI mirrors the earlier trajectory of Real-World Evidence (RWE) integration. Initially met with caution, RWE has gradually gained credibility and acceptance as standards, methodologies, and frameworks matured. Even so, the lack of guideline harmonization on data governance, methodology, and RWE’s role within the HTA scope still hampers its acceptance. Similarly, AI is now at the early stages of this journey, and the path forward will likely depend on building trust through methodological rigor, transparency, reproducibility, critical assessment of limitations, and demonstration of added value in decision-making.
For broader acceptance by HTA bodies, several actions are needed.
AI in HTA is no longer hypothetical – it is happening, albeit in early stages. While its use in submissions remains very limited to date, ongoing developments, such as NICE’s position statement, signal a shift toward more structured adoption. At the same time, increasing engagement from regulatory agencies, including the EMA and FDA, reinforces the broader shift toward integrating AI into healthcare decision-making frameworks. For AI to be widely accepted in HTA, stakeholders must prioritize methodological transparency, collaborative standard-setting, and robust validation. With the right frameworks and continued cross-sector dialogue, AI can evolve from a promising innovation to a trusted tool that enhances both the efficiency and quality of HTA decision-making.
For more information on AI at Putnam, please contact us.