Navigating the Regulatory Labyrinth: How Well Do Language Models Read the Fine Print?
Session Chair(s)
Sam Kay, RAC
Basil Systems, United States
VP of Pharmaceutical Strategy
Comprehensive benchmarking of four major LLMs across 100 FDA approval packages identifies optimal models for regulatory tasks. The study provides critical insights into accuracy, limitations, and implementation strategies for regulatory AI adoption.
Learning Objectives: Identify which LLMs demonstrated the highest accuracy in extracting critical regulatory data from FDA approval packages; recognize key limitations and failure modes when using LLMs for regulatory intelligence, including hallucination rates and performance-degradation patterns; apply evidence-based benchmarking to select an LLM.
Speaker(s)
Cameron Kieffer, PhD
Takeda, United States
Director, Global Regulatory Intelligence and Policy Research
Jeff MacDonald, PharmD
BeOne Medicines USA, Inc., United States
Associate Director, Global Regulatory Policy and Intelligence
Aleksandr Merenkov
Genmab, United States
Associate Director, Global Regulatory Intelligence