Credit: Andriy Onufriyenko/Getty Images
Drug industry regulators are embracing AI to streamline their oversight activities; however, researchers say, they have yet to make clear how they will regulate the use of the technology in manufacturing.
In June, the FDA launched a generative AI tool—Elsa—to help scientific reviewers, investigators, and other staff work more efficiently. The agency also signaled its intention to integrate more AI into different regulatory processes.
Likewise, on the other side of the Atlantic, the EMA has been working on a plan to use AI in regulation since 2023. The idea, which builds on a document issued in 2021, is to improve staff productivity, automate processes, and support decision-making.
Elsewhere, various national regulators are also trying to incorporate AI into their oversight activities; examples include the UK's MHRA and Japan's recently launched AI Safety Institute.
In contrast, the development of guidelines on the use of AI in manufacturing is less advanced, say the authors of new research coming out of Northeastern University.
“Many drug regulators have recognized the utility of AI in various aspects of drug development, manufacturing, and post-marketing surveillance. However, they have not fully clarified how they will regulate these AI-generated systems, processes, platforms, and products.
“The use of AI in the biopharmaceutical industry is exploding, and having more regulations is not the panacea. Instead, drug regulators must reimagine how to regulate new products and changing technology. An unconventional approach is required to regulate AI-related technology and the human therapeutics it creates,” the authors write.
Unconventional regulations
Rather than developing fixed guidelines, regulators should build the capacity and flexibility to adapt to the biopharmaceutical industry’s use of AI in manufacturing as it evolves, according to the authors.
For example, they write, “Drug regulatory agencies must adapt to the AI revolution by developing agile and adaptive frameworks that accommodate rapidly evolving technologies like machine learning, generative AI, and predictive modeling.”
As well as investing in capacity, regulators should establish AI-focused teams and implement continuous training programs to “upskill” reviewers and inspectors.
“By embedding capacity and capability building into their strategies, regulatory agencies can effectively balance innovation with public health and safety, positioning themselves to lead in the age of AI.”
Harmonization
The authors also stress the need for harmonized rules on the use of AI in drug making, arguing that, without inter-agency collaboration, there is a risk that different countries will establish separate, incompatible regulations.
“With the rapid integration of AI in drug discovery, development, and regulatory processes, there is a critical need for a harmonized global regulatory framework.
“This international effort would aim to create a unified regulatory approach that supports innovation while maintaining high safety and efficacy standards for healthcare technologies,” they write.