
Validating Intelligent Automation Systems in Pharmacovigilance: Insights from Good Manufacturing Practices.

Kristof Huysentruyt1, Oeystein Kjoersvik2, Pawel Dobracki3, Elizabeth Savage4, Ellen Mishalov5, Mark Cherry6, Eileen Leonard7, Robert Taylor8, Bhavin Patel9, Danielle Abatemarco7.   

Abstract

Pharmacovigilance is the science of monitoring the effects of medicinal products to identify and evaluate potential adverse reactions and provide necessary and timely risk mitigation measures. Intelligent automation technologies have a strong potential to automate routine work and to balance resource use across safety risk management and other pharmacovigilance activities. While emerging technologies such as artificial intelligence (AI) show great promise for improving pharmacovigilance with their capability to learn based on data inputs, existing validation guidelines should be augmented to verify intelligent automation systems. While the underlying validation requirements largely remain the same, additional activities tailored to intelligent automation are needed to document evidence that the system is fit for purpose. We propose three categories of intelligent automation systems, ranging from rule-based systems to dynamic AI-based systems, each requiring a unique validation approach. We expand on the existing good automated manufacturing practices to outline a risk-based approach to validating artificially intelligent static systems. Our framework provides pharmacovigilance professionals with the knowledge to lead technology implementations within their organizations, with consideration given to the building, implementation, validation, and maintenance of assistive technology systems. Successful pharmacovigilance professionals will play an increasingly active role in bridging the gap between business operations and technical advancements to ensure inspection readiness and compliance with global regulatory authorities.


Year:  2021        PMID: 33523400      PMCID: PMC7892696          DOI: 10.1007/s40264-020-01030-2

Source DB:  PubMed          Journal:  Drug Saf        ISSN: 0114-5916            Impact factor:   5.606



Introduction

Pharmacovigilance is the science of monitoring the effects of medicinal products to identify and evaluate potential adverse reactions and provide necessary and timely risk mitigation measures [1]. It is a discipline that is heavily reliant on data, and benchmark data indicate that most pharmacovigilance departments of pharmaceutical companies dedicate a large proportion of their resources to processing adverse event (AE) cases, and the number of AE cases increases annually [2, 3]. Intelligent automation technologies have strong potential to automate routine work and balance resource use across safety risk management and other pharmacovigilance activities [4]. Intelligent automation can contribute to the quality and consistency of case processing and assessment, leading to a timely assessment of safety signals. When such technology solutions are implemented to assist in processing AE cases, regulations require pharmaceutical companies to validate this software [5]. Computerized system validation (CSV) is the process of establishing and documenting that the specified requirements of a computerized system are fulfilled consistently from design until decommissioning of the system and/or transition to a new system. The approach to validation should focus on a risk assessment that takes into consideration the intended use of the system and the potential of the system to affect human subject protection and reliability of trial results [6]. Algorithms and rule-based software, electronic workflows, and pattern matching have been used widely in pharmacovigilance for years [4]. More recently, several companies and vendors have implemented robotic process automation to assist with the management of individual case safety reports (ICSRs). 
Such technologies follow existing guidance from regulators and industry guidelines such as the International Society for Pharmaceutical Engineering Good Automated Manufacturing Practices Guide (ISPE GAMP® 5) on the validation of computerized systems [7]. More recently, novel areas of research based on artificial intelligence (AI) technologies have evolved, including machine learning (ML) and natural language processing (NLP) techniques, and are now being adopted to support pharmacovigilance processes [8-10]. Although these types of technology show great promise with their capability to learn based on data inputs, existing validation frameworks may need to be augmented to verify intelligent automation systems. While the underlying validation requirements largely remain the same (e.g., conformity with user specifications, traceability of decision points with an audit trail, and reliability of algorithm output), additional software development activities tailored to intelligent automation are needed to document evidence that the system is fit for purpose. This paper presents a proposed classification of automation systems for pharmacovigilance, along with validation considerations for emerging technologies, to assess whether existing validation frameworks can be used as-is or should be extended to constitute a reasonable, risk-based validation strategy, as has been done for computerized systems elsewhere within the industry [5, 7]. The primary focus of this paper is validation rather than other adoption considerations and potential barriers to implementation. This paper considers the use of intelligent automation solutions for pharmacovigilance, specifically for high-effort activities such as ICSR case processing [10]. The principles proposed may be applied to different areas that are subject to regulatory oversight.

Classification of Intelligent Automation Systems in Pharmacovigilance

Because existing validation frameworks may be sufficient for some novel technologies but not for others, there is a need for a classification of intelligent automation systems and the best-suited validation framework for each. Table 1 provides a breakdown of the types of automated pharmacovigilance (sub-) systems, some of the underlying technologies used within them, and examples of their application to pharmacovigilance processes.
Table 1

Classification of automated pharmacovigilance systems

Rule-based static systems

Definition: Automation is achieved via static rules designed to obtain the desired outcome. Examples include expedited reporting rule configuration, auto coding, and robotic process automation for case intake

Status: established. Validation framework: exists today

AI-based static systems

Definition: System configuration includes components that are AI informed but subsequently "frozen," i.e., systems based on AI or ML that do not adapt in production (after "go-live"). These are also called "locked" models. Re-training of the model is not applied automatically and is limited to the occurrence of events/triggers that require modification (e.g., the output needs change, expansion of the training data set to improve quality). Examples include an auto-translation system for source documents and a model based on ML for causality assessment of individual case safety reports

Status: emergent. Validation framework: existing frameworks can be extended to cover these systems

AI-based dynamic systems

Definition: System configuration includes components that are AI informed and can adjust their behavior based on data after initial implementation in production, using a defined learning process [11]. These are like AI-based static systems but are continually updated once in production, based on a set cadence or trigger, to include new source data. These systems are sometimes referred to as online algorithms

Status: eventual. Validation framework: a more thorough review of validation frameworks will be needed to cover these systems

AI artificial intelligence, ML machine learning

In the classification (Table 1), the term “artificial intelligence” or AI is used to describe the simulation of human intelligence processes by computer systems but excludes purely rule-based systems. AI encompasses a wide range of technologies, including ML, text recognition, NLP, and machine translation. The considerations and elements described in this paper apply to supervised learning systems, within which the training data are curated and known by the model developers and subject matter experts [11]. For each classification of AI systems, the availability of a validation framework is categorized as established, emergent, or eventual (Table 1). “Established” frameworks are those for which clear guidance exists to support validation efforts. With “emergent” frameworks, a foundation exists but is insufficient for demonstrating fit-for-use systems within today's regulated environment. No validation framework yet exists for “eventual” systems. The focus of our research is to propose an extended validation system for AI-based static systems. We based our classification of automated pharmacovigilance systems in Table 1 on the main ways people typically interact with data. The primary purpose was to identify considerations for validation of automation systems. We recognize this is an evolving area, and there may be a need to define additional classes in the future.

Validation of Artificial Intelligence (AI)-Based Systems Requires Considerations Beyond Those Needed for Rule-Based Systems

Compared with rule-based systems, AI-based systems do not consist of a set of pre-defined rules but generally include models derived from a training data set (e.g., supervised ML based on "ground truth") or more generalized modeling of a “human-like” capability (e.g., auto-translation, NLP). The potential advantages of AI-based systems are that more complex associations and patterns sometimes unknown to humans can be considered by a model, resulting in greater flexibility and ability to handle variable input (e.g., NLP for the handling of unstructured text). However, the use of AI introduces some potential challenges related to the validation of these systems (Sect. 3.2).

Validation of AI-Based Static vs. AI-Based Dynamic Systems

Dynamic systems are those in which new data (e.g., newly received ICSRs) are used to continually update the model for future data, in contrast to static systems that use batch learning techniques that generate the model by learning on the entire training data set at once. A dynamic system is referred to as “online ML” or an “online adaptive system.” Such systems will dynamically adapt to new patterns in data and should proactively improve over time; however, they also introduce potential risks in terms of stability and performance of the model over time. As these systems are inherently dynamic in nature, current validation and change control guidelines are no longer adequate to demonstrate consistent and reliable system performance. Moreover, some AI algorithms do not learn in a deterministic way, which adds additional validation challenges when such algorithms are not restricted to a static state. This reality will again require decisions regarding any re-validation over time. To date, electronic systems utilized in pharmacovigilance are almost exclusively static. Dynamic systems may be used more frequently in the future when appropriate methods for timely validation of these systems become available and if the benefits of using a dynamic system outweigh the potential risks associated with these systems.
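The distinction between batch (static) and online (dynamic) learning described above can be sketched with a deliberately trivial model. This is an illustration only, not the authors' implementation: a "model" that simply learns a running mean stands in for any ML component, showing why a static system's validated behavior is stable while a dynamic system's behavior shifts with each production input.

```python
class BatchModel:
    """Static system: trained once on the full training set, then frozen."""
    def fit(self, values):
        self.mean = sum(values) / len(values)  # learned once, then locked
        return self

class OnlineModel:
    """Dynamic system: updates its internal state with every new observation."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def partial_fit(self, value):
        self.n += 1
        self.mean += (value - self.mean) / self.n  # incremental update
        return self

training = [1.0, 2.0, 3.0, 4.0]
static = BatchModel().fit(training)   # behavior fixed after go-live
dynamic = OnlineModel()
for v in training:
    dynamic.partial_fit(v)

# A new production data point changes the dynamic model but not the static one
dynamic.partial_fit(100.0)
print(static.mean)   # 2.5  -> unchanged; re-validation only on a new release
print(dynamic.mean)  # 22.0 -> drifted; prior validation evidence no longer describes it
```

The same contrast holds for real models: a "locked" model gives reproducible outputs for identical inputs, whereas an online learner's outputs depend on the entire stream of production data seen so far, which is what current change control guidelines cannot easily cover.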

Validation Considerations for AI-Based Static Systems

Multiple frameworks exist to support CSV. Validation of rule-based static systems is well-established and supported by existing standards and industry guides, as described in ISPE GAMP® 5 [7]. Figure 1 illustrates the ISPE GAMP® 5 general approach to achieving computerized system compliance and fitness for intended use with any type of AI system. This approach can be applied both for initial system validation and subsequently as part of change control. However, additional considerations specific to AI-based systems are covered in this paper.
Fig. 1

A General Approach for Achieving Compliance and Fitness for Intended Use (ISPE). Source: Figure 4.1, GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems, © Copyright ISPE 2008 [7]. All rights reserved. https://www.ISPE.org.

Used with permission from ISPE (ISPE GAMP® 5, Figure 4.1)

The level of detail at each stage should be commensurate with the complexity. The model described in Fig. 1 is not the only potential framework for validation. However, it is the most widely adopted methodology in pharmacovigilance and is based on a scientific, risk-based approach. Therefore, we consider the ISPE GAMP® 5 methodology to be a seminal work and describe best practices for rule-based systems based on the ISPE GAMP® 5 approach. This also allows for AI functionality to be more easily accommodated within existing (validated) systems. To address the best practices as outlined for customized AI systems in pharmacovigilance (Fig. 2), we propose layering the AI approach within the ISPE GAMP® 5 methodology, as shown in Fig. 3. Steps and deliverables are highlighted below and focus on data selection, model training and testing, validation, system testing, and monitoring.
Fig. 2

Overlay of the US FDA's total product lifecycle approach on artificial intelligence/machine learning workflow.

Source: US FDA Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)—based software as a Medical Device (SaMD), Discussion Paper and Request for Feedback, 2019 [13]

Fig. 3

Proposed ISPE GAMP® 5 methodology for validating artificial intelligence (AI)-based static systems in pharmacovigilance

Table 2 describes considerations specific to the validation of AI-based static systems in each of the steps in Fig. 3 and potential techniques and approaches that could be used to address some challenges specific to AI-based static systems. For more general validation considerations, refer to either the ISPE GAMP® 5 guidance or other recommendations for agile models [7, 12]. For AI-based static systems that require initial training, we also consider the “good machine learning practices” framework and terminology introduced by the US FDA to be a seminal work [13].
Table 2

Validation considerations for artificial intelligence-based static systems

Columns: Area | Potential challenges when validating AI-based static systems using existing frameworks | Considerations for extending existing frameworks for AI-based static systems | Documentation considerations

Planning

The validation approach and strategy require careful planning

Gather knowledge and create the overall approach for validating an AI-based static system in the organization (master strategy, validation framework, etc.). The overall approach should refer to examples, best practices, templates, etc.

Define performance thresholds and fallback plans/BCPs

The activity scope will vary based on whether you are validating a new AI-based static system or adding an AI functionality within an existing validated system

The master strategy for AI-based static systems should include guidelines for the objective, scope, key activities, BCP/fallback plan, responsibilities, and acceptance criteria
Risk management

New and unknown types of errors and risk areas may not be considered in existing validation frameworks:

Data and Environments

Knowledge and Experience

Complexity

Availability

Trust

Regulations

New types of risk may need to be considered, or existing risk types may be emphasized. At a minimum, these risks need to be assessed individually for each system and mitigated appropriately

Risks related to the quality of data include potential biases, completeness, sources, etc.

Inaccuracies in the model due to production data changes over time ("model drift")

There may be conditions within which the model works and those in which it does not (i.e., the operating envelope)

Systemic rather than random errors may occur for fully automated systems, potentially introducing a bias

Additional risks related to a system not reaching 100% accuracy, which may require additional assessment

External vendor/supplier involvement

Critical activity, downtime, or any other implications related to a deficiency in the model

Required level of transparency/explainability of decisions versus required accuracy

Existing risk identification and mitigation methodologies such as failure modes effect and criticality analysis could still be used

New risk categories can be incorporated into existing risk documentation deliverables (e.g., vendor risk assessment, functional risk assessment)

A separate deliverable focusing on data risk assessment may be introduced or incorporated within existing documents

Requirements and specifications

Performance metrics must be proactively defined with consideration given to the system's intended use.

An acceptable performance level may be driven by the intended use of the system.

Define the intended use and criticality of each individual component of the system (note: The system could be a combination of any automated and manual activities)

Important risk-based considerations may be the impact (and quality) of targeted automation step(s) on the overall outcome of the process and the intended degree of automation. A good understanding is needed for cases where the system is effective vs. where it is not (i.e., consider the criticality of misses)

Specific requirements may need to be included for exception flagging with respective confidence scores to facilitate performance verification during validation

Desired performance level can be defined using different metrics and is not necessarily limited to a level of accuracy

The most appropriate measure may consist of a combination of criteria (e.g., overfitting to manage false positives vs. false negatives when predicting case validity)

Data selection

AI-based static systems are dependent on data selection, whereas existing validation frameworks are dependent on process

The methodology for the data collection, selection, and cleaning should consider the following aspects:

Collection of data representative of the process, ensuring correct distribution among the data classes, categories, or labels

Assignment of "ground truth"

Data quality, completeness, and lifetime accuracy considerations

Close collaboration amongst pharmacovigilance professionals and technical experts

To allow models to handle new categories in production, ensure the training set contains a small sample of instances with an “unknown” category

Data used for training and validation of the model should be independent of and separate from the test data

Frequent updates of the test data set to ensure it is not stale and remains representative of the sample population

Ensure that the test data set receives the same data pre-processing treatments applied to training data

Ensure the model does not rely on personal identification or other irrelevant fields for making predictions

Data selection and preparation approach, covering:

Audit and sequestration of training/validation and test data

Collection protocols

Reference standard determination

Quality assurance

Data pre-processing
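The data selection considerations above can be sketched in code. This is a minimal, hypothetical illustration (the record fields and split ratios are assumptions, not from the paper): the test set is sequestered with a fixed, auditable random seed before any model work begins, so training/validation data remain independent of the test data.

```python
import random

def split_dataset(records, seed=42, train=0.7, validation=0.15):
    """Partition curated records into independent train/validation/test sets.

    Sequestering the test set up front, with a recorded seed, supports the
    audit and reproducibility expectations described in the table above.
    """
    rng = random.Random(seed)      # fixed seed -> reproducible, auditable split
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    i, j = int(n * train), int(n * (train + validation))
    return shuffled[:i], shuffled[i:j], shuffled[j:]

def preprocess(text, vocabulary):
    """Toy pre-processing; the SAME function must be applied to every set,
    so the test data receive identical treatment to the training data."""
    return [token for token in text.lower().split() if token in vocabulary]

# Hypothetical curated AE narratives with a "ground truth" validity label
records = [{"narrative": f"case {k}", "valid": k % 2 == 0} for k in range(100)]
train_set, val_set, test_set = split_dataset(records)
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
```

In practice the split would also be stratified so that the distribution of classes and labels is preserved across the three sets, per the "correct distribution among the data classes" consideration.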

Model development

(model build, train and test, model validation)

Note: "Model validation" refers to the definition used in ML – see footnote

Model training, selection, and validation (see footnote)

Training, selection, and validation of an AI model is a new and delicate process

Training of AI models requires a combination of expertise in AI/ML with a deep understanding of the data and related process

Best practices for model training (i.e., good ML practices) need to be integrated into the validation framework

Utilize pre-trained models and transfer learning, where possible, as these methods are proven to be less sensitive to implicit biases in the data

Utilize techniques that lower any potential bias, e.g., regularization, dropouts, and cross-validation

Define the methodology to assess model performance using an independent team to avoid bias (e.g., blinding)

Study the basic metrics (including precision, recall, accuracy, and F-score) to define model performance

A test planning, strategy, and evaluation plan, including the following:

Model selection (e.g., protocol, rationale, and experimentation for the choice of model)

Adherence to good ML practices

Validation protocol (methodology, splitting strategy)
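The basic metrics named in this row (precision, recall, accuracy, F-score) can be computed directly from a confusion matrix. The sketch below is illustrative only; the example labels are hypothetical, not drawn from any real ICSR data.

```python
def classification_metrics(y_true, y_pred):
    """Precision, recall, accuracy, and F1 for binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)        # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)    # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)    # false negatives
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f1": f1}

# Hypothetical example: predicting ICSR case validity (1 = valid case)
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
m = classification_metrics(y_true, y_pred)
print(m)  # precision 0.75, recall 0.75, accuracy 0.75, f1 0.75
```

As the requirements row notes, the acceptance criterion is rarely a single number: when missed valid cases (false negatives) are more critical than false alarms, recall would be weighted more heavily than precision or raw accuracy.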

Model testing

It is not usually possible to list and test all potential scenarios that are handled by such a model

A tailored approach to testing (e.g., statistical and risk-based blinding methodology and test environment(s)) is typically necessary

Test model performance on incomplete, missing, delayed, corrupt, noisy, invalid, and unknown data points

A test planning, strategy, and evaluation plan, including the following:

Test planning and strategy (methodology and test environment(s))

Evaluation plan (metrics definition, acceptance thresholds)

Versioning performance records

Specific test data inclusion
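Testing on incomplete, corrupt, and unknown inputs, as this row recommends, can be sketched as a degradation test: each perturbed input must be flagged as unscorable rather than silently classified. The stand-in model and field names below are hypothetical.

```python
def predict_validity(record):
    """Stand-in for a trained model: flags records it cannot score."""
    narrative = record.get("narrative")
    if not isinstance(narrative, str) or not narrative.strip():
        # Exception flagging (per the requirements row) instead of guessing
        return {"label": None, "exception": "unscorable input"}
    return {"label": "valid" if "reaction" in narrative else "invalid",
            "exception": None}

# Perturbations from the table: nominal, missing, corrupt, and absent data
perturbed_cases = [
    {"narrative": "patient reported a skin reaction"},  # nominal input
    {"narrative": ""},                                  # missing text
    {"narrative": None},                                # corrupt field
    {},                                                 # absent field
]
results = [predict_validity(c) for c in perturbed_cases]
print([r["label"] for r in results])  # ['valid', None, None, None]
```

A validation test suite would assert that every degraded input raises an exception flag, so that systemic silent errors cannot accumulate in a fully automated pipeline.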

Model explainability and traceability

Need transparency in how the AI model arrived at a certain decision

Careful model design should be implemented to maintain a clear understanding of the models in any instance

Ensure complete documentation of the AI model at each version

As part of the design phase, consideration should be given to the required level of transparency and explainability concerning how the AI model arrived at a decision. There could be a trade-off in that more complex AI models involving deep learning may have a higher degree of accuracy but are inherently less transparent in how a decision was made [14]. Where complex models are used, then it should be possible for someone to describe broadly how the system is working and the principles used for decision making, and any boundaries beyond which the system is not trained

Avoid the “black box” effect

Document ML architecture, hyper-parameters, and parameters
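For simple model classes, the explainability this row calls for can be made concrete by logging per-feature contributions for each decision. The sketch below assumes a hypothetical linear case-validity model; the feature names and weights are invented for illustration.

```python
def explain_prediction(weights, features):
    """For a linear model, per-feature contributions make each decision
    traceable, avoiding the 'black box' effect for simple architectures."""
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    # Most influential features first, for the audit trail
    ranked = dict(sorted(contributions.items(), key=lambda kv: -abs(kv[1])))
    return score, ranked

# Hypothetical weights from a trained case-validity model
weights = {"has_identifiable_patient": 2.0, "has_suspect_drug": 1.5,
           "has_adverse_event": 1.8, "narrative_length": 0.01}
features = {"has_identifiable_patient": 1, "has_suspect_drug": 1,
            "has_adverse_event": 0, "narrative_length": 120}
score, why = explain_prediction(weights, features)
print(round(score, 2))  # 4.7
print(list(why))        # feature names, most influential first
```

Deep learning models do not decompose this neatly, which is exactly the accuracy-versus-transparency trade-off noted above [14]; for those, a documented description of the model's operating principles and training boundaries takes the place of per-decision attribution.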
Acceptance testing

The integration of an AI model into the entire system (user interface, dashboards, reports, interfaces) may be a challenging activity

Specialized resources trained in both testing and the business area are essential to verify acceptance and the intended use of the entire system

The AI model can work as intended, but the acceptance level of testing should verify its integral performance

Planning of respective levels of integration testing and acceptance testing may require advanced test techniques (e.g., unit testing, integration, test automation, security, usability, etc.)

Consider the latest approaches (e.g., US FDA’s CSA) [15]

Test model roll-back capability in case of an error in production

Test strategy and planning should consider:

Validation of the AI model vs. entire system acceptance testing

Test types and techniques

Risk-based testing (CSA)

Model rollbacks

Address the impact of external intervention on monitoring performance of the model

Quality control activities and measurements for model performance must be inserted at points in which the findings will be representative of the model and not representative of the model combined with potential external intervention

Monitoring approaches for AI-based static systems:

Concurrent activities of the model, along with the historical approach, allow for comparison, adjudication, and identification of potential performance issues


System deployment and monitoring

Although training data should be representative of the real world, training data cannot be exhaustive, may be limited, may exclude a rare scenario, or may not be representative of all situations

A pre-defined, robust monitoring plan with periodic quality control measures is recommended. Strong consideration must be given to the perceived level of risk and potential impact of performance degradation on the quality and integrity of the entire system

Monitoring plan with pre-established:

Periodicity

Performance metrics

Acceptable quality standards and thresholds

Investigation measures to understand and isolate the root cause of performance degradation

Retraining plan with defined triggers for retraining (e.g., suboptimal performance for three straight periods)

Criteria for implementation of optimized models into production

Post-deployment monitoring and early detection of model performance degradation are critical to maintaining expectations from a regulatory and business perspective

Various circumstances can trigger model changes: quality standards/thresholds not being met, an opportunity to create new ground truth improving model performance, a change in data input, or a change in business or regulatory requirements. The monitoring plan should include the threshold and criteria for (1) when thresholds are not met and (2) when to consider a root cause or impact assessment
Appropriately challenge performance. Understand model capabilities and limitations to avoid the bias of assumed accuracy, which can result in over-reliance on a model with suboptimal performance

When evaluating a potential missed scenario, it is important to understand whether a given scenario was included within the scope of the training material or whether it is an entirely new scenario

The model may occasionally require retraining, and the reasons for this retraining should be documented as they arise
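The retraining trigger suggested in the monitoring plan above (e.g., suboptimal performance for three straight periods) can be sketched as a simple check over periodic monitoring metrics. The threshold, metric, and cadence below are hypothetical placeholders.

```python
def needs_retraining(period_scores, threshold=0.90, consecutive=3):
    """Flag retraining when the monitored metric stays below the acceptable
    quality threshold for `consecutive` straight monitoring periods."""
    run = 0
    for score in period_scores:
        run = run + 1 if score < threshold else 0  # reset on any good period
        if run >= consecutive:
            return True
    return False

# Hypothetical monthly F1 scores from periodic quality control sampling
monthly_f1 = [0.95, 0.93, 0.89, 0.88, 0.87]
print(needs_retraining(monthly_f1))  # True: three straight periods below 0.90
```

In a real monitoring plan this check would run on each periodic quality control cycle, with the trigger, threshold, and subsequent root-cause investigation steps pre-defined and documented.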

Change management

Version control of an AI model is based on specific training data and is a part of the entire system (also version controlled)

Consider a retrained model as a new model independent from the previous versions. All model versions should be recorded and documented for reproducibility and audit purposes

Apply software change management principles when there is a need for end-to-end system change. This should be separate from changes to the ML subcomponents

Consider using tools to build AI models that allow for version control (i.e., model management platforms)

The change should be documented, including objectives of the change; models and features should be version controlled. Track model parameters, training dataset, and algorithm details for each model version

The quality plan must be evaluated for potential changes (e.g., changes in parameters or configuration) that may require modification over time
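The change management principles above can be sketched as an immutable model registry: each retrained model becomes a new version that records its algorithm, hyper-parameters, and a fingerprint of the exact training data, so every version is reproducible and auditable. The registry structure and field names are illustrative assumptions, not a specific model management platform.

```python
import hashlib
import json
from dataclasses import dataclass

def fingerprint(dataset):
    """Stable hash of the training data, tying each model version
    to the exact data it was trained on."""
    payload = json.dumps(dataset, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

@dataclass(frozen=True)
class ModelVersion:
    """Immutable record: a retrained model is a NEW version, never an edit."""
    version: str
    algorithm: str
    hyperparameters: dict
    training_data_hash: str

registry = []

# Version 1.0: initial validated model (hypothetical details)
data_v1 = [{"narrative": "case 1", "valid": True}]
registry.append(ModelVersion("1.0", "logistic_regression",
                             {"C": 1.0}, fingerprint(data_v1)))

# Version 2.0: retraining after the training set was expanded
data_v2 = data_v1 + [{"narrative": "case 2", "valid": False}]
registry.append(ModelVersion("2.0", "logistic_regression",
                             {"C": 0.5}, fingerprint(data_v2)))

print([m.version for m in registry])  # ['1.0', '2.0']
```

Because the data fingerprints differ, an auditor can verify that version 2.0 was trained on different data than 1.0, and either version can be reconstructed or rolled back from its recorded parameters and dataset.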

Specific document requirements will be dependent on system complexity and intended use. In machine learning, model validation is the process in which a trained model is evaluated with a testing data set. The main purpose of using the testing data set is to test the generalization ability of a trained model [16]. Visit TransCelerate's website for more detail [17]

AI artificial intelligence, BCPs business continuity plans, CSA computerized system assurance, ML machine learning

Validation considerations for artificial intelligence-based static systems Gather knowledge and create the overall approach for validating an AI-based static system in the organization (master strategy, validation framework, etc.). The overall approach should refer to examples, best practices, templates, etc. Define performance thresholds and fallback plans/BCPs The activity scope will vary based on whether you are validating a new AI-based static system or adding an AI functionality within an existing validated system New and unknown types of errors and risks areas may not be considered in existing validation frameworks: Data and Environments Knowledge and Experience Complexity Availability Trust Regulations New types of risk may need to be considered, or existing risk types may be emphasized. At a minimum, these risks need to be assessed individually for each system and mitigated appropriately Risks related to the quality of data include potential biases, completeness, sources, etc. Inaccuracies in the model due to production data changes over time ("model drift") There may be conditions within which the model works and those in which it does not (i.e., the operating envelope) Systemic rather than random errors may occur for fully automated systems, potentially introducing a bias Additional risks related to a system not reaching 100% accuracy, which may require additional assessment External vendor/supplier involvement Critical activity, downtime, or any other implications related to a deficiency in the model Required level of transparency/explainability of decisions versus required accuracy Existing risk identification and mitigation methodologies such as failure modes effect and criticality analysis could still be used New risk categories can be incorporated into existing risk documentation deliverables (e.g., vendor risk assessment, functional risk assessment) A separate deliverable focusing on data risk assessment may be introduced or incorporated within 
existing documents An acceptable performance level may be driven by the intended use of the system. Define the intended use and criticality of each individual component of the system (note: The system could be a combination of any automated and manual activities) Important risk-based considerations may be the impact (and quality) of targeted automation step(s) on the overall outcome of the process and the intended degree of automation. A good understanding is needed for cases where the system is effective vs. where it is not (i.e., consider the criticality of misses) Specific requirements may need to be included for exception flagging with respective confidence scores to facilitate performance verification during validation Desired performance level can be defined using different metrics and is not necessarily limited to a level of accuracy The most appropriate measure may consist of a combination of criteria (e.g., overfitting to manage false positives vs. false negatives when predicting case validity) The methodology for the data collection, selection, and cleaning, should consider the following aspects: Collection of data representative of the process, ensuring correct distribution among the data classes, categories, or labels Assignment of "ground truth" Data quality, completeness, and lifetime accuracy considerations Close collaboration amongst pharmacovigilance professionals and technical experts Allow models to handle new categories in production, ensure a small sample training set contains instances with an “unknown” category Data used for training and validation of the model should be independent of and separate from the test data Frequent updates of the test data set to ensure it is not stale and remains representative of the sample population Ensure that the test data set receives the same data pre-processing treatments applied to training data Ensure the model does not rely on personal identification or other irrelevant fields for making predictions 
Documentation: data selection and preparation approach, covering audit and sequestration of training/validation and test data; collection protocols; reference standard determination; quality assurance; and data pre-processing.

Model development (model build, train and test, model validation; note: "model validation" refers to the definition used in ML - see footnote)
- Training, selection, and validation of an AI model is a new and delicate process.
- Training AI models requires a combination of AI/ML expertise and a deep understanding of the data and the related process.
- Best practices for model training (i.e., good ML practices) need to be integrated into the validation framework.
- Utilize pre-trained models and transfer learning where possible, as these methods are less sensitive to implicit biases in the data.
- Utilize techniques that lower potential bias, e.g., regularization, dropout, and cross-validation.
- Define the methodology to assess model performance using an independent team to avoid bias (e.g., blinding).
- Study the basic metrics (including precision, recall, accuracy, and F-score) to define model performance.
- Documentation: a test planning, strategy, and evaluation plan, including model selection (e.g., protocol, rationale, and experimentation for the choice of model); adherence to good ML practices; and the validation protocol (methodology, splitting strategy).

Model testing
- It is not usually possible to list and test all potential scenarios handled by such a model; a tailored approach to testing (e.g., a statistical and risk-based blinding methodology and test environment(s)) is typically necessary.
- Test model performance on incomplete, missing, delayed, corrupt, noisy, invalid, and unknown data points.
- Documentation: a test planning, strategy, and evaluation plan, including test planning and strategy (methodology and test environment(s)); an evaluation plan (metrics definition, acceptance thresholds); versioning of performance records; and specific test data inclusion.

Model explainability and traceability
- Transparency is needed in how the AI model arrived at a certain decision. As part of the design phase, consideration should be given to the required level of transparency and explainability. There can be a trade-off: more complex AI models involving deep learning may achieve higher accuracy but are inherently less transparent in how a decision was made [14].
- Where complex models are used, it should be possible for someone to describe broadly how the system works, the principles used for decision making, and any boundaries beyond which the system is not trained. Avoid the "black box" effect.

Acceptance testing
- Specialized resources trained in both testing and the business area are essential to verify acceptance and the intended use of the entire system. The AI model may work as intended, but acceptance-level testing should verify its integral performance.
- Planning the respective levels of integration and acceptance testing may require advanced test techniques (e.g., unit testing, integration testing, test automation, security, usability). Consider the latest approaches (e.g., the US FDA's CSA) [15].
- Test model roll-back capability in case of an error in production.
- Documentation: test strategy and planning should consider validation of the AI model versus entire-system acceptance testing; test types and techniques; risk-based testing (CSA); and model rollbacks.

Monitoring
- Quality control activities and measurements for model performance must be inserted at points where the findings will be representative of the model itself, not of the model combined with potential external intervention.
- Running the model concurrently alongside the historical approach allows comparison, adjudication, and identification of potential performance issues.
- Establish a monitoring plan with pre-defined periodicity, performance metrics, acceptable quality standards and thresholds, and investigation measures to understand and isolate the root cause of performance degradation.
- Establish a retraining plan with defined triggers for retraining (e.g., suboptimal performance for three straight periods) and criteria for implementing optimized models into production.
- When evaluating a potential missed scenario, it is important to understand whether the scenario was included within the scope of the training material or is an entirely new scenario.
- The model may occasionally require retraining, and the reasons for retraining should be documented as they arise.

Specific documentation requirements will depend on system complexity and intended use.

Footnote: in machine learning, model validation is the process in which a trained model is evaluated with a testing data set; the main purpose of the testing data set is to test the generalization ability of a trained model [16]. Visit TransCelerate's website for more detail [17].

Abbreviations: AI, artificial intelligence; BCPs, business continuity plans; CSA, computerized system assurance; ML, machine learning.
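The monitoring and retraining points above can be sketched in code. The example below computes an F1 score for each monitoring period from adjudicated outcomes and raises a retraining trigger after three straight below-threshold periods, as the table suggests; the 0.85 threshold, the choice of F1 as the metric, and the counts are illustrative assumptions, not prescribed values.

```python
def f1_score(tp, fp, fn):
    """F1 from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def needs_retraining(period_f1_scores, threshold=0.85, consecutive=3):
    """Return True once F1 has stayed below the acceptance threshold
    for the required number of straight monitoring periods."""
    streak = 0
    for f1 in period_f1_scores:
        streak = streak + 1 if f1 < threshold else 0
        if streak >= consecutive:
            return True
    return False

# Hypothetical per-period adjudication counts (tp, fp, fn) showing
# gradual performance degradation, e.g., from model drift.
history = [
    f1_score(90, 5, 5),    # healthy period
    f1_score(80, 18, 20),  # first sub-threshold period
    f1_score(78, 20, 22),  # second
    f1_score(75, 22, 25),  # third straight period below threshold
]
trigger = needs_retraining(history)
```

Whatever metric and thresholds are chosen, the point is that they are pre-established in the monitoring plan, so a trigger like this fires on defined criteria rather than ad hoc judgment.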

Documentation Considerations for AI-Based Static Systems

Confidence in the output of AI-based static systems requires insight into why decisions are made within the model. Documentation regarding how the model was developed is a good way to show which considerations were taken into account during the development process. We propose more specific documentation for consideration. This does not mean that every form of documentation is recommended for every model; rather, documentation should be applied where appropriate, as documentation requirements also depend on the complexity of the system. The last column in Table 2 includes a proposal of the specific documentation that could be considered for AI-based static systems.
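One way to make such development documentation systematic is to capture it as a structured, machine-readable record alongside each model version. The sketch below is a minimal illustration; the field names and values are our own assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelDevelopmentRecord:
    """Illustrative structured record of how an AI-based static model
    was built, mirroring the kinds of documentation proposed in Table 2."""
    intended_use: str
    model_version: str
    training_data_source: str
    data_split_protocol: str
    performance_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)   # the operating envelope
    retraining_triggers: list = field(default_factory=list)

# Hypothetical example record for a case-triage model.
record = ModelDevelopmentRecord(
    intended_use="Flag likely-valid individual case safety reports for triage",
    model_version="1.0.0",
    training_data_source="Historical adjudicated cases (illustrative)",
    data_split_protocol="Stratified 60/20/20 train/validation/test, test sequestered",
    performance_metrics={"precision": 0.91, "recall": 0.88, "f1": 0.894},
    known_limitations=["Not trained on literature-sourced reports"],
    retraining_triggers=["F1 below threshold for three consecutive periods"],
)
record_dict = asdict(record)  # serializable form for audit and inspection
```

A record like this can be versioned with the model itself, so that inspection-readiness questions about intended use, data provenance, and retraining criteria have a single documented answer per release.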

Conclusions

New and existing intelligent automation systems have great promise for enhancing existing processes in pharmacovigilance. In this article, we introduced a categorization of automation subsystems. Depending on the category, validation methodology may already be well understood (e.g., rule-based static systems) or may pose significant unknowns (e.g., AI-based dynamic systems). Traditional validation methodologies can be extended to cover AI-based static systems. The main points to highlight for AI-based static systems are as follows:
- Risk assessment within an overall risk-based approach will be crucial to defining the level of validation activities needed.
- New deliverables are being introduced to cover ML (e.g., a data acquisition plan and a model test plan).
- Data quality is a key consideration.
- Monitoring after deployment in production is critical and should be an essential consideration of the validation approach.
- Terminology alignment is necessary for a shared understanding among CSV specialists, business users, and ML experts.
- Close collaboration between pharmacovigilance professionals and technical experts is required.
As this technology area is still evolving quickly, best practices for validation of these systems will continue to emerge and evolve. AI-based static systems may be implemented more frequently in the future, perhaps even including some dynamic elements. We applied our validation considerations to the GAMP validation framework, supporting increased adoption of intelligent automation technologies within pharmacovigilance by offering the industry a framework for developing these technologies. In time, best validation practices for these technologies can be identified and evaluated against the goal of enhanced patient safety. Future research will focus on the development of AI-based dynamic systems, for which the considerations must address the continuous learning nature of these systems.
Alignment with regulators on validation is critical to the implementation of AI-based systems within the highly regulated pharmacovigilance industry. As part of this alignment process, industry should engage regulators actively in discussions since agreement on high-level performance measures with a clear interpretation and verifiable measurement processes will be essential. Pharmacovigilance professionals must learn new skills and acquire technical knowledge relating to system implementation and CSV to remain successful in this rapidly advancing sector [18]. A once-siloed working paradigm is shifting, requiring pharmacovigilance professionals to understand business and technical requirements, to select representative data to adequately train and approve AI models, and to understand how, when, and why to retrain their models over time. A deep understanding of AI is becoming more closely intertwined with pharmacovigilance operations and will dictate which pharmacovigilance functions will successfully leverage assistive technology to further consistency in data collection and evaluation, ultimately improving compliance and positive outcomes for our patients globally.
With the widespread adoption of technology in the pharmacovigilance space, pharmacovigilance professionals must understand and guide the building, validation, and maintenance of artificial intelligence (AI)-based pharmacovigilance systems. This is essential to ensure inspection readiness as we integrate automation systems across the pharmacovigilance value chain.
Intelligent automation systems can be grouped into three categories: rule-based static systems, AI-based static systems, and AI-based dynamic systems. Validation frameworks currently exist for rule-based static systems but not AI-based systems.
We propose validation considerations for a risk-based approach to GxP-compliant AI-based static systems, which can be implemented within the good automated manufacturing practices framework.