K Dufraing1, F Fenizia2, E Torlakovic3, N Wolstenholme4, Z C Deans5, E Rouleau6, M Vyberg7, S Parry8, E Schuuring9, Elisabeth M C Dequeker10.
Abstract
In personalized medicine, predictive biomarker testing is the basis for an appropriate choice of therapy for patients with cancer. An important tool for laboratories to ensure accurate results is participation in external quality assurance (EQA) programs. Several providers offer predictive EQA programs for different cancer types, test methods, and sample types. In 2013, a guideline was published on the requirements for organizing high-quality EQA programs in molecular pathology. Now, after six years, steps were taken to further harmonize these EQA programs as an initiative by IQNPath ABSL, an umbrella organization founded by various EQA providers. This revision is based on current knowledge, adds recommendations for programs developed for predictive biomarkers by in situ methodologies (immunohistochemistry and in situ hybridization), and emphasizes transparency and an evidence-based approach. In addition, this updated version also aims to give an overview of current practices from various EQA providers.
Keywords: External quality assessment; Guideline; Immunohistochemistry; In situ hybridization; Molecular pathology; Oncology; Predictive biomarkers; Proficiency testing
Year: 2020 PMID: 33047156 PMCID: PMC7550230 DOI: 10.1007/s00428-020-02928-z
Source DB: PubMed Journal: Virchows Arch ISSN: 0945-6317 Impact factor: 4.064
Fig. 1 Phases of testing-based approach to EQA as determinant of evidence-based EQA and clinical relevance of EQA results. EQA, external quality assurance; TPCs, test performance characteristics; PPA, positive percent agreement; NPA, negative percent agreement. *Some examples are given; the lists are incomplete
Proposed scoring criteria for the analytical phase (example as typically applied by molecular EQA programs)
| Scoring criteria | Points |
|---|---|
| Test outcome | |
| Target correctly identified | 2 points awarded |
| Target incorrectly identified | 0 points awarded |
| Technical malfunction | 0 points awarded (if no valid reason given); 0.5 points awarded (if valid reason given); 1 point awarded (if repeat sample requested) |
| Nomenclature (If relevant) | |
| Correct use of HGVS nomenclature | No points deducted |
| Minor nomenclature error (errors which cannot lead to a misinterpretation of the result) | No points deducted, but comment given |
| Major nomenclature error (errors which might cause a misinterpretation of the results), e.g., CTG>CGG or L858R instead of c.2573T>G p.(Leu858Arg), or reporting only the genomic or protein variant | 0.5 points deducted (only once) |
HGVS: Human Genome Variation Society (https://varnomen.hgvs.org)
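The two-tiered scoring logic in the table above can be sketched in code. This is a minimal illustration only; the function name, outcome labels, and the clamping of negative totals to zero are assumptions, not part of the guideline, and real EQA schemes define their own scoring rules.

```python
# Sketch of the proposed analytical-phase scoring: outcome points first,
# then at most one 0.5-point deduction for a major nomenclature error.
# All names are illustrative; the zero floor is an assumption.

OUTCOME_POINTS = {
    "correct": 2.0,                   # target correctly identified
    "incorrect": 0.0,                 # target incorrectly identified
    "malfunction_no_reason": 0.0,     # technical malfunction, no valid reason given
    "malfunction_valid_reason": 0.5,  # technical malfunction, valid reason given
    "malfunction_repeat": 1.0,        # technical malfunction, repeat sample requested
}

def score_sample(outcome: str, nomenclature_errors: list[str]) -> float:
    """Score one EQA sample under the two-tiered system."""
    points = OUTCOME_POINTS[outcome]
    # Major errors might cause misinterpretation: deduct 0.5, but only once.
    if "major" in nomenclature_errors:
        points -= 0.5
    # Minor errors receive a comment but no deduction, so they are ignored here.
    return max(points, 0.0)

print(score_sample("correct", ["major"]))  # 1.5
```

For example, a correctly identified target reported with a major nomenclature error scores 1.5 points, while the same result with only minor errors keeps the full 2 points.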
Elements that should be included in the general EQA report
| Element | Further explanation |
|---|---|
| 1. Contact details | Contact details of the EQA provider, the EQA coordinator, and the person(s) of the EQA organization authorizing the report |
| 2. Subcontracted activities | For example, sample validation and preparation |
| 3. Report information | Information should include the issue date, report status, page numbers, report number, name of the EQA program, type of EQA (clinical or technical), and a confidentiality statement. |
| 4. Sample information | A clear description of the selection, validation, and preparation of EQA items used should be given. |
| 5. Marking criteria | Items that were assessed and how scores were given and finally calculated. |
| 6. Participant’s results | Individual and/or aggregate group results. Note: tests that are not fit for purpose can also be listed in the general report |
| 7. Assigned values | These are the correct or expected outcomes of the test. |
| 8. Comments on performance | The EQA coordinator and medical/technical experts give an overview of pitfalls and advice for quality improvement. Results are evaluated and also compared with previous EQA results. |
| 9. Statistics on variation | A descriptive overview showing characteristics of participants, methods, or procedures employed by participants, and the overall success rates should be provided to participants if relevant. Note 1: Statistical analysis is not always applicable; which analyses apply is determined by the purpose of the program and the selected TPCs. Note 2: Reporting variation in success rate between methods, equipment types, or procedures should be done with caution. Poor performance is not always directly the result of the methodology used; other factors (human error, pre-analytical errors, reporting errors, etc.) might also play a role. If, however, a clear problem occurs with one method, reagent, platform, etc., the EQA provider should notify the manufacturer and the relevant competent authority in their country of origin according to the IVD Regulation. Note 3: Although test validation is not the purpose of EQA programs, laboratories that are still validating their test method may also participate. Depending on the design of the program, their results may contribute to overall evidence for technical validation or indirect clinical validation of the assays. |
| 10. Unusual factors | Situations where unusual factors make evaluation of results and comments on performance impossible. Note: The EQA providers also want to stress that situations where unusual factors make evaluation impossible have not yet occurred in their experience. |
| 11. Conclusions | |
| Ref | Recommendation | New vs carry-over |
|---|---|---|
| 1. Recommendations for the organization of an EQA program | | |
| 1.1 | The format of an EQA program should depend on the purpose of the program (integrated approach vs. “test performance characteristic” (TPC)–based approach; see Fig. 1) | New |
| 1.2 | The program should be planned and organized by the EQA coordinator considering advice from experts: medical and technical experts and assessors (Supplementary Table | Carry-over |
| 1.3 | The time to return results must be pre-defined and monitored. | New |
| 1.4 | ISO/IEC 17043 accreditation is strongly recommended. | Carry-over |
| 2. Recommendations for EQA sample selection and validation | | |
| 2.1 | Samples should be fit for purpose in terms of the investigated TPCs. | New |
| | Targets should be present in a clinically relevant reportable range, unless pre-determined otherwise. | New |
| 2.2 | If possible, sample matrices should be identical to routine samples. Otherwise, substitute matrices could be used (Supplementary Table | New |
| 2.3 | Results for challenging cases should be included in the total performance score, unless more than a pre-defined fraction of laboratories had an incorrect result. | New |
| 2.4 | The EQA provider is responsible for validation procedures and for the selection of validating laboratories where the validation is conducted. The EQA provider should assess the competence of all laboratories chosen to validate EQA materials. | Carry-over |
| | Validation of EQA samples is defined as reproducibility of the results in at least two laboratories or by different techniques; one laboratory is always a “designated reference laboratory.” This is the required minimum, but the final validation procedures could be more elaborate and may include other TPCs if deemed necessary by the EQA provider. | New |
| 3. Recommendations for scoring criteria for “pass” vs. “fail” | | |
| 3.1 | Testing of the pre-analytical phase is generally out of scope of these EQAs. | Carry-over |
| 3.2 | For scoring of the analytical phase, a two-tiered system can be used as proposed in Table | Carry-over |
| | EQA providers should define and monitor “technical malfunctions” and “laboratories with frequent technical malfunctions.” | New |
| 3.3 | In schemes with a TPC-focused approach, the following elements should be scored as a minimum: name of the test, sensitivity of the test, and the variants tested. Quality metrics might be scored, depending on the specific methods used for analysis. | New |
| | In schemes with an integrated approach or TPC-focused schemes where interpretation accuracy is a TPC, the presence and correctness of the interpretation should be scored in relation to the clinical and methodological information. The test interpretation should be written in a general and directive way, unless national guidelines stipulate alternative requirements. | New |
| 4. Recommendations for dealing with poor performance | | |
| 4.1 | EQA providers will report (persistent) poor performers to governmental bodies, if such bodies are available. Where they are not, EQA providers should perform follow-up studies (e.g., request a root cause analysis from participants) and/or rely on national accreditation bodies for suggestions for improvement. | New |
| 5. Communication with participants | | |
| 5.1 | The EQA provider should ensure clear communication with laboratories before (e.g., scheme purpose), during (e.g., sample handling), and after result submission (individual results, general report, and appeal phase). | New |