| Literature DB >> 34591852 |
Céline Caillet1,2,3,4, Serena Vickers1,2,3, Stephen Zambrzycki5, Facundo M Fernández5, Vayouly Vidhamaly1,2,3, Kem Boutsamay1,2,3, Phonepasith Boupha1,2,3, Pimnara Peerawaranun4, Mavuto Mukaka2,4, Paul N Newton1,2,3,4.
Abstract
BACKGROUND: Medicine quality screening devices hold great promise for post-market surveillance (PMS). However, there is little independent evidence on their field utility and usability to inform policy decisions. This pilot study in the Lao PDR tested six devices' utility and usability in detecting substandard and falsified (SF) medicines.
Year: 2021 PMID: 34591852 PMCID: PMC8483322 DOI: 10.1371/journal.pntd.0009674
Source DB: PubMed Journal: PLoS Negl Trop Dis ISSN: 1935-2727
Main characteristics of the devices included in the study*.
| Device name | Manufacturer or Institution | Market status | Technology and main specifications | Handheld | Cost^a |
|---|---|---|---|---|---|
| 4500a FTIR Single Reflection Spectrometer | Agilent Technologies | M | FTIR–MIR; spectral range 4,000–650 cm-1 | N | US$31,067 |
| Minilab^b | Global Pharma Health Fund E.V. | M | TLC, disintegration test | N | US$2,510 (without reference standards) |
| MicroPHAZIR RX analyser | ThermoFisher Scientific | M | NIR–dispersive; wavelength range 1,600–2,400 nm | Y | US$47,500 |
| NIR-S-G1 Spectrometer^c,d | Young Green Energy–InnoSpectra | M | NIR–dispersive; wavelength range 900–1,700 nm | Y | US$1,199 (without smartphone) |
| Paper Analytical Device | University of Notre Dame | D | Paper-based colour test | Y (S) | US$3 |
| Progeny Spectrometer | Rigaku | M | Raman, 1,064 nm laser | Y | (ex-demo model) |
| TruScan RM Spectrometer | ThermoFisher Scientific | M | Raman, 785 nm laser | Y | US$62,500 (including chemometric software package and tablet holder) |
* Rapid diagnostic test (RDT) single-use immunoassay devices were deemed field-suitable in the laboratory evaluation work [7], but could not be evaluated in the present study because their developers were unable to supply sufficient devices within the project timeframe.
D, Under development; FTIR, Fourier Transform Infrared; M, Marketed; MIR, Mid-Infrared; MS, Mass spectrometry; N, No; NIR, Near infrared; S, Single-use device; TLC, Thin-layer chromatography; Y, Yes.
a The costs reported here do not include VAT, which may vary by country of purchase. Ordering several devices from a manufacturer may attract a reduced purchase price.
b Unlike other devices, the Minilab was evaluated by laboratory technicians involved in current routine quality control at the National Center for Food and Drug Analysis.
c At the time of the study the NIR unit was produced by Young Green Energy. It is now produced by InnoSpectra Corporation.
d The near-infrared sampling unit is marketed, but the smartphone application is not.
Sample sets of medicines initially included in sample set testing.
| API | Study Code | Brand name | Quality type and origin of the medicines |
|---|---|---|---|
| Sulfamethoxazole-trimethoprim (SMTM) | SPS16 | Diabeta 250 | F—Look-alike¥ |
| | SPS03 | N/A | 100% API simulated medicine* |
| | SPS04 | N/A | 50% API simulated medicine* |
| | SPS02 | N/A | 0% API simulated medicine* |
| Artemether-lumefantrine (AL) | SPS09 | Coartem | G—Field-collected |
| | SPS10 | Coartem | F—Field-collected |
| | SPS11 | Coartem | F—Field-collected |
| Ofloxacin (OFLO) | SPS14 | Oflocee | G—Field-collected |
| | SPS15 | Ofloxacin | G—Field-collected |
| | SPS13 | Di-Flo | G—Field-collected |
| | SPS05 | N/A | 100% API simulated medicine* |
| | SPS01 | N/A | 50% API simulated medicine* |
| | SPS02 | N/A | 0% API simulated medicine* |
The test sample (SPS22) and four reference library samples (SPS20, SPS21, SPS06, SPS07) were discarded from the results because UPLC analysis found unexpected out-of-specification API content. Samples whose reference library samples were out of specification were still used for the PAD evaluation, because the PAD reference libraries are independent reference pictures provided by the device developers.
G: genuine; F: falsified.
*Simulated sample
¥ ‘Look-alike’ medicines are defined as medicines stated to contain a specific API (not one of the seven APIs included in this study) but visually indistinguishable from genuine medicines, in order to mimic a falsified medicine containing the wrong API. The actual medicine was Diabeta (chlorpropamide), but the tablets looked identical to Sulfatrim (SMTM) [21].
Pairwise comparisons of the median total time taken per sample in sample set testing.
| | 4500a FTIR | MicroPHAZIR RX | Minilab | NIR-S-G1 | PAD | Progeny | TruScan RM |
|---|---|---|---|---|---|---|---|
| 4500a FTIR | - | <0.001 | <0.001 | <0.001 | <0.001 | 0.004 | 0.009 |
| MicroPHAZIR RX | - | - | <0.001 | <0.001 | <0.001 | <0.001 | 0.002 |
| Minilab | - | - | - | <0.001 | <0.001 | <0.001 | <0.001 |
| NIR-S-G1 | - | - | - | - | <0.001 | <0.001 | <0.001 |
| PAD | - | - | - | - | - | <0.001 | <0.001 |
| Progeny | - | - | - | - | - | - | 0.51 |
| TruScan RM | - | - | - | - | - | - | - |
| Median (IQR) total time per sample, seconds | 316 (206–373) | 134 (98–170) | 2,063 (1,766–2,920) | 94 (61–112) | 620 (562–716) | 273 (163–302) | 148 (109–299) |
P-values from the mixed-effects generalised linear regression model of ln(total time), adjusted for device and training, and clustered by inspectors and observers, are presented.
Pairwise comparisons of the percentage of samples wrongly classified, out of all samples tested with each device, across the evaluation pharmacy (EP) inspections.
| | 4500a FTIR | MicroPHAZIR RX | NIR-S-G1 | PAD | Progeny | TruScan RM |
|---|---|---|---|---|---|---|
| 4500a FTIR | - | 0.103 | 1.000 | 0.014 | 1.000 | 0.242 |
| MicroPHAZIR RX | - | - | 0.243 | <0.001 | 0.167 | N/A^a |
| NIR-S-G1 | - | - | - | 0.005 | 1.000 | 0.269 |
| PAD | - | - | - | - | 0.023 | <0.001 |
| Progeny | - | - | - | - | - | 0.225 |
| TruScan RM | - | - | - | - | - | - |
| % wrongly classified (95% CI) | 9.7 (2.0–25.8) | 0 (0–10.3) | 7.7 (1.6–20.9) | 37.9 (20.7–57.7) | 8.3 (1.0–27.0) | 0 (0–13.2) |
P-values of Fisher's exact test are presented.
* p<0.05
**p<0.01
***p<0.001
a Not applicable, as no samples were wrongly categorised in inspections with the TruScan RM or MicroPHAZIR RX.
b Artesunate samples were discarded from the analysis because inspectors scanned the samples through their glass vials, whereas the reference library had been created by scanning through replacement plastic packaging.
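The pairwise p-values above come from Fisher's exact test on 2×2 tables of wrongly vs correctly classified samples per device. As a sketch of how one such comparison can be reproduced with `scipy.stats.fisher_exact`, using counts inferred from the reported percentages for the 4500a FTIR (9.7% ≈ 3/31) and the PAD (37.9% = 11/29), not the study's raw data:

```python
from scipy.stats import fisher_exact

# 2x2 contingency table: [wrongly classified, correctly classified]
# Counts inferred from the reported percentages; illustrative only.
table = [
    [3, 31 - 3],    # 4500a FTIR
    [11, 29 - 11],  # PAD
]
odds_ratio, p_value = fisher_exact(table)  # two-sided by default
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```

The resulting p-value is of the same order as the 0.014 reported for the FTIR-vs-PAD cell above.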
Comparison of the number of samples incorrectly classified in evaluation pharmacy inspections with devices vs initial visual inspection without device.
| Device | Z | p-value | Median (IQR) number of samples wrongly classified with the device | Median (IQR) number of samples wrongly classified in initial inspection$ |
|---|---|---|---|---|
| 4500a FTIR | -1.980 | 0.048 | 1.0 (0.3–1.0) | 2.0 (1.0–2.3) |
| MicroPHAZIR RX | 2.638 | 0.008 | 0 (0–0) | 2.0 (1.0–2.3) |
| NIR-S-G1 | 1.980 | 0.048 | 1.0 (0.3–1.0) | 2.0 (1.0–2.3) |
| PAD | -0.480 | 0.631 | 2.0 (1.0–5.3) | 2.0 (0.8–2.3) |
| Progeny | 1.891 | 0.059 | 0 (0–1.5) | 1.5 (1.0–2.3) |
| TruScan RM | 2.814 | 0.005 | 0 (0–0) | 1.5 (1.0–2.3) |
The Z statistic and p-value of the Wilcoxon rank sum test are presented.
* p<0.05
**p<0.01
***p<0.001
$ The numbers of samples wrongly categorised in initial inspections without devices vary between comparisons because we included only those brands, tested in initial inspections, that each device was able to test (e.g. AL samples wrongly categorised during initial inspections were excluded for the PAD, as the PAD could not test samples containing AL). In both initial inspections without devices and inspections with devices, we excluded wrongly categorised samples from brands subsequently found to have reference library spectra obtained from poor-quality reference samples (as per UPLC analyses), except for the PAD.
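The device-vs-visual comparisons above use the Wilcoxon rank sum test on per-inspection counts of wrongly classified samples. A minimal sketch with `scipy.stats.ranksums`, using hypothetical counts (not the study's raw data):

```python
from scipy.stats import ranksums

# Hypothetical per-inspection counts of wrongly classified samples;
# illustrative values only, not taken from the study.
with_device = [0, 1, 1, 0]     # inspections using a screening device
initial_visual = [2, 1, 3, 2]  # initial visual inspections without a device
z_stat, p_value = ranksums(with_device, initial_visual)
print(f"Z = {z_stat:.3f}, p = {p_value:.3f}")
```

A negative Z here indicates fewer wrongly classified samples with the device, matching the sign convention of the table rows where the device outperformed visual inspection.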
Matrix of pairwise comparisons of accuracy of devices in classifying samples incorrectly during sample set inspections (test device in row vs reference device in column).
| | 4500a FTIR | MicroPHAZIR RX | Minilab | NIR-S-G1 | PAD | Progeny | TruScan RM |
|---|---|---|---|---|---|---|---|
| 4500a FTIR | - | 1.2 (0.1–25.0) | 0.5 (0.0–7.5) | 0.5 (0.0–5.8) | 0.5 (0.0–4.9) | 0.3 (0.0–3.5) | 0.6 (0.1–7.5) |
| MicroPHAZIR RX | - | - | 0.4 (0.0–6.5) | 0.4 (0.0–5.3) | 0.4 (0.0–3.8) | 0.2 (0.0–2.8) | 0.5 (0.0–6.1) |
| Minilab | - | - | - | 0.9 (0.1–10.2) | 0.9 (0.1–6.5) | 0.6 (0.1–4.2) | 1.3 (0.2–10.8) |
| NIR-S-G1 | - | - | - | - | 1.0 (0.1–6.9) | 0.6 (0.1–3.0) | 1.4 (0.2–10.9) |
| PAD | - | - | - | - | - | 0.4 (0.1–2.5) | 1.4 (0.3–7.0) |
| Progeny | - | - | - | - | - | - | 2.3 (0.4–13.0) |
| % of samples wrongly classified (95% CI$) | 5.6 (0.1–27.3) | 7.7 (0.2–36.0) | 11.8 (1.5–36.4) | 11.1 (1.4–34.7) | 20.8 (7.1–42.2) | 23.5 (6.8–49.9) | 15.0 (3.2–37.9) |
Odds ratios (95% CI) from the mixed-effects logit model, adjusted for the type of training received (rudimentary or intensive) and sample set type (OFLO, AL, SMTM), and clustered by inspectors, are presented.
$ 95% CI for the binomial distribution
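The binomial 95% CIs in the bottom rows of these matrices are consistent with exact (Clopper-Pearson) intervals. A minimal sketch, assuming 11 wrongly classified samples out of 29 for the PAD (counts inferred from the reported 37.9%, not the study's raw data):

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) binomial CI for k events in n trials."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# 11/29 wrongly classified, inferred from the reported 37.9% for the PAD
lower, upper = clopper_pearson(11, 29)
print(f"37.9% (95% CI {lower:.1%}-{upper:.1%})")
```

This reproduces an interval close to the 20.7–57.7 reported for the PAD; the `k = 0` branch likewise yields the one-sided 0 (0–10.3)-style intervals seen for devices with no misclassifications.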
Observed user errors during EP and SSM inspections.
| Device | Selection of wrong reference libraries in EP | | | Selection of wrong reference libraries in SSM | | | Other errors | Errors resulting in misclassification | Comments |
|---|---|---|---|---|---|---|---|---|---|
| 4500a FTIR | N/A | N/A | N/A | N/A | N/A | N/A | 5.9% (3/51) scans in EP and 3.0% (1/33) in SSM were not renamed after acquisition in the device memory | 0 | Samples were recorded on paper by the inspectors; errors therefore did not result in sample misclassification, but could affect traceability in practice |
| MicroPHAZIR RX* | 3.9% (2/51) | 5.9% (2/34) | 0.0% (0/34) | 0.0% (0/33) | 0.0% (0/13) | 0.0% (0/13) | 5.9% (3/51 scans with tablets not inserted in the sample cover) in EP | 0 | All errors were made by the inspector with rudimentary training |
| NIR-S-G1 | 27.0% (17/63) | 28.2% (11/39) | 5.1% (2/39) | 27.8% (10/36) | 33.3% (6/18) | 11.1% (2/18) | None | None | |
| PAD | N/A | N/A | N/A | N/A | N/A | N/A | Reading result errors in 24.1% (7/29) samples tested in EP and 20.8% (5/24) in SSM | 6.9% (2/29) in EP and 16.7% (4/24) in SSM | In some cases both the PAD showed wrong colours and the user made an error of interpretation, leading to an overall correct classification |
| PAD | | | | | | | None of the failing samples were rerun despite clear instructions to rerun suspicious samples | Uncertain | |
| PAD | | | | | | | Use of the same visibly contaminated water for multiple PADs during one EP inspection (inspector with rudimentary training) | Uncertain | |
| Progeny | 7.5% (4/53) | 13.3% (4/30) | 0.0% (0/30) | 0.0% (0/21) | 0.0% (0/13) | 0.0% | Deviation from the study protocol: one inspector did not run the ’Application’ test after running the ’Analyse’ function | 0 | |
| TruScan RM | 20.0% (9/45) | 19.4% (6/31) | 0.0% (0/31) | 25.8% (8/31) | 20.0% (5/20) | 0.0% | None | None | Inspectors did not recognise they had selected the wrong library entry, but the device returned the correct result |
* Only 3 inspections with the MicroPHAZIR RX, because the results of one inspection were discarded due to an issue with the inbuilt reference library
$ Errors were recognised by the inspectors, who re-tested the samples without mistakes
¥ Wrong selection of the reference library can occur only with the ’Application’ function. With the ’Analyse’ function, no user errors were observed in either EP or SSM