
Common evidence gaps in point-of-care diagnostic test evaluation: a review of horizon scan reports.

Jan Y Verbakel1, Philip J Turner1, Matthew J Thompson2, Annette Plüddemann1, Christopher P Price1, Bethany Shinkins3, Ann Van den Bruel1,4.   

Abstract

OBJECTIVE: Since 2008, the Oxford Diagnostic Horizon Scan Programme has been identifying and summarising evidence on new and emerging diagnostic technologies relevant to primary care. We used these reports to determine the sequence and timing of evidence for new point-of-care diagnostic tests and to identify common evidence gaps in this process.
DESIGN: Systematic overview of diagnostic horizon scan reports. PRIMARY OUTCOME MEASURES: We obtained the primary studies referenced in each horizon scan report (n=40) and extracted details of the study size, clinical setting and design characteristics. In particular, we assessed whether each study evaluated test accuracy, test impact or cost-effectiveness. The evidence for each point-of-care test was mapped against the Horvath framework for diagnostic test evaluation.
RESULTS: We extracted data from 500 primary studies. Most diagnostic technologies underwent clinical performance (ie, ability to detect a clinical condition) assessment (71.2%), with very few progressing to comparative clinical effectiveness (10.0%) and a cost-effectiveness evaluation (8.6%), even in the more established and frequently reported clinical domains, such as cardiovascular disease. The median time to complete an evaluation cycle was 9 years (IQR 5.5-12.5 years). The sequence of evidence generation was typically haphazard and some diagnostic tests appear to be implemented in routine care without completing essential evaluation stages such as clinical effectiveness.
CONCLUSIONS: Evidence generation for new point-of-care diagnostic tests is slow and tends to focus on accuracy, overlooking other test attributes such as impact, implementation and cost-effectiveness. Evaluating this dynamic cycle and feeding back data from clinical effectiveness to refine analytical and clinical performance are key to improving the efficiency of point-of-care diagnostic test development and its impact on clinically relevant outcomes. While the 'road map' of steps needed to generate evidence is reasonably well delineated, we provide evidence on the complexity, length and variability of the actual process that many diagnostic technologies undergo. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

Keywords:  diagnosis; primary care; point-of-care systems; evidence-based medicine; framework; horizon scanning reports


Year:  2017        PMID: 28864692      PMCID: PMC5588931          DOI: 10.1136/bmjopen-2016-015760

Source DB:  PubMed          Journal:  BMJ Open        ISSN: 2044-6055            Impact factor:   2.692


This study provides the first data on evidence gaps in point-of-care diagnostic test evaluation in primary care, answering an important clinical need. We extracted data from multiple, consistently conducted horizon scan reports. Our approach might ignore relevant research, but the systematic evidence gaps identified across reports suggest that our findings are robust. Our analyses are limited by the publication date of each horizon scan report.

Introduction

Primary care is becoming increasingly complex due to a rise in patients with multimorbidity and polypharmacy, the pressure of short consultation times and the fragmented nature of primary and secondary care. Delayed or missed diagnoses are the most common reason for malpractice claims.1 Therefore, there is a huge demand for innovations that enable efficient and accurate diagnostic assessment within a general practitioner (GP) consultation. Consequently, the development of point-of-care diagnostic tests is currently a hotbed of activity.2 These tests have the potential to significantly improve the efficiency of diagnostic pathways, providing test results within the time frame of a single consultation and enabling them to influence immediate patient management decisions. A potential barrier to this innovative activity, however, is the slow and haphazard nature of the current pathway to adoption for new healthcare technologies.3 This is particularly the case for diagnostic tests, where uptake is highly variable between settings and notable inconsistencies lie in the speed at which they are adopted.4 One possible cause of this inefficiency is the slow generation of evidence of efficacy relevant to the target clinical settings. To provide an efficient means of identifying, summarising and disseminating the evidence for emerging diagnostic technologies relevant to primary care settings, the Oxford Diagnostic Horizon Scanning Programme was established in 2008 (currently funded by the National Institute for Health Research (NIHR) Oxford Diagnostic Evidence Co-operative).5 New technologies are identified through systematic literature searches and interactions with clinicians and the diagnostics industry.
These are then prioritised using a defined list of criteria.6 Evidence is gathered using systematic searches of the published literature and supplementary information obtained from manufacturer or trade websites and through web search engines, which are then used to summarise the analytical and diagnostic accuracy of the point-of-care test,7 impact of the test on patient outcomes and health processes, cost-effectiveness of the test and current guidelines for use within routine care in the UK. The reports, indexed in the TRIP database8 and freely available from the Horizon Scan Programme’s website (www.oxford.dec.nihr.ac.uk), are actively disseminated to the NIHR Health Technology Assessment Programme, the National Institute for Health and Clinical Excellence, clinical researchers and commissioners of healthcare services and highlight any further research requirements to facilitate evidence-based adoption decisions. To date, 40 horizon scan reports have been completed, all following an identical protocol. These horizon scan reports provide a unique opportunity to describe the evidence trajectory of new point-of-care diagnostic tests relevant to primary care settings and identify common evidence gaps.

Methods

This is a descriptive study of all 40 horizon scan reports published to date by the Oxford Horizon Scan Programme. For each horizon scan report, we extracted the year of publication and the disease area (classified per clinical domain of the International Classification of Primary Care—Revised Second Edition (ICPC2-R)17) (see online supplementary file 1). We subsequently reviewed all studies that were included in the horizon scan reports (including systematic reviews) and extracted data on year of publication, size of the study, point-of-care test device(s) and its intended role. The intended roles were defined as 'triage', in which the new test is used at the start of the clinical pathway; 'replacement', in which the new test replaces an existing test, either as a faster equivalent test or to replace a non-point-of-care laboratory test; or 'add-on', in which the new test is performed at the end of a clinical pathway.9 Depending on the role, different types of evidence are required before a new point-of-care test can be adopted in routine care.10 We extracted data on study design and primary outcomes and used the dynamic evidence framework developed by Horvath et al,11 as shown in figure 1, to classify the type of evidence, defined as (1) analytical performance, (2) clinical performance, (3) clinical effectiveness, (4) comparative clinical effectiveness, (5) cost-effectiveness and (6) broader impact.
Figure 1

Horvath et al 11’s cyclical framework for the evaluation of diagnostic tests. This framework illustrates the key components of the test evaluation process. (1) Analytical performance is the aptitude of a diagnostic test to conform to predefined quality specifications. (2) Clinical performance examines the ability of the biomarker to conform to predefined clinical specifications in detecting patients with a certain clinical condition or in a physiological state. (3) Clinical effectiveness focuses on the test’s ability to improve health outcomes that are relevant to an individual patient, also allowing comparison (4) of effectiveness between tests. (5) A cost-effectiveness analysis compares the changes in costs and health effects of introducing a test to assess the extent to which the test can be regarded as providing value for money. (6) Broader impact encompasses the consequences (eg, acceptability, social, psychological, legal, ethical, societal and organisational consequences) of testing beyond the above-mentioned components.

Analytical performance is the aptitude of a diagnostic test to conform to predefined quality specifications.12 13 Clinical performance examines the ability of the biomarker to conform to predefined clinical specifications in detecting patients with a particular clinical condition or in a physiological state.14 Clinical effectiveness focuses on the test's ability to improve health outcomes that are relevant to the individual patient.14 A cost-effectiveness analysis compares the changes in costs and in health effects of introducing a test to assess the extent to which the test can be regarded as providing value for money. Broader impact encompasses the consequences (eg, acceptability, social, psychological, legal, ethical, societal and organisational consequences) of testing beyond the above-mentioned components.
For point-of-care tests that had evidence on each of these components, we calculated the median time (in years) for a technology to complete the evaluation cycle. We assessed whether each study was conducted in a setting relevant to primary care, defined as GP surgeries (clinics), outpatient clinics, walk-in (or urgent care) centres and emergency departments. Data extraction was piloted on 20 reports by BS and checked by JV and PT, after which improvements were made to the final data extraction sheet. Three authors (JYV, BS and PJT) each single-extracted data from the included studies.
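The classification and cycle-time calculation described above can be sketched in code. This is an illustrative Python sketch, not the authors' extraction sheet: the study records are hypothetical, and the component codes follow the paper's convention (I analytical performance, II clinical performance, III clinical effectiveness, IV comparative clinical effectiveness, V cost-effectiveness, VI broader impact).

```python
from statistics import median

# Horvath evaluation components in the expected sequence.
COMPONENTS = ["I", "II", "III", "IV", "V", "VI"]

# Hypothetical extraction records: (test name, component, publication year).
studies = [
    ("POC CRP", "I", 1997), ("POC CRP", "II", 2001), ("POC CRP", "III", 2005),
    ("POC CRP", "IV", 2007), ("POC CRP", "V", 2009), ("POC CRP", "VI", 2011),
    ("POC troponin", "II", 2004), ("POC troponin", "II", 2008),
]

def evidence_gaps(records, test):
    """Components of the evaluation cycle with no published study for `test`."""
    seen = {c for t, c, _ in records if t == test}
    return [c for c in COMPONENTS if c not in seen]

def cycle_time(records, test):
    """Years from first to last publication, if the cycle is complete."""
    if evidence_gaps(records, test):
        return None  # evaluation cycle not yet completed
    years = [y for t, _, y in records if t == test]
    return max(years) - min(years)

print(evidence_gaps(studies, "POC troponin"))  # ['I', 'III', 'IV', 'V', 'VI']
print(cycle_time(studies, "POC CRP"))          # 14

# Median completion time across the tests that completed the full cycle.
times = [ct for t in {t for t, _, _ in studies}
         if (ct := cycle_time(studies, t)) is not None]
print(median(times))
```

With the hypothetical records above, the troponin test shows gaps in every component except clinical performance, mirroring the imbalance the paper reports.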

Results

We screened 40 horizon scan reports and extracted data from the 500 papers (including 41 systematic reviews) referenced by these reports (table 1). Ten horizon scan reports examined a point-of-care test relevant to cardiovascular disease, six to respiratory diseases and five to each of endocrine/metabolic diseases, digestive diseases and general/unspecified diseases. A further nine horizon scan reports examined a health problem relevant to a range of other disease areas. The intended role of the test was triage in 14 (35%), replacement in 20 (50%) and add-on in 6 (15%) of the 40 horizon scan reports.
Table 1

Baseline characteristics of horizon scan reports by clinical domain

Diagnostic test (by clinical domain) | Report published | Primary care studies (n)/all studies (N) | Sample sizes of primary studies (range) | Publication dates of primary studies (range) | Intended role | Components missing
Cardiovascular | | 21/158 | | | |
  The D-dimer test for ruling out DVT in primary care | 2009 | 2/9 | 1028–1295 | 1997–2009 | T | I/IV/VI
  Point-of-care test for INR coagulometers | 2010 | 16/32 | 18–892 | 2000–2010 | R |
  Point-of-care test for cardiac troponin | 2011 | 0/18 | 20–2263 | 2001–2011 | R |
  Point-of-care pulse wave analysis | 2011 | 0/7 | 22–749 | 2008–2011 | R | III/IV/V/VI
  Point-of-care test for BNP | 2011 | 1/15 | 150–606 | 2001–2011 | T | IV
  Handheld ECG monitors for the detection of atrial fibrillation in primary care | 2013 | 0/9 | 18–505 | 1990–2012 | R | I/IV/V
  Genotyping polymorphisms of warfarin metabolism | 2013 | 0/2 | 20–112 | 2009–2010 | T | III/IV/V/VI
  Point-of-care test for hFABP | 2014 | 1/21 | 52–1074 | 2003–2013 | A | I/VI
  Portable ultrasound devices | 2014 | 1/24 | 3–943 | 1993–2013 | T | VI
  Point-of-care test for a panel of cardiac markers | 2014 | 0/21 | 33–5201 | 1999–2012 | T | I/III/V/VI
Digestive | | 12/81 | | | |
  Point-of-care test for hepatitis C virus | 2011 | 0/6 | 100–2206 | 1999–2011 | R | IV/V/VI
  Point-of-care test for coeliac disease | 2012 | 1/9 | 87–2690 | 2004–2009 | T | I/III/IV/V
  Transcutaneous bilirubin measurement | 2013 | 0/28 | 31–849 | 1996–2011 | T | I/III/IV
  Point-of-care calprotectin tests | 2014 | 1/8 | 14–140 | 2008–2012 | T | I/III/IV/V/VI
  Point-of-care faecal occult blood testing | 2014 | 10/30 | 100–85149 | 1990–2013 | T |
Endocrine/metabolic | | 28/76 | | | |
  Point-of-care blood test for ketones in diabetes | 2009 | 6/9 | 33–529 | 2000–2006 | T |
  Point-of-care test for the analysis of lipid panels | 2010 | 9/13 | 34–4968 | 1995–2010 | R |
  Point-of-care test for HbA1c | 2010 | 6/7 | 23–7893 | 2003–2010 | R | IV*
  Point-of-care test for thyroid-stimulating hormone | 2013 | 0/1 | 64–64 | 1999–1999 | R | I/III/IV/V/VI
  Point-of-care HbA1c tests: diagnosis of diabetes | 2016 | 7/46 | 23–6226 | 1996–2015 | R | III/IV*
Female genital | | 2/8 | | | |
  Chlamydia trachomatis testing | 2009 | 2/8 | 162–2517 | 1998–2009 | R | II/V/VI
General and unspecified | | 7/34 | | | |
  Point-of-care test for total white cell count | 2010 | 1/6 | 120–500 | 2000–2009 | R | III/IV/V/VI
  Point-of-care test for CRP | 2011 | 6/8 | 20–898 | 1997–2011 | R |
  Point-of-care test for procalcitonin | 2012 | 0/8 | 54–384 | 2001–2010 | A | IV
  Non-contact infrared thermometers | 2013 | 0/6 | 90–855 | 2005–2013 | A | III/V/VI
  Point-of-care test for malaria | 2015 | 0/6 | 98–98 | 1999–2014 | R | I/V
Musculoskeletal | | 0/1 | | | |
  Autoimmune markers for rheumatoid arthritis | 2012 | 0/1 | 880–880 | 2008–2008 | R | I/III/IV/V/VI
Neurological | | 1/9 | | | |
  Point-of-care test for handheld nerve conduction measurement devices for carpal tunnel syndrome | 2012 | 1/9 | 33–1190 | 2000–2011 | R | IV/V
Pregnancy | | 2/11 | | | |
  Urinalysis self-testing in pregnancy | 2014 | 2/9 | 49–444220 | 1991–2012 | A | IV
  Point-of-care test for quantitative blood hCG | 2015 | 0/2 | 40–40 | 2014–2015 | T | III/IV/V/VI
Respiratory | | 25/84 | | | |
  Pulse oximetry in primary care | 2009 | 1/4 | 114–2127 | 1997–2004 | T | I/III/IV/V
  A portable handheld electronic nose in the diagnosis of cancer, asthma and infection | 2009 | 5/8 | 30–665 | 1999–2009 | T | III/IV/V/VI
  Point-of-care spirometry | 2011 | 4/11 | 13–1041 | 1996–2009 | R | V
  Point-of-care automated lung sound analysis | 2011 | 3/22 | 1–100 | 1996–2010 | A | IV/V
  Point-of-care tests for influenza in children | 2012 | 8/29 | 73–9186 | 2002–2011 | R | I/IV
  Point-of-care tests for group A streptococcus | 2015 | 4/10 | 121–892 | 2002–2015 | R | I
Skin | | 1/7 | | | |
  Dermoscopy for the diagnosis of melanoma in primary care | 2009 | 1/7 | 96–3053 | 2001–2008 | T | I/IV/V
Urological | | 7/31 | | | |
  Point-of-care urine albumin–creatinine ratio test | 2010 | 6/11 | 83–4968 | 1999–2010 | R |
  Point-of-care test for creatinine | 2014 | 1/10 | 20–401 | 2007–2013 | R | IV/V/VI
  Point-of-care NGAL tests | 2015 | 0/10 | 100–1219 | 2007–2013 | A | III/IV/V
Grand total | | 106/500 | | | |

Evaluation components: I, analytical performance; II, clinical performance; III, clinical effectiveness; IV, comparative clinical effectiveness; V, cost-effectiveness; VI, broader impact.

*Same diagnostic technology.

A, add-on; BNP, B-natriuretic peptide; CRP, C reactive protein; DVT, deep vein thrombosis; HbA1c, glycated haemoglobin; hCG, human chorionic gonadotropin; hFABP, heart-type fatty acid-binding protein; INR, international normalised ratio; n, number; NGAL, neutrophil gelatinase-associated lipocalin; R, replacement; T, triage

We found a median of nine primary studies (IQR 7–15.8) per horizon scan report, with a median time between first and last publication of 10 years (IQR 6.8–13.3). Across all horizon scan reports, on average only 19% (95% CI 11.4% to 20.7%) of studies were performed in primary care (figure 2).
Figure 2

Setting (%) of the studies by disease area (according to the International Classification of Primary Care-Second edition coding).

Of all studies, 30.4% (n=152) assessed analytical performance of the diagnostic technology, providing evidence for this component in 25 (62.5%) of the 40 horizon scan reports. Clinical performance was evaluated in 71.2% (n=356) of all studies, while only 18.2% (n=91) of studies evaluated clinical effectiveness of the diagnostic technology. A further 10.0% (n=50) compared the clinical effectiveness of two or more point-of-care tests, and only 8.6% (n=43) of the 500 papers evaluated cost-effectiveness (figure 3).
Figure 3

Test evaluation component by disease area in absolute number (n) of studies.

Clinical performance was often assessed before (in 16 tests) or alongside (in 12 tests) analytical performance, and was not assessed at all for 6 tests. Broader impact, such as acceptability, was tested before evidence on clinical effectiveness or cost-effectiveness was available in 11 horizon scan reports. Figure 4 shows the number of years between the horizon scan report and original paper publication date for each evaluation component, split by intended role of the point-of-care test. The size of the bubbles represents the number of studies proportionate to all studies for the intended role, clearly depicting the emphasis on clinical performance and the paucity of clinical effectiveness studies. Furthermore, tests acting as a triage instrument tend to spend more time on evidence generation than tests replacing an existing one or add-on tests performed at the end of the clinical pathway.
Figure 4

Number of years between horizon scan report and original paper publication date by the intended role for each evaluation component. Size of bubbles represents number of studies proportionate to all studies for the intended role. BNP, B-natriuretic peptide; CRP, C reactive protein; FOBT, faecal occult blood test; HbA1c, glycated haemoglobin; hCG, human chorionic gonadotropin; hFABP, heart-type fatty acid-binding protein; INR, international normalised ratio; TSH, thyroid-stimulating hormone; WBC, white cell count.

Only seven (17.5% (95% CI 7.3% to 32.8%)) horizon scan reports included evidence for all evaluation components, with a median time to complete the evaluation cycle of 9 years (IQR 6–13 years). Of these, tests acting as a triage instrument (in three reports) took a median of 15 years (IQR 10–19), while tests replacing an existing one (in four reports) took a median of 9 years (IQR 5–11) (figure 4). Even in the latter category of diagnostic technology replacing existing tests, where nearly half of all studies (49.4%; n=247) had been performed, there was a clear imbalance between studies merely focusing on analytical or clinical performance (87.4%) and the few studies advancing to clinical effectiveness (21.1%). The sequence of evidence generation over time for the seven horizon scan reports that had completed the evaluation cycle varied widely, as shown in figure 5. The size of the bubbles represents the proportion of studies for each evaluation component. The grey arrow shows the sequence we would expect, starting at analytical performance (at 12 o’clock) and completing at broader impact analysis (at 10 o’clock). The arrows and numbered bubbles represent the actual time sequence of evidence generation.
Figure 5

Sequence of evidence generation for all seven horizon scan reports completing the full evaluation cycle.  INR, international normalised ratio.

Very few point-of-care test evaluations seem to follow the expected sequence. In fact, only the report on point-of-care C reactive protein testing generally followed a linear temporal sequence from analytical performance towards broader impact. Some diagnostic technologies, such as point-of-care international normalised ratio (INR) testing, had evidence generated for the broader impact component before any other component, suggesting that some diagnostic technologies are adopted in routine clinical care prior to any published evidence on clinical performance or effectiveness.

Discussion

Main findings

Our findings suggest that most point-of-care diagnostic tests undergo clinical performance assessment, but very few progress to evaluation of their broader impact or cost-effectiveness, even in the more established and frequently reported clinical domains, such as cardiovascular disease. Some point-of-care tests even skip essential stages such as clinical effectiveness, yet are still implemented in routine care. We present a novel way to visualise gaps in evidence generation through bubble plots and a dynamic cycle illustration.

Strengths and limitations

Our study provides novel data on common evidence gaps in the evaluation of novel point-of-care tests for a wide range of clinical conditions. The extensive library of existing horizon scan reports and the methodological rigour with which they are produced provided an ideal opportunity to review the pathway of evidence for novel point-of-care diagnostic technologies relevant to primary care settings. The chosen topics of the reports result from a comprehensive approach to identifying new or emerging diagnostic tests, including literature searches and interaction with the diagnostics industry and clinicians, prioritising technologies relevant to primary care. We have, however, no measure of how reproducible this prioritisation process is, which potentially risks greater or lesser inclusion of various disease areas. Evidence from reports on other clinical topics might provide different findings. Furthermore, our review is limited to the publication date of each report, thus potentially overlooking evidence generated after publication. Our approach might arguably ignore relevant (unpublished) research (eg, studies performed during test development by industry), but the commonalities in the evidence gaps across the reports suggest that our findings are robust. Three authors (JYV, BS and PJT) single-extracted data from the 500 included studies, with each study extracted by one author only, making it impossible to test for inter-rater agreement.

Comparison with the existing literature

Previous evidence has shown that the adoption of a diagnostic technology is often insufficient to achieve a benefit, and in most cases, a change of care process is essential.15 The market for point-of-care tests is growing rapidly,16 and there is a clear demand from primary care clinicians for these tests to help them diagnose a range of conditions.17 18 Critical appraisal of new diagnostic technologies is considered essential to facilitate implementation.11 Several evaluation frameworks have been identified,19 most of which describe the evaluation process as a linear process, similar to the staged evaluation of drugs. Considering the interactions between different evaluation components and the need for certain tests to re-enter the evaluation process after updates to the underlying technology,20 it may be more realistic to assess these components in a cyclic and repetitive process.11

Implications

The slow adoption of novel point-of-care tests may result from the paucity of technologies following the expected sequence of evidence generation. Specifically, there is a need to shift emphasis from examining the clinical performance of point-of-care tests to comparative clinical effectiveness and broader impact assessment. We recommend using a structured, dynamic approach, presenting the results in a visually accessible manner for both industry (during development and the pursuit of regulatory approval) and research purposes. Policy-makers and guideline developers should be aware of this cyclical nature; assuming test evaluation is a linear process results in a less efficient evidence generation pathway. For example, assessing cost-effectiveness early in the development phase of a novel point-of-care test can help determine where exactly it fits in the clinical pathway and thus ensure that the evidence subsequently generated is relevant to that population and setting.

Conclusions

Considering that evidence generation for new tests takes on average 9 years, test developers need to be aware of the time and investment required. While the ‘road map’ of steps needed to generate evidence is reasonably well delineated, we provide evidence on the complexity, length and variability of the actual process that many diagnostic technologies undergo.
References (13 in total)

1.  Introduction: strategies to set global quality specifications in laboratory medicine.

Authors:  C G Fraser; A Kallner; D Kenny; P H Petersen
Journal:  Scand J Clin Lab Invest       Date:  1999-11       Impact factor: 1.713

2.  Interpreting diagnostic accuracy studies for patient care.

Authors:  Susan Mallett; Steve Halligan; Matthew Thompson; Gary S Collins; Douglas G Altman
Journal:  BMJ       Date:  2012-07-02

3.  From biomarkers to medical tests: the changing landscape of test evaluation.

Authors:  Andrea R Horvath; Sarah J Lord; Andrew StJohn; Sverre Sandberg; Christa M Cobbaert; Stefan Lorenz; Phillip J Monaghan; Wilma D J Verhagen-Kamerbeek; Christoph Ebert; Patrick M M Bossuyt
Journal:  Clin Chim Acta       Date:  2013-09-27       Impact factor: 3.786

4.  The Evidence to Support Point-of-Care Testing. (Review)

Authors:  Andrew St John
Journal:  Clin Biochem Rev       Date:  2010-08

5.  Proposals for a phased evaluation of medical tests.

Authors:  Jeroen G Lijmer; Mariska Leeflang; Patrick M M Bossuyt
Journal:  Med Decis Making       Date:  2009-07-15       Impact factor: 2.583

6.  Prioritisation criteria for the selection of new diagnostic technologies for evaluation.

Authors:  Annette Plüddemann; Carl Heneghan; Matthew Thompson; Nia Roberts; Nicholas Summerton; Luan Linden-Phillips; Claire Packer; Christopher P Price
Journal:  BMC Health Serv Res       Date:  2010-05-05       Impact factor: 2.655

7.  Chapter 7: grading a body of evidence on diagnostic tests.

Authors:  Sonal Singh; Stephanie M Chang; David B Matchar; Eric B Bass
Journal:  J Gen Intern Med       Date:  2012-06       Impact factor: 5.128

8.  Current and future use of point-of-care tests in primary care: an international survey in Australia, Belgium, The Netherlands, the UK and the USA.

Authors:  Jeremy Howick; Jochen W L Cals; Caroline Jones; Christopher P Price; Annette Plüddemann; Carl Heneghan; Marjolein Y Berger; Frank Buntinx; John Hickner; Wilson Pace; Tony Badrick; Ann Van den Bruel; Caroline Laurence; Henk C van Weert; Evie van Severen; Adriana Parrella; Matthew Thompson
Journal:  BMJ Open       Date:  2014-08-08       Impact factor: 2.692

9.  Point-of-care testing in UK primary care: a survey to establish clinical needs.

Authors:  Philip J Turner; Ann Van den Bruel; Caroline H D Jones; Annette Plüddemann; Carl Heneghan; Matthew J Thompson; Christopher P Price; Jeremy Howick
Journal:  Fam Pract       Date:  2016-04-05       Impact factor: 2.267

10.  More Than Just Accuracy: A Novel Method to Incorporate Multiple Test Attributes in Evaluating Diagnostic Tests Including Point of Care Tests.

Authors:  Matthew Thompson; Bernhard Weigl; Annette Fitzpatrick; Nicole Ide
Journal:  IEEE J Transl Eng Health Med       Date:  2016-06-13       Impact factor: 3.316

