Andrea Haase, Markus Follmann, Guido Skipka, Hanna Kirchner.
Abstract
BACKGROUND: Information overload, increasing time constraints, and inappropriate search strategies complicate the detection of clinical practice guidelines (CPGs). The aim of this study was to provide clinicians with recommendations for search strategies to efficiently identify relevant CPGs in SUMSearch and Google Scholar.
Year: 2007 PMID: 17603909 PMCID: PMC1925105 DOI: 10.1186/1471-2288-7-28
Source DB: PubMed Journal: BMC Med Res Methodol ISSN: 1471-2288 Impact factor: 4.615
Figure 1. Flowchart of study methodology.
Determination of clinical practice guideline terms for the GLAD search strategy in the preliminary study
| CPG term† | Retrievals in SUMSearch‡ | Retrievals in Google Scholar‡ § | Guidelines identified in SUMSearch|| |
| Guideline/-s/-* | 340,105 | 1,954,000 | 58 |
| Practice guideline/-s/-* | 139,385 | 984,000 | 46 |
| Recommendation/-s/-* | 162,239 | 1,892,000 | 35 |
| Standard/-s/-* | 1,384,650 | 18,560,000 | 30 |
| Clinical pathway | 3,332 | 420,000 | 0 |
| Clinical protocol | 76,530 | 572,000 | 0 |
| Clinical standard | 64,009 | 872,000 | 0 |
| Clinical recommendation | 3,649 | 64,900 | 5 |
| Consensus | 62,113 | 970,000 | 7 |
| Clinical consensus | 12,401 | 263,000 | 0 |
| Consensus (SUMSearch: AND) development conferences | 6,189 | 49,700 | 6 |
| Position paper | 7,115 | 1,740,000 | 0 |
| Clinical (SUMSearch: AND) position paper | 1,085 | 272,000 | 0 |
| Good (SUMSearch: AND) clinical practice | 6,487 | 768,000 | 0 |
† The terms 'guideline', 'practice guideline', 'recommendation' and 'standard' were entered into SUMSearch and Google Scholar with the truncation '*', and as singular and plural terms.
‡ The number of retrievals produced by the respective CPG terms per search engine in the preliminary study.
§ Defined by Google Scholar as "Results 1–10 of about...."
|| The number of guidelines identified in SUMSearch by the combination of a CPG term and the MeSH term "back pain".
* = truncation; CPG = Clinical Practice Guideline; GLAD = GuideLine And Disease.
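The footnotes above describe how a GLAD ("GuideLine And Disease") strategy is built: a CPG term, optionally truncated, is combined with a disease (MeSH) term, and SUMSearch needs an explicit AND between the words of some multi-word terms. A minimal sketch of that string assembly; `glad_query` is an illustrative helper name, not from the paper:

```python
def glad_query(cpg_term, disease_term, explicit_and=False):
    """Combine a CPG term with a disease term into one GLAD search string.

    explicit_and: SUMSearch requires 'AND' between the parts of some
    multi-word terms (e.g. 'consensus AND development conferences').
    """
    joiner = " AND " if explicit_and else " "
    return f"{cpg_term}{joiner}{disease_term}"

# Truncated CPG term combined with the MeSH term used in the preliminary study:
query = glad_query("guideline*", "back pain")
```

The same helper covers the footnoted SUMSearch variants, e.g. `glad_query("consensus", "development conferences", explicit_and=True)`.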
Allocation of retrievals in the manual review (3 GLAD-strategies, 9 diseases)
[Table: for each of the 3 GLAD strategies and 9 diseases, retrievals were allocated to three categories: unique and relevant retrievals (no duplicates); non-relevant retrievals removed from the manual review; and raw retrievals including duplicates (e.g. between PubMed and NGC, or between the singular and plural forms of a search term).]
Figure 2. Intersections of unique and relevant clinical practice guidelines of nine tested diseases. (a) Per CPG term in Google Scholar. (b) Per CPG term in SUMSearch. (c) All CPG terms combined (SUMSearch and Google Scholar). CPG = clinical practice guideline.
Formula for calculating retrieval performance parameters for search strategies*
| | Meets criteria (unique, relevant CPGs) | Does not meet criteria (non-relevant CPGs or duplicates) |
| Detected | a | b |
| Not detected | c | d |
*Following the methodology of the Hedges group (see text): sensitivity = a/(a+c); specificity = d/(b+d); precision = a/(a+b); total sample (all unscreened reviewed retrievals) = a+b+c+d; NNR = 1/precision.
CPG = clinical practice guideline.
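The Hedges-group formulas above can be applied directly to the 2×2 counts reported in the next table; a minimal sketch, using the 'Guideline*' strategy's counts:

```python
# Retrieval performance of a search strategy from its 2x2 table:
# a = detected, unique and relevant CPGs
# b = detected, non-relevant CPGs or duplicates
# c = not detected, unique and relevant CPGs
# d = not detected, non-relevant CPGs or duplicates
def retrieval_performance(a, b, c, d):
    """Return sensitivity, specificity, precision, and NNR."""
    sensitivity = a / (a + c)
    specificity = d / (b + d)
    precision = a / (a + b)
    nnr = 1 / precision  # number needed to read
    return sensitivity, specificity, precision, nnr

# Counts for the 'Guideline*' strategy:
sens, spec, prec, nnr = retrieval_performance(a=97, b=697, c=22, d=2014)
```

This reproduces the table values: sensitivity ≈ 81.51%, specificity ≈ 74.29%, NNR ≈ 8.19 (reported as 8.18, presumably from rounding precision before inverting).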
Retrievals obtained by application of GLAD search strategies in SUMSearch and Google Scholar
| GLAD strategy | | Unique, relevant CPGs | Non-relevant CPGs or duplicates | Total |
| SUMSearch (1843† pooled retrievals) | | | | |
| Guideline* | Detected | 97 | 697 | 794 |
| | Not detected | 22 | 2014 | 2036 |
| Recommendation* | Detected | 72 | 643 | 715 |
| | Not detected | 47 | 2068 | 2115 |
| Practice guideline* | Detected | 48 | 286 | 334 |
| | Not detected | 71 | 2425 | 2496 |
| Google Scholar (987† pooled retrievals) | | | | |
| Guideline/s | Detected | 38 | 595 | 633 |
| | Not detected | 81 | 2116 | 2197 |
| Recommendation/s | Detected | 10 | 214 | 224 |
| | Not detected | 109 | 2497 | 2606 |
| Practice guideline/s | Detected | 14 | 116 | 130 |
| | Not detected | 105 | 2595 | 2700 |
| Total | | 119 | 2711 | 2830 |
† Number of pooled retrievals for the nine MeSH terms in each search engine.
* = truncation; GLAD = Guideline And Disease.
Retrieval performance of search strategies in SUMSearch and Google Scholar†
| GLAD strategy | Sensitivity % (95% CI) | Specificity % (95% CI) | NNR‡ (95% CI) |
| Guideline* | 81.51 (74.53 to 88.49) | 74.29 (72.64 to 75.94) | 8.18 (6.90 to 10.05) |
| Recommendation* | 60.50 (51.72 to 69.28) | 76.28 (74.67 to 77.89) | 9.93 (8.14 to 12.72) |
| Practice guideline* | 40.34 (31.52 to 49.16) | 89.45 (88.29 to 90.61) | 6.96 (5.52 to 9.43) |
| Guideline/s | 31.93 (23.56 to 40.30) | 78.05 (76.50 to 79.60) | 16.67 (12.76 to 24.04) |
| Recommendation/s | 8.40 (3.42 to 13.38) | 92.11 (91.09 to 93.13) | 22.42 (13.97 to 56.82) |
| Practice guideline/s | 11.76 (5.98 to 17.54) | 95.72 (94.96 to 96.48) | 9.29 (6.21 to 18.38) |
† 95% confidence intervals in brackets.
‡ NNR = number needed to read.
* = truncation.
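The reported intervals are consistent with a simple normal-approximation (Wald) confidence interval for a proportion; the exact method is not stated in this excerpt, so treating it as Wald is an assumption. A minimal sketch, checked against the 'Guideline*' row:

```python
import math

def wald_ci(p, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion p observed on n items."""
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Sensitivity of 'Guideline*': 97 of 119 unique, relevant CPGs detected
sens_lo, sens_hi = wald_ci(97 / 119, 119)
# Specificity of 'Guideline*': 2014 of 2711 non-relevant retrievals not detected
spec_lo, spec_hi = wald_ci(2014 / 2711, 2711)
```

This reproduces the table's bounds to within rounding: roughly 74.5 to 88.5 for sensitivity (reported 74.53 to 88.49) and 72.64 to 75.94 for specificity.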