| Literature DB >> 30990522 |
Philip J Scott1, Angela W Brown1, Taiwo Adedeji1, Jeremy C Wyatt2, Andrew Georgiou3, Eric L Eisenstein4, Charles P Friedman5.
Abstract
OBJECTIVE: To assess measurement practice in clinical decision support evaluation studies.
Keywords: clinical decision support systems; health informatics; measurement; reliability; validity
Year: 2019 PMID: 30990522 PMCID: PMC6748820 DOI: 10.1093/jamia/ocz035
Source DB: PubMed Journal: J Am Med Inform Assoc ISSN: 1067-5027 Impact factor: 4.497
Classifications of generic study types by broad study questions and the stakeholders concerned, with kind permission from Springer Science and Business Media. © Springer Science and Business Media, Inc. 2006.
| No. | Study type | Aspect studied | Broad study question | Audience/stakeholders primarily interested in results |
|---|---|---|---|---|
| 1 | Needs assessment | Need for the resource | What is the problem? | Resource developers, funders of the resource |
| 2 | Design validation | Design and development process | Is the development method in accord with accepted processes? | Funders of the resource; professional and governmental certification agencies |
| 3 | Structure validation | Resource static structure | Is the resource appropriately designed to function as intended? | Professional indemnity insurers, resource developers, professional and governmental certification agencies |
| 4 | Usability test | Resource dynamic usability and function | Can intended users navigate the resource so it carries out intended functions? | Resource developers, users |
| 5 | Laboratory function study | Resource dynamic usability and function | Does the resource have the potential to be beneficial? | Resource developers, funders, users, academic community |
| 6 | Field function study | Resource dynamic usability and function | Does the resource have the potential to be beneficial in the real world? | Resource developers, funders, users |
| 7 | Laboratory user effect study | Resource effect and impact | Is the resource likely to change behavior? | Resource developers and funders, users |
| 8 | Field user effect study | Resource effect and impact | Does the resource change actual user behavior in ways that are positive? | Resource users and their clients, resource purchasers and funders |
| 9 | Problem impact study | Resource effect and impact | Does the resource have a positive impact on the original problem? | The universe of stakeholders |
Figure 1. Literature review process and results.
Figure 2. Measurement indicators in all included studies.
Reliability indicators
| Indicator | Instances |
|---|---|
| Inter-rater Cohen’s kappa | 28 |
| Inter-rater percentage | 8 |
| Test-retest | 1 |
| Intraclass correlation coefficient | 2 |
| Cronbach’s alpha | 5 |
| Claimed (no measurement specified) | 6 |
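Cohen's kappa, the most frequent reliability indicator in the table above, corrects raw inter-rater percentage agreement for the agreement expected by chance. As an illustration only (this sketch is not from the study and the function name is our own), it can be computed for two raters labeling the same items as:

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the raters' marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Unlike simple inter-rater percentage agreement (the second indicator in the table), kappa can be low even when raw agreement is high, if one label dominates both raters' outputs.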
Valid measures by domain
| Primary Category | User Measures | Patient Health | Process of Care | IT System | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| Secondary Category | Physician knowledge, attitudes, or beliefs | Physician decisions or diagnostic/therapeutic accuracy | Physician satisfaction or perceptions | Laboratory | Clinical measure or outcome | Patient-reported outcome | Patient safety | Patient-reported experience | Usability/usefulness | Total |
| Validity Measured Elsewhere | 1a | 1e | 15 | 30 | 4 | 10 | 2c | | | |
| Validity Measured in Study | 2d | 1b | 1 | 1 | | | | | | |
a–e Measures that were not categorized as patient health outcomes or process of care measures.
Reuse indicators
| Reuse Artefact | No. of Studies |
|---|---|
| Modified or unvalidated instrument | 23 |
| Methodology (all or part) | 17 |
| Measurement | 6 |
| Categorization | 8 |
| Guideline/protocol | 3 |
| Criteria | 8 |
| Definition | 3 |
Figure 3. Distribution of study types (all included studies, n = 391).
Figure 4. Distribution of study types by measurement indicators.