Factors Affecting Accuracy of Data Abstracted from Medical Records
Meredith N Zozus, Carl Pieper, Constance M Johnson, Todd R Johnson, Amy Franklin, Jack Smith, Jiajie Zhang.
Abstract
OBJECTIVE: Medical record abstraction (MRA) is often cited as a significant source of error in research data, yet MRA methodology has rarely been the subject of investigation. Lack of a common framework has hindered application of the extant literature in practice, and, until now, there were no evidence-based guidelines for ensuring data quality in MRA. We aimed to identify the factors affecting the accuracy of data abstracted from medical records and to generate a framework for data quality assurance and control in MRA.
Year: 2015 PMID: 26484762 PMCID: PMC4615628 DOI: 10.1371/journal.pone.0138649
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Fig 1. Overview of the research process.
Fig 2. Disposition of screened articles.
Fig 3. The MRA system.
Factors identified in Delphi Round 1 that were not in the literature.
| Factor | Number of mentions |
|---|---|
| Abstractor credentials | 10 |
| Access to charts | 6 |
| Interruptions | 6 |
| Complete and accurate medical record | 4 |
| Availability of abstraction tools | 4 |
| Adequate time for abstraction tasks | 4 |
| Complexity of the study or project | 3 |
| Supportive collegial relationships with physicians, nurses and medical records colleagues | 3 |
| Abstractor (human) error | 3 |
* Not complete semantic matches at the detail level at which they were mentioned but conceptually part of higher-level factors or related to factors mentioned in the literature.
† Not mentioned at all in the articles included in the systematic review.
‡ Ultimately not upheld in Delphi Round 4.
Factors identified in Delphi Round 1 that were not in the literature top 26%.
| Factor | Number of mentions in Delphi / literature |
|---|---|
| Limited time | 5 / 1 |
| Lack of training | 4 / 2 |
| Same information found in multiple places in the medical record (opportunity for conflicting information) | 3 / 3 |
| Incomplete review of the medical record | 3 / 1 |
| Volume of information in the medical record | 3 / 1 |
* Found in the literature top 26% but with opposite valence.
† Ultimately not upheld in Delphi Round 4.
Comparison of factors mentioned in the Delphi top 27% and the literature top 26%.
| Delphi top 27% | Literature top 26%: Mentioned | Literature top 26%: Not mentioned | Total |
|---|---|---|---|
| Mentioned | 11 | 14 | 25 |
| Not mentioned | 64 | _ | 64 |
| Total | 75 | 14 | 89 |
Fig 4. Importance versus reliability of factors.
Refuted and uncertain factors.
| Refuted factor | Clinical research mean (σ) | Registry / QI mean (σ) | Overall mean (σ) | Number of comments (%) mentioning mitigating factors |
|---|---|---|---|---|
| Necessity of RN credential | 3.2 | | | 13 (62%) |
| Blinding abstractors to study aims | | | | 10 (20%) |
| Centralized abstraction | 3.2 (0.97) | | 3.0 (1.1) | 8 (63%) |
| High study/project complexity | 3.8 (1.07) | 3.3 | | 11 (64%) |
| High volume of data in medical records | 3.2 (1.09) | 3.1 | | 14 (29%) |
| Care provided by multiple providers/facilities | 3.6 (1.06) | 3.3 | | 12 (33%) |
| Presence of multiple diagnoses/procedures | | | | 8 (50%) |
| Abstractors with different levels of experience | 3.2 (1.03) | 3.6 | 3.4 (1.15) | 16 (44%) |
| Abstracting from narrative text | 4.2 (0.90) | 3.3 | 3.8 (1.2) | 6 (50%) |
| Coding data while abstracting | 3.3 (0.77) | 3.2 | 3.2 (1.12) | 10 (30%) |
| Same information found in multiple places | 3.9 (1.11) | 3.7 | 3.8 (1.18) | 8 (25%) |
Marked mean values in the table are those rated lower than neutral. Marked standard deviation values are those above the 1.2 standard-deviation cut-off.
* Comments mentioning a mitigating factor as well as justification for participants’ response were split into two. The factor “Abstractors with different levels of experience” had two comments split; remaining marked factors had one comment split.
Abbreviations: QI, quality improvement; RN, registered nurse.
Framework for increasing data accuracy in MRA.
| Abstractor qualification |
| •Abstractor familiarity with how data are recorded in the medical record |
| •Abstractor experience in the clinical area for which he or she is abstracting |
| •Abstractor experience abstracting in any clinical area |
| •Abstractor having a clinical credential in the area in which he or she is abstracting |
| •Abstractor having passed a competency test |
| Communication with abstractors |
| •Providing feedback to abstractors, e.g., from periodic review of cases |
| •Ongoing communication with abstractors, e.g., opportunity to discuss difficult cases |
| Abstractor project-specific training |
| •Training abstractors |
| •Ongoing abstractor training |
| •Recommended components of training |
| a.) Instruction in the therapeutic area, i.e., clinical specialty, of the study or registry |
| b.) Instruction in how data are collected in the healthcare setting or specifically at participating facilities |
| c.) Including an overview of the study or registry, e.g., purpose, how the data will be used |
| d.) Covering abstraction specifications, i.e., data element definitions, guidelines, and conventions |
| e.) Training abstractors on the data collection or abstraction form |
| f.) Training on proper use of coding systems used |
| g.) Covering the software to be used in the abstraction, i.e., computerized abstraction form |
| h.) Practice exercises with feedback |
| i.) Examples of difficult cases |
| •A supportive and positive relationship with local physicians, nurses, and medical records colleagues |
| •Minimizing interruptions during abstraction |
| •Minimizing time pressure, i.e., limited time in which to abstract |
| •Easy access to medical records |
| •Facilities having training for clinicians in better documentation practices |
| Abstraction process |
| •Abstracting data during the patient encounter, i.e., while the patient is in the hospital |
| •Using data collection forms, i.e., abstraction forms |
| •Conducting a pilot study of the abstraction |
| •Reviewing the entire or relevant parts/time period of the medical record before abstracting |
| Applying methods that decrease human error |
| •Availability of abstraction tools, e.g., guidelines, conventions, definitions |
| Abstraction guidelines |
| •Standardizing the abstraction process by explicit criteria, i.e., rules or conventions for each data element |
| •Specifying the location in the medical record where the data element is to be found |
| •Prioritizing the locations in the medical record where data elements may be found |
| •Documenting inclusion and exclusion criteria defining which cases should be in the study or registry |
| •Documenting rules for dealing with missing information |
| •An available glossary with synonyms and abbreviations |
| Data element definition |
| •Defining each data element |
| •Specifying categories to denote unknown information, i.e., null flavors |
| •Denoting and prioritizing critical data elements |
| •Providing conventions describing handling of common problems, e.g., multiple values, missing information, for each data element |
| •Valid values for categorical data elements, i.e., data elements with check boxes or pick lists, should cover all possible options and not overlap |
| •Choosing valid values that resolve minor discrepancies, i.e., broad categories |
| •Defining and collecting data elements as structured data rather than free text |
| •Defining data elements as raw data, i.e., data that are abstracted directly from medical records rather than those requiring mapping, interpretation, calculations, converting units, or scoring questionnaires |
| •Assuring that data elements are routinely documented in medical records, i.e., documented in standard care |
| •Avoiding subjective data elements, i.e., those requiring judgment, or constraining the subjectivity through definition to make the data element more objective, i.e., more likely to be abstracted consistently |
| •Identifying data elements abstracted less accurately, e.g., pain onset time, symptoms, procedure onset time, and constraining them through definition, guidelines, or examples, or avoiding them altogether |
| •Using data elements that are the original recording rather than those that may be transcribed from other places, e.g., using original lab values from the lab rather than those transcribed into a discharge summary |
| Data collection or abstraction form |
| •Using a well-designed data collection form |
| •Using the same form, i.e., a standard form, at multiple data collection sites |
| •Listing codes on the data collection form for data elements where the abstractor assigns a code during abstraction |
| •Ordering questions on the data collection form following the order in the medical record |
| Computer use in abstraction |
| •Entering data into a computer as the data are abstracted |
| •Use of computerized error checks during data entry to notify the abstractor about missing, out-of-range, or illogical values |
| •Use of computerized error checks after data entry for missing, out-of-range, or illogical values |
| •Minimizing transcription steps, e.g., transcribing data from the data collection form to the database |
| •Use of independent data sources to verify data, e.g., checking data against another source of the same data |
| •Error and inconsistency in the medical record |
| •The practice of providers not documenting results or assessments that are “normal,” i.e., only charting pertinent negatives |
| •Missing information, i.e., incompleteness of the medical record |
| •Missing charts, i.e., instances when the medical record is not available |
| •Conflicting information in the medical record |
| •Illegible information in the medical record |
| •Uncertainty in the medical record, i.e., statements such as “possible infarction” rather than a firm diagnosis |
| •Variability of documentation practices among clinicians |
| •Variability of assessment skills, i.e., of the clinician examining the patient |
| Aspects of re-abstraction |
| •Re-abstraction of data |
| •Reviewing or re-abstracting a representative selection of cases rather than all of the cases |
| •Independent or external re-abstraction, i.e., re-abstraction by someone other than the initial abstractor |
| •Periodic re-abstraction throughout the project, i.e., every few months or a few times per year |
| •Reviewing re-abstraction results, i.e., discrepancies or difficulty areas, with abstractors |
| •Measurement of inter- or intrarater reliability, i.e., a measure of agreement between two abstractions |
| •Monitoring abstractor performance, i.e., through re-abstraction, manual review, or computerized data checks |
| •Data quality control activities, e.g., re-abstraction, manual review, or computerized data checks, should be ongoing for the duration of the project rather than occur just once during the study or registry |
| •Clinical review of abstracted cases, i.e., a clinician or senior abstractor looking through the abstracted data screen by screen or page by page |
| •Visual review, i.e., a person looking through the abstracted data screen by screen or page by page, of abstracted cases |
| Sharing quality control information with participating facilities |
| •Using re-abstracted data to improve institutional data accuracy, i.e., as feedback to the healthcare facility medical records or data quality program |
| •External audits of abstraction at local facilities |
| •Returning data discrepancies identified during re-abstraction or other data quality control activities to local facilities for correction |
| •Providing data quality reports to sites |
| •Reporting of data quality by facility |
* The opposite-valence factors “Lack of abstractor training decreases accuracy of abstracted data,” “An incomplete review of the medical record (e.g., not reading all pages from the required time period) decreases the accuracy of abstracted data,” “Data element definitions that lack suggestions for where in the chart to find data values,” “Data abstracted from a complete medical record are more accurate than those abstracted from medical records with omissions,” “Abstractor (human) error is a factor in decreasing the accuracy of abstracted data,” and “Data abstracted from a medical record that is free from error are more accurate than those abstracted from a medical record containing errors” were omitted from the framework.
† The factors “Misuse of the coding system” and “Misunderstanding the coding system” were combined and moved to the training category.
‡ The original text “Abstractor human error” was restated to create an actionable item.
§ “Data elements requiring the abstractor to do calculations (e.g., convert units or score questionnaires) are less accurate than those that do not” and “Data elements that are abstracted directly from medical records are more accurate than those requiring mapping or interpretation” were combined.
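The framework's “Computer use in abstraction” items call for computerized error checks that flag missing, out-of-range, or illogical values during and after data entry. A minimal sketch of such checks follows; the field names, ranges, and rules are invented for illustration and are not taken from the study.

```python
# Hypothetical sketch of the computerized error checks the framework
# recommends; fields and limits are illustrative, not from the paper.

PLAUSIBLE_RANGES = {
    "systolic_bp": (50, 300),  # mmHg, illustrative limits
    "age_years": (0, 120),
}

def check_record(record):
    """Return human-readable problems found in one abstracted record."""
    problems = []
    # Missing and out-of-range checks.
    for field, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is None:
            problems.append(f"{field}: missing")
        elif not lo <= value <= hi:
            problems.append(f"{field}: {value} out of range [{lo}, {hi}]")
    # Logical (cross-field) check: discharge cannot precede admission.
    admit, discharge = record.get("admit_day"), record.get("discharge_day")
    if admit is not None and discharge is not None and discharge < admit:
        problems.append("discharge_day precedes admit_day")
    return problems

# Flags three problems: out-of-range BP, missing age, discharge before admission.
flagged = check_record(
    {"systolic_bp": 320, "age_years": None, "admit_day": 5, "discharge_day": 3}
)
```

In practice such checks run both interactively during entry (notifying the abstractor immediately) and as batch queries after entry, as the two framework items distinguish.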
Frequency of reporting MRA methods.
| | Clinical studies (36) | Non-clinical studies* (30) |
|---|---|---|
| 1) Data source within the medical record | 0 (0%) | 0 (0%) |
| 2) Abstraction methods and tools | 18 (50%) | 22 (73%) |
| 3) Abstraction environment | 0 (0%) | 0 (0%) |
| 4) Abstraction human resources | 15 (42%) | 19 (63%) |
Values are presented as number of studies reporting (%).
* Category includes validation of administrative data, performance measures, or indicators (18); data quality assessment (11); and questionnaire validation (1).
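The framework's quality-control items recommend measuring inter- or intrarater reliability between an initial abstraction and an independent re-abstraction. Cohen's kappa is one standard agreement measure for categorical data; the sketch below uses invented case data for illustration and is not drawn from the study.

```python
# Illustrative inter-rater reliability calculation (Cohen's kappa) for
# comparing an abstraction against an independent re-abstraction.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' categorical ratings of the same cases."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    # Agreement expected by chance, from each rater's marginal frequencies.
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    if expected == 1:
        return 1.0  # both raters used a single identical category throughout
    return (observed - expected) / (1 - expected)

# Two abstractions of the same eight cases (e.g., "yes"/"no" for a diagnosis).
first  = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
second = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes"]
kappa = cohens_kappa(first, second)  # 6/8 observed agreement, 0.5 by chance
```

Kappa corrects raw percent agreement for agreement expected by chance, which is why the framework recommends it over simple match rates when reviewing re-abstracted samples.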