| Literature DB >> 35990265 |
Dora Praczko, Amanda K Tinkle, Crystal R Arkenberg, Robyn L McClelland, Kate E Creevy, M Katherine Tolbert, Brian G Barnett, Lucy Chou, Jeremy Evans, Kellyn E McNulty, Jonathan M Levine.
Abstract
Here we describe the development and evaluation of a survey instrument to assess the research suitability of veterinary electronic medical records (EMRs) through two studies conducted as part of the Dog Aging Project (DAP). In study 1, four reviewers used the instrument to score a total of 218 records in an overlapping matrix of pairs to assess inter-rater agreement with respect to appropriate format (qualification), identification match (verification), and record quality. Based on the moderate inter-rater agreement with respect to verification and the relatively large number of records that were incorrectly rejected, the instrument was modified and more specific instructions were provided. In study 2, the modified instrument was completed by four reviewers to score 100 different EMRs. The survey scores were compared to a gold standard of board-certified specialist review to determine receiver operating characteristic (ROC) curve statistics. The refined survey had substantial inter-rater agreement across most qualification and verification questions. The cut-off value identified had a sensitivity of 95% and 96% (for reviewers 1 and 2, respectively) and a specificity of 82% and 91% (for reviewers 1 and 2, respectively) for predicting gold-standard acceptance or rejection of the record. Using only the qualification and verification questions within the instrument (as opposed to full scoring) minimally impacted sensitivity and specificity and resulted in substantial time savings in the review process.
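The cut-off evaluation described in the abstract can be sketched as follows. This is an illustrative implementation, not the study's code: the function name, example scores, and cutoff value are assumptions; only the idea of comparing instrument scores against gold-standard specialist acceptance is from the source.

```python
# Hypothetical sketch: sensitivity/specificity of a score cut-off for
# predicting gold-standard acceptance of a record. All data are illustrative.

def sens_spec(scores, gold, cutoff):
    """Sensitivity and specificity of the rule 'accept if score >= cutoff',
    where `gold` holds the gold-standard accept (True) / reject (False) labels."""
    tp = sum(1 for s, g in zip(scores, gold) if s >= cutoff and g)
    fn = sum(1 for s, g in zip(scores, gold) if s < cutoff and g)
    tn = sum(1 for s, g in zip(scores, gold) if s < cutoff and not g)
    fp = sum(1 for s, g in zip(scores, gold) if s >= cutoff and not g)
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping `cutoff` over the observed score range and plotting sensitivity against 1 − specificity yields the ROC curve from which an operating point is chosen.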
Keywords: clinical trial; electronic medical record; inter-rater agreement; point score; verification
Year: 2022 PMID: 35990265 PMCID: PMC9389294 DOI: 10.3389/fvets.2022.941036
Source DB: PubMed Journal: Front Vet Sci ISSN: 2297-1769
Inter-rater agreement amongst four individuals for each qualification question and record verification in study 2.
| Question | n | Agreement (yes) | Agreement (no) | Disagreement | % agreement | Kappa |
|---|---|---|---|---|---|---|
| Primary clinic record digital (Qualification) | 100 | 83 | 13 | 4 | 96% | 0.84 |
| Correct file format (Qualification) | 100 | 92 | 8 | 0 | 100% | 1 |
| Record legible (Qualification) | 100 | 84 | 13 | 3 | 97% | 0.88 |
| Record met all verification criteria | 100 | 79 | 14 | 7 | 93% | 0.76 |
| Dog seen at clinic within past 2 years (Qualification) | 100 | 83 | 7 | 10 | 90% | 0.53 |
| Include record? | 100 | 75 | 20 | 5 | 95% | 0.86 |
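Agreement statistics like those tabulated above can be computed from each record's per-question answers. The sketch below implements Fleiss' kappa for four raters making yes/no judgments; this is a standard multi-rater statistic and an assumption here, since the record does not state which kappa variant the study used.

```python
# Illustrative sketch of Fleiss' kappa for multi-rater binary ratings.
# counts: one row per record, [n_yes, n_no]; each row sums to the rater count.

def fleiss_kappa(counts):
    n = len(counts)            # number of records rated
    k = sum(counts[0])         # raters per record (e.g., 4)
    # mean per-record observed agreement
    p_bar = sum((sum(c * c for c in row) - k) / (k * (k - 1)) for row in counts) / n
    # chance agreement from the marginal category proportions
    p_j = [sum(row[j] for row in counts) / (n * k) for j in range(len(counts[0]))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)
```

Percent agreement in the table corresponds to the share of records on which all four raters answered identically; kappa additionally discounts the agreement expected by chance.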
Figure 1. Scatterplot of 100 medical record scores generated by 4 individuals, grouped into reviewer 1 and reviewer 2 categories, in study 2. Unverified records are assigned a score of zero.