Hong Kang, Sicheng Zhou, Bin Yao, Yang Gong.
Abstract
BACKGROUND: Patient falls, the most common safety events resulting in adverse patient outcomes, impose significant costs and have become a great burden on the healthcare community. Current patient fall reporting systems remain at an early stage, far from the ultimate goal of safer healthcare. According to the Kirkpatrick model (reaction, learning, behavior, and results), the key challenge lies in realizing the learning stage, owing to the lack of mechanisms for managing, sharing, and growing knowledge.
Keywords: Information storage and retrieval; Knowledge base; Patient safety
Year: 2018 PMID: 30526567 PMCID: PMC6284264 DOI: 10.1186/s12911-018-0688-5
Source DB: PubMed Journal: BMC Med Inform Decis Mak ISSN: 1472-6947 Impact factor: 2.796
Fig. 1 A rule-based knowledge support strategy for event reporting
Survey for learning effect evaluation
Q1-Q3 are single-choice questions with four scaled choices: 1) fully agree, 2) mostly agree, 3) mostly disagree, and 4) fully disagree, while Q4 is a subjective question. Participants reviewed the materials and completed the survey individually. Fleiss' kappa, a statistical measure of the reliability of agreement among multiple raters, was calculated for the five participants' answers to Q1-Q3. To simplify the calculation, fully agree and mostly agree were treated as agree, while mostly disagree and fully disagree were treated as disagree.
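The collapsed agree/disagree ratings make Fleiss' kappa straightforward to compute by hand. A minimal sketch of the standard formula follows; the example answers are hypothetical illustrations, not the study's actual survey data.

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of items, each a list of category labels
    (one label per rater). All items must have the same number of raters."""
    n_raters = len(ratings[0])
    categories = sorted({c for item in ratings for c in item})
    # n_ij matrix: how many raters put item i into category j
    counts = [[Counter(item)[c] for c in categories] for item in ratings]
    N = len(counts)
    # per-item observed agreement P_i
    P_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    P_bar = sum(P_i) / N
    # chance agreement from the marginal category proportions
    p_j = [sum(row[j] for row in counts) / (N * n_raters)
           for j in range(len(categories))]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Collapse the four scale points to agree/disagree, as in the paper
collapse = lambda s: "agree" if s in (1, 2) else "disagree"
# Hypothetical answers: 6 items rated by 5 participants
answers = [[1, 1, 2, 2, 1], [1, 2, 1, 1, 3], [3, 4, 3, 3, 4],
           [1, 2, 1, 4, 3], [4, 3, 3, 4, 1], [2, 1, 1, 1, 2]]
kappa = fleiss_kappa([[collapse(s) for s in item] for item in answers])  # ≈ 0.51
```

Collapsing to two categories reduces the rating matrix to two columns per item, which is why the paper's simplification makes the calculation tractable by inspection.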
A summary of the hierarchical contributing factor list for fall event reports
| Index | Categories | Num. of Terms | Max Depth |
|---|---|---|---|
| 1 | Communication, other than at the time of handover/handoff | 4 | 2 |
| 2 | Handover/handoff | 1 | 1 |
| 3 | Data issues (e.g., availability, accuracy) | 4 | 2 |
| 4 | Environment (e.g., culture of safety, physical surroundings) | 18 | 3 |
| 5 | Human factors (e.g., fatigue, stress, inattention, cognitive factors) | 87 | 5 |
| 6 | Policies and procedures, including clinical protocols (e.g., absence, adequacy, clarity) | 6 | 3 |
| 7 | Staff qualifications (e.g., competence, training) | 3 | 2 |
| 8 | Supervision/support (e.g., clinical, managerial) | 3 | 2 |
| 9 | Health Information Technology (HIT) | 8 | 2 |
| 10 | Medications | 39 | 5 |
| 11 | Consequences | 10 | 3 |
| 12 | Admission/discharge | 3 | 2 |
| 13 | Event location | 5 | 2 |
| 14 | Therapy prior to fall | 4 | 2 |
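One concrete reading of the "Num. of Terms" and "Max Depth" columns is a nested-dictionary hierarchy in which each category counts itself plus every subterm, and depth counts the category as level 1. The sketch below checks that reading against two table rows; "availability" and "accuracy" come from the table, while "timeliness" is a hypothetical placeholder subterm.

```python
# Hypothetical subtree; only the category names and the two summary
# metrics (Num. of Terms, Max Depth) come from the paper's table.
factor_tree = {
    "Handover/handoff": {},
    "Data issues": {"availability": {}, "accuracy": {}, "timeliness": {}},
}

def num_terms(children):
    """A category counts itself plus every nested subterm."""
    return 1 + sum(num_terms(sub) for sub in children.values())

def max_depth(children):
    """Levels of nesting, counting the category itself as level 1."""
    if not children:
        return 1
    return 1 + max(max_depth(sub) for sub in children.values())

summary = {cat: (num_terms(sub), max_depth(sub))
           for cat, sub in factor_tree.items()}
# → {"Handover/handoff": (1, 1), "Data issues": (4, 2)}, matching rows 2 and 3
```

Under this interpretation, the "Human factors" row (87 terms, depth 5) would correspond to a subtree with 86 subterms nested up to four levels below the category.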
Distribution of the scaled scores: results of evaluating the identification rules
| Scaled Score | 1: Fully Agree | 2: Mostly Agree (factor(s) missing) | 3: Mostly Disagree (wrong factor(s) identified) | 4: Fully Disagree |
|---|---|---|---|---|
| Reports (n = 362) | 349 (96.4%) | 7 (1.9%) | 6 (1.7%) | 0 |
Fig. 2 A screenshot of similar reports sorted by similarity score in descending order. When a report is selected as a query (scenario 1), the top 10 similar reports are displayed on the left side of the page. Clicking any of the 10 similar reports shows its details on the right side, presented side by side with the query report. All contributing factors are identified and listed under the description sections. Clicking any factor entry highlights, in red, the keywords within the description that contributed to the identification
Fig. 3 A screenshot of customized contributing factors. Rather than using an event report as a query, the user can directly select contributing factors to launch the similarity search. The user is free to include or exclude any of the 195 factors in "My Factors" and launch a similarity search. The similarity scores are calculated with Eq. 2, and the results are displayed as in Fig. 2
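Eq. 2 itself is not reproduced in this excerpt, so as a stand-in, the sketch below ranks stored reports against a user-selected factor set ("My Factors") by Jaccard overlap of contributing factors. The report IDs and factor names are hypothetical, and Jaccard similarity is an assumption, not the paper's actual scoring formula.

```python
def jaccard(a, b):
    """Overlap of two factor sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def top_similar(my_factors, reports, k=10):
    """Return up to k (report_id, score) pairs, highest score first."""
    scored = [(rid, jaccard(my_factors, factors))
              for rid, factors in reports.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]

reports = {  # hypothetical report -> identified contributing factors
    "R1": {"fatigue", "wet floor", "sedative use"},
    "R2": {"wet floor", "poor lighting"},
    "R3": {"fatigue", "inattention"},
}
ranked = top_similar({"fatigue", "wet floor"}, reports)  # R1 ranks first
```

A set-overlap score of this kind naturally supports the include/exclude workflow described above: toggling a factor in "My Factors" simply changes the query set before re-ranking.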
Fig. 4 A screenshot of the solution recommendation
The results of the survey-based evaluation of the knowledge support strategy (mean scaled scores)

| Question | Scenario 1: Report 1 | Report 2 | Report 3 | Report 4 | Report 5 | Scenario 2 | Fleiss' kappa | p value |
|---|---|---|---|---|---|---|---|---|
| Q1 (assess factors) | 1.4 | 1.4 | 1.6 | 1.6 | 1.6 | 1.2 | 0.68 | < 0.01 |
| Q2 (assess similarity) | 2.0 | 1.8 | 1.6 | 1.8 | 2.0 | 2.2 | 0.61 | < 0.01 |
| Q3 (assess learning) | 1.4 | 1.4 | 1.8 | 1.4 | 1.8 | 1.6 | 0.66 | < 0.01 |
*Please refer to the method section for the content of each question
*Scaled scores: 1. fully agree; 2. mostly agree; 3. mostly disagree; and 4. fully disagree
*A Fleiss' kappa between 0.61 and 0.80 indicates substantial agreement among multiple raters