| Literature DB >> 26343669 |
Muhammad Afzal, Maqbool Hussain, Taqdir Ali, Jamil Hussain, Wajahat Ali Khan, Sungyoung Lee, Byeong Ho Kang.
Abstract
Finding appropriate evidence to support clinical practices is always challenging, and constructing a query to retrieve such evidence is a fundamental step. Typically, evidence is found using manual or semi-automatic methods, which are time-consuming and make it difficult to construct complex knowledge-based queries. To overcome this difficulty, we utilized the knowledge base (KB) of the clinical decision support system (CDSS), which has the potential to provide sufficient contextual information. To construct knowledge-based complex queries automatically, we designed methods that parse the rule structure in the KB of the CDSS to determine an executable path and extract terms by parsing the control structures and logical connectives used in the logic. The automatically constructed knowledge-based complex queries were executed on the PubMed search service to evaluate the reduction in retrieved citations and their relevance. With the knowledge-based query construction approach, the average number of citations was reduced from 56,249 to 330, and relevance increased from 1 term to 6 terms on average. Based on feedback collected from clinicians, the ability to automatically retrieve relevant evidence maximizes their efficiency in terms of time. This approach is generally useful in evidence-based medicine, especially in ambient assisted living environments where automation is highly important.
Keywords: Arden Syntax; CDSS; automated query construction; knowledge-based queries; medical logic modules
Year: 2015 PMID: 26343669 PMCID: PMC4610474 DOI: 10.3390/s150921294
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Evidence-supported CDSS recommendation service for chronic disease patients in an ambient assisted home care environment.
Figure 2. Sample MLM for oral cavity cancer with highlighted “logic” slot in the knowledge category.
Figure 3. Knowledge-based query construction using CDSS rules.
Control Structure Parsing Examples.
| Example | Control Structure | Query Construction |
|---|---|---|
| A | IF (C = “v1”) THEN<br>D = “d1”<br>Output: “d1 is recommended”<br>END IF | |
| B | IF (C = “v1”) THEN<br>D = “d1”<br>Output: “d1 is recommended”<br>ELSE<br>D = “d2”<br>Output: “d2 is recommended”<br>END IF | For CDSS output “d1 is recommended”:<br>For CDSS output “d2 is recommended”:<br>Where “!” represents the negation (not). |
| C | IF (C = “v1”) THEN<br>D = “d1”<br>Output: “d1 is recommended”<br>ELSEIF (C in (“v2”, “v3”)) THEN<br>D = “d2”<br>Output: “d2 is recommended”<br>ELSEIF (C = “v3”) THEN<br>D = “d3”<br>Output: “d3 is recommended”<br>ELSE<br>D = “d4”<br>Output: “d4 is recommended”<br>END IF | For CDSS output “d1 is recommended”:<br>For CDSS output “d2 is recommended”:<br>For CDSS output “d3 is recommended”:<br>For CDSS output “d4 is recommended”: |
| D | IF (C1 = “v1”) THEN<br>IF (C2 != “v2”) THEN<br>D = “d1”<br>Output: “d1 is recommended”<br>END IF<br>END IF | |
| E | Switch C<br>case v1<br>D = “d1”<br>Output: “d1 is recommended”<br>case v2<br>D = “d2”<br>Output: “d2 is recommended”<br>EndSwitch | For CDSS output “d1 is recommended”:<br>For CDSS output “d2 is recommended”: |
| F | IF (C1 = “v1”) THEN<br>Call subMLM1<br>END IF<br>subMLM1:<br>IF (C2 = “v2”) THEN<br>D = “d2”<br>Output: “d2 is recommended”<br>END IF | |
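The term extraction suggested by Examples A–F can be sketched for the two-branch case (Example B). This is a hypothetical illustration, not the authors' parser: on the IF branch the condition is kept as-is, and on the ELSE branch it is negated with “!”, as noted in the table.

```python
# Minimal sketch (hypothetical, not the authors' parser): map each CDSS
# output of a two-branch rule to the condition path that produces it,
# negating the IF condition on the ELSE branch.

def parse_if_else(condition, then_output, else_output):
    """Return {output: [condition terms on the path to that output]}."""
    return {
        then_output: [condition],            # IF branch: condition holds
        else_output: ["!(%s)" % condition],  # ELSE branch: condition negated
    }

paths = parse_if_else('C = "v1"', "d1 is recommended", "d2 is recommended")
print(paths["d1 is recommended"])  # ['C = "v1"']
print(paths["d2 is recommended"])  # ['!(C = "v1")']
```

Nested structures such as Examples C and D would apply the same step recursively, accumulating one condition term per enclosing branch.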
Formal representation of query construction.
| c : condition concept(s) of a rule |
| d : decision concept(s) of a rule |
| q : constructed query |
| decisionPath : c → d |
| (decisionPath is a function mapping condition concepts into decision concepts) |
| R = a rule in the knowledge base |
| KB = the set of rules R |
| er = executed rule |
| executedDecisionPath : decisionPath of the executed rule er |
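The notions above can be modeled concretely. The following is a minimal sketch (class and function names are illustrative, not the authors' implementation) in which the query for an executed rule is the conjunction of the concepts on its decision path:

```python
# Illustrative model of the formal notions above (names are assumptions):
# a rule maps condition concepts (c) to a decision concept (d), and the
# query q for the executed rule (er) joins the path's concepts with AND.

from dataclasses import dataclass

@dataclass
class Rule:
    conditions: list  # condition concepts c
    decision: str     # decision concept d

def executed_decision_path(kb, facts):
    """Return the first rule whose conditions all hold for the given facts."""
    for rule in kb:
        if all(cond in facts for cond in rule.conditions):
            return rule
    return None

def build_query(rule):
    """q = conjunction of the concepts on the executed decision path."""
    return " AND ".join(rule.conditions + [rule.decision])

kb = [Rule(["Palliative"], "Radiotherapy")]
er = executed_decision_path(kb, {"Palliative"})
print(build_query(er))  # Palliative AND Radiotherapy
```

The example reproduces the shape of Q1 in the query table below: condition concept plus decision concept, joined by AND.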
Figure 4. Sequence diagram of eUtils API functions (ePost, eSearch, and eFetch) used as part of the function for creating the meta-data associated with evidence.
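The eSearch and eFetch calls shown in the sequence diagram follow NCBI's public E-utilities URL pattern. A sketch of constructing those request URLs follows (parameter values are illustrative; issuing the requests requires an HTTP client):

```python
# Sketch of NCBI E-utilities request URLs as used in the sequence diagram.
# Only URL construction is shown; parameter values are illustrative.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(query, retmax=20):
    """eSearch: find PMIDs matching a constructed query."""
    return EUTILS + "/esearch.fcgi?" + urlencode(
        {"db": "pubmed", "term": query, "retmax": retmax, "retmode": "json"})

def efetch_url(pmids):
    """eFetch: retrieve citation records for the matched PMIDs."""
    return EUTILS + "/efetch.fcgi?" + urlencode(
        {"db": "pubmed", "id": ",".join(pmids), "retmode": "xml"})

print(esearch_url("Palliative AND Radiotherapy"))
print(efetch_url(["26343669"]))
```

ePost can be used first to upload a large PMID list to the NCBI history server, after which eSearch and eFetch operate on the stored set rather than passing IDs inline.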
Figure 5. Selected MLMs for the oral cavity site of head and neck cancer, embodied with logic slots.
Contents of queries constructed from the executed paths in the KB for CDSS outputs.
| No | CDSS Output | MLM Reference | Constructed Queries |
|---|---|---|---|
| Q1 | Radiotherapy | RootMLM | Palliative AND Radiotherapy |
| Q2 | Induction chemotherapy | RootMLM | Radical AND Induction chemotherapy |
| Q3 | Surgery, radiotherapy | SubMLM1 | Radical AND Chemotherapy AND ((T1 OR T2) AND (N1)) AND Surgery AND Radiotherapy |
| Q4 | Surgery, combined chemotherapy radiation therapy | SubMLM1 | Radical AND Chemotherapy AND ((T1 OR T2) AND (N1)) AND Combined chemotherapy radiation therapy |
| Q5 | Surgery | SubMLM2 | Radical AND Chemotherapy AND ((T1 OR T2) AND (N0)) AND Surgery |
| Q6 | Radiotherapy, follow-up | SubMLM2 | Radical AND Chemotherapy AND ((T1 OR T2) AND (N0)) AND Clinical stage I AND Radiotherapy AND Follow-up |
| Q7 | Radiotherapy | SubMLM2 | Radical AND Chemotherapy AND ((T1 OR T2) AND (N0)) AND Clinical stage II AND Radiotherapy |
| Q8 | Combined chemotherapy radiation therapy | SubMLM3 | Radical AND Chemotherapy AND (T3 AND N1) OR ((T1 OR T2) AND (N2 OR N3)) OR (T3 AND (N1 OR N2 OR N3)) OR (T4) AND (Squamous cell carcinoma OR Small cell carcinoma OR Carcinoma, no subtype) AND Combined chemotherapy radiation therapy |
| Q9 | Surgery, radiotherapy | SubMLM3 | Radical AND Chemotherapy AND (T3 AND N1) OR ((T1 OR T2) AND (N2 OR N3)) OR (T3 AND (N1 OR N2 OR N3)) OR (T4) AND (Adenocarcinoma, no subtype OR Adenoid cystic carcinoma OR Basal cell carcinoma OR Pleomorphic adenoma OR Spindle cell carcinoma OR Ameloblastoma, malignant) NOT (Squamous cell carcinoma OR Small cell carcinoma OR Carcinoma, no subtype) AND Surgery AND Radiotherapy |
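The pattern in these queries, AND across the term groups of an executed path and OR within a group of alternatives, can be sketched with a hypothetical helper (not part of the authors' system):

```python
# Hypothetical helper mirroring the query pattern of the table above:
# AND across the term groups of an executed path, OR within a group.

def stage_group(t_terms, n_terms):
    """Render a TNM staging condition, e.g. ((T1 OR T2) AND (N1))."""
    return "((%s) AND (%s))" % (" OR ".join(t_terms), " OR ".join(n_terms))

def compose_query(terms):
    """Join the executed path's terms with the AND connective."""
    return " AND ".join(terms)

# Reconstructing Q3 from its path terms:
q3 = compose_query(["Radical", "Chemotherapy",
                    stage_group(["T1", "T2"], ["N1"]),
                    "Surgery", "Radiotherapy"])
print(q3)
# Radical AND Chemotherapy AND ((T1 OR T2) AND (N1)) AND Surgery AND Radiotherapy
```

The deeper the executed path (RootMLM to SubMLM3), the more term groups are accumulated, which is what narrows the PubMed retrieval set in Figures 6 and 7.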
Figure 6. Retrieval set reduction with the knowledge-based constructed query compared to a simple query.
Figure 7. The number of citations is reduced, with increased relevance, using the knowledge-based query construction mechanism.
Recall, precision, and F1 measure for knowledge-based queries in comparison to simple queries.
| Query No. | Query Type | Recall (%) | Precision (%) | F1 Measure (%) |
|---|---|---|---|---|
| Q2 | Simple Query | 27.48 | 0.13 | 0.38 |
| | Knowledge-based | 18.32 | 3.44 | 10.33 |
| Q3 | Simple Query | 41.22 | 0.04 | 0.13 |
| | Knowledge-based | 18.32 | 4.94 | 14.81 |
| Q4 | Simple Query | 26.72 | 0.12 | 0.37 |
| | Knowledge-based | 17.56 | 9.62 | 28.87 |
| Q6 | Simple Query | 41.22 | 0.10 | 0.31 |
| | Knowledge-based | 19.08 | 4.36 | 13.07 |
| Q8 | Simple Query | 42.75 | 0.11 | 0.32 |
| | Knowledge-based | 19.08 | 2.67 | 8.02 |
Figure 8. Query result integration with Smart CDSS through KnowledgeButton.
Figure 9. Manual query writing time in minutes for expert and average users.
User satisfaction based on overall impression for each task with the proposed approach (1 = very negative, 5 = very positive).
| Task | Expert 1 | Expert 2 | Expert 3 | Expert 4 | Expert 5 |
|---|---|---|---|---|---|
| Usefulness of approach | 4 | 5 | 3 | 5 | 4 |
| Query content | 5 | 4 | 4 | 3 | 4 |
| Relevance of results | 4 | 3 | 2 | 4 | 5 |
Figure 10. Evidence-supported medication recommendation service workflow represented as a Business Process Model and Notation (BPMN) process model, with the set of activities represented as a pool, created using Enterprise Architect.