Literature DB >> 35776743

Graphical analysis of guideline adherence to detect systemwide anomalies in HIV diagnostic testing.

Ronald George Hauser1,2, Ankur Bhargava1,3, Cynthia A Brandt1,3, Maggie Chartier4, Marissa M Maier5,6.   

Abstract

BACKGROUND: Analyses of electronic medical databases often compare clinical practice to guideline recommendations. These analyses have a limited ability to simultaneously evaluate many interconnected medical decisions. We aimed to overcome this limitation with an alternative method and apply it to the diagnostic workup of HIV, where misuse can contribute to HIV transmission, delay care, and incur unnecessary costs.
METHODS: We used graph theory to assess patterns of HIV diagnostic testing in a national healthcare system. We modeled the HIV diagnostic testing guidelines as a directed graph. Each node in the graph represented a test, and the edges pointed from one test to the next in chronological order. We then graphed each patient's HIV testing. This set of patient-level graphs was aggregated into a single graph. Finally, we compared the two graphs, the first representing the recommended approach to HIV diagnostic testing and the second representing the observed patterns of HIV testing, to assess for clinical practice deviations.
RESULTS: The HIV diagnostic testing of 1.643 million patients provided 8.790 million HIV diagnostic test results for analysis. Significant deviations from recommended practice were found including the use of HIV resistance tests (n = 3,007) and HIV nucleic acid tests (n = 16,567) instead of the recommended HIV screen.
CONCLUSIONS: We developed a method that modeled a complex medical scenario as a directed graph. When applied to HIV diagnostic testing, we identified deviations in clinical practice from guideline recommendations. The model enabled the identification of intervention targets and prompted systemwide policy changes to enhance HIV detection.

Entities:  

Mesh:

Year:  2022        PMID: 35776743      PMCID: PMC9249187          DOI: 10.1371/journal.pone.0270394

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Observational medical databases allow healthcare systems to electronically measure how well the care of patients conforms to clinical guidelines [1]. The retrospective measurement of clinical guideline adherence previously required manual chart review, a time-consuming and subjective process [2]. Electronic reviews (e.g., database queries) of guideline adherence, in contrast, generally take less time and can be easily modified, repeatedly executed, and scaled to larger sample sizes with minimal additional effort [3-5].
A limitation of electronic reviews, as opposed to manual reviews, is that they often evaluate less complex medical scenarios [6]. Diverse technologies have attempted to address this limitation. Natural language processing (NLP) can convert critical data elements and context-specific medical decisions into a structured form [7, 8]. The GuideLine Interchange Format (GLIF), the Guideline Elements Model (GEM), and other projects express complex clinical guidelines in a computable format [9-12]. Even with these tools, the analysis of complex medical scenarios continues to present a challenge.
Decision pathways are a method to diagram a complex medical scenario, such as the Centers for Disease Control and Prevention's (CDC) HIV Diagnostic Testing Guidelines, which recommend a hierarchical sequence of testing to diagnose HIV [13]. Rather than assess adherence to each decision in the hierarchy, authors typically review a single decision point. For example, Cane et al. reviewed HIV resistance testing in patients with a low HIV viral load [14]. Improved analysis methods would promote the review of an entire medical scenario, rather than a single decision, for adherence to medical guidelines.
Graph theory provides a mathematical construct capable of modeling complex decision pathways [9, 15]. A graph, according to graph theory, consists of nodes, commonly represented as circles, connected by edges, commonly represented as lines. Graphs in which the edges denote a path to be followed are termed directed graphs, and their edges are drawn as arrows. Attributes of the graph can represent additional information: a line's style can represent adherence or nonadherence to current guidelines (e.g., solid line for adherence, dashed line for nonadherence), and its thickness can represent the utilization frequency of an edge.
To facilitate the use of observational medical databases to evaluate guideline adherence in a complex medical scenario, we introduce a method involving graph theory. Using graph theory, we created a model of the CDC's HIV Diagnostic Testing Guidelines to evaluate the adherence of clinicians to HIV diagnostic testing recommendations provided by the CDC, identify intervention targets, and suggest an appropriate intervention strategy.

Methods

We developed a method to assess adherence to HIV testing guidelines in a large healthcare system by leveraging historical electronic health record (EHR) data. We assessed the CDC's HIV Diagnostic Testing Guidelines by modeling them as a directed graph. For each patient we created a directed graph of their HIV testing. A patient's graph begins at the start node and travels along the graph's edges to reach the node of the next test performed in chronological order. The set of patient-level graphs was aggregated into a summary graph of all HIV testing sequences performed within our healthcare system. Finally, we compared these two graphs, the first representing the CDC's recommended approach to HIV diagnostic testing and the second representing the testing patterns we found in our healthcare system. The comparison allowed us to assess for deviations from the recommended guidelines. The Supplement and online code repository contain more technical details (e.g., step-by-step examples, data structures, algorithms) (S1-S6 Figs in S1 File, S1-S7 Tables in S1 File) [16].
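The per-patient step described above can be sketched in a few lines. The following is a minimal illustration in Python (the study itself used C# and T-SQL); the node labels are made up for the example, not the study's actual test identifiers:

```python
# Minimal sketch of building one patient's testing graph: the chronological
# test sequence is bracketed by synthetic start/end nodes and converted to
# directed edges. Node labels here are illustrative stand-ins.
def patient_edges(tests):
    """Return the (from, to) edges for one patient's ordered test results."""
    path = ["start"] + list(tests) + ["end"]
    return list(zip(path, path[1:]))

edges = patient_edges(["screen+", "confirm+", "resistance"])
# [('start', 'screen+'), ('screen+', 'confirm+'),
#  ('confirm+', 'resistance'), ('resistance', 'end')]
```

A patient with no tests in the study window would contribute only the single edge from start to end, which is why edge counts alone suffice to reconstruct the population's testing behavior.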

Modeling of HIV diagnostic testing guidelines as a graph

Recommended approach to HIV diagnostic tests

The CDC publishes guidelines for HIV Diagnostic Testing [13]. They recommend an HIV-1/2 antigen/antibody combination immunoassay to screen for HIV followed by a confirmation test, an HIV-1/HIV-2 antibody differentiation immunoassay. They recommend following a negative or indeterminate confirmation test with an HIV-1 nucleic acid test (NAT). The CDC no longer recommends the HIV-1 Western blot, a test previously used for diagnosis. Except in unusual circumstances, only patients with confirmed HIV should receive HIV resistance tests or HIV NAT. (See S8 Table in S1 File for additional details about HIV diagnostic tests).

Representation of guidelines as a graph

As a first step to measuring adherence, the guidelines were modeled as a graph (Fig 1A). Nodes in the graph represented either an HIV test or the result of a specific HIV test, such as an HIV resistance test or a positive HIV screen, respectively. Edges in the graph connected tests recommended to be performed in sequence, with edges pointing from one test in the sequence to the next. For example, an edge connects a positive HIV screen node with a positive HIV confirmation node. The graphical model of the guidelines balanced the intent of the guidelines with modifications to simplify the measurement of adherence.
Fig 1

A. Modeling of HIV Diagnostic Testing Guidelines as a Graph. Arrows connect HIV tests that should occur chronologically according to guidelines. The arrow points from the first test to the second. B. Nonadherence to HIV Diagnostic Testing Guidelines as a Graph. Solid lines denote observed adherence to guidelines (See Fig 1A). Dashed lines denote observed nonadherence to guidelines. The graph shows the 11 nonadherent edges with at least 1,000 observations, which comprise 84% (54,149/64,405) of the total nonadherent observations. Line thickness of dashed lines denotes the number of nonadherent tests. -, the test result is negative; +, the test result is positive; Screen, HIV-1/2 antigen/antibody combination immunoassay; confirm, HIV-1/HIV-2 antibody differentiation immunoassay; NAT, HIV-1 nucleic acid test (NAT); Resistance, HIV resistance test.

The graph intentionally contains a start node and an end node. The edge from the start node to the HIV screen node conveys the guideline's recommendation to perform this test first. Similarly, the edges to the end node denote the tests at which the guidelines consider it appropriate to stop the diagnostic workup. After a negative HIV screen, for example, it is appropriate to stop the HIV workup, so an edge exists between the negative HIV screen node and the end node. In contrast, a patient lost to follow-up after a positive HIV screen would not adhere to the guidelines because they would benefit from an HIV confirmation test. The graph conveys this by the absence of an edge between a positive HIV screen and the end node. The inclusion of the start and end nodes allows the graph to contain important edges used later in the analysis. HIV resistance tests are not diagnostic for HIV, but they are still included in the graph. After an HIV diagnosis is confirmed, the guidelines recommend the use of HIV resistance tests [17]. This explains the edge from a positive HIV confirmation to an HIV resistance test.
Resistance tests are not a recommended part of HIV diagnosis, but researchers have documented their inappropriate utilization [14]. To differentiate between appropriate and inappropriate utilization of the HIV resistance test, we included it, along with the edges denoting adherence to the guidelines, in the graph. The graph permits repeated HIV diagnostic workups. For example, a patient with a negative HIV screen may, perhaps after a potential exposure, undergo a second HIV screen, also with a negative result. The graph models this scenario with a "loop," a special type of edge that points back to the node from which it originated. A loop can be found at the negative HIV screen node in the graph (Fig 1A).
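One simple way to encode a guideline graph like Fig 1A is as a set of permitted directed edges. The sketch below is an illustrative simplification, not the paper's exact model: the node names combine test and result and are our own shorthand, and the edge set is an approximation of the guideline structure described above (including the negative-screen loop and the deliberate absence of an edge from a positive screen to the end node):

```python
# Illustrative encoding of a guideline graph as a set of permitted directed
# edges. Node names (e.g., "screen-") are simplified stand-ins for the
# paper's nodes, and the edge set approximates the described guidelines.
GUIDELINE_EDGES = {
    ("start", "screen-"), ("start", "screen+"),   # screening comes first
    ("screen-", "screen-"),                       # loop: repeat screening allowed
    ("screen-", "screen+"),                       # repeat screen turns positive
    ("screen-", "end"),                           # negative screen may end workup
    ("screen+", "confirm+"), ("screen+", "confirm-"),
    ("confirm-", "NAT"),                          # NAT follows a negative confirm
    ("NAT", "end"),
    ("confirm+", "resistance"),                   # resistance test after diagnosis
    ("resistance", "end"),
}

def is_adherent(edge):
    """An observed edge is adherent iff the guideline graph permits it."""
    return edge in GUIDELINE_EDGES

# ("screen+", "end") is deliberately absent: loss to follow-up after a
# positive screen is nonadherent, as described in the text.
```

With this representation, classifying an observed edge reduces to a set-membership test, which keeps the later population-level analysis simple.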

Modeling of patient HIV testing as a graph

Data source

Study data originates from the Veterans Health Administration’s (VA) Corporate Data Warehouse (CDW), a relational database that aggregates medical data, including laboratory results, from 130 separate healthcare facilities [18]. These healthcare facilities are located across the continental United States, Alaska, Hawaii, and the Philippines. Nearly all facilities contributed identifiable data for the full duration of the study. We identified HIV laboratory tests and standardized their results, including checks for manual data entry errors, with previously published methods [19, 20]. The VA healthcare system maintains an HIV registry that contains a list of patients with known HIV, including their date of diagnosis. The HIV registry helped define the study population.

Study population

The study population consists of patients who underwent HIV diagnostic testing at VA facilities between January 2015 and January 2019, inclusive (Fig 2). We excluded patients with known HIV because the diagnostic testing guidelines did not apply to them. For example, a patient with known HIV may appropriately receive a test for HIV resistance at their first visit to our healthcare system.
Fig 2

Determination of study population.

Comparison of patient HIV testing to guidelines

Data analysis

To evaluate the adherence of HIV testing to guidelines, we assembled a directed graph to summarize the sequential HIV testing performed on patients within our healthcare system. We began by arranging the HIV tests performed for each patient in chronological order. Next, the chronologically arranged HIV tests were converted to edges, where each edge began at one test and pointed to the next. Each patient had an edge from the start node to the first HIV test they received. Likewise, each patient had an edge from the last HIV test they received to the end node. The list of edges was aggregated to count the number of occurrences of each edge (e.g., an edge from the start node to the negative HIV screen node occurred 10 times). The output, a table of edges and a count of their occurrences, was converted to a directed graph. To determine adherence to the HIV Diagnostic Testing Guidelines, each edge was classified as adherent or nonadherent to the guidelines. We denoted the adherence of an edge to the guidelines by line style. An edge drawn with a solid line represented adherence, while an edge drawn with a dashed line represented nonadherence. The source code (i.e., C#, T-SQL) to conduct the analysis is available online [16]. Gephi and Inkscape were used to draw the graphs. S7 Fig in S1 File shows the data flow and manipulation.
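The aggregation and classification steps can be sketched as follows. This is a minimal Python illustration under stated assumptions: the guideline edge set and the patient test sequences are invented for the example, and node labels are our own shorthand:

```python
from collections import Counter

# Sketch of the aggregation step: pool every patient's edges into a single
# tally, then flag each edge as adherent or not against a guideline edge
# set. The edge set and patient sequences below are illustrative only.
GUIDELINE_EDGES = {("start", "screen-"), ("screen-", "end"), ("screen-", "screen-")}

def summarize(patients):
    """Map each observed edge to (occurrence count, adherent?)."""
    counts = Counter()
    for tests in patients:
        path = ["start"] + tests + ["end"]
        counts.update(zip(path, path[1:]))
    return {edge: (n, edge in GUIDELINE_EDGES) for edge, n in counts.items()}

summary = summarize([["screen-"], ["screen-", "screen-"], ["resistance"]])
# ("start", "screen-") occurs twice and is adherent; ("start", "resistance")
# occurs once and is flagged nonadherent.
```

The resulting table of edges, counts, and adherence flags corresponds to the data behind Fig 1B, where the flag selects the line style and the count the line thickness.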

Determination of significant findings

To verify examples of nonadherence to the guidelines discovered through the graphical model, we manually reviewed patient charts. Fifty patients were reviewed for each type of nonadherence. Disagreements were adjudicated by HIV subject matter expert consensus. Inter-annotator agreement was reported by Cohen’s kappa.
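Cohen's kappa corrects raw percent agreement for the agreement expected by chance. A generic sketch of the computation (the example ratings are invented for illustration, not the study's chart-review data):

```python
# Generic Cohen's kappa for two annotators' labels (e.g., adherent vs.
# nonadherent). The example ratings below are invented for illustration.
def cohens_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n                   # observed agreement
    cats = set(a) | set(b)
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

kappa = cohens_kappa(["adh", "adh", "non", "non"],
                     ["adh", "non", "non", "non"])
# 75% raw agreement here reduces to kappa = 0.5 after chance correction
```

This is why the study reports both figures: 97% raw agreement and a kappa of 0.78, the latter being the chance-corrected measure.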

Consideration of absolute time

To construct a clinically meaningful model of guideline adherence with a directed graph, an important consideration is absolute time, in addition to the chronological sequence of tests (e.g., test 1 → test 2). By absolute time we mean the elapsed time measured in, for example, seconds or minutes, between one test and another. We modified the graphical model of guideline adherence to account for absolute time and improve its clinical interpretation. These considerations are detailed in the S1 File.

Cost estimation

The Centers for Medicare &amp; Medicaid Services' (CMS) 2018 Clinical Laboratory Fee Schedule provides costs for HIV tests [21]. The cost of nonadherent tests was determined by counting the number of individual tests that the graph identified as nonadherent to the guidelines.

Results

Study population description

Over 3.855 million patients underwent HIV testing in our healthcare system between 1999 and 2019 (Fig 2). The initial years (1999-2014) were used to identify patients with pre-existing HIV as part of the HIV patient registry. The later years (2015-2019) contained 1.643 million patients who underwent HIV diagnostic testing after our healthcare system implemented the 2014 CDC guidelines. The demographics of these patients are included in Table 1. These patients received care, including 8.790 million HIV diagnostic test results, at 130 facilities.
Table 1

Study population demographics.

The population totaled 1,643,149 patients.

Age in years, mean (±SD):   52.2 (±17.2)
Sex (% male):               80.2%
Race (% White):             57.5%
Ethnicity (% non-Hispanic): 75.8%


Assessment of adherence to HIV testing guidelines

The graphical analysis of the study population’s HIV testing produced 331 unique edges (14 adherent, 317 nonadherent). Many of the edges occurred infrequently with only 14 edges (3 adherent, 11 nonadherent) having over 1,000 occurrences (Fig 1B). On review of the nonadherent edges by test (e.g., HIV NAT), we found three recurring scenarios: (1) HIV NAT with or without an HIV screen, (2) HIV resistance testing used in place of an HIV screen, and (3) the performance of a confirmation test after a negative HIV screen. Cohen’s kappa for interrater agreement of adherence to guidelines was 0.78 (97% agreement; 97/100).

Scenario 1: HIV nucleic acid tests (NAT)

On manual chart review of the nonadherent edges involving HIV NATs, we found the majority (86%, 43/50) represented true nonadherence because the HIV NAT was used in combination with, or in lieu of, the HIV screen. Specifically, we did not find evidence sufficient to explain the utilization of HIV NATs, such as (1) the appropriate use of HIV NAT to diagnose acute HIV, (2) HIV NAT as a follow-up to a negative HIV confirmation test, or (3) HIV NAT performed in a patient with existing HIV. Of the 11 nonadherent edges with over 1,000 observations in the graph, 7 involved the HIV NAT. These edges had 23,728 occurrences from orders placed by 9,927 clinicians representing all 130 facilities.

Scenario 2: HIV resistance test used in place of an HIV screen

On review of the nonadherent edges involving HIV resistance tests, we found that all reviewed cases (100%, 50/50) represented true nonadherence. Contrary to the guidelines' recommendation, the patients reviewed did not have existing HIV. The one nonadherent edge with over 1,000 observations in the graph pointed from the start node to the HIV resistance node, indicating an HIV resistance test was the first HIV test performed in these 1,644 patients. A total of 56% (73/130) of facilities had at least one observation of this edge. A few facilities accounted for 61% (1002/1644) of the observations, and the clinicians who placed these orders could be identified within these facilities. When these clinicians were contacted by phone, they erroneously believed the HIV resistance test was the HIV screen and agreed to modify their HIV test ordering practices.

Scenario 3: combined HIV screen and confirmation tests

Review of the medical charts of patients with an HIV screen and confirmation test performed together revealed that many of these patients had received a 5th generation HIV screen. This test, in contrast to the 4th generation test, combines the HIV screen and HIV confirmation into a single test. The analysis classified this scenario as nonadherent because we did not anticipate it prior to our review of patient charts; after review, however, we believe it represents adherence to guidelines.

The cost of guideline nonadherence

We identified the total number of nonadherent HIV tests: 16,567 (86% of 19,264) HIV NAT and 3,007 (100% of 3,007) HIV resistance tests. The CMS cost per test was $94.55 for the HIV NAT and $286.05 for the HIV resistance test. In total, the estimated cost of nonadherent testing was $2.427 million in 2018 United States dollars.
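The total can be reproduced directly from the counts and per-test fees quoted above:

```python
# Reproducing the cost estimate from the figures above: nonadherent test
# counts multiplied by the CMS 2018 fee-schedule prices per test.
nat_cost = 16_567 * 94.55           # nonadherent HIV NATs at $94.55 each
resistance_cost = 3_007 * 286.05    # nonadherent resistance tests at $286.05 each
total = nat_cost + resistance_cost  # ~2,426,562 -> reported as $2.427 million
```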

Discussion

This graphical model uses a directed graph to model guidelines, an idea shared with the GuideLine Interchange Format (GLIF) and others [9, 22]. It also relies heavily on temporal relationships, which other authors have studied in depth in a medical context [23]. Unlike previous graphical methods, our graphical model (1) evaluates guideline adherence within a population instead of at the individual level, (2) assesses nonadherent clinical practice in addition to adherent clinical practice, and (3) quantifies the impact of the observed nonadherence while identifying targets for intervention.
The examples of nonadherence from our analysis convey important lessons for reviews of guideline adherence. First, the strategy to reverse nonadherence may originate from the review itself. Nonadherence limited to relatively few facilities or clinicians suggests a simple intervention (e.g., a phone call) to reverse course, as with the nonadherent HIV resistance tests (Figs 3 and 4). Nonadherence involving the HIV NAT affected a larger proportion of the health system and required a more intensive intervention (e.g., a systemwide campaign). Second, the availability of structured data does not obviate the need for manual chart review. Through manual chart review, we identified facilities utilizing the 5th generation HIV test, which we had not considered in the model.
Fig 3

Reduction of inappropriate HIV resistance tests performed at outlier facilities before and after an intervention (i.e., phone calls).

Total tests performed at outlier facilities before and after an intervention (gray; continuous piecewise linear spline). The red vertical line represents the timing of the intervention.

Fig 4

Reduction of inappropriate HIV resistance tests performed at outlier facilities.

The most common shared edges are shown before (A) and after (B) the intervention.


Strengths of modeling guideline adherence as a graph

Modeling guideline adherence as a graph has multiple strengths. First, the graph is easy to understand. The method does not involve advanced mathematics (e.g., algebra, calculus) [24]. After a brief orientation, clinicians generally appreciate the graph as a model of the expected (Fig 1A) and observed (Fig 1B) patterns of diagnostic testing for HIV. Second, although simple to understand, the graph can represent a complex process. Non-recommended diagnostic tests (e.g., HIV Western blot), in addition to the recommended tests, are included in the graph of expected HIV diagnostic testing. It also conveys inappropriate points to start and stop a diagnostic workup; for example, an HIV diagnostic workup should not end after a positive HIV screen, because an HIV confirmation is needed. The graph also models repeated workups, such that a patient could become HIV positive after a negative HIV screen because an arrow points from the negative HIV screen node to the positive HIV screen node (Fig 1A). Third, the process scales easily to large populations. Our healthcare system is the largest integrated healthcare system in the United States, and we conducted this analysis on a commodity desktop PC. Most applications will not require expensive computer hardware and may be run repeatedly as part of a plan-do-check-act (PDCA) cycle to support iterative quality improvement. Even as it scales to large populations, the method remains highly specific, identifying individual clinicians who performed inappropriate HIV tests in a healthcare system with over 10,000 clinicians.

Limitations of modeling guideline adherence as a graph

We encountered certain difficulties when modeling guideline adherence as a graph. First, we had to balance graph accuracy with interpretability. For example, the consideration of absolute time created additional nodes and edges, increasing the complexity of the graph (see S1 File, Consideration of Absolute Time). Because incorporating absolute time increased the model's accuracy, we tolerated the added complexity. As a second example, we chose to exclude indeterminate HIV confirmation results from the model because they occurred too rarely to provide a benefit. Second, the development of graphical guideline adherence models may require iterative revision. On review of the current model, we became aware of 5th generation HIV testing, which performs the HIV screen and confirmation in a single test. The current HIV diagnostic guidelines describe the sequential performance of an HIV confirmation only after a positive HIV screen. To distinguish between 5th generation HIV testing and the incorrect performance of an HIV confirmation after a negative HIV screen, the model would require a revision (i.e., a new node to represent a 5th generation HIV test). In the future, we plan to build an interface between this data analysis method and a standardized observational data model [25].

Conclusion

We developed a graphical model to determine whether complex medical scenarios adhered to established guidelines. The model applies to patient populations rather than individuals. With an observational database as the input, we demonstrated the method via an electronic, retrospective review of HIV diagnostic testing in over one million patients. The method identified >20,000 occurrences of inappropriate utilization of HIV NATs and HIV resistance tests, at an estimated cost of $2.427 million. This led to systemwide changes in policy to reduce nonadherent orders and enhance detection of HIV. The approach is in no way specific to HIV and may be applied to diverse medical scenarios.

Graphical analysis of guideline adherence to detect systemwide anomalies in HIV diagnostic testing.

(DOCX) Click here for additional data file. 14 Dec 2021
PONE-D-20-27454
Graphical Analysis of Guideline Adherence to Detect Systemwide Anomalies in HIV Diagnostic Testing
PLOS ONE Dear Dr. Hauser, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
 
I would like to sincerely apologize for the delay you have incurred with your submission. It has been exceptionally difficult to secure reviewers to evaluate your study. We have now received two completed reviews; their comments are available below. The reviewers have raised significant scientific concerns about the study that need to be addressed in a revision.
Please revise the manuscript to address all the reviewer's comments in a point-by-point response in order to ensure it is meeting the journal's publication criteria. Please note that the revised manuscript will need to undergo further review, we thus cannot at this point anticipate the outcome of the evaluation process. Please submit your revised manuscript by Jan 27 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Miquel Vall-llosera Camps Senior Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. 
Thank you for including the following ethics statement on the submission details page: 'N/A - This work is exempt from human subjects review' in the ethics statement in the Methods section of your manuscript, please specify whether the medical data used for your study is anonymized before access. 3. Thank you for stating the following financial disclosure: "NO - The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript." At this time, please address the following queries: a) Please clarify the sources of funding (financial or material support) for your study. List the grants or organizations that supported your study, including funding received from your institution. b) State what role the funders took in the study. If the funders had no role in your study, please state: “The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.” c) If any authors received a salary from any of your funders, please state which authors and which funders. d) If you did not receive any funding for this study, please state: “The authors received no specific funding for this work.” Please include your amended statements within your cover letter; we will change the online submission form on your behalf. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? 
Reviewer #1: Yes Reviewer #2: N/A ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: No Reviewer #2: No ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: The authors propose to use graphical models to evaluate if guidelines are followed. This is done by comparing the observed and guideline recommended graphs for HIV diagnostic testing. My comments are: a) I think similar procedures are in place at many places. For example, when my group works with EHRs we perform similar procedures to check for data quality. 
Without user-friendly software being available that allows easy implementation of the method in different settings, the impact of this is unclear. b) In EHRs it is pretty common to have entry errors. Did the manual chart review check for those? c) Guidelines are not laws and are meant to guide but not completely dictate clinical decision making. Sometimes external circumstances justify not following the guidelines. Do the authors have any sense of how often the clinicians made a conscious decision not to follow the guidelines? d) It would be useful and interesting to have more discussion of the downstream effect of these checks. For example, more details on the intervention and how it was implemented in this setting would be helpful. e) The authors state “The method does not involve advanced mathematics (e.g., algebra, calculus).” Can you give some examples of methods that require that? Reviewer #2: This is a nice paper, clearly presented, good, generally reusable methodology. My only comment is that you miss an opportunity to show off your approach. Why not do the following: 1) Have Figure 1B show potential erroneous edges (or maybe anticipate the important ones and just include those, if there are too many possible permutations). 2) Show the figure currently in 1B as a separate figure in the Results section and use line thickness to show frequency, as you mention in your introduction. 3) Add a new two-part figure to complement the current figure 3 that shows before and after, as weighted-edge graphs. It is not often I ask for a chance to re-review a paper (maybe never) but in this case I would be delighted to see a revised paper with the graphs as described above. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. 
Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 24 Jan 2022 Reviewer #1: The authors propose to use graphical models to evaluate if guidelines are followed. This is done by comparing the observed and guideline recommended graphs for HIV diagnostic testing. My comments are: a) I think similar procedures are in place at many places. For example, when my group works with EHRs we perform similar procedures to check for data quality. Without user-friendly software being available that allows easy implementation of the method in different settings, the impact of this is unclear. Response a) We believe the reviewer has suggested that procedures similar to what we have described are “in place at many places”. We agree that there are similar processes for visual checking of data quality and that the availability of user-friendly software for easy access to these procedures is indeed lacking. 
But we believe that the packaging of the data quality checks with guidelines in a graphical form is novel, and we have made the software available in the public domain for others to use. b) In EHRs it is pretty common to have entry errors. Did the manual chart review check for those? Response b) Data entry errors are certainly common when manual processes are used. The dataset for this project was created by automated platforms used by clinical laboratories. Typographic errors are relatively rare with these automated platforms. (See https://pubmed.ncbi.nlm.nih.gov/28505339/.) We employed manual chart review to determine the clinical scenario in which the test was used. c) Guidelines are not laws and are meant to guide but not completely dictate clinical decision making. Sometimes external circumstances justify not following the guidelines. Do the authors have any sense of how often the clinicians made a conscious decision not to follow the guidelines? Response c) We agree that guidelines should not dictate the practice of medicine. We found that the use of HIV resistance tests instead of the HIV screen (Scenario 2) was 100% (n=50) nonadherent to guidelines and reflected mistakes by these clinicians. In the discussion we mention a “simple intervention (e.g., phone call) to reverse course, such as with nonadherent HIV resistance tests (Figure 3).” We did indeed place a phone call to these providers, and they freely admitted they had mistakenly ordered the HIV resistance test instead of the HIV screen test. This was likely the reason for the drastic improvement in Figure 3. d) It would be useful and interesting to have more discussion of the downstream effect of these checks. For example, more details on the intervention and how it was implemented in this setting would be helpful. Response d) We do not have more details about the intervention. As described in the response above, the intervention consisted of us making providers aware of their erroneous orders over the phone. 
The extent of this intervention is described in the discussion. We also do not want to distract from the novelty of the paper, mainly the graphical methodology. e) The authors state “The method does not involve advanced mathematics (e.g., algebra, calculus).” Can you give some examples of methods that require that? Response e) Test utilization methods that involve advanced mathematics would include this paper written by one of the study authors (see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4355837/). The appendix of this paper contains an algebraic derivation. Reviewer #2: This is a nice paper, clearly presented, good, generally reusable methodology. My only comment is that you miss an opportunity to show off your approach. Why not do the following: 1) Have Figure 1B show potential erroneous edges (or maybe anticipate the important ones and just include those, if there are too many possible permutations). Response 1) We appreciate the kind comments from this reviewer, and we certainly do want to show off our approach! The reviewer has asked that we show the erroneous edges in Figure 1B. The solid edges in this graph show patterns that adhere to guidelines. The dotted lines show nonadherent patterns. 2) Show the figure currently in 1B as a separate figure in the Results section and use line thickness to show frequency, as you mention in your introduction. Response 2) We appreciate the suggestion, and we created Figure 1B as requested. As stated in the paper, we reviewed 8.790 million HIV tests with an estimated 20 thousand incorrect tests performed. The ratio of correct tests to incorrect tests makes the line thickness for the correctly performed tests much thicker than those of the incorrect tests. The line thickness of the incorrect tests is smaller and less noticeable to the reader. We want to draw the reader’s attention to the incorrect test ordering, because even though these tests make up a minority, they could lead to a missed diagnosis of HIV. 
We want to highlight this. For these reasons, we did not include the figure in the manuscript. 3) Add a new two-part figure to complement the current figure 3 that shows before and after, as weighted-edge graphs. Response 3) Weighted-edge graphs use the line thickness to convey information about the frequency of the edge. We interpreted this question in a similar line of reasoning to the question above, which requested the use of line thickness, as in a weighted-edge graph. We came to a similar conclusion when deciding to use a weighted-edge graph in this situation. Because the number of incorrect tests makes a thinner line compared to the thicker line of the correctly performed tests, the reader’s attention is drawn away from the incorrectly performed tests. Even though we do not feel this suggestion is in the best interest of the manuscript, we appreciate the reviewer’s thoughtful critique. It is not often I ask for a chance to re-review a paper (maybe never) but in this case I would be delighted to see a revised paper with the graphs as described above. Submitted filename: Response to Reviewers.docx Click here for additional data file. 6 May 2022
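The aggregation discussed throughout these responses, collapsing each patient's chronologically ordered tests into directed edges and summing those edges across patients into one weighted graph, can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' published software; the test names and patient sequences below are hypothetical.

```python
from collections import Counter

def aggregate_test_graph(patient_sequences):
    """Count directed edges (test A -> test B) across all patients'
    chronologically ordered diagnostic test sequences."""
    edge_counts = Counter()
    for tests in patient_sequences:
        # Each consecutive pair of tests forms one directed edge.
        edge_counts.update(zip(tests, tests[1:]))
    return edge_counts

# Hypothetical patient-level sequences of HIV diagnostic tests.
patients = [
    ["screen", "confirm"],                # guideline-adherent path
    ["screen", "confirm", "viral load"],  # adherent, with follow-up
    ["resistance", "screen"],             # nonadherent: resistance test first
]
graph = aggregate_test_graph(patients)
```

Comparing the resulting edge counts against the set of guideline-recommended edges is then a simple set difference: any observed edge that lies outside the recommended graph flags a potential practice deviation.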
PONE-D-20-27454R1
Graphical Analysis of Guideline Adherence to Detect Systemwide Anomalies in HIV Diagnostic Testing
PLOS ONE Dear Dr. Hauser, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. While both reviewers now endorse publication, we note that your revised manuscript does not appear to incorporate your responses to the reviewers' comments. As these queries were raised by the reviewers, it is likely that readers of your manuscript will have similar questions. As such, we request that you further revise your manuscript to incorporate your responses to the reviewers (though regarding comment a) from Reviewer #1 we note that the software used in this study is already available via reference 16). As these are a few minor textual changes and the substance of the work has been approved for publication by both reviewers, I do not expect further review will be necessary once these changes have been made. Please submit your revised manuscript by Jun 19 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Hugh Cowley Staff Editor PLOS ONE Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. 
If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed Reviewer #2: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: N/A ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. 
participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: All comments have been addressed Reviewer #2: It appears that the authors have responded to Reviewer #1's comments by disputing them in their response letter, rather than modifying the paper to address the issues, one way or another, so that future readers with the same concerns can see their responses *in the paper* (not in some online review thread). I will let Reviewer #1 respond about the appropriateness, but I feel it is important to point out. The authors have provided only a minimum response to my previous comments. They have modified Figure 1B as requested (thank you) but have left the figure in the Methods section, rather than make it a new figure in the Results section (as requested). Their refusal to add weight to the lines seems disingenuous; they could certainly use a non-linear scale (for example a log scale) to show relative counts through varying line thickness. Instead, they turned down an opportunity to improve the display of their work. 
A word about my recommendation: I don't feel "accept" is appropriate, given the almost complete lack of response to the reviewer comments, while "minor revision" seems inappropriate because they have ignored the opportunity to make the revisions the first time, and "major revision" is too severe for what is requested. I choose "reject" to get the editor's attention because my real response to the editor is "I did my best to help them improve the paper and they basically ignored me so I see no further need to be involved in the review. It's a good paper. Too bad they don't want to make it a great paper. Now it is up to you". ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. 
Please note that Supporting Information files do not need this step.
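Reviewer #2's suggestion above, using a non-linear (e.g., log) scale so that rare nonadherent edges remain visible beside edges traversed millions of times, can be illustrated with a small, hypothetical width mapping. The counts below are rough figures taken from the abstract; `edge_width`, `base_width`, and `scale` are invented names and arbitrary drawing parameters, not part of the authors' software.

```python
import math

def edge_width(count, base_width=0.5, scale=1.0):
    """Log-scale an edge's traversal count into a drawable line width,
    so rare (nonadherent) edges stay visible next to million-count edges."""
    return base_width + scale * math.log10(count + 1)

# Hypothetical counts: ~8 million adherent screen-to-confirm transitions
# vs. 3,007 nonadherent resistance-test orders (per the abstract).
counts = {("screen", "confirm"): 8_000_000,
          ("resistance", "screen"): 3_007}
widths = {edge: edge_width(n) for edge, n in counts.items()}
```

On a linear scale the adherent edge would be thousands of times thicker than the nonadherent one; on this log scale the ratio falls below 2:1, keeping both edges legible in a weighted-edge drawing.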
27 May 2022 Response To the editor: We appreciate your perseverance in providing a thorough review of this paper, balancing the various viewpoints from reviewers and authors. To simplify this hopefully final review, we have done our best to include all reviewer comments and suggestions in the manuscript. The first section is a point-by-point review of the previous round of comments noting where the changes in the manuscript occurred. The second section is the response to the last round of comments. Our most recent comments are labeled with "(5/2022)" to distinguish them from previous comments. Section 1: Editor From the editor: While both reviewers now endorse publication, we note that your revised manuscript does not appear to incorporate your responses to the reviewers' comments. As these queries were raised by the reviewers, it is likely that readers of your manuscript will have similar questions. As such, we request that you further revise your manuscript to incorporate your responses to the reviewers (though regarding comment a) from Reviewer #1 we note that the software used in this study is already available via reference 16). As these are a few minor textual changes and the substance of the work has been approved for publication by both reviewers, I do not expect further review will be necessary once these changes have been made. Response (5/2022): We have copied our initial responses to the reviewers here, highlighting additional changes we have made to the manuscript to incorporate the responses to the reviewers. ------------------------------------------------------------------------------------------------------ To the editor: Thank you for addressing the delay incurred with this submission. We appreciate that it has been difficult to secure reviewers to evaluate the study. We have reviewed the style requirements, amending the manuscript as necessary. 1. Changed headings to sentence case 2. Modified heading style 1, 2, and 3 to match the recommendations 3. 
Added a funding section: “Funding: The study had no designated funds, and the authors received no specific funding for this work. Therefore, funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.” 4. Modified the figure references from “Fig.” to “Fig”. 5. Specified the data was identifiable when we received it. See line 142, “Nearly all facilities contributed identifiable data for the full duration of the study.” Note: Due to the sensitive nature of HIV and the VA data sharing policy, we are not permitted to place this data set in the public domain. Reviewer #1: The authors propose to use graphical models to evaluate if guidelines are followed. This is done by comparing the observed and guideline recommended graphs for HIV diagnostic testing. My comments are: a) I think similar procedures are in place at many places. For example, when my group works with EHRs we perform similar procedures to check for data quality. Without user-friendly software being available that allows easy implementation of the method in different settings, the impact of this is unclear. Response a) We believe the reviewer has suggested that procedures similar to what we have described are “in place at many places”. We agree that there are similar processes for visual checking of data quality and that the availability of user-friendly software for easy access to these procedures is indeed lacking. But we believe that the packaging of the data quality checks with guidelines in a graphical form is novel, and we have made the software available in the public domain for others to use. Response a (5/2022)) This question was incorporated in the previous edit as the editor noted in their comments, “though regarding comment a) from Reviewer #1 we note that the software used in this study is already available via reference 16”. We did not further modify the manuscript in response to this question. 
b) In EHRs it is pretty common to have entry errors. Did the manual chart review check for those? Response b) Data entry errors are certainly common when manual processes are used. The dataset for this project was created by automated platforms used by clinical laboratories. Typographic errors are relatively rare with these automated platforms. (See https://pubmed.ncbi.nlm.nih.gov/28505339/.) We employed manual chart review to determine the clinical scenario in which the test was used. Response b (5/2022)) We incorporated this concern into the manuscript. Specifically, we updated the description of the test result standardization process to note that the process would identify data entry errors. “We identified HIV laboratory tests and standardized their results, including checks for manual data entry errors, with previously published methods.19,20” c) Guidelines are not laws and are meant to guide but not completely dictate clinical decision making. Sometimes external circumstances justify not following the guidelines. Do the authors have any sense of how often the clinicians made a conscious decision not to follow the guidelines? Response c) We agree that guidelines should not dictate the practice of medicine. We found that the use of HIV resistance tests instead of the HIV screen (Scenario 2) was 100% (n=50) nonadherent to guidelines and reflected mistakes by these clinicians. In the discussion we mention a “simple intervention (e.g., phone call) to reverse course, such as with nonadherent HIV resistance tests (Figure 3).” We did indeed place a phone call to these providers, and they freely admitted they had mistakenly ordered the HIV resistance test instead of the HIV screen test. This was likely the reason for the drastic improvement in Figure 3. Response c (5/2022)) We have added the following sentence to better explain whether these clinicians made a conscious decision not to follow the HIV testing guidelines. 
“When these clinicians were contacted by phone, they erroneously believed the HIV resistance test was the HIV screen and agreed to modify their HIV test ordering practices.” See Result > “Scenario 2: HIV resistance test used in place of an HIV screen”. d) It would be useful and interesting to have more discussion of the downstream effect of these checks. For example, more details on the intervention and how it was implemented in this setting would be helpful. Response d) We do not have more details about the intervention. As described in the response above, the intervention consisted of us making providers aware of their erroneous orders over the phone. The extent of this intervention is described in the discussion. We also do not want to distract from the novelty of the paper, mainly the graphical methodology. Response d (5/2022)) The paper contains a mention of our intervention in three separate locations: methods, discussion, and Figure 3 caption. We added the description of the intervention in the results with this revision. • Results: “When these clinicians were contacted by phone, they erroneously believed the HIV resistance test was the HIV screen and agreed to modify their HIV test ordering practices.” • Discussion: “First, the strategy to reverse nonadherence may originate from the review itself. Nonadherence limited to relatively few facilities or clinicians suggests a simple intervention (e.g., phone call) to reverse course, such as with nonadherent HIV resistance tests (Fig 3).” • Figure 3 caption: “Reduction of Inappropriate HIV Resistance Tests Performed at Outlier Facilities Before and After an Intervention (i.e., phone calls).” e) The authors state “The method does not involve advanced mathematics (e.g., algebra, calculus).” Can you give some examples of methods that require that? 
Response e) Test utilization methods that involve advanced mathematics would include this paper written by one of the study authors (see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4355837/). The appendix of this paper contains an algebraic derivation. Response e (5/2022)) We added a reference to this paper alongside the sentence quoted above. “The method does not involve advanced mathematics (e.g., algebra24, calculus).” Reviewer #2: This is a nice paper, clearly presented, good, generally reusable methodology. My only comment is that you miss an opportunity to show off your approach. Why not do the following: 1) Have Figure 1B show potential erroneous edges (or maybe anticipate the important ones and just include those, if there are too many possible permutations). Response 1) We appreciate the kind comments from this reviewer, and we certainly do want to show off our approach! The reviewer has asked that we show the erroneous edges in Figure 1B. The solid edges in this graph show patterns that adhere to guidelines. The dotted lines show nonadherent patterns. Response 1 (5/2022)) We made this change as requested by Reviewer 2. (Figure 1B, dashed lines show nonadherent edges.) 2) Show the figure currently in 1B as a separate figure in the Results section and use line thickness to show frequency, as you mention in your introduction. Response 2) We appreciate the suggestion, and we created Figure 1B as requested. As stated in the paper, we reviewed 8.790 million HIV tests with an estimated 20 thousand incorrect tests performed. The ratio of correct tests to incorrect tests makes the line thickness for the correctly performed tests much thicker than those of the incorrect tests. The line thickness of the incorrect tests is smaller and less noticeable to the reader. We want to draw the reader’s attention to the incorrect test ordering, because even though these tests make up a minority, they could lead to a missed diagnosis of HIV. We want to highlight this. 
For these reasons, we did not include the figure in the manuscript. Response 2 (5/2022)) We created a new figure as the reviewer suggested. We used the line thickness the reviewer suggested as well. See revised Fig 1B. Rather than apply the line thickness to all lines, we implement the line thickness for only the inappropriate tests. We think this is a compromise between what the reviewer suggested and our concerns about interpretability in our previous response. 3) Add a new two-part figure to complement the current figure 3 that shows before and after, as weighted-edge graphs. Response 3) Weighted-edge graphs use the line thickness to convey information about the frequency of the edge. We interpreted this question in a similar line of reasoning to the question above, which requested the use of line thickness, as in a weighted-edge graph. We came to a similar conclusion when deciding to use a weighted-edge graph in this situation. Because the number of incorrect tests makes a thinner line compared to the thicker line of the correctly performed tests, the reader’s attention is drawn away from the incorrectly performed tests. Even though we do not feel this suggestion is in the best interest of the manuscript, we appreciate the reviewer’s thoughtful critique. Response 3 (5/2022)) A new two-part figure was added to complement Figure 3. It is labeled Figure 4. It is not often I ask for a chance to re-review a paper (maybe never) but in this case I would be delighted to see a revised paper with the graphs as described above. ------------------------------------------------------------------------------------------------------ These are additional comments from the second round of reviews. Section 2: Reviewer 2, Q6 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. 
(Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: All comments have been addressed Reviewer #2: It appears that the authors have responded to Reviewer #1's comments by disputing them in their response letter, rather than modifying the paper to address the issues, one way or another, so that future readers with the same concerns can see their responses *in the paper* (not in some online review thread). I will let Reviewer #1 respond about the appropriateness, but I feel it is important to point out. Response (5/2022)) We have gone through the previous reviews point-by-point above and added content to the paper to explain each point. 1) The authors have provided only a minimum response to my previous comments. They have modified Figure 1B as requested (thank you) but have left the figure in the Methods section, rather than make it a new figure in the Results section (as requested). Response (5/2022)) We created a new figure as the reviewer suggested. We used the line thickness the reviewer suggested as well. Rather than apply the line thickness to all lines, we implement the line thickness for only the inappropriate tests. We think this is a compromise between what the reviewer suggested and our concerns about interpretability in our previous response. 2) Their refusal to add weight to the lines seems disingenuous; they could certainly use a non-linear scale (for example a log scale) to show relative counts through varying line thickness. Instead, they turned down an opportunity to improve the display of their work. Response (5/2022)) A new two-part figure was added to complement Figure 3. It is labeled Figure 4. A word about my recommendation: I don't feel "accept" is appropriate, given the almost complete lack of response to the reviewer comments, while "minor revision" seems inappropriate because they have ignored the opportunity to make the revisions the first time, and "major revision" is too severe for what is requested. 
I choose "reject" to get the editor's attention, because my real response to the editor is: "I did my best to help them improve the paper and they basically ignored me, so I see no further need to be involved in the review. It's a good paper. Too bad they don't want to make it a great paper. Now it is up to you."

Submitted filename: Response to Reviewers (05.2022).docx

10 Jun 2022

Graphical Analysis of Guideline Adherence to Detect Systemwide Anomalies in HIV Diagnostic Testing
PONE-D-20-27454R2

Dear Dr. Hauser,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter, and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double-check that your user information is up to date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 p.m. Eastern Time on the date of publication. For more information, please contact onepress@plos.org.
Kind regards,
Hugh Cowley
Staff Editor
PLOS ONE

Additional Editor Comments (optional):
Reviewers' comments:

24 Jun 2022

PONE-D-20-27454R2
Graphical Analysis of Guideline Adherence to Detect Systemwide Anomalies in HIV Diagnostic Testing

Dear Dr. Hauser:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 p.m. Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Mr Hugh Cowley
Staff Editor
PLOS ONE
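The review exchange above turns on weighted-edge graphs, where edge frequency is conveyed by line thickness, and on the reviewer's suggestion of a log scale so that rare (inappropriate) test transitions stay visible next to very common ones. A minimal sketch of that idea in Python; the function names, test labels, and width range are illustrative assumptions, not the authors' actual implementation:

```python
import math
from collections import Counter

def edge_counts(patient_sequences):
    """Aggregate per-patient test sequences into directed edge counts.

    Each sequence lists one patient's tests in chronological order;
    consecutive pairs become directed edges, and the counts across all
    patients form a single aggregate weighted graph."""
    counts = Counter()
    for seq in patient_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    return counts

def log_scaled_widths(counts, min_w=0.5, max_w=6.0):
    """Map edge counts to line widths on a log scale.

    A linear scale would render rare transitions nearly invisible next
    to edges with thousands of occurrences; log scaling compresses that
    range so every edge remains legible."""
    logs = {edge: math.log10(c + 1) for edge, c in counts.items()}
    hi = max(logs.values())
    return {edge: min_w + (max_w - min_w) * v / hi
            for edge, v in logs.items()}
```

For example, with hypothetical sequences like `[["screen", "confirm"], ["nat", "confirm"], ["screen", "confirm"]]`, the more frequent `("screen", "confirm")` edge receives the maximum width while the single `("nat", "confirm")` edge stays clearly visible rather than shrinking in proportion to its raw count.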
