Literature DB >> 34962961

Veterans Health Administration staff experiences with suicidal ideation screening and risk assessment in the context of COVID-19.

Summer Newell1, Lauren Denneson1,2, Annabelle Rynerson1, Sarah Rabin1, Victoria Elliott1, Nazanin Bahraini3,4, Edward P Post5,6, Steven K Dobscha1,2.   

Abstract

Universal screening for suicidal ideation in primary care and mental health settings has become a key prevention tool in many healthcare systems, including the Veterans Health Administration (VHA). In response to the coronavirus pandemic, healthcare providers faced a number of challenges, including how to quickly adapt screening practices. The objective of this analysis was to learn staff perspectives on how the pandemic impacted suicide risk screening in primary care and mental health settings. Forty semi-structured interviews were conducted with primary care and mental health staff between April and September 2020 across 12 VHA facilities. A multi-disciplinary team employed qualitative thematic analysis using a hybrid inductive/deductive approach. Staff reported multiple concerns for patients during the crisis, especially regarding vulnerable populations at risk for social isolation. A lack of clear protocols at some sites on how to serve patients screening positive for suicidal ideation created confusion for staff and led some sites to temporarily stop screening. Sites had varying degrees of adaptability to virtual care, with the biggest challenge being completion of warm hand-offs to mental health specialists. Unanticipated opportunities that emerged during this time included the increased ability of patients and staff to conduct virtual care, which is expected to continue to provide benefits post-pandemic.

Year: 2021    PMID: 34962961    PMCID: PMC8714081    DOI: 10.1371/journal.pone.0261921

Source DB: PubMed    Journal: PLoS One    ISSN: 1932-6203    Impact factor: 3.240


Introduction

Screening for suicidal ideation and assessing patients for suicide risk have become key components of suicide prevention efforts in healthcare settings, including the Veterans Health Administration (VHA) [1,2]. The joint VA-Department of Defense Clinical Practice Guidelines for the Assessment and Management of Patients at Risk for Suicide [3], published in 2019, recommend universal screening using a validated screening tool to identify individuals at risk and comprehensive follow-up assessment of individuals identified as being at risk. In accordance with these guidelines, the VHA recently implemented a multi-stage suicide risk assessment protocol across primary care, emergency department, and specialty mental health settings called the VA Suicide Risk Identification Strategy, or "Risk ID" [4]. In early 2020, we initiated a study to understand Veteran and staff perceptions of Risk ID and of how screening processes and perceptions of screening may impact subsequent care. The coronavirus pandemic that emerged in March of 2020 [5] necessitated rapidly shifting healthcare visits to telephone or video platforms, pushing clinicians and other staff to quickly scale up telehealth and video technologies, policies, and procedures. This shift to virtual care has strong potential to affect risk screening and assessment processes due to challenges in establishing rapport in video- or telephone-mediated visits, unanticipated technological failures, or difficulties coordinating care with other staff and resources. Prior studies have suggested that risk assessment or screening should take place in the context of a trusting relationship with a provider, and that providers may often rely on colleagues for assistance [6-8].
Increasing physical and psychological distance between patients and their clinicians through telephone or video visits may make developing trust more challenging and may heighten clinicians' uncertainty about conducting risk assessment, ultimately hindering effective response to patients' clinical needs. During the early phase of the pandemic, our study team began interviewing clinicians and nursing staff from primary care, mental health, and emergency department settings about their experiences conducting screening and risk assessment for suicide using VA's Risk ID procedures. To better understand how the pandemic impacted screening and risk assessment, we also asked staff participants directly about impacts of the pandemic on these processes. In this manuscript, we report these findings and discuss clinical implications for screening and suicide risk assessment in the context of a sudden shift to virtual care.

Materials and methods

At the time of the interviews, the Risk ID process consisted of three stages: question 9 of the Patient Health Questionnaire-9 (PHQ-9), the Columbia Suicide Severity Rating Scale Screener (C-SSRS) [9], and the VA Comprehensive Suicide Risk Evaluation (CSRE), a structured clinical assessment tool developed internally by the VA that inquires about factors critical to suicide risk [4]. Patients who screened positive on the PHQ-9 question (a response of 'yes' to the question "thoughts that you would be better off dead or of hurting yourself in some way") were screened using the C-SSRS; a positive C-SSRS (defined as a 'yes' response to items 3, 4, 5, or 6b) led to same-day completion of the CSRE [4]. Over the past few decades, VHA has implemented the Primary Care-Mental Health Integration (PC-MHI) initiative, which places specially trained mental health specialists on-site to support primary care teams [10]; PC-MHI staff are often called in to assist with risk assessment following positive C-SSRS screens.

Data were collected from primary care and mental health staff at 12 VHA facilities across the U.S. between April and September 2020. Potential facilities were purposively identified from the larger pool of 171 VHA facilities nationally to reflect a range of characteristics, including regional and geographic variability, operational complexity level, size (patient capacity), and adherence to the Risk ID initiative (based on several performance measures). Facility directors were then contacted via email to invite them to participate. After receiving facility leadership permission, the study team contacted primary care, mental health, and emergency department leads to disseminate recruitment emails to their staff. Those interested in participating contacted our project coordinator to schedule interviews. Forty participants completed interviews.
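The three-stage Risk ID flow described at the start of this section amounts to a simple staged decision rule. The sketch below is a minimal illustration only, based on the item numbers given in the text; the function and variable names are hypothetical and are not part of VHA's Risk ID software.

```python
# C-SSRS items whose 'yes' responses constitute a positive screen,
# per the protocol description in the text.
POSITIVE_CSSRS_ITEMS = {"3", "4", "5", "6b"}

def risk_id_next_step(phq9_item9_yes, cssrs_yes_items=None):
    """Return the next required step in the three-stage Risk ID sequence.

    phq9_item9_yes: whether the patient answered 'yes' to PHQ-9 item 9.
    cssrs_yes_items: set of C-SSRS item labels answered 'yes', or None
    if the C-SSRS has not yet been administered.
    """
    if not phq9_item9_yes:
        return "negative PHQ-9 item 9: screening complete"
    # A positive PHQ-9 item 9 triggers the C-SSRS screener.
    if cssrs_yes_items is None:
        return "administer C-SSRS screener"
    # 'Yes' to items 3, 4, 5, or 6b triggers a same-day CSRE.
    if POSITIVE_CSSRS_ITEMS & set(cssrs_yes_items):
        return "positive C-SSRS: complete CSRE same day"
    return "negative C-SSRS: screening complete"
```

For example, a patient endorsing PHQ-9 item 9 and then C-SSRS item 6b would be routed to a same-day CSRE, while endorsing only items 1 or 2 would end the sequence.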
Eighty-three percent of participants identified as women (n = 33), 68% identified as white and non-Hispanic (n = 27), 15% identified as African American (n = 6), 8% identified as Asian American (n = 3), 5% identified as white and Latinx (n = 2), and 5% identified as white and Middle Eastern (n = 2). Participants included six physicians, seven nurse practitioners, six psychologists, five licensed clinical social workers, one physician assistant, one psychiatrist, eight registered nurses, three licensed practical nurses, one advanced medical support assistant, one peer support specialist, and one program manager. The average length of time participants had worked for the VA was seven years (range 2 months to 32 years), and the average time elapsed since training was 14 years (range 1 to 41 years).

The study team developed a semi-structured interview guide informed by our overall study research questions (S1 Appendix). COVID-19 was declared a pandemic by the World Health Organization (WHO) [5] shortly before recruitment began. In response, the authors adapted the interview guide to include questions regarding changes to the screening process as a result of COVID-19 and the shift to virtual care. All interviews were audio-recorded and transcribed. Transcriptions were analyzed using ATLAS.ti software by four coders: two primary coders and two secondary coders. The interdisciplinary coding team consisted of two research assistants and two experienced qualitative researchers, including a sociologist and a social psychologist. The coding team met weekly to discuss emerging themes and new code categories and consulted co-authors with expertise in psychiatry and clinical psychology during analysis. We implemented an inductive-deductive hybrid approach to thematic analysis. Using the interview guide and research questions, the authors created an initial codebook.
Each author independently reviewed three transcripts using the initial codebook, followed by a meeting to discuss and refine the first iteration of the codebook. Codes were added or amended during the coding process to capture themes not previously defined. Data used for the current analysis were limited to specific discussions of the COVID-19 pandemic and of care and screening changes due to the pandemic. All authors discussed themes arising from these data until agreement was reached on the main findings. This study was reviewed and approved by the joint institutional review board (IRB) of the medical center and university at which the study was conducted. A waiver of written informed consent was approved; all participants provided verbal consent to be recorded prior to interviews.

Results

At the time of the interviews, participant sites were at various stages of adapting to the social distancing and technological requirements brought on by the pandemic. This enabled us to learn about a range of experiences regarding challenges and concerns, adaptations, and potential opportunities arising from pandemic-related changes. Some sites were able to pivot quickly to telephone- or video-based screening, whereas others struggled. All sites reported adaptations they made to screening processes to continue to assess and meet patient needs, as well as unexpected opportunities that arose and could continue following the pandemic. There were overlaps in the themes elicited from primary care and mental health staff, but each setting also generated unique themes.

Increased concerns for patient well-being

Consistent among all sites and both care settings was a concern for patients who were experiencing increased social isolation.

"I think there is extreme loneliness. And I'm not sure how we combat that, but then given this COVID stuff where everybody is staying inside, that's most of the comments that I've been getting on the telephone appointments. I've been trying to ask them, how are you doing amidst this COVID shutdown? What's happening with you? And most of them are saying I'm doing pretty well, although I'm getting bored. You know? But you can see where, I mean, even young people have so much loneliness. Like, I had a guy who, his dreams didn't come true in the Navy, and he lives by himself. He doesn't have anybody to talk with. He doesn't have friends. He doesn't interact with his family. And I think, oh my God. This guy is so much at risk." (Nurse Practitioner, PC setting)

One factor leading to this concern was the cancellation of all in-person mental health and other health-related groups that offer social interaction, and some staff speculated that this may lead to increased suicidal ideation for some patients.

"I think before, we had a lot of great resources at [site]. I mean, there's a lot of groups going on. PTSD groups. A lot of different groups for all sorts of Veterans. And the ones that were in it, liked it. They had tai chi. They had yoga; they had all that. Post-COVID, they're having a hard time." (Nurse Practitioner, PC setting)

Vulnerable patients, including those needing to enroll in inpatient substance use treatment or experiencing homelessness, had less access to resources to meet their needs.

"Some of them, before—pre-COVID, a lot of them were already in the shelter, but post—I mean, currently, with COVID, with a lot of shelters that's closed down or not accepting that many people without jumping through hoops to get there, COVID screening or just the social distancing, it's a little bit harder, so we're seeing more people answer yes to suicidal ideations." (Physician's Assistant, MH setting)

Participants reported that for some patients, the inability to physically come to the office further exacerbated their social isolation.

"I think for some patients, it's about the experience. You're leaving your house. These are some patients who maybe live by themselves, and leaving their houses, [sic] they come to the clinic, they make themselves a cup of coffee, they get a snack because the volunteers always bring snacks. Then they come into my office and they talk to me. Now it's not the same because they're just sitting in their house doing that and they're not having that experience and I think that's what they miss. I don't think it's seeing my face or missing me. I'm still the same person, I'm just through a video on their phone." (LCSW)

Screening patients during the pandemic challenged quality of care

Both primary care and mental health settings increased use of virtual care for their patients, and the biggest screening-related challenge reported by primary care staff during this time was concern about the ability to conduct warm handoffs to mental health providers in a timely manner when additional assessment or follow-up by a specialist was required. Staff had to rely on instant messaging to find support for patients, but this was not always efficient.

"…this has been a learning experience. It's been kind of a mess, you know, like they'll answer my call, five minutes later, the doctor will call, they won't answer. And it's just back and forth, back and forth…That's the hardest part." (LPN)

Ensuring the safety of a patient reporting suicidal thoughts via phone or video was a concern among both primary care and mental health staff.

"For me I think if I've got somebody over the phone and there's clear concern for suicide, I would want to make sure that they're safe first. You know if they're mentioning that they're having thoughts of suicide, you know are you someplace safe, have you made a plan? As far as do you have the gun in your hand, because that's going to be the focus of safety. Where if somebody is sitting in my office, I can tell that they're safe right in front of me. You know?" (LCSW)

Some staff reported that screenings were not being conducted at all because clear protocols did not exist for what to do when a patient screens positive for suicidal ideation (SI).

"So a lot of people are just not asking them at all…A lot of people are talking about the liability and not being able to do the warm handoff." (Physician)

Staff members were concerned about how the virtual formats might feel for some patients, and how that might limit their ability to be forthcoming about their mental health experiences.

"I think the technology, whether it's VVC [VA Video Connect] or just on the telephone, it's just so impersonal. I just feel that way." (LPN)

"Because and now with COVID, there's a lot of situations where people don't feel comfortable asking these questions over the phone or over video and Veterans don't feel comfortable because they feel they're being recorded." (Psychologist)

Mental health staff reported being more comfortable than primary care staff screening for SI via video or telephone but reported that completing a full assessment is more difficult without body language and other cues.

"I'd like to have eyes on him so I can make an assessment because I know, it's hard to explain, when you're doing mental health, you have to make a diagnosis. That diagnosis is based off a lot of different things. It's based off of how they present, what they're talking about with all of these different components of that and if you can't see this person, you can't really see what their affect is. You can't really see what their face is like. It does make it more difficult." (LCSW)

This lack of non-verbal cues was especially difficult when making decisions about whether to hospitalize patients who were experiencing SI.

"So I was apprehensive when this whole thing started, because especially for patients who might need to be 5150'ed [term used for involuntary hospitalization] there's less of a control, should I say? They're not right there with you." (Psychiatric Nurse Practitioner)

Adapted screening processes for virtual care

Despite the reported challenges of pivoting to virtual care, most sites were able to adapt to the circumstances to ensure patients continued to be screened. To reduce the chances of a patient "falling through the cracks" during a handoff, some sites adapted their screening systems so that only licensed providers conducted screenings, rather than nurses or other non-physician staff; this reduced some of the anxiety these staff reported when they were unable to reach the needed specialist after a patient screened positive.

"Well, we actually haven't been doing them if we are teleworking, because if they screen positive, and my doctor—because I'll call and do their check-in questions, like I normally would in the clinic, prior to their appointment, that way they're ready to go right at their appointment time. But if they screen positive, I can't give them a warm hand-off to my doctor. Because I can't transfer phones from home, so we don't do those and the doctors have been doing those screenings." (LPN)

Some participants were selective about which patients they screened and reduced the formality of the screening to allow for flexibility in how to respond.

"Yeah, I think like, just basically informal things. Like, how are you feeling and what's going on? And if they are saying that there is anything of concern, then we're prompting—Do they feel safe? Do they carry a gun—you know, we'll ask those questions, but we're not like, screening or going off a checklist about that, specifically. It's almost like, we talk about it because something comes up. But we're not asking every person when we talk to them." (RN)

Other participants informally added questions about recent losses related to the pandemic to make sure they were capturing recent events. Mental health staff reported fewer adaptations to virtual screening processes, although many reported asking patients 1) where they were located; 2) whether they were alone, and if not, who was in the home; and 3) whether they had sufficient privacy. Others increased their communication with patients about the purpose and process of screening and assessment.

"I mean I kind of, everything I do in person is what I do. And I just kind of acknowledge this is over telephone or VVC, when I'm reading off of self-report measures I try to lighten the mood and just say, I might sound a little bit like a robot because I'm reading verbatim these items, so just bear with me. And so I try to be really good about reading the instructions for each self-report measure, their answer options, and then each item as it appears. And so it's been working well." (Psychologist)

Unexpected benefits of pivoting to virtual care

Several participants reported unexpected opportunities for improved care that arose during the pandemic. Many participants, especially mental health staff, lauded the increased capacity provided by VHA to conduct virtual care, particularly video-based care.

"We didn't have a lot of people prior to COVID be set up for VVC appointments. And now with this pandemic or whatever, more and more people are getting approved for it, obviously. Or they'll qualify for a VA issued iPad because they're seen frequently enough to meet the criteria…So, you know, I do think they're more apt to follow up that way. You know? So, I mean I do think that it's some good that has come out if this is I do think that more people have been attending their appointments." (RN, MH setting)

Several staff reported that the ability to participate in virtual care was a pleasant surprise for some patients, especially given the many barriers that come with in-person care, such as transportation and scheduling.

"I think people actually are really enjoying virtual, actually…Maybe because our VA is really hard to get to…You know, traffic is awful, parking is awful." (LPN)

Although increased use of technology by both patients and staff was considered a remarkable improvement, several participants cautioned that this was not a solution for all patients, particularly older patients who are less inclined to use technology. These patients were able to meet by telephone, but this was less ideal than video-based care, and even less ideal than in-person visits. Participants reported that more appointments were being kept, and this was especially true for mental health visits.

"People are definitely keeping more appointments in mental health in VVC and telephone because they don't have the obstacles of also coming to the clinic, waiting in line, checking in. They don't have to deal with that. They can literally just from the home computer, just turn it on and their appointment's done, they sit down as soon as it starts and they're at home already when it's over. I think that they really do like that and I think we're getting better turn out because of that." (RN, MH setting)

Despite the increased rate of patients keeping appointments, participants reported that fewer appointments were being scheduled overall, especially in primary care; this reduction was attributed to the pandemic. While this caused concern because patients were receiving less care, the lower patient census allowed staff to provide increased follow-up to their at-risk patients experiencing social isolation and other concerns.

"I feel like if anything, with our lower volumes, we're able to follow up more closely with patients that we're worried about. And there's certainly patients that we're picking up based on CAN scores [a measure to assess risk of hospitalization or mortality], that score high, based on hospitalizations and ER visits, that are also sort of high-risk suicide flagged." (Physician researcher)

Discussion

To our knowledge, this is the first analysis to explore impacts of a pandemic on universal screening processes for suicide risk from the perspective of clinicians and staff. This analysis highlighted challenges and concerns as well as adaptations in processes and indirect benefits that have potential to impact care after the pandemic subsides. A frequent theme among participants was concern about their patients’ welfare. Staff were concerned about impacts of social isolation in general but also potential disruptions in care among those individuals who may depend on the healthcare system for social connection and support. Rapid elimination of group offerings may be especially impactful for this subgroup of individuals, and it could be important to further develop plans and technology platforms to better equip patients and clinical staff to be able to shift to, and sustain, group treatments during crises that limit in-person gatherings. An additional key concern of participants regarded challenges in reliably assessing patients’ level of risk. In particular, staff were concerned about the inability to see other potential clues pertinent to determining level of risk such as body language and were afraid they would miss something that would indicate greater attention was needed. Concern was expressed about virtual technology limiting patients’ disclosure of sensitive information, which might have been compounded among older patients who some felt were less inclined to use technology. These findings suggest that health and other systems may wish to invest more in training people to use healthcare-related technology and developing more user-friendly platforms for vulnerable individuals. The data also bring to light the importance of adequate security and privacy protections for telehealth systems to increase patient trust and comfort in disclosing sensitive information through a virtual platform. 
One benefit of the adaptations made for COVID-19 is that many individuals (patients and staff) who were previously uncomfortable with virtual care have gained knowledge and experience, which may help to lessen this impact going forward. Ensuring the safety of patients with positive screens or other concerning behaviors was a consistent theme for staff; a lack of, or delay in the development of, clear procedures to follow compounded these concerns. In usual practice, if a nurse conducts a screening and it is positive, the primary care provider is readily available to carry on with further assessment. Further, in recent years, PC-MHI programs have dramatically enhanced warm-handoff capability; that is, being able to connect patients felt to be at risk in primary care immediately with mental health specialists. Although it is possible to make these handoffs virtually, it can be cumbersome, especially if a patient is at higher risk and thus potentially requires hospitalization. Sites varied in their approaches to this set of challenges, with some nurses deferring initial screening to licensed providers and other screeners ensuring they had good contact information for the patient in case emergency services were required. Looking forward, development and dissemination of clear procedures and expectations for screening in a virtual environment, and for following up on screens, is needed. In addition to patients and clinicians gaining new knowledge and experience working through virtual platforms, other adaptations and perhaps unexpected positive impacts were noted. Patients who were more stable, or about whom the care team was less concerned, had fewer healthcare appointments due to the pandemic, and this provided an opportunity for the care team to prioritize and reach out to known higher-risk patients.
We also found that some facilities were able to shift their processes rapidly to accommodate the new circumstances, especially those that already had some virtual care in place. Others struggled considerably, and further analysis of the reasons for this variation has the potential to yield important insights about factors influencing system-level crisis response. There are several limitations worth noting. This was a qualitative study: the project was designed to identify key themes and generate hypotheses for further study, and we purposively interviewed individuals across a broad range of sites and disciplines. Despite having variation in site characteristics and staff roles, our data did not reveal sufficient contextual differences between sites to explain the varying levels of adaptability. As such, contextual factors that might allow for examination and comparison of facility-level approaches are generally absent. Although participants were from various facilities across the country, the sample included only staff who served Veterans within the context of the VHA. We also did not include Veteran perspectives in this analysis; staff reports of patient impacts and other issues may be misattributed in some cases. Finally, the interviews were conducted relatively early in the COVID-19 pandemic. Care processes, policy, and technological options have continued to evolve, and our results may have less applicability to current practice than they did a year ago. There are several lessons to be learned from our findings. First, overall, staff adapted and maintained flexibility as they attempted to cope with rapidly changing circumstances; their dedication to care and to Veterans was evident. Although systems have continued to evolve since COVID-19 began, our results suggest that additional preparation should be made for future pandemics or other disaster situations that decrease physical access.
These preparations include continued development, implementation, and perhaps even practice using virtual platforms by clinicians and patients, as well as improving broadband access, especially for more vulnerable individuals who rely heavily on healthcare systems for social connection and support. Such preparations can help ensure that rapid shifts to virtual care increase access to mental health services during and post-pandemic [11] without exacerbating health disparities. Health systems would also benefit from proactive development of policies and procedures to enact should in-person access suddenly become unavailable. For example, individual facilities might wish to conduct rigorous needs assessments or failure mode (and effects) analysis exercises to better prepare for rapid shifts to virtual care during times of crisis.

S1 Appendix. Interview guide: Primary care staff.

(DOCX) Click here for additional data file. 1 Oct 2021 PONE-D-21-22480Veterans Health Administration staff experiences with suicidal ideation screening and risk assessment in the context of COVID-19PLOS ONE Dear Dr. Newell, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Nov 15 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. 
For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Sarah A. Arias Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1.Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf. 2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. 
The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Overall, this was a well-written and thoughtful manuscript reporting clinical impressions of the VA transition to remote administration of Risk ID in the early stages of the COVID crisis.
My primary concerns are: 1) that some of the background information about VA practices is not included and is not easily accessible to readers outside of that system, and 2) use of neutral language. Please see comments below:

Materials and Methods:
The opening paragraph of the methods refers to two VA initiatives (VA CSRE; PC-MHI) without providing citations or links to any online documentation. This information should be easily accessible to non-VA affiliated readers who may not be able to properly contextualize the methods and results without supplementary information.

Results:
Language should be more neutral in places. Using just “concern” instead of phrases like “deep concern” would be better. Though I agree with the sentiment, this statement from the Discussion read as a value judgement: "First, overall, staff exhibited ingenuity and flexibility as they attempted to cope with rapidly changing circumstances—their clear dedication to care and to Veterans was evident."

Discussion:
The diversity of expertise among participants and hospital size and setting was a strength of the paper, but greater consideration of how those differences aligned with themes is warranted. For example, there is some discussion at length about a shift away from having nurses administer Risk ID. Was this occurring at larger hospitals, rural hospitals, etc.? Though COVID has been challenging across levels of expertise, did baseline staff experience impact decisions to shift assessment assignment? The discussion of changes in quality of care would be more useful if the results were contextualized in the hospital setting, e.g., were there particular types of hospitals, or staff makeup, that were more typical of hospitals that made the decision to not carry out parts of Risk ID?

I agree with the passage below, but what kinds of evaluations do the authors recommend?
As a closed health care system, there are some unique opportunities for addressing such questions at a depth that is more difficult to achieve when working in a majority private health insurance setting. It would be good if the authors made some more concrete recommendations. "Others struggled considerably and further analyses of reasons for this variation has potential to yield important insights about factors influencing system-level crisis response."

In the discussion, the authors draw distinctions between hospitals that were, and were not, able to transition to virtual care, but the discussion of characteristics of both groups was shallow. I appreciate that this is a preliminary, hypothesis-generating piece, but I don't have a strong sense of where the next logical step in this line of research lies. How would qualitative work like yours be integrated into larger scale program evaluation efforts?

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free.
Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

9 Nov 2021

Reviewer #1: Overall, this was a well-written and thoughtful manuscript reporting clinical impressions of the VA transition to remote administration of Risk ID in the early stages of the COVID crisis. My primary concerns are: 1) that some of the background information about VA practices is not included and is not easily accessible to readers outside of that system, and 2) use of neutral language. Please see comments below:

Author response: We will respond to specific comments where they occur below.

Materials and Methods:
The opening paragraph of the methods refers to two VA initiatives (VA CSRE; PC-MHI) without providing citations or links to any online documentation. This information should be easily accessible to non-VA affiliated readers who may not be able to properly contextualize the methods and results without supplementary information.

Author response: Thank you for pointing out this oversight. We have provided additional description and added a relevant citation.

Results:
Language should be more neutral in places. Using just “concern” instead of phrases like “deep concern” would be better. Though I agree with the sentiment, this statement from the Discussion read as a value judgement: "First, overall, staff exhibited ingenuity and flexibility as they attempted to cope with rapidly changing circumstances—their clear dedication to care and to Veterans was evident."

Author response: We agree that some of the language holds some value judgement. We have edited the language to be more neutral.
Discussion:
The diversity of expertise among participants and hospital size and setting was a strength of the paper, but greater consideration of how those differences aligned with themes is warranted. For example, there is some discussion at length about a shift away from having nurses administer Risk ID. Was this occurring at larger hospitals, rural hospitals, etc.? Though COVID has been challenging across levels of expertise, did baseline staff experience impact decisions to shift assessment assignment? The discussion of changes in quality of care would be more useful if the results were contextualized in the hospital setting, e.g., were there particular types of hospitals, or staff makeup, that were more typical of hospitals that made the decision to not carry out parts of Risk ID?

Author response: Thank you for the suggestion to better describe the contextual differences in the focal clinics. The study was not designed for, and our qualitative sample did not allow for, meaningful comparison between sites. We intentionally sought out variation across these characteristics to ensure broad representation of ideas and experiences, which can lead to better identification of key themes and hypotheses for later study (Sofaer, 1999, p. 1104). We did reexamine our data, and as expected, due to substantial variation across site and provider characteristics we were unable to detect clear patterns of association. We do note in the discussion (line 367) that sites that had existing virtual care prior to the pandemic might have had an easier time adjusting. Further, we highlight the limited contextual data as a limitation.

I agree with the passage below, but what kinds of evaluations do the authors recommend? As a closed health care system, there are some unique opportunities for addressing such questions at a depth that is more difficult to achieve when working in a majority private health insurance setting.
It would be good if the authors made some more concrete recommendations. "Others struggled considerably and further analyses of reasons for this variation has potential to yield important insights about factors influencing system-level crisis response." In the discussion, the authors draw distinctions between hospitals who were, and were not, able to transition to virtual care, but the discussion of characteristics of both groups was shallow. I appreciate that this is a preliminary, hypothesis-generating piece, but I don't have a strong sense of where the next logical step in this line of research lies. How would qualitative work like yours be integrated into larger scale program evaluation efforts?

Author response: We agree that we did not outline clear next steps for further work. Given that we did not have sufficient data to identify contextual differences between sites that adapted their SI protocols versus those that did not, further work that more precisely measures contextual variables is suggested. We have outlined further recommendations on line 396.

Reference: Sofaer, S. (1999). Qualitative methods: what are they and why use them? Health Services Research, 34(5 Pt 2), 1101.

Submitted filename: PLOS response to reviewers 11.9.21.docx

14 Dec 2021

PONE-D-21-22480R1
Veterans Health Administration staff experiences with suicidal ideation screening and risk assessment in the context of COVID-19

Dear Dr. Newell,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.
To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Sarah A. Arias
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

4. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No

17 Dec 2021

PONE-D-21-22480R1
Veterans Health Administration staff experiences with suicidal ideation screening and risk assessment in the context of COVID-19

Dear Dr. Newell:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Sarah A. Arias
Academic Editor
PLOS ONE
  7 in total

1.  Primary care-mental health integration in healthcare in the Department of Veterans Affairs.

Authors:  Andrew S Pomerantz; Steven L Sayers
Journal:  Fam Syst Health       Date:  2010-06       Impact factor: 1.950

2.  Increasing Mental Health Care Access, Continuity, and Efficiency for Veterans Through Telehealth With Video Tablets.

Authors:  Josephine C Jacobs; Daniel M Blonigen; Rachel Kimerling; Cindie Slightam; Amy J Gregory; Tolessa Gurmessa; Donna M Zulman
Journal:  Psychiatr Serv       Date:  2019-08-05       Impact factor: 3.084

3.  Trust is the basis for effective suicide risk screening and assessment in veterans.

Authors:  Linda Ganzini; Lauren M Denneson; Nancy Press; Matthew J Bair; Drew A Helmer; Jennifer Poat; Steven K Dobscha
Journal:  J Gen Intern Med       Date:  2013-09       Impact factor: 5.128

4.  If You Listen, I Will Talk: the Experience of Being Asked About Suicidality During Routine Primary Care.

Authors:  Julie E Richards; Sarah D Hohl; Ursula Whiteside; Evette J Ludman; David C Grossman; Greg E Simon; Susan M Shortreed; Amy K Lee; Rebecca Parrish; Mary Shea; Ryan M Caldeiro; Robert B Penfold; Emily C Williams
Journal:  J Gen Intern Med       Date:  2019-07-25       Impact factor: 5.128

5.  The Columbia-Suicide Severity Rating Scale: initial validity and internal consistency findings from three multisite studies with adolescents and adults.

Authors:  Kelly Posner; Gregory K Brown; Barbara Stanley; David A Brent; Kseniya V Yershova; Maria A Oquendo; Glenn W Currier; Glenn A Melvin; Laurence Greenhill; Sa Shen; J John Mann
Journal:  Am J Psychiatry       Date:  2011-12       Impact factor: 18.112

6.  Does response on the PHQ-9 Depression Questionnaire predict subsequent suicide attempt or suicide death?

Authors:  Gregory E Simon; Carolyn M Rutter; Do Peterson; Malia Oliver; Ursula Whiteside; Belinda Operskalski; Evette J Ludman
Journal:  Psychiatr Serv       Date:  2013-12-01       Impact factor: 3.084

7.  Assessment of Rates of Suicide Risk Screening and Prevalence of Positive Screening Results Among US Veterans After Implementation of the Veterans Affairs Suicide Risk Identification Strategy.

Authors:  Nazanin Bahraini; Lisa A Brenner; Catherine Barry; Trisha Hostetter; Janelle Keusch; Edward P Post; Chad Kessler; Cliff Smith; Bridget B Matarazzo
Journal:  JAMA Netw Open       Date:  2020-10-01
