
Library consultations and a global pandemic: An analysis of consultation difficulty during COVID-19 across multiple factors.

Raeda Anderson1,2, Katherine Fisher3, Jeremy Walker3.   

Abstract

The purpose of this study is to examine the relationship between librarians' perception of the difficulty of patron consultations and a variety of factors that characterize these interactions in the context of an academic library at a large public university. The study also provides insight into how changes in library service operations due to the global COVID-19 pandemic have affected the perceived difficulty of library consultations. Data samples were drawn from a LibInsight dataset and limited to consultations from Fall 2019 and Spring 2020 (N = 3331). Statistical analysis was conducted using ordinal logistic regression to quantify the relationship between perceptions of difficulty and factors indicating pre/post-COVID-19 modifications, patron type, scheduling, question format, library department, consultation duration, semester, and campus. Most notably, results indicate a statistically significant (p < 0.001) increase in the perceived difficulty of consultations that followed the closure of the library's physical spaces due to COVID-19, even when controlling for other factors in multiple model formulations. These results, as well as insights pertaining to other factors associated with library consultations and perceptions of difficulty, have implications for how librarians frame, understand, and manage their workloads. Additionally, findings may provide library service managers with the evidence needed to better coordinate and evaluate library services.
© 2020 Elsevier Inc. All rights reserved.

Keywords:  Assessment; COVID-19; Library consultations; Patron services; Reference interactions

Year:  2020        PMID: 35721342      PMCID: PMC9188924          DOI: 10.1016/j.acalib.2020.102273

Source DB:  PubMed


Introduction

Patron support is an integral component of library work, regardless of institution type. Specifically, consultations to help patrons find books, access materials, and complete research activities are a key facet of most librarians' work in both academic and other types of libraries. Despite extensive research in the library and information sciences literature on consultation service models and assessment, there has been limited examination of which patron and consultation characteristics are most related to the difficulty of these exceptionally important interactions. A better understanding of factors associated with difficult patron interactions would aid librarians and managers in predicting workloads, planning and balancing schedules, and providing adequate internal support. Library consultations shifted drastically in spring 2020, when many libraries halted face-to-face interactions due to COVID-19. This unexpected disruption presented an opportunity to investigate the difficulty of patron interactions under varying circumstances. This study examines the relationship between the librarian's perception of the difficulty of an academic library consultation and related factors such as patron demographics, location, duration, the medium of communication, and the nature of the inquiry, as well as the relative change in these perceptions associated with the campus closure due to COVID-19. This analysis employed an ordinal logistic regression to examine data from Georgia State University's Library Patron Transaction Form for the Fall 2019 and Spring 2020 semesters.

Literature review

Reference statistics and assessment

Assessing reference services and patron interactions, or consultations, is a prominent theme in library and information sciences literature (Logan, 2009). As Krikelas (1966) points out, there are three common reasons libraries collect statistics: to support administrative decisions, to describe organizational activities, and “to establish general principles and relationships concerning library organizations, administration, and use” (p. 494). While assessment methods ranging from surveys and focus groups to case studies and observational studies have been used (Cassell & Hiremath, 2018), one of the simplest and most common modes is self-imposed observation, which often takes the form of transaction diaries or preset forms. The reference statistics librarians gather through self-imposed observation and supplemental assessment methods can be used to evaluate and improve services, determine optimal staffing levels and locations, and identify unmet service needs (Maloney & Kemp, 2015; Reiter & Huffman, 2016; Scales et al., 2015; Sullivan et al., 1994). Statistics might also be utilized to demonstrate the importance of reference services (Kloda & Moore, 2016) and advocate for additional resources, or, as Ryan (2008) notes, as a source of guidance when facing budget reductions that necessitate adjustments to staffing and service hours. Despite a seeming consensus in the literature about the general utility and normative practice of collecting reference statistics, there has not been one single approach that has worked for, or been adopted by, a majority of libraries. This lack of standardization has long been a concern: Rothstein (1964) points out that varying approaches have been debated for decades, and as Krikelas (1966) notes, it has long been difficult for libraries to develop uniform terms and categories. 
Novotny (2002) reported in a study of Association of Research Library (ARL) member libraries that there was still little consensus on how reference statistics should be recorded; this inconsistency persisted despite efforts to standardize methods in the 1970s (Emerson, 1977) and 1980s (Association of Research Libraries, 1986). When the ARL began an in-depth discussion of reference statistics, most libraries wanted the ability to monitor success rates of reference transactions and the effect of bibliographic instruction on reference queries, but 23% of libraries involved could not supply statistics on reference transactions (Association of Research Libraries, 1986). Even in the twenty-first century, with reference statistics now more widely captured, data recorded about reference interactions is often specific to an institution and focused on what a particular library wants to know about the services it provides (Murgai, 2006), suggesting that the long history of inconsistency reflects the varying locations under study and the individual needs and choices of those collecting the data (Oberg, 2011). Logan (2009) argues that measuring reference duties has changed over time from assessing quality to justifying the need for reference services, and concludes that because local conditions and advocacy needs vary, institutions should develop their own individualized assessment methodologies that create a holistic perspective of reference services. Thus, while inconsistent data makes comparative studies of reference statistics challenging, the institution-specific benefits of collecting data in a particular way might be perceived to outweigh the disadvantages of nonstandard practice. 
Regardless of their degree of standardization, data collection and assessment projects focused on reference services in academic libraries tend to center patron service needs (Scales et al., 2015), student experiences and satisfaction (Murgai, 2012; Reiter & Cole, 2019; Rogers & Carrier, 2017), or student learning outcomes (Bradley et al., 2020; Maddox & Stanfield, 2020; Miller, 2018; Newton & Feinberg, 2020; Sikora & Fournier, 2016). The effects on librarians of providing these services and the use of reference statistics in predicting or assessing librarian workload and effort are comparatively little studied.

Difficulty of patron interactions

The Reference Effort Assessment Data (READ) Scale is a widely used tool, first developed in 2003 with the goal of systematizing perceptions of question difficulty when recording patron interactions in libraries and thus making reference statistics more useful (Gerlich & Berard, 2007). The six-point scale is the most common method that librarians use for measuring and comparing reference data within a standardized framework, although recent initiatives like ACRL's Project Outcome aim to provide more consistent instruments for assessing public services. Designed as a way to measure the skills and resources required to answer a question or complete a consultation, thus providing a mechanism for tracking and reporting “workload effort” (Gerlich & Berard, 2010, p. 137), the READ scale is often incorporated into reference tracking and assessment models and used to help differentiate between complex and simple interactions or those that require more and less effort from the librarian (Cassell & Hiremath, 2018). Although the READ Scale was widely tested, with many institutions reporting willingness to adopt it (Gerlich & Berard, 2010), it is not without limitations. Some librarians, preferring fewer levels to choose from, needing a scale tailored to a particular reference format, or wishing to incorporate existing assessment terminology or systems for categorizing patron interactions, have instead implemented modified versions of the scale (Sheehan, 2011; Stieve & Wallace, 2018). Furthermore, while the READ Scale is a usable model for tracking effort in a somewhat consistent way and offers more nuance than traditional reference classification systems that use broad categories such as “directional” and “ready-reference” questions, its effectiveness and the accuracy of the resulting data depend on consistent training of users and normalization of inputs (Bowron & Weber, 2017; Gerlich & Berard, 2010). 
In other words, although the READ Scale relies on clearly defined categories, it does not perfectly address the underlying issue of consistently assessing effort and difficulty in order to select an appropriate category when reporting interactions. Any attempt to study the effort or difficulty involved in patron interactions has limitations, particularly because effort and difficulty are relative and are not easily defined. While the difficulty of reference questions asked in academic libraries was once little studied (Whitlatch, 1989), the challenge of determining the difficulty of reference questions has received considerable attention in the literature over the last thirty-five years (Brown, 1985; Childers et al., 1991; Janes, 2002; Michell & Dewdney, 1998; Robinson, 1989; White & Iivonen, 2002). Some of the interest in determining the difficulty of questions stems from the use of these judgments to triage and appropriately direct inquiries. For example, question difficulty is often part of the training process and workflow for helping student workers, paraprofessionals, or other non–subject experts providing in-person or chat reference services know when to refer a patron to a specialist librarian (Avery & Ward, 2010; Keyes & Dworak, 2017; Pomerantz, 2004). Recognizing difficult questions or complex research needs, however, is not always straightforward. Oberg (2011) notes that a recurring theme in this literature about difficulty is in fact the challenge of defining and measuring difficulty. While there are many different frameworks for defining the difficulty or complexity of a reference question, all of these typically involve necessarily self-determined and thus subjective judgments by librarians rather than consistent, objective measures. These subjective measures include the type of source required to answer (Brown, 1985), the amount of effort required (Robinson, 1989), and the time or skill required (Childers et al., 1991).
Childers et al. (1991) propose addressing this problem by instead using proxy measures that are perceived to be more easily measured, such as number of sources, prior knowledge of subject, and ease of access to sources. But even less subjective measures are problematic proxies for difficulty; for example, the time spent answering a question or completing a consultation can be influenced by schedule constraints, additional patrons waiting for assistance, and other factors unrelated to the complexity of the question (Oberg, 2011). A further problem for tracking and understanding the effort expended in patron interactions is that the difficulty of the initial question itself does not necessarily correspond to the difficulty or complexity of—in other words, the effort expended during—the consultation. That is to say, studies of question difficulty used to inform referral decisions might not map perfectly onto assessments of interaction difficulty as gauged after the fact (Michell & Dewdney, 1998). The literature exploring the difficulty of patron transactions covers a wide range of patron interaction types, from bibliographic questions presented at a service desk to emails and phone calls with reference librarians to extended, individualized appointments with specialists. The latter category of interaction is sometimes treated simply as a subset of reference service featuring particularly involved research questions (Cassell & Hiremath, 2018) and other times considered part of a library's instruction program (Yi, 2003).
These interactions—referred to variously as one-on-one instruction, reference appointments, research clinics (Becker, 1993; Cardwell et al., 2001), personal research sessions (Whelan & Hansen, 2017), personalized research assistance, and research conversations (Maksin, 2015)—have emerged from the shift in service models from the “professionally staffed reference desk” to “a research consultation service” that relies on office hours or appointments where “librarians can spend uninterrupted time working with a user to offer research assistance and targeted instruction” (Bopp & Smith, 2011, p. 329). But while discussions of this type of patron interaction acknowledge their intensive nature, the existing literature pays little attention to how the rise of this model affects the interactions' difficulty and what degree of effort is required from librarians. These extended consultations are not the exclusive focus of this article, but they are of particular interest because such interactions tend to be time- and resource-intensive and thus warrant careful consideration in the context of operational decision-making.

COVID-19

The dataset examined in this study contains data collected during the period in which the Georgia State University Library's model for patron transactions shifted suddenly and dramatically in response to the COVID-19 pandemic (caused by the SARS-CoV-2 virus). On Friday, March 13th, GSU closed for an extended spring break and then transitioned to online learning for the remainder of the semester. From March 13th until May 5th (the end of the semester and last date included in the dataset), the majority of library employees worked from home, all in-person reference services were suspended, and all patron interactions took place via phone, email, chat, or other remote technologies. GSU's response was typical of academic libraries during this time: as the Centers for Disease Control and Prevention began issuing social distancing guidelines (National Center for Immunization and Respiratory Diseases, 2020) and cases of COVID-19 appeared across the United States, many U.S. academic libraries expanded their virtual consultation offerings, shifted rapidly from an in-person to an online consultation service model, and advertised one-on-one virtual research appointments. Lisa Janicke Hinchliffe and Christine Wolff-Eisenberg, with Ithaka S+R, quickly launched a survey on March 11th to gather detailed information about how academic libraries were responding to the crisis. Final results are not yet available, but preliminary reports indicate that 65% of libraries at first continued reference services as usual, with only 25% limiting hours or switching their offerings to virtual or phone only (Hinchliffe & Wolff-Eisenberg, 2020a). Within two weeks after the survey launched, however, 96% of libraries initially reporting continuation of normal reference services reported that they had switched to exclusively virtual and online reference (Hinchliffe & Wolff-Eisenberg, 2020b). 
In quickly adjusting to these new circumstances, academic libraries were in effect expanding on the growing practice of offering and assessing virtual patron consultations—that is, patron interactions that occur over email, instant messaging, or video conferencing (Bennett, 2017; Duff & Johnson, 2001; Maddox & Stanfield, 2019; Steiner, 2011; Steiner, 2013; Tibbo, 1995). This virtual service is particularly important for libraries serving institutions with multiple campuses or extensive distance-learning programs (Guillot & Stahr, 2004). The sudden growth in virtual reference services spurred by COVID-19 highlights the relevance of the Association of College and Research Libraries' (ACRL's) “Standards for Distance Learning Library Services,” which deems consultation services essential support in an online-learning environment (ACRL Distance Learning, 2008), and the Reference and User Services Association's (RUSA's) “Guidelines for Implementing and Maintaining Virtual Reference Services,” which outlines definitions and core requirements for any remote reference services, including virtual consultations (American Library Association, 2017). The aforementioned extensive body of literature on standards and methods of conducting virtual reference tends to focus primarily on its effectiveness in meeting patron needs, which has largely been the focus of the emergency online pivot during the COVID-19 crisis, and less on virtual reference's consequences for librarians. Transitioning to fully online modes of patron interaction during the recent disruption has enabled librarians to provide continuity of service, emphasize their ongoing availability to student and faculty researchers, and develop and test new skills and strategies. 
But while some academic libraries have been forced in the past to rapidly modify their reference services in response to localized disruptions such as fires and natural disasters (Benefiel & Mosley, 2000; Littrell & Coleman, 2019; Liu et al., 2017; Missingham & Fletcher, 2020), it has not been well established how such disruptions might affect interaction difficulty, particularly in the case of widespread physical library closures prompted by public health concerns. It is also not yet known how the many disruptive and challenging circumstances that COVID-19 has created for students (Betancourt, 2020) might affect the frequency, type, or difficulty of patron interactions during the relevant period. These conditions have also underscored the need for clarity regarding the burdens and requirements of patron interactions in order to effectively manage expectations and workloads in an environment of rapidly evolving demands and delivery methods.

Patron type, interaction format, and time spent

Beyond a broad examination of reference difficulty via proxies such as skill required or number of sources consulted for a particular question, there are factors not connected to the topic or substance of the question that affect how that exchange takes place and how much effort it involves. Analyses of reference statistics have frequently examined relationships between such factors as the type of patron, the format of the interaction, and the time a librarian spends on the interaction. Some of these studies have examined the relationship between patron type and interaction format in particular. For example, research into which patron categories are most likely to use virtual reference has found that undergraduates have comparatively high rates of usage for chat reference services, while faculty and graduate students initiate most of the virtual interactions overall (Broughton, 2002; Nolen et al., 2012). Similarly, Schwartz (2004) and Lewis and DeGroote (2008) report that graduate students are more likely than other groups to use email reference. Gerlich and Berard (2010) observe that the format of an interaction can alter its difficulty level from what might be expected based on question content alone, and Maloney and Kemp (2015) specifically found that chat reference questions were seen as more complex than those asked in person. Several studies have evaluated the amount of time spent by librarians on patron interactions in a variety of formats (Attebury et al., 2009; Gale & Evans, 2007; Lederer & Feldmann, 2012; Spencer & Dorsey, 1998; Yi, 2003). 
Time spent is often a particularly important measure when determining whether a reference service or initiative can scale, although use of scheduling tools, patron-initiated appointment models, and other management strategies can mitigate workload problems associated with interaction length or otherwise improve scalability by decreasing some of the logistical burdens on librarians (Cole & Reiter, 2017; Hess, 2014; Hoskisson & Wentz, 2001; Newton & Feinberg, 2020; Reiter & Cole, 2019). Magi and Mardeusz (2013) note that individual research consultations increase the likelihood that interactions based on complex or difficult questions will be rewarding because they “give students and librarians more time and space” compared to more perfunctory exchanges in person or online. These non-scheduled interactions are “often stressful and frustrating because the librarian feels pressure to ‘dispense with’ the student more quickly to help other waiting patrons” (pp. 290–91). This observation suggests that while the amount of time spent on an interaction might correspond with the difficulty of the question, time spent does not necessarily reflect the stress or burden involved in the interaction. It is clear from the literature that patron type, interaction format, and time spent are interrelated and affect the outcome of an interaction. However, despite extensive research over decades on librarian burnout across various library types (Affleck, 1996; Birch et al., 1986; Lindén et al., 2018; Nardine, 2019; Nelson, 1987; Sheesley, 2001; Smith & Nielsen, 1984), there has been little exploration of whether these characteristics of consultations (e.g., patron type, format, and time spent) are meaningfully predictive of an interaction's perceived complexity or difficulty and thus its effects on librarian capacity or burnout potential.

Patron transactions within specialized teams

One notable variable explored in the current study is the relationship between the reported difficulty of a patron interaction and the involvement of a specialized team of reference professionals, particularly those in research data services and special collections units. Dedicated support in libraries for data analysis, visualization, and management has grown considerably in recent years, making research data services an emerging area of academic librarianship extensively explored in the literature (Corrall et al., 2013; Koltay, 2017; Pryor et al., 2013; Si et al., 2015; Swygart-Hobaugh, 2017; Tenopir et al., 2012; Tenopir et al., 2014). Less well examined is the difficulty or complexity of these increasingly frequent research data transactions. Gao et al. (2018) use time spent as a proxy for difficulty in data-related patron interactions, describing data consultations as “lengthy” and noting that this “suggests the complexity and intensity of data-related questions” (p. 589). This observation may not be generalizable, though, as the time used for data-focused interactions is not compared to that for all patron interactions, and it is unclear how many of the questions in their dataset were answered by data specialists versus other librarians. Parrish (2006) similarly argues that GIS consultations, one type of research data interaction, are more time-consuming than other patron interactions but does not provide comparative data. While many special collections and archives have long been housed within academic libraries, often staffed by individuals with traditional library reference training and tasked with collecting reference statistics in library-wide systems, the literature on reference assessment in these settings and on the difficulty of patron interactions is also sparse. This might be attributable in part to differences in the types of questions asked of special collections and archives teams (Lavender et al., 2005; Martin, 2001). 
With the release and implementation of the Society of American Archivists' “Standardized Statistical Measures and Metrics for Public Services in Archival Repositories and Special Collections Libraries” (SAA-ACRL/RBMS Joint Task Force, 2018), which includes metrics such as “time spent responding” and “question complexity” for patron interactions, special collections teams may now begin to collect more standardized statistics to assess the difficulty and burden of interactions (Hawk, 2018). Currently, though, there is little known about how difficulty of interactions compares between special collections units and other specialized teams or institutions' overall reference services, especially during a disruptive event like COVID-19 that limits the team's and researchers' access to collections.

Methods

Data

Data for this analysis spans the duration of the Fall 2019 and Spring 2020 semesters and was drawn from Georgia State University (GSU) Library's Patron Transaction Log. Samples were drawn from the first day of courses for Fall 2019, August 26th, through the last day of courses for Spring 2020, May 5th. GSU is a large public university with more than 50,000 students across seven campuses: Atlanta (main campus), Alpharetta, Buckhead, Clarkston, Decatur, Dunwoody, and Newton. The library at each campus is a central hub for student learning, socializing, and coursework completion. Given the dynamic nature of GSU, with one of the largest student bodies in the United States, multiple campuses, and a high number of library consultations, the data from GSU's patron interactions are uniquely suited for this analysis. Beyond the demographic characteristics of the university, GSU Library employees use an instrument that records myriad details about their individual consultations, including date, campus, patron type, question difficulty, time spent, question format, patron department, whether the consultation was scheduled, whether the consultation was related to a specialized team, and space to write in more information. The analysis of consultation difficulty in this study uses most of the aforementioned variables, including difficulty level, whether the consultation was scheduled, patron type, format, date, and campus.

Level of Difficulty

Level of Difficulty is measured with a four-point ordinal scale: (1) “Directional/Very Basic,” (2) “Some Effort Required,” (3) “Effort Required,” and (4) “Significant Effort Required.” This ordinal scale represents a modified version of the READ Scale (Gerlich & Berard, 2007) as it is implemented at GSU. The (1) “Directional/Very Basic” designation is used as the reference group in the models. Level of Difficulty is the dependent variable in this study.

COVID-19

GSU Library implemented a work-from-home policy for the overwhelming majority of employees as of March 16th, 2020, which was the first day of spring break and thirteen days after the midterm of the semester. We generated a variable based on the date to indicate whether a consultation occurred prior to the work-from-home policy or from the time the policy began until the end of the semester, May 5th, 2020. Thus, the COVID-19 variable is coded as (0) prior to the work-from-home policy and (1) after the work-from-home policy. Prior to COVID-19 (0) is used as the reference group in the models. COVID-19 is a main independent variable in this study.
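The date-based derivation of this indicator can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code; the function and variable names are hypothetical.

```python
# Sketch of the COVID-19 indicator described above: 0 for consultations
# before the work-from-home policy, 1 for those on or after its start.
from datetime import date

WFH_START = date(2020, 3, 16)  # first day of the work-from-home policy

def covid_flag(consult_date: date) -> int:
    """Return 0 before the work-from-home policy, 1 on/after it."""
    return int(consult_date >= WFH_START)

# Example consultations: one before the policy, one on its first day,
# one near the end of the semester (May 5th cutoff).
flags = [covid_flag(d) for d in
         (date(2020, 2, 10), date(2020, 3, 16), date(2020, 4, 30))]
```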

Duration

Time spent on a consultation is measured with a seven-point interval scale: (1) “less than 10 minutes,” (2) “10–20 min,” (3) “20–30 min,” (4) “30–40 min,” (5) “40–50 min,” (6) “50–60 min,” and (7) “60+ min,” with the additional possibility of (8) “Unknown.” “Unknown” responses were removed, as they shift the level of measurement for the scale from interval to nominal and thus limit the analytical interpretations of the data points. Consultations recorded as (1) “less than 10 minutes” are used as the reference group in the models. Duration is a main independent variable in this study.
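The band-to-code mapping and the removal of “Unknown” responses might look like the following sketch (label strings and names are illustrative, not copied from the study instrument):

```python
# Hypothetical recode of the duration bands to ordered codes 1-7;
# "Unknown" (and anything unmapped) is dropped rather than coded.
DURATION_CODES = {
    "less than 10 minutes": 1,
    "10-20 min": 2,
    "20-30 min": 3,
    "30-40 min": 4,
    "40-50 min": 5,
    "50-60 min": 6,
    "60+ min": 7,
}

def recode_durations(raw_responses):
    # Keep only responses with a defined interval code.
    return [DURATION_CODES[r] for r in raw_responses if r in DURATION_CODES]

coded = recode_durations(["10-20 min", "Unknown", "60+ min"])
```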

Scheduled

Scheduled is measured with a single indicator “Was this transaction scheduled in advance?” and the options are coded as (0) “No” and (1) “Yes.” The reference category used in this variable for the model is (0) “No.” Scheduled is a main independent variable in this study.

Specialized Teams

GSU Library records patron interactions that are managed by two teams whose work is widely considered to be distinct from “traditional” subject-librarian and reference-librarian work. The first group is Special Collections and Archives (SCA), representing a team of archivists and other professionals who support research, scholarship, and preservation activities pertaining to GSU as an institution and to the subject areas represented in the collections. The second group is Research Data Services (RDS), representing a group of five library faculty members and one graduate research assistant who specialize in data visualization, quantitative and qualitative research methods, and a variety of software and methods related to data-driven research. Within the data, individual samples that pertain to SCA or RDS are explicitly noted and easily distinguished from other patron interactions. Both SCA and RDS observations in the data are labelled as (0) or (1) where appropriate. Implicitly, the reference group for Specialized Teams represents “traditional” patron-librarian interactions and is represented by observations where both SCA and RDS are labelled as (0).

Patron Type

Patron Type is generated from a single indicator with the following options: “Alumni,” “Community,” “Library Donor,” “Faculty,” “Graduate Student,” “Library Colleague,” “PhD Student,” “Staff,” “Undergraduate Student,” “University Administration,” and “Unknown.” We collapsed “PhD Student” into the “Graduate Student” category. Each of the aforementioned categories was dichotomized for the analysis. While some types of patrons had small numbers of consultations, we decided to keep them separate, as they are distinct groups of library patrons and collapsing them into a generic “other” category would have removed the potential nuance of differences between the groups. The smallest groups were “University Administration” (n = 3), “Alumni” (n = 28), and “Staff” (n = 55). Patrons with an unknown status were used as the reference group in the models. Patron Type is a main independent variable in this study.
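The collapsing and dichotomization steps described above can be sketched roughly as follows. Category names come from the paper; the function is a hypothetical illustration of one-hot coding with “Unknown” as the implicit all-zeros reference group.

```python
# Sketch: fold "PhD Student" into "Graduate Student", then turn the
# patron type into 0/1 dummies. "Unknown" maps to all zeros, making it
# the reference group in the regression models.
CATEGORIES = [
    "Alumni", "Community", "Library Donor", "Faculty",
    "Graduate Student", "Library Colleague", "Staff",
    "Undergraduate Student", "University Administration",
]

def patron_dummies(patron_type: str) -> dict:
    if patron_type == "PhD Student":
        patron_type = "Graduate Student"  # collapsed category
    return {c: int(patron_type == c) for c in CATEGORIES}

row = patron_dummies("PhD Student")
```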

Format of Consultation

Library consultation format was measured with a series of dichotomous variables generated from a single nominal indicator titled “Question Format” with the response options of (1) “In Person,” (2) “Email,” (3) “Phone,” (4) “Online, real-time,” and (5) “Social Media.” “Online, real-time” and “Social Media” were collapsed into a single indicator of “Online.” “In Person” was used as the reference group in the models, because most traditional library consultations occur in person at locations like a reference desk or face to face with a librarian. Format of Consultation is a main independent variable in this study.

Semester

The semester in which the consultation occurred was controlled for in this study. We used the dates of the consultations to determine whether each consultation occurred in the Fall 2019 or Spring 2020 semester. “Fall 2019” is used as the reference group to respect temporal order. “Spring 2020” has a higher average difficulty (M = 2.63, SD = 1.02) than “Fall 2019” (M = 2.33, SD = 1.03), and this difference is highly statistically significant (t(3329) = 8.19, p < 0.001). Given the higher number of consultations in the fall semester (n = 1756, 52.72%) and the statistically significant difference between the semesters on Level of Difficulty, the dependent variable, Semester is used as a control variable in this study.
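The reported degrees of freedom (3329 = 3331 - 2) indicate a pooled two-sample t-test. A minimal sketch of that computation, using toy difficulty scores rather than the study's data, is:

```python
# Pooled two-sample t-test: compare mean difficulty across two groups
# (e.g., Fall 2019 vs. Spring 2020). Toy data only; the paper reports
# t(3329) = 8.19 on the full sample.
from math import sqrt

def pooled_t(x, y):
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)  # pooled variance
    t = (my - mx) / sqrt(sp2 * (1 / nx + 1 / ny))
    return t, nx + ny - 2  # t statistic, degrees of freedom

t_stat, df = pooled_t([1, 2, 2, 3], [2, 3, 3, 4])
```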

Campus

GSU has a total of seven campuses: Atlanta (main campus), Alpharetta, Buckhead, Clarkston, Decatur, Dunwoody, and Newton. Campus is measured by a single indicator where librarians selected from the list of aforementioned cities across metro Atlanta to indicate the campus where the consultation took place or with which the librarian is affiliated. We generated dichotomous variables for each campus location. “Atlanta,” the main campus, is used as a reference in the model. Campuses are used as control variables in this study. We used listwise deletion on any patron transactions that were missing one or more of the study variables. The final sample size is N = 3331, from the total number of patron transactions (N = 26,334). This large amount of missing data is due to the normative practice at GSU of logging only the difficulty of the question, with little or no other information, when librarians are particularly busy at service points.
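The listwise deletion described above is a one-liner in pandas. This is an illustrative sketch with invented column names, not the actual dataset; it mimics the pattern in which busy service points log only the difficulty rating.

```python
import pandas as pd

# Hypothetical transactions: some rows record difficulty but omit other fields.
raw = pd.DataFrame({
    "difficulty":  [1, 3, 2, 4],
    "patron_type": ["Faculty", None, "Graduate Student", None],
    "format":      ["Email", None, "In Person", "Phone"],
})

# Listwise deletion: keep only rows complete on every study variable.
complete = raw.dropna(subset=["difficulty", "patron_type", "format"])
```

Only the two fully logged rows survive, mirroring how 3331 analyzable cases remained out of 26,334 transactions.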

Methods of analysis

Basic descriptive statistics and ordered logistic regression were used to complete this analysis. The basic descriptive statistics used were percent, n, and range. We do not report standard deviations because every study variable is dichotomous, which makes them uninformative. Ordered logistic regression was used because the dependent variable, Level of Difficulty, is an ordinal scale: higher values mean more difficulty, but the exact difference in difficulty between adjacent categories cannot be meaningfully quantified. Prior to running the analysis, we tested for multicollinearity among all independent and control variables to ensure the model assumptions for ordered logistic regression were met (Pampel, 2000). Six unique model formulations were evaluated in a stepwise manner, where each successive model included a progressively broader set of independent variables. Model 1, the base model, uses only the COVID-19, Semester, and Campus independent variables. Model 6 contains all independent variables. Analysis was completed in Stata 16.1. GSU granted IRB approval for this study in April 2020.
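One standard multicollinearity diagnostic of the kind described is the variance inflation factor (VIF), computable with nothing but numpy. The paper does not specify which diagnostic was used, so treat VIF as one reasonable choice; the data below are synthetic dummies, not the study's.

```python
import numpy as np

def vif(X):
    """VIF for each column of X: 1 / (1 - R^2) from regressing it on the others."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])   # design with intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

# Three independent synthetic dummy predictors.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 3)).astype(float)
# Independent dummies should yield VIFs close to 1; values above ~10 flag trouble.
```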

Results

Descriptive statistics

As seen in Table 1, observations of Level of Difficulty, the dependent variable in our models, are spread out across four categories: “Basic,” “Some Effort,” “Effort,” and “Significant Effort.” The most common observation for this variable was “Some Effort” (N = 1092, 32.78%). Most observations for the COVID-19 variable are from before the GSU Library closed its physical spaces (N = 2651, 79.59%), which approximately reflects the percent of days of the academic year pre-COVID-19 and post-COVID-19. Observations for the Patron Type variable are dominated by three groups, the largest being “Undergraduate Student” (N = 1095, 32.87%), followed closely by “Graduate Student” (N = 1021, 30.65%) and then “Faculty” (N = 504, 15.13%). All remaining patron types combined account for less than 22% of total observations.
Table 1. Descriptive statistics.

Ordered logistic regression modelling results

In Table 2, the magnitude and statistical significance of individual independent variable parameters are reported across six different models. These results highlight a variety of interesting insights into the data and the nature of GSU Library's patron services.
Table 2

Level of difficulty of consultations, ordered logistic regression.

| Variable | Model 1 (b) | Model 2 (b) | Model 3 (b) | Model 4 (b) | Model 5 (b) | Model 6 (b) |
| --- | --- | --- | --- | --- | --- | --- |
| COVID-19 | 0.45⁎⁎⁎ | 0.34⁎⁎⁎ | 0.40⁎⁎⁎ | 0.48⁎⁎⁎ | 0.35⁎⁎ | 0.12 |
| Patron Type (ref: undergraduate) |  |  |  |  |  |  |
| Alumni |  | 0.57 | 0.26 | 0.20 | 0.75 | 0.24 |
| Community |  | −0.26 | −0.33 | −0.25 | 0.51⁎⁎ | 0.10 |
| Library donor |  | −0.50⁎⁎ | −0.74⁎⁎⁎ | −0.58⁎⁎⁎ | 0.34 | 0.46 |
| Faculty |  | 0.20 | 0.14 | 0.19 | 0.45⁎⁎⁎ | 0.07 |
| Graduate student |  | 0.98⁎⁎⁎ | 0.65⁎⁎⁎ | 0.66⁎⁎⁎ | 0.93⁎⁎⁎ | 0.35⁎⁎ |
| Library colleague |  | 0.21 | −0.17 | −0.17 | 0.40 | 0.12 |
| Staff |  | 0.36 | 0.15 | 0.20 | 0.86⁎⁎ | 0.31 |
| Administration |  | 0.81 | −0.34 | −0.12 | 0.74 | 0.69 |
| Unknown |  | −1.03⁎⁎⁎ | −0.74⁎⁎⁎ | −0.98⁎⁎⁎ | −0.28 | 0.68⁎⁎⁎ |
| Scheduled |  |  | 1.61⁎⁎⁎ | 1.30⁎⁎⁎ | 1.47⁎⁎⁎ | 0.09 |
| Format (ref: in person) |  |  |  |  |  |  |
| Email |  |  |  | −0.54⁎⁎⁎ | −0.85⁎⁎⁎ | −0.11 |
| Online |  |  |  | 0.39 | 0.56 | 0.34 |
| Phone |  |  |  | −0.39 | −0.60⁎⁎ | −0.55 |
| Specialized Teams |  |  |  |  |  |  |
| Special collections |  |  |  |  | −1.28⁎⁎⁎ | −0.75⁎⁎⁎ |
| Research data services |  |  |  |  | −1.50⁎⁎⁎ | −1.71⁎⁎⁎ |
| Duration (ref: less than 10 min) |  |  |  |  |  |  |
| 10–20 min |  |  |  |  |  | 2.43⁎⁎⁎ |
| 20–30 min |  |  |  |  |  | 3.95⁎⁎⁎ |
| 30–40 min |  |  |  |  |  | 4.85⁎⁎⁎ |
| 40–50 min |  |  |  |  |  | 5.74⁎⁎⁎ |
| 50–60 min |  |  |  |  |  | 5.74⁎⁎⁎ |
| 60 min or longer |  |  |  |  |  | 7.17⁎⁎⁎ |
| Semester (ref: fall) | −0.17 | −0.23⁎⁎ | 0.00 | −0.01 | 0.04 | −0.21 |
| Campus (ref: Atlanta) |  |  |  |  |  |  |
| Alpharetta | −3.08⁎⁎⁎ | −2.97⁎⁎⁎ | −3.11⁎⁎⁎ | −3.41⁎⁎⁎ | −4.13⁎⁎⁎ | −2.53⁎⁎⁎ |
| Buckhead | −0.05 | −0.61 | −0.51 | −0.71 | −1.45⁎⁎ | 0.18 |
| Clarkston | −3.22⁎⁎⁎ | −3.11⁎⁎⁎ | −2.85⁎⁎⁎ | −3.23⁎⁎⁎ | −3.97⁎⁎⁎ | −2.88⁎⁎⁎ |
| Decatur | −3.01⁎⁎⁎ | −2.83⁎⁎⁎ | −2.59⁎⁎⁎ | −2.95⁎⁎⁎ | −3.75⁎⁎⁎ | −2.73⁎⁎⁎ |
| Dunwoody | 0.04 | 0.37 | 0.02 | 0.07 | −0.27 | −0.31 |
| Intercepts |  |  |  |  |  |  |
| Basic \| Some Effort | −2.28 | −2.19 | −1.89 | −2.29 | −2.98 | −1.62 |
| Some Effort \| Effort | −0.24 | −0.03 | 0.35 | −0.03 | −0.59 | 2.11 |
| Effort \| Significant Effort | 1.10 | 1.43 | 1.99 | 1.63 | 1.19 | 4.98 |
| Pseudo r² (Δ in r²) | 0.13 | 0.16 (0.03) | 0.20 (0.04) | 0.20 (0.00) | 0.23 (0.03) | 0.43 (0.20) |
| χ² (df) | 1145 (7) | 1420 (16) | 1788 (17) | 1828 (20) | 2107 (22) | 3923 (28) |

N = 3331. Data: Georgia State University Library Consultations Fall 2019–Spring 2020.

⁎ p < 0.05. ⁎⁎ p < 0.01. ⁎⁎⁎ p < 0.001.

The overall model fit for all six models is statistically significant (p < 0.001), according to each model's respective chi-squared (χ²) and degrees-of-freedom (df) metrics. While the pseudo-r² performance metric increases with almost every progressive model formulation, there was little to no increase in model performance between Models 3 and 4 (Δ in r² < 0.001), indicating that the addition of the Format variable had no appreciable effect on overall model performance. Conversely, the largest increase in model performance is between Models 5 and 6 (Δ in r² = 0.20) with the inclusion of Duration (length of consultation), indicating that this variable has a notable influence on overall model performance.

All models reported in Table 2 contain the variables COVID-19, Semester, and Campus. For COVID-19, the parameter estimates show that patron interactions that took place after the libraries closed their physical spaces were rated as more difficult by librarians. In all but Model 6, the COVID-19 variable's relationship with difficulty is significant (Models 1–4: p < 0.001; Model 5: p < 0.01). Inversely, the Semester variable has a negative relationship with the dependent variable in multiple model formulations, indicating that patron interactions in Spring 2020 (both before and after the onset of the COVID-19 pandemic) were generally given lower difficulty ratings. The Campus variable indicates that patron interactions at three of the five other library locations (“Alpharetta,” “Clarkston,” and “Decatur”) have highly significant negative relationships (p < 0.001) with the dependent variable when compared to patron interactions at the downtown Atlanta campus. These relationships are manifest across all models.
The Campus parameter estimates for the remaining two campuses, “Buckhead” and “Dunwoody,” are not statistically significant in most of the models. This is likely due to the relatively small sample sizes associated with these two campuses (N = 18, 0.54%, and N = 11, 0.33%, respectively). Patron Type, Scheduled, Format, Specialized Teams, and Duration are only included in specific subsets of models. For Patron Type, although the parameter estimates and statistical significance vary from group to group and model to model, patron interactions involving graduate students are consistently shown to be labelled as more difficult by librarians in all model formulations as compared to the “Undergraduate Student” reference group. In Model 6 this result is statistically significant (p < 0.01), and for Models 2, 3, 4, and 5, the results are highly significant (p < 0.001). The Scheduled variable exhibits similar patterns to Patron Type. With the exception of Model 6, where the relationship is not statistically significant, the parameter estimates for Scheduled are highly significant in each model (p < 0.001) and indicate that patron transactions that were scheduled in advance tended to be more difficult than unscheduled interactions. For the Format variable, the parameter estimates across all factor levels and relevant model formulations vary with respect to their statistical significance. When compared to in-person interactions, patron interactions labelled as “Email” and “Phone” had a negative relationship with the dependent variable, indicating that these interactions tended to be recorded as less difficult by librarians. While the parameter estimates indicate that interactions labelled as “Online” tended to be recorded as more difficult by librarians, these results are not statistically significant in two out of three relevant models. 
The results for the variable Specialized Teams indicate a strong distinction between the substantive types of patron interactions librarians engage in. In Models 5 and 6, the parameter estimates for both “Special Collections” and “Research Data Services” show a negative relationship with the dependent variable and are highly statistically significant (p < 0.001) as compared to more “traditional” patron–librarian interactions. Lastly, the Duration variable, which is only present in Model 6, is highly statistically significant (p < 0.001), and the magnitude of the parameter estimate increases notably for each progressive factor level of Duration. The presence of Duration in Model 6 also appears to attenuate the effects and statistical significance of many variables and individual factor levels present in the model. While both Level of Difficulty and Duration are ordinal variables, these results suggest that the two variables are at least partially collinear. This resonates intuitively, as one would expect that patron interactions and queries that take longer to resolve would be perceived as more difficult by individual librarians.
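To make the ordered-logit coefficients concrete: in a proportional-odds model, exp(b) is the odds ratio of falling in a higher difficulty category, and predicted category probabilities come from the cutpoints (intercepts). The sketch below plugs in rounded Model 1 estimates from Table 2 purely to illustrate the mechanics; it is not an exact reproduction of the fitted model, and all other covariates are held at their reference values.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# Rounded Model 1 estimates: COVID-19 coefficient and the three cutpoints.
b_covid = 0.45
cuts = [-2.28, -0.24, 1.10]   # Basic|Some Effort, Some Effort|Effort, Effort|Sig. Effort

def category_probs(xb):
    """P(Y = k) for the four ordered categories, given linear predictor xb.
    Uses the standard parameterization P(Y <= k) = logistic(cut_k - xb)."""
    cum = [logistic(c - xb) for c in cuts] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, 4)]

pre  = category_probs(0.0)       # pre-closure consultation, reference covariates
post = category_probs(b_covid)   # post-closure: linear predictor shifts up by 0.45

odds_ratio = math.exp(b_covid)   # post-closure odds of a higher difficulty rating
```

The shift raises the probability of the “Significant Effort” category and lowers that of “Basic,” which is exactly the pattern the COVID-19 coefficient describes.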

Discussion

Utility and application

The analytic approach shown can be used to provide libraries with measurable insights into the nature of librarian-patron interactions. While the study results do not provide causal explanations for the relationships that are manifest in the data, they offer a variety of insights that can provide library service managers with the leads necessary to further investigate and improve the quality of patron services. First, and most timely, the results show that the average difficulty of patron interactions has increased since the GSU campuses closed due to COVID-19. Since the results also show that online patron interactions are associated with higher Level of Difficulty ratings in general, library service managers may expect to see elevated difficulty ratings going into the Fall 2020 semester as patron services remain largely remote and online. This increased perception of difficulty may be a consequence of librarians needing better IT equipment, training, and practice in using synchronous communication tools for providing library services and support. These results may also mean that librarians need to be able to allocate larger portions of their time toward individual patron interactions at the expense of time spent on other duties in order to maintain a healthy work schedule.

Second, the results indicate that librarians do not rate the difficulty of individual patron transactions in a standardized or uniform way. For instance, the library's Special Collections and Research Data Services teams' interactions with patrons are generally limited to a specialized subset of patron needs and inquiries. As a result, it is assumed that these interactions are rarely characterized by brief durations and low-level inquiries with simple, canned, or predictable responses. Consequently, we would expect patron interactions associated with Specialized Teams to be more difficult on average.
Despite this, the results indicate that patron interactions involving specialized librarians and staff are actually rated as less difficult on average. This information is a rich foundation for follow-up investigations, training, and inquiry. It is possible that different sub-groups within the library have different understandings of how to rate the difficulty of individual patron interactions. It is equally possible that the types of inquiries that different groups of librarians engage with are measurably more or less challenging than the types of inquiries that other groups engage with. In both cases, further investigation by service managers is warranted to determine if organization-wide training and normalization is needed and whether the duties and responsibilities both within and across different groups in the library need to be adjusted. Further research comparing Level of Difficulty ratings between Specialized Teams and the library overall might also examine whether the unexpected Level of Difficulty gap widened or narrowed during the COVID-19 period. Factors such as physical facility closures and service changes might have influenced the type and complexity of patron questions handled by members of Special Collections and Research Data Services. Lastly, the results show very notable discrepancies in Level of Difficulty ratings between campus locations. Specifically, when compared with the downtown Atlanta campus, librarians at all other GSU campuses consistently rate their interactions with patrons with lower difficulty ratings. Similar to the results for Specialized Teams, these results may indicate that librarians at various campus locations may have different perspectives on how to apply difficulty ratings, even if the types of patron inquiries they engage with are largely the same across all campus locations. 
Alternatively, it is possible that there are intrinsic differences in the nature and relative difficulty of patron interactions at different campus libraries. If the former scenario is true, then service managers, again, may wish to pursue organization-wide training that ensures everyone has a shared understanding of the difficulty scale. If the latter scenario is true, then it represents an opportunity for librarians to engage in outreach efforts targeting students and faculty with the objective of promoting more advanced interactions between librarians and patrons at specific campuses.

Limitations

As with all research, this study has several notable limitations: missing data, lack of input from librarians on their perceptions, and limited data scope. We have a significant amount of missing data, as only 3331 of the 26,334 consultations in the original LibInsight dataset contained enough information to complete the analysis. The cases with missing data are largely from the reference desk, where librarians do not log information beyond difficulty, campus, and date. Because the data was not missing at random (Rubin, 1987) and the majority of consultations contained missing data (87.35%), multiple imputation was not used (Rubin, 2019). Additionally, this study only used the records generated by librarians about the difficulty of their consultations and related factors. The study would have been strengthened by an explanatory sequential mixed-methods approach (Ivankova et al., 2006), in which the researchers would have followed the statistical analysis with qualitative interviews asking librarians how they perceive the difficulty of consultations and how that difficulty might be reduced. Lastly, the data used in this study represents only a single institution, and the results shown may not generalize to other institutions.

Recommendations and future directions

Librarians at all levels can use these findings to help frame their approach to and understanding of the difficulty level of consultations. On an individual level, librarians can use these findings to potentially identify sources of strain and opportunities for growth with respect to how they engage with patrons. Library administrators and supervisors can use these findings to inform deeper assessments of service operations, develop effective training programs, and better ensure that librarians receive the support they need to prevent burnout, particularly during periods of disruption or when providing increased remote services. Administrators and supervisors can also use these findings for training purposes to better prepare librarians who work directly with patrons to be aware of common patterns within more difficult consultations. Burnout among librarians is common, and understanding which factors are associated with more difficult consultations provides opportunities to reduce the burden on librarians. Most libraries already collect, maintain, and use data points comparable to those used in this study. Scholars from other libraries should seek to reproduce this analysis with their own data. Analysis of a library's own data will help library leaders better prepare and train their librarians for patron interactions. Qualitative researchers should use these findings to frame interviews and focus groups with librarians on why consultations that occurred during the COVID-19 pandemic, and online consultations generally, are reported as more difficult than data-analysis-focused consultations and interactions with specific types of patrons. Specifically, librarians should be asked why such library consultations are more difficult and what could be done to reduce their difficulty.
The insights gleaned from those studies paired with this analysis would be instrumental in developing best practices for librarians working through difficult consultations and under conditions that might increase the effort required to provide reference services.

Conclusion

With the COVID-19 pandemic still ongoing at the time of writing, we believe that now is an opportune time for individual librarians and service managers to seriously investigate the factors contributing to the perceived burden and difficulty of individual interactions with library patrons. While we have presented a quantitative approach to evaluating the types of data to which many librarians already have access, there are many rich opportunities for continued quantitative and qualitative inquiry in this area. Further examination of the perceived difficulty and burden imposed by different types of patron interactions is an essential strategy for understanding how the growth of virtual reference affects librarian workload, mitigating the deleterious effects of librarian burnout and low morale, and maintaining high-quality public services during facility closures or changes in service models.

CRediT authorship contribution statement

Raeda Anderson: Conceptualization, Methodology, Software, Formal analysis, Writing - original draft, Writing - review & editing, Project administration. Katherine Fisher: Resources, Methodology, Writing - original draft, Writing - review & editing. Jeremy Walker: Conceptualization, Methodology, Formal analysis, Resources, Writing - original draft, Writing - review & editing, Visualization.
Appendix. LibInsight consultation form: question labels and response options.

| LibInsight question label | Response options |
| --- | --- |
| Start date | Datetime |
| Campus | Alpharetta; Atlanta; Buckhead; Clarkston; Decatur; Dunwoody; Newton |
| Location | CURVE; Circulation; Circulation/reference; Office; Other; Reference; Roving; Special collections; Tech support |
| Patron type | Alumni; Community; Donor; Faculty; Graduate student; Library colleague; Ph.D. student; Staff; Undergraduate student; University administration; Unknown |
| Question type | 1. Directional/very basic; 2. Some effort required; 3. Effort required; 4. Significant effort required |
| Time spent | Less than 10 min; 10–20 min; 20–30 min; 30–40 min; 40–50 min; 50–60 min; 60+ min; Unknown |
| Question format | In person; Email; Phone; Online, real-time; Social media |
| Department(s) | 73 unique options |
| Special Collections & Archives area | Photographic Collections; Music & Radio Broadcasting; Pulp Literature & Zines; Social Change; Southern Labor Archives; University Archives; Women's Collections; Gender & Sexuality; Rare Books; Other |
| Question | Free text response field |
| Answer | Free text response field |
| Was this transaction scheduled in advance? | Yes/no |
| Campus ID | Free text response field |
