
Designing optimal COVID-19 testing stations locally: A discrete event simulation model applied on a university campus.

Michael Saidani, Harrison Kim, Jinju Kim

Abstract

Providing sufficient testing capacity and accurate results in a time-efficient way is essential to prevent the spread and lower the curve of a health crisis, such as the COVID-19 pandemic. In line with recent research investigating how simulation-based models and tools could help mitigate the impact of COVID-19, a discrete event simulation model is developed to design optimal saliva-based COVID-19 testing stations performing sensitive, non-invasive, and rapid-result RT-qPCR test processing. This model aims to determine the adequate number of machines and operators required, as well as their allocation across workstations, according to the resources available and the rate of samples to be tested per day. The model has been built and exercised using actual data and processes implemented on campus at the University of Illinois at Urbana-Champaign, where an average of around 10,000 samples needed to be processed daily, representing at the end of August 2020 more than 2% of all the COVID-19 tests performed per day in the USA. It helped identify specific bottlenecks and associated areas of improvement in the process to save human resources and time. Practically, the overall approach, including the proposed modular discrete event simulation model, can easily be reused or modified to fit other contexts where local COVID-19 testing stations have to be implemented or optimized. It could notably support on-site managers and decision-makers in dimensioning testing stations by allocating the appropriate type and quantity of resources.


Year:  2021        PMID: 34185796      PMCID: PMC8241042          DOI: 10.1371/journal.pone.0253869

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Context and motivations

According to the Centers for Disease Control and Prevention (CDC), proactive testing for COVID-19 infection is a key factor in determining where and how the SARS-CoV-2 virus is spreading within a population. The early identification of infected people leads to more rapid treatment and isolation for them, as well as for those who were exposed to them [1-3]. This type of monitoring is essential to reduce the spread of the disease (CDC, 2020). Fast and innovative solutions are indeed necessary to mitigate the consequences of the COVID-19 crisis [4]. In this line, since August 2020, the University of Illinois at Urbana-Champaign (UIUC) has been providing free COVID-19 diagnostic walk-in testing stations on campus. UIUC implemented a time-efficient saliva-based COVID-19 test under an approved FDA Emergency Use Authorization [5]. This innovative saliva-based, nucleic-acid-specific (i.e., PCR), rapid-result RT-qPCR COVID-19 testing process was developed by the “COVID-19 SHIELD: Target, Test, Tell” team at UIUC [6]. According to the SHIELD team, direct saliva testing can address bottlenecks of time, cost, and supplies, enabling fast and frequent testing on a large scale. The participation throughout the semester of all students, faculty, and staff members is vital for collecting data to support the ongoing monitoring and tracking of the pandemic [7]. While the saliva-based process for COVID-19 testing enables high-throughput, rapid, and scalable testing of a large population [8], the optimal design and dimensioning of the laboratory processing the samples are key to ensuring fast feedback to the people being tested. On the one hand, UIUC has one of the most innovative on-campus COVID-19 testing programs in the United States of America (USA), offering up to 17 sites across campus [9]. 
On the other hand, it is of utmost importance to design an adequate on-site laboratory infrastructure and process, i.e., with the appropriate number of operators and machines and an adequate allocation of these resources, to test the samples in a time-efficient manner, as illustrated in Fig 1. The laboratory processing the saliva samples has achieved the regulatory compliance necessary to perform high-complexity testing under federal Clinical Laboratory Improvement Amendments guidelines. By the first day of classes (for the academic year 2020-2021), the goal was not only to administer more than 10,000 tests per day, but also to test all the samples collected and provide results within 24 hours. Such comprehensive testing allows for quick quarantine, public health contact tracing, and rapid delivery of any necessary medical care [10].
Fig 1

COVID-19 testing workflow at the University of Illinois at Urbana-Champaign (UIUC).

To put this in a national context, this number of 10,000 tests per day represented, in August 2020, around two percent of all COVID-19 tests performed daily in the USA [10, 11]. The UIUC mass testing program and its associated platform have been touted as a model system and have attracted the interest of many institutions [12]. On this basis, the overarching objective of this study is to determine the optimal allocation of resources (i.e., operators, machines, and their assignment to workstations) to test and process a significant number of samples locally on campus, by developing a simulation model that can be replicated and deployed in other contexts.

Related work

The flexibility and adaptability of mobile health stations make them commendable solutions for responding to pandemics such as the COVID-19 crisis [13]. While they represent an untapped resource for healthcare systems, such mobile stations are still not widely implemented. An extensive literature review of more than 50 articles on the strengths and weaknesses of mobile health stations in the United States has been conducted recently [14]. A growing body of evidence shows that mobile health stations are particularly successful in delivering services directly at the curbside of communities in need. Yet, further work is necessary to augment the availability of mobile health care delivery [15]. Furthermore, in the context of the COVID-19 pandemic, with an increasing number of persons to be tested and “where limited intensive care resources can be overwhelmed by a large number of cases requiring admission in a short space of time” [16], managing healthcare demand and capacity is even more challenging. In this line, mobile testing stations appear to be a suitable solution to meet the increasing demand for on-site COVID-19 testing services for workers and students. For instance, a design and engineering company has started designing mobile COVID-19 testing laboratories in conjunction with a laboratory equipment supplier to perform quick testing on large corporate and academic campuses [17]. The “mobile biosafety labs” developed can be deployed rapidly to locations that require COVID-19 testing for active or suspected cases. Their newly developed mobile lab can accommodate up to nine staff members and two diagnostic machines (one for sample collection, the other for testing) capable of testing 80 samples at a time, with a potential output of over 1,100 tests per day. 
In the present case, to reach the objective of 10,000 samples tested per day, ten such mobile labs would be required, with 90 staff members and 20 machines, which represents a resource-intensive solution. With this background, it thus becomes essential to take advantage of the capabilities offered by modeling and simulation tools to optimize the design and implementation of local and ad hoc COVID-19 testing stations. In this line, the following paragraphs focus on the applications of computer simulation to improve the performance of health services during the COVID-19 crisis. Lamé and Simmons (2020) discussed how simulation could be used in research aimed at improving the quality, safety, and efficiency of healthcare systems [18]. In this line, Lamé and Dixon-Woods (2020) emphasized the substantial potential of simulation in healthcare systems, stating that “simulation can offer researchers access to events that can otherwise not be directly observed, and in a safe and controlled environment” and that “it is a flexible and pluripotent technique that can be used in multiple study designs in healthcare improvement research” [19]. Simulation notably allows many “what if?” scenarios to be tested efficiently for decision-making [20]. For instance, simulation-based failure mode analysis can be useful to identify risks related to the readiness of healthcare workers and emergency departments for COVID-19 [21]. The use of realistic system models can actually help manage and mitigate a systemic crisis such as the COVID-19 pandemic [4]. 
Recently, researchers have discussed the role of systemic models in supporting better and more agile management of the COVID-19 crisis, and suggested a structure for a COVID-19 decision-aid system based on three hierarchical layers [4]: (i) a top strategic layer to master the crisis at a global level in a consistent fashion; (ii) an intermediate operational layer for operational decisions, based on the information captured from (iii) the tactical layer, with a more local geographic scope. Typical examples of information and decisions at the tactical layer include the monitoring of equipment, beds, and ventilators used by COVID-19 patients in a given location. In a complementary way, Currie et al. (2020) recently started to investigate how simulation models can help reduce the impact of COVID-19 [22], as simulation models can be deployed for a variety of purposes, such as the design of systems [23]. Currie et al. (2020) notably identified challenges resulting from the COVID-19 pandemic and discussed how simulation models could support decision-makers in making the most informed decisions [22]. The authors mapped the leading modeling techniques, namely system dynamics (SD), agent-based modeling (ABM), discrete event simulation (DES), and hybrid approaches, onto four scales (global, country or regional, organizational, individual), three emergency management phases (preparedness, response, recovery), and eleven COVID-related decisions, namely: quarantine, social distancing, end of lockdown, delivery of testing, targeting vaccination, hospital capacity, staffing, resource management, admission and discharge thresholds, other patients, and health and wellbeing [22]. According to their mapping, all three modeling techniques could be relevant for the delivery of testing (the scope of the present paper). More specifically, for the modeling, simulation, and improvement of the COVID-19 testing process considered here, we argue that DES is the most commendable approach. 
In fact, DES is a method for simulating the behavior and performance of a real-life process, facility, or system [18]. In comparison with the principal features of SD and ABM models [24], DES models focus on processes that involve the use of a queue. By simulating the operation of a real-world system or process over time, DES models provide decision-makers with an evidence-based tool to develop and test operational solutions before implementation [25, 26]. DES models are also convenient to deploy at an operational and tactical level [27]. In addition, DES modeling offers three advantages that are commendable in the present case: (i) it is easy for the user to understand with the help of animations and graphics (available in the freely accessible AnyLogic PLE software package used in this study); (ii) it is flexible in determining the behavior of entities; and (iii) the modeling phase is straightforward once the problem is clearly defined [4]. DES modeling is actually increasingly deployed in healthcare to improve services [27, 28], as an “effective decision-making tool for the optimal allocation of scarce health care resources to improve patient flow, while minimizing health care delivery costs and increasing patient satisfaction” [29]. Through a systematic literature review on the application of DES in healthcare [30], covering more than 200 original research articles, it has been found that the applications of DES can be divided into four major classes: health and care systems operation, disease progression modeling, screening modeling, and health behavior modeling. For instance, DES can be deployed to determine the effectiveness of increasing the number of post-surgical inpatient beds on the proportion of patients admitted to a healthcare center [31]. Lamé et al. (2016) also applied DES to identify the sources of patient waiting times in an outpatient oncology clinic and to define relevant corrective actions [32]. 
By using DES to evaluate different scenarios, they quantitatively demonstrated that advanced preparation has the strongest potential for improving patient waiting times [32]. In addition, Rusnock et al. (2017) used DES to quantitatively model the mental workload of healthcare staff in an inpatient unit at a medical center [33]. The model was deployed to find the optimal idle time, average workload, and overload time of healthcare staff under different patient loads. More recently, a stochastic DES model, freely available, has been developed to represent the critical dynamics of the intensive care admissions process for COVID-19 patients [16]. It has been applied in a large hospital in England, for which the effects of several possible interventions were simulated. In particular, model inputs were aligned with the action levers available to the planners, including the duration of time at maximum capacity, in order to inform workforce requirements. Almagor and Picascia (2020) evaluated the effectiveness of a COVID-19 contact tracing application using an agent-based model [34]. Fiore et al. (2021) deployed multi-agent simulations to estimate the daily testing capacity required to find and isolate a number of infected agents sufficient to break the transmission chain of COVID-19 infections [1]. Ghaffarzadegan (2021) developed a simulation model for what-if analyses to further monitor and mitigate the spread of COVID-19 in universities [35, 36]. In the present study, the objective is to build and deploy a new DES model to design optimal (in terms of time and resources) COVID-19 testing stations locally. The present research demonstrates how a high volume of saliva samples for COVID-19 testing can be processed in a time-efficient way with proper process optimization under resource constraints and optimal allocation of testing machines and operators.
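To make the queue-centric view of DES concrete, the core mechanism can be sketched in a few lines of plain Python (the present study uses AnyLogic; this standalone single-operator sketch, with purely illustrative parameter values, only shows how waiting emerges when service capacity lags behind arrivals):

```python
import random

def simulate_queue(n_vials=1000, mean_interarrival=4.0, mean_service=5.0, seed=42):
    """Single-operator FIFO queue: vials arrive at random intervals and are
    processed one at a time; a vial waits whenever the operator is still busy
    with the previous one. Service times are triangular (+/- 20% around the
    mean), mirroring the operator-time model used in the paper."""
    rng = random.Random(seed)
    clock = 0.0      # arrival time of the current vial
    free_at = 0.0    # time at which the operator next becomes free
    total_wait = 0.0
    for _ in range(n_vials):
        clock += rng.expovariate(1.0 / mean_interarrival)  # next arrival
        start = max(clock, free_at)                        # queue if busy
        total_wait += start - clock
        free_at = start + rng.triangular(
            0.8 * mean_service, 1.2 * mean_service, mean_service)
    return total_wait / n_vials, free_at  # mean wait, completion time

avg_wait, makespan = simulate_queue()
```

Adding operators (or shortening service times) in such a sketch shrinks the average wait; that is exactly the trade-off a full DES model explores at scale, across many chained queues and resource pools.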

Materials and methods

The need for quick and reliable COVID-19 testing has become crucial as students return to campus and employees to their workplaces [37]. Both time and space are in limited supply in most of these places, which means building a novel and large structure for COVID-19 testing is not a convenient or practical solution [17]. While there was no pre-existing model available to process a significant number of COVID-19 tests on campus on a daily basis, an initial process flow (table-based) had been proposed by the SHIELD team. The goal was to be capable of collecting and testing more than 10,000 samples in a given day, within a time window of 10 to 12 hours. In this paper, modeling and simulation tools are investigated and applied to verify and modify the process flow, and potentially to derive new and more effective process maps. Based on the model built and simulated for the University of Illinois detailed in this paper, a complementary objective of the present research work is to provide further insights and recommendations for decision-makers when designing and dimensioning testing stations in other contexts. In this line, the discrete event simulation (DES) model developed here can be replicated or scaled up for saliva testing station optimization in various situations, such as testing in remote communities or densely populated cities. To improve the scientific soundness and reproducibility of the DES model developed, the 20-item “Strengthening The Reporting of Empirical Simulation Studies” (STRESS) checklist [38] has been used, as reported in Table 1. The stepwise process for saliva sample testing is first mapped through a visual flowchart for better understanding, and a time-based Gantt chart is used to help visualize hotspots and potential areas of improvement, as illustrated in Fig 2. 
A DES model is then developed to run different scenarios (in terms of process configuration and resource allocation) to find the optimal testing process configuration, i.e., the one reaching the testing objective while minimizing the resources (operators and machines) deployed. To build and run the DES model, the software AnyLogic has been used, as it is widely acknowledged for DES modeling [24, 39]. For instance, AnyLogic recently illustrated how such simulation-based models could provide insight and decision support when applied to challenges like the COVID-19 outbreak [40]. In the present case, the AnyLogic Process Modeling Library, available in the free PLE version of AnyLogic, has been a helpful resource to model the real-world testing process in terms of agents (here, vials of saliva samples to be tested), processes (sequences of operations typically involving queues, delays, and resource utilization), and resources (operators and machines), in order to optimize an existing mobile COVID-19 testing station and evaluate the impact of different configurations and resource allocations through simulations. In fact, the DES model is here a relevant stochastic tool for estimating probability distributions of potential outcomes by allowing for random variation in inputs over time [22]. DES models are typically deployed to model the operation of systems (e.g., a testing procedure) over time, where entities flow through several queues and activities. They are generally suitable for determining the impact of resource availability (operators and machines in the present case) on waiting times and on the number of entities waiting in the queues or going through the system (vials to be tested here).
Table 1

Application of the STRESS checklist [38] to the present DES model.

Category | Checklist item | Present simulation model
Objectives | Purpose of the model | Designing better COVID-19 testing stations
 | Model outputs | Number of vials being processed on a daily basis
 | Experimentation aims | Testing different configurations (in terms of number and allocation of operators and machines)
Logic | Base model overview diagram | Gantt diagram of the testing process (S1 Appendix)
 | Base model logic | Flowchart of the COVID-19 testing process (Fig 3)
 | Scenario logic | Based on the hotspots (bottlenecks) identified
 | Algorithms | Not applicable (N/A)
 | Components | Number and allocation of operators and machines
Data | Data sources | The SHIELD team of the University of Illinois
 | Input parameters | Time distributions and resource allocation (Table 2)
 | Pre-processing | N/A
 | Assumptions | Initial data provided by the experts from the SHIELD team (see values in Fig 3)
Experimentation | Initialisation | Initial configuration provided by the SHIELD team; see the initial transient regime for the first batch in Fig 7, before the steady-state regime is reached
 | Run-length | Two consecutive processing days (10 to 12 working hours per day)
 | Estimation approach | Multiple replications (and box plots) for each scenario
Implementation | Software | AnyLogic PLE
 | Random sampling | Triangular distribution function in AnyLogic (Monte Carlo simulation)
 | Model execution | AnyLogic simulation engine, FIFO (first in, first out)
 | System specification | Intel Core i7-8550U, 1.80 GHz, 8.0 GB RAM (Windows 10 Enterprise, 64-bit)
Code access | Computer model | Supplementary digital file (DES_model.alp)
Fig 2

Overview of the modeling approach.

The present complete manuscript complements the initial study, and its associated two-page COVID-19 brief report, published by the present authors [41] to ensure rapid dissemination within the community in the context of the COVID-19 pandemic. After a synthetic literature survey providing background elements and sources of inspiration, all the steps of the present research approach are now thoroughly detailed and illustrated so as to be clearly understandable, even by non-experts in modeling and simulation tools. Importantly, the DES model is made available (see S1 File) for researchers, managers, or decision-makers who want to reuse or adapt it in other contexts. Finally, the verification and validation of the DES model is a key point that is further addressed in the discussion section, based on Sargent’s recommendations [23], by comparing the outputs of the DES model with the data from the real situation on campus.

Results

Modeling phase

Description and visualization of the COVID-19 saliva-based testing process

The baseline, or background information, for this research work was the table-based process flow provided by the UIUC SHIELD team, including the innovative saliva-based process for large-scale SARS-CoV-2 testing developed by a group of researchers at UIUC [8]. This model, as illustrated in Fig 2, allowed an initial understanding of the testing process, including a description of the different tasks to be performed, the resources required (operators and machines), and the duration of each task. Following the stepwise modeling approach depicted in Fig 2, the next step consisted of translating the table-based process flow into a Gantt diagram to better visualize the testing process and identify the time-consuming tasks. As this step is not mandatory for building the DES model, but can provide additional insights for decision-makers (e.g., a more visual understanding of the process on a timeline), two Gantt diagrams have been drawn and are available in S1 Appendix: one with all tasks performed serially in the minimum configuration (i.e., only one operator and one machine of each type available), and one with the first proposed improvement (ten operators and two testing machines available), allowing some tasks to be performed in parallel. The bottlenecks and key areas of improvement are highlighted by boxes with black borders in S1 and S2 Figs in S1 Appendix. On this basis, it has been possible to quickly identify appropriate ways to enhance the testing process, such as the tasks that could be parallelized by increasing the number of operators (e.g., between tasks ID100 and ID140). In parallel, the process flow for testing saliva samples has been mapped out using a logigram representation, in Fig 3, as a useful basis to build the DES model (see sub-section 3.2). 
Note that in Fig 3, the first logo (a vial) indicates the number of samples handled in each step; the second logo (a person) indicates the number of operators required to perform each task and process the associated number of vials; the third logo (a clock) indicates the time needed by the operator to perform and complete the task; and the fourth logo (a gear) indicates whether or not a machine is required for a given task and, if so, the working time of the machine to process the associated number of vials for this task.
Fig 3

Workflow model of the COVID-19 testing process, used for developing the DES model.

In the following sub-section, a discrete event simulation model is built and deployed to optimize the four main steps of the process flow, represented in Fig 3, in terms of resource allocation on-site. Note that the “offline” operations in Fig 3 refer to the operations that can be prepared in advance. The “offline” operations are out of scope for the present study, as it is assumed that a sufficient quantity of ready-to-be-used racks (including 96 and 384 well plates) is available to receive and carry the vials that have to be tested.

Discrete event simulation (DES) model

Fig 4 provides a complete overview of the DES model of the COVID-19 testing process for saliva samples deployed at UIUC, following the four main phases described in the previous sub-section, namely: preparation, collection, pre-testing, and testing. All key resources are modeled: a pool of operators and a pool of equipment, as illustrated in Fig 4. These resources are allocated to specific tasks, as listed in Table 2. Note that for some specific sequences of operations, the same operator is assigned (e.g., “assignOpPrep”) to perform the whole sequence before being released (e.g., “releaseOpPrep”) to handle a new (set of) vial(s). The time distribution of each task is given in Table 2, based on information provided by the “COVID-19 SHIELD: Target, Test, Tell” team. Note that while a constant time is used for equipment, a triangular distribution has been chosen, based on experts’ knowledge, to model the variability of performance among the operators (with a distribution of +/- 20 percent around the mean value).
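For illustration, this operator-time model can be reproduced with Python's standard library (a sketch; the model itself uses AnyLogic's triangular distribution function):

```python
import random

def operator_time(mean_seconds, rng=random):
    """Sample an operator task time: triangular distribution with the mode at
    the mean and bounds at +/- 20%, so operator_time(30) draws from
    triangular(24, 30, 36), matching the operator entries in Table 2."""
    return rng.triangular(0.8 * mean_seconds, 1.2 * mean_seconds, mean_seconds)

sample = operator_time(30.0)  # always falls within [24.0, 36.0]
```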
Fig 4

Developed DES model, with pools of resources and parameters to optimize.

Table 2

Time distribution and resources allocation in the DES process.

Task | Time distribution (seconds) | Resources | In DES model
Open bag and extract vial | triangular(4,5,6) | Operator | OpPrep
Attach labels and scan vial | triangular(24,30,36) | Operator | OpPrep
Place vial in rack | triangular(8,10,12) | Operator | OpPrep
Transfer vial rack to tank | triangular(24,30,36) | Operator | OpTran
Heat to 95°C | triangular(1800,1830,1860) | Machine | EqHeat
Transfer vial rack | triangular(24,30,36) | Operator | OpTran
Open tube and pipette to PCR tube | triangular(24,30,36) | Operator | OpTran
Load test tube rack into Biomek | triangular(24,30,36) | Operator | OpColl
 | constant(30) | Machine | EqBio
Load 96 well plate into Biomek | triangular(24,30,36) | Operator | OpColl
 | constant(30) | Machine | EqBio
Transfer to 96 well plate | constant(10) | Machine | EqBio
Unload and store plate | triangular(24,30,36) | Operator | OpColl
 | constant(30) | Machine | EqBio
Discard test tube | triangular(96,120,144) | Operator | OpColl
Load 4 96 well plates | triangular(24,30,36) | Operator | OpLoad
 | constant(30) | Machine | EqBio
Load 384 well plate | triangular(12,15,18) | Operator | OpLoad
 | constant(30) | Machine | EqBio
Transfer to 384 well plate | constant(1800) | Machine | EqBio
Transfer to Vortex | triangular(24,30,36) | Operator | OpLoad
Vortex | constant(180) | Machine | EqCent
Centrifuge | constant(180) | Machine | EqCent
Transfer plate to QuantiStudio | triangular(48,60,72) | Operator | OpTest
 | constant(60) | Machine | EqTest
Test-RT-qPCR | constant(5400) | Machine | EqTest
Output results and prepare for next batch | triangular(240,300,360) | Operator | OpTest
 | constant(120) | Machine | EqTest

Simulation phase and interpretation of the results

The DES model, developed using AnyLogic and based on the conceptual model described in the previous sub-section, has been run ten times for each of the different configurations in order to find the optimal one, i.e., the one minimizing the number of resources used while achieving the objective of 10,000 samples tested in one working day (a time window of 10-12 hours). In the present model, one replication of the DES model corresponds to one working day, to be consistent with the actual on-campus testing process, starting between 6 and 8 a.m. and finishing between 6 and 8 p.m. depending on the day and workload. The different configurations have been tested following an experiment plan, an extract of which is provided in the table at the bottom of Fig 5. Note that running each scenario more times does not change the box plots’ features. This can be explained by the fact that the operator times (subject to a triangular distribution) are on average ten times shorter than the machine times (constant) (see Table 2). As such, in the present case, running each scenario ten times keeps the experimentation both time- and cost-efficient, while generating sound simulation results.
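The replication statistics summarized in the box plots of Figs 5 and 6 can be computed directly from the per-replication completion times; a short Python sketch (the ten replication times below are illustrative, not the paper's data):

```python
import statistics

def replication_summary(times_hours):
    """Box-plot statistics over the replications of one scenario: min,
    quartiles, median, max, plus the mean (marked by an 'X' in Fig 6)."""
    q1, med, q3 = statistics.quantiles(times_hours, n=4)
    return {"min": min(times_hours), "q1": q1, "median": med,
            "q3": q3, "max": max(times_hours),
            "mean": statistics.fmean(times_hours)}

# Ten illustrative replication times (hours) for one scenario:
summary = replication_summary(
    [10.2, 10.4, 10.3, 10.5, 10.1, 10.6, 10.3, 10.4, 10.2, 10.5])
```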
Fig 5

Evaluation of scenarios through simulation runs (in established and continuous regime).

After running a couple of simulations with realistic numbers for the set of parameters (i.e., varying the number of operators for each cluster of tasks, and the number of machines available), three key hotspots have been identified in the process flow, as highlighted by the dotted frames in Fig 4. These hotspots correspond to bottlenecks, where an accumulation (queue) of vials to be tested occurs, slowing down the overall testing process flow. The first bottleneck occurs when the number of operators allocated to the preparation of the vials is insufficient to deal with the number of vials collected for testing. The second is also related to the number of operators, here allocated to the “opening and pipetting” task, which needs to be performed individually for each sample. The third is due to the time required (one and a half hours) to complete the “Test-RT-qPCR” task for a batch of 384 vials. As illustrated at the bottom of Fig 4, when running a DES simulation, the resources that are underused (idle units) or overused (high utilization percentage) can be readily detected. Fig 5 presents the distribution of times for testing 10,000 samples under different scenarios, with ten replications for each scenario. To eliminate the three bottlenecks slowing down the whole process flow, the scenarios are built along three dimensions: (i) the number of operators for vial preparation, (ii) the number of resources allocated to the transfer operation, and (iii) the number of machines available to test a batch of 384 vials. Results in Fig 5 clearly show that two measures have a significant impact on testing times: adding more operators for preparation, up to a certain extent, and having sufficient testing machines available, as further discussed in the next paragraph. Another interesting insight is that, overall, there is low variability induced by the operators’ performances. 
Of course, the more operators and machines there are, the more time-efficient the process will be. Yet, the resources have to be optimized not only under cost constraints but also to limit the number of operators working together at the same workplace or station, to further prevent the spread of the virus. In the first simulation (Sim. #1) listed in Fig 5, the insufficient numbers of both operators for preparation and testing machines create two critical bottlenecks in the testing process flow, leading to a mean time above 13.5 hours to test 10,000 samples. Adding an extra testing machine (Sim. #2) reduces the mean time by one hour. For this configuration (handling 10,000 samples a day), having more than four machines (Sim. #3) does not bring any improvement in terms of time efficiency. Increasing the number of operators for preparation from 10 to 12 (Sim. #4) significantly decreases the queue, without completely solving this bottleneck. Also, as more vials are treated simultaneously at the beginning of the testing process flow, a queue builds up for the transfer operations (Sim. #4 to Sim. #6). In Sim. #7, no more bottlenecks are detected, and in this configuration, 10,000 saliva samples can be tested for COVID-19 in less than 10.5 hours when operating in a continuous regime. Further increasing the number of resources available (Sim. #8 to Sim. #10) does not significantly decrease the mean time. In Fig 6, more details are given for the most promising configurations, i.e., the ones minimizing the use of resources while keeping a mean time below 11 hours. The boxplots show the minimum, first quartile, median, third quartile, and maximum time after running ten simulations for each of these configurations. While the middle line of the box represents the median, dividing the time set into a bottom half and a top half, the “X” in the box represents the mean value. 
All in all, as illustrated by the optimal steady state in Fig 7, the optimal resource allocation to test 10,000 samples within the available time window has been found (see Sim. #7 of Fig 5 and Table 3). Note that the transient regime only occurs when a testing center (re-)opens (e.g., after a break or holiday on campus). Otherwise, it can be assumed that the process operates in a continuous regime, as the testing/processing center operates on a continuous basis (i.e., 7 days a week) on campus. For reference, the left part of the plot in Fig 7 indicates the additional time to consider when the process needs to be re-initialized or started from scratch.
Fig 6

Detailed box and whisker chart for key configurations (Sim. #4 to Sim. #7).

Fig 7

Testing time of vial batches in transient and continuous operation.

Table 3

Optimal resource allocation, as a function of the number of samples to be tested.

Samples   OpPrep  OpTran  OpColl  OpLoad  OpTest  EqHeat  EqBio  EqCent  EqTest
 2,000       3       2       1       1       2       1       1      1       2
 4,000       6       4       1       1       2       1       1      1       2
 6,000       9       7       1       1       3       1       2      1       3
 8,000      11      10       1       2       3       2       2      1       3
10,000      13      12       1       2       4       2       3      1       4
12,000      15      12       1       2       5       2       3      1       5
14,000      19      13       2       3       6       2       3      1       6
16,000      21      14       2       3       7       2       3      1       7
18,000      23      16       2       3       7       3       3      1       7
20,000      26      19       3       3       8       3       3      1       8
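For practitioners reusing Table 3, a small helper that rounds the expected demand up to the next 2,000-sample bracket yields a conservative allocation. The tuples below transcribe the table rows (column order as in the table header); the helper function itself is an illustration, not part of the paper's model:

```python
# Table 3 transcribed: samples/day -> (OpPrep, OpTran, OpColl, OpLoad,
# OpTest, EqHeat, EqBio, EqCent, EqTest).
TABLE_3 = {
    2000:  (3, 2, 1, 1, 2, 1, 1, 1, 2),
    4000:  (6, 4, 1, 1, 2, 1, 1, 1, 2),
    6000:  (9, 7, 1, 1, 3, 1, 2, 1, 3),
    8000:  (11, 10, 1, 2, 3, 2, 2, 1, 3),
    10000: (13, 12, 1, 2, 4, 2, 3, 1, 4),
    12000: (15, 12, 1, 2, 5, 2, 3, 1, 5),
    14000: (19, 13, 2, 3, 6, 2, 3, 1, 6),
    16000: (21, 14, 2, 3, 7, 2, 3, 1, 7),
    18000: (23, 16, 2, 3, 7, 3, 3, 1, 7),
    20000: (26, 19, 3, 3, 8, 3, 3, 1, 8),
}

def allocation_for(samples_per_day):
    """Return the Table 3 row for the smallest bracket covering the demand."""
    for bracket in sorted(TABLE_3):
        if samples_per_day <= bracket:
            return TABLE_3[bracket]
    raise ValueError("demand exceeds the 20,000-sample range of Table 3")

# e.g., a site expecting 9,500 samples/day sizes itself on the 10,000 row.
assert allocation_for(9500) == (13, 12, 1, 2, 4, 2, 3, 1, 4)
```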

Discussion and implications

This study presented a DES model to help streamline operations at a large COVID-19 testing station on a university campus in the US. It has been shown that testing centers could benefit from simulation models to increase the time efficiency of their process while avoiding any overutilization of resources. This is particularly crucial in the current COVID-19 context, where millions of people are getting tested [42] and practitioners have to build new testing centers, or adapt existing ones, while making rapid but well-dimensioned design decisions. Through the DES model, it has been demonstrated that, with a process flow designed and optimized in terms of resource use and allocation, it is feasible to collect, transport, and test 10,000 samples on-site per day with a reasonable quantity of resources mobilized. The verification, validity, and reproducibility of simulation models are of utmost importance for scientific, societal, and practical benefits, notably for the advancement and reuse of operational knowledge [38]. Sargent (2013) provided and discussed practical approaches for the verification and validation of simulation models [23]. A simulation model can be considered valid when it is an accurate representation of the real-world system and when its domain of applicability possesses a range of accuracy consistent with the intended application of the model [23]. Here, we compare the inputs and outputs of the DES model with the actual process and the number of samples processed in the field. The model has been designed based on inputs given by an expert from the "COVID-19 SHIELD: Target, Test, Tell" program, who developed and implemented the testing process on campus at the University of Illinois at Urbana-Champaign.
The University of Illinois Urbana-Champaign has since released a data dashboard displaying daily information about the University's on-campus COVID-19 testing program [43], available at https://go.illinois.edu/COVIDTestingData. This dashboard shows the number of tests performed on a given day as well as the positivity rate. As shown in Fig 8, an average of 10,118 daily tests was recorded during the first two weeks of class on campus (for the academic year 2020/2021), which is well aligned with the purpose and objective of the DES model and shows that the simulation output is close to the actual system output, in accordance with the event validity criterion in [23]. On average, during the fall 2020 semester, UIUC conducted about 10,000 tests each weekday and about half that number on weekend days [44]. Also, in Fig 8, the highest numbers of tests are observed at the beginning and end of each week, while the lowest numbers of persons tested are recorded on weekends and on Wednesdays. As UIUC requires its students, staff, and faculty members to be tested twice a week to access on-campus facilities [7, 11], or at least once every four days, Monday/Thursday and Tuesday/Friday are the two pairs of days with the highest numbers of tests to be processed.
Fig 8

Variability of the number of persons being tested on a daily basis (source: [43]).
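The weekday/weekend pattern reported above implies a simple back-of-the-envelope weekly load. The population figure derived below is a rough illustration of that arithmetic, not a number stated in the paper:

```python
# Dashboard figures quoted above: ~10,000 tests per weekday, and about
# half that number on each weekend day.
weekday_tests, weekend_tests = 10_000, 5_000
weekly_total = 5 * weekday_tests + 2 * weekend_tests  # tests per week

# Under the twice-a-week testing mandate, this corresponds roughly to
# the number of people covered by the program (illustrative estimate):
people_covered = weekly_total // 2
assert weekly_total == 60_000 and people_covered == 30_000
```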

On this basis, a timely and relevant line for future research would be to forecast the number of tests performed on a daily basis, in order to adapt the resources needed, e.g., to anticipate the days when higher demand requires more resources, so as to ensure sufficient capacity and to return COVID-19 test results in a time-efficient manner. Real-time data from the UIUC testing program should make it possible to detect emerging trends rapidly and to act quickly in response. While more data points are needed to build a sound prediction model (the on-campus testing policy is still evolving, and the trend cannot be set accurately until the new policy stabilizes), the DES model has been run to estimate the optimal resource allocation according to the number of samples to be tested per day. The DES model developed in this paper can be launched quickly for scenario exploration to help adjust and refine the operations of a given testing program; in fact, it can be used to conduct what-if analyses. In practice, it can also easily be modified, for example, if parts of the process flow evolve or if a new machine (e.g., a heating machine with a shorter working time) is introduced in the testing center. The optimal resource allocation (i.e., the minimal number of operators and machines for each task) as a function of the number of saliva samples (from 2,000 to 20,000) to be tested in a single day (a time window of 10 working hours to process the tests in a continuous regime) is given in Table 3 for the current process flow.
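As an example of the kind of what-if question such a model answers, a deliberately simplified capacity check can bound the number of testing machines needed for a target turnaround. The plate size and cycle time below are hypothetical placeholders, not the campus lab's actual parameters (which is why the result differs from Table 3, where each machine handles far more samples per run):

```python
import math

def min_machines(samples, hours, batch_size=96, cycle_hours=1.5):
    """What-if helper: minimum parallel PCR machines needed to process
    `samples` within `hours`, assuming each machine runs plates of
    `batch_size` samples per `cycle_hours` run (illustrative values)."""
    runs_needed = math.ceil(samples / batch_size)
    runs_per_machine = math.floor(hours / cycle_hours)
    return math.ceil(runs_needed / runs_per_machine)

# Under these hypothetical parameters, processing 10,000 samples in a
# 10-hour window would take 18 machines; doubling the demand cannot
# require fewer machines.
assert min_machines(20_000, 10) >= min_machines(10_000, 10)
```

A full DES model refines such static bounds by capturing queueing between stations, operator availability, and transient start-up effects.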
As the statewide program, SHIELD Illinois, is currently working to increase testing capacity to serve institutions nationally and entities in Illinois that have expressed interest in the new technology [45, 46], such results can be useful for decision-makers willing to implement a similar testing procedure in their respective contexts (e.g., an organization, a city) with more or fewer samples to be processed each day. As noted by Lyng et al. (2021), the optimal use of COVID-19 tests depends on different parameters, such as the goals of testing, the population, or the setting [2]. Last but not least, by following the six principles of reporting simulation studies [38], the present DES model and its results can be reproduced, and the model can be reused to investigate further hypotheses in the same application area or to test the generalizability of this COVID-19 testing process in other situations. A promising line for future research would be to combine such simulation models with newly developed artificial intelligence techniques, e.g., automated machine learning [3] or deep learning [47], to further predict and mitigate COVID-19, as well as to share and maintain these data in a transparent and decentralized way using blockchain technology [48].

DES model.

(ALP)

Gantt diagrams of the testing process.

(DOCX)

Peer review history. The manuscript (PONE-D-21-16234) received a major revision decision from Academic Editor Thippa Reddy Gadekallu on 3 Jun 2021. Reviewer #1 asked for a corrected title, language polishing, a strengthened literature review, and reformatted references; Reviewer #2 asked for an extended introduction with a clearly stated research objective and additional context on the issues the work addresses. The authors submitted a point-by-point response and a revised manuscript on 11 Jun 2021; both reviewers confirmed that all comments had been addressed, and the manuscript was judged scientifically suitable for publication on 15 Jun 2021 and formally accepted on 18 Jun 2021.
  19 in total

1.  Discrete event simulation for healthcare organizations: a tool for decision making.

Authors:  Eric Hamrock; Kerrie Paige; Jennifer Parks; James Scheulen; Scott Levin
Journal:  J Healthc Manag       Date:  2013 Mar-Apr

2.  Simulation-Based Evaluation of the Effects of Patient Load on Mental Workload of Healthcare Staff.

Authors:  Christina F Rusnock; Erich W Maxheimer; Kyle F Oyama; Vhance V Valencia
Journal:  Simul Healthc       Date:  2017-08       Impact factor: 1.929

3.  A Simulation-Based Failure Mode Analysis of SARS-CoV-2 Infection Control and Prevention in Emergency Departments.

Authors:  Reinis Balmaks; Alise Grāmatniece; Aija Vilde; Mārtiņš Ļuļļa; Uga Dumpis; Isabel Theresia Gross; Ieva Šlēziņa
Journal:  Simul Healthc       Date:  2021-12-01       Impact factor: 1.929

4.  Covid-19 transmission modelling of students returning home from university.

Authors:  Paul R Harper; Joshua W Moore; Thomas E Woolley
Journal:  Health Syst (Basingstoke)       Date:  2021-01-17

5.  Mobile health clinic model in the COVID-19 pandemic: lessons learned and opportunities for policy changes and innovation.

Authors:  Sharon Attipoe-Dorcoo; Rigoberto Delgado; Aditi Gupta; Jennifer Bennet; Nancy E Oriol; Sachin H Jain
Journal:  Int J Equity Health       Date:  2020-05-19

6.  Using clinical simulation to study how to improve quality and safety in healthcare.

Authors:  Guillaume Lamé; Mary Dixon-Woods
Journal:  BMJ Simul Technol Enhanc Learn       Date:  2018-09-29

7. (Review) The Diffusion of Discrete Event Simulation Approaches in Health Care Management in the Past Four Decades: A Comprehensive Review.

Authors:  Shiyong Liu; Yan Li; Konstantinos P Triantis; Hong Xue; Youfa Wang
Journal:  MDM Policy Pract       Date:  2020-06-06

8.  Piloting an integrated SARS-CoV-2 testing and data system for outbreak containment among college students: A prospective cohort study.

Authors:  Laura Packel; Arthur Reingold; Lauren Hunter; Shelley Facente; Yi Li; Anna Harte; Guy Nicolette; Fyodor D Urnov; Michael Lu; Maya Petersen
Journal:  PLoS One       Date:  2021-01-26       Impact factor: 3.240

9.  Containment of COVID-19: Simulating the impact of different policies and testing capacities for contact tracing, testing, and isolation.

Authors:  Vincenzo G Fiore; Nicholas DeFelice; Benjamin S Glicksberg; Ofer Perl; Anastasia Shuster; Kaustubh Kulkarni; Madeline O'Brien; M Andrea Pisauro; Dongil Chung; Xiaosi Gu
Journal:  PLoS One       Date:  2021-03-31       Impact factor: 3.240

10.  Mobile health clinics in the United States.

Authors:  Nelson C Malone; Mollie M Williams; Mary C Smith Fawzi; Jennifer Bennet; Caterina Hill; Jeffrey N Katz; Nancy E Oriol
Journal:  Int J Equity Health       Date:  2020-03-20
  5 in total

1.  Applying Discrete Event Simulation to Reduce Patient Wait Times and Crowding: The Case of a Specialist Outpatient Clinic with Dual Practice System.

Authors:  Weng Hong Fun; Ee Hong Tan; Ruzelan Khalid; Sondi Sararaks; Kar Foong Tang; Iqbal Ab Rahim; Shakirah Md Sharif; Suhana Jawahir; Raoul Muhammad Yusof Sibert; Mohd Kamal Mohd Nawawi
Journal:  Healthcare (Basel)       Date:  2022-01-19

2.  Heterogeneity in testing for infectious diseases.

Authors:  Christian Berrig; Viggo Andreasen; Bjarke Frost Nielsen
Journal:  R Soc Open Sci       Date:  2022-05-18       Impact factor: 3.653

3.  On an optimal testing strategy for workplace settings operating during the COVID-19 pandemic.

Authors:  X Hernandez; S Valentinotti
Journal:  PLoS One       Date:  2022-03-02       Impact factor: 3.240

4.  Acceptability of Community Saliva Testing in Controlling the COVID-19 Pandemic: Lessons Learned from Two Case Studies in Nursing Homes and Schools.

Authors:  Benoit Pétré; Marine Paridans; Nicolas Gillain; Eddy Husson; Anne-Françoise Donneau; Nadia Dardenne; Christophe Breuer; Fabienne Michel; Margaux Dandoy; Fabrice Bureau; Laurent Gillet; Dieudonné Leclercq; Michèle Guillaume
Journal:  Patient Prefer Adherence       Date:  2022-03-04       Impact factor: 2.711

5.  Using simulation modelling and systems science to help contain COVID-19: A systematic review.

Authors:  Weiwei Zhang; Shiyong Liu; Nathaniel Osgood; Hongli Zhu; Ying Qian; Peng Jia
Journal:  Syst Res Behav Sci       Date:  2022-08-19
