| Literature DB >> 36060541 |
Andrew J Goodwin1,2, Danny Eytan1,3, William Dixon1, Sebastian D Goodfellow1,4, Zakary Doherty5, Robert W Greer1, Alistair McEwan2, Mark Tracy6,7, Peter C Laussen8, Azadeh Assadi1,9, Mjaye Mazwi1,10.
Abstract
A firm concept of time is essential for establishing causality in a clinical setting. Review of critical incidents and generation of study hypotheses require a robust understanding of the sequence of events but conducting such work can be problematic when timestamps are recorded by independent and unsynchronized clocks. Most clinical models implicitly assume that timestamps have been measured accurately and precisely, but this custom will need to be re-evaluated if our algorithms and models are to make meaningful use of higher frequency physiological data sources. In this narrative review we explore factors that can result in timestamps being erroneously recorded in a clinical setting, with particular focus on systems that may be present in a critical care unit. We discuss how clocks, medical devices, data storage systems, algorithmic effects, human factors, and other external systems may affect the accuracy and precision of recorded timestamps. The concept of temporal uncertainty is introduced, and a holistic approach to timing accuracy, precision, and uncertainty is proposed. This quantitative approach to modeling temporal uncertainty provides a basis to achieve enhanced model generalizability and improved analytical outcomes.
Keywords: ICU; clinical; clocks; errors; medicine; metrology; time; uncertainty
Year: 2022 PMID: 36060541 PMCID: PMC9433547 DOI: 10.3389/fdgth.2022.932599
Source DB: PubMed Journal: Front Digit Health ISSN: 2673-253X
Figure 1. Times recorded in clinical databases may not represent the true time the event occurred. The precision and accuracy of a recorded time may be affected by several different factors.
Figure 2. Approximate accuracy and resolution of different time sources in the ICU. Positions of the ellipses are determined by the summation of factors discussed in the literature cited in Sections 1–4. The position of a device shown in this figure indicates its approximate precision and accuracy before any of the timekeeping improvements described in Section 6 have been implemented.
Figure 3. Conceptual illustration of heteroscedastic uncertainties associated with a time series of temperature measurements. This figure illustrates concepts discussed in the literature cited in Sections 4.3, 7.1, and 7.2. Two different thermometers were used to measure temperatures during a hypothetical experiment, a mercury thermometer with a resolution of 1°C and a digital thermometer with a resolution of 0.1°C; times for each temperature were recorded using two different timepieces, a wall clock with a temporal resolution of 1 min and a wristwatch with a temporal resolution of 1 s. Four observations (labeled a, b, c, and d) are made using different combinations of these clocks and thermometers. Panel (A) shows a plot of the numerical values displayed on the instruments when making the observation, while Panel (B) uses shaded regions to represent the range of possible true values that could have resulted in these values. The shape of the shaded regions in Panel (B) is determined by the temporal and thermal resolution of the instruments used to make each individual observation. For simplicity, other sources of uncertainty are not considered in this figure.
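The core idea behind the shaded regions in Figure 3 can be sketched in a few lines: a reading from an instrument with finite resolution corresponds not to a point but to an interval of possible true values. The sketch below is a minimal illustration of this idea; the helper name and the example numbers are hypothetical and assume the instrument rounds to the nearest resolution step (an instrument that truncates would shift the interval).

```python
def true_value_interval(displayed, resolution):
    """Range of true values consistent with a displayed reading.

    A display that rounds to its resolution maps any true value within
    +/- half a resolution step onto the same displayed number.
    """
    half = resolution / 2.0
    return (displayed - half, displayed + half)

# Hypothetical observation: a wall clock with 1 min resolution reads
# 605 min past midnight, and a mercury thermometer with 1 degree C
# resolution reads 37 degrees C.
t_lo, t_hi = true_value_interval(605.0, 1.0)      # minutes
temp_lo, temp_hi = true_value_interval(37.0, 1.0)  # degrees C
# The true observation lies somewhere in the rectangle
# [t_lo, t_hi] x [temp_lo, temp_hi] -- one shaded region of Panel (B).
```

Observations made with the higher-resolution instruments yield smaller rectangles, which is the source of the heteroscedasticity the figure describes.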
Sources of timing errors and solutions.
| Source of timing error | Section | Solution | Section |
|---|---|---|---|
| Drifting clocks | 2.1 | Clock synchronization, modeling clock drift | 6.1 |
| Delays due to digital filters | 3.2 | Audit algorithmic delays | 6.1 |
| Lag of variables derived from waveforms | 3.3, 3.1 | Reverse engineer averaging windows | 6.2 |
| Imprecisely defined sample frequencies | 2.4 | Accurately measure all sample frequencies | 2.3, 2.4 |
| Lack of standard definitions | 5.4 | Establish standard nomenclature | 5.4 |
| Changes in data collection systems | 6.3 | Establish “epochs” of data uncertainty | 7.2 |
| Transcription errors | 4.5 | Implement temporal logic checks | 6.5 |
| Limitations of digital data types | 6.6 | Select appropriate temporal data types | 6.6 |
| Uncertainty due to rounding of times | 4.3 | Monte Carlo analysis | 7.1, 7.3 |
| Digit preferencing | 4.4 | De-convolve mixed precision datasets | 7.2 |
| Access to multiple, inaccurate sources of time | 4.1 | Clock synchronization, use of a “master clock” | 6.1 |
| Fallible human perception of elapsed time | 4.2 | Estimate extent of possible errors | 4.2 |
| Software bugs | 5.5 | Audit timestamps | 6.5 |
| Quantization of recorded times | 7.1 | Audit resolution of all recorded times | 7.1 |
| Unsynchronized signals | 6.2 | Real-time or retrospective synchronization | 6.2 |
| Event time not recorded | N/A | Algorithmically determine event time | 6.4 |
The solutions generally aim to identify, model, and correct epistemic temporal uncertainties, and to represent any remaining aleatoric temporal uncertainty as a probability density function.
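One of the tabulated solutions, Monte Carlo analysis of rounding uncertainty, can be sketched directly: replace each recorded timestamp with a draw from the interval of true times consistent with it, and re-evaluate the statistic of interest. The function name, the 60 s charting resolution, and the example timestamps below are hypothetical, not taken from the paper.

```python
import random

def monte_carlo_times(recorded_s, resolution_s, f, n=10_000, seed=0):
    """Propagate temporal rounding uncertainty through a statistic f.

    Each recorded timestamp (in seconds) is replaced by a uniform draw
    from the interval of true times consistent with it, and f is
    re-evaluated; the list of sampled outcomes approximates the
    distribution of f under the rounding uncertainty.
    """
    rng = random.Random(seed)
    half = resolution_s / 2.0
    results = []
    for _ in range(n):
        sample = [t + rng.uniform(-half, half) for t in recorded_s]
        results.append(f(sample))
    return results

# Hypothetical example: the interval between two events charted to the
# nearest minute (60 s resolution), recorded as 10 min and 13 min.
recorded = [600.0, 780.0]
deltas = monte_carlo_times(recorded, 60.0, lambda ts: ts[1] - ts[0])
# The nominal interval is 180 s, but the sampled intervals spread
# across roughly 120-240 s, quantifying the residual uncertainty as a
# distribution rather than a single value.
```

Representing the remaining uncertainty as the empirical distribution of `deltas` matches the aim stated above: epistemic errors are corrected where possible, and what remains is expressed as a probability density.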
Figure 4. Conceptual illustration of epistemic and aleatoric uncertainties resulting from clock drift. This figure illustrates concepts discussed in the literature cited in Sections 2.2 and 7.1. A series of measurements of time are made using a hypothetical clock that is synchronized with a more accurate timepiece once every 300 seconds. Panel (A) shows timing errors caused by a drift rate measured to one significant figure (3 ppm), while Panel (B) shows timing errors caused by a drift rate measured to two significant figures (3.2 ppm). Shaded regions in Panels (C,D) show the accuracy and precision of temporal measurements assuming these two drift rates 50 and 170 s after synchronization (labeled t1 and t2, respectively). Note that imprecisely specified drift rates result in aleatoric uncertainties that increase as a function of time, and that the magnitude of the uncertainty shrinks by roughly an order of magnitude for each additional significant figure used to specify the drift rate.
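The relationship described in Figure 4 can be made concrete with a short sketch: a drift rate reported to a limited number of significant figures could be anywhere within half a unit in its last digit, and the resulting timing uncertainty grows linearly with time since the last synchronization. The function name and interface below are hypothetical; the 170 s elapsed time and the 3 ppm vs. 3.2 ppm rates follow the figure's example.

```python
import math

def drift_error_bounds(elapsed_s, rate_ppm, sig_figs):
    """Bounds on clock error `elapsed_s` seconds after synchronization.

    A drift rate reported to `sig_figs` significant figures is
    consistent with any true rate within +/- half a unit in its last
    digit (e.g. "3 ppm" covers 2.5-3.5 ppm); the resulting timing
    uncertainty grows linearly with time since the last sync.
    """
    exponent = math.floor(math.log10(abs(rate_ppm))) - (sig_figs - 1)
    half_step = 0.5 * 10.0 ** exponent  # half a unit in the last digit
    nominal = elapsed_s * rate_ppm * 1e-6
    spread = elapsed_s * half_step * 1e-6
    return (nominal - spread, nominal + spread)

# At t2 = 170 s after synchronization:
lo1, hi1 = drift_error_bounds(170.0, 3.0, 1)   # rate quoted as "3 ppm"
lo2, hi2 = drift_error_bounds(170.0, 3.2, 2)   # rate quoted as "3.2 ppm"
# The one-significant-figure rate leaves a band of possible clock
# errors ten times wider than the two-figure rate.
```

This ten-fold narrowing per additional significant figure is why the figure's Panel (B) shows markedly tighter shaded regions than Panel (A).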