Peter Washington, Brianna Chrisman, Emilie Leblanc, Kaitlyn Dunlap, Aaron Kline, Cezmi Mutlu, Nate Stockham, Kelley Paskov, Dennis Paul Wall.
Abstract
Artificial Intelligence (A.I.) solutions are increasingly considered for telemedicine. For these methods to serve children and their families in home settings, it is crucial to ensure the privacy of the child and parent or caregiver. To address this challenge, we explore the potential for global image transformations to provide privacy while preserving the quality of behavioral annotations. Crowd workers have previously been shown to reliably annotate behavioral features in unstructured home videos, allowing machine learning classifiers to detect autism using the annotations as input. We evaluate this method with videos altered via pixelation, dense optical flow, and Gaussian blurring. On a balanced test set of 30 videos of children with autism and 30 neurotypical controls, we find that the visual privacy alterations do not drastically alter any individual behavioral annotation at the item level. The AUROC on the evaluation set was 90.0% ±7.5% for unaltered videos, 85.0% ±9.0% for pixelation, 85.0% ±9.0% for optical flow, and 83.3% ±9.3% for blurring, demonstrating that an aggregation of small changes across behavioral questions can collectively result in increased misdiagnosis rates. We also compare crowd answers against clinicians who provided the same annotations for the same videos as crowd workers, and we find that clinicians have higher sensitivity in their recognition of autism-related symptoms. We also find that there is a linear correlation (r = 0.75, p < 0.0001) between the mean Clinical Global Impression (CGI) score provided by professional clinicians and the corresponding score emitted by a previously validated autism classifier with crowd inputs, indicating that the classifier's output probability is a reliable estimate of the clinical impression of autism. 
A significant correlation is maintained with privacy alterations, indicating that crowd annotations can approximate clinician-provided autism impressions from home videos in a privacy-preserved manner.
Year: 2022 PMID: 35634270 PMCID: PMC9139408 DOI: 10.1016/j.ibmed.2022.100056
Source DB: PubMed Journal: Intell Based Med ISSN: 2666-5212
Fig. 1. Six intensities of pixelation used in the study. The bottom-left image (highlighted in green) depicts the intensity level used for the primary portion of the study. The other intensities are used in a secondary analysis comparing the effect of pixelation intensity on annotation quality.
Fig. 2. Dense optical flow was evaluated as a drastic privacy alteration, as depicted here; the original frame is on the left.
Fig. 3. Six intensities of Gaussian blurring used in the study. The bottom-left image (highlighted in green) depicts the intensity level used for the primary portion of the study. The other intensities are used in a secondary analysis comparing the effect of blurring intensity on annotation quality.
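The pixelation and Gaussian blurring alterations described above can be sketched with plain numpy, for a single grayscale frame. This is an illustrative re-implementation under assumptions: the function names, the block-average resampling for pixelation, and the sigma heuristic for blurring are mine, not the authors'; the dense optical flow condition (typically computed with an algorithm such as Farneback's) is omitted.

```python
import numpy as np

def pixelate(frame: np.ndarray, intermediate: int = 48) -> np.ndarray:
    """Pixelate a grayscale frame: block-average down toward an
    `intermediate` x `intermediate` grid, then upsample by
    nearest-neighbour repetition. (Illustrative; the paper's exact
    resampling method may differ.)"""
    h, w = frame.shape
    bh, bw = max(h // intermediate, 1), max(w // intermediate, 1)
    # Crop so the frame divides evenly into blocks.
    frame = frame[: (h // bh) * bh, : (w // bw) * bw]
    small = frame.reshape(frame.shape[0] // bh, bh,
                          frame.shape[1] // bw, bw).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, bh, axis=0), bw, axis=1)

def gaussian_blur(frame: np.ndarray, kernel_size: int) -> np.ndarray:
    """Separable Gaussian blur with a kernel sized relative to the
    frame (e.g. width // 6 for a '1/6th' condition)."""
    x = np.arange(kernel_size) - (kernel_size - 1) / 2
    sigma = max(kernel_size / 6.0, 1e-6)  # heuristic sigma; an assumption
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    # Convolve rows, then columns (separability of the Gaussian kernel).
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, frame)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
```

In practice these transforms would be applied frame by frame before videos are shown to crowd annotators, so identifying detail is degraded while gross motion and posture remain visible.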
Fig. 4. There is a clear linear correlation (r = 0.75, p < 0.001) between the mean Clinical Global Impression (CGI) score provided by professional clinicians for each video and the corresponding classifier score emitted by the logistic regression classifier with crowd inputs.
Fig. 5. The linear correlation between the mean Clinical Global Impression (CGI) score provided by professional clinicians for each video and the corresponding classifier score emitted by the logistic regression classifier with crowd inputs is maintained under the privacy-preserving video modifications. The correlation is weaker for Gaussian blurring (r = 0.64, p = 0.001) than for dense optical flow and pixelation (r = 0.71, p = 0.0002 for both).
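The Pearson correlation quoted in these captions can be reproduced with a few lines of numpy. The helper below is a generic sketch of the coefficient's definition, not the authors' analysis code; the variable names are illustrative.

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson correlation coefficient between two score vectors,
    e.g. per-video mean clinician CGI scores vs. the classifier's
    output probabilities."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))
```

A value near +1 indicates that the classifier's probability rises almost linearly with the clinicians' mean impression, which is the claim Figs. 4 and 5 illustrate.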
The effect of increasing blurring intensity on mean performance, evaluated on a testing set separate from the primary evaluation set.
| Blurring Kernel Size (Relative to Original Frame) | Mean Probability of the Correct Class | Mean Accuracy (%) | Mean Precision (%) | Mean Recall (%) | Mean Specificity (%) | Mean AUROC (%) | Mean AUPRC (%) |
|---|---|---|---|---|---|---|---|
| 1/6th | 0.766 ± 0.253 | 92.9 | 87.5 | 100.0 | 85.7 | 73.5 | 70.8 |
| 1/5th | 0.768 ± 0.238 | 78.6 | 75.0 | 85.7 | 71.4 | 77.6 | 83.3 |
| 1/4th | 0.842 ± 0.151 | 85.7 | 85.7 | 85.7 | 85.7 | 63.3 | 77.5 |
| 1/3rd | 0.670 ± 0.313 | 78.6 | 83.3 | 71.4 | 85.7 | 44.9 | 60.1 |
| 1/2 | 0.740 ± 0.261 | 92.9 | 87.5 | 100.0 | 85.7 | 63.3 | 73.3 |
| Full Image | 0.690 ± 0.316 | 92.9 | 87.5 | 100.0 | 85.7 | 79.6 | 83.7 |
The effect of increasing pixelation intensity on mean performance, evaluated on a testing set separate from the primary evaluation set.
| Pixelation Intermediate Frame Size (px) | Mean Probability of the Correct Class | Mean Accuracy (%) | Mean Precision (%) | Mean Recall (%) | Mean Specificity (%) | Mean AUROC (%) | Mean AUPRC (%) |
|---|---|---|---|---|---|---|---|
| 96 × 96 | 0.809 ± 0.190 | 90.9 | 100.0 | 85.7 | 100.0 | 42.9 | 70.1 |
| 64 × 64 | 0.741 ± 0.230 | 92.3 | 100.0 | 85.7 | 100.0 | 52.4 | 66.9 |
| 48 × 48 | 0.788 ± 0.216 | 91.7 | 100.0 | 85.7 | 100.0 | 65.7 | 79.8 |
| 32 × 32 | 0.781 ± 0.210 | 75.0 | 83.3 | 71.4 | 80.0 | 45.7 | 67.2 |
| 16 × 16 | 0.814 ± 0.179 | 83.3 | 85.7 | 85.7 | 80.0 | 71.4 | 85.1 |
| 8 × 8 | 0.802 ± 0.189 | 75.5 | 83.3 | 71.4 | 80.0 | 42.9 | 66.1 |
The mean absolute deviation of each privacy condition's answers from the baseline-condition answers for the behaviors used as inputs to the autism classifier. This deviation measures each privacy condition's effect on annotation quality.
| Behavior | Mean Deviation for Pixelation | Mean Deviation for Dense Optical Flow | Mean Deviation for Gaussian Blurring |
|---|---|---|---|
| Abnormal Speech | 0.28 | 0.32 | 0.29 |
| Echolalia | 0.39 | 0.45 | 0.47 |
| Repetitive or Odd Language | 0.25 | 0.30 | 0.26 |
| Expressive Language and Conversation | 0.29 | 0.41 | 0.33 |
| Eye Contact | 0.29 | 0.37 | 0.33 |
| Facial Expressiveness | 0.25 | 0.32 | 0.29 |
| Social Interaction Initiation | 0.26 | 0.30 | 0.31 |
| Shares Excitement | 0.34 | 0.32 | 0.37 |
| Aggressive Behavior | 0.09 | 0.12 | 0.12 |
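The deviation metric in the table above, per behavioral item, can be sketched as follows. This is a hypothetical helper written from the metric's description; the actual annotation scales and aggregation in the study are not reproduced here.

```python
import numpy as np

def mean_abs_deviation(baseline, altered) -> float:
    """Mean absolute deviation of privacy-altered annotation scores
    from the baseline (unaltered-video) answers for one behavioral
    item, averaged across videos."""
    b = np.asarray(baseline, dtype=float)
    a = np.asarray(altered, dtype=float)
    return float(np.mean(np.abs(a - b)))
```

Small values (e.g. 0.09 for Aggressive Behavior under pixelation) indicate the privacy alteration barely changed crowd answers for that item, while larger values (e.g. 0.47 for Echolalia under blurring) flag items that are more sensitive to visual degradation.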