| Literature DB >> 24740236 |
Vincent A Fusaro, Jena Daniels, Marlena Duda, Todd F DeLuca, Olivia D'Angelo, Jenna Tamburello, James Maniscalco, Dennis P Wall.
Abstract
Autism is on the rise, with 1 in 88 children receiving a diagnosis in the United States, yet the diagnostic process remains cumbersome and time-consuming. Research has shown that home videos of children can help increase the accuracy of diagnosis; however, the use of videos in the diagnostic process is uncommon. In the present study, we assessed the feasibility of applying a gold-standard diagnostic instrument to brief, unstructured home videos and tested whether video analysis can enable more rapid detection of the core features of autism outside of clinical environments. We collected 100 public YouTube videos of children ages 1-15 with either a self-reported diagnosis of an ASD (N = 55) or not (N = 45). Four non-clinical raters independently scored all videos using one of the most widely adopted tools for behavioral diagnosis of autism, the Autism Diagnostic Observation Schedule-Generic (ADOS). The classification accuracy was 96.8%, with 94.1% sensitivity and 100% specificity; the inter-rater correlation for the behavioral domains on the ADOS was 0.88; and the diagnoses matched a trained clinician in all but 3 of 22 randomly selected video cases. Despite the diversity of videos and non-clinical raters, our results indicate that it is possible to achieve high classification accuracy, sensitivity, and specificity as well as clinically acceptable inter-rater reliability with non-clinical personnel. Our results also demonstrate the potential for video-based detection of autism in short, unstructured home videos and further suggest that at least a portion of the effort associated with the detection and monitoring of autism may be mobilized and moved outside of traditional clinical environments.
Year: 2014 PMID: 24740236 PMCID: PMC3989176 DOI: 10.1371/journal.pone.0093533
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Characteristics of ASD and Non-ASD videos on YouTube.
| Characteristic | ASD (n = 55) | Non-ASD (n = 45) |
| Gender (male/female) | 37/18 | 23/22 |
| Age (years), mean (range) | 4.35 (1.5–15) | 2.89 (0.92–6) |
| Race | | |
| White, % | 89.1 | 82.2 |
| Black, % | 1.8 | 3.6 |
| Other, % | 9.0 | 13.3 |
| Video length (min:s), mean (range) | 3:19 (0:31–9:16) | 2:37 (0:36–6:09) |
| YouTube views, mean (range) | 17,661 (17–96,469) | 4,107,244 (146–102,871,735) |
| Date posted on YouTube, range | 03/21/07–06/19/12 | 06/25/06–11/04/12 |
| Appearance of other people, % | | |
| One person | 20.0 | 33.3 |
| Two or more people | 18.2 | 17.8 |
| Interacting with adult, % | 96.4 | 95.6 |
| Interacting with peer, % | 16.4 | 17.8 |
| Types of videos, % | | |
| Exhibiting a talent | 5.5 | 33.3 |
| Having a conversation | 60.0 | 80.0 |
| Playing | 49.0 | 42.2 |
| Inside | 85.5 | 77.8 |
| Outside | 9.0 | 13.3 |
| In a car | 0.0 | 6.7 |
| Party or birthday | 1.8 | 8.9 |
| Eating | 10.9 | 22.2 |
If a characteristic was not explicitly stated in the metadata associated with the video, its value was estimated based on the consensus of two or more investigators.
Number of YouTube views as of 12/10/13.
Number of additional people in the video excluding the child and person recording the video.
Videos were broadly categorized, after initial identification, to illustrate the diversity of videos evaluated.
Video Scoring Performance.
| Characteristic | Total (n = 100) |
| Accuracy, % | 96.8 |
| Sensitivity, % | 94.1 |
| Specificity, % | 100 |
| No call (ASD/non-ASD) | 4/1 |
| Inter-rater classification agreement (ASD vs. non-ASD), mean % (range) | 92.2 (89–95) |
| Intraclass correlation across items | |
| Communication | 0.84 |
| Social | 0.83 |
| Total | 0.88 |
| Rater item-level agreement, mean % (range) | 73.3 (58.1–95.3) |
| Rater vs. Expert item-level agreement, mean % (range) | 71.3 (37.5–100) |
We required majority rater agreement (3 out of 4) for classification.
Item-level agreement shown is the mean and range of 29 item agreements across four non-clinical raters.
Clinical evaluation compared to four non-clinical raters for item-level agreement. All 29 items across a subset of 22 videos were considered.
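The summary statistics above follow directly from confusion-matrix counts. A minimal sketch of that arithmetic; the counts themselves are not stated in the table and are inferred here from the reported percentages and no-call numbers (55 ASD videos minus 4 no-calls leaves 51 scorable, of which 94.1% sensitivity implies 48 correct calls; 45 non-ASD videos minus 1 no-call leaves 44, all classified correctly):

```python
# Counts inferred from the reported statistics, not stated explicitly in the table:
# 55 ASD videos, 4 no-calls -> 51 scorable; 94.1% sensitivity -> 48 true positives.
# 45 non-ASD videos, 1 no-call -> 44 scorable; 100% specificity -> 44 true negatives.
tp, fn = 48, 3   # ASD videos classified ASD / classified non-ASD
tn, fp = 44, 0   # non-ASD videos classified non-ASD / classified ASD

sensitivity = tp / (tp + fn)                 # 48/51
specificity = tn / (tn + fp)                 # 44/44
accuracy = (tp + tn) / (tp + fn + tn + fp)   # 92/95

print(f"accuracy={accuracy:.1%} sensitivity={sensitivity:.1%} specificity={specificity:.1%}")
```

These counts reproduce the table's 96.8% accuracy, 94.1% sensitivity, and 100% specificity exactly, which is how the ASD n = 55 / non-ASD n = 45 split can be checked for internal consistency.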
Scorability of each behavior across the ASD and non-ASD video collections.
| | ASD | | Non-ASD | |
| Behavior (ASD Code) | Code | Fraction N/A | Code | Fraction N/A |
| Anxiety (E3) | E3 | 0 | D3 | 0 |
| Facial Expressions (B3) | B3 | 0.0045 | E2 | 0 |
| Gaze to Initiate Interaction (B4) | B4 | 0.0045 | E3 | 0 |
| Repetitive Interests (D4) | D4 | 0.0045 | A3 | 0.0056 |
| Overactivity (E1) | E1 | 0.0045 | D1 | 0.0056 |
| Tantrums/Aggression (E2) | E2 | 0.0045 | D4 | 0.0056 |
| Spontaneous Expressive Language (A2) | A2 | 0.0091 | E1 | 0.0056 |
| Social Overtures (B12) | B12 | 0.0091 | A1 | 0.01 |
| Complex Mannerisms (D2) | D2 | 0.0091 | A5 | 0.01 |
| Gestures (A8) | A8 | 0.0182 | D2 | 0.01 |
| Self-Injurious Behavior (D3) | D3 | 0.023 | B12 | 0.017 |
| Eye Contact (B1) | B1 | 0.023 | A4 | 0.02 |
| Sensory Interest (D1) | D1 | 0.023 | A8 | 0.028 |
| Shared Enjoyment (B5) | B5 | 0.045 | B1 | 0.028 |
| Intonation (A3) | A3 | 0.086 | B4 | 0.028 |
| Responsive Smile (B2) | B2 | 0.1 | A2 | 0.033 |
| Idiosyncratic Use of Words (A5) | A5 | 0.12 | B5 | 0.039 |
| Pointing (A7) | A7 | 0.14 | B3 | 0.044 |
| Initiation of Joint Attention (B10) | B10 | 0.24 | A6 | 0.061 |
| Response to Name (B6) | B6 | 0.24 | B2 | 0.14 |
| Requesting (B7) | B7 | 0.27 | B7 | 0.17 |
| Showing (B9) | B9 | 0.29 | B6 | 0.22 |
| Spontaneous Expressive Language (A1) | A1 | 0.3 | A7 | 0.3 |
| Functional Play (C1) | C1 | 0.35 | B11 | 0.34 |
| Giving (B8) | B8 | 0.38 | B10 | 0.35 |
| Use of Other’s Body to Communicate (A6) | A6 | 0.44 | B9 | 0.38 |
| Echolalia (A4) | A4 | 0.49 | C1 | 0.57 |
| Imagination/Creativity (C2) | C2 | 0.60 | B8 | 0.65 |
| Social Overtures (B11) | B11 | 0.62 | C2 | 0.79 |
The fraction of N/A (not applicable) answer codes per item, taken over all videos and raters, is listed in ascending order for both video collections; the lowest values correspond to the most readily scored behaviors. A large majority of items were readily detectable and resulted in only a small fraction of N/As.
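The fractions in the table above can be reproduced from raw rater codes by counting, per item, how often any rater marked a video N/A. A minimal sketch with a hypothetical score matrix (the per-video rater scores are not part of this record; the item codes and values below are illustrative only):

```python
# Hypothetical data: scores[item] is a list of videos, each holding one code per
# rater ("N/A" or an ADOS score). The real score matrix is not in this record.
scores = {
    "B1": [["0", "1", "N/A", "1"], ["2", "2", "2", "1"]],
    "C2": [["N/A", "N/A", "1", "N/A"], ["0", "2", "N/A", "N/A"]],
}

def na_fraction(item_scores):
    """Fraction of N/A codes over all (video, rater) pairs for one item."""
    codes = [code for video in item_scores for code in video]
    return codes.count("N/A") / len(codes)

fractions = {item: na_fraction(videos) for item, videos in scores.items()}
# Rank items from most to least readily scored (lowest N/A fraction first),
# matching the ordering used in the table.
ranked = sorted(fractions, key=fractions.get)
```

With the illustrative data, eye contact (B1) yields 1 N/A out of 8 scorings (0.125) while imagination/creativity (C2) yields 5 of 8 (0.625), so B1 ranks as more readily scorable, mirroring the pattern in the real collections.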
Figure 1. Inter-rater variability among cases previously diagnosed with autism and controls with no known diagnosis of autism.
Although qualitative judgments shifted among the four independent raters, these shifts did not significantly affect agreement among the reviewers: both the inter-rater classification agreement (>90%) and the classification accuracy (>95%) were high in this analysis.