Amanda Venta, Veronica McLaren, Carla Sharp, Anna Abate, Madeleine Allman, Breana Cervantes, Sophie Kerr, Jessica Hernandez Ortiz, Eric Sumlin, Jesse Walker, Kiana Wall.
Abstract
The Child Attachment Interview (CAI) has demonstrated promise in youth, yet widespread use is thwarted by the need for interview transcription, face-to-face training, and reliability certification. The present study sought to examine the empirical basis for these barriers. Thirty-five archival CAIs were re-coded by: (1) expert coders (i.e., trained and reliable) without access to transcripts, (2) trained coders who had not completed reliability training, and (3) novice coders who had no formal training. Agreement with consensus classifications was computed with the expectation of moderate agreement. Results supported coding by experts without transcription of the interview. Near-moderate agreement preliminarily supported the use of trained coders who have not attempted reliability certification, with appropriate caveats. While moderate agreement was not achieved for novice raters, findings suggest that self-paced training options for the CAI may hold future promise. These contributions erode a number of significant barriers to the current use of the CAI.
Keywords: Child attachment interview; Internal working model; Kappa; Reliability; Training
Year: 2022 PMID: 35838815 PMCID: PMC9283821 DOI: 10.1007/s10578-022-01385-w
Source DB: PubMed Journal: Child Psychiatry Hum Dev ISSN: 0009-398X
Intraclass correlations across raters in experimental groups and the original interrater reliability set
| Subscale | Expert coders (certified, no transcripts): ICC | Interpretation | Trained coders (no certification): ICC | Interpretation | Self-taught coders (no certification): ICC | Interpretation | Control (two expert coders with transcripts): ICC | Interpretation |
|---|---|---|---|---|---|---|---|---|
| Emotion | 0.42 | Poor | 0.13 | Poor | 0.44 | Poor | 0.64 | Moderate |
| Balance | 0.45 | Poor | 0.43 | Poor | 0.54 | Moderate | 0.79 | Good |
| Examples | 0.54 | Moderate | 0.54 | Moderate | 0.49 | Poor | 0.73 | Moderate |
| Anger M | 0.76 | Good | 0.72 | Moderate | 0.70 | Moderate | 0.80 | Good |
| Anger P | 0.75 | Good | 0.74 | Moderate | 0.73 | Moderate | 0.93 | Excellent |
| Idealization M | 0.56 | Moderate | 0.35 | Poor | 0.31 | Poor | 0.75 | Good |
| Idealization P | 0.34 | Poor | 0.25 | Poor | 0.31 | Poor | 0.68 | Moderate |
| Dismissal M | 0.45 | Poor | 0.52 | Moderate | 0.51 | Moderate | 0.71 | Moderate |
| Dismissal P | 0.59 | Moderate | 0.55 | Moderate | 0.55 | Moderate | 0.88 | Good |
| Conflict | 0.56 | Moderate | 0.58 | Moderate | 0.01 | Poor | 0.68 | Moderate |
| Coherence | 0.55 | Moderate | 0.43 | Poor | 0.30 | Poor | 0.75 | Good |
| Average | 0.54 | Moderate | 0.48 | Poor | 0.45 | Poor | 0.76 | Good |
Notes. M = maternal, P = paternal. ICC = intraclass correlation, computed using a two-way random-effects model; single-measures values are reported. Interpretation follows the guidelines of [25]: ICC < 0.50 = poor reliability, 0.50–0.75 = moderate reliability, 0.75–0.90 = good reliability, and ICC > 0.90 = excellent reliability.
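The two-way random-effects, single-measures ICC and the interpretation cut-offs described in the notes can be sketched as follows. This is an illustrative implementation, not the authors' code: the function names are mine, and the handling of the 0.75 boundary (counted as "good," consistent with the table's entries for ICC = 0.75) is an assumption.

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, single measures.

    `ratings` is an (n_targets, k_raters) matrix of scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-target means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between targets
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = (np.sum((ratings - grand) ** 2)
           - k * np.sum((row_means - grand) ** 2)
           - n * np.sum((col_means - grand) ** 2))
    mse = sse / ((n - 1) * (k - 1))                        # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def interpret(icc: float) -> str:
    """Label an ICC using the cut-offs from [25] quoted in the table notes."""
    if icc < 0.5:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc <= 0.9:
        return "good"
    return "excellent"
```

As a sanity check, a ratings matrix in which every rater gives identical scores yields an ICC of 1.0, and `interpret(0.54)` returns `"moderate"`, matching rows such as Examples in the expert-coder column.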