| Literature DB >> 31112445 |
Lee Branum-Martin, Katherine T Rhodes, Congying Sun, Julie A Washington, Mi-Young Webb.
Abstract
Purpose: Many language tests use different versions that are not statistically linked or do not have a developmental scaled score. The current article illustrates the problems of scores that are not linked or equated, followed by a statistical model to derive a developmental scaled score.
Method: Using an accelerated cohort design of 890 students in Grades 1-5, a confirmatory factor model was fit to 6 subtests of the Test of Language Development-Primary and Intermediate: Fourth Edition (Hammill & Newcomer, 2008a, 2008b). The model allowed for linking the subtests to a general factor of language and equating their measurement characteristics across grades and cohorts of children. A sequence of models was fit to evaluate the appropriateness of the linking assumptions.
Results: The models fit well, with reasonable support for the validity of the tests to measure a general factor of language on a longitudinally consistent scale.
Conclusion: Although total and standard scores were problematic for longitudinal relations, the results of the model suggest that language grows in a relatively linear manner among these children, regardless of which set of subtests they received. Researchers and clinicians interested in longitudinal inferences are advised to design research or choose tests that can provide a developmental scaled score.
Year: 2019 PMID: 31112445 PMCID: PMC6808375 DOI: 10.1044/2019_JSLHR-L-18-0362
Source DB: PubMed Journal: J Speech Lang Hear Res ISSN: 1092-4388 Impact factor: 2.297