Literature DB >> 29483786

Reliability and validity of the revised Gibson Test of Cognitive Skills, a computer-based test battery for assessing cognition across the lifespan.

Amy Lawson Moore1, Terissa M Miller1.   

Abstract

PURPOSE: The purpose of the current study is to evaluate the validity and reliability of the revised Gibson Test of Cognitive Skills, a computer-based battery of tests measuring short-term memory, long-term memory, processing speed, logic and reasoning, visual processing, auditory processing, and word attack skills.
METHODS: This study included 2,737 participants aged 5-85 years. A series of studies was conducted to examine the validity and reliability using the test performance of the entire norming group and several subgroups. The evaluation of the technical properties of the test battery included content validation by subject matter experts, item analysis and coefficient alpha, test-retest reliability, split-half reliability, and analysis of concurrent validity with the Woodcock Johnson III Tests of Cognitive Abilities and Tests of Achievement.
RESULTS: Results indicated strong sources of evidence of validity and reliability for the test, including internal consistency reliability coefficients ranging from 0.87 to 0.98, test-retest reliability coefficients ranging from 0.69 to 0.91, split-half reliability coefficients ranging from 0.87 to 0.91, and concurrent validity coefficients ranging from 0.53 to 0.93.
CONCLUSION: The Gibson Test of Cognitive Skills-2 is a reliable and valid tool for assessing cognition in the general population across the lifespan.

Keywords:  auditory processing; cognitive skills; memory; processing speed; testing; visual processing

Year:  2018        PMID: 29483786      PMCID: PMC5813948          DOI: 10.2147/PRBM.S152781

Source DB:  PubMed          Journal:  Psychol Res Behav Manag        ISSN: 1179-1578


Introduction

From the ease of administration and reduction in scoring errors to cost savings and engaging user interfaces, computer-based testing has increased in popularity over recent years for various compelling reasons. Although there has been a recent push to utilize digital neurocognitive screening measures to assess postconcussion and age-related cognitive decline,1,2 the primary uses of existing computer-based measures appear to be focused on either clinical diagnostics or academic skill evaluation. An interesting gap in the literature on computer-based assessment, however, is the dearth of studies evaluating individual cognitive skills for the purpose of researching therapeutic cognitive interventions or cognitive skills remediation. Given the association of cognitive skills with academic achievement,3 sports performance,4 career success,5 and psychopathology-related deficits,6 it is reasonable to suggest the need for an affordable, easy-to-administer test that identifies deficits in cognitive skills in order to recommend an intervention for addressing those deficits. Traditional neuropsychological testing is lengthy and costly and requires advanced training in assessment to score and utilize the results. In clinical diagnostics, cognitive skills are assessed as part of traditional intelligence or neuropsychological testing, typically to rule out intellectual disability as a primary diagnosis. However, there are additional critical uses for cognitive testing as well. As clinicians or researchers, if we adopt the perspective that cognitive skills underlie learning and behavior, we necessarily must seek an efficient, affordable, yet psychometrically sound method of evaluating cognitive skills in order to suggest existing interventions or to examine new methods of remediating cognitive deficits. Despite the growing availability of commercially available cognitive tests, there are notable gaps in the field. 
For example, digital cognitive tests typically do not include measures of auditory processing skills. School-based screening of these early reading-related skills dominates the digital achievement testing marketplace, but such measures are not traditionally found in digital cognitive tests. Not only do auditory processing skills serve as the foundation for reading ability and language development in childhood,7 they also impact receptive and written language functioning, where deficits are associated with higher rates of unemployment and lower income,8 and they influence the trajectory of lifespan decline in auditory perceptual abilities frequently misattributed to age-related hearing loss.9 Auditory processing is a key component of the Cattell–Horn–Carroll (CHC) model of intelligence,10 which serves as the theoretical grounding for most major intelligence tests. As such, it would be a valuable measure on digital cognitive test batteries. Furthermore, a cross-battery approach to assessment aligns with contemporary testing practice by approximating a measurement of multiple constructs more succinctly and efficiently than through the use of separate cognitive and achievement batteries.11 Another notable shortcoming of traditional cognitive tests is the use of numerical stimuli to assess memory in children with learning struggles, given the reliance on numerical processing to perform standard digit span memory tasks.12 Instead, digital cognitive tests should offer non-numerical stimuli when assessing children to ensure that a true measure of memory is captured. The current study addresses these gaps in the literature on digital cognitive testing. The Gibson Test of Cognitive Skills – Version 213 is a computer-based screening tool that evaluates performance on tasks that measure 1) short-term memory, 2) long-term memory, 3) processing speed, 4) auditory processing, 5) visual processing, 6) logic and reasoning, and 7) word attack skills. 
The 45-minute assessment includes nine different tasks organized as puzzles and games. The development of the new Gibson Test of Cognitive Skills fills a gap in the existing testing market by offering a digital battery that measures seven broad constructs outlined by CHC theory, including three narrow abilities in auditory processing and a basic reading skill, word attack. The inclusion of auditory processing subtests is a unique and critical contribution to the digital assessment market. To the best of our knowledge, it is the only digital cognitive test that measures auditory processing and basic reading skills in addition to five other broad cognitive constructs.14 The Gibson Test also addresses the inadequacy of using numerical stimuli to assess memory in children by using a variety of visual and auditory stimuli to measure short-term memory span and delayed retrieval of meaningful associations. With the exception of the US military's ANAM test,15 the Gibson Test has the largest normative database among major commercially available digital cognitive tests and the largest among those tests that include children. Table 1 compares the Gibson Test with other commercially available digital cognitive tests in the measurement of cognitive constructs and norming sample size.
Table 1

Comparison of Gibson Test and other major digital cognitive tests

Digital cognitive test | Constructs measured | Norming sample | Norming group ages (years)
Gibson Test of Cognitive Skills-V213 | STM, LTM, VP, PS, LR, AP, WA | 2,737 | 5–85
NeuroTrax16 | 5 of the 7 constructs | 1,569 | 8–120
MicroCog17 | 4 of the 7 constructs | 810 | 18–89
ImPACT18 | 2 of the 7 constructs | 931 | 13 to college
CNS Vital Signs19 | 4 of the 7 constructs | 1,069 | 7–90
CANS-MCI20 | 2 of the 7 constructs | 310 | 51–93
ANAM15 | 5 of the 7 constructs | 107,801 | 17–65
CANTAB21 | 4 of the 7 constructs | 2,000 | 4–90

Abbreviations: ANAM, Automated Neuropsychological Assessment Metrics; AP, auditory processing; CANTAB, Cambridge Neuropsychological Test Automated Battery; CANS-MCI, Computer-Administered Neuropsychological Screen; LR, logic and reasoning; LTM, long-term memory; PS, processing speed; STM, short-term memory; VP, visual processing; WA, word attack.

Prior versions of the Gibson Test of Cognitive Skills have been used in several research studies22–24 and by clinicians since 2002. It was initially developed as a progress monitoring tool for cognitive training and visual therapy interventions. Although the evidence of validity supporting the original version of the test is strong,25,26 recognition that a lengthier test would increase the reliability of cognitive construct measurement served as the primary impetus to initiate a revision. The secondary impetus was the need to add a long-term memory measure.

Methods

A series of studies was conducted to examine the validity and reliability of the Gibson Test of Cognitive Skills (Version 2) using the test performance of the entire norming group and several subgroups. The evaluation of the technical properties of the test battery included content validation by subject matter experts, item analysis and coefficient alpha, test–retest reliability, split-half reliability, and analysis of concurrent validity with the Woodcock Johnson III (WJ III) Tests of Cognitive Abilities and Tests of Achievement. The development and validation process for the revised Gibson Test of Cognitive Skills aligned with the Standards for Educational and Psychological Testing.27 Ethics approval to conduct the study was granted by the Institutional Review Board (IRB) of the Gibson Institute of Cognitive Research prior to recruiting participants. During development, subject matter experts in educational psychology, experimental psychology, special education, school counseling, neuropsychology, and neuroscience were consulted to ensure that the content of each test adequately represented the skill it aimed to measure. A formal content validation review by three experts was conducted prior to field testing. Data collection began following the content review.

Measures

The Gibson Test of Cognitive Skills (Version 2) battery contains nine tests leading to the measurement of seven broad cognitive constructs. The battery is designed to be administered in its entirety to ensure proper timing for the long-term memory assessment. Each test in the battery is designed to measure at least one aspect of seven broad constructs explicated in the Cattell–Horn–Carroll model of intelligence:28 fluid reasoning (Gf), short-term memory (Gsm), long-term storage and retrieval (Glr), processing speed (Gs), visual processing (Gv), auditory processing (Ga), and reading and writing (Grw).

Long-term memory test

The test for long-term memory is presented in two parts. It measures meaningful memory, a narrow ability under the broad CHC construct of long-term storage and retrieval (Glr). At the beginning of the test battery, the examinee sees a collection of visual scenes and short auditory scenarios. After studying the prompts, the examinee responds to questions about them. After the examinee finishes the remaining battery of tests, the original long-term memory test questions are revisited, but without the visual and auditory prompts. The test is scored for accuracy and for consistency between answers given during the initial prompted task and the final nonprompted task. There are 24 questions on this test for a total of 48 possible points. An example of a visual prompt is shown in Figure 1.
Figure 1

Example of a long-term memory test visual prompt.

Visual processing test

The visual processing test measures visualization, or the ability to mentally manipulate objects. Visualization is a narrow skill under the broad construct of visual processing (Gv). The examinee is shown a complete puzzle on one side of the screen and individual pieces on the other side of the screen. As each part of the puzzle is highlighted, the examinee must select the corresponding piece that best matches a highlighted part of the puzzle. There are 14 puzzles on the test for a total of 92 possible points. An example of one visual processing puzzle is shown in Figure 2.
Figure 2

Example of a visual processing test item.

Logic and reasoning test

The logic and reasoning test measures inductive reasoning, or induction, which is the ability to infer underlying rules from a given problem. This ability falls under the broad CHC construct of fluid reasoning (Gf). The test uses a matrix reasoning task where the examinee is given an array of images from which to determine the rule that dictates the missing image. There are 29 matrices for a possible total of 29 points (Figure 3).
Figure 3

Example of a logic and reasoning test item.

Processing speed test

The processing speed (Gs) test measures perceptual speed, or the ability to quickly and accurately search for and compare visual images or patterns presented simultaneously. The examinee is shown an array of images and must identify a matching pair in each array (Figure 4) in the time allotted. There are 55 items for a total of 55 possible points.
Figure 4

Example of a processing speed test item.

Short-term memory test

The short-term memory test measures visual memory span, a component of the broad construct of short-term memory (Gsm). In CHC theory, memory span is the ability to “encode information, maintain it in primary memory, and immediately reproduce the information in the same sequence in which it was represented”.28 The examinee studies a pattern of shapes on a grid and then reproduces the pattern from memory when the visual prompt is removed (Figure 5). The patterns become more difficult as the test progresses. There are 21 patterns for a total of 63 possible points.
Figure 5

Example of a short-term memory test item.

Auditory processing test

The auditory processing test measures the following three features of the broad CHC construct auditory processing (Ga): phonetic coding analysis, phonetic coding synthesis, and sound awareness. A sound blending task measures phonetic coding – synthesis, or the ability to blend smaller units of speech into a larger one. The examinee listens to the individual sounds in a nonsense word and then must blend the sounds to identify the completed word. For example, the narrator says, “/n/-/e/-/f/”. The examinee then sees and selects from four choices on the screen (Figure 6).
Figure 6

Example of an auditory processing test item.

There are 15 sound blending items on the test. A sound segmenting task measures phonetic coding analysis, or the ability to segment larger units of speech into smaller ones. The examinee listens to a nonsense word and then must separate the individual sounds. There are 15 sound segmenting items on this subtest. Finally, a sound dropping task measures sound awareness. The examinee listens to a nonsense word and is told to delete part of the word to form a new word. The examinee must mentally drop the sounds and identify the new word. There are 15 sound dropping items on this subtest. The complete auditory processing test comprises 45 items for a total of 72 possible points.

Word attack test

The word attack test measures reading decoding ability, or the skill of reading phonetically irregular words or nonsense words. The measure falls under the broad CHC construct of reading and writing (Grw). The examinee listens to the narrator say a nonsense word aloud. Then, the examinee selects from a set of four options of how the nonsense word should be spelled. For example, the narrator says, "upt". The test taker then sees and selects from four choices on the screen (Figure 7). There are 25 nonsense words for a total of 55 possible points.
Figure 7

Example of a word attack test item.

Sample and procedures

The sample (n=2,737) consisted of 1,920 children aged 5–17 years (M=11.4, standard deviation [SD] =2.7) and 817 adults aged 18–85 years (M=41.7, SD =15.4) in 45 states. The child sample was 50.1% female and 49.9% male. The adult sample was 76.4% female and 23.6% male. Overall, the ethnicity of the sample was 68% Caucasian, 13% African-American, 11% Hispanic, and 3% Asian, with the remaining 5% of mixed or other race. Detailed demographics are available in the technical manual.29 Norming sites were selected based on representation from the following four primary geographic regions of the USA and Canada: Northeast, South, Midwest, and West. Tests were administered in three types of settings over 9 months between 2014 and 2015. In the first phase, test results were collected from clients in seven learning centers around the country. With written parental consent, clients (n=42) were administered the Gibson Test of Cognitive Skills and the WJ III Tests of Cognitive Abilities and Tests of Achievement to assess concurrent validity. The WJ III was selected because it is a test battery that is also grounded in the CHC model of intelligence and includes multiple auditory processing and word attack subtests against which we could effectively compare the Gibson Test. It is also a widely accepted comprehensive cognitive test battery, which strengthens confidence in the concurrent validation. The sample ranged in age from 8 to 59 years (M=19.8, SD =11.1) and was 52% female and 48% male. Ethnicity of the sample was 74% Caucasian, 10% African-American, 5% Asian, 2% Hispanic, and the remaining 10% mixed or other race. After obtaining written informed consent, an additional sample of clients (n=50) between the ages of 6 and 58 years (M=20.1, SD =11.6) was administered the Gibson Test of Cognitive Skills once and then again after 2 weeks to participate in test–retest reliability analysis. 
The sample was 55% female and 45% male, with 76% Caucasian, 8% African-American, 4% Asian, 4% Hispanic, and the remaining 8% mixed or other race. In the second phase of the study, the test was administered to students and staff members in 23 different elementary and high schools. Schools were invited to participate via email from the researchers. Parents of participants in schools were given a letter with a comprehensive description of the norming study with an opt-out form to be returned to the school if they did not want their child to participate. Students could decline participation and quit taking the test at any point. The schools provided all de-identified demographic information to the researchers. Finally, adults and children responded via social media to complete the test from a home computer. Participants and parents who responded via social media provided digital informed consent by clicking through the test after reading the consent document. A demographic information survey was completed by participants along with the test. Participants could quit taking the test at any time, if desired. None of the participants were compensated for participating. The results of this phase of the study were used to calculate internal consistency reliability for each test, split-half reliability for each test, and inter-test correlations and to create the normative database of scores.

Data analysis

Data were analyzed using Statistical Package for the Social Sciences (SPSS) for Windows Version 22.0 and the jMetrik software program. First, we ran descriptive statistics to summarize mean scores and sample demographics. We conducted item analyses to determine internal consistency reliability with a coefficient alpha for each test. We ran Pearson's correlations to determine split-half reliability, test–retest reliability, and concurrent validity with other criterion measures. Finally, we examined differences by gender and education level.

Results

Descriptive statistics are presented in Table 2 to illustrate the mean scores, SDs, and 95% confidence intervals for each test by age interval. Intercorrelations among all of the tests were also examined. Auditory processing and word attack correlate more strongly with each other than with the other measures because they tap similar constructs. Visual processing is correlated with logic and reasoning and short-term memory, presumably because these tasks require the manipulation or identification of visual images. Long-term memory is more strongly correlated with short-term memory than with any other task. These intercorrelations among the tests provide general evidence of convergent and discriminant internal structure validity.
Table 2

Mean and SD (95% CI) for Gibson Test scales by age group

Test | Statistic | 6–8 | 9–12 | 13–18 | 19–30 | 31–54 | 55+ | Overall
Long-term memory | n | 392 | 943 | 545 | 204 | 379 | 156 | 2,619
Long-term memory | M | 15.9 | 21.5 | 26.1 | 30.2 | 25.3 | 22.5 | 23.2
Long-term memory | SD | 10.3 | 11.5 | 11.8 | 11.2 | 10.4 | 12.2 | 12.2
Long-term memory | 95% CI | ±1.0 | ±0.73 | ±1.5 | ±1.5 | ±1.0 | ±1.9 | ±0.46
Short-term memory | n | 352 | 811 | 297 | 128 | 348 | 145 | 2,081
Short-term memory | M | 27.4 | 37.9 | 43.7 | 48.7 | 45.1 | 38.7 | 38.9
Short-term memory | SD | 11.2 | 9.2 | 9.3 | 11.4 | 9.2 | 9.3 | 11.5
Short-term memory | 95% CI | ±1.2 | ±0.63 | ±1.0 | ±1.9 | ±0.96 | ±1.5 | ±0.49
Visual processing | n | 373 | 835 | 308 | 155 | 400 | 166 | 2,237
Visual processing | M | 17.5 | 29.4 | 37.8 | 54.2 | 42.7 | 33.9 | 33.0
Visual processing | SD | 12.6 | 14.9 | 18.0 | 19.9 | 20.5 | 18.5 | 19.4
Visual processing | 95% CI | ±1.3 | ±1.0 | ±2.0 | ±3.1 | ±2.0 | ±2.8 | ±0.80
Auditory processing | n | 382 | 840 | 314 | 159 | 408 | 162 | 2,265
Auditory processing | M | 27.3 | 38.3 | 47.5 | 57.1 | 54.9 | 48.9 | 42.8
Auditory processing | SD | 18.8 | 20.0 | 19.2 | 18.1 | 18.3 | 18.5 | 21.5
Auditory processing | 95% CI | ±1.9 | ±1.3 | ±1.9 | ±2.8 | ±1.8 | ±2.8 | ±0.88
Logic and reasoning | n | 365 | 822 | 301 | 129 | 354 | 151 | 2,122
Logic and reasoning | M | 9.3 | 13.2 | 15.3 | 18.8 | 17.2 | 14.8 | 14.0
Logic and reasoning | SD | 3.9 | 3.9 | 3.9 | 3.4 | 3.5 | 3.5 | 4.7
Logic and reasoning | 95% CI | ±0.40 | ±0.26 | ±0.44 | ±0.58 | ±0.50 | ±0.96 | ±0.27
Processing speed | n | 362 | 819 | 301 | 123 | 353 | 155 | 2,115
Processing speed | M | 14.3 | 30.4 | 35.1 | 39.4 | 36.6 | 33.5 | 32.2
Processing speed | SD | 1.9 | 5.0 | 5.5 | 4.9 | 4.8 | 6.1 | 6.3
Processing speed | 95% CI | ±0.19 | ±0.34 | ±0.62 | ±0.86 | ±0.50 | ±0.96 | ±0.26
Word attack | n | 346 | 806 | 295 | 125 | 349 | 145 | 2,066
Word attack | M | 24.6 | 35.3 | 41.6 | 46.9 | 46.2 | 44.3 | 37.6
Word attack | SD | 14.7 | 13.6 | 10.9 | 7.8 | 8.8 | 9.2 | 14.2
Word attack | 95% CI | ±1.5 | ±0.48 | ±0.63 | ±0.69 | ±0.47 | ±0.76 | ±0.61

Abbreviations: CI, confidence interval; M, mean score; n, number in sample; SD, standard deviation.

Internal consistency reliability

Item analyses for each test revealed strong indices of internal consistency reliability, or how well the test items correlate with each other. Overall coefficient alphas range from 0.87 to 0.98. Overall coefficient alphas for each test as well as coefficient alphas based on age intervals are all robust (Table 3), indicating strong internal consistency reliability of the Gibson Test of Cognitive Skills.
Table 3

Coefficient alpha for each test in the Gibson Test of Cognitive Skills battery

Test | 6–8 | 9–12 | 13–18 | 19–30 | 31–54 | 55+ | Overall
Long-term memory | 0.91 | 0.92 | 0.93 | 0.92 | 0.91 | 0.93 | 0.93
Short-term memory | 0.87 | 0.82 | 0.83 | 0.90 | 0.82 | 0.83 | 0.88
Visual processing | 0.96 | 0.96 | 0.97 | 0.98 | 0.98 | 0.97 | 0.98
Auditory processing | 0.95 | 0.95 | 0.95 | 0.96 | 0.95 | 0.95 | 0.96
Logic and reasoning | 0.85 | 0.83 | 0.81 | 0.74 | 0.77 | 0.79 | 0.87
Processing speed | 0.88 | 0.81 | 0.87 | 0.87 | 0.87 | 0.91 | 0.88
Word attack | 0.93 | 0.92 | 0.89 | 0.83 | 0.86 | 0.85 | 0.93
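The coefficient alpha values reported above come from standard item analysis (the authors used SPSS and jMetrik). As a rough illustration of the statistic itself, a minimal Python sketch follows; the function and variable names are illustrative, not part of the test software:

```python
def cronbach_alpha(items):
    """Coefficient alpha from per-item score columns.

    items: list of k lists, one per item, each holding that item's
    scores across the same n examinees.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / var(totals))
```

Alpha rises toward 1 as the items covary strongly relative to their individual variances, which is why the high values in Table 3 indicate internally consistent subtests.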

Split-half reliability

Reliability of the Gibson Test of Cognitive Skills battery was also evaluated by correlating the scores on two halves of each test. Split-half reliability was calculated for each test by correlating the sum of the even-numbered items with the sum of the odd-numbered items. Then, the Spearman–Brown formula was applied to the Pearson's correlation for each subtest to correct for length effects. Because split-half correlation is not an appropriate analysis for a speeded test, the alternative calculation for the processing speed test was based on the formula r11 = 1 − (SEM²/SD²). Overall and subgroup split-half reliability coefficients are robust, ranging from 0.89 to 0.97, indicating strong evidence of reliability of the Gibson Test of Cognitive Skills (Table 4).
Table 4

Split-half reliability of the Gibson Test of Cognitive Skills

Test | 6–8 | 9–12 | 13–18 | 19–30 | 31–54 | 55+ | Overall
Long-term memory | 0.95 | 0.94 | 0.95 | 0.93 | 0.94 | 0.95 | 0.95
Short-term memory | 0.90 | 0.84 | 0.86 | 0.92 | 0.84 | 0.83 | 0.90
Visual processing | 0.97 | 0.98 | 0.98 | 0.98 | 0.99 | 0.99 | 0.99
Auditory processing | 0.97 | 0.97 | 0.96 | 0.97 | 0.96 | 0.96 | 0.97
Logic and reasoning | 0.90 | 0.86 | 0.86 | 0.80 | 0.81 | 0.86 | 0.90
Processing speed (a) | 0.88 | 0.81 | 0.87 | 0.88 | 0.88 | 0.91 | 0.89
Word attack | 0.94 | 0.94 | 0.90 | 0.89 | 0.90 | 0.85 | 0.94

Note:

(a) Split-half correlation is not an appropriate analysis for a speeded test; the alternative calculation was based on the formula r11 = 1 − (SEM²/SD²).

Abbreviations: r11, reliability coefficient; SD, standard deviation; SEM, standard error of measurement.
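The split-half procedure described above (odd–even split, Spearman–Brown length correction, and the r11 = 1 − SEM²/SD² alternative for the speeded test) can be illustrated with a short sketch. This is not the authors' code; all names are illustrative:

```python
def pearson(x, y):
    """Pearson's product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_matrix):
    """Odd-even split-half reliability with Spearman-Brown correction.

    item_matrix: rows = examinees, columns = items in administration order.
    """
    odd = [sum(row[0::2]) for row in item_matrix]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in item_matrix]  # items 2, 4, 6, ...
    r_half = pearson(odd, even)
    # Spearman-Brown correction to estimate full-length reliability
    return 2 * r_half / (1 + r_half)

def speeded_reliability(sem, sd):
    """Alternative for speeded tests: r11 = 1 - SEM^2 / SD^2."""
    return 1 - (sem ** 2) / (sd ** 2)
```

The Spearman–Brown step is needed because correlating two half-length tests underestimates the reliability of the full-length test.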

Test–retest reliability (delayed administration)

Reliability of each test in the battery was evaluated by correlating the scores on two administrations of the test to the same sample of test takers 2 weeks apart. The overall test–retest reliability coefficients ranged from 0.69 to 0.91 (Table 5). The results indicate strong evidence of reliability. All overall coefficients were significant at P<0.001, and all subgroup coefficients were significant at P<0.001 except for long-term memory in adults, which was significant at P=0.004.
Table 5

Test–retest reliability of the revised Gibson Test of Cognitive Skills

Test | Child (n=29) | Adult (n=21) | Overall (n=50)
Long-term memory | 0.53 | 0.67 | 0.69
Short-term memory | 0.76 | 0.75 | 0.82
Visual processing | 0.89 | 0.74 | 0.90
Auditory processing | 0.88 | 0.77 | 0.91
Logic and reasoning | 0.84 | 0.66 | 0.82
Processing speed | 0.83 | 0.76 | 0.73
Word attack | 0.89 | 0.68 | 0.90

Concurrent validity

Validity was assessed by running Pearson's product-moment correlations to examine whether each test on the Gibson Test battery was correlated with other tests measuring similar constructs, to determine concurrent validity with other criterion measures. Correlation coefficients were corrected for attenuation based on the reliability coefficients of the individual criterion tests, using the formula rc = ruc/√(rxx × ryy), where ruc is the uncorrected concurrent correlation coefficient, rxx is the test–retest coefficient of each WJ III subtest, and ryy is the test–retest coefficient of each Gibson Test subtest. The resulting correlations range from 0.53 to 0.93, indicating moderate-to-strong relationships between the Gibson Test and other standardized criterion tests (Table 6). All correlations are significant at P<0.001, indicating strong evidence of concurrent validity.
Table 6

Correlations between the Gibson Test and the Woodcock Johnson tests

Gibson Test | Woodcock Johnson III | ruc | rc
Short-term memory | Numbers reversed | 0.71 | 0.84
Logic and reasoning | Concept formation | 0.71 | 0.77
Processing speed | Visual matching | 0.50 | 0.60
Visual processing | Spatial relations | 0.70 | 0.82
Long-term memory | Visual auditory learning | 0.43 | 0.53
Word attack | Word attack | 0.82 | 0.93
Auditory processing | Spelling of sounds | 0.75 | 0.90
Auditory processing | Sound awareness | 0.70 | 0.82

Notes: ruc, uncorrected correlation calculated with Pearson's product-moment correlation of z scores; rc, corrected correlation using rc = ruc/√(rxx × ryy).
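The adjustment applied to the coefficients in Table 6 is the standard correction for attenuation, which estimates what the correlation would be if both measures were perfectly reliable. A minimal sketch (illustrative only; the reliability values in the example are hypothetical):

```python
def correct_for_attenuation(r_uc, r_xx, r_yy):
    """Disattenuated correlation: rc = ruc / sqrt(rxx * ryy).

    r_uc: observed (uncorrected) correlation between the two tests.
    r_xx: reliability coefficient of the criterion (WJ III) subtest.
    r_yy: reliability coefficient of the Gibson subtest.
    Note: the corrected value can exceed 1 if the reliabilities
    are underestimated.
    """
    return r_uc / (r_xx * r_yy) ** 0.5
```

Because the denominator is below 1 whenever either test is imperfectly reliable, the corrected coefficient is always at least as large as the observed one, consistent with rc ≥ ruc in every row of Table 6.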

Post hoc analyses of demographic differences

Because the male-to-female ratio of participants in the adult sample was disproportionate to the population, we examined differences by gender in every adult age range on each subtest through linear regression analyses. After Bonferroni correction for 35 comparisons, gender proved to be a significant predictor of score differences in only five of the comparisons. In the 18–29 years age group, gender was a significant predictor of score differences on the test of auditory processing, P<0.001, R2=0.02, B=−5.8; that is, females outperformed males by 5.8 points on auditory processing in the 18–29 years age group. In the 40–49 years age group, gender was a significant predictor of score differences on the test of auditory processing, P<0.001, R2=0.16, B=−17.8; on the test of visual processing, P<0.001, R2=0.12, B=−17.3; and on the test of word attack, P<0.001, R2=0.07, B=−5.7; that is, females outperformed males by 17.8 points on auditory processing, by 17.3 points on visual processing, and by 5.7 points on word attack in the 40–49 years age group. In the >60 years age group, gender was a significant predictor of score differences on the test of processing speed, P<0.001, R2=0.27, B=−5.1; that is, females outperformed males by 5.1 points on processing speed in the >60 years age group. Gender was not a statistically significant predictor of differences between males and females on any subtest in the 30–39 years or 50–59 years age groups. In addition, the proportion of adults with higher education degrees in the sample was higher than in the population. Therefore, we also examined education level as a predictor of differences in scores on each subtest. Although effect sizes were small, education level was a significant predictor of score differences on all subtests except processing speed. On short-term memory, for every increase in educational level, there was a 1.2-point increase in score (P=0.001, R2=0.01, B=1.2). 
On visual processing, for every increase in educational level, there was a 3-point increase in score (P<0.001, R2=0.02, B=3.0). On logic and reasoning, for every increase in educational level, there was a 0.9-point increase in score (P<0.001, R2=0.01, B=0.9). On word attack, for every increase in educational level, there was a 1.6-point increase in score (P<0.001, R2=0.03, B=1.6). On auditory processing, for every increase in educational level, there was a 4.4-point increase in score (P<0.001, R2=0.07, B=4.4). Finally, on long-term memory, for every increase in educational level, there was a 1.6-point increase in score (P=0.001, R2=0.02, B=1.6).
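The Bonferroni procedure applied to the 35 gender comparisons reduces to testing each raw P-value against the family-wise alpha divided by the number of comparisons. A minimal sketch (illustrative, not the authors' SPSS workflow):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Flag which raw p-values survive a Bonferroni correction.

    Each of the m p-values is compared against alpha / m, which
    controls the family-wise error rate at alpha across all m tests.
    """
    m = len(p_values)
    threshold = alpha / m
    return [p < threshold for p in p_values]
```

With 35 comparisons at alpha=0.05, the per-test threshold is 0.05/35 ≈ 0.0014, which is why only effects reported at P<0.001 (and the P=0.001 education effects) clear the corrected bar.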

Discussion and conclusion

The current study evaluated the psychometric properties of the revised Gibson Test of Cognitive Skills. The results indicate that the Gibson Test is a valid and reliable measure for assessing cognitive skills in the general population. It can be used for the assessment of individual cognitive skills to obtain a baseline of functioning in individuals across the lifespan. In comparison with existing computer-based tests of cognitive skills, the overall test–retest reliabilities (0.69–0.91) of the Gibson Test battery are impressive. For example, the test–retest reliabilities range from 0.17 to 0.86 for the Cambridge Neuropsychological Test Automated Battery (CANTAB),21 from 0.38 to 0.77 for the Computer-Administered Neuropsychological Screen (CANS-MCI),20 and from 0.31 to 0.86 for CNS Vital Signs.19 In addition to strong split-half reliability metrics, evidence for the internal consistency reliability of the Gibson Test of Cognitive Skills is also strong, with coefficient alphas ranging from 0.87 to 0.98. The convergent validity of the tests provides a key source of evidence that the test is a valid measure of the cognitive skills represented by constructs identified in the CHC model of intelligence. The ease with which the test can be administered, coupled with automated scoring and reporting, is a key strength of the battery. The implications for use are encouraging for a variety of fields, including psychology, neuroscience, and cognition research, as well as all aspects of education. The battery covers a wide range of cognitive skills that are of interest across multiple disciplines. It is indeed exciting to have an automated, cross-battery assessment that includes not only long-term and short-term memory, processing speed, fluid reasoning, and visual processing but also three aspects of auditory processing along with basic reading skills. The norming group traverses the lifespan, making the test suitable for use with all ages. 
This, too, is a key strength of the current study. With growing emphasis on age-related cognitive decline, the test may serve as a useful adjunct to brain care and intervention with an aging population. Equally useful with children, the test can continue to serve as a valuable screening tool in the evaluation of cognitive deficits that might contribute to learning struggles and, therefore, help inform intervention decisions among clinicians and educators. There are a few limitations and ideas for future research worth noting. First, the study did not include a clinical sample by which to compare performance with the nonclinical norming group. Future research should evaluate discriminant and predictive validity with clinical populations. The test has potential for clinical use, and such metrics would be a critical addition to the findings from the current study. Next, the sample for the test–retest reliability analysis was modest (n=50). A larger study on test–retest reliability would serve to strengthen these first psychometric findings. In addition, the adult portion of the norming group had a higher percentage of females than males. This outcome was due to chance in the sampling and recruitment response. Any normative updates should consider additional recruitment methods to achieve a more balanced sampling distribution by sex. However, to adjust for this in the current normative database, we weighted the scores of the adult sample to match the demographic distribution of gender and education level in the most recent US census. Weighting minimizes the potential for bias in the sample due to disproportionate stratification and unit nonresponse. 
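Post-stratification weighting of the kind described, matching sample strata to census proportions, can be sketched as follows. The strata and proportions in the example are hypothetical, not the actual census figures used by the authors:

```python
def poststratification_weights(sample_counts, population_props):
    """Per-stratum weights: population proportion / sample proportion.

    sample_counts: {stratum: number of participants in the sample}
    population_props: {stratum: proportion in the target population}
    A stratum over-represented in the sample gets a weight below 1;
    an under-represented stratum gets a weight above 1.
    """
    n = sum(sample_counts.values())
    return {
        stratum: (population_props[stratum] / (count / n)) if count else 0.0
        for stratum, count in sample_counts.items()
    }
```

For example, if a sample were 75% female but the population is 50% female, each female score would carry a weight of about 0.67 and each male score a weight of 2.0, so weighted statistics reflect the population balance.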
Although the current study noted a few score differences by gender in the adult sample, it is important to note that only 2% of the individual items showed differential item functioning for gender during the test development stage.29 Regardless, the current study provided multiple sources of evidence of the validity and reliability of the Gibson Test of Cognitive Skills (Version 2) for use in the general population for assessing cognition across the lifespan. The Gibson Test of Cognitive Skills (Version 2) has been translated into 20 languages and is commercially available worldwide (www.GibsonTest.com).13
