
Methodological issues on agreement between self-reported and central cancer registry-recorded prevalence of cancer in the Alaska EARTH study.

Mehdi Naderi, Siamak Sabour.

Abstract

Keywords:  Alaska Native; health literacy; methodological issues; reliability; self-report; study cohort; tumour registry

Year:  2020        PMID: 32449646      PMCID: PMC7448856          DOI: 10.1080/22423982.2020.1764284

Source DB:  PubMed          Journal:  Int J Circumpolar Health        ISSN: 1239-9736            Impact factor:   1.228


Dear Editor, We read with interest the article by Nash SH et al., published in Int J Circumpolar Health in December 2019 [1]. The authors determined the agreement between self-reported and registry-recorded site-specific cancer diagnoses in a cohort of Alaska Native people [1]. Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were used to assess the agreement between self-reported and registry-recorded cancer diagnoses, and kappa values were calculated to differentiate true agreement from agreement that may be expected by chance [1].

Based on the results of their study, for all sites as well as for each common site, specificity exceeded 98% for all cancer sites, whereas sensitivity ranged from 78.6% (colorectal cancer) to 100.0% (prostate cancer), PPV ranged from 52.4% (colorectal cancer) to 84.8% (female breast cancer), and NPV was above 99.6% for all cancer sites. Kappa values also varied by cancer site: values were high for female breast and prostate cancers (κ = 0.86 for both sites) and moderate for colorectal cancer (κ = 0.63). Across strata of demographic characteristics, the agreement measures for cancer (all sites) were as follows: sensitivity was greater among males, those aged 18–50 years at study enrolment, those living in an urban area and those who spoke English as their primary language at home. Neither specificity nor NPV varied substantially by demographic characteristics. In contrast, higher PPV was observed among males, those aged 50+ years at study enrolment, those residing in an urban area and those reporting non-English or both as the primary language(s) spoken at home. The pattern was similar for kappa, with greater values among males, those aged 50+ years at study enrolment and those residing in an urban area [1].

Reliability and validity are two completely different methodological issues.
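The validity estimates reported in the study (sensitivity, specificity, PPV, NPV) all derive from the same 2x2 table cross-classifying self-report against the registry gold standard. A minimal sketch, using hypothetical counts rather than the study's data:

```python
def validity_measures(tp, fp, fn, tn):
    """Validity estimates for a diagnostic 2x2 table."""
    sensitivity = tp / (tp + fn)   # registry-positives correctly self-reported
    specificity = tn / (tn + fp)   # registry-negatives correctly self-reported
    ppv = tp / (tp + fp)           # positive self-reports that are truly positive
    npv = tn / (tn + fn)           # negative self-reports that are truly negative
    return sensitivity, specificity, ppv, npv

# Hypothetical counts: 40 true positives, 10 false positives,
# 5 false negatives, 945 true negatives.
sens, spec, ppv, npv = validity_measures(40, 10, 5, 945)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} ppv={ppv:.3f} npv={npv:.3f}")
# sensitivity=0.889 specificity=0.990 ppv=0.800 npv=0.995
```

Note how PPV is pulled down even at high specificity when the condition is rare, which is why it varies so much across cancer sites with different prevalences.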
Sensitivity, specificity, PPV, NPV and the positive and negative likelihood ratios (LR+ and LR-) are among the estimates used to assess the validity of a diagnostic test and have nothing to do with reliability [2,3]. Kappa, which is used to assess the reliability of qualitative and ranked variables, has drawbacks that we describe below. The first problem is that the kappa value depends strongly on the prevalence of the condition and on the number of categories. Another problem arises when the two raters differ in the marginal distributions of their responses [2,4-6]. Table 1 illustrates these problems with a hypothetical example, showing how kappa can take very different values under different prevalences and numbers of categories (0.44, interpreted as moderate, versus 0.80, interpreted as very good).
Table 1.

The kappa and weighted kappa values for calculating agreement between 2 raters for more than 2 categories.

                       Rater 1
             Grade     1     2     3   Sum
   Rater 2   1        60    20     1    81
             2         2    12     4    18
             3         3    11    11    25
   Sum                65    43    16   124

   Estimate
   Kappa              0.43
   Weighted kappa     0.63
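Table 1's estimates can be reproduced directly from its cell counts. The sketch below computes Cohen's kappa and a weighted kappa, assuming quadratic disagreement weights (which match the tabulated value of 0.63; linear weights would give a lower figure):

```python
def weighted_kappa(table, weighted=False):
    """Cohen's kappa (or quadratically weighted kappa) for a square rating table."""
    k = len(table)
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    po = pe = 0.0
    for i in range(k):
        for j in range(k):
            # Agreement weight: 1 on the diagonal, 0 at maximal disagreement.
            w = 1 - ((i - j) / (k - 1)) ** 2 if weighted else float(i == j)
            po += w * table[i][j] / n                  # weighted observed agreement
            pe += w * row_tot[i] * col_tot[j] / n**2   # weighted chance agreement
    return (po - pe) / (1 - pe)

# Rows: Rater 2 grades 1-3; columns: Rater 1 grades 1-3 (counts from Table 1).
table = [[60, 20, 1],
         [2, 12, 4],
         [3, 11, 11]]
print(round(weighted_kappa(table), 2))                 # → 0.43 (unweighted kappa)
print(round(weighted_kappa(table, weighted=True), 2))  # → 0.63 (weighted kappa)
```

The gap between 0.43 and 0.63 on the very same ratings shows how much the reported "agreement" depends on the analyst's choice of statistic.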
The authors concluded that the good agreement between self-reported and registry-recorded cancer history may be the result of the high quality of care within the Alaska Tribal Health System [1]. Such a conclusion may rest on inappropriate use of the statistical test, which ultimately conveys a misleading message.

1.  Reliability of four different computerized cephalometric analysis programs: a methodological error.

Authors:  Siamak Sabour; Elahe Vahid Dastjerdi
Journal:  Eur J Orthod       Date:  2013-10-16       Impact factor: 3.075

2.  The validity and reliability of a signal impact assessment tool: statistical issue to avoid misinterpretation.

Authors:  Siamak Sabour; Fariba Ghassemi
Journal:  Pharmacoepidemiol Drug Saf       Date:  2016-10       Impact factor: 2.890

3.  Reliability of immunocytochemistry and fluorescence in situ hybridization on fine-needle aspiration cytology samples of breast cancers: methodological issues.

Authors:  S Sabour
Journal:  Diagn Cytopathol       Date:  2016-08-05       Impact factor: 1.582

4.  Reproducibility of diagnostic criteria associated with atypical breast cytology: A methodological issue.

Authors:  M Naderi; S Sabour
Journal:  Cytopathology       Date:  2018-05-23       Impact factor: 2.073

5.  Agreement between self-reported and central cancer registry-recorded prevalence of cancer in the Alaska EARTH study.

Authors:  Sarah H Nash; Gretchen Day; Vanessa Y Hiratsuka; Garrett L Zimpelman; Kathryn R Koller
Journal:  Int J Circumpolar Health       Date:  2019-12       Impact factor: 1.228

