| Literature DB >> 21852135 |
Joerg Heil, Anne Carolus, Julia Dahlkamp, Michael Golatta, Christoph Domschke, Florian Schuetz, Maria Blumenstein, Geraldine Rauch, Christof Sohn.
Abstract
We analysed intra- and inter-rater agreement of subjective third-party assessment, and agreement with a semi-automated objective software evaluation tool (BCCT.core). We presented standardized photographs of 50 patients, taken shortly after and one year after surgery, to a panel of five breast surgeons, six breast nurses, seven members of a breast cancer support group, five medical students and seven non-medical students. In two rounds they rated aesthetic outcome on a four-point scale. Moreover, the same photographs were evaluated by the BCCT.core software. Intra-rater agreement among the panel members was moderate to substantial (k = 0.4-0.5; wk = 0.6-0.7, depending on subgroup and time of assessment). In contrast, inter-rater agreement was only slight to fair (mk = 0.1-0.3). Agreement between the panel participants and the software was fair (wk = 0.24-0.45). Subjective third-party assessment agrees only fairly with objective BCCT.core evaluation, just as third-party participants do not agree well among each other.
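The agreement statistics reported in the abstract are kappa coefficients: unweighted Cohen's kappa (k) and weighted kappa (wk) for pairs of ratings on the four-point scale. As a minimal illustrative sketch (not the authors' code; the function name and example ratings are hypothetical), a two-rater kappa with optional linear weights can be computed like this:

```python
# Illustrative only: Cohen's kappa for two raters, with optional linear
# weights (analogous to the "wk" reported in the abstract).

def cohens_kappa(rater_a, rater_b, categories, weights=None):
    """Chance-corrected agreement between two raters.

    weights=None     -> classic (unweighted) kappa
    weights='linear' -> linearly weighted kappa, penalising near-misses
                        less than distant disagreements
    """
    n = len(rater_a)
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}

    # Observed joint distribution of the two raters' scores.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[idx[a]][idx[b]] += 1.0 / n

    # Marginal distributions (each rater's category frequencies).
    pa = [sum(row) for row in obs]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    # Disagreement weight between category indices i and j.
    def w(i, j):
        if weights == "linear":
            return abs(i - j) / (k - 1)
        return 0.0 if i == j else 1.0

    observed = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w(i, j) * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - observed / expected

# Hypothetical ratings on a four-point scale (1 = excellent ... 4 = poor).
surgeon = [1, 2, 2, 3, 4, 1, 2, 3]
nurse   = [1, 2, 3, 3, 4, 2, 2, 4]
print(cohens_kappa(surgeon, nurse, [1, 2, 3, 4]))
print(cohens_kappa(surgeon, nurse, [1, 2, 3, 4], weights="linear"))
```

With linear weights, a surgeon rating "good" where the nurse rated "excellent" counts as a smaller disagreement than "poor" versus "excellent", which is why the weighted values (wk) in the abstract run higher than the unweighted ones.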
Year: 2011 PMID: 21852135 DOI: 10.1016/j.breast.2011.07.013
Source DB: PubMed Journal: Breast ISSN: 0960-9776 Impact factor: 4.380