R L Kravitz1, R A Bell, C E Franz. 1. Department of Internal Medicine, and the Center for Health Services Research in Primary Care, University of California, Davis, USA. rlkravitz@ucdavis.edu
Abstract
BACKGROUND: The goal of our investigation was to facilitate research on clinical negotiation between patients and physicians by developing a reliable and valid classification system for patients' requests in office practice. METHODS: We developed the Taxonomy of Requests by Patients (TORP) using input from researchers, clinicians, and patient focus groups. To assess the system's reliability and validity, we applied TORP to audiotaped encounters between 139 patients and 6 northern California internists. Reliability was assessed with the kappa statistic as a measure of interrater agreement. Face validity was assessed through expert and patient judgment of the coding system. Content validity was examined by monitoring the incidence of unclassifiable requests. Construct validity was evaluated by examining the relationships between patient requests and patient health status; patient request fulfillment and patient satisfaction; and patient requests and physician perceptions of the visit. RESULTS: The 139 patients made 772 requests (619 requests for information and 153 requests for physician action). Average interrater agreement across a sample of 40 cases was 94% (kappa = 0.93; P <.001). Patients with better health status made fewer requests (r = -0.17; P = .048). Having more chronic diseases was associated with more requests for physician action (r = 0.32; P = .0002). Patients with more unfulfilled requests had lower visit satisfaction (r = -0.32; P <.001). A greater number of patient requests was also associated with physician reports of longer visit times (P = .016) and increased visit demands (P = .006). CONCLUSIONS: Our study provides evidence that TORP is a reliable and valid system for capturing and categorizing patients' requests in adult primary care. Further research is needed to confirm the system's validity, expand its applicability, and explore its usefulness as a tool for studying clinical negotiation.