Charles P Friedman. National Library of Medicine, 6705 Rockledge Drive, Suite 301, Rockledge, MD 20892, USA. friedmc1@mail.nih.gov
Abstract
OBJECTIVES: This paper argues that focused evaluation studies of community-based informational interventions conducted over the life-cycle of the project ("smallball" studies) are more informative and useful than randomized experiments conducted only at the project's conclusion ("powerball" studies). METHOD: Based on two contrasting strategies in baseball, smallball and powerball studies are compared and contrasted, emphasizing how the distinctive features of community-based interventions lend advantage to smallball approaches. RESULTS: Smallball evaluations have several important advantages over powerball evaluations: before system development, they ensure that information resources address real community needs; during deployment, they ensure that the systems are suited to the capabilities of the users and to community constraints; and, after deployment, they enable as much as possible to be learned about the effects of the intervention in environments where randomized studies are usually impossible. IMPLICATIONS: Many in informatics see powerball studies as the only legitimate form of evaluation and so expect powerball studies to be done. These expectations should be revised in favor of smallball studies.