Joshua D. Hall, Anna B. O'Connell, Jeanette G. Cook.
Abstract
Many US biomedical PhD programs receive more applications for admission than they can accept each year, necessitating a selective admissions process. Typical selection criteria include standardized test scores, undergraduate grade point average, letters of recommendation, a resume and/or personal statement highlighting relevant research or professional experience, and feedback from interviews with training faculty. Admissions decisions are often founded on assumptions that these application components correlate with research success in graduate school, but these assumptions have not been rigorously tested. We sought to determine if any application components were predictive of student productivity, measured by first-author student publications and time to degree completion. We collected productivity metrics for graduate students who entered the umbrella first-year biomedical PhD program at the University of North Carolina at Chapel Hill from 2008–2010 and analyzed components of their admissions applications. We found no correlations of test scores, grades, amount of previous research experience, or faculty interview ratings with high or low productivity among those applicants who were admitted and chose to matriculate at UNC. In contrast, ratings from recommendation letter writers were significantly stronger for students who published multiple first-author papers in graduate school than for those who published no first-author papers during the same timeframe. We conclude that the most commonly used standardized test (the general GRE) is a particularly ineffective predictive tool, but that qualitative assessments by previous mentors are more likely to identify students who will succeed in biomedical graduate research. Based on these results, we conclude that admissions committees should avoid over-reliance on any single component of the application and de-emphasize metrics that are minimally predictive of student productivity.
We recommend continual tracking of desired training outcomes combined with retrospective analysis of admissions practices to guide both application requirements and holistic application review.
Year: 2017 PMID: 28076439 PMCID: PMC5226343 DOI: 10.1371/journal.pone.0169121
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Study population descriptive statistics.
| Gender | N |
|---|---|
| Female | 172 |
| Male | 108 |

| Enrollment Status | N |
|---|---|
| Still Enrolled (a) | 45 |
| Graduated PhD | 195 |
| Graduated MS | 14 |
| Withdrew | 26 |

| Race/Ethnicity | N |
|---|---|
| Asian | 30 |
| Black/African American | 36 |
| Hawaiian/Pacific Islander | 4 |
| Hispanic/Latina/o | 20 |
| Native American | 4 |
| White | 179 |
| Other/Unsure | 7 |

| Entry Year | N |
|---|---|
| 2008 | 123 |
| 2009 | 84 |
| 2010 | 73 |

| Publication Group | N |
|---|---|
| 3+ | 50 |
| 1–2 | 151 |
| 0+ | 41 |
| 0 | 38 |

| Metric (mean ± SD) | Value |
|---|---|
| Quantitative GRE Percentile | 72.48 ± 17.47 |
| Verbal GRE Percentile | 73.10 ± 19.30 |
| Writing GRE Percentile | 54.28 ± 22.15 |
| Undergraduate GPA | 3.52 ± 0.34 |
| Previous Research Experience (months) | 18.33 ± 16.75 |
| Recommendation Letter Rating (b) | 1.74 ± 0.45 |
| One-on-one Interview Score (c) | 1.90 ± 0.38 |
| First-Author Publications | 1.45 ± 1.40 |

Total students: N = 280
Individuals included in this study were PhD students who entered the Biological and Biomedical Sciences Program (BBSP) from 2008–2010. Students were assigned to the following Publication Groups based on number of first-author publications during their graduate studies: 3+, ≥3 first-author publications; 1–2, 1 or 2 first-author publications; 0+, 0 first-author publications and at least one middle authorship; and 0, no first-author or middle-author publications.
(a) Students still enrolled and making progress toward the degree at the time of submission.
(b) Only includes students with at least 3 recommendation letter ratings (n = 251).
(c) Data only available for students from the 2009–10 cohorts; only includes students with at least 4 faculty interview scores (n = 142).
Fig 1. Graduate student application metrics vs. publication productivity.
Students from the 2008–2010 entering classes were assigned to the following groups based on number of first-author publications during their graduate studies: 3+, ≥3 first-author publications; 1–2, 1 or 2 first-author publications; 0+, 0 first-author publications and at least one middle authorship; and 0, no first-author or middle-author publications. (A) Quantitative GRE scores, (B) Verbal GRE scores, (C) Writing GRE scores, (D) Undergraduate GPA, and (E) previous research experience were compared among the groups of students. Each symbol represents one student, and lines and error bars represent the mean and standard deviation of each population, respectively. A Kruskal-Wallis test was used to assess differences among the populations, and p-values for comparisons among the groups in panels A, B, C, D, and E were 0.3251, 0.6165, 0.8460, 0.7625, and 0.9896, respectively.
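The Kruskal-Wallis comparison used above can be made concrete with a short sketch. This is an illustrative reconstruction, not the authors' analysis code: the group values are invented placeholders, and this minimal implementation omits the tie correction. The H statistic is computed from pooled midranks and, for four groups, would be compared against a chi-square distribution with 3 degrees of freedom (critical value 7.815 at p = 0.05).

```python
# Illustrative Kruskal-Wallis sketch across four publication groups
# (NOT the authors' code; sample values are invented placeholders).
from itertools import chain

def midranks(values):
    """Assign 1-based ranks, averaging ranks over tied observations."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the tied positions' ranks
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic over the pooled ranks (no tie correction)."""
    pooled = list(chain.from_iterable(groups))
    n = len(pooled)
    ranks = midranks(pooled)
    h, idx = 0.0, 0
    for g in groups:
        rank_sum = sum(ranks[idx:idx + len(g)])
        h += rank_sum ** 2 / len(g)
        idx += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Hypothetical quantitative-GRE percentiles for groups 3+, 1-2, 0+, and 0.
h = kruskal_h([85, 70, 90, 65, 78],
              [72, 60, 88, 75, 55],
              [68, 80, 50, 77, 66],
              [74, 59, 81, 63, 70])
print(f"H = {h:.3f}")  # compare against the chi-square critical value 7.815 (df = 3)
```

In practice `scipy.stats.kruskal` would be the idiomatic choice; the hand-rolled version here just makes the rank-based computation visible.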
Fig 2. Recommender evaluations predict graduate student publication productivity.
Students were assigned to groups according to first-author publications as in Fig 1. (A) Ratings from recommendation letters associated with their graduate school applications were converted from the adjective selected by the recommender (from the UNC-provided options of “Exceptional”, “Outstanding”, “Very Good”, “Average”, or “Below Average”) to a numerical score (1 = Exceptional, 5 = Below Average), averaged, and compared among the groups of students. (B) To assess whether students with the highest recommender ratings were more productive, students were assigned to groups according to their mean recommender score from the three letters, and the number of first-author publications was plotted for each group. (C) As in A except that the mean score from one-on-one faculty interviews was plotted (1 = most enthusiastic, 4 = least enthusiastic). (D) To assess whether students with the highest interview scores were more productive, students were binned according to their mean one-on-one faculty interview score, and the number of first-author publications was plotted. Each symbol represents one student, and lines represent the mean and standard deviation of each population. A Kruskal-Wallis test was used to assess differences among the populations, and p-values for comparisons among the groups in panels A, B, C, and D were 0.0060, 0.0050, 0.3459, and 0.3072, respectively. For comparison between specific groups, Dunn’s multiple comparisons test was performed (* p < .05, ** p < .01).
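The adjective-to-score conversion in panel A can be sketched as a simple lookup and average. The caption fixes only the endpoints (1 = Exceptional, 5 = Below Average); the intermediate values below are assumed to follow the listed order of the UNC adjectives, and the example letters are invented.

```python
# Sketch (assumption, not the authors' code) of the recommender-rating
# conversion in Fig 2A: each UNC adjective maps to a numeric score
# (1 = strongest); a student's letters are then averaged.
RATING_SCALE = {
    "Exceptional": 1,
    "Outstanding": 2,   # intermediate values assumed from the listed order
    "Very Good": 3,
    "Average": 4,
    "Below Average": 5,
}

def mean_recommender_score(adjectives):
    """Average numeric rating over a student's recommendation letters."""
    scores = [RATING_SCALE[a] for a in adjectives]
    return sum(scores) / len(scores)

# Hypothetical student with three letters:
print(mean_recommender_score(["Exceptional", "Outstanding", "Very Good"]))  # → 2.0
```

A lower mean score thus indicates stronger letters, which is why the most productive (3+) group in panel A sits toward 1 on this scale.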