Joshua R Wortzel1, Brandon E Turner2, Brannon T Weeks3, Christopher Fragassi1, Virginia Ramos1, Thanh Truong4, Desiree Li4, Omar Sahak4, Thomas G O'Connor1. 1. Department of Psychiatry, University of Rochester, Rochester, NY, United States of America. 2. Department of Radiation Oncology, MGH, Harvard University, Boston, MA, United States of America. 3. Department of Gynecology and Obstetrics, MGH, Harvard University, Boston, MA, United States of America. 4. Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States of America.
Abstract
Whereas time trends in the epidemiologic burden of US pediatric mental health disorders are well described, little is known about trends in how these disorders are studied through clinical research. We identified how funding source, disorders studied, treatments studied, and trial design changed over the past decade in US pediatric mental health clinical trials. We identified all US pediatric interventional mental health trials submitted to ClinicalTrials.gov between October 1, 2007 and April 30, 2018 (n = 1,019) and manually characterized disorders and treatments studied. We assessed trial growth and design characteristics by funding source, treatments, and disorders. US pediatric mental health trials grew over the past decade (compound annual growth rate [CAGR] 4.1%). The number of studies funded by industry and US government remained unchanged, whereas studies funded by other sources (e.g., academic medical centers) grew (CAGR 11.3%). Neurodevelopmental disorders comprised the largest proportion of disorders studied, and Non-DSM-5 (Diagnostic and Statistical Manual-5) conditions was the only disorder category to grow (14.5% to 24.6%; first half to second half of decade). There was significant growth of trials studying non-psycho/pharmacotherapy treatments (33.8% to 49.0%) and a decline in trials studying pharmacotherapies (31.7% to 20.6%), though these trends differed by funding source. There were also notable differences in funding sources and treatments studied within each disorder category. Trials using double blinding declined (26.2% to 18.0%). Limitations include that ClinicalTrials.gov is not an exhaustive list of US clinical trials, and trends identified may in part reflect changes in trial registration rather than changes in clinical research. Nevertheless, ClinicalTrials.gov is among the largest databases available for evaluating trends and patterns in pediatric mental health research that might otherwise remain unassessable. 
Understanding these trends can guide researchers and funding bodies when considering the trajectory of the field.
Time trend data are fundamental to epidemiological research [1], and they are widely studied in psychiatry and psychology [2, 3]. Increases in the prevalence of pediatric mental health conditions are significant and extend to multiple psychiatric disorders [4]. For example, in the 1960s, one in 2,500 children was diagnosed with autism [5], yet by 2014, this number was as high as one in every 59 children in the United States [6]. Current estimates of the prevalence of attention deficit hyperactivity disorder (ADHD) are between 4–12% in school-aged children, representing a 24% increase since 2001 [7]. The number of US children diagnosed with either depression or anxiety has also increased, from 5.4% in 2003 to 8.4% in 2012 [8]. The extent to which these trends reflect changes in assessment tools and diagnostic sensitivity is subject to some debate; however, in contrast to the epidemiology, little is known about accompanying time trends in pediatric mental health clinical research.

A reliable source of information on time trends in clinical research is the National Institutes of Health's ClinicalTrials.gov registry, which was created in 2000 [9]. ClinicalTrials.gov has become one of the largest registries for clinical research internationally, and it currently contains detailed information on more than 365,000 clinical studies conducted in over 210 countries. Over 300 research articles have utilized ClinicalTrials.gov to characterize trends in clinical research, including studies assessing trends in trial design, trial funding, and disorders and treatments studied [10-17].

Several studies have utilized the ClinicalTrials.gov registry to identify trends in mental health trials [14-17].
In their analysis of all mental health trials in the registry from 2007 to 2014, Arnow and colleagues reported that universities and hospitals funded the majority of mental health trials, that most industry-funded trials studied pharmacotherapies, and that government-funded studies targeted behavioral interventions more than pharmacotherapies [16]. Similarly, in their analysis of mental health clinical research from 2007 to 2018, Wortzel and colleagues found a significant decline in funding from industry and US government sources and a significant increase in funding from academic medical centers and hospitals. They also noted a decline in the proportion of mental health trials using blinding and oversight by data monitoring committees, which occurred in the context of an increasing proportion of trials studying behavioral and non-pharmacological interventions. In addition, there was significant growth of trials studying Non-DSM-5 (Diagnostic and Statistical Manual-5) conditions.

While Wortzel and colleagues identified that over 16% of US mental health trials registered in ClinicalTrials.gov were conducted in pediatric populations, their study did not further parse trends in this patient population [17]. A limited number of studies have identified trends in pediatric mental health clinical trials, and these have focused on trends within the published literature concerning specific treatment types. For example, systematic reviews have shown that an increasing number of trials have studied mobile apps and resilience-focused school-based interventions to treat psychological distress and wellbeing in children [18, 19]. Meta-analyses have also explored the growth of trials studying mindfulness techniques in the treatment of child and adolescent mental health and cognitive disorders [20, 21].
These reviews provide valuable perspectives on clinical research developments within each of these specific subfields of pediatric mental health; however, they do not identify larger trends across all pediatric mental health clinical research, such as changes in funding, disorders studied, or how the research in each subfield has changed relative to others being studied. Considering that nearly 50% of pediatric trials are discontinued or remain unpublished within 58 months of trial completion, these analyses of published data are also likely missing important nuances in the trends of pediatric mental health clinical research being conducted [22]. Therefore, fundamental gaps still exist in our understanding of current trends in pediatric mental health trials, and these might be addressed through an analysis of a large, national clinical trials registry such as ClinicalTrials.gov.

In the current study, we applied the established methodology of Wortzel and colleagues to examine time trends in clinical trials specific to pediatric mental health [17]. We evaluated trends in the funding, disorders and treatments studied, and trial design characteristics of US pediatric mental health trials registered in ClinicalTrials.gov from 2007 to 2018 and discuss their significance.
Materials and methods
Data selection and classification
On April 30, 2018, we downloaded records for all 274,029 trials submitted to ClinicalTrials.gov as of that date using the Aggregate Analysis of ClinicalTrials.gov (AACT), a relational database of publicly available ClinicalTrials.gov data [23]. Trials submitted to the registry on or after October 1, 2007 were selected to coincide with the passage of the Food and Drug Administration Amendments Act (FDAAA) on September 27, 2007, which mandated that all United States non-phase-1 trials involving US Food and Drug Administration (FDA) regulated drug and biological products, as well as non-feasibility trials of FDA-regulated devices, report to a clinical trials registry [10]. We further selected trials labeled interventional in the registry to correspond to clinical trials in which participants were assigned to receive interventions, pharmacological or non-pharmacological, based on a protocol [14]. A psychiatrist reviewed the list of all Medical Subject Headings (MeSH) and Disease Condition terms in the ClinicalTrials.gov registry, and those terms deemed relevant to mental health were selected and reviewed by another physician. The full list of the MeSH and Disease Condition terms used for this analysis has been published elsewhere [17]. The trials utilizing these terms were divided among six psychiatrists who manually reviewed the official title and study descriptions to (i) identify trials relevant to pediatric mental health (i.e., the trial description discussed studying 'children', 'adolescents', or patients ≤18 years old), (ii) categorize these trials according to the disorder index categories in Section II (Diagnostic Criteria and Codes) of the DSM-5 [17, 24], and (iii) categorize them by treatment type. A sample of 250 trials was reviewed by all six psychiatrists to ensure agreement on the labeling criteria. Trial categorizations with any ambiguity were marked and then reviewed and clarified by another psychiatrist.
Because requirements for trial registration vary significantly by country, only trials with research sites exclusively within the United States were included in this analysis.
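The selection filters described above (interventional trials, submitted between October 1, 2007 and April 30, 2018, with research sites exclusively in the US) can be sketched as follows. This is a hypothetical illustration, not the study's code: the paper's analysis was performed in R against the AACT relational database, and the column names used here (`study_type`, `submitted_date`, `countries`) are assumptions rather than the actual AACT schema.

```python
# Hypothetical sketch of the study-selection filters; column names
# are assumptions, not the actual AACT schema.
import pandas as pd

FDAAA_CUTOFF = pd.Timestamp("2007-10-01")   # FDAAA-era registration begins
DOWNLOAD_DATE = pd.Timestamp("2018-04-30")  # date the registry was downloaded

def select_trials(df: pd.DataFrame) -> pd.DataFrame:
    """Keep interventional trials submitted on/after the FDAAA cutoff
    with research sites exclusively in the United States."""
    df = df[df["study_type"] == "Interventional"]
    df = df[(df["submitted_date"] >= FDAAA_CUTOFF)
            & (df["submitted_date"] <= DOWNLOAD_DATE)]
    # 'countries' holds the list of countries with research sites
    return df[df["countries"].apply(lambda c: set(c) == {"United States"})]
```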
Changes to the initial protocol
We developed the original protocol for our analysis in April 2019. This protocol was not pre-registered. The original analysis was conducted in May 2019. However, after receiving reviewers' feedback for a separate analysis of the portfolio of all mental health clinical trials registered in ClinicalTrials.gov [17], we subsequently applied these suggestions to our analysis in pediatric mental health trials and modified our protocol accordingly. These changes to the protocol are detailed in S1 Table (items 1–4). In brief, we made four changes. First, while we initially analyzed both US and international studies, we ultimately decided to limit the analysis to only US trials, as has been previously published [10, 13, 25]. This is because trial registration practices differ significantly by country, and there was concern that inclusion of international trials would confound our results (i.e., observed trends could be due to true regional differences in trial characteristics or differences in regional trial registration). Second, our initial analysis excluded the ClinicalTrials.gov funder category US Fed, as has been previously published [14], because the US Fed category comprises only 3.5% of trials in the registry. However, we subsequently combined US Fed-funded trials with NIH-funded trials to form a new funder category called 'US Govt' to better capture changes in US government-funded trials. Third, we initially clustered Phase 1/2 and Phase 2/3 trials under the phase category 'Not Applicable'; however, in our revised analysis we grouped these trials with Phase 2 and Phase 3 trials, respectively, as these trials were deemed to have ultimately reached Phase 2 and Phase 3 status. Fourth, we included the citations of Arnow and colleagues and Wortzel and colleagues in our revised protocol, as these analyses contextualized our study and motivated several changes made to the revised analysis [16, 17].
The revised protocol was created in April 2020, and the revised analysis in accordance with this protocol was completed in May 2020.

After incorporating feedback from reviewers of this manuscript, we made two additional changes to the protocol in January 2021 [16, 17]. These are also detailed in S1 Table (items 5–6). First, there was concern that two of the terms used to label trial treatments were unclear and contributed to confusion when interpreting the results. The term 'Interventional' was changed to 'Stimulation' to describe electroconvulsive therapy, deep brain stimulation, and transcranial magnetic stimulation. The term 'Alternative' was changed to 'Non-Psycho/Pharmacotherapy' to describe interventions that did not fall into the categories of 'Pharmacotherapy', 'Psychotherapy', or 'Stimulation'. Second, in the original analysis, an alpha threshold of 0.01 was used. However, a more stringent threshold of α = 0.005 was used in the revision, as has been previously published [26].
Trial characteristics
We analyzed each trial on 11 dimensions:

1. Date of submission (dates ranged from October 1, 2007 to April 30, 2018). We divided our 127-month study period at the approximate midpoint into a 63-month 'Early' period (October 1, 2007 to December 31, 2012) and a 64-month 'Late' period (January 1, 2013 to April 30, 2018). Time of submission was assessed as a dichotomous variable using these groupings to examine proportional changes in trial characteristics. Monotonic growth trends were assessed by grouping trials by year of submission. All year-to-year analyses included only years with a full 12-month collection of data (i.e., 2008–2017).

2. Trial primary objective (categories included 'Treatment', 'Prevention', 'Supportive Care', and 'Other'). 'Treatment', 'Prevention', and 'Supportive Care' were categories taken directly from the categorization in ClinicalTrials.gov. 'Other' was generated by combining the category Other in ClinicalTrials.gov with the categories Diagnostic, Health Services Research, Screening, and Basic Science, which together comprised 10.0% of trials. 'Treatment' denotes trials in which one or more interventions were assessed to treat a disease, syndrome, or condition. 'Prevention' denotes trials in which one or more interventions were evaluated to prevent the development of a specific disease or health condition. 'Supportive Care' denotes trials in which one or more interventions were examined to maximize comfort, minimize side effects, or mitigate decline in participants' health or function.

3. Trial phase (categories included 'Phase 1', 'Phase 1/2–2', 'Phase 2/3–3', 'Phase 4', and 'Not Applicable'). 'Phase 1' was generated by grouping the ClinicalTrials.gov categories Early Phase 1 and Phase 1. 'Phase 1/2–2' was generated by grouping the ClinicalTrials.gov categories Phase 1/2 and Phase 2. 'Phase 2/3–3' was generated by grouping the ClinicalTrials.gov categories Phase 2/3 and Phase 3. 'Phase 4' and 'Not Applicable' were taken directly from the corresponding categories in ClinicalTrials.gov. Of note, 'Not Applicable' does not refer to missing data but rather to trials without FDA-defined phases, such as trials studying devices or behavioral interventions.

4. Number of arms (grouped by range: 'One', 'Two', or '≥Three'). Number of arms, as reported in ClinicalTrials.gov, was grouped and treated as a nominal variable using these categories.

5. Blinding (categories included 'None', 'Single', and 'Double'). The category 'Blinding' was generated from the category Masking in ClinicalTrials.gov.

6. Use of randomization (category included 'Yes'). This was taken directly from the categorization in ClinicalTrials.gov.

7. Oversight by a data monitoring committee (DMC) (category included 'Yes'). This was taken directly from the categorization in ClinicalTrials.gov.

8. Number of sites (categories included 'One', 'Two', 'Three–Ten', and '>Ten'). Number of sites, as reported in ClinicalTrials.gov, was grouped and treated as a nominal variable using these categories.

9. Funding source (categories included 'Industry', 'United States Government [US Govt]', and 'Academic Medical Centers/Hospitals/Others [AMC/Hosp/Oth]'). The category 'Industry' was taken directly from the categorization in ClinicalTrials.gov. The category 'US Govt' was generated from the ClinicalTrials.gov categories NIH and US Fed, as previously described [16, 17]. The Other category is primarily composed of academic institutions and hospitals, with a minority of charities and foundations, which is why the label 'Academic Medical Centers/Hospitals/Others' is used [16, 17]. We used a hierarchical funder designation, such that trials with any industry involvement were labeled 'Industry', any remaining trials with US government involvement were labeled 'US Govt', and all remaining trials were labeled 'AMC/Hosp/Oth'. This method was used to capture the influence of industry and government on trial characteristics, as has been previously published in analyses of the registry [12–14, 17].

10. Treatment type (categories included 'Non-Psycho/Pharmacotherapy', 'Stimulation', 'Pharmacotherapy', and 'Psychotherapy'). These categories were created manually to identify the treatments studied, and they were further divided into subcategories. For 'Pharmacotherapy', agents tested were grouped according to drug class (e.g., stimulants, antidepressants, antipsychotics). For 'Psychotherapy', therapies were grouped by type (e.g., dialectical behavioral therapy, cognitive behavioral therapy). Trials were labeled 'Stimulation' if they studied deep brain stimulation, transcranial magnetic stimulation, or electroconvulsive therapy. Trials with treatments that did not fit into these three categories were labeled 'Non-Psycho/Pharmacotherapy', which was broken down into subcategories including 'Technology' (e.g., interactive phone applications, videogames), 'Telecommunication' (e.g., telepsychiatry, teletherapy), 'Community Programs' (e.g., afterschool programs, school-wide substance use campaigns), 'Community Outreach' (e.g., assertive community treatment teams, integration of mental health services into pediatric outpatient primary care centers), 'Diet and Exercise' (e.g., nutritional supplements, exercise programs), 'Meditation and Yoga', and 'Basic Science' (e.g., fMRI, genetic profiling, biomarkers). Trials were assigned more than one treatment category (e.g., when pharmacotherapy was compared to or used as an adjunct to psychotherapy) or subcategory (e.g., a phone app with guided meditations) when appropriate, and consequently the percentages of trials by treatment category and subcategory sum to greater than 100%.

11. Disorder categories (categories included 'Anxiety', 'Bipolar', 'Depression', 'Disruptive, Impulse Control, & Conduct', 'Dissociative', 'Feeding & Eating', 'Gender Dysphoria', 'Neurocognitive', 'Neurodevelopment', 'Obsessive-Compulsive', 'Paraphilic', 'Personality', 'Schizophrenia Spectrum', 'Sexual Dysfunction', 'Sleep-Wake', 'Somatic Symptom', 'Substance & Addiction', 'Trauma & Stressor', and 'Non-DSM-5 Conditions'). These categories were created manually to identify the disorders studied. Trials that identified disorders by DSM-IV or DSM-IV-TR diagnostic nomenclature were reclassified using equivalent terms in the DSM-5. Trials that did not clearly match any DSM-5 categories were marked 'Non-DSM-5 Conditions.' Given the significant number of trials (n = 206) in this category, we further subcategorized the 'Non-DSM-5 Conditions' (S2 Table). Trials were labeled with as many categories as were relevant, and consequently the percent of trials by disorder category sums to greater than 100%.
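The multi-label counting noted above, where a trial may carry several treatment (or disorder) categories so that category percentages sum to more than 100%, can be illustrated with a minimal sketch. The trials and labels below are hypothetical, and the study's actual analysis was done in R.

```python
# Minimal sketch of multi-label category counting (hypothetical trials).
from collections import Counter

trials = [
    {"id": 1, "treatments": {"Pharmacotherapy", "Psychotherapy"}},
    {"id": 2, "treatments": {"Non-Psycho/Pharmacotherapy"}},
    {"id": 3, "treatments": {"Psychotherapy"}},
]

# Count each category once per trial that carries it.
counts = Counter(cat for trial in trials for cat in trial["treatments"])

# Percentages are computed against the TOTAL trial count, so a trial
# with two labels (trial 1) is counted in two categories and the
# percentages sum to more than 100%.
percent = {cat: 100 * n / len(trials) for cat, n in counts.items()}
```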
Statistical analysis
We analyzed trial data using descriptive statistics. Because certain fields are optional in ClinicalTrials.gov, approximately 5% of trials had missing data, and, consequently, the total number of trials varies slightly between fields. Trials with missing data were excluded from the affected analyses. The sample size for each trial characteristic is reported in the tables to note when trial number varies due to exclusion of trials with missing data. We assessed for differences between the distributions of categorical variables of trial characteristics using two-sided Pearson χ2 tests. We assessed the statistical significance of monotonic trends over time (i.e., compound annual growth rates [CAGR]) using post-hoc Mann-Kendall tests to test the null hypothesis that the number of trials did not change over time. While no specific reporting guidelines have been developed for this type of analysis of trial registries, we adhered to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guidelines for cross-sectional studies [27].

Due to the number of effects explored, we focused on results which achieved statistical significance at the α = 0.005 level, as has been previously published [26]. All analyses were performed using the R statistical programming language, version 3.5.0 [28]. We used the following R packages: tidyverse [29], ggpubr [30], Kendall [31], and coin [32].
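As a rough illustration of the two tests described above: a Mann-Kendall trend test on annual counts is equivalent to Kendall's tau between the counts and their time index (the paper used the R Kendall package; here SciPy's kendalltau stands in), and the early/late proportional comparisons are Pearson χ2 tests on contingency tables. The annual counts below are hypothetical; the 2×2 table uses the double-blinding counts reported in Table 1 (114/435 early, 104/578 late).

```python
# Illustrative sketch, not the study's R code.
from scipy.stats import kendalltau, chi2_contingency

years = list(range(2008, 2018))
counts = [70, 75, 78, 82, 85, 90, 95, 99, 104, 101]  # hypothetical annual counts

# Mann-Kendall trend test == Kendall's tau against the time index.
tau, p_trend = kendalltau(years, counts)

# Early vs. late proportion of double-blinded trials (Table 1):
# rows = period, cols = (double-blinded, not double-blinded).
table = [[114, 435 - 114],
         [104, 578 - 104]]
# correction=False gives the uncorrected Pearson chi-square.
chi2, p_prop, dof, expected = chi2_contingency(table, correction=False)
```

With these inputs the χ2 p-value lands near the 0.0016 reported for double blinding in Table 1.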
Results
Study selection
There were 274,416 clinical trials registered in ClinicalTrials.gov as of April 30, 2018. We excluded 56,145 trials because they were not interventional (i.e., participants did not receive interventions based on a protocol), and we excluded 38,109 trials because they were submitted prior to October 1, 2007 (i.e., prior to the passing of the FDAAA). Of the remaining 180,162 interventional trials within this time period, we identified 11,176 trials relevant to mental health, and 6,302 of these trials were conducted within the United States. Of these trials, we identified 1,019 US interventional pediatric mental health trials, which comprised 62.7% (1,019/1,626) of global pediatric mental health interventional trials and 16.2% (1,019/6,302) of all US mental health interventional trials in the registry from October 1, 2007 to April 30, 2018 (Fig 1).
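The selection-flow counts reported above can be checked arithmetically; this sketch simply re-derives the stated totals and percentages.

```python
# Arithmetic check of the reported selection flow.
total = 274_416               # trials registered as of April 30, 2018
non_interventional = 56_145   # excluded: not interventional
pre_fdaaa = 38_109            # excluded: submitted before Oct 1, 2007

interventional_in_window = total - non_interventional - pre_fdaaa  # 180,162

us_peds = 1_019          # US pediatric mental health trials
global_peds = 1_626      # global pediatric mental health trials
us_mental_health = 6_302 # all US mental health trials

share_of_global_peds = round(100 * us_peds / global_peds, 1)  # 62.7%
share_of_us_mh = round(100 * us_peds / us_mental_health, 1)   # 16.2%
```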
Fig 1
A flow diagram of inclusion of US pediatric interventional mental health trials registered in ClinicalTrials.gov.
Growth of trials and trial characteristics over time
From 2008 to 2017, the annual number of US pediatric mental health trials increased (CAGR 4.1%, p = 0.0030) (Fig 2A). Annual growth of US pediatric mental health trials differed by funding source (Fig 2B), and the proportion of trials also differed by funding source when the data were stratified into early (2007–2012) and late (2013–2018) time periods (Table 1). Annual growth of US government-funded trials trended downward (CAGR -2.6%, p = 0.10) and proportionally decreased between the early and later periods (172 to 157 trials, 39.0% to 27.2%, p<0.0001). Annual growth of industry-funded trials was not monotonic (CAGR -3.3%, p = 1.0), and the proportion of industry-funded trials did not change substantively (69 to 79 trials, 15.6% to 13.7%, p = 0.37). Trials funded through academic medical center/hospital/other sources grew monotonically (CAGR 11.3%, p = 0.00034) and proportionally (200 to 342 trials, 45.5% to 59.2%, p<0.0001).
Fig 2
Trends in the growth, funding, and disorders and treatments studied in US pediatric mental health trials registered in ClinicalTrials.gov from 2008 to 2017.
All year-to-year analyses included only years with a full 12-month collection of data (i.e., January 1, 2008 – December 31, 2017). (A) Overall growth of US pediatric mental health trials (CAGR 4.1%, p = 0.0030). (B) Growth of US pediatric mental health trials stratified by funder type. Industry CAGR -3.3%, p = 1.0; AMC/Hosp/Oth (Academic Medical Centers/Hospitals/Others) CAGR 11.3%, p = 0.00034; US Govt (US Government) CAGR -2.6%, p = 0.10. (C) Growth of the five most-studied disorder categories in US pediatric mental health. Anxiety (CAGR 3.9%, p = 0.085); Depression (CAGR 1.9%, p = 0.59); Neurodevelopment (CAGR 5.7%, p = 0.21); Non-DSM-5 Conditions (CAGR 11.7%, p = 0.047); Substance & Addiction (CAGR 2.7%, p = 0.46). (D) Growth of US pediatric mental health trials stratified by treatment type. Non-Psycho/Pharmacotherapy (CAGR 11.0%, p = 0.00034); Psychotherapy (CAGR 5.4%, p = 0.039); Pharmacotherapy (CAGR -8.1%, p = 0.37); Stimulation (too few to calculate a meaningful CAGR).
Table 1
Characteristics of US pediatric mental health clinical trials registered in ClinicalTrials.gov from October 1, 2007 to April 30, 2018 stratified by early (2007–2012) and late (2013–2018) time periods.
Trial Characteristics | Early n (%) | Late n (%) | Total n (%) | p-value
Primary Objective | n = 432 | n = 573 | n = 1005 |
  Treatment | 329 (76.2) | 356 (62.1) | 685 (68.2) | <0.0001
  Prevention | 71 (16.4) | 113 (19.7) | 184 (18.3) | 0.18
  Supportive Care | 8 (1.9) | 28 (4.9) | 36 (3.6) | 0.010
  Other | 24 (5.6) | 76 (13.3) | 100 (10.0) | <0.0001
Trial Phases | n = 441 | n = 578 | n = 1019 |
  Phase 1 | 42 (9.5) | 28 (4.8) | 70 (6.9) | 0.0034
  Phase 1/2-2 | 103 (23.4) | 60 (10.4) | 163 (16.0) | <0.0001
  Phase 2/3-3 | 49 (11.1) | 40 (6.9) | 89 (8.7) | 0.019
  Phase 4 | 53 (12.0) | 34 (5.9) | 87 (8.5) | 0.00052
  Not Applicable | 194 (44.0) | 416 (72.0) | 610 (59.9) | <0.0001
Number of Arms | n = 428 | n = 576 | n = 1004 |
  One | 67 (15.7) | 98 (17.0) | 165 (16.4) | 0.57
  Two | 284 (66.4) | 388 (67.4) | 672 (66.9) | 0.74
  ≥Three | 77 (18.0) | 90 (15.6) | 167 (16.6) | 0.32
Blinding | n = 435 | n = 578 | n = 1013 |
  Double | 114 (26.2) | 104 (18.0) | 218 (21.5) | 0.0016
  Single | 137 (31.5) | 198 (34.3) | 335 (33.1) | 0.36
  None | 184 (42.3) | 276 (47.8) | 460 (45.4) | 0.084
Randomization | n = 436 | n = 577 | n = 1013 |
  Yes | 349 (80.0) | 450 (78.0) | 799 (78.9) | 0.47
DMC | n = 419 | n = 547 | n = 966 |
  Yes | 203 (48.4) | 221 (40.4) | 424 (43.9) | 0.015
Number of Sites | n = 441 | n = 578 | n = 1019 |
  One | 336 (76.2) | 447 (77.3) | 783 (76.8) | 0.67
  Two | 37 (8.4) | 57 (9.9) | 94 (9.2) | 0.42
  Three-Ten | 40 (9.1) | 43 (7.4) | 83 (8.1) | 0.35
  >Ten | 28 (6.3) | 31 (5.4) | 59 (5.8) | 0.50
Funder | n = 441 | n = 578 | n = 1019 |
  Industry | 69 (15.6) | 79 (13.7) | 148 (14.5) | 0.37
  AMC/Hosp/Oth | 200 (45.4) | 342 (59.2) | 542 (53.2) | <0.0001
  US Govt | 172 (39.0) | 157 (27.2) | 329 (32.3) | <0.0001
Treatment Type | n = 441 | n = 578 | n = 1019 |
  Psychotherapy | 203 (46.0) | 236 (40.8) | 439 (43.1) | 0.11
  Pharmacotherapy | 140 (31.7) | 119 (20.6) | 259 (25.4) | <0.0001
  Stimulation | 4 (0.9) | 5 (0.9) | 9 (0.9) | -
  Non-Psycho/Pharmacotherapy | 149 (33.8) | 283 (49.0) | 432 (42.4) | <0.0001
Disorder Category | n = 441 | n = 578 | n = 1019 |
  Anxiety | 33 (7.5) | 64 (11.1) | 97 (9.5) | 0.068
  Bipolar | 19 (4.3) | 24 (4.2) | 43 (4.2) | 1.0
  Depression | 54 (12.2) | 50 (8.7) | 104 (10.2) | 0.076
  Disruptive, Impulse Control, & Conduct | 15 (3.4) | 26 (4.5) | 41 (4.0) | 0.47
  Dissociative | 1 (0.2) | 0 | 1 (0.1) | -
  Feeding & Eating | 12 (2.7) | 10 (1.7) | 22 (2.2) | -
  Gender Dysphoria | 0 | 2 (0.3) | 2 (0.2) | -
  Neurocognitive | 4 (0.9) | 7 (1.2) | 11 (1.1) | -
  Neurodevelopment | 161 (36.5) | 192 (33.2) | 353 (34.6) | 0.30
  Obsessive-Compulsive | 17 (3.9) | 11 (1.9) | 28 (2.7) | -
  Paraphilic | 0 | 1 (0.2) | 1 (0.1) | -
  Personality | 0 | 0 | 0 | -
  Schizophrenia Spectrum | 8 (1.8) | 8 (1.4) | 16 (1.6) | -
  Sexual Dysfunction | 0 | 1 (0.2) | 1 (0.1) | -
  Sleep-Wake | 9 (2.0) | 15 (2.6) | 24 (2.4) | -
  Somatic Symptom | 1 (0.2) | 5 (0.9) | 6 (0.6) | -
  Substance & Addiction | 78 (17.7) | 88 (15.2) | 166 (16.3) | 0.33
  Trauma & Stressor | 17 (3.9) | 21 (3.6) | 38 (3.7) | 0.99
  Non-DSM-5 Condition | 64 (14.5) | 142 (24.6) | 206 (20.2) | 0.00010
'AMC/Hosp/Oth' denotes Academic Medical Centers/Hospitals/Other. 'US Govt' denotes United States Government. DMC denotes oversight by a data monitoring committee. Non-DSM-5 conditions were disorders that did not clearly match any Diagnostic and Statistical Manual-5 disorder categories. Of note, the total number of trials varies slightly by category, as approximately 5% of trials had missing data for one or more dimensions (n provided for each category). For the disorder and treatment categories, trials were labeled with as many categories as were relevant, and consequently the total percentage of trials by disorder and treatment categories sums to greater than 100%. For the 11 diagnostic categories that had fewer than 30 trials, we did not calculate χ2 values (represented as dashes). The same was true for the treatment 'Stimulation', which had fewer than 30 trials. All p-values are from two-sided Pearson χ2 tests.
Of the five most-studied disorders (Fig 2C; Table 1), only Non-DSM-5 conditions trended towards monotonic growth (CAGR 11.7%, p = 0.047) and grew proportionally (14.5% to 24.6%, p = 0.00010). None of the other conditions grew monotonically (Anxiety CAGR 3.9%, p = 0.085; Depression CAGR 1.9%, p = 0.59; Neurodevelopment CAGR 5.7%, p = 0.21; Substance & Addiction CAGR 2.7%, p = 0.46) or proportionally (all: p>0.005). Growth of pediatric mental health trials also differed by treatment type (Fig 2D; Table 1). Trials studying non-psycho/pharmacotherapy treatments grew monotonically (CAGR 11.0%, p = 0.00034), and trials studying psychotherapy trended towards growth (CAGR 5.4%, p = 0.039). The overall proportion of trials studying psychotherapy trended downward (46.0% to 40.8%, p = 0.11), while the proportion of trials studying non-psycho/pharmacotherapy treatments grew significantly (33.8% to 49.0%, p<0.0001). The proportion of trials studying pharmacotherapy declined between the early and late periods (31.7% to 20.6%, p<0.0001), though this trend was not monotonic (CAGR -8.1%, p = 0.37). Only nine stimulation trials were conducted during this time; there were too few trials to calculate a meaningful CAGR, and there was no proportional change.

There were multiple changes in trial design characteristics between the early and late time periods (Table 1). Trial objectives shifted away from treatment (76.2% to 62.1%, p<0.0001) and trended towards supportive care (1.9% to 4.9%, p = 0.010) and other objectives (5.6% to 13.3%, p<0.0001). There was a significant increase in the proportion of trials with 'Not Applicable' phase designation (44.0% to 72.0%, p<0.0001), a decline in trials with double blinding (26.2% to 18.0%; p = 0.0016), and a trend towards declining oversight by a data monitoring committee (DMC; 48.4% to 40.4%; p = 0.015).
There were no significant changes in the proportions of trials with multiple arms, the number of study sites, or trials using randomization (all: p>0.005).
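The growth statistics reported throughout are compound annual growth rates: over n year-to-year intervals, CAGR = (final/initial)^(1/n) − 1. A minimal sketch with invented annual counts (not the study's data; the accompanying p-values in the text come from separate trend tests not shown here):

```python
# Compound annual growth rate (CAGR) from first- and last-year counts.
# The counts below are invented for illustration, not the study's data.
def cagr(first: float, last: float, n_intervals: int) -> float:
    """Constant annual rate that grows `first` into `last` over n intervals."""
    return (last / first) ** (1.0 / n_intervals) - 1.0

# Ten full calendar years (2008-2017) span nine year-to-year intervals.
growth = cagr(80, 115, 2017 - 2008)
print(f"CAGR: {growth:.1%}")  # about 4.1% for these illustrative counts
```

Note that a CAGR summarizes only the endpoints; it says nothing about whether growth was monotonic in between, which is why the paper reports monotonicity separately.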
Disorders studied
The top five disorder categories studied (neurodevelopment, substance & addiction, depression, anxiety, and Non-DSM-5 conditions) comprised 90.8% of pediatric mental health trials (Fig 3A). The remaining 14 disorder categories were studied in 23.0% of trials. Trials were labeled with as many disorder categories as were relevant, and consequently the total percentage of trials by disorder category sums to greater than 100%. There were marked differences in the proportions of treatment types studied in each disorder category (Fig 3B). For example, trials studying neurodevelopmental (Pharm 40.3%, Psycho 28.8%, Non-Psycho/Pharm 30.4%), bipolar (Pharm 52.2%, Psycho 21.7%, Non-Psycho/Pharm 26.1%), and obsessive-compulsive (Pharm 40.0%, Psycho 48.6%, Non-Psycho/Pharm 11.4%) disorders included relatively high proportions of pharmacotherapies. Conversely, trials studying disorder categories such as substance & addiction (Pharm 8.7%, Psycho 43.9%, Non-Psycho/Pharm 47.4%), depression (Pharm 12.0%, Psycho 44.4%, Non-Psycho/Pharm 37.6%), anxiety (Pharm 13.3%, Psycho 53.3%, Non-Psycho/Pharm 33.3%), disruptive, impulse control, & conduct (Pharm 4.1%, Psycho 53.1%, Non-Psycho/Pharm 42.9%), trauma & stressor (Pharm 6.4%, Psycho 51.1%, Non-Psycho/Pharm 42.6%), and Non-DSM-5 conditions (Pharm 9.4%, Psycho 33.2%, Non-Psycho/Pharm 54.3%) studied relatively few pharmacotherapies compared to psychotherapy and non-psycho/pharmacotherapy treatments. Stimulation trials comprised ≤6% of trials across all disorder categories.
Fig 3
US pediatric mental health trials registered in ClinicalTrials.gov from October 1, 2007 to April 30, 2018 stratified by disorder category.
(A) Number (percentage) of US pediatric mental health trials by disorder category. Trials were labeled with as many disorder categories as were relevant, and consequently the total percentage of trials by disorder category sums to greater than 100%. (B) Number (percentage) of treatments studied in US pediatric mental health trials by disorder category. Percentages were calculated for the proportion of treatments studied for each disorder category; therefore, each category sums to 100%. (C) Number (percentage) of funders of US pediatric mental health trials by disorder categories. Percentages were calculated for the proportion of funders for each disorder category; therefore, each category sums to 100%.
There were marked differences in the proportions of disorder categories studied by each funding source (Fig 3C). Academic medical centers/hospitals/others funded the largest proportion of trials in almost all disorder categories. Industry funded the smallest proportion of trials in all disorder categories except for neurodevelopment (Ind 29.2%, AMC/Hosp/Oth 49.6%, US Govt 21.2%). The emphasis each funder placed on studying each disorder type also differed (Table 2). For example, industry devoted the majority of its trials to studying neurodevelopment (69.6%), which was a significantly larger proportion compared to the other funders (Ind 69.6%, AMC/Hosp/Oth 32.3%, US Govt 22.8%; p<0.0001). Academic medical centers/hospitals/others funded the largest proportion of trials studying Non-DSM-5 conditions (Ind 8.1%, AMC/Hosp/Oth 23.4%, US Govt 20.4%; p = 0.00021) and anxiety disorders (Ind 2.7%, AMC/Hosp/Oth 11.4%, US Govt 9.4%; p = 0.0058), and it devoted the largest proportion of its trials to neurodevelopment (32.3%).
The US government devoted the largest proportion of trials to studying substance & addiction (Ind 4.7%, AMC/Hosp/Oth 14.4%, US Govt 24.6%; p<0.0001), and it devoted a large proportion of its trials to Non-DSM-5 conditions (20.4%) and neurodevelopment (22.8%).
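Funder-by-disorder comparisons of this kind reduce to a Pearson χ² test on a contingency table of counts. Below is a sketch in plain Python using the substance & addiction row of Table 2 (trials studying the disorder vs. not, per funder). The paper states only that two-sided Pearson χ² tests were used, so this shows the generic form, not the authors' exact code:

```python
import math

def pearson_chi2(table):
    """Pearson chi-square statistic and degrees of freedom for an r x c table."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_tot[i] * col_tot[j] / total
            stat += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

# Substance & addiction counts from Table 2: studied vs. not studied, per funder.
table = [
    [7, 148 - 7],     # Industry
    [78, 542 - 78],   # AMC/Hosp/Oth
    [81, 329 - 81],   # US Govt
]
chi2, dof = pearson_chi2(table)
p = math.exp(-chi2 / 2)  # chi-square survival function, exact when dof == 2
print(f"chi2 = {chi2:.1f}, dof = {dof}, p < 0.0001: {p < 0.0001}")
```

For this table the statistic is large (roughly 33 on 2 degrees of freedom), consistent with the p<0.0001 reported in Table 2.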
Table 2
Disorders and treatments studied in US pediatric mental health clinical trials registered in ClinicalTrials.gov from October 1, 2007 to April 30, 2018 stratified by funder type.
Trial Characteristics | Industry n (%) | AMC/Hosp/Oth n (%) | US Govt n (%) | p-value
Disorder Category Studied | n = 148 | n = 542 | n = 329 |
Anxiety | 4 (2.7) | 62 (11.4) | 31 (9.4) | 0.0058
Bipolar | 4 (2.7) | 27 (5.0) | 12 (3.6) | 0.39
Depression | 8 (5.4) | 54 (10.0) | 42 (12.8) | 0.047
Disruptive, Impulse Control, & Conduct | 1 (0.7) | 26 (4.8) | 14 (4.3) | 0.075
Dissociative | 1 (0.7) | 0 | 0 | -
Feeding & Eating | 0 | 16 (3.0) | 6 (1.8) | -
Gender Dysphoria | 0 | 0 | 2 (0.6) | -
Neurocognitive | 1 (0.7) | 8 (1.5) | 2 (0.6) | -
Neurodevelopment | 103 (69.6) | 175 (32.3) | 75 (22.8) | <0.0001
Obsessive-Compulsive | 5 (3.4) | 16 (3.0) | 7 (2.1) | -
Paraphilic | 0 | 0 | 1 (0.3) | -
Personality | 0 | 0 | 0 | -
Schizophrenia Spectrum | 3 (2.0) | 5 (0.9) | 8 (2.4) | -
Sexual Dysfunction | 0 | 1 (0.2) | 0 | -
Sleep-Wake | 6 (4.1) | 12 (2.2) | 6 (1.8) | -
Somatic Symptom | 0 | 4 (0.7) | 2 (0.6) | -
Substance & Addiction | 7 (4.7) | 78 (14.4) | 81 (24.6) | <0.0001
Trauma & Stressor | 1 (0.7) | 22 (4.1) | 15 (4.6) | 0.098
Non-DSM-5 Conditions | 12 (8.1) | 127 (23.4) | 67 (20.4) | 0.00021
Treatment Type Studied | n = 148 | n = 542 | n = 329 |
Psychotherapy | 12 (8.1) | 255 (47.0) | 172 (52.3) | <0.0001
Pharmacotherapy | 115 (77.7) | 96 (17.7) | 48 (14.6) | <0.0001
Stimulation | 1 (0.7) | 5 (0.9) | 3 (0.9) | -
Non-Psycho/Pharmacotherapy | 34 (23.0) | 246 (45.4) | 152 (46.2) | <0.0001
‘AMC/Hosp/Oth’ denotes Academic Medical Centers/Hospitals/Other. ‘US Govt’ denotes United States Government. Non-DSM-5 conditions were disorders that did not clearly match any Diagnostic and Statistical Manual-5 (DSM-5) disorder categories. Trials were labeled with as many categories as were relevant, and consequently the total percentage of trials by disorder and treatment categories sums to greater than 100%. For the 11 diagnostic categories that had fewer than 30 trials, we did not calculate χ2 values (represented as dashes); the same was true for the treatment ‘Stimulation’, which had fewer than 30 trials. All p-values are from two-sided Pearson χ2 tests.
Treatments studied
Fig 4A shows that non-psycho/pharmacotherapy and psychotherapy treatments were studied roughly equally (42.4% and 43.1%, respectively), pharmacotherapies were studied in 25.4% of trials, and stimulation treatments comprised only 0.9% of trials. Treatment types were further broken down into subcategories for the non-psycho/pharmacotherapy, pharmacotherapy, and psychotherapy treatment categories (Fig 4B–4D). The largest proportions of non-psycho/pharmacotherapy treatments were community interventions (Community Programs: 21.1%, e.g., afterschool programs and substance use prevention campaigns; Community Outreach: 7.4%, e.g., assertive community treatment teams and integration of mental health services into primary care) and technological interventions (Technology: 16.8%, e.g., interactive phone applications and video games; Telecommunication: 10.5%, e.g., telepsychiatry/teletherapy), which together comprised nearly two-thirds of the category (Fig 4B). Stimulants comprised the largest subcategory of pharmacotherapies studied (26.3%), followed by antipsychotics (14.3%) and antidepressants (12.7%). Educational and behavioral interventions comprised the majority of psychotherapy interventions (59.7%), followed by cognitive behavioral therapy (23.7%).
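Because a trial can carry several treatment labels at once, the subcategory percentages above are tallied per label against the category's trial count, which is why they can legitimately sum past 100%. A toy illustration with invented labels (not the study data):

```python
from collections import Counter

# Invented multi-label records: each trial may study several treatment subtypes.
trials = [
    {"Community Programs", "Technology"},
    {"Technology"},
    {"Telecommunication", "Community Outreach"},
    {"Community Programs"},
]
label_counts = Counter(label for trial in trials for label in trial)
n_trials = len(trials)
shares = {label: 100.0 * count / n_trials for label, count in label_counts.items()}
# Per-label percentages: because labels overlap, the values sum to more than 100%.
print(shares, sum(shares.values()))
```

With these four toy trials carrying six labels, the per-label shares sum to 150%, mirroring how the study's disorder and treatment breakdowns exceed 100%.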
Fig 4
US pediatric mental health trials registered in ClinicalTrials.gov from October 1, 2007 to April 30, 2018 stratified by treatment type.
(A) US pediatric mental health trials stratified by treatment type. Trials were labeled with as many treatment categories as were relevant, and consequently the total percentage of trials by treatment category sums to greater than 100%. (B-D) Treatment categories (i.e., non-psycho/pharmacotherapy, pharmacotherapy, and psychotherapy) for US pediatric mental health trials stratified into subtypes. Trials were labeled with as many treatment subtypes as were relevant, and consequently the total percentage of trials by subcategory sums to greater than 100%. Acronyms: AChEI (acetylcholinesterase inhibitors), CBT (cognitive behavioral therapy), DBT (dialectical behavioral therapy). (E) Treatment types studied in US pediatric mental health trials stratified by funder type. Percentages were calculated for the proportion of funders for each treatment category; therefore, each category sums to 100%.
The types of treatments studied also differed by funding source (Fig 4E). Overall, industry funded the largest percentage of trials studying pharmacotherapies (Ind 44.4%, AMC/Hosp/Oth 37.1%, US Govt 18.5%), and academic medical centers/hospitals/others funded the largest percentage of trials studying psychotherapy (Ind 2.7%, AMC/Hosp/Oth 58.1%, US Govt 39.2%) and non-psycho/pharmacotherapy treatments (Ind 7.9%, AMC/Hosp/Oth 56.9%, US Govt 35.2%). There were too few stimulation trials to make a meaningful comparison of funder types. The emphasis each funder placed on studying each treatment type also differed (Table 2). Industry devoted a substantially larger proportion of its trials to studying pharmacotherapy compared to the other funders (Ind 77.7%, AMC/Hosp/Oth 17.7%, US Govt 14.6%; p<0.0001).
Conversely, academic medical center/hospital/other and US government funders devoted a larger proportion of their trials to psychotherapies (Ind 8.1%, AMC/Hosp/Oth 47.0%, US Govt 52.3%; p<0.0001) and non-psycho/pharmacotherapy treatments (Ind 23.0%, AMC/Hosp/Oth 45.4%, US Govt 46.2%; p<0.0001) compared to industry (Table 2). Stimulation trials comprised a similar proportion of all three funder types (Ind 0.7%, AMC/Hosp/Oth 0.9%, US Govt 0.9%).
Discussion
This study described the landscape of, and changes in, contemporary US pediatric mental health trials in the ClinicalTrials.gov registry over the past decade. There were multiple primary findings. US pediatric mental health trials grew over the past decade. The number of academic medical center/hospital/other-funded trials grew, while the number of industry- and US government-funded trials remained unchanged. Neurodevelopmental disorders comprised the largest proportion of disorders studied; trials studying Non-DSM-5 conditions comprised the only disorder category to grow. The disorders studied differed by funding source. There was significant growth of trials studying non-psycho/pharmacotherapy treatments during this time period, with a proportional decline of trials studying pharmacotherapies. Trial characteristics also changed, with a decline in trials using double blinding.
From 2007 to 2018, pediatric mental health trials grew (CAGR 4.1%) at approximately twice the rate of all mental health clinical research (CAGR 2.2%) [17]. Growth of both may have been driven by increasing numbers of academic medical centers and hospitals pursuing philanthropic support [33], but the slower growth of all mental health research can be attributed to a general decline in industry and US government funding that did not occur in trials studying pediatric populations [17]. There are several possible explanations for this preserved US government and industry funding for pediatric mental health research. Despite a 42% decrease in the NIMH total budget from 2005 to 2015 [34], the US government started the Autism Center of Excellence and devoted new funds to researching eating disorders and substance use disorders during this time [35, 36]. The government’s emphasis on studying pediatric mental health disorders shaped priorities in industry as well by creating incentives for industry to develop medications for children [37].
Most recently there have been a number of industry-funded pediatric mental health trials studying long-acting stimulants to treat ADHD [38]. This has occurred while industry has significantly divested from researching psychotropic agents to treat disorders such as depression, bipolar disorder, and schizophrenia in adult populations [39].
The preponderance of pediatric mental health trials studying neurodevelopmental disorders (Total 34.6%; Ind 69.6%, AMC/Hosp/Oth 32.3%, US Govt 22.8%) may reflect a response to rising rates of diagnosing autism and ADHD in children [6, 8]; however, it may also be a reaction to industry’s development of new stimulants, which we identified comprised 26.3% of all pediatric medication trials during this time period. Between 2014 and 2018, industry spent over $11 million marketing stimulants to psychiatrists [40]. This coincided with a doubling of stimulant prescriptions from 2006 to 2016 [41], and as of 2013, stimulants comprised the largest grossing class of medications for children [42]. It is worth considering how much of our focus on researching neurodevelopmental disorders is driven by patient need versus market forces. For example, 0.5% to 3.0% of children have obsessive-compulsive disorder (OCD) [43], which is at least as prevalent as autism, yet trials studying OCD comprised only 2.7% of our sample.
Non-DSM-5 conditions comprised the only disorder category to show proportional growth (14.5% to 24.6%) over the past decade. This mirrors what was observed across all mental health clinical research from 2007 to 2018 [17]. One possible explanation for this trend is the growing adoption of the NIMH Research Domain Criteria (RDoC) initiative, which was started in 2009 but only became a component of NIMH grant reviews starting in 2012 to 2013 [44]. This coincides with the observed growth of trials studying Non-DSM-5 conditions starting in 2014 (Fig 2C).
RDoC has been an effort by the NIMH to move away from studying categorical, DSM diagnoses and towards studying dimensional brain systems and endophenotypes, which often cross traditional diagnostic boundaries [45, 46]. RDoC’s focus on exploring neural networks and basic biology may also help explain the trend for pediatric mental health trials to focus less on treatment and more on other primary goals, including basic science [47]. If RDoC is indeed driving this growth in Non-DSM-5 conditions, it would suggest that the initiative has meaningfully shaped how US pediatric mental health diagnoses are characterized in clinical research since 2014.
Another notable trend over the past decade has been growth in the number of pediatric mental health trials studying non-psycho/pharmacotherapy treatments (CAGR 11.0%), as well as the trend towards studying more psychotherapy treatments (CAGR 5.4%). These changes were also observed across all mental health trials during this time period [17]. Further parsing of these two treatment categories shows that the majority of pediatric mental health trials studied community-based, technology, or educational/behavioral interventions (Fig 4B–4D). This emphasis on both non-psycho/pharmacotherapy and psychotherapy treatments may reflect research priorities of the US government and US surgeon general, who have identified “natural settings,” particularly schools, as the most effective places to provide mental health treatment and preventative services to children and adolescents [48, 49]. This is because school-based mental health providers can observe patients in their natural milieu, have easier access to teachers for collateral and psychoeducation, are more accessible to children with limited transportation or support from family, and are felt to be less stigmatizing [50]. Funding for school- and community-based trials has received significant support at the state level.
For example, from 2007–2010, the Minnesota Department of Human Services alone apportioned over $10 million to develop the infrastructure for school-based mental health services [51]. For similar reasons, there has been increasing investment in studies of technological interventions, such as phone apps for telepsychiatry and teletherapy, to facilitate access to mental health care for adolescents in the community [52]. Community outreach programs and technological interventions have proven essential for treating mental health disorders during the COVID-19 global pandemic [53], and these modalities will likely continue growing in importance and prevalence in pediatric mental health clinical research.
The decline in trials using double blinding (26.2% to 18.0%) and the trend towards a decline in use of DMCs (48.4% to 40.4%) likely reflects the proportional increase in trials studying non-psycho/pharmacotherapy treatments. It is often not possible to double- or even single-blind these interventions, and US regulations only require DMC oversight in trials testing new drugs/biologics/devices, in double-blinded studies with considerable risk to patients, or in research with vulnerable populations [54]. A decrease in double-blinded trials and use of DMCs has also been observed in the registry across all mental health clinical research [17].
It is interesting to consider why there are differences among disorder categories regarding the proportions of pharmacotherapy vs non-pharmacotherapy treatments studied in children. For example, in certain disorder categories, such as neurodevelopment, a significantly larger proportion of pharmacotherapies were studied (Pharm 40.3% vs Psycho 28.8% and Non-Psycho/Pharm 30.4%), whereas in other disorders, such as substance & addiction, a larger proportion of trials studied psychotherapy and non-psycho/pharmacotherapy treatments (Pharm 8.7% vs Psycho 43.9% and Non-Psycho/Pharm 47.4%).
This may reflect differences in the accepted effectiveness of certain treatments for different disorders. It is also possible that patient and parent preferences may be driving a shift towards studying psychotherapies for certain conditions. Analyses across diverse clinical settings show that mental health patients, particularly younger patients, express a three-fold preference for psychotherapies over medications [55]. This may be especially true for disorders in which pharmacotherapy and psychotherapy have equal effectiveness [56, 57]. It is also interesting to consider why so few pediatric mental health trials studied stimulation therapies. Pilot studies using transcranial magnetic stimulation to treat ADHD, autism, and depression in children have been promising, and it is likely that there will be growth of trials studying this treatment modality in children over the coming decades [58].
Because ClinicalTrials.gov is a unique and valuable resource to study trends in clinical research, it is important to consider ways it could be modified to improve its usefulness and efficiency as a research tool. For example, it would be helpful if trials reported the monetary contributions of all funding sources, as this would help establish each funder’s relative contribution and influence on trial design and focus. It would be beneficial for trials to identify relevant fields of medicine (e.g., mental health, oncology, cardiology, etc.), as currently this information needs to be gleaned through use of MeSH and Disease Condition terms and manual review of the study titles and descriptions. This process is time consuming, has the potential for error, and contributes to variation among study results. Lastly, it would be helpful for ClinicalTrials.gov to make all data fields mandatory, as missing data can potentially introduce bias into analyses of the registry.
Our study has several possible limitations.
First, ClinicalTrials.gov is not an exhaustive list of all US clinical trials [13]. Phase 1 trials and trials studying non-pharmacologic interventions were not subject to the FDAAA or the Final Rule, so these trials may be underrepresented in the registry [9]. There may be other unknown norms and incentives that also bias the registration of certain trial types. Therefore, trends identified in the registry may at least in part reflect changes in trial registration rather than changes in clinical research. Analyses of the registry are also descriptive in nature and include many comparisons that do not account for potential unknown confounders. As a result, causal relationships cannot be drawn from these data. Nevertheless, analyses of ClinicalTrials.gov have allowed many medical specialties to assess trends in clinical research that might otherwise remain unassessable [12–14, 16, 17], as nearly half of trials run by large sponsors go unpublished [59]. Second, while significant efforts were made to review all key words, titles, and study descriptions to confirm trials’ relevance to mental health, some trials may have been excluded due to missing or mislabeled keywords in the registry. Finally, we looked exclusively at US trials registered in ClinicalTrials.gov. International regulations for trial registration differ by country and were thought to likely confound trends if non-US trials were included in our sample. Consequently, 37.3% of the pediatric mental health trials registered in ClinicalTrials.gov from October 1, 2007 to April 30, 2018 were excluded, and our results cannot be generalized beyond the United States.
In conclusion, this study aims to help provide a mirror to the pediatric mental health community to identify where its clinical research efforts have been and where its efforts appear to be heading.
By observing these trends, researchers and funding bodies may gain an additional perspective to help shape the priorities and resources devoted to future pediatric mental health research to provide new treatments to better meet patients’ needs.
Changes to the initial protocol.
(DOCX)
Non-DSM-5 subcategorization.
(DOCX)
Study protocol.
(DOCX)Click here for additional data file.5 Jan 2021PONE-D-20-34754Trends in US pediatric mental health clinical trials: An analysis of ClinicalTrials.gov from 2007 – 2018PLOS ONEDear Dr. Wortzel,Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.Please submit your revised manuscript by Feb 19 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.Please include the following items when submitting your revised manuscript:A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. 
For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocolsWe look forward to receiving your revised manuscript.Kind regards,Claudio GentiliAcademic EditorPLOS ONEJournal Requirements:When submitting your revision, we need you to address these additional requirements.1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found athttps://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf andhttps://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf2.We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.In your revised cover letter, please address the following prompts:a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. 
For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.We will update your Data Availability statement on your behalf to reflect the information you provide.3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.Additional Editor Comments (if provided):Please consider carefully the issues raised by the two reviewers. Particularly, both highlighted that a the moment the manuscritpt did not fullfil the PLOS one policy for data sharing. I strongly invite the authors to fulfil the policy requirements submitting the revised version of the manuscript[Note: HTML markup is below. Please do not edit.]Reviewers' comments:Reviewer's Responses to QuestionsComments to the Author1. Is the manuscript technically sound, and do the data support the conclusions?The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.Reviewer #1: PartlyReviewer #2: Yes**********2. Has the statistical analysis been performed appropriately and rigorously?Reviewer #1: YesReviewer #2: Yes**********3. Have the authors made all data underlying the findings in their manuscript fully available?The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. 
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.Reviewer #1: NoReviewer #2: No**********4. Is the manuscript presented in an intelligible fashion and written in standard English?PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.Reviewer #1: YesReviewer #2: Yes**********5. Review Comments to the AuthorPlease use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)Reviewer #1: Florian Naudet, MD, PhD, Rennes 1 UniversitySorry for the delay in answering, mostly due to my clinical activity.This is a very interesting paper. I have the following remarks.Abstract:- add some limitations to avoid any spin;Introduction:- add a systematic overview of overlapping projects on adults/child, including in other countries (e.g. a table summarizing evidence prior this study);Methods:- was there a pre-registration? If, no please make it explicit.- Please state in the text the date of the protocol / the date of the analysis / and describe any change to the protocol in a dedicated paragraph;- Please provide more detail about the supportive care category in the method section (and make sure that one can understand how it differs from treatment);- How were managed missing data on clinicaltrials.gov (e.g. related to study phase)? Please add some details?- When it comes to interventions tested, please replace the category “interventional” by another word (e.g. 
“stimulation”) … “Interventional” in my opinion rather refers to an interventional study (versus observational studies) and it could be somewhat misleading (e.g. it is used with this meaning page 5 line 73);- No reason is given for the threshold of alpha=0.01. This would apply in case there are 5 primary outcomes following a Bonferroni correction. It must be justified more adequately.In addition, and, to be provocative, I’m not sure that most of these p-values are needed (as you are describing the complete population, statistical testing does not really make sense in my view).- We need somewhere the details of the “non-DSM” category (in addition, it is the most represented category at some points, e.g. the last years, a list in the methods or a figure in the results may be helpful);Results:- Please add a flow charts and a paragraph about study selection;- Please detail the interrater agreement for data extraction;- Table 1: see my comments about statistical testing.- Please clarify this sentence as the numbers (90.8+23) do not add up to 100 %. I understand that there can be some overlap but it should be explained: “The top five disorder categories studied (neurodevelopment, substance & addiction, depression, anxiety, and Non-DSM-5 conditions) comprised 90.8% of pediatric mental health trials (Fig 2A). The remaining 14 disorder categories were studied in 23.0% of trials.”- Figure 2 is really nice. Congratulation for this figure. Please consider moving “non-DSM” Disorders to the second line (and to order the graph by frequencies) ;- Figure 3 is also a really nice one. For panels B, C and D, I would suggest however to use the same scale for the y-axis. I understand that the number are very different from one category to the other. So perhaps, you can use a log scale. 
Second possibility: you can use percentages (the numbers being given in panel A).

Discussion:
- Please insist on the descriptive nature of the study.
- As there are many comparisons without any adjustment for confounders, please warn explicitly about the fact that no causal interpretations are possible, and discuss the issue of confounding thoroughly.

Overall:
I acknowledge that there are no appropriate reporting guidelines for such a meta-research study. But please have a look at the EQUATOR Network and follow the most appropriate guideline (e.g., STROBE?). Please share the data, code and any other information in an adequate repository (e.g., Dryad, https://datadryad.org/stash).

Reviewer #2:

Major issues:

Data availability: The authors should make the dataset used for the analysis available; it is not enough to just say where and how it could be retrieved. The authors might have used criteria or filters or other operations on a large dataset. Readers need to be able to replicate the main analyses. Is there a reason why the authors have opted not to share the extracted data?

How does this dataset differ from the authors’ previous publication on all mental health research in the same journal (https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0233996)? Weren’t pediatric trials a subset of that? What is the rationale of presenting a separate analysis on that cohort? At least a brief discussion of overlap and the rationale of the more detailed analysis on pediatric trials should be included.

Is there any reason for not pre-registering the study protocol? Submission with the manuscript, while useful, does not afford the opportunity to assess what was pre-planned and what was not.

My main objection is about treatment type categories, which appear rather vague and counter-intuitive, mixing type of intervention with delivery type or setting. The authors could consider reorganizing them. For instance, aren’t psychotherapy and pharmacotherapy also interventional?
Also, “alternative” is an unfortunate label that has a different meaning for many people, as in “alternative” to “conventional” biomedicine. I suggest you replace “Interventional” with “Physical”, as these treatments are often denominated. The “alternative” category is rather heterogeneous, mixing types of interventions with modes of delivery or settings. I am also not sure how the subcategories fit together; for example, there are legitimate, evidence-based technology psychotherapies, whereas yoga or diet have been less studied. Moreover, it also overlaps with psychotherapy, which is often delivered via the Internet or in other electronic ways. Finally, where do prevention interventions enter?

What is the justification for the alpha threshold? Either the authors employ a correction for multiple comparisons, which is preferable, or at least use the most stringent threshold proposed (p=0.005, see https://www.nature.com/articles/s41562-017-0189-z and the associated discussion).

I am also not sure all the comparisons in Table 2 make sense or have any relevance, particularly for categories with few trials. It might be more useful to include statistical comparisons only for a few pre-selected categories where there is a reasonable number of included trials, and just present the others descriptively.

Please list the relevant R packages used and also consider sharing the code for the analysis, to ensure reproducibility of findings.

Minor issues:

In the introduction, some discussion needs to be included about the fact that increases in prevalence might reflect changes in assessment criteria and instruments.

Please replace references 44 and 45 with more updated and comprehensive meta-analyses about the similar effectiveness of psychotherapy and pharmacotherapy. There are recent network meta-analyses, individual patient data meta-analyses, and so on.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?).
If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: FLORIAN NAUDET
Reviewer #2: Yes: Ioana A. Cristea

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

17 Feb 2021

Editor/reviewer comments are noted with "**" before the comment. The responses are in the paragraph(s) below each comment.

Editor’s Comments:

**Comment 1: Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf.

Thank you for your guidance with this.
We have reviewed these documents and believe that our manuscript now meets the PLOS ONE style and formatting requirements.

**Comment 2: We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

Thank you. We have uploaded our data and code to a repository, and they are now freely available.

**Comment 3: In your revised cover letter, please address the following prompts: a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. We will update your Data Availability statement on your behalf to reflect the information you provide.

We have now made our data and code freely available in an online repository. There are no restrictions on access to these deidentified data.
We have included the URL below as well as in the revised cover letter: https://datadryad.org/stash/share/8riF8rerFkmSGssL3DaMv4qb07I9W7CC4m3w387zseE. Of note, the DOI (10.5061/dryad.t4b8gtj1d) is currently being curated, but it should be made available shortly.

**Comment 4: Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Thank you for your guidance with this. We have placed our two supplemental tables, with table titles and legends, at the end of the manuscript under a new section labeled ‘Supporting Information’.

Reviewer 1’s Comments:

**Comment 1: (Abstract) Add some limitations to avoid any spin.

Thank you for this feedback. We added two sentences to the abstract to address potential limitations (Tracked version, Lines 23-27).

**Comment 2: (Introduction) Add a systematic overview of overlapping projects on adults/children, including in other countries (e.g., a table summarizing evidence prior to this study).

We appreciated this recommendation. In addition to the paragraph that summarizes overlapping projects in adult populations (Tracked version, Lines 52-64), we added a paragraph to the introduction that itemizes the systematic reviews and meta-analyses we identified that addressed trends in pediatric mental health (Tracked version, Lines 66-84). These studies were limited to reporting trends in the investigation of particular treatment types. We did not find reviews on larger, systemic trends in pediatric mental health research across diagnostic categories or across treatment types, so we have not provided a table. In this paragraph we also added a brief discussion of why trends drawn exclusively from published research may provide a biased perspective on what clinical research is being conducted.
A prior analysis of ClinicalTrials.gov showed that nearly 50% of pediatric trials do not reach publication within 4.5 years of study completion [1]. We hope the addition of this information to the introduction provides helpful background that identifies where gaps remain in identifying trends in pediatric mental health clinical research and the need for the analysis described in this study.

**Comment 3: (Methods) Was there a pre-registration? If no, please make it explicit.

This study was not pre-registered. We added a sentence stating this explicitly in the Materials and Methods section (Tracked version, Line 120).

**Comment 4: (Methods) Please state in the text the date of the protocol and the date of the analysis, and describe any change to the protocol in a dedicated paragraph.

Thank you for your guidance with this. We have added two paragraphs to the Materials and Methods section detailing the 6 changes that were made to the original protocol, why they were made, the dates when these changes were made, and when the analysis was conducted (Tracked version, Lines 119-153). We have also created a supplemental table (S1 Table) that summarizes these changes to the protocol, and we have uploaded a version of the protocol in which these changes have been tracked. You will see that four of the changes were due to recommendations we received from reviewers regarding our first manuscript (Wortzel et al., 2020) that were relevant to this analysis as well [2]. The most notable of these was confining our analysis to US trials, which removed a significant potential source of bias from the analysis. The other two changes to the protocol were made in response to reviewer comments on this manuscript.
These included changing the alpha threshold for statistical significance to 0.005 (please see our response to your Comment 8 for an explanation of this change) and changing the names of two of the intervention categories (‘Interventional’ to ‘Stimulation’ and ‘Alternative’ to ‘Non-Psycho/Pharmacotherapy’). We hope our additions help clarify these issues.

**Comment 5: (Methods) Please provide more detail about the supportive care category in the Methods section (and make sure that one can understand how it differs from treatment).

Thank you for this suggestion. We have added a description of the ‘Supportive Care’ category. We also added definitions of the ‘Treatment’ and ‘Prevention’ categories to clarify what constituted these trials’ ‘Primary Objectives’ (Tracked version, Lines 168-173).

**Comment 6: (Methods) How were missing data on ClinicalTrials.gov managed (e.g., related to study phase)? Please add some details.

We appreciated this feedback. We added three sentences to address this issue. First, we added a sentence to clarify that the ‘N/A’ designation for ‘Study Phase’ does not refer to missing data but to trials that did not meet criteria for study phase designation (i.e., this is a category within ClinicalTrials.gov that primary investigators could choose if their trials did not have FDA-defined phases, such as trials studying devices or behavioral interventions) (Tracked version, Lines 179-181). We also added two sentences clarifying that, when data were missing for certain variables, trials with these missing data were excluded (Tracked version, Lines 242-244).
The sample size used to assess each trial characteristic is reported in Table 1 to clarify when the sample size differed for each variable due to missing data.

**Comment 7: (Methods) When it comes to interventions tested, please replace the category “interventional” by another word (e.g., “stimulation”). “Interventional” in my opinion rather refers to an interventional study (versus observational studies), and it could be somewhat misleading (e.g., it is used with this meaning on page 5, line 73).

Thank you – we agree – we appreciate how the term ‘Interventional’ can be confusing. We have changed the category ‘Interventional’ to ‘Stimulation’, as you suggested, throughout the manuscript.

**Comment 8: (Methods) No reason is given for the threshold of alpha=0.01. This would apply in case there are 5 primary outcomes following a Bonferroni correction. It must be justified more adequately. In addition, and to be provocative, I’m not sure that most of these p-values are needed (as you are describing the complete population, statistical testing does not really make sense in my view).

Thank you for your guidance with this. This was also feedback that we received from Reviewer 2, who recommended that we use a more stringent alpha=0.005, as has been previously published [3]. We appreciate your point that these are data describing the complete population of mental health trials registered in ClinicalTrials.gov, and we considered not calculating p-values for this reason. However, given that ClinicalTrials.gov likely does not contain 100% of the mental health trials conducted in the US (i.e., there are other registries in which trials could be registered, and technically not all types of trials are required to register [4]), we thought it was best to still consider these data as a sample of the entire pediatric mental health clinical trials portfolio and, therefore, to report findings with p-values.
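As an illustrative aside (the counts below are invented for illustration, not the study's data, and the study's actual analysis was done in R), a categorical comparison of the kind reported in the manuscript — e.g., trial counts by funding source between the two halves of the decade — can be judged against the stricter alpha of 0.005 with a Pearson chi-squared test. For a 2×3 table the test has 2 degrees of freedom, where the chi-squared survival function reduces to the closed form exp(−x/2), so the sketch needs only the Python standard library:

```python
# Illustrative sketch only: the counts below are hypothetical, not the study's data.
import math

ALPHA = 0.005  # stricter threshold recommended by Benjamin et al. (2018)

# Hypothetical 2x3 contingency table:
# rows = first/second half of the decade, columns = funding-source counts
table = [
    [120, 95, 210],  # first half: industry, US government, other
    [110, 90, 340],  # second half: industry, US government, other
]

row_tot = [sum(r) for r in table]
col_tot = [sum(c) for c in zip(*table)]
grand = sum(row_tot)

# Pearson chi-squared statistic: sum of (observed - expected)^2 / expected,
# where expected[i][j] = row_total[i] * col_total[j] / grand_total
chi2 = sum(
    (table[i][j] - row_tot[i] * col_tot[j] / grand) ** 2
    / (row_tot[i] * col_tot[j] / grand)
    for i in range(2) for j in range(3)
)

# dof = (2 - 1) * (3 - 1) = 2, so the p-value is exp(-chi2 / 2)
p = math.exp(-chi2 / 2)
print(f"chi2 = {chi2:.2f}, p = {p:.4g}")
print("significant at alpha = 0.005" if p <= ALPHA else "not significant at alpha = 0.005")
```

With these invented counts, the test treats the registry snapshot as a sample rather than the full population, which is the rationale the response gives for reporting p-values at all.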
Throughout the manuscript we also report information on percentage changes, which provide context for the effect sizes of the analyses.

**Comment 9: (Methods) We need somewhere the details of the “non-DSM” category (in addition, it is the most represented category at some points, e.g., the last years; a list in the methods or a figure in the results may be helpful).

We greatly appreciated your suggestion to delve further into the ‘Non-DSM-5’ category. We think that this adds clarity to what is being studied in this category. We re-reviewed the titles and trial descriptions for these 206 trials, and we assigned them to 9 subcategories of ailments that are not defined in the DSM-5. We have created a supplemental table (S2 Table) that shows the breakdown of ‘Non-DSM-5’ trials within these 9 subcategories.

**Comment 10: (Results) Please add a flow chart and a paragraph about study selection.

Thank you for this suggestion. We have created a flowchart showing how trials were selected, and we added this figure to the Results section. The origins of the numbers of trials used to calculate the ratios of US to global pediatric mental health trials (1,019/1,626) and pediatric to total US mental health trials (1,019/6,302) were previously unclear, and we think this flowchart provides needed clarity. We also dedicated a new paragraph (it appears as two paragraphs in the tracked revisions) in the Results section to describing how trials were selected for inclusion in this analysis (Tracked version, Lines 260-270).

**Comment 11: (Results) Please detail the interrater agreement for data extraction.

There was no formal interrater agreement or interrater reliability statistic calculated for this study; instead, we applied consensus rating. We plan to add this to our protocol in future analyses of the ClinicalTrials.gov database. However, we did have a regimented process by which raters were trained to label trials.
The 6 psychiatrist raters reviewed the list of all of the disorders that fall under each of the DSM-5’s Section II Diagnostic Criteria and Codes (S2 Table in Wortzel et al., 2020) [2]. All 6 psychiatrists then reviewed the same sample of 250 trials to ensure agreement on the labeling criteria. When raters identified any ambiguity while labeling trials, these trials were flagged and reviewed by another psychiatrist. We explain this process in the Materials and Methods section (Tracked version, Lines 103-116). We hope that this description clarifies for readers how trials were labeled and how efforts were made to ensure interrater consistency.

**Comment 12: (Results) Table 1: see my comments about statistical testing.

We chose to keep p-values as a means of interpreting the significance of differences among groups; however, we are now utilizing a more stringent alpha=0.005, as has been recommended in the literature and was suggested by Reviewer 2 [3]. We limit the use of the term ‘significant’ to describing findings with p-values ≤0.005.

**Comment 13: (Results) Please clarify this sentence, as the numbers (90.8 + 23) do not add up to 100%. I understand that there can be some overlap, but it should be explained: “The top five disorder categories studied (neurodevelopment, substance & addiction, depression, anxiety, and Non-DSM-5 conditions) comprised 90.8% of pediatric mental health trials (Fig 2A). The remaining 14 disorder categories were studied in 23.0% of trials.”

Thank you. We added a sentence after the sentence you highlighted to clarify why the category percentages sum to greater than 100% (Tracked version, Lines 348-350).

**Comment 14: (Results) Figure 2 is really nice. Congratulations on this figure. Please consider moving “non-DSM” disorders to the second line (and ordering the graph by frequencies).

Thank you – that’s very kind of you to say.
We appreciate your advice to move the ‘Non-DSM-5’ category so that the categories are ordered from largest to smallest frequency. We made this change.

**Comment 15: (Results) Figure 3 is also a really nice one. For panels B, C and D, I would suggest however using the same scale for the y-axis. I understand that the numbers are very different from one category to the other. So perhaps you can use a log scale. Second possibility: you can use percentages (the numbers being given in panel A).

We have changed the figure so that the y-axis scales for panels Fig 3B-D are now the same. We opted to plot trial frequency for each category rather than percent, as we thought this improved ease of data visualization and interpretation. We decided to keep the y-axis scale linear rather than logarithmic for this reason as well.

**Comment 16: (Discussion) Please insist on the descriptive nature of the study.

We appreciate this feedback. We have added a sentence to the limitations section of the discussion that we hope drives home that this analysis is descriptive in nature (Tracked version, Lines 586-588).

**Comment 17: (Discussion) As there are many comparisons without any adjustment for confounders, please warn explicitly about the fact that no causal interpretations are possible, and discuss the issue of confounding thoroughly.

Building off of the sentence added to the limitations section to address your Comment 16, we added an additional sentence reinforcing that there are many comparisons made without adjustments for potential confounders and that causal relationships cannot be drawn from these data (Tracked version, Lines 586-589). We hope that these two sentences together, in addition to the text that comes directly before these added sentences, adequately address this concern.
The full section addressing the descriptive nature of the analysis, the possibility for confounding, and the inability to draw causal relationships is now as follows (Tracked version, Lines 581-591):

“Our study has several possible limitations. First, ClinicalTrials.gov is not an exhaustive list of all US clinical trials [5]. Phase 1 trials and trials studying non-pharmacologic interventions were not subject to the FDAAA or the Final Rule, so these trials may be underrepresented in the registry [6]. There may be other unknown norms and incentives that also bias the registration of certain trial types. Therefore, trends identified in the registry may at least in part reflect changes in trial registration rather than changes in clinical research. Analyses of the registry are also descriptive in nature and include many comparisons that do not account for potential unknown confounders. As a result, causal relationships cannot be drawn from these data. Nevertheless, analyses of ClinicalTrials.gov have allowed many medical specialties to assess trends in clinical research that might otherwise remain unassessable [2, 5, 7-9], as nearly half of trials run by large sponsors go unpublished [10].”

**Comment 18: (Overall) I acknowledge that there are no appropriate reporting guidelines for such a meta-research study. But please have a look at the EQUATOR Network and follow the most appropriate guideline (e.g., STROBE?).

Thank you for this guidance. We have reviewed the STROBE guidelines and have added a statement in the Materials and Methods section stating that we adhered to the STROBE reporting guidelines for cross-sectional studies (Tracked version, Lines 248-251).

**Comment 19: (Overall) Please share the data, code and any other information in an adequate repository (e.g., Dryad, https://datadryad.org/stash).

We have uploaded the data and code onto an open-source repository.
It can now be found at the following URL: https://datadryad.org/stash/share/8riF8rerFkmSGssL3DaMv4qb07I9W7CC4m3w387zseE

Reviewer 2’s Comments:

**Comment 1: Data availability: The authors should make the dataset used for the analysis available; it is not enough to just say where and how it could be retrieved. The authors might have used criteria or filters or other operations on a large dataset. Readers need to be able to replicate the main analyses. Is there a reason why the authors have opted not to share the extracted data?

Thank you for this feedback. We have now made our data and code available so that the main analysis can be replicated (https://datadryad.org/stash/share/8riF8rerFkmSGssL3DaMv4qb07I9W7CC4m3w387zseE). We were previously unfamiliar with the means by which we could upload this information onto a free server. There is no reason why we cannot share the extracted data. Thank you for your guidance with this.

**Comment 2: How does this dataset differ from the authors’ previous publication on all mental health research in the same journal (https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0233996)? Weren’t pediatric trials a subset of that? What is the rationale of presenting a separate analysis on that cohort? At least a brief discussion of overlap and the rationale of the more detailed analysis on pediatric trials should be included.

We have added a paragraph to the introduction that explains the extent to which our prior analysis assessed pediatric trials (Tracked version, Lines 66-84). Namely, we previously identified the percentage of US mental health trials that were conducted in the pediatric population, but we did not explore any trends in this trial population. In addition to analyzing trends specifically in pediatric mental health trials, in this analysis we also manually parsed treatment types in a manner more detailed than was performed in the prior analysis of all US mental health trials.
In this new paragraph, we also included a review of the literature of prior studies assessing trends in pediatric mental health. We found that, while there have been systematic reviews and meta-analyses of trends in specific treatments in pediatric mental health, there have not been analyses looking at overall trends in pediatric mental health as we set out to do in this study. We hope that this addition to the introduction helps clarify how this study differs from our prior publication (i.e., we think there is little overlap), as well as better identifies the gap this analysis hopes to fill in the literature about trends in pediatric mental health clinical research.

**Comment 3: Is there any reason for not pre-registering the study protocol? Submission with the manuscript, while useful, does not afford the opportunity to assess what was pre-planned and what was not.

We agree. Moving forward, we plan to pre-register our protocols. This seems to be the best practice for ensuring the communication of what was pre-planned and what was changed in our analysis. Unfortunately, that is not possible for the current paper. We do not view this as a major limitation, however, because the focus of our study was largely to review available data for a descriptive analysis of trends rather than primarily an interrogation of a priori hypotheses (for which pre-registration seems more critical). We have made our best effort to detail the changes that were made to our original protocol in two new paragraphs in the Materials and Methods section (Tracked version, Lines 119-153), and we also created a supplemental table (S1 Table) to summarize these changes. In our resubmission we have also included a version of the protocol in which the changes made to the original protocol have been annotated/tracked.
We hope that these additions to our submission help provide transparency about our protocol despite the lack of protocol pre-registration.

**Comment 4: My main objection is about treatment type categories, which appear rather vague and counter-intuitive, mixing type of intervention with delivery type or setting. The authors could consider reorganizing them. For instance, aren’t psychotherapy and pharmacotherapy also interventional? Also, “alternative” is an unfortunate label that has a different meaning for many people, as in “alternative” to “conventional” biomedicine. I suggest you replace “Interventional” with “Physical”, as these treatments are often denominated. The “alternative” category is rather heterogeneous, mixing types of interventions with modes of delivery or settings. I am also not sure how the subcategories fit together; for example, there are legitimate, evidence-based technology psychotherapies, whereas yoga or diet have been less studied. Moreover, it also overlaps with psychotherapy, which is often delivered via the Internet or in other electronic ways. Finally, where do prevention interventions enter?

We greatly appreciated your feedback regarding how to address treatment types. First, we agree that the treatment category ‘Interventional’ is a confusing term. Reviewer 1 suggested that we use the term ‘Stimulation’ instead of ‘Interventional’, and we made this change. Hopefully, by removing the term ‘Interventional’, we have eliminated the issue concerning whether psychotherapy and pharmacotherapy are also interventional. Similarly, we agree that the term ‘Alternative’ carries a connotation of being less conventional and likely less rigorously studied. We have changed this category name to ‘Non-Psycho/Pharmacotherapy’ to more accurately represent the treatments included in this category and to remove this potential connotation.
Hopefully this category is also now more readily appreciated as a ‘catch-all’ category for interventions that were not medications, psychotherapy, or stimulation.

We appreciate your point that the new category ‘Non-Psycho/Pharmacotherapy’ includes treatments that are unique modes of delivery rather than strictly unique interventions. For example, telepsychiatry might deliver psychotherapy to a patient that, other than being conducted over Zoom in a patient’s home, is no different from the therapy a patient would receive in a clinic. Likewise, a community intervention, such as an assertive community treatment (ACT) program, might provide psychopharmacology treatment within patients’ homes that is similar to that which would be delivered in an intensive outpatient clinic setting. However, these unique modes of delivery can fundamentally change a treatment’s effectiveness. For example, there is evidence that telemedicine interventions for severely mentally ill patients may improve medication adherence and reduce symptom severity and hospitalizations compared to standard in-person treatment [11]. Psychotic patients enrolled in forensic assertive community treatment programs have significantly fewer criminal convictions and spend less time in jail and in hospitals compared to patients receiving standard outpatient care [12]. Therefore, we think there is merit in identifying these modes of treatment (i.e., telepsychiatry and community programs) as unique treatment subcategories, as they confer distinct treatment outcomes that differ from their conventional in-clinic counterparts.

However, you brought to our attention a fundamental flaw in how we grouped trials studying treatments involving technology and community. In the original manuscript, the category ‘Technology’ combined trials testing technological treatments, such as videogames and apps using artificial intelligence, with trials that used technology as a mode of treatment delivery (e.g., telepsychiatry).
Similarly, the category ‘Community’ grouped trials that tested community programs (e.g., student-led anti-suicide campaigns and afterschool programs) with programs that involved psychiatric care occurring in the community (e.g., therapists in schools and ACT programs). These types of treatment must be distinguished to communicate these differences in how technology and community interventions are used.

To do this, we reviewed all ‘Technology’ trials and re-sorted them into ‘Technology’ (i.e., trials using technology as a specific treatment, such as a videogame or interactive app) and ‘Telecommunication’ (i.e., telepsychiatry, etc.). Similarly, we reviewed all ‘Community’ trials and sorted them into ‘Community Programs’ (i.e., afterschool programs, community center activities, etc.) and ‘Community Outreach’ (i.e., ACT programs, integration of mental health into primary care, etc.). We provide an explanation of these new subcategories in the Materials and Methods section (Tracked version, Lines 211-220), and we parse these subcategories in the Results section where they are highlighted (Tracked version, Lines 411-418).

To answer your question regarding the potential for overlap between technology and psychotherapy (e.g., a trial in which CBT is delivered over Zoom), such a trial would now be labeled in our study as using both ‘Psychotherapy’ and ‘Telecommunication’. Trials were labeled with as many treatment types as were deemed relevant to what was being studied. In this instance, we deemed that the trial was testing both a psychotherapy treatment and a telecommunication treatment.

To answer your question regarding the prevention designation, we determined that ‘prevention’ referred to a trial’s ‘Primary Objective’ (a separate variable provided by ClinicalTrials.gov; Tracked version, Lines 164-173) rather than to its treatment type.
For example, a trial whose primary objective was to prevent teens from vaping by testing a school-based prevention campaign would have a treatment designation of ‘Non-Psycho/Pharmacotherapy’ with a treatment subcategorization of ‘Community Programs’. We hope this helps answer this question.

In summary, we think that changing the two primary treatment categories to ‘Stimulation’ and ‘Non-Psycho/Pharmacotherapy’, and adding two new subcategories under ‘Non-Psycho/Pharmacotherapy’ to parse technology and community treatments, were important alterations to the analysis that strengthen the paper.

**Comment 5: What is the justification for the alpha threshold? Either the authors employ a correction for multiple comparisons, which is preferable, or at least use the most stringent threshold proposed (p=0.005, see https://www.nature.com/articles/s41562-017-0189-z and the associated discussion).

We greatly appreciated your guidance with this and your bringing to our attention the study by Benjamin and colleagues [3]. We updated our protocol and paper to utilize the more stringent alpha threshold of α=0.005. We limit the use of the term ‘significant’ to trends with p-values ≤0.005.

**Comment 6: I am also not sure all the comparisons in Table 2 make sense or have any relevance, particularly for categories with few trials. It might be more useful to include statistical comparisons only for a few pre-selected categories where there is a reasonable number of included trials, and just present the others descriptively.

Thank you for this point. We agree that having too few trials makes statistical comparisons within these categories difficult to interpret. For the nine diagnostic categories with fewer than 30 trials, we removed the chi-squared p-values (denoted with dashes in Table 2). We also removed the p-value for the treatment type ‘Stimulation’ in this table, which had fewer than 30 trials.
This issue was also relevant to the data presented in Table 1, where we made the same changes. We added an explanation in the legends of both tables for why p-values are not calculated for these data. As you suggested, we left the data for these categories in the tables for descriptive purposes, as these percentage breakdowns are not shown elsewhere in the paper.

**Comment 7: Please list the relevant R packages used and also consider sharing the code for the analysis, to ensure reproducibility of findings.**

We have added the R packages used in the analysis to the Materials and Methods section (Tracked version, Lines 255-256), and we have shared the code for our analysis on a freely accessible server.

**Comment 8: In the introduction, some discussion needs to be included about the fact that increases in prevalence might reflect changes in assessment criteria and instruments.**

We appreciate the need for this clarification. We added to the first paragraph of the introduction the caveat that changes in these disorder prevalences may reflect, at least in part, changes in assessment tools and diagnostic accuracy (Tracked version, Lines 39-41).

**Comment 9: Please replace references 44 and 45 with more updated and comprehensive meta-analyses about the similar effectiveness of psychotherapy and pharmacotherapy. There are recent network meta-analyses, individual patient data meta-analyses and so on.**

We have replaced these citations with more recent meta-analyses. Thank you for this recommendation.

References:
1. Pica N, Bourgeois F. Discontinuation and nonpublication of randomized clinical trials conducted in children. Pediatrics. 2016;138(3):e20160223. doi: 10.1542/peds.2016-0223.
2. Wortzel JR, Turner BE, Weeks BT, Fragassi C, Ramos V, Truong T, et al. Trends in mental health clinical research: Characterizing the ClinicalTrials.gov registry from 2007–2018. PLOS ONE. 2020;15(6):e0233996. doi: 10.1371/journal.pone.0233996.
3. Benjamin DJ, Berger JO, Johannesson M, Nosek BA, Wagenmakers EJ, Berk R, et al. Redefine statistical significance. Nat Hum Behav. 2018;2(1):6-10. doi: 10.1038/s41562-017-0189-z.
4. Tse T, Fain KM, Zarin DA. How to avoid common problems when using ClinicalTrials.gov in research: 10 issues to consider. BMJ. 2018;361:k1452. doi: 10.1136/bmj.k1452.
5. Liu X, Zhang Y, Tang L, et al. Characteristics of radiotherapy trials compared with other oncological clinical trials in the past 10 years. JAMA Oncol. 2018. doi: 10.1001/jamaoncol.2018.0887.
6. Tse T, Fain KM, Zarin DA. How to avoid common problems when using ClinicalTrials.gov in research: 10 issues to consider. BMJ. 2018;361:k1452. doi: 10.1136/bmj.k1452.
7. Pasquali SK, Lam WK, Chiswell K, Kemper AR, Li JS. Status of the pediatric clinical trials enterprise: an analysis of the US ClinicalTrials.gov registry. Pediatrics. 2012;130(5):e1269-77. doi: 10.1542/peds.2011-3565.
8. Arnow KD, King AC, Wagner TH. Characteristics of mental health trials registered in ClinicalTrials.gov. Psychiatry Res. 2019;281:112552. doi: 10.1016/j.psychres.2019.112552.
9. Califf RM, Zarin DA, Kramer JM, Sherman RE, Aberle LH, Tasneem A. Characteristics of clinical trials registered in ClinicalTrials.gov, 2007-2010. JAMA. 2012;307(17):1838-47. doi: 10.1001/jama.2012.3424.
10. Iacobucci G. Nearly half of all trials run by major sponsors in past decade are unpublished. BMJ. 2016;355:i5955. doi: 10.1136/bmj.i5955.
11. Lawes-Wickwar S, McBain H, Mulligan K. Application and effectiveness of telehealth to support severe mental illness management: systematic review. JMIR Ment Health. 2018;5(4):e62. doi: 10.2196/mental.8816.
12. Lamberti JS, Weisman RL, Cerulli C, Williams GC, Jacobowitz DB, Mueser KT, et al. A randomized controlled trial of the Rochester Forensic Assertive Community Treatment model. Psychiatr Serv. 2017;68(10):1016-24. doi: 10.1176/appi.ps.201600329.

Submitted filename: Itemized Response to Reviewers_21.2.17.docx

8 Mar 2021

Trends in US pediatric mental health clinical trials: An analysis of ClinicalTrials.gov from 2007 – 2018

PONE-D-20-34754R1

Dear Dr. Wortzel,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Claudio Gentili
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. 
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Only comment: as far as possible, move the paragraph CHANGE TO THE INITIAL PROTOCOL to the end of the Methods section or the start of the Results section.

Reviewer #2: There are some spelling errors in the sections added, e.g. "systemic" instead of "systematic" in discussing the systematic reviews in the Introduction.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Florian NAUDET

Reviewer #2: Yes: Ioana A. Cristea

22 Mar 2021

PONE-D-20-34754R1

Trends in US pediatric mental health clinical trials: An analysis of ClinicalTrials.gov from 2007 – 2018

Dear Dr. Wortzel:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Professor Claudio Gentili
Academic Editor
PLOS ONE