Literature DB >> 36006866

Exploring the impact of trait number and type on functional diversity metrics in real-world ecosystems.

Timothy Ohlert1, Kaitlin Kimmel2,3, Meghan Avolio3, Cynthia Chang4, Elisabeth Forrestel5, Benjamin Gerstner1, Sarah E Hobbie6, Kimberly Komatsu7, Peter Reich8,9,10, Kenneth Whitney1.

Abstract

The use of trait-based approaches to understand ecological communities has increased in the past two decades because of their promise to preserve more information about community structure than taxonomic methods and their potential to connect community responses to subsequent effects on ecosystem functioning. Though trait-based approaches are a powerful tool for describing ecological communities, many important properties of commonly used trait metrics remain unexamined. Previous studies that simulate communities and trait distributions show consistent sensitivity of functional richness and evenness measures to the number of traits used to calculate them, but these relationships have yet to be studied in actual plant communities with a realistic distribution of trait values, ecologically meaningful covariation of traits, and a realistic number of traits available for analysis. Therefore, we propose to test how the number of traits used, and the correlation between traits used, in the calculation of functional diversity indices impact the magnitude of eight functional diversity metrics in real plant communities. We will use trait data from three grassland plant communities in the US to assess the generality of our findings across ecosystems and experiments. We will determine how eight functional diversity metrics (functional richness, functional evenness, functional divergence, functional dispersion, kernel density estimation (KDE) richness, KDE evenness, KDE dispersion, Rao's Q) differ based on the number of traits used in the metric calculation and on the correlation of traits when holding the number of traits constant. Without a firm understanding of how a scientist's choices impact these metrics, it will be difficult to compare results among studies with different metric parametrization, thus limiting robust conclusions about the functional composition of communities across systems.

Entities:  

Mesh:

Year:  2022        PMID: 36006866      PMCID: PMC9409596          DOI: 10.1371/journal.pone.0272791

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Trait-based diversity measures have advanced the field of community ecology by increasing our understanding of both community assembly and diversity impacts on ecosystem functions [1, 2]. Functional diversity metrics allow researchers to quantify multiple facets of diversity, place an emphasis on mechanisms of community assembly, and provide a ‘common currency’ by which communities can be compared across sites and ecosystems [3, 4]. Traditional measures for characterizing communities, such as species richness and species ordinations, use species’ taxonomic classifications as discrete units, but functional diversity metrics can preserve more information about community assembly and function by including traits of species organized on continuous axes [5, 6]. Several aspects of functional and taxonomic diversity have been extensively studied. Scientists have probed functional diversity’s correlation with species richness [7, 8] and ecosystem functioning [4], the importance of intraspecific trait variation for diversity [4, 9, 10], and the ecological hypotheses that functional diversity metrics can test, such as optimal strategies or functional turnover [6, 11]. Many taxonomic measures of community diversity, such as Shannon’s diversity and Simpson’s evenness, have been extensively studied for their mathematical properties, including characteristics linked to species number, which allows these metrics to be compared across sites and ecosystems [12, 13]. Similarly, functional diversity metrics have mathematical characteristics that may cause the number or type of traits used to calculate the metric to impact the measure. For example, multidimensional metrics are calculated with an additional dimension for each additional trait included, and the correlation between traits affects the importance of each dimension to the metric.
Therefore, functional diversity could differ among replicate plots or sites simply because of the number or types of traits used to calculate the metric, without any underlying ecological basis. Though single-trait indices are an effective tool for linking trait diversity to specific ecosystem processes [14, 15], indices based on multiple traits may better match ecological theories of community assembly around multidimensional niche space [16-18]. As the use of multi-trait functional diversity metrics increases, it is important to determine the conditions under which they reflect ecological processes as opposed to mathematical patterns. Studies using simulated communities have tested whether the number and correlation of traits used in functional diversity metrics can impact the magnitude of the metric [7, 19]. Using simulated data, Legras et al. [19] showed that functional richness and functional divergence metrics decreased with increased trait number, but functional evenness metrics were not responsive to increasing trait numbers. Also using simulated data, Cornwell et al. [7] showed that convex hull volume (commonly referred to as “functional richness”) tended to decrease with increasing correlation among traits included in the metric calculation, and that the decrease was greater in more species-rich communities. The limitations of functional diversity metrics described in these studies with simulated community data could be exacerbated when applied in natural communities. Calculating functional diversity measures in natural communities poses additional challenges, both ecological and practical. Real plant communities are non-random assemblages of species that are influenced by competitive interactions, coexistence, mutualisms, niche partitioning, and environmental filtering, among many other processes of community assembly [20-25].
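The hull-shrinkage result of Cornwell et al. [7] is easy to reproduce. The following is a purely illustrative Python sketch (not part of the proposed R analysis): species trait values are simulated from a multivariate normal distribution with a chosen pairwise trait correlation, and the convex hull volume shrinks as that correlation increases. The function name and parameter choices are ours.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_volume(n_species, rho, n_traits=2, seed=0):
    """Convex hull volume ('functional richness') of simulated trait values
    drawn from a multivariate normal with pairwise trait correlation rho."""
    rng = np.random.default_rng(seed)
    cov = np.full((n_traits, n_traits), rho)
    np.fill_diagonal(cov, 1.0)
    traits = rng.multivariate_normal(np.zeros(n_traits), cov, size=n_species)
    return ConvexHull(traits).volume  # for n_traits == 2 this is the hull area

v_independent = hull_volume(100, rho=0.0)
v_correlated = hull_volume(100, rho=0.9)
# With the same seed, the correlated community occupies a smaller hull.
```

Because correlation compresses trait space along one axis, the correlated hull here is roughly sqrt(1 − 0.9²) ≈ 44% of the uncorrelated volume.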
Functional diversity metrics are likely to exhibit patterns due to ecologically meaningful correlation of traits in real communities, in particular, among suites of traits typically used in community ecology such as the leaf economic spectrum and root economic spectrum [26, 27]. Moreover, real data collection introduces constraints on trait data, such as realistic numbers of traits collected given limited resources and missing trait data, particularly for rare species. Functional diversity metrics, therefore, are most often calculated with fewer traits and fewer species than those in studies based on simulated communities. The field lacks clear guidelines for researchers to follow when choosing the number and types of traits to include when calculating functional diversity metrics. Decisions are often based on researcher intuition and the practices of similar studies, but such intuition and interpretation of trait selection can be improved by rigorous exploration of the impact of trait selection on diversity metrics [4, 28, 29]. These decisions can fall along a spectrum of options ranging from selecting the minimum number of traits needed to calculate a metric to using every trait available. For example, some studies suggest that researchers use a small number of traits related to certain ecosystem properties or other topics of interest (e.g., [8]), regardless of how correlated they may be. Other studies use all available traits in order to maximize the dimensions of diversity being studied in an effort to comprehensively assess the niche space that species and communities occupy (e.g., [30]). Choosing traits that are highly correlated can result in an underrepresentation of the diversity of functions present by overemphasizing groups of traits which describe similar processes, such as traits involved in the leaf economics spectrum [31]. 
Further, functional diversity metric calculation in high dimensional space can require dimensionality reduction, another decision that can impact the value of the metric calculated. However, few studies scrutinize how these decisions can impact conclusions when using functional diversity metrics to characterize communities. Here, we aim to understand how the number of traits and the correlation between traits impact functional diversity values. We will focus on eight measures of functional diversity that express principal facets of community trait composition (see the Methods for more details on each metric): functional richness (FRich), functional evenness (FEve), functional divergence (FDiv), functional dispersion (FDis), Rao’s Q, kernel density estimation (KDE) richness, KDE evenness, and KDE dispersion [31-34]. We will use trait data from real (natural/intact and experimental) plant communities, which will allow us to understand how these metrics respond to a realistic spread of traits and species richness. In this study, we will use trait data collected from three U.S. grasslands, which range from tallgrass prairie to desert grassland, to test the impacts of trait number and identity on functional diversity metric values. Our dataset includes plant traits collected on location at these three sites that include both naturally assembled and planted communities. Specifically, we ask: Do functional richness, functional evenness, and functional dispersion vary with respect to the number and correlation of traits used? Based on findings from [19], we expect functional richness, KDE richness, functional dispersion, and functional divergence to decrease with increasing numbers of traits, but Rao’s Q to increase [35] and functional evenness to be unresponsive to the number of traits. We do not have a priori hypotheses for KDE evenness and KDE dispersion since properties of these metrics have yet to be explicitly studied.
Based on [7], we expect that functional richness will be greater when traits are less correlated. However, we do not have directional hypotheses for the rest of the metrics. Is metric sensitivity to trait number/type consistent across sites and experiments? If metric sensitivity is consistent across sites, it will be easier to standardize functional diversity metrics across different studies. If sensitivity is not consistent across sites, further investigation will be necessary to understand the consequences of this when comparing functional diversity across sites.

Methods

Site descriptions

Here we will use data from three grassland sites across the United States that span a range of climate (MAP 250–866 mm, MAT 6–15°C) and species diversity. We will use two sites with naturally assembled communities and one with a planted community in order to be representative of the state of grassland studies, where some use naturally assembled communities while others use planted communities. Cedar Creek Ecosystem Science Reserve (East Bethel, Minnesota, USA) is in central Minnesota and classified as a tallgrass prairie. According to the Köppen-Geiger classification, the climate is characterized as cold continental with a hot summer, but without a dry season [Peel 2007]. The mean growing season (May–August) precipitation is approximately 420 mm, mean minimum growing season temperature is 12°C, and mean maximum growing season temperature is 25°C (1982–2016 period; http://www.cedarcreek.umn.edu/research/data). Soils at Cedar Creek are characterized as nutrient-poor entisols derived from a glacial outwash sand plain [36]. The study from Cedar Creek consists of artificially planted communities. Konza Prairie Biological Station (Manhattan, Kansas, USA) is in eastern Kansas in the Flint Hills ecoregion. Konza is classified as a tallgrass prairie, and much of the site has remained unplowed throughout its history [37]. Konza’s growing season extends from roughly May–October, with annual precipitation averaging 835 mm and an average July air temperature of 27°C [37]. The Sevilleta National Wildlife Refuge is in central New Mexico at the northern edge of the Chihuahuan Desert. The Sevilleta includes desert grasslands, and the climate is characterized as cold semi-arid according to the Köppen-Geiger classification [36]. The growing season is characterized by two rainy periods (March–May and July–September) split by a dry period. The mean monsoon growing season precipitation is approximately 150 mm and the mean monsoon growing season temperature is 22°C.

Community composition data

We will use one to four studies at each site (n = 7 studies total) within one year to characterize the functional diversity of grassland plant communities. At Cedar Creek, we will use community composition data from all 16-species plots in a biodiversity, CO2, and nitrogen addition experiment (BioCON, n = 48). All 16-species plots were originally planted with the same mixture of species (Achillea millefolium, Amorpha canescens, Andropogon gerardii, Anemone cylindrica, Asclepias tuberosa, Bouteloua gracilis, Bromus inermis, Elymus repens, Koeleria cristata, Lespedeza capitata, Lupinus perennis, Petalostemum villosum, Poa pratensis, Schizachyrium scoparium, Solidago rigida, and Sorghastrum nutans) such that all species were seeded at the same density in 1997. Plots are weeded every year to remove invading species. Through time, the plots could lose species (and regain them), but could never gain new species. Further, species abundances shifted from the equal proportions planted in the first year. Every August, species abundances were visually estimated in a 1 m2 permanent plot. Here, we used data from 2020—the most recent year species abundances are available. At both Konza and Sevilleta, we will use several studies as representative plant communities. This ensures that we will have at least one study per site if we need to drop observations because trait coverage is too low (see below for discussion). At Konza, we will use community composition data from 4 watershed transects with different burn frequencies and grazing patterns. Konza is dominated by a few C4 grass species (Andropogon gerardii, Schizachyrium scoparium, Sorghastrum nutans), with the bulk of species diversity made up by C3 grasses and forbs [37]. Specifically, we will use one watershed that was burned annually but never grazed, one that was burned annually and grazed, one that was burned every 20 years but never grazed, and one that was burned every 20 years and grazed.
Cover was estimated in permanent 1x1m plots twice per year. We will use the maximum cover of the species between these two sampling times to get a cover estimate per species. We will use data from the 2010 sampling because it was the same year that trait data were collected at Konza. At Sevilleta, we used community composition data from two observational sites, one in a Great Plains grassland ecosystem and the other in a desert grassland ecosystem. The Great Plains grassland is dominated by Bouteloua gracilis (blue grama), a long-lived, caespitose, C4 perennial grass common throughout much of the United States and Canada. The desert grassland is dominated by Bouteloua eriopoda (black grama), a stoloniferous C4 perennial grass common in the southwestern United States and Mexico. These two dominant perennial grasses account for about 80% of vegetative cover in their respective ecosystems. Each site has 30 1x1m quadrats which were assayed in September of 2018, at the peak of the post-monsoon growing season and around the same time that trait data were collected. In each quadrat, plants were identified to species and their percent ground cover was visually estimated.
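The per-species aggregation step (taking the maximum cover across the two sampling times) can be sketched as follows. The data frame, column names, and values below are hypothetical, and the actual workflow will be in R; this is only meant to make the step concrete.

```python
import pandas as pd

# Hypothetical long-format cover data: one row per plot x species x sampling time.
cover = pd.DataFrame({
    "plot":    [1, 1, 1, 1],
    "species": ["Andropogon gerardii"] * 2 + ["Schizachyrium scoparium"] * 2,
    "season":  ["spring", "fall"] * 2,
    "cover":   [35.0, 60.0, 20.0, 15.0],
})

# Per-species annual cover = maximum across the two sampling times.
annual = cover.groupby(["plot", "species"], as_index=False)["cover"].max()
```

Taking the maximum rather than the mean avoids penalizing species whose phenology peaks in only one of the two sampling windows.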

Trait data

Trait data were collected for the individuals found at each of the different sites. Thus, our trait data are representative of the traits actually found in the given community and not just an average independent of location. Traits include measurements from leaves (e.g. specific leaf area), stems (e.g. stem dry matter content), roots (e.g. root dry matter content), whole plants (e.g. height), and ecological attributes (e.g. amount of nitrogen in monoculture). Including traits across these measurement categories provides a more complete representation of community assemblages [38-41]. For detailed descriptions of trait collection protocols at each site, see S1 File. At Cedar Creek, we will use trait data collected in the monoculture plots of the BioCON experiment. We will use trait data from monoculture plots that correspond to the CO2 and N treatments to match with 16-species community plots. Data were collected between 1998 and 2020. Some traits were collected over multiple years whereas others were only collected once. In total, there were 10 distinct traits: specific leaf area (SLA), I* (the amount of light at the soil surface in monoculture), R* (the amount of nitrogen in monoculture), root %C, root %N, total root biomass, shoot %N, shoot %C, and seed mass. At Konza, we will use trait data collected in a watershed that was burned annually and had no grazers. In total there were 12 distinct traits: plant height, leaf area, specific leaf area, leaf dry matter content, stomatal length, stomatal density, stomatal pore area index, leaf %N, leaf %C, δ13C, photosynthetic pathway, and growth form. At Sevilleta, we will use trait data collected primarily from September to November of 2017 on individuals growing under ambient conditions near permanent ambient plots used to monitor plant communities. The full suite of traits was often measured on the same individuals, up to 10 individuals per species.
In total there were 10 distinct traits: maximum plant height, leaf dry matter content, specific leaf area, δ15N, δ13C, leaf %N, leaf %C, stem dry matter content, root dry matter content, and photosynthetic pathway. For each trait at each site, we will calculate an average trait value based on all the measurements for the given species and trait. We acknowledge that this obscures variation within a given trait (intraspecific variation) for a species; such variation can be quite important for some questions [10, 42–44]. The impacts of intraspecific variation in this study are minimized by only using trait values collected at each site, but sufficient data were not collected for each trait of each species to include intraspecific variation in our analysis. Before analysis, we will remove species that have less than 100% trait coverage. We will, however, make sure that the communities are still represented by at least 80% of species abundance; this approach de-emphasizes the importance of rare species, but reflects a logistical constraint faced by many researchers doing trait analyses. This will ensure that we are representing the community to the best of our ability with the given trait data.
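The coverage filter described above can be sketched as follows (species names, trait values, and abundances are hypothetical, and the proposed analysis will be done in R): species with any missing trait are dropped, and the analysis proceeds only if the retained species still account for at least 80% of community abundance.

```python
import pandas as pd

# Hypothetical species-by-trait table (NaN = missing) and relative abundances.
traits = pd.DataFrame(
    {"SLA": [18.0, 22.0, None], "height": [45.0, 30.0, 80.0]},
    index=["sp_a", "sp_b", "sp_c"],
)
abundance = pd.Series({"sp_a": 0.55, "sp_b": 0.35, "sp_c": 0.10})

# Keep only species with 100% trait coverage.
complete = traits.dropna().index
retained = abundance[complete].sum()

# Proceed only if retained species still represent >= 80% of total abundance.
ok = retained / abundance.sum() >= 0.80
```

Here sp_c is dropped for its missing SLA value, but the two remaining species cover 90% of abundance, so the community passes the threshold.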

Brief background on functional diversity metrics

We will focus our analyses on eight common functional diversity metrics: functional richness (FRich) [8], functional evenness (FEve) [9], functional dispersion (FDis), functional divergence (FDiv), Rao’s Q, kernel density estimation (KDE) richness, KDE evenness, and KDE dispersion [33]. FRich is the multidimensional equivalent of a range [8]. It is calculated as the convex hull volume enclosing all trait values for up to n traits in the community. The number of dimensions used to calculate the final volume can be reduced from the total trait number [45]. FEve uses the minimum spanning tree of species in trait space to quantify the regularity of branch lengths and the evenness of trait relative abundances. For each branch l of the minimum spanning tree, the weighted evenness is calculated as EW_l = dist(i, j) / (w_i + w_j), where i and j are the species joined by branch l and w_i is the relative abundance of species i. The partial weighted evenness is then calculated for each branch as PEW_l = EW_l / Σ EW_l, where the sum runs over all S − 1 branches and S is the total number of species in the community. FEve is then defined as FEve = [Σ_{l=1}^{S−1} min(PEW_l, 1/(S−1)) − 1/(S−1)] / [1 − 1/(S−1)] [9]. FDis is the weighted mean distance between species and a weighted centroid. It is calculated as FDis = Σ a_j z_j / Σ a_j, where a_j is the relative abundance of species j and z_j is the distance of species j from the weighted centroid [33]. FDiv is a relative abundance-weighted spread of traits along a trait axis independent of functional richness and is calculated as FDiv = (Δd + d̄_G) / (Δ|d| + d̄_G), where d̄_G is the mean distance of species to the weighted centroid, Δd is the sum of relative abundance-weighted deviances from that mean distance, and Δ|d| is the corresponding sum of absolute deviances [9]. Rao’s Q measures the pairwise differences in traits between species in a community and is calculated as Q = Σ_{i=1}^{S−1} Σ_{j=i+1}^{S} d_ij p_i p_j, where S is the number of species in the community, d_ij is the functional difference between the i-th and j-th species, and p_i and p_j are the species’ relative abundances [46].
These five functional diversity metrics are commonly calculated from distance matrices whose dimensionality has been reduced with principal coordinates analysis (PCoA); the resulting PCoA axes are then used to calculate the metrics. However, we will avoid this dimensionality reduction when possible (for all metrics except FRich; see discussion in the Functional Diversity Calculations section). n-dimensional hypervolumes use Gaussian kernel density estimation (KDE) to create a relative abundance-weighted probability distribution of traits in multidimensional space [34]. All KDE-based functional diversity metrics will be calculated using the hypervolume and BAT packages in R [34, 47]. KDE richness is the total volume of the n-dimensional hypervolume created from unweighted trait values present in the community. KDE evenness is the overlap between the abundance-weighted n-dimensional hypervolume and a similar hypervolume in which all traits and abundances are distributed evenly. KDE dispersion is the average distance between random points within the n-dimensional hypervolume and the hypervolume centroid.
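To make the distance-based definitions concrete, here is a minimal Euclidean sketch of FDis and Rao's Q in Python. This is not the FD package's implementation (which operates on Gower/PCoA distances); the function names and toy data are ours, purely for illustration.

```python
import numpy as np

def fdis(traits, abund):
    """Functional dispersion: abundance-weighted mean distance of species
    to the abundance-weighted centroid of the trait space."""
    a = abund / abund.sum()
    centroid = a @ traits                               # weighted centroid
    dists = np.linalg.norm(traits - centroid, axis=1)   # z_j for each species
    return float(a @ dists)

def raos_q(traits, abund):
    """Rao's quadratic entropy: trait distance between each species pair,
    weighted by the product of their relative abundances."""
    a = abund / abund.sum()
    d = np.linalg.norm(traits[:, None, :] - traits[None, :, :], axis=-1)
    return float(0.5 * a @ d @ a)  # 0.5: each unordered pair appears twice

# Toy community: 3 species, 2 traits, relative abundances.
traits = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
abund = np.array([0.5, 0.3, 0.2])
```

Both functions take a species-by-trait matrix plus an abundance vector, mirroring the inputs the R packages expect.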

Functional diversity calculations

For each site, we will follow the same protocol for calculating functional diversity metrics. We will calculate FRich, FEve, FDis, FDiv, and Rao’s Q using the FD package in R [45] with both Gower and Euclidean dissimilarity as the distance measure, and we will use the hypervolume package in R to calculate KDE n-dimensional hypervolumes, which are passed to the BAT package to calculate KDE richness, KDE evenness, and KDE dispersion [34, 47]. Gower dissimilarity can calculate distances with categorical traits, whereas Euclidean dissimilarity is better suited to continuous traits. Functional diversity metrics from the FD package and kernel density estimation are among the most-used metrics for quantifying trait-based diversity within communities due to both ease of use and ecological relevance [34, 45]. We will use dimensionality reduction where necessary in our analyses. First, PCoA is performed on the species-by-species distance matrix for each set of traits we consider. The categorical variables are taken into account in the creation of the distance matrix, which is done using Gower dissimilarity. Thus, the PCoA is performed on the continuous distance values rather than on the raw traits. Similar to Legras et al. [19], we will hold the number of dimensions equal to 2 only for our calculation of functional richness (FRich), as the other metrics do not require dimensionality reduction (note: neither does FRich if all traits are continuous, but we have several categorical traits in our dataset). We will then conduct a sensitivity analysis to determine whether holding the number of dimensions equal to 3, 4, and the maximum (dimensions = number of traits when using all continuous traits, or dimensions = number of traits − 1 when including categorical traits) produces similar results. Each metric uses species presence/absence or relative abundance in a plot along with its associated trait values.
We will calculate each metric using all possible combinations of two traits up to all possible combinations of the maximum number of traits at each site. For example, at Sevilleta there are 10 different traits so there are 45 2-trait calculations, 120 3-trait calculations, 210 4-trait calculations, and so forth up to 10 9-trait calculations and 1 10-trait calculation. To measure the effects of trait correlation on functional diversity, we will focus on metrics calculated with 4 traits only to standardize between sites. We will calculate the minimum, maximum, and mean correlation between the traits at each site.
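The combination counts above follow directly from the binomial coefficient, and enumerating the trait sets is a one-liner. A quick check (trait names abbreviated and hypothetical):

```python
from itertools import combinations
from math import comb

# 10 traits, as at Sevilleta (abbreviations ours).
traits = ["height", "LDMC", "SLA", "d15N", "d13C",
          "leafN", "leafC", "SDMC", "RDMC", "pathway"]

# Number of k-trait metric calculations for each k from 2 to 10.
counts = {k: comb(len(traits), k) for k in range(2, 11)}

# Enumerate the 4-trait combinations used for the correlation analysis.
four_trait_sets = list(combinations(traits, 4))
assert len(four_trait_sets) == counts[4]
```

This confirms the totals in the text: 45 two-trait, 120 three-trait, and 210 four-trait calculations, down to 10 nine-trait calculations and a single ten-trait calculation.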

Statistical analyses

For each site separately, we will run mixed effects models to test the dependence of the functional diversity metrics on trait number and on trait correlation using the lme function from the nlme package in R [48]. To examine how trait number impacts the values of a given functional diversity metric, we will run two models: Metric ~ trait number for 2–10 unique traits (the maximum number of traits at Cedar Creek and Sevilleta) and Metric ~ trait number for all possible traits, to make sure our inferences are not impacted by excluding combinations of 11 and 12 traits at Konza. To examine how trait identity impacts the values of a given functional diversity metric, we will run 3 models for each site: Metric ~ min trait correlation, Metric ~ max trait correlation, Metric ~ mean trait correlation. We will explore which functional form of the predictor variables best fits the spread of the functional metric data by fitting linear, quadratic, cubic, and quartic fits. We will determine the models with the best fit using AIC values. We will account for repeated samples within plots by fitting plot as a random effect and an autoregressive correlation structure. We will account for multiple comparisons by adjusting our p-values using a Benjamini-Hochberg procedure [49].
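The Benjamini-Hochberg step-up procedure itself is simple to state. Below is an illustrative Python version (the analysis itself will use R, where the equivalent is p.adjust with method = "BH"); the function name and example p-values are ours.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR procedure)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # Raw BH ratios p_(k) * m / k for the sorted p-values.
    ranked = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p-value downwards.
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.clip(adjusted, 0.0, 1.0)
    # Return adjusted values in the original input order.
    out = np.empty(m)
    out[order] = adjusted
    return out

adj = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20])
```

Note how the third and fourth p-values (0.039 and 0.041) both adjust to the same value: the step-up monotonicity constraint pulls the third down to match the fourth.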

Timeline

All trait and community data that will be used in this study have already been collected, but none of the authors have analyzed any subset of the data in this way before. We expect to finish cleaning data within 4 weeks of acceptance of the Registered Report Protocol. We will then complete the rest of the analyses and create figures over the following 6 weeks. We will finish writing the manuscript in another 6 weeks after data analysis is completed. All code used for analyses will be uploaded to one of the author’s OSF site before the second review stage.
PONE-D-21-22835
Exploring the impact of trait number and type on functional diversity metrics in real-world ecosystems
PLOS ONE Dear Dr. Ohlert, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. ==============================
Both reviewers agree that the idea described in the Registered Protocol to analyze methods using field data combining traits and scales of organization is interesting and novel and I also agree with this. The approach proposed is very interesting and very much needed yet both reviewers have raised some issues regarding the approach and methods planned to be used in the subsequent manuscript. Reviewers request a more thorough discussion on the limitations of the approach, which I agree with, e.g. comparing functional richness among studies with different sampling efforts and scale and their potential impacts on the outcome or the use of Gower distances that could be influenced by the inclusion of categorical traits or the use of the FD package (two-dimensionless) instead of alternative methods that allow to include multiple dimensions in the trait space, among other issues. There is also the potential within the manuscript to discuss why the authors think the FD package is the best methodology for the proposed study. ============================== Please submit your revised manuscript by Oct 29 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Iván Prieto Aguilar, Ph.D. Academic Editor PLOS ONE Journal requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. 
Thank you for stating the following financial disclosure: “KW - DEB-1257965 National Science Foundation Division of Environmental Biology https://www.nsf.gov/div/index.jsp?div=DBI SH - DBI- 1725683, DEB-1753859, DEB- 1831944 National Science Foundation Division of Environmental Biology, National Science foundation Division of Biological Infrastructure https://www.nsf.gov/div/index.jsp?div=DBI PR - DBI- 1725683, DEB-1753859 National Science Foundation Division of Environmental Biology, National Science foundation Division of Biological Infrastructure https://www.nsf.gov/div/index.jsp?div=DBI EF - DEB-0841917 National Science Foundation Division of environmental biology https://www.nsf.gov/div/index.jsp?div=D” Please state what role the funders took in the study.  If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript." If this statement is not correct you must amend it as needed. Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf. 3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions? 
The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field. Reviewer #1: Partly Reviewer #2: Partly ********** 2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses? The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g. necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory. Reviewer #1: Partly Reviewer #2: Partly ********** 3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable? Reviewer #1: Yes Reviewer #2: Yes ********** 4. Have the authors described where all data underlying the findings will be made available when the study is complete? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? 
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. You may also include additional comments for the author, including concerns about research or publication ethics. You may also provide optional suggestions and comments to authors that they might find helpful in planning their study. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: The idea to analyse the existing methods using real-world data on different combinations of traits and/or scales of organisation sounds interesting and currently in demand (e.g. Mammola et al. 2021). The main concern about this article is that the approach seems too simplistic to me, from both theoretical and methodological standpoints. From a methodological perspective, it is true that the FD package is one of the best-known tools to calculate FD, but in my opinion this is due to the simplicity of the tool, which leads many researchers to use this method routinely without much theoretical and/or mathematical consideration. Advances in the theoretical development of FD metrics have been accompanied by a proliferation of methods. In this regard, new tools have been developed to improve on the FD package. The FD package has many constraints that the authors overlook in the article and that, from my point of view, should be taken into account or at least discussed, especially in a methodological article.
For example, as the authors explain, the number of dimensions depends on the number of selected traits; however, if I'm not wrong, the FD package only uses two dimensions to calculate FD metrics. In this regard, new tools have been developed to select dimensions based on their redundancy (see de Bello et al. 2016; Maire et al. 2015; Gutiérrez-Cánovas et al. 2020). In addition, functional richness often increases as new organisms are included in a group; if we compare FRic among studies, we are assuming the same sampling effort or the same sampling protocol, which I think is not the case in this article. Thus, to compare among studies, a randomization could be desirable (Mammola et al. 2021). Another constraint is the use of Gower distances with the FD package; as de Bello et al. 2020 explain, “the Gower distance can however produce a multi-trait dissimilarity with a disproportional contribution of certain traits, particularly categorical traits and bundle of correlated traits reflecting similar ecological functions. Hence categorical traits will contribute more to the multi-trait dissimilarity”. The constraints of using the Gower distance with the FD package are well explained in de Bello's article. In fact, I don't really understand why the authors use Gower rather than Euclidean distances, because if I'm right there is only one categorical trait (growth form at Konza), so why not remove it? From a theoretical perspective, in the last five years a plethora of more sophisticated methods has been developed to represent functional diversity. I don't really see why using the FD package is the best option, and I miss a strong justification for why the selected FD indices are the best choice.
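The disproportionate weight of categorical traits under the Gower distance can be sketched as follows (a minimal Python illustration; the trait values, ranges, and species names are hypothetical, not from the manuscript, and the FD package itself is implemented in R):

```python
def gower_distance(x_i, x_j, is_categorical, ranges):
    """Gower dissimilarity between two species' trait vectors.

    Continuous traits contribute |a - b| / range_k, a value in [0, 1]
    that is typically well below 1; categorical traits contribute
    exactly 0 (match) or 1 (mismatch), so a single categorical
    mismatch can outweigh several moderate continuous differences.
    """
    parts = []
    for k, (a, b) in enumerate(zip(x_i, x_j)):
        if is_categorical[k]:
            parts.append(0.0 if a == b else 1.0)
        else:
            parts.append(abs(a - b) / ranges[k])
    return sum(parts) / len(parts)

# Hypothetical species: modest SLA and height differences, different growth forms.
d = gower_distance([22.0, 40.0, "grass"], [25.0, 55.0, "forb"],
                   is_categorical=[False, False, True],
                   ranges=[30.0, 100.0, None])
# Continuous parts are 3/30 = 0.10 and 15/100 = 0.15; the categorical
# mismatch contributes 1.0, i.e. 80% of the summed dissimilarity.
```

With only one categorical trait in play, its mismatch term dominates the averaged dissimilarity, which is the behavior the quoted de Bello et al. passage warns about.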
For instance, FRic is very sensitive to outliers, the space within the extreme values of a convex hull is assumed to be homogeneous, and the index also strongly depends on the number of species and traits; so why should FRic be a good index for comparing functional diversity among different studies or for meta-analyses? Why functional divergence and not Rao's Q or functional dispersion? In summary, the introduction needs a stronger theoretical framework and fewer generalizations (such as Lines 59-60, 62-63, 74-75). References de Bello, F., Botta-Dukát, Z., Lepš, J., & Fibich, P. (2021). Towards a more balanced combination of multiple traits when computing functional differences between species. Methods in Ecology and Evolution, 12(3), 443-448. de Bello, F., Carmona, C. P., Lepš, J., Szava-Kovats, R., & Pärtel, M. (2016). Functional diversity through the mean trait dissimilarity: resolving shortcomings with existing paradigms and algorithms. Oecologia, 180, 933-940. https://doi.org/10.1007/s00442-016-3546-0 Gutiérrez-Cánovas, C., Sánchez-Fernández, D., González-Moreno, P., Mateos-Naranjo, E., Castro-Díez, P., & Vilà, M. (2020). Combined effects of land-use intensification and plant invasion on native communities. Oecologia, 192(3), 823-836. Maire, E., Grenouillet, G., Brosse, S., & Villeger, S. (2015). How many dimensions are needed to accurately assess functional diversity? A pragmatic approach for assessing the quality of functional spaces. Global Ecology and Biogeography, 24, 728-740. Mammola, S., Carmona, C. P., Guillerme, T., & Cardoso, P. (2021). Concepts and applications in functional diversity. Functional Ecology. Reviewer #2: General Comments This is a Registered Report Protocol, which is a new publication type for me. The proposed work rests heavily on simulation-like trait calculations in three different sites to test how functional diversity metrics respond to the number and correlation of traits used in different grassland ecosystems.
I have some fairly straightforward stats concerns listed in the methods comments below. Largely, I am concerned with treating the sites together in the statistical modeling, given different scales of vegetation monitoring, different surveyors estimating the ever-subjective visual cover/abundance, and the different sets of traits. I think that if each site were separated and analyzed on its own, more nuanced conclusions could be derived from each site's results about how trait selection and overall community structure may influence functional diversity metrics. The main question is about trends in diversity metrics based on trait number, and that could be answered while keeping separate models. In addition, this is a simulation-based study, and I feel that it is well-positioned to explore a wider set of questions given the rich data behind it. In particular, Lines 186 – 187 highlight an important component that would be fairly straightforward to code and include, and would dig concretely into some of the uncertainty around functional metrics. Without it, the study feels a little limited. Introduction Lines 30-31: I think lots of attention has been given to assumptions in this space, which you highlight in the next few sentences. I suggest removing this sentence or toning down the language. Methods The scale of monitoring differs between data sets, with Konza at 10 m2 and the other two at 1 m2. Does this likely impact diversity estimates and outcomes of this study? It would certainly impact species richness estimates. How have similar studies dealt with the issue (perhaps species diversity studies have dealt with this explicitly somehow)? Because you model all of the sites together in a single model, I find this concerning. Konza is uniquely large, so including site as a fixed effect does not cover the scale of monitoring.
Given that you are looking for trends in the metrics within a site, perhaps splitting them into separate models, or standardizing the response values, may deal with this potential variation. Modeling them separately would also allow you to explore the different effect size of the trait number predictor between sites, the variation at different trait number levels within sites, and the role that your different sets of traits may play in influencing the trends. I also feel that it is hard to compare visual cover, as it is such a subjective metric and can vary so highly, especially between dry and mesic communities. The methods mention that in at least evenness, relative abundance is used. Can it also be used in the FDis calculations? If the models are split by site, this might not be an issue. Lines 214 – 216: I’m not sure what “characterize trait type” means here. Lines 226 – 228: Models, ideally, should be defined a priori or at least based on clear ecological reasoning. Is there a particular ecological reason that you chose to test linear, log, and quadratic fits? Lines 228 – 229: I must have misread, but I thought each plot only had one value? (Lines 142; 153-155; 163). ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] 
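The reviewer's suggestion of standardizing the response values within sites could be implemented along these lines (a hypothetical Python sketch; the site names and metric values are invented, and the authors' actual pipeline is not bound to this implementation):

```python
def zscore_within_site(values_by_site):
    """Z-score metric values separately within each site, so that trend
    comparisons use standardized units: site-specific differences in plot
    size or cover estimation then affect only each site's own mean and
    spread, not the cross-site comparison of trends."""
    out = {}
    for site, values in values_by_site.items():
        n = len(values)
        mean = sum(values) / n
        sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
        out[site] = [(v - mean) / sd for v in values]
    return out

# Illustrative FRic values on very different raw scales at two sites.
z = zscore_within_site({"Konza": [0.8, 1.2, 1.6],
                        "Sevilleta": [0.05, 0.10, 0.15]})
# Both sites now span the same standardized range, roughly [-1, 0, 1].
```

Standardizing within site is the lighter-weight alternative to fitting fully separate models per site; either way, the raw-scale differences arising from 10 m2 versus 1 m2 monitoring no longer drive the comparison.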
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 16 Nov 2021 We have provided a detailed response to the reviewer's comments as a separate file labeled Response to Reviewers as instructed. Submitted filename: FDiv_response to reviewers.docx 22 Feb 2022
PONE-D-21-22835R1
Exploring the impact of trait number and type on functional diversity metrics in real-world ecosystems
PLOS ONE Dear Dr. Ohlert, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. ============================== ACADEMIC EDITOR: 
============================== The manuscript was sent out for review to two reviewers who did not review the previous version. Both reviewers again agree that the idea described in the Registered Protocol, analyzing methods using field data and combining traits and scales of organization, is interesting and novel, but one reviewer raised some issues regarding the cross-site comparison, noting that many of the traits come from different treatments and that different sets of traits are used at each site. The discussion of why the indices in the FD package are used is now clearer, and the authors propose to use alternative methods to Gower distances, including n-dimensional hypervolumes (please check reviewer 2's comments on this specific point). I have recommended minor revisions at this stage, but please keep in mind when revising the manuscript that the reviewers' suggestions, specifically reviewer 2's suggestions, should be incorporated in full. Please submit your revised manuscript by Apr 08 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. We look forward to receiving your revised manuscript. Kind regards, Iván Prieto Aguilar, Ph.D. Academic Editor PLOS ONE Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.
If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Does the manuscript provide a valid rationale for the proposed study, with clearly identified and justified research questions? The research question outlined is expected to address a valid academic problem or topic and contribute to the base of knowledge in the field. Reviewer #3: Yes Reviewer #4: Partly ********** 2. Is the protocol technically sound and planned in a manner that will lead to a meaningful outcome and allow testing the stated hypotheses? The manuscript should describe the methods in sufficient detail to prevent undisclosed flexibility in the experimental procedure or analysis pipeline, including sufficient outcome-neutral conditions (e.g.
necessary controls, absence of floor or ceiling effects) to test the proposed hypotheses and a statistical power analysis where applicable. As there may be aspects of the methodology and analysis which can only be refined once the work is undertaken, authors should outline potential assumptions and explicitly describe what aspects of the proposed analyses, if any, are exploratory. Reviewer #3: Yes Reviewer #4: No ********** 3. Is the methodology feasible and described in sufficient detail to allow the work to be replicable? Reviewer #3: Yes Reviewer #4: No ********** 4. Have the authors described where all data underlying the findings will be made available when the study is complete? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception, at the time of publication. The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #3: Yes Reviewer #4: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #3: Yes Reviewer #4: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above and, if applicable, provide comments about issues authors must address before this protocol can be accepted for publication. 
You may also include additional comments for the author, including concerns about research or publication ethics. You may also provide optional suggestions and comments to authors that they might find helpful in planning their study. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #3: I did not review the original version of the proposal, but the current version seems fine. I have a few minor comments: Introduction L57-60 That is one argument (in addition to tractability) for using single-trait indices, rather than multi-trait indices. See Butterfield and Suding 2009 JEcol for one example. L75-89 It seems like somewhere in the intro there ought to be a discussion of dimensionality reduction, which is standard practice in calculating many indices, e.g. in the FD library in R, and is noted in the Methods. Where does this fit in WRT the questions being addressed in this study? Methods L188 dC13? L194 Reference? E.g. Siefert et al. 2015 EcoLetts or Jung et al. 2010 JEcol? L208-209 extra ‘then’ Reviewer #4: Overall: Right now I think you have some methodological hiccups that need to be ironed out, but those are all fixable and will become super obvious once you start coding. The writing needs to go into a bit more detail, particularly on the ecology of what the metrics mean and what they can be useful for. You also need to go into detail on what new insight this brings (why doing this on a non-simulated dataset is important). But I think the big problem with this is that your trait selection is confounded with site, limiting any cross-site comparisons. This might not be such a big deal if you had more sites, but with only 3 it seems like you won’t be able to say anything about the experimental vs. natural systems. I worry that after all of your work all you’ll be left with is a statement saying that adding more traits doesn’t matter much if those traits are correlated.
I do think the overall goals of the study are valuable, but I’m not convinced that this is the right match of question and dataset (at least as it stands). Abstract -Abstract could be improved (vague) Intro: -Some vagueness. -Could bolster number of refs for key points (not needed of course) -I think more emphasis could be placed on the importance of doing this work in real communities. I think this is a really cool selling point of this work and I think that the comparisons with some of the other work using e.g. simulations will make this a very useful and compelling study. As such, maybe devote a paragraph or so to this (perhaps between the current lines 74 and 75?) -Measures of functional diversity: a bit more context linking the metrics you use with what they capture (in terms of ecology) could be useful. Perhaps a table? Methods: -3 sites (2 natural, 1 experimental) -Glad to see that the trait data were recorded at each site -Many of the traits come from different treatments than they are being used as proxies for. Definitely problematic. -Different sets of traits at each site. Also problematic. -Traits include individual-level (or organ-level?), species-level, and population-level(?) but aggregated to species level. - “Before analysis, we will drop traits with less than 80% coverage of species by abundance in the community”: won’t you need to drop any trait with less than 100% coverage? Or else you can drop species without 100% trait coverage. These distance metrics require a complete set of traits. -There are some issues and errors in the section on metrics (200-225). --You give the abbreviated names (e.g. FDis) but not the full names (e.g. Functional Dispersion). You also don’t say what any of these metrics means from an ecological point of view. --The five metrics don’t use PCoA (although you can certainly do them on a PCoA’ed data set). But you can also use them with z-scaled data or any other distances you’d like.
I would think it would be preferable for this study to omit the PCoA, though. Or perhaps do it both ways. --Your description of hypervolume calculations rests on one particular type of hypervolume. I agree that KDE is a good choice, but be careful not to equate hypervolumes with only one particular method. Also, it would be useful to explain why you chose KDE (probably one sentence), as well as why you prefer abundance weighting (presumably for consistency with other methods). -Doesn’t PCoA require continuous values? -Lines 226 - 239: In the previous paragraph you mention PCoA, but here you seem to be focusing on particular traits. Are you planning on applying a PCoA to each set of traits? Or am I missing something? Perhaps some quick clarification is needed. Timeline: -Might be a bit optimistic on the data cleaning timeline (speaking from personal experience) ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #3: No Reviewer #4: No
22 Jul 2022 Dear PLOS ONE editor, Re: Manuscript ID: PONE-D-21-22835R1 We thank you for your continued interest in our registered report and we are grateful for the opportunity to resubmit a revised manuscript. We thank you and the two additional reviewers for providing suggestions that will improve this manuscript. Both reviewers agreed that this study will be a valuable resource for ecological studies using trait-based functional diversity measures and that by increasing the scope of this study, we can make this contribution even more valuable. In our revision, we addressed all comments raised by reviewers. In particular, both reviewers asked for greater clarity regarding the metrics being used including the dimensionality to be used when calculating these metrics. We have added considerable text explaining the metrics and parameters with which they will be calculated. In addition, we have added a table that organizes the metrics, abbreviations, ecological meaning, and their use in the literature. We believe this table adds necessary background and improves the logic for readers. We also responded to the other comments from the reviewers in the text that follows. We hope that this revised manuscript answers your concerns, and we are grateful for the helpful feedback. We think that the revised manuscript is a great improvement. Thank you in advance for considering our resubmission. Sincerely, Timothy Ohlert and Kaitlin Kimmel Reviewer #3: I did not review the original version of the proposal, but the current version seems fine. I have a few minor comments: Introduction L57-60 That is one argument (in addition to tractability) for using single trait indices, rather than multi trait indices. See Butterfield and Suding 2009 JEcol for one example. We agree with the importance of single trait indices and have added text in the introduction to reflect this (lines 69-72). 
L75-89 It seems like somewhere in the intro there ought to be a discussion of dimensionality reduction, which is standard practice in calculating many indices, e.g. in the FD library in R, and is noted in the Methods. Where does this fit in WRT the questions being addressed in this study? We agree that dimensionality is an important factor to consider when calculating FD metrics - and one that is often not discussed in methods sections when the FD package is deployed. We have now added text about this in lines 109-220: “Further, FD metric calculation in high dimensional space can require dimensionality reduction – another decision that can impact the value of the metric calculated” In our study, we are calculating metrics using two to twelve traits. Similar to Legras et al. 2020, we are going to hold the number of dimensions equal to 2 for our analyses. This will only impact the calculation of functional richness (FRich) as the other metrics do not require dimensionality reduction (note: FRich does not either if all traits are continuous, but we have several categorical traits in our dataset). We will then conduct a sensitivity analysis to determine whether holding the number of dimensions equal to 3, 4, and the maximum (dimensions = number of traits when using all continuous traits, or dimensions = number of traits - 1 when including categorical traits) produces similar results. We now explicitly mention this in lines 280-288. Methods L188 dC13? Yes. We have made that correction. L194 Reference? E.g. Siefert et al. 2015 EcoLetts or Jung et al. 2010 JEcol? We have added these references along with Bolnick et al. 2011 (Trends in Ecology & Evolution) and Westerband et al. 2021 (Annals of Botany) on intraspecific variation. L208-209 extra ‘then’ We have made that correction. Reviewer #4: Overall: Right now I think you have some methodological hiccups that need to be ironed out, but those are all fixable and will become super obvious once you start coding.
The writing needs to go into a bit more detail, particularly on the ecology of what the metrics mean and what they can be useful for. You also need to go into detail on what new insight this brings (why doing this on a non-simulated dataset is important). But I think the big problem with this is that your trait selection is confounded with site, limiting any cross-site comparisons. This might not be such a big deal if you had more sites, but with only 3 it seems like you won’t be able to say anything about the experimental vs. natural systems. I worry that after all of your work all you’ll be left with is a statement saying that adding more traits doesn’t matter much if those traits are correlated. I do think the overall goals of the study are valuable, but I’m not convinced that this is the right match of question and dataset (at least as it stands). In a previous version, another reviewer correctly pointed out that statistical tests involving just these three sites would be inappropriate for reasons that likely concern reviewer #4. Our goal is not to attempt a thorough meta-analysis, since site-specific data for this number of traits in similar ecosystems have yet to be compiled on a global scale. Instead, we are taking a smaller step by applying analysis of FD metrics to real ecosystems and trait data, and adding confidence to our conclusions with the addition of experiments with high-quality data. Abstract -Abstract could be improved (vague) We have updated the abstract to be more specific on the goals of our studies, where it fits within the existing literature, and our methods. We, of course, will update the abstract once we have results for the next phase of the Registered Report. Intro: -Some vagueness. We have added more detail specifically around ecological theory and the relevance of analyzing real community and trait data (lines 84-95 and throughout introduction).
-Could bolster number of refs for key points (not needed of course) We have increased the number of references overall, including for some of our main points for which we also added additional text. -I think more emphasis could be placed on the importance of doing this work in real communities. I think this is a really cool selling point of this work and I think that the comparisons with some of the other work using e.g. simulations will make this a very useful and compelling study. As such, maybe devote a paragraph or so to this (perhaps between the current lines 74 and 75?) We agree that the focus on real trait data from real communities is an important aspect of this study. We have added text in the introduction to highlight this (lines 84-95). “Calculating functional diversity measures in natural communities poses additional challenges both ecological and practical. Real plant communities are non-random assemblages of species which are influenced by competitive interactions, coexistence, mutualisms, niche partitioning, and environmental filtering among many other processes of community assembly [20,21,22,23,24,25]. Functional diversity metrics are likely to exhibit patterns due to ecologically meaningful correlation of traits in real communities, in particular, among suites of traits typically used in community ecology such as the leaf economic spectrum and root economic spectrum [26,27]. Moreover, real data collection introduces constraints on trait data, such as realistic numbers of traits collected given limited resources and missing trait data, particularly for rare species. Functional diversity metrics, therefore, are most often calculated with fewer traits and fewer species than those in studies based on simulated communities.” -Measures of functional diversity: a bit more context linking the metrics you use with what they capture (in terms of ecology) could be useful. Perhaps a table? 
We have taken this recommendation and added a table (Table 1) with information about the full list of functional diversity indices.

Methods:
-3 sites (2 natural, 1 experimental)
-Glad to see that the trait data were recorded at each site
-Many of the traits come from different treatments than they are being used as proxies for. Definitely problematic.

The traits from Sevilleta are collected under ambient conditions (see lines 2020-2022) and the communities are also in ambient conditions (lines 190-192). The traits from Konza come from a watershed that is burned annually, but we do not think that the potential difference in these traits will impact the relationship between the number or correlation of traits and metric magnitude. The traits from Cedar Creek come from monocultures that are subject to the same CO2 and nitrogen treatments as the communities being analyzed.

-Different sets of traits at each site. Also problematic.

The reviewer is correct that it would be inappropriate to include all sites in a single model given differences in traits; however, each site will be handled independently in three separate analyses. The objective of this study is not to directly compare the magnitude of FD metrics between sites but rather to look for patterns in the direction of the slope as the number of traits included in the metric increases or as traits become more correlated. Thus, our analyses are agnostic of which traits are used to calculate the FD metrics other than their correlation with each other. The purpose of using three sites is to see if there are similar patterns within sites rather than drawing conclusions from just one site.

-Traits include individual-level (or organ-level?), species-level, and population-level(?) but aggregated to species level.

We do include traits collected at these different levels - this is typical of analyses in the trait literature (Frenette-Dussault et al. 2012 J Ecol, Biswas & Mallik 2010, Kimmel et al.
2019) and is even preferable in order to encompass the dimensionality of plant form and function (Laughlin 2013). We have added text to clarify this under the Trait Data section (lines 210-211).

-"Before analysis, we will drop traits with less than 80% coverage of species by abundance in the community": won't you need to drop any trait with less than 100% coverage? Or else you can drop species without 100% trait coverage. These distance metrics require a complete set of traits.

It seems our wording of this section has caused confusion. We are dropping species that do not have 100% trait coverage, but we still want to make sure that the coverage we do have represents at least 80% of the total species abundance. Thus, we may have a reduced community, but still have 100% trait coverage for every retained species. We do not have a priori knowledge of what percentage of species will be dropped from the analysis because we have not looked at the data yet; we will report this number in the final draft after the analysis is done. The reviewer is correct that 100% trait coverage is important for calculating these metrics and, therefore, it is common in such studies to focus on common species with full trait coverage and to exclude rare species, which often have incomplete trait coverage. Though this approach de-emphasizes the contribution of rare species to functional diversity, most of the metrics we use are abundance-weighted anyway, which also de-emphasizes rare species. We will be characterizing the dominant species in the community - so we may be misrepresenting the community where rare species play an important part in ecosystem function (e.g. Dee et al. 2019 TREE). However, when ecologists perform trait analyses by combining the TRY dataset with their local communities, there may not be full trait coverage. We are grappling with this disparity, an unfortunate reality of trait analyses, by removing traits that do not characterize 80% or more of the community.
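The filtering rule described above (keep only species with complete trait coverage, then require that the retained species account for at least 80% of total community abundance) can be sketched in a few lines of Python. This is only an illustration of the logic with made-up species and trait names, not the authors' actual workflow, which uses R.

```python
# Sketch of the species-filtering rule: drop species lacking 100% trait
# coverage, then check that the species kept still represent >= 80% of
# total community abundance. Data below are hypothetical.

def filter_community(traits, abundance, min_cover=0.8):
    """traits: {species: {trait: value or None}}; abundance: {species: number}.
    Returns (retained traits or None, fraction of abundance covered)."""
    complete = {sp: tr for sp, tr in traits.items()
                if all(v is not None for v in tr.values())}
    covered = sum(abundance[sp] for sp in complete) / sum(abundance.values())
    # the trait set fails if the retained species hold < min_cover of abundance
    return (complete if covered >= min_cover else None), covered

traits = {
    "sp1": {"SLA": 12.0, "height": 0.4},
    "sp2": {"SLA": 8.5,  "height": 1.1},
    "sp3": {"SLA": None, "height": 0.2},   # missing SLA -> species dropped
}
abund = {"sp1": 50, "sp2": 40, "sp3": 10}
kept, cov = filter_community(traits, abund)
# sp3 is dropped, but sp1 + sp2 still hold 90% of abundance, so the
# reduced community passes the 80% rule with full trait coverage
```

The key point the response makes is visible here: the community may shrink, but every species that remains has a complete trait vector, as the distance-based metrics require.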
-There are some issues and errors in the section on metrics (200-225).

--You give the abbreviated names (e.g. FDis) but not the full names (e.g. Functional Dispersion). You also don't say what any of these metrics means from an ecological point of view.

We now put the abbreviations in Table 1 (new) and in the introduction paragraph, and then provide the full names the first time we mention them in the methods. In addition, Table 1 briefly explains the ecological relevance of each metric and provides examples of usage in the literature.

--The five metrics don't use PCoA (although you can certainly do them on a PCoA'ed data set). But you can also use them with z-scaled data or any other distances you'd like. I would think it would be preferable for this study to omit the PCoA, though. Or perhaps do it both ways.

While FRich, FDis, FDiv, FEve, and Rao's Q do not require dimensionality reduction via PCoA, the FD package does reduce the dimensionality via PCoA. Except for FRich, we can set the number of dimensions equal to the number of traits to preserve the trait axes when using this function. We understand that this is another layer of complexity we are adding to our study, but it is an important aspect of conducting trait analyses that we do not wish to gloss over. We explain this further below.

--Your description of hypervolume calculations rests on one particular type of hypervolume. I agree that KDE is a good choice, but be careful not to equate hypervolumes with only one particular method. Also, it would be useful to explain why you choose KDE (probably one sentence), as well as why you prefer abundance weighted (presumably for consistency with other methods).

We have clarified which KDE metrics we are testing (see Table 1), improved the nomenclature (line 240), and added details on how these metrics are calculated (lines 278-280).
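The distance-then-ordination step discussed above (the FD package internally builds a species distance matrix, Gower dissimilarity for mixed trait types, and then applies PCoA) can be made concrete with a small sketch. This is a Python approximation of that pipeline under simplified assumptions (unweighted Gower averaging, classical PCoA with negative eigenvalues clipped), with hypothetical trait values; it is not the FD package's exact implementation.

```python
# Sketch: Gower dissimilarity over mixed continuous/categorical traits,
# followed by classical PCoA on the resulting distance matrix.
import numpy as np

def gower(cont, cat):
    """cont: n x p continuous traits; cat: n x q categorical traits."""
    n = cont.shape[0]
    rng = cont.max(axis=0) - cont.min(axis=0)        # per-trait range
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d_cont = np.abs(cont[i] - cont[j]) / rng          # scaled to 0-1
            d_cat = (cat[i] != cat[j]).astype(float)          # mismatch = 1
            D[i, j] = np.concatenate([d_cont, d_cat]).mean()  # average over traits
    return D

def pcoa(D, ndim=2):
    """Classical PCoA: double-center -D^2/2 and keep the top axes."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:ndim]
    pos = np.clip(vals[order], 0, None)              # drop negative eigenvalues
    return vecs[:, order] * np.sqrt(pos)

cont = np.array([[12.0, 0.4], [8.5, 1.1], [20.0, 0.2]])  # e.g. SLA, height
cat = np.array([["C3"], ["C4"], ["C3"]])                 # e.g. photosynthetic pathway
coords = pcoa(gower(cont, cat), ndim=2)  # species scores on 2 continuous axes
```

This illustrates the point made in the response to the next comment: the PCoA operates on the continuous distance values, so categorical traits pose no problem once they have been folded into the Gower matrix.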
Additionally, we have added text explaining why KDE hypervolumes, along with metrics from the FD package, are the best choice (lines 264-270):

"All KDE-based functional diversity metrics will be calculated using the hypervolume and BAT packages in R [34,48]. KDE richness is the total volume of the n-dimensional hypervolume created from unweighted trait values present in the community. KDE evenness is the overlap between the abundance-weighted n-dimensional hypervolume and a similar hypervolume in which all traits and abundances are distributed evenly. KDE dispersion is the average distance between random points within the n-dimensional hypervolume and the hypervolume centroid."

-Doesn't PCoA require continuous values?

The PCoA is done on the species-by-species distance matrix, and the categorical variables are taken into consideration in the creation of the distance matrix, which is done using Gower dissimilarity. Thus, the PCoA is done on the continuous distance values rather than on the raw traits. We add this detail in lines 282-290.

-Lines 226-239: In the previous paragraph you mention PCoA, but here you seem to be focusing on particular traits. Are you planning on applying a PCoA to each set of traits? Or am I missing something? Perhaps some quick clarification is needed.

We will conduct a PCoA on each set of traits as if we were characterizing our community on those traits. In our study, we are calculating metrics using two to twelve traits. See lines 282-290:

"The categorical variables are taken into consideration in the creation of the distance matrix which is done using Gower dissimilarity. Thus, the PCoA is done on the continuous distance values rather than on the raw traits. Similar to Legras et al.
[19], we are going to hold the number of dimensions equal to 2 for only our calculation of functional richness (FRich), as the other metrics do not require dimensionality reduction (note: FRich does not either if all traits are continuous, but we have several categorical traits in our dataset). We will then conduct a sensitivity analysis to determine if holding the number of dimensions equal to 3, 4, and the maximum (dimensions = number of traits when using all continuous traits, or dimensions = number of traits - 1 when including categorical traits) produce similar results."

Timeline:
-Might be a bit optimistic on data cleaning timeline (speaking from personal experience)

We have created a new timeline given that both lead authors have taken new jobs and will need more time to complete analyses.

Submitted filename: FDiv_response to reviewers_2.docx

27 Jul 2022

Exploring the impact of trait number and type on functional diversity metrics in real-world ecosystems
PONE-D-21-22835R2

Dear Dr. Ohlert,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.
If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible - no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Iván Prieto Aguilar, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

I would like to remark that the authors have done a great job incorporating the reviewers' comments and adjusting timelines for having the full manuscript ready. The trait data collection is impressive and, although comparing sites will be a challenge, the incorporation of new sites in the future will probably open doors to this comparison.

2 Aug 2022

PONE-D-21-22835R2
Exploring the impact of trait number and type on functional diversity metrics in real-world ecosystems

Dear Dr. Ohlert:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Iván Prieto Aguilar
Academic Editor
PLOS ONE
Table 1
Functional diversity metric | Abbreviation | Ecological relevance | Examples of usage | Citations
Functional richness | FRich | Functional space filled by the community | De Vries and Bardgett 2016 [50]; De la Riva et al. 2018 [51]; Lourenco Jr. et al. 2021 [52] | Cornwell et al. 2006 [7]; Villeger et al. 2008 [8]
Kernel density richness | KDE richness | Functional space filled by the community | Soares et al. 2022 [53]; Piano et al. 2020 [54]; Pavlek & Mammola 2020 [55] | Blonder 2018 [47]; Mammola and Cardoso 2020 [34]
Functional evenness | FEve | Similarity of trait abundances within the community | De Bello et al. 2012 [56]; Niu et al. 2016 [57]; Biswas et al. 2019 [58] | Villeger et al. 2008 [8]
Kernel density evenness | KDE evenness | Similarity of trait abundances within the community | Soares et al. 2022 [53]; Piano et al. 2020 [54] | Mammola and Cardoso 2020 [34]
Functional dispersion | FDis | Average trait difference between individuals within the community | Zuo et al. 2021 [59]; Shovon et al. 2019 [60]; Griffin-Nolan et al. 2019 [61] | Laliberte and Legendre 2010 [33]
Functional divergence | FDiv | Average trait difference between individuals within the community | Janschke et al. 2019 [62]; Ebeling et al. 2017 [63]; Thakur & Chawla 2019 [64] | Villeger et al. 2008 [8]
Rao's quadratic entropy | Rao's Q | Average trait difference between individuals within the community | De Bello et al. 2009 [65]; Ebeling et al. 2014 [66]; Pillar et al. 2013 [67]; Wang et al. 2018 [68] | Rao 1982 [70]; Botta-Dukat 2005 [46]
Kernel density dispersion | KDE dispersion | Average trait difference between individuals within the community | Piano et al. 2020 [54]; Greenop et al. 2021 [69] | Mammola and Cardoso 2020 [34]
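To make two of the definitions in Table 1 concrete, the following Python sketch computes Rao's quadratic entropy and functional dispersion (FDis) on made-up trait data. The trait values and abundances are hypothetical; the study itself calculates these metrics with the FD package in R, so this is only meant to illustrate what the metrics measure.

```python
# Numerical sketches of Rao's Q (Rao 1982) and FDis (Laliberte and
# Legendre 2010) on hypothetical data.
import numpy as np

def raos_q(dist, p):
    """Rao's quadratic entropy: sum_ij p_i p_j d_ij, the abundance-weighted
    expected trait distance between two randomly drawn individuals."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    return float(p @ dist @ p)

def fdis(traits, p):
    """Functional dispersion: abundance-weighted mean distance of species
    to the abundance-weighted centroid of trait space."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    centroid = p @ traits
    return float(p @ np.linalg.norm(traits - centroid, axis=1))

traits = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # 3 species, 2 traits
dist = np.linalg.norm(traits[:, None] - traits[None, :], axis=2)
p = [0.5, 0.25, 0.25]                                    # relative abundances
q = raos_q(dist, p)
d = fdis(traits, p)
```

Both metrics fall in the "average trait difference" row of Table 1, but they weight that difference differently: Rao's Q averages over pairs of individuals, while FDis averages distances to a single abundance-weighted centroid.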

1. Tilman D. Niche tradeoffs, neutrality, and community structure: a stochastic theory of resource competition, invasion, and community assembly. Proc Natl Acad Sci U S A. 2004-07-08.

2. Villéger S, Mason NWH, Mouillot D. New multidimensional functional diversity indices for a multifaceted framework in functional ecology. Ecology. 2008-08.

3. Poos MS, Walker SC, Jackson DA. Functional-diversity indices can be driven by methodological choices and species richness. Ecology. 2009-02.

4. Suding KN, Goldstein LJ. Testing the Holy Grail framework: using functional traits to predict ecosystem change. New Phytol. 2008.

5. Osnas JLD, Lichstein JW, Reich PB, Pacala SW. Global leaf trait relationships: mass, area, and the leaf economics spectrum. Science. 2013-03-28.

6. Boersma KS, Dee LE, Miller SJ, Bogan MT, Lytle DA, Gitelman AI. Linking multidimensional functional diversity to quantitative methods: a graphical hypothesis-evaluation framework. Ecology. 2016-03.

7. van der Plas F, Schröder-Georgi T, Weigelt A, Barry K, Meyer S, Alzate A, et al. Plant traits alone are poor predictors of ecosystem properties and long-term ecosystem functioning. Nat Ecol Evol. 2020-10-05.

8. Des Roches S, Post DM, Turley NE, Bailey JK, Hendry AP, Kinnison MT, Schweitzer JA, Palkovacs EP. The ecological importance of intraspecific variation. Nat Ecol Evol. 2017-12-04.

9. Bílá K, Moretti M, Bello F, Dias ATC, Pezzatti GB, Van Oosten AR, Berg MP. Disentangling community functional components in a litter-macrodetritivore model system reveals the predominance of the mass ratio hypothesis. Ecol Evol. 2014-01-20.

10. Kong D, Wang J, Wu H, Valverde-Barrantes OJ, Wang R, Zeng H, Kardol P, Zhang H, Feng Y. Nonlinearity of root trait relationships and the root economics spectrum. Nat Commun. 2019-05-17.
