
Machine learning-based global maps of ecological variables and the challenge of assessing them.

Hanna Meyer, Edzer Pebesma

Abstract

Year:  2022        PMID: 35459230      PMCID: PMC9033849          DOI: 10.1038/s41467-022-29838-9

Source DB:  PubMed          Journal:  Nat Commun        ISSN: 2041-1723            Impact factor:   17.694


Fields such as ecology or the geosciences have seen a strong increase in studies that apply machine learning methods to produce global maps of environmental variables (prominent examples include the global tree restoration potential[1], global soil nematode abundances[2], and global soil maps[3]) with the aim of increasing our knowledge about the environment and of supporting decisions. These maps are often distributed as open data, allowing other researchers to use them as input to compute indicators of all kinds or to map yet other variables. Quality measures reported by the authors are impressive but often contradict experts’ opinions (e.g., see the comments on Bastin et al.[1] or the discussion in Wyborn and Evans[4]). Ploton et al.[5] attribute this contradiction to the use of validation strategies that ignore spatial autocorrelation in the data, and argue in favor of spatial cross-validation methods. Wadoux et al.[6] argue that spatial cross-validation is not the right way to evaluate map accuracy. Meyer and Pebesma[7] argue that the practice of using sparse and non-representative reference data makes model assessment impossible for areas with conditions that are very different from the training data. Here, we try to unravel some of these arguments by focusing on the data, the methods used, and the limits to our ability to assess spatial predictions.

Global reference data used in machine learning applications

In common global predictive mapping tasks (described in, e.g., Van den Hoogen et al.[8]), models are trained using reference data from field sampling. These data are then spatially matched with predictor variables that have global coverage. A machine learning model (often Random Forest) is then fitted (trained) and applied to the predictors to obtain a global map of predicted values of the target variable. Most machine learning methods, as well as common validation strategies, assume that the reference data are independent and identically distributed, which in the spatial mapping context is guaranteed, for instance, when they were obtained as a simple random sample from the target area. It is, however, hard to imagine that a global, spatially random sample will ever be collected when it involves taking in situ samples (e.g., collecting soil parameters, or counting soil nematodes). None of the global studies mentioned above is based on data collected as a probability sample; most of them are based on a database created by merging all data available from different sources. As a consequence, these data are strongly concentrated, e.g., in Europe and North America, and within these regions they are extremely clustered around areas that received intense research. We are aware that large gaps in geographic space do not always imply large gaps in feature space, but it is the former that most concerns the accuracy of the maps in focus here, as we will discuss. For three publicly available datasets that were used for global mapping, Fig. 1A–C compares the distributions of the spatial distances of reference data to their nearest neighbor (pink) with the distribution of distances from all points of the global land surface to the nearest reference data point (prediction locations, blue). The difference between the two distributions reflects the degree of spatial clustering in the reference data: Fig. 1D shows the distributions for a simulated spatially random sample of the same size as Fig. 1C. The clustered pattern has certain consequences and raises challenges for accuracy assessment that we discuss in the following.
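The contrast between these two distance distributions is straightforward to reproduce. The sketch below uses simulated planar coordinates (an assumed toy setup, not the datasets of Fig. 1, which are on the sphere and were processed with the R package CAST) to compute both distributions for a clustered sample:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)

# Clustered "reference data": a few cluster centers with tight local scatter
centers = rng.uniform(0, 100, size=(5, 2))
samples = np.vstack([c + rng.normal(0, 1.0, size=(40, 2)) for c in centers])

# "Prediction locations": a regular grid covering the whole area
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])

tree = cKDTree(samples)
# Sample-to-sample: distance of each sample to its nearest *other* sample,
# hence k=2 (k=1 would return the point itself at distance 0)
s2s = tree.query(samples, k=2)[0][:, 1]
# Prediction-to-sample: distance of each grid cell to the nearest sample
p2s = tree.query(grid, k=1)[0]

# Spatial clustering shows up as a large gap between the two distributions
print(f"median sample-to-sample distance:     {np.median(s2s):.2f}")
print(f"median prediction-to-sample distance: {np.median(p2s):.2f}")
```

For a spatially random sample of the same size, the two medians would nearly coincide, which is exactly the comparison made by panel D of Fig. 1.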
Fig. 1

Spatial distance distributions in global mapping studies.

Spatial distribution (left; Equal Earth projection) and distribution of nearest-neighbor distances (right; sample-to-sample distance in pink, prediction-location-to-sample distance in blue) for three publicly available datasets: cation exchange capacity in the soil from the WoSIS database[23] as used for global soil mapping[3] (A), specific leaf area from the TRY database[24] as used for the global mapping in Moreno-Martinez et al. (2018)[25] (B), and the nematode dataset compiled by Van den Hoogen et al. (2019)[2] (C). For comparison, the fourth dataset is a simulated completely spatially random sample of the same size as the nematode dataset (D). Distance distributions were calculated and visualized using the R package “CAST”[26].


Map quality: global or local assessment?

The quality of global maps can be assessed in different ways. One is global assessment, where a single statistic is chosen to summarize the quality of the entire map: the map accuracy. For a categorical variable, this can be the probability that, for a randomly chosen location on the map, the map value corresponds to the true value. For a continuous variable, it can be the RMSE, describing for a randomly chosen location on the map the expected difference between the mapped value and the true value. When a probability sample, such as a completely spatially random sample, is available for the area for which a global assessment is needed, then map accuracy can be estimated model-free (also called design-based, e.g., by using the unweighted sample mean in the case of a completely spatially random sample). This circumvents modeling of spatial correlation because observations are independent by design[6,9]. The approach is called model-free because no model needs to be assumed about the distribution or correlation of the data: the only source of randomness is the random selection of sample units from a target population. If a probability sample is not available, this approach cannot be used, and the accuracy assessment automatically becomes model-based[10], which involves modeling a spatial process by assuming distributions, taking spatial correlations into account, and choosing estimation methods accordingly. Using naive random n-fold or leave-one-out cross-validation methods (or a simple random train-test split) to assess global model quality (usually equated with map accuracy) makes sense when the data are independent and identically distributed. When this is not the case, dependencies between nearby samples, e.g., in a spatial cluster, are ignored, resulting in biased, overly optimistic model assessment, as shown in, e.g., Ploton et al.[5].
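The size of this optimism is easy to demonstrate with simulated data. The sketch below (an assumed setup, not taken from any of the cited studies) fits a Random Forest to clustered, spatially autocorrelated data and compares naive random k-fold cross-validation with a cross-validation that holds out whole spatial clusters:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(1)

# Clustered sampling locations: 8 clusters of 30 points each
centers = rng.uniform(0, 100, size=(8, 2))
cluster_id = np.repeat(np.arange(8), 30)
xy = centers[cluster_id] + rng.normal(0, 2, size=(240, 2))

# Spatially autocorrelated target: a smooth surface plus small noise
y = np.sin(xy[:, 0] / 10) + np.cos(xy[:, 1] / 10) + rng.normal(0, 0.1, 240)

model = RandomForestRegressor(n_estimators=100, random_state=0)

# Naive random 5-fold CV: near-duplicate neighbors remain in the training folds
r2_random = cross_val_score(
    model, xy, y, cv=KFold(5, shuffle=True, random_state=0)).mean()

# Spatial CV: whole clusters are left out, mimicking prediction at distance
r2_block = cross_val_score(
    model, xy, y, groups=cluster_id, cv=GroupKFold(5)).mean()

print(f"random 5-fold R²:  {r2_random:.2f}")  # optimistic
print(f"spatial-block R²:  {r2_block:.2f}")   # markedly lower
```

The gap between the two scores is the bias under discussion; its size grows with the degree of clustering and the strength of spatial autocorrelation.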
Alternative cross-validation approaches such as spatial cross-validation[5,11], which control for such dependencies, are the only way to overcome this bias. Different spatial cross-validation strategies have been developed in the past few years, all aiming to create independence between cross-validation folds[5,11-13]. Cross-validation creates prediction situations artificially by leaving out data points and predicting their values from the remaining points. If the aim is to assess the accuracy of a global map, the prediction situations created need to resemble those encountered when predicting the global map from the reference data (see Fig. 1 and the discussion in Milà et al.[14]). This occurs naturally when reference data were obtained by (completely spatially random) probability sampling, but in other cases it has to be forced, for instance by controlling spatial distances (spatial cross-validation). Such forcing, however, is only possible when the distances in space that need to be resembled are present in the reference data. In the extreme case where all reference data come from a single cluster, this is impossible. When all reference data come from a small number of clusters, larger distances are available between clusters, but they do not provide substantial independent information about the variation associated with these distances. This lack of information means that we cannot assess the quality of predictions associated with such distances and cannot properly estimate global quality measures. Alternative approaches, such as experiments with synthetic data[15] or validation using independent data at a higher level of integration[16], would then be options to support confidence in the predictions.

Another way of accuracy assessment is local assessment: for every location, a quality measure is reported, again as a probability or a prediction error.
Such a local assessment predicts how close the map value is to newly observed values at particular locations. If the measurement error is quantified explicitly, a smoother, measurement-error-free value may be predicted[10]. If the model accounts for change of support[10,17], prediction errors may refer to average values over larger areas such as 1 × 1, 5 × 5, or 10 × 10 km grid cells. Examples of local assessment in the context of global ecological mapping are prediction errors modeled using Quantile Regression Forests[18] or the mapped variance of predictions made by ensembles[1,2]. Neither of these examples quantifies spatial correlation or measurement error, or addresses change of support, as is done in other modeling frameworks[19]. Because they omit modeling of the spatial process, the local accuracy estimates presented in the global studies that motivated this comment are disputable. The difference between global and local assessment is striking, in particular for global maps. A single global number averages out all variability in prediction errors and obscures any differences, e.g., between continents or climate zones. It is of little value for interpreting the quality of the map for particular regions.
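As a minimal illustration of the ensemble-variance type of local assessment, the per-tree spread of a Random Forest can be computed for every prediction location (a sketch with simulated data; as noted above, this quantifies model spread only, not measurement error, spatial correlation, or change of support):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Simulated training data: two predictors, a smooth target, small noise
X = rng.uniform(0, 10, size=(300, 2))
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.2, 300)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Five hypothetical "prediction locations" (their predictor values)
X_new = rng.uniform(0, 10, size=(5, 2))

# Stack each tree's prediction: shape (n_trees, n_locations)
per_tree = np.stack([tree.predict(X_new) for tree in rf.estimators_])

mean_pred = per_tree.mean(axis=0)  # the mapped value (= rf.predict)
local_sd = per_tree.std(axis=0)    # a per-location "uncertainty" layer
print(np.round(mean_pred, 2))
print(np.round(local_sd, 2))
```

Mapping `local_sd` over all grid cells yields an uncertainty layer of the kind distributed alongside the global maps discussed here; the criticism above is precisely that this spread is not a statistically grounded prediction error.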

Limits to accuracy assessment

Maps, and in particular global maps, create a strong feeling of satisfaction, suggesting we now know it all. They are, however, also used, enlarged, torn apart, read in detail, and may form the basis for local decisions of all kinds, or even the input for follow-up models. If a global map does not come with clear instructions about its value, like a prescription for subsequent use, it is easy to abuse it. Wyborn and Evans[4] rightly ask “what changes are global maps, and their creators, trying to bring about in the world?”, and suggest a re-engagement with empirical studies of local and regional contexts while seeking co-construction with those having local knowledge. The fact that creating global maps of anything is nowadays so easy does not mean these maps are always useful. Technically, a trained Random Forest (or other) model can be applied globally as long as global predictors are available. Predictions far beyond the reference data, however, often lead to extrapolation in the predictor space, and models typically produce meaningless predictions when provided with predictor values that do not resemble the training data. The same applies to local accuracy estimates based on the variance of predictions[7]. Good coverage of the training data in the predictor space is hence required to produce globally applicable predictions. Since distances in geographic space often go along with distances in feature space, it can be assumed that this requirement is not met for many prediction models based on sparse and clustered reference data. In Meyer and Pebesma[7], we suggest a procedure to limit spatial predictions to the area of applicability of the model: global maps would need to gray out areas where predictor values are too different from values in the training data, i.e., the areas for which we cannot assess the quality of predictions. Similar approaches have been suggested and discussed, e.g., by Jung et al.[16].
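The core idea can be sketched as a dissimilarity index in predictor space. This is a simplified stand-in for the full area-of-applicability method (implemented in the R package CAST), which additionally weights predictors by importance and derives the threshold from cross-validation; the threshold of 2.0 used below is an arbitrary assumption for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

def dissimilarity_index(train_X, new_X):
    """Distance of each new point to its nearest training point in
    standardized predictor space, scaled by the mean nearest-neighbor
    distance within the training data itself."""
    mu, sd = train_X.mean(axis=0), train_X.std(axis=0)
    t = (train_X - mu) / sd
    n = (new_X - mu) / sd
    d_min = cKDTree(t).query(n, k=1)[0]
    # nearest-neighbor distances within the training data (k=2 skips self)
    d_train = cKDTree(t).query(t, k=2)[0][:, 1]
    return d_min / d_train.mean()

rng = np.random.default_rng(3)
train = rng.normal(0, 1, size=(500, 4))    # well-sampled conditions
inside = rng.normal(0, 1, size=(100, 4))   # similar conditions
outside = rng.normal(6, 1, size=(100, 4))  # novel (extrapolation) conditions

di_in = dissimilarity_index(train, inside)
di_out = dissimilarity_index(train, outside)
threshold = 2.0  # assumed cutoff; CAST derives it from cross-validation

print(f"inside AOA (similar conditions): {(di_in <= threshold).mean():.0%}")
print(f"inside AOA (novel conditions):   {(di_out <= threshold).mean():.0%}")
```

Grid cells whose index exceeds the threshold would be grayed out on the map: their predictor values are too dissimilar from the training data for the model, or its accuracy estimate, to be trusted.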
Limiting predictions to the area of applicability of the model is relevant not only to avoid wrong conclusions about prediction patterns but also to avoid the propagation of large errors: many global maps of environmental variables used the global soil maps produced by Hengl et al.[3] as input predictors[1,2,20]. The global soil maps by Hengl et al.[3] in turn used other modeled maps as input (e.g., WorldClim[21]). If the latter maps had labeled locations with predictions whose quality cannot be assessed, or whose quality was very low, the follow-up studies could have benefited from that information. Without it, both WorldClim and the soil layers were taken as if they contained true values. We argue that showing predicted values on global maps without a reliable indication of global and local prediction errors or of the limits of the area of applicability, and distributing these for reuse, is not congruent with basic scientific integrity. Reusing such global maps while ignoring prediction errors amplifies this problem; hence more transparency and a clear indication of the limitations of predictions are required. Global maps are distributed digitally and may be used for decision making, e.g., in the context of nature conservation[22]. We call for global maps of ecological variables to be published only when they are accompanied by properly derived local and global accuracy measures.
References (9 in total)

1.  The global tree restoration potential.

Authors:  Jean-Francois Bastin; Yelena Finegold; Claude Garcia; Danilo Mollicone; Marcelo Rezende; Devin Routh; Constantin M Zohner; Thomas W Crowther
Journal:  Science       Date:  2019-07-05       Impact factor: 47.728

2.  Soil nematode abundance and functional group composition at a global scale.

Authors:  Johan van den Hoogen et al.
Journal:  Nature       Date:  2019-07-24       Impact factor: 49.962

3.  The global distribution and environmental drivers of aboveground versus belowground plant biomass.

Authors:  Haozhi Ma; Lidong Mo; Thomas W Crowther; Daniel S Maynard; Johan van den Hoogen; Benjamin D Stocker; César Terrer; Constantin M Zohner
Journal:  Nat Ecol Evol       Date:  2021-06-24       Impact factor: 15.460

4.  TRY plant trait database - enhanced coverage and open access.

Authors:  Jens Kattge et al.
Journal:  Glob Chang Biol       Date:  2019-12-31       Impact factor: 10.863

5.  Conservation needs to break free from global priority mapping.

Authors:  Carina Wyborn; Megan C Evans
Journal:  Nat Ecol Evol       Date:  2021-10       Impact factor: 15.460

6.  SoilGrids250m: Global gridded soil information based on machine learning.

Authors:  Tomislav Hengl; Jorge Mendes de Jesus; Gerard B M Heuvelink; Maria Ruiperez Gonzalez; Milan Kilibarda; Aleksandar Blagotić; Wei Shangguan; Marvin N Wright; Xiaoyuan Geng; Bernhard Bauer-Marschallinger; Mario Antonio Guevara; Rodrigo Vargas; Robert A MacMillan; Niels H Batjes; Johan G B Leenaars; Eloi Ribeiro; Ichsani Wheeler; Stephan Mantel; Bas Kempen
Journal:  PLoS One       Date:  2017-02-16       Impact factor: 3.240

7.  Random forest as a generic framework for predictive modeling of spatial and spatio-temporal variables.

Authors:  Tomislav Hengl; Madlene Nussbaum; Marvin N Wright; Gerard B M Heuvelink; Benedikt Gräler
Journal:  PeerJ       Date:  2018-08-29       Impact factor: 2.984

