
Avoiding a replication crisis in deep-learning-based bioimage analysis.

Romain F Laine1,2,3, Ignacio Arganda-Carreras4,5,6, Ricardo Henriques1,2,7, Guillaume Jacquemet8,9,10.   

Year: 2021    PMID: 34608322    PMCID: PMC7611896    DOI: 10.1038/s41592-021-01284-3

Source DB: PubMed    Journal: Nat Methods    ISSN: 1548-7091    Impact factor: 28.547



Introduction

Microscopy is a leading technology for gaining fundamental insights in biological research. Today, a typical microscopy session may generate hundreds to thousands of images, generally requiring computational analysis to extract meaningful results. Over the last few years, deep learning (DL) has increasingly become one of the gold standards for high-performance microscopy image analysis [1,2]. DL has been shown to perform a wide range of image analysis tasks very efficiently, such as image classification [3,4], object detection [5,6], image segmentation [7-9], image restoration [10,11], super-resolution microscopy [10,12-15], object tracking [16,17], image registration [18] and the prediction of fluorescence images from label-free imaging modalities [19]. For image analysis, DL usually relies on algorithms called artificial neural networks (ANNs). Unlike classical algorithms, an ANN first needs to be trained before it can be used (Figure 1). During training, the ANN is presented with a range of data from which it attempts to learn how to perform a specific task (e.g. denoising). More specifically, the ANN builds a model of the mathematical transformation that needs to be applied to the data to obtain the desired output. Here, the model parameters (called weights) can be seen as the instructions for carrying out the learned task. Once the weights of a model are optimised, the model can be used to perform the task, a step called inference or prediction. ANNs can therefore be considered non-linear transformation machines that perform sequential mathematical operations on the input data. As we inspect these sequences of operations more deeply, it becomes difficult to understand which features of the original images are used. For that reason, ANNs are often thought of as “black boxes”: for most users, only the input images and output predictions are readily available.
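The train-then-predict workflow described above can be sketched in a few lines. This is a deliberately minimal, hypothetical example (NumPy only, a single-weight linear "network" standing in for a real ANN): training iteratively adjusts the weight on paired data, and inference then applies the frozen weight to new inputs.

```python
# Minimal sketch of training vs inference, assuming a toy one-weight model.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training pairs: inputs x and targets y = 2*x plus a little noise.
x_train = rng.normal(size=(256, 1))
y_train = 2.0 * x_train + rng.normal(scale=0.05, size=(256, 1))

# "Training": gradient descent on the mean squared error optimises the weight w.
w = 0.0
lr = 0.1
for _ in range(200):
    pred = w * x_train
    grad = 2.0 * np.mean((pred - y_train) * x_train)  # dMSE/dw
    w -= lr * grad

# "Inference": the optimised weight is applied to unseen data in a single pass.
x_new = np.array([[1.0], [-0.5]])
print(w, w * x_new.ravel())
```

Real ANNs have millions of weights and non-linear layers, but the structure is the same: optimise weights on training pairs, then freeze them for prediction.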
Figure 1

Using classical or DL algorithms to analyse microscopy images.

This figure illustrates the critical steps required when using classical or DL-based algorithms to analyse microscopy images, using denoising as an example. When using a classical algorithm, the researchers’ efforts are put into designing mathematical formulae that can then be directly applied to the images. When using a DL algorithm, first, a model needs to be trained using a training dataset. Next, the model can be directly applied to other images and generate predictions. Typically, such a model will only perform well on images similar to the ones used during training. This highlights the importance of the data used to train the DL algorithm (its quantity and diversity). The microscopy images displayed are breast cancer cells labelled with SiR-DNA to visualise their nuclei and imaged using a spinning disk confocal microscope (SDCM). The denoising performed in the “classical algorithm” section was performed using PureDenoise implemented in Fiji [20,21]. The denoising performed in the “Deep Learning algorithm” section was performed using CARE implemented in ZeroCostDL4Mic [10,22].

The training data provided to the ANN commonly consist of a large set of representative input images and their expected results. For instance, in denoising, the training dataset is composed of noisy and high signal-to-noise ratio (SNR) images (Figure 1). This type of training using paired images and labels is commonly referred to as supervised training. On the other hand, for so-called self-supervised training, pre-processing steps directly generate the training pairs, and therefore users only need to provide input images. Training is typically the most challenging, time-consuming and resource-hungry part of the process and can take minutes to weeks depending on the size of the training dataset and the type of ANN. It often requires specialised knowledge, dedicated training datasets and access to powerful computational resources such as graphics processing units (GPUs) to run and optimise ANN training. In comparison, using DL models for predictions can be straightforward (a parameter-free, one-click solution) and fast (seconds to minutes). Multiple tools are in development to facilitate the training and use of DL for bioimage analysis, including online and offline, commercial and open-source solutions [8,22-30]. Once a model has been trained, it constitutes a portable algorithm to process new images, often with excellent speed, even on a local machine. However, in general, a DL model will only perform well on images similar to those used during training. How similar the images need to be depends on the type of network used; aspects to consider encompass microscope types, label types, SNR and optical aberrations. This highlights the importance of the data used to train the DL algorithm, both in terms of its quantity and its diversity. Therefore, one powerful approach is to produce general models with high reusability potential using a large and diverse training dataset.
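As a concrete illustration of building supervised training pairs, one approach is to degrade high-SNR images computationally. The sketch below is hypothetical: the mixed Poisson-Gaussian noise model and its parameters (`photon_budget`, `read_noise`) are illustrative choices, not a prescription from this work.

```python
# Hedged sketch: generating (noisy input, clean target) training pairs from a
# high-SNR image, assuming an illustrative Poisson + Gaussian camera model.
import numpy as np

rng = np.random.default_rng(42)

def make_training_pair(clean, photon_budget=50.0, read_noise=2.0):
    """Return (noisy_input, clean_target) from a clean image in [0, 1]."""
    photons = rng.poisson(clean * photon_budget)                 # shot noise
    noisy = photons + rng.normal(0.0, read_noise, clean.shape)   # read noise
    noisy = noisy / photon_budget                                # rescale ~[0, 1]
    return noisy.astype(np.float32), clean.astype(np.float32)

clean = rng.random((64, 64))   # stand-in for a high-SNR acquisition
noisy, target = make_training_pair(clean)
print(noisy.shape, float(np.abs(noisy - target).mean()))
```

A supervised dataset is then simply a list of such pairs; self-supervised methods instead derive both sides of the pair from the noisy images alone.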
For example, popular nuclei or cell segmentation models have been released [29,31,32] (Figure 2). However, this is only possible when large heterogeneous pre-curated datasets are available, which are challenging to produce.
Figure 2

Using quality metrics to assess the performance of DL models.

Figure illustrating that comparing DL-based predictions to ground truth images is a powerful strategy to assess a DL model's performance. (A, B) Noisy images of breast cancer cells labelled with SiR-DNA were denoised using CARE (A, B; [10]), Noise2Void (B, [11]) and DecoNoising (B, [36]), all implemented in ZeroCostDL4Mic [22]. Noisy and ground truth images were acquired using different exposure times. (A) Matching noisy, ground truth and CARE prediction images. White squares highlight regions of interest that are magnified in the bottom rows. The image similarity metrics mSSIM, NRMSE and PSNR (see Box 1) shown on the images were obtained by comparing them to the ground truth image. The SSIM (yellow: high agreement; dark blue: low agreement; 1 indicates perfect agreement) and RSE (yellow: high agreement; dark blue: low agreement; 0 indicates perfect agreement) maps highlight the differences between the CARE prediction and the corresponding ground truth image. Note that the agreement between these two images is not homogeneous across the field of view and that these maps are helpful for identifying spatial artefacts. (B) Magnified region of interest from (A) showcasing how image similarity metrics can be used to compare DL models trained using different algorithms but the same training dataset. Note that in this example, all three algorithms improved the original image, but to different extents. Importantly, these results do not represent the overall performance of the algorithms used to train these models but only assess their suitability to denoise this specific dataset.
(C) Example highlighting how segmentation metrics can be used to evaluate the performance of pre-trained segmentation models [29,31,32]. The image segmentation metrics Intersection over Union (IoU; 1 indicates perfect agreement), F1 score (F1; 1 indicates perfect agreement) and panoptic quality (PQ; 1 indicates perfect agreement; [37]) displayed on the images were obtained by comparing them to the ground truth image, which was manually annotated. Of note, these results do not reflect the overall quality of these pre-trained models (or of the algorithms used to train them) but only assess their suitability to segment this dataset.

Nonetheless, as DL models become accessible through public repositories (so-called model zoos, such as bioimage.io) or web interfaces [29,32], it becomes straightforward to use them directly to analyse new data. This has the advantage of speeding up DL uptake but, unless researchers can confirm that their own data were well represented within the original training dataset (which can be very difficult to do), the performance of such portable models on new data often remains unclear. One major downside is that the DL model may generate artefacts and biases that can be difficult to identify. Therefore, despite its incredible potential, the application of DL in microscopy analysis has raised concerns [33-35], due to a lack of transparency and understanding of its limitations, especially regarding generalisability. In addition, DL is developing at an incredible rate, which places a significant burden on users to determine the most appropriate tools for their needs, taking into account the validity and performance of a range of approaches that are often difficult to compare. Here, we propose that many of these concerns can be significantly alleviated by careful assessment of DL model performance, care in the choice of tool, and adherence to reporting guidelines that ensure transparency.

Assessing DL model predictions

Currently, the most unambiguous way to assess the quality of DL model predictions is to compare them to ground truth images or labels (Figure 2A). Here we primarily focus on image restoration and segmentation tasks, but similar concepts also apply to other image-to-image DL-based image analyses. Segmentation results can be compared to manually annotated masks; expert manual annotation remains the gold standard for evaluating segmentation. Denoising results can be compared to matching high SNR images acquired with high laser power or long exposure times [10,14], or obtained by computationally introducing noise to high SNR data [15]. The comparison between the model prediction and the ground truth dataset is scored using various metrics (see Box 1). These analyses are typically performed after a model has been trained. However, DL models are often evaluated using data similar to those used during training, which does not always represent a general performance level. Therefore, we argue that it is also the end user's responsibility to generate evaluation data to assess the specific performance of any DL model on their data. This will often involve generating ground truth images or investing time in manually annotating a few images to ensure that sufficient material is available for this essential quality control step. For instance, when planning to use a denoising DL model, users can acquire a few corresponding high SNR images to verify that the chosen denoising strategy works appropriately. Additionally, using such a dataset, users can compare the performance of various tools to find the most suitable one for the job (Figure 2B and 2C). When comparing DL predictions to ground truth, it is important to visually assess the network output for artefacts, but equally important to quantitatively estimate the similarity to the expected results.
Box 1 presents a list of commonly used metrics and their appropriate uses depending on the task performed by the DL model. In addition, we provide a Jupyter notebook, as part of the ZeroCostDL4Mic platform [22], to easily compute these metrics directly in the cloud. One of the most straightforward image metrics used to assess denoising, restoration and image-to-image translation predictions is the Root Square Error (RSE), which calculates the sum of the squared differences between predictions and the expected ground truth on a pixel-by-pixel basis. RSE is an easy-to-understand metric but reports only on intensities, not on structures. Other image similarity metrics, such as the structural similarity index measure (SSIM [38]), are therefore also commonly used (Box 1 and Figure 2). Additionally, these metrics can be presented as maps that spatially render the discrepancies between the DL predictions and ground truth images. Such maps are especially useful to check for reconstruction artefacts that may be linked to specific structures in the images (Figure 2). Other metrics, such as Intersection over Union (IoU), which measures the overlap between two binary masks, can assess the quality of segmentation outputs. Instance segmentation results can be further evaluated using additional scores such as the F1 score or panoptic quality [37], which reflect the ability of the algorithm to correctly identify each object in the image. Further metrics have been developed to assess other image processing tasks, such as image registration [39] or super-resolution reconstructions [40], but are not described here in detail. When using metrics to assess DL predictions, an issue that often arises is deciding when the metric scores are good enough. This is often less of a problem for segmentation tasks, where predictions and ground truth images can reach a good agreement (IoU and F1 scores of 0.9 and above).
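A minimal NumPy sketch of some of the metrics discussed above (NRMSE, PSNR and IoU; SSIM is omitted for brevity). These are textbook formulations and are not taken from the ZeroCostDL4Mic notebook, whose implementations may differ in detail.

```python
# Hedged, self-contained implementations of common evaluation metrics.
import numpy as np

def nrmse(pred, gt):
    # Root-mean-square error normalised by the ground-truth intensity range.
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    return rmse / (gt.max() - gt.min())

def psnr(pred, gt, data_range=1.0):
    # Peak signal-to-noise ratio in dB; higher means closer to ground truth.
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def iou(mask_a, mask_b):
    # Intersection over Union between two binary masks; 1 = perfect overlap.
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union

# Toy check: a prediction closer to ground truth scores better on both metrics.
rng = np.random.default_rng(1)
gt = rng.random((32, 32))
good = gt + rng.normal(0, 0.01, gt.shape)
bad = gt + rng.normal(0, 0.2, gt.shape)
print(psnr(good, gt), psnr(bad, gt))
```

In practice, libraries such as scikit-image provide battle-tested versions of these metrics (including SSIM), which are preferable to hand-rolled code for publication-grade evaluation.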
However, assessing the quality of denoising and image-to-image translation predictions may be more challenging. We found the approach of comparing both the prediction and the raw images to the ground truth images to be especially useful for evaluating denoising. This checks that the predictions are more similar to the ground truth images than the raw input data are. If this is not the case, the DL model is not improving the dataset toward the target image and should be reconsidered. We recommend that effort be put into generating ground truth data whenever possible, and it is almost always possible to do so. In the rare cases when ground truth images are not available, a careful visual inspection of the results may be the only option to assess a DL model's performance. While less desirable, this solution may be sufficient if the results are already well characterised and well understood by the researcher, such as when denoising known cellular structures. However, when studying novel phenomena, this approach should be avoided and observations cross-validated, especially if the structures observed after denoising are not easily visible in the raw data. There is thus a need to develop metrics or novel evaluation methods that can assess the quality of predictions when no ground truth images are available.
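The sanity check described above, that a prediction should be closer to ground truth than the raw input was, can be expressed directly. The images below are simulated stand-ins; in practice `raw` would be a low-SNR acquisition and `prediction` a model output.

```python
# Illustrative sanity check: does the model output beat the raw input?
import numpy as np

def psnr(img, gt, data_range=1.0):
    mse = np.mean((img - gt) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(3)
gt = rng.random((32, 32))
raw = gt + rng.normal(0, 0.3, gt.shape)          # simulated noisy acquisition
prediction = gt + rng.normal(0, 0.05, gt.shape)  # simulated model output

# A useful denoising model must bring the prediction closer to ground truth
# than the raw data; otherwise it should be reconsidered.
improved = psnr(prediction, gt) > psnr(raw, gt)
print("model improves on raw input:", bool(improved))
```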

Choosing a DL tool

With the increasing availability of networks, models and software, it becomes challenging to identify the most suitable tool to answer a biological question. We do not recommend any particular software or tool, simply because each user's needs are distinct (for an excellent review of DL-based segmentation tools, see [9]). Instead, we present a few pointers to help readers sift through the literature, based on what developers have reported in their work and on reports from early adopters. First, we recommend choosing an active, well-documented and well-maintained tool that matches the user's preferred interface. Available DL tools now span web interfaces [29,32], standalone software [24,28,32,41], plugins for popular image analysis software [10,11,27,42], online notebooks [22] and Python packages [43]. Each platform requires a different level of technical skill to use. In addition, the documentation provided by developers varies significantly, ranging from annotated code to online video tutorials and detailed step-by-step guides. Good documentation limits accidental misuse of a tool and helps users understand the tool and its capabilities. Additionally, a substantial existing user base and online forums discussing troubleshooting are signs of a healthy and helpful tool; they also provide a wealth of information about users' experiences as well as tips and tricks. We advise being wary of works that do not provide source code and associated data allowing users to reproduce the results on example data. It is typically free and easy to make these publicly available via common platforms (e.g. GitHub), and we support works that themselves encourage open science. We also believe that example data are instrumental, as they allow users to test a tool and learn how to use it properly before applying it to their own data. As discussed above, it is essential to carefully assess the performance of DL-based tools on the dataset of interest.
Therefore, we also recommend using tools that offer purposely built evaluation and sanity-check strategies. We also strongly encourage users to consider how the chosen tool fits within their preferred image analysis pipeline. DL-based analyses will often constitute only a small part of the overall analysis process, and therefore the pipeline as a whole should be considered before selecting a tool. When training DL networks using a new algorithm or software, one feature to look for is strategies to identify and prevent overfitting. Overfitting occurs when a model becomes too specialised to the training dataset and does not generalise well to new data. In practice, this means that the trained model may not perform well on new data even if they are similar to those used during training. Overfitting can be detected by monitoring how the performance of the model evolves over training time on the training dataset and on a set-aside validation dataset. When more training improves performance on the training dataset but worsens performance on the validation dataset, overfitting is occurring; this can typically be visualised by plotting so-called loss curves over training time. Overfitting may be prevented by increasing the training dataset's diversity using, for instance, data augmentation [44,45], or by strategies such as reducing the model complexity, adding regularisation (L1, L2) or stopping training early [46]. DL tools dedicated to training benefit enormously from these features, as they simplify the assessment and potential improvement of model optimisation for the user. Another feature to look for when choosing a tool to train DL models is the possibility to perform transfer learning. Transfer learning enables the use of existing models as a starting point when training a new model.
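The loss-curve monitoring and early-stopping logic described above can be sketched as follows. The loss values here are simulated, and the `patience` rule is one common convention rather than the only one.

```python
# Hedged sketch of early stopping based on a validation-loss curve.
import numpy as np

def early_stop_epoch(val_losses, patience=3):
    """Return the epoch to stop at: `patience` epochs after the best val loss."""
    best, best_epoch = np.inf, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return len(val_losses) - 1

# Simulated curves: training loss keeps falling, but validation loss turns up
# after epoch 5 -- the classic overfitting signature on a loss-curve plot.
train_loss = [1 / (e + 1) for e in range(15)]
val_loss = [1 / (e + 1) + max(0, e - 5) * 0.05 for e in range(15)]
print("stop at epoch:", early_stop_epoch(val_loss))
```

Most DL frameworks ship an equivalent callback (e.g. an early-stopping callback monitoring validation loss), so users rarely need to implement this by hand.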
This takes advantage of features already learned by these trained models instead of starting the training process from scratch. Transfer learning can considerably accelerate training, reduce the size of the necessary training dataset and produce models with higher performance [22,47]. Finally, when testing a new tool, it is often informative (and often appreciated) to get in touch with the developers and contribute to improving the tool by reporting bugs or issues in particular configurations that may not have been encountered at the development stage. We feel the importance of this conversation is sometimes understated, even though it promotes good tools, open-mindedness and multidisciplinarity while building trust in the methods.
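The transfer-learning idea, reusing learned weights and retraining only what is task-specific, can be sketched framework-agnostically. Everything below (layer names, shapes, the frozen set) is illustrative; real frameworks such as TensorFlow or PyTorch provide their own mechanisms for loading pretrained weights and freezing layers.

```python
# Illustrative sketch: initialise a new model from pretrained weights and
# update only the task-specific "head" while the reused layers stay frozen.
import numpy as np

rng = np.random.default_rng(7)

pretrained = {                          # stand-in for weights from a model zoo
    "backbone": rng.normal(size=(8, 8)),
    "head": rng.normal(size=(8, 1)),
}

# Start from the pretrained backbone; re-initialise the head for the new task.
model = {
    "backbone": pretrained["backbone"].copy(),
    "head": rng.normal(scale=0.01, size=(8, 1)),
}
frozen = {"backbone"}                   # layers excluded from training updates

def apply_update(model, grads, lr=0.1):
    for name, g in grads.items():
        if name not in frozen:          # frozen layers keep pretrained values
            model[name] -= lr * g

grads = {name: np.ones_like(w) for name, w in model.items()}  # dummy gradients
head_before = model["head"].copy()
apply_update(model, grads)
print(np.allclose(model["backbone"], pretrained["backbone"]))
```

After the update, the backbone is unchanged while the head has moved, which is exactly the behaviour that lets a small dataset fine-tune a large pretrained model.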

Reporting the use of DL in publications

As previously done for other transformative technologies, we believe that the bioimaging community needs to discuss and flesh out guidelines for reporting the use of DL for bioimaging in publications [48-51]. This is especially important as the reporting of more traditional image analysis and acquisition pipelines is still raising concerns [48,52-54]. It is beyond the intention of the present work to propose guidance to developers on evaluation and reporting when proposing new DL algorithms, and we refer readers to recent work that has initiated this conversation within the computer science community [55]. Instead, we focus on what would be useful to report when using DL tools. Given the wealth of hyperparameters, architecture choices and data manipulations available with DL, incorrectly trained or incorrectly evaluated DL models can easily be generated and lead to suboptimal results. This highlights the importance of clearly and appropriately reporting the steps leading to the generation of a particular model. Indeed, standard guidelines will increase confidence in the use of DL and promote transparency and reproducibility. Such guidelines will also help reviewers assess manuscripts using DL for image analysis, especially if this technology is unfamiliar to them. Below, we list several suggestions as a contribution to this critical discussion. Naturally, the algorithm used should be reported and the appropriate paper(s) cited. We also recommend indicating the version of the algorithm used or, failing that, the date at which the tool was obtained, since most analytical tools change over time and each update may lead to varying performance on the same data. For DL, this is currently not a widespread habit, especially because both the network and the dataset may change over time (for instance, when acquiring more data to expand the training dataset). Similarly, when using models trained by others, it is advisable to indicate the version of the model used.
If a version is not available, we recommend providing the date when the model was obtained and used. A DL model's performance is entirely dependent on the dataset used at the training stage. When training dedicated DL models, the training dataset should be clearly described in the materials and methods (types of microscopes, modality, etc., as recommended in other work [52]). The training dataset should also be deposited in a suitable, semi-permanent data repository (e.g. Zenodo, BioImage Archive). When training a DL model, we recommend indicating the key hyperparameters used and the main underlying libraries (e.g. TensorFlow, PyTorch). We recommend that DL models with reusability potential be deposited in a suitable repository (e.g. Zenodo) and linked to a model zoo (e.g. TensorFlow Hub, bioimage.io) along with their associated metadata. If custom code was generated to run the algorithm or process the data (pre- or post-processing steps, for instance), it should also be shared with the paper and archived (e.g. GitHub, Zenodo). The steps taken to validate the DL model used should be clearly described. This includes the type of validation (i.e. the evaluation metric used and the score achieved), the number and origin of the images used for evaluation (it is often considered imperative for evaluation data to be completely absent from the training data in order to have bearing on how well the model generalises to new data), and an explanation of why the result was deemed acceptable. If space allows, we also recommend providing evaluation examples as supplementary figures. When performing predictions using a DL model, the tool used to run the model should be indicated (again with its version) and the appropriate paper(s) cited. Indeed, several tools offer the possibility to run DL models and may involve different pre- or post-processing steps that can influence the results obtained.
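The reporting items suggested above could be collected into a simple machine-readable record shared alongside a paper. The sketch below is a hypothetical template; every field name and value is illustrative, not an established standard.

```python
# Hypothetical "model card" template covering the reporting suggestions above.
# All values are illustrative placeholders.
import json

report = {
    "algorithm": {"name": "CARE", "version": "0.6.1"},
    "tool": {"name": "ZeroCostDL4Mic", "version": "1.13", "date_used": "2021-06-01"},
    "libraries": {"tensorflow": "2.4.1"},
    "training_data": {
        "repository": "Zenodo",
        "doi": "10.5281/zenodo.0000000",          # placeholder DOI
        "microscope": "spinning disk confocal",
        "n_image_pairs": 40,
    },
    "hyperparameters": {
        "epochs": 100, "batch_size": 16, "learning_rate": 4e-4,
        "patch_size": 256, "data_augmentation": True,
    },
    "evaluation": {
        "metric": "PSNR", "score": 31.2, "n_test_images": 5,
        "test_set_overlaps_training": False,      # evaluation data kept separate
    },
}
print(json.dumps(report, indent=2))
```

Depositing such a record with the model (e.g. alongside a bioimage.io entry) would let readers and reviewers check versions, data provenance and validation at a glance.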

Concluding remarks

DL tools are transforming the way we analyse microscopy images. However, we think that DL cannot be used on any dataset without prior validation. This is especially important as users risk falling for the artificial intelligence hype when other techniques may be more appropriate, more robust and sometimes quicker for analysing their images. Importantly, due to the complexity of the operations performed in DL, not knowing precisely how the images are manipulated may affect how reliably they can be analysed downstream of DL. As an example, it is hard to estimate whether it is appropriate to quantify absolute image intensities following DL-based denoising, due to potential non-linearity with respect to the input data. Similarly, although image-to-image translation and resolution improvement using DL are very promising approaches, they remain prone to undetected artefact generation due to the inherent addition of data from the training dataset to the input data [56], raising concerns of validity. Here, we presented arguments for the importance of validating any model using a purposefully built evaluation dataset containing ground truth target images or labels. Similarly, the use of DL models should be reported appropriately to ensure reproducibility and transparency. This is a challenging task for DL, since many components, both internal (hyperparameters) and external (training dataset) to the network used, can dramatically influence the results obtained. With the increasing availability of networks and models, we also stress the importance of finding ways to identify what makes a good tool. We believe that a good tool is not only a performant one: transparency about what it does to the data, usability and reliability are equally important. The responsibility for the proper use of DL in microscopy is now equally shared between users and developers. Uncle Ben has never been more right than today: “With great power comes great responsibility”.
Finally, this article is not intended to set strict standards in place but rather serve as a starting point for further discussions between users, developers, image analysis specialists and journal editors to define appropriate use of these otherwise powerful techniques.
Dengjel; Melanie Denizot; Paul Dent; Channing J Der; Vojo Deretic; Benoît Derrien; Eric Deutsch; Timothy P Devarenne; Rodney J Devenish; Sabrina Di Bartolomeo; Nicola Di Daniele; Fabio Di Domenico; Alessia Di Nardo; Simone Di Paola; Antonio Di Pietro; Livia Di Renzo; Aaron DiAntonio; Guillermo Díaz-Araya; Ines Díaz-Laviada; Maria T Diaz-Meco; Javier Diaz-Nido; Chad A Dickey; Robert C Dickson; Marc Diederich; Paul Digard; Ivan Dikic; Savithrama P Dinesh-Kumar; Chan Ding; Wen-Xing Ding; Zufeng Ding; Luciana Dini; Jörg Hw Distler; Abhinav Diwan; Mojgan Djavaheri-Mergny; Kostyantyn Dmytruk; Renwick Cj Dobson; Volker Doetsch; Karol Dokladny; Svetlana Dokudovskaya; Massimo Donadelli; X Charlie Dong; Xiaonan Dong; Zheng Dong; Terrence M Donohue; Kelly S Doran; Gabriella D'Orazi; Gerald W Dorn; Victor Dosenko; Sami Dridi; Liat Drucker; Jie Du; Li-Lin Du; Lihuan Du; André du Toit; Priyamvada Dua; Lei Duan; Pu Duann; Vikash Kumar Dubey; Michael R Duchen; Michel A Duchosal; Helene Duez; Isabelle Dugail; Verónica I Dumit; Mara C Duncan; Elaine A Dunlop; William A Dunn; Nicolas Dupont; Luc Dupuis; Raúl V Durán; Thomas M Durcan; Stéphane Duvezin-Caubet; Umamaheswar Duvvuri; Vinay Eapen; Darius Ebrahimi-Fakhari; Arnaud Echard; Leopold Eckhart; Charles L Edelstein; Aimee L Edinger; Ludwig Eichinger; Tobias Eisenberg; Avital Eisenberg-Lerner; N Tony Eissa; Wafik S El-Deiry; Victoria El-Khoury; Zvulun Elazar; Hagit Eldar-Finkelman; Chris Jh Elliott; Enzo Emanuele; Urban Emmenegger; Nikolai Engedal; Anna-Mart Engelbrecht; Simone Engelender; Jorrit M Enserink; Ralf Erdmann; Jekaterina Erenpreisa; Rajaraman Eri; Jason L Eriksen; Andreja Erman; Ricardo Escalante; Eeva-Liisa Eskelinen; Lucile Espert; Lorena Esteban-Martínez; Thomas J Evans; Mario Fabri; Gemma Fabrias; Cinzia Fabrizi; Antonio Facchiano; Nils J Færgeman; Alberto Faggioni; W Douglas Fairlie; Chunhai Fan; Daping Fan; Jie Fan; Shengyun Fang; Manolis Fanto; Alessandro Fanzani; Thomas Farkas; Mathias Faure; Francois B Favier; 
Howard Fearnhead; Massimo Federici; Erkang Fei; Tania C Felizardo; Hua Feng; Yibin Feng; Yuchen Feng; Thomas A Ferguson; Álvaro F Fernández; Maite G Fernandez-Barrena; Jose C Fernandez-Checa; Arsenio Fernández-López; Martin E Fernandez-Zapico; Olivier Feron; Elisabetta Ferraro; Carmen Veríssima Ferreira-Halder; Laszlo Fesus; Ralph Feuer; Fabienne C Fiesel; Eduardo C Filippi-Chiela; Giuseppe Filomeni; Gian Maria Fimia; John H Fingert; Steven Finkbeiner; Toren Finkel; Filomena Fiorito; Paul B Fisher; Marc Flajolet; Flavio Flamigni; Oliver Florey; Salvatore Florio; R Andres Floto; Marco Folini; Carlo Follo; Edward A Fon; Francesco Fornai; Franco Fortunato; Alessandro Fraldi; Rodrigo Franco; Arnaud Francois; Aurélie François; Lisa B Frankel; Iain Dc Fraser; Norbert Frey; Damien G Freyssenet; Christian Frezza; Scott L Friedman; Daniel E Frigo; Dongxu Fu; José M Fuentes; Juan Fueyo; Yoshio Fujitani; Yuuki Fujiwara; Mikihiro Fujiya; Mitsunori Fukuda; Simone Fulda; Carmela Fusco; Bozena Gabryel; Matthias Gaestel; Philippe Gailly; Malgorzata Gajewska; Sehamuddin Galadari; Gad Galili; Inmaculada Galindo; Maria F Galindo; Giovanna Galliciotti; Lorenzo Galluzzi; Luca Galluzzi; Vincent Galy; Noor Gammoh; Sam Gandy; Anand K Ganesan; Swamynathan Ganesan; Ian G Ganley; Monique Gannagé; Fen-Biao Gao; Feng Gao; Jian-Xin Gao; Lorena García Nannig; Eleonora García Véscovi; Marina Garcia-Macía; Carmen Garcia-Ruiz; Abhishek D Garg; Pramod Kumar Garg; Ricardo Gargini; Nils Christian Gassen; Damián Gatica; Evelina Gatti; Julie Gavard; Evripidis Gavathiotis; Liang Ge; Pengfei Ge; Shengfang Ge; Po-Wu Gean; Vania Gelmetti; Armando A Genazzani; Jiefei Geng; Pascal Genschik; Lisa Gerner; Jason E Gestwicki; David A Gewirtz; Saeid Ghavami; Eric Ghigo; Debabrata Ghosh; Anna Maria Giammarioli; Francesca Giampieri; Claudia Giampietri; Alexandra Giatromanolaki; Derrick J Gibbings; Lara Gibellini; Spencer B Gibson; Vanessa Ginet; Antonio Giordano; Flaviano Giorgini; Elisa Giovannetti; Stephen E 
Girardin; Suzana Gispert; Sandy Giuliano; Candece L Gladson; Alvaro Glavic; Martin Gleave; Nelly Godefroy; Robert M Gogal; Kuppan Gokulan; Gustavo H Goldman; Delia Goletti; Michael S Goligorsky; Aldrin V Gomes; Ligia C Gomes; Hernando Gomez; Candelaria Gomez-Manzano; Rubén Gómez-Sánchez; Dawit Ap Gonçalves; Ebru Goncu; Qingqiu Gong; Céline Gongora; Carlos B Gonzalez; Pedro Gonzalez-Alegre; Pilar Gonzalez-Cabo; Rosa Ana González-Polo; Ing Swie Goping; Carlos Gorbea; Nikolai V Gorbunov; Daphne R Goring; Adrienne M Gorman; Sharon M Gorski; Sandro Goruppi; Shino Goto-Yamada; Cecilia Gotor; Roberta A Gottlieb; Illana Gozes; Devrim Gozuacik; Yacine Graba; Martin Graef; Giovanna E Granato; Gary Dean Grant; Steven Grant; Giovanni Luca Gravina; Douglas R Green; Alexander Greenhough; Michael T Greenwood; Benedetto Grimaldi; Frédéric Gros; Charles Grose; Jean-Francois Groulx; Florian Gruber; Paolo Grumati; Tilman Grune; Jun-Lin Guan; Kun-Liang Guan; Barbara Guerra; Carlos Guillen; Kailash Gulshan; Jan Gunst; Chuanyong Guo; Lei Guo; Ming Guo; Wenjie Guo; Xu-Guang Guo; Andrea A Gust; Åsa B Gustafsson; Elaine Gutierrez; Maximiliano G Gutierrez; Ho-Shin Gwak; Albert Haas; James E Haber; Shinji Hadano; Monica Hagedorn; David R Hahn; Andrew J Halayko; Anne Hamacher-Brady; Kozo Hamada; Ahmed Hamai; Andrea Hamann; Maho Hamasaki; Isabelle Hamer; Qutayba Hamid; Ester M Hammond; Feng Han; Weidong Han; James T Handa; John A Hanover; Malene Hansen; Masaru Harada; Ljubica Harhaji-Trajkovic; J Wade Harper; Abdel Halim Harrath; Adrian L Harris; James Harris; Udo Hasler; Peter Hasselblatt; Kazuhisa Hasui; Robert G Hawley; Teresa S Hawley; Congcong He; Cynthia Y He; Fengtian He; Gu He; Rong-Rong He; Xian-Hui He; You-Wen He; Yu-Ying He; Joan K Heath; Marie-Josée Hébert; Robert A Heinzen; Gudmundur Vignir Helgason; Michael Hensel; Elizabeth P Henske; Chengtao Her; Paul K Herman; Agustín Hernández; Carlos Hernandez; Sonia Hernández-Tiedra; Claudio Hetz; P Robin Hiesinger; Katsumi Higaki; Sabine 
Hilfiker; Bradford G Hill; Joseph A Hill; William D Hill; Keisuke Hino; Daniel Hofius; Paul Hofman; Günter U Höglinger; Jörg Höhfeld; Marina K Holz; Yonggeun Hong; David A Hood; Jeroen Jm Hoozemans; Thorsten Hoppe; Chin Hsu; Chin-Yuan Hsu; Li-Chung Hsu; Dong Hu; Guochang Hu; Hong-Ming Hu; Hongbo Hu; Ming Chang Hu; Yu-Chen Hu; Zhuo-Wei Hu; Fang Hua; Ya Hua; Canhua Huang; Huey-Lan Huang; Kuo-How Huang; Kuo-Yang Huang; Shile Huang; Shiqian Huang; Wei-Pang Huang; Yi-Ran Huang; Yong Huang; Yunfei Huang; Tobias B Huber; Patricia Huebbe; Won-Ki Huh; Juha J Hulmi; Gang Min Hur; James H Hurley; Zvenyslava Husak; Sabah Na Hussain; Salik Hussain; Jung Jin Hwang; Seungmin Hwang; Thomas Is Hwang; Atsuhiro Ichihara; Yuzuru Imai; Carol Imbriano; Megumi Inomata; Takeshi Into; Valentina Iovane; Juan L Iovanna; Renato V Iozzo; Nancy Y Ip; Javier E Irazoqui; Pablo Iribarren; Yoshitaka Isaka; Aleksandra J Isakovic; Harry Ischiropoulos; Jeffrey S Isenberg; Mohammad Ishaq; Hiroyuki Ishida; Isao Ishii; Jane E Ishmael; Ciro Isidoro; Ken-Ichi Isobe; Erika Isono; Shohreh Issazadeh-Navikas; Koji Itahana; Eisuke Itakura; Andrei I Ivanov; Anand Krishnan V Iyer; José M Izquierdo; Yotaro Izumi; Valentina Izzo; Marja Jäättelä; Nadia Jaber; Daniel John Jackson; William T Jackson; Tony George Jacob; Thomas S Jacques; Chinnaswamy Jagannath; Ashish Jain; Nihar Ranjan Jana; Byoung Kuk Jang; Alkesh Jani; Bassam Janji; Paulo Roberto Jannig; Patric J Jansson; Steve Jean; Marina Jendrach; Ju-Hong Jeon; Niels Jessen; Eui-Bae Jeung; Kailiang Jia; Lijun Jia; Hong Jiang; Hongchi Jiang; Liwen Jiang; Teng Jiang; Xiaoyan Jiang; Xuejun Jiang; Xuejun Jiang; Ying Jiang; Yongjun Jiang; Alberto Jiménez; Cheng Jin; Hongchuan Jin; Lei Jin; Meiyan Jin; Shengkan Jin; Umesh Kumar Jinwal; Eun-Kyeong Jo; Terje Johansen; Daniel E Johnson; Gail Vw Johnson; James D Johnson; Eric Jonasch; Chris Jones; Leo Ab Joosten; Joaquin Jordan; Anna-Maria Joseph; Bertrand Joseph; Annie M Joubert; Dianwen Ju; Jingfang Ju; Hsueh-Fen Juan; 
Katrin Juenemann; Gábor Juhász; Hye Seung Jung; Jae U Jung; Yong-Keun Jung; Heinz Jungbluth; Matthew J Justice; Barry Jutten; Nadeem O Kaakoush; Kai Kaarniranta; Allen Kaasik; Tomohiro Kabuta; Bertrand Kaeffer; Katarina Kågedal; Alon Kahana; Shingo Kajimura; Or Kakhlon; Manjula Kalia; Dhan V Kalvakolanu; Yoshiaki Kamada; Konstantinos Kambas; Vitaliy O Kaminskyy; Harm H Kampinga; Mustapha Kandouz; Chanhee Kang; Rui Kang; Tae-Cheon Kang; Tomotake Kanki; Thirumala-Devi Kanneganti; Haruo Kanno; Anumantha G Kanthasamy; Marc Kantorow; Maria Kaparakis-Liaskos; Orsolya Kapuy; Vassiliki Karantza; Md Razaul Karim; Parimal Karmakar; Arthur Kaser; Susmita Kaushik; Thomas Kawula; A Murat Kaynar; Po-Yuan Ke; Zun-Ji Ke; John H Kehrl; Kate E Keller; Jongsook Kim Kemper; Anne K Kenworthy; Oliver Kepp; Andreas Kern; Santosh Kesari; David Kessel; Robin Ketteler; Isis do Carmo Kettelhut; Bilon Khambu; Muzamil Majid Khan; Vinoth Km Khandelwal; Sangeeta Khare; Juliann G Kiang; Amy A Kiger; Akio Kihara; Arianna L Kim; Cheol Hyeon Kim; Deok Ryong Kim; Do-Hyung Kim; Eung Kweon Kim; Hye Young Kim; Hyung-Ryong Kim; Jae-Sung Kim; Jeong Hun Kim; Jin Cheon Kim; Jin Hyoung Kim; Kwang Woon Kim; Michael D Kim; Moon-Moo Kim; Peter K Kim; Seong Who Kim; Soo-Youl Kim; Yong-Sun Kim; Yonghyun Kim; Adi Kimchi; Alec C Kimmelman; Tomonori Kimura; Jason S King; Karla Kirkegaard; Vladimir Kirkin; Lorrie A Kirshenbaum; Shuji Kishi; Yasuo Kitajima; Katsuhiko Kitamoto; Yasushi Kitaoka; Kaio Kitazato; Rudolf A Kley; Walter T Klimecki; Michael Klinkenberg; Jochen Klucken; Helene Knævelsrud; Erwin Knecht; Laura Knuppertz; Jiunn-Liang Ko; Satoru Kobayashi; Jan C Koch; Christelle Koechlin-Ramonatxo; Ulrich Koenig; Young Ho Koh; Katja Köhler; Sepp D Kohlwein; Masato Koike; Masaaki Komatsu; Eiki Kominami; Dexin Kong; Hee Jeong Kong; Eumorphia G Konstantakou; Benjamin T Kopp; Tamas Korcsmaros; Laura Korhonen; Viktor I Korolchuk; Nadya V Koshkina; Yanjun Kou; Michael I Koukourakis; Constantinos Koumenis; Attila L 
Kovács; Tibor Kovács; Werner J Kovacs; Daisuke Koya; Claudine Kraft; Dimitri Krainc; Helmut Kramer; Tamara Kravic-Stevovic; Wilhelm Krek; Carole Kretz-Remy; Roswitha Krick; Malathi Krishnamurthy; Janos Kriston-Vizi; Guido Kroemer; Michael C Kruer; Rejko Kruger; Nicholas T Ktistakis; Kazuyuki Kuchitsu; Christian Kuhn; Addanki Pratap Kumar; Anuj Kumar; Ashok Kumar; Deepak Kumar; Dhiraj Kumar; Rakesh Kumar; Sharad Kumar; Mondira Kundu; Hsing-Jien Kung; Atsushi Kuno; Sheng-Han Kuo; Jeff Kuret; Tino Kurz; Terry Kwok; Taeg Kyu Kwon; Yong Tae Kwon; Irene Kyrmizi; Albert R La Spada; Frank Lafont; Tim Lahm; Aparna Lakkaraju; Truong Lam; Trond Lamark; Steve Lancel; Terry H Landowski; Darius J R Lane; Jon D Lane; Cinzia Lanzi; Pierre Lapaquette; Louis R Lapierre; Jocelyn Laporte; Johanna Laukkarinen; Gordon W Laurie; Sergio Lavandero; Lena Lavie; Matthew J LaVoie; Betty Yuen Kwan Law; Helen Ka-Wai Law; Kelsey B Law; Robert Layfield; Pedro A Lazo; Laurent Le Cam; Karine G Le Roch; Hervé Le Stunff; Vijittra Leardkamolkarn; Marc Lecuit; Byung-Hoon Lee; Che-Hsin Lee; Erinna F Lee; Gyun Min Lee; He-Jin Lee; Hsinyu Lee; Jae Keun Lee; Jongdae Lee; Ju-Hyun Lee; Jun Hee Lee; Michael Lee; Myung-Shik Lee; Patty J Lee; Sam W Lee; Seung-Jae Lee; Shiow-Ju Lee; Stella Y Lee; Sug Hyung Lee; Sung Sik Lee; Sung-Joon Lee; Sunhee Lee; Ying-Ray Lee; Yong J Lee; Young H Lee; Christiaan Leeuwenburgh; Sylvain Lefort; Renaud Legouis; Jinzhi Lei; Qun-Ying Lei; David A Leib; Gil Leibowitz; Istvan Lekli; Stéphane D Lemaire; John J Lemasters; Marius K Lemberg; Antoinette Lemoine; Shuilong Leng; Guido Lenz; Paola Lenzi; Lilach O Lerman; Daniele Lettieri Barbato; Julia I-Ju Leu; Hing Y Leung; Beth Levine; Patrick A Lewis; Frank Lezoualc'h; Chi Li; Faqiang Li; Feng-Jun Li; Jun Li; Ke Li; Lian Li; Min Li; Min Li; Qiang Li; Rui Li; Sheng Li; Wei Li; Wei Li; Xiaotao Li; Yumin Li; Jiqin Lian; Chengyu Liang; Qiangrong Liang; Yulin Liao; Joana Liberal; Pawel P Liberski; Pearl Lie; Andrew P Lieberman; Hyunjung 
Jade Lim; Kah-Leong Lim; Kyu Lim; Raquel T Lima; Chang-Shen Lin; Chiou-Feng Lin; Fang Lin; Fangming Lin; Fu-Cheng Lin; Kui Lin; Kwang-Huei Lin; Pei-Hui Lin; Tianwei Lin; Wan-Wan Lin; Yee-Shin Lin; Yong Lin; Rafael Linden; Dan Lindholm; Lisa M Lindqvist; Paul Lingor; Andreas Linkermann; Lance A Liotta; Marta M Lipinski; Vitor A Lira; Michael P Lisanti; Paloma B Liton; Bo Liu; Chong Liu; Chun-Feng Liu; Fei Liu; Hung-Jen Liu; Jianxun Liu; Jing-Jing Liu; Jing-Lan Liu; Ke Liu; Leyuan Liu; Liang Liu; Quentin Liu; Rong-Yu Liu; Shiming Liu; Shuwen Liu; Wei Liu; Xian-De Liu; Xiangguo Liu; Xiao-Hong Liu; Xinfeng Liu; Xu Liu; Xueqin Liu; Yang Liu; Yule Liu; Zexian Liu; Zhe Liu; Juan P Liuzzi; Gérard Lizard; Mila Ljujic; Irfan J Lodhi; Susan E Logue; Bal L Lokeshwar; Yun Chau Long; Sagar Lonial; Benjamin Loos; Carlos López-Otín; Cristina López-Vicario; Mar Lorente; Philip L Lorenzi; Péter Lõrincz; Marek Los; Michael T Lotze; Penny E Lovat; Binfeng Lu; Bo Lu; Jiahong Lu; Qing Lu; She-Min Lu; Shuyan Lu; Yingying Lu; Frédéric Luciano; Shirley Luckhart; John Milton Lucocq; Paula Ludovico; Aurelia Lugea; Nicholas W Lukacs; Julian J Lum; Anders H Lund; Honglin Luo; Jia Luo; Shouqing Luo; Claudio Luparello; Timothy Lyons; Jianjie Ma; Yi Ma; Yong Ma; Zhenyi Ma; Juliano Machado; Glaucia M Machado-Santelli; Fernando Macian; Gustavo C MacIntosh; Jeffrey P MacKeigan; Kay F Macleod; John D MacMicking; Lee Ann MacMillan-Crow; Frank Madeo; Muniswamy Madesh; Julio Madrigal-Matute; Akiko Maeda; Tatsuya Maeda; Gustavo Maegawa; Emilia Maellaro; Hannelore Maes; Marta Magariños; Kenneth Maiese; Tapas K Maiti; Luigi Maiuri; Maria Chiara Maiuri; Carl G Maki; Roland Malli; Walter Malorni; Alina Maloyan; Fathia Mami-Chouaib; Na Man; Joseph D Mancias; Eva-Maria Mandelkow; Michael A Mandell; Angelo A Manfredi; Serge N Manié; Claudia Manzoni; Kai Mao; Zixu Mao; Zong-Wan Mao; Philippe Marambaud; Anna Maria Marconi; Zvonimir Marelja; Gabriella Marfe; Marta Margeta; Eva Margittai; Muriel Mari; Francesca V 
Mariani; Concepcio Marin; Sara Marinelli; Guillermo Mariño; Ivanka Markovic; Rebecca Marquez; Alberto M Martelli; Sascha Martens; Katie R Martin; Seamus J Martin; Shaun Martin; Miguel A Martin-Acebes; Paloma Martín-Sanz; Camille Martinand-Mari; Wim Martinet; Jennifer Martinez; Nuria Martinez-Lopez; Ubaldo Martinez-Outschoorn; Moisés Martínez-Velázquez; Marta Martinez-Vicente; Waleska Kerllen Martins; Hirosato Mashima; James A Mastrianni; Giuseppe Matarese; Paola Matarrese; Roberto Mateo; Satoaki Matoba; Naomichi Matsumoto; Takehiko Matsushita; Akira Matsuura; Takeshi Matsuzawa; Mark P Mattson; Soledad Matus; Norma Maugeri; Caroline Mauvezin; Andreas Mayer; Dusica Maysinger; Guillermo D Mazzolini; Mary Kate McBrayer; Kimberly McCall; Craig McCormick; Gerald M McInerney; Skye C McIver; Sharon McKenna; John J McMahon; Iain A McNeish; Fatima Mechta-Grigoriou; Jan Paul Medema; Diego L Medina; Klara Megyeri; Maryam Mehrpour; Jawahar L Mehta; Yide Mei; Ute-Christiane Meier; Alfred J Meijer; Alicia Meléndez; Gerry Melino; Sonia Melino; Edesio Jose Tenorio de Melo; Maria A Mena; Marc D Meneghini; Javier A Menendez; Regina Menezes; Liesu Meng; Ling-Hua Meng; Songshu Meng; Rossella Menghini; A Sue Menko; Rubem Fs Menna-Barreto; Manoj B Menon; Marco A Meraz-Ríos; Giuseppe Merla; Luciano Merlini; Angelica M Merlot; Andreas Meryk; Stefania Meschini; Joel N Meyer; Man-Tian Mi; Chao-Yu Miao; Lucia Micale; Simon Michaeli; Carine Michiels; Anna Rita Migliaccio; Anastasia Susie Mihailidou; Dalibor Mijaljica; Katsuhiko Mikoshiba; Enrico Milan; Leonor Miller-Fleming; Gordon B Mills; Ian G Mills; Georgia Minakaki; Berge A Minassian; Xiu-Fen Ming; Farida Minibayeva; Elena A Minina; Justine D Mintern; Saverio Minucci; Antonio Miranda-Vizuete; Claire H Mitchell; Shigeki Miyamoto; Keisuke Miyazawa; Noboru Mizushima; Katarzyna Mnich; Baharia Mograbi; Simin Mohseni; Luis Ferreira Moita; Marco Molinari; Maurizio Molinari; Andreas Buch Møller; Bertrand Mollereau; Faustino Mollinedo; Marco 
Mongillo; Martha M Monick; Serena Montagnaro; Craig Montell; Darren J Moore; Michael N Moore; Rodrigo Mora-Rodriguez; Paula I Moreira; Etienne Morel; Maria Beatrice Morelli; Sandra Moreno; Michael J Morgan; Arnaud Moris; Yuji Moriyasu; Janna L Morrison; Lynda A Morrison; Eugenia Morselli; Jorge Moscat; Pope L Moseley; Serge Mostowy; Elisa Motori; Denis Mottet; Jeremy C Mottram; Charbel E-H Moussa; Vassiliki E Mpakou; Hasan Mukhtar; Jean M Mulcahy Levy; Sylviane Muller; Raquel Muñoz-Moreno; Cristina Muñoz-Pinedo; Christian Münz; Maureen E Murphy; James T Murray; Aditya Murthy; Indira U Mysorekar; Ivan R Nabi; Massimo Nabissi; Gustavo A Nader; Yukitoshi Nagahara; Yoshitaka Nagai; Kazuhiro Nagata; Anika Nagelkerke; Péter Nagy; Samisubbu R Naidu; Sreejayan Nair; Hiroyasu Nakano; Hitoshi Nakatogawa; Meera Nanjundan; Gennaro Napolitano; Naweed I Naqvi; Roberta Nardacci; Derek P Narendra; Masashi Narita; Anna Chiara Nascimbeni; Ramesh Natarajan; Luiz C Navegantes; Steffan T Nawrocki; Taras Y Nazarko; Volodymyr Y Nazarko; Thomas Neill; Luca M Neri; Mihai G Netea; Romana T Netea-Maier; Bruno M Neves; Paul A Ney; Ioannis P Nezis; Hang Tt Nguyen; Huu Phuc Nguyen; Anne-Sophie Nicot; Hilde Nilsen; Per Nilsson; Mikio Nishimura; Ichizo Nishino; Mireia Niso-Santano; Hua Niu; Ralph A Nixon; Vincent Co Njar; Takeshi Noda; Angelika A Noegel; Elsie Magdalena Nolte; Erik Norberg; Koenraad K Norga; Sakineh Kazemi Noureini; Shoji Notomi; Lucia Notterpek; Karin Nowikovsky; Nobuyuki Nukina; Thorsten Nürnberger; Valerie B O'Donnell; Tracey O'Donovan; Peter J O'Dwyer; Ina Oehme; Clara L Oeste; Michinaga Ogawa; Besim Ogretmen; Yuji Ogura; Young J Oh; Masaki Ohmuraya; Takayuki Ohshima; Rani Ojha; Koji Okamoto; Toshiro Okazaki; F Javier Oliver; Karin Ollinger; Stefan Olsson; Daniel P Orban; Paulina Ordonez; Idil Orhon; Laszlo Orosz; Eyleen J O'Rourke; Helena Orozco; Angel L Ortega; Elena Ortona; Laura D Osellame; Junko Oshima; Shigeru Oshima; Heinz D Osiewacz; Takanobu Otomo; Kinya Otsu; 
Jing-Hsiung James Ou; Tiago F Outeiro; Dong-Yun Ouyang; Hongjiao Ouyang; Michael Overholtzer; Michelle A Ozbun; P Hande Ozdinler; Bulent Ozpolat; Consiglia Pacelli; Paolo Paganetti; Guylène Page; Gilles Pages; Ugo Pagnini; Beata Pajak; Stephen C Pak; Karolina Pakos-Zebrucka; Nazzy Pakpour; Zdena Palková; Francesca Palladino; Kathrin Pallauf; Nicolas Pallet; Marta Palmieri; Søren R Paludan; Camilla Palumbo; Silvia Palumbo; Olatz Pampliega; Hongming Pan; Wei Pan; Theocharis Panaretakis; Aseem Pandey; Areti Pantazopoulou; Zuzana Papackova; Daniela L Papademetrio; Issidora Papassideri; Alessio Papini; Nirmala Parajuli; Julian Pardo; Vrajesh V Parekh; Giancarlo Parenti; Jong-In Park; Junsoo Park; Ohkmae K Park; Roy Parker; Rosanna Parlato; Jan B Parys; Katherine R Parzych; Jean-Max Pasquet; Benoit Pasquier; Kishore Bs Pasumarthi; Daniel Patschan; Cam Patterson; Sophie Pattingre; Scott Pattison; Arnim Pause; Hermann Pavenstädt; Flaminia Pavone; Zully Pedrozo; Fernando J Peña; Miguel A Peñalva; Mario Pende; Jianxin Peng; Fabio Penna; Josef M Penninger; Anna Pensalfini; Salvatore Pepe; Gustavo Js Pereira; Paulo C Pereira; Verónica Pérez-de la Cruz; María Esther Pérez-Pérez; Diego Pérez-Rodríguez; Dolores Pérez-Sala; Celine Perier; Andras Perl; David H Perlmutter; Ida Perrotta; Shazib Pervaiz; Maija Pesonen; Jeffrey E Pessin; Godefridus J Peters; Morten Petersen; Irina Petrache; Basil J Petrof; Goran Petrovski; James M Phang; Mauro Piacentini; Marina Pierdominici; Philippe Pierre; Valérie Pierrefite-Carle; Federico Pietrocola; Felipe X Pimentel-Muiños; Mario Pinar; Benjamin Pineda; Ronit Pinkas-Kramarski; Marcello Pinti; Paolo Pinton; Bilal Piperdi; James M Piret; Leonidas C Platanias; Harald W Platta; Edward D Plowey; Stefanie Pöggeler; Marc Poirot; Peter Polčic; Angelo Poletti; Audrey H Poon; Hana Popelka; Blagovesta Popova; Izabela Poprawa; Shibu M Poulose; Joanna Poulton; Scott K Powers; Ted Powers; Mercedes Pozuelo-Rubio; Krisna Prak; Reinhild Prange; Mark Prescott; 
Muriel Priault; Sharon Prince; Richard L Proia; Tassula Proikas-Cezanne; Holger Prokisch; Vasilis J Promponas; Karin Przyklenk; Rosa Puertollano; Subbiah Pugazhenthi; Luigi Puglielli; Aurora Pujol; Julien Puyal; Dohun Pyeon; Xin Qi; Wen-Bin Qian; Zheng-Hong Qin; Yu Qiu; Ziwei Qu; Joe Quadrilatero; Frederick Quinn; Nina Raben; Hannah Rabinowich; Flavia Radogna; Michael J Ragusa; Mohamed Rahmani; Komal Raina; Sasanka Ramanadham; Rajagopal Ramesh; Abdelhaq Rami; Sarron Randall-Demllo; Felix Randow; Hai Rao; V Ashutosh Rao; Blake B Rasmussen; Tobias M Rasse; Edward A Ratovitski; Pierre-Emmanuel Rautou; Swapan K Ray; Babak Razani; Bruce H Reed; Fulvio Reggiori; Markus Rehm; Andreas S Reichert; Theo Rein; David J Reiner; Eric Reits; Jun Ren; Xingcong Ren; Maurizio Renna; Jane Eb Reusch; Jose L Revuelta; Leticia Reyes; Alireza R Rezaie; Robert I Richards; Des R Richardson; Clémence Richetta; Michael A Riehle; Bertrand H Rihn; Yasuko Rikihisa; Brigit E Riley; Gerald Rimbach; Maria Rita Rippo; Konstantinos Ritis; Federica Rizzi; Elizete Rizzo; Peter J Roach; Jeffrey Robbins; Michel Roberge; Gabriela Roca; Maria Carmela Roccheri; Sonia Rocha; Cecilia Mp Rodrigues; Clara I Rodríguez; Santiago Rodriguez de Cordoba; Natalia Rodriguez-Muela; Jeroen Roelofs; Vladimir V Rogov; Troy T Rohn; Bärbel Rohrer; Davide Romanelli; Luigina Romani; Patricia Silvia Romano; M Isabel G Roncero; Jose Luis Rosa; Alicia Rosello; Kirill V Rosen; Philip Rosenstiel; Magdalena Rost-Roszkowska; Kevin A Roth; Gael Roué; Mustapha Rouis; Kasper M Rouschop; Daniel T Ruan; Diego Ruano; David C Rubinsztein; Edmund B Rucker; Assaf Rudich; Emil Rudolf; Ruediger Rudolf; Markus A Ruegg; Carmen Ruiz-Roldan; Avnika Ashok Ruparelia; Paola Rusmini; David W Russ; Gian Luigi Russo; Giuseppe Russo; Rossella Russo; Tor Erik Rusten; Victoria Ryabovol; Kevin M Ryan; Stefan W Ryter; David M Sabatini; Michael Sacher; Carsten Sachse; Michael N Sack; Junichi Sadoshima; Paul Saftig; Ronit Sagi-Eisenberg; Sumit Sahni; Pothana 
Saikumar; Tsunenori Saito; Tatsuya Saitoh; Koichi Sakakura; Machiko Sakoh-Nakatogawa; Yasuhito Sakuraba; María Salazar-Roa; Paolo Salomoni; Ashok K Saluja; Paul M Salvaterra; Rosa Salvioli; Afshin Samali; Anthony Mj Sanchez; José A Sánchez-Alcázar; Ricardo Sanchez-Prieto; Marco Sandri; Miguel A Sanjuan; Stefano Santaguida; Laura Santambrogio; Giorgio Santoni; Claudia Nunes Dos Santos; Shweta Saran; Marco Sardiello; Graeme Sargent; Pallabi Sarkar; Sovan Sarkar; Maria Rosa Sarrias; Minnie M Sarwal; Chihiro Sasakawa; Motoko Sasaki; Miklos Sass; Ken Sato; Miyuki Sato; Joseph Satriano; Niramol Savaraj; Svetlana Saveljeva; Liliana Schaefer; Ulrich E Schaible; Michael Scharl; Hermann M Schatzl; Randy Schekman; Wiep Scheper; Alfonso Schiavi; Hyman M Schipper; Hana Schmeisser; Jens Schmidt; Ingo Schmitz; Bianca E Schneider; E Marion Schneider; Jaime L Schneider; Eric A Schon; Miriam J Schönenberger; Axel H Schönthal; Daniel F Schorderet; Bernd Schröder; Sebastian Schuck; Ryan J Schulze; Melanie Schwarten; Thomas L Schwarz; Sebastiano Sciarretta; Kathleen Scotto; A Ivana Scovassi; Robert A Screaton; Mark Screen; Hugo Seca; Simon Sedej; Laura Segatori; Nava Segev; Per O Seglen; Jose M Seguí-Simarro; Juan Segura-Aguilar; Ekihiro Seki; Christian Sell; Iban Seiliez; Clay F Semenkovich; Gregg L Semenza; Utpal Sen; Andreas L Serra; Ana Serrano-Puebla; Hiromi Sesaki; Takao Setoguchi; Carmine Settembre; John J Shacka; Ayesha N Shajahan-Haq; Irving M Shapiro; Shweta Sharma; Hua She; C-K James Shen; Chiung-Chyi Shen; Han-Ming Shen; Sanbing Shen; Weili Shen; Rui Sheng; Xianyong Sheng; Zu-Hang Sheng; Trevor G Shepherd; Junyan Shi; Qiang Shi; Qinghua Shi; Yuguang Shi; Shusaku Shibutani; Kenichi Shibuya; Yoshihiro Shidoji; Jeng-Jer Shieh; Chwen-Ming Shih; Yohta Shimada; Shigeomi Shimizu; Dong Wook Shin; Mari L Shinohara; Michiko Shintani; Takahiro Shintani; Tetsuo Shioi; Ken Shirabe; Ronit Shiri-Sverdlov; Orian Shirihai; Gordon C Shore; Chih-Wen Shu; Deepak Shukla; Andriy A Sibirny; 
Valentina Sica; Christina J Sigurdson; Einar M Sigurdsson; Puran Singh Sijwali; Beata Sikorska; Wilian A Silveira; Sandrine Silvente-Poirot; Gary A Silverman; Jan Simak; Thomas Simmet; Anna Katharina Simon; Hans-Uwe Simon; Cristiano Simone; Matias Simons; Anne Simonsen; Rajat Singh; Shivendra V Singh; Shrawan K Singh; Debasish Sinha; Sangita Sinha; Frank A Sinicrope; Agnieszka Sirko; Kapil Sirohi; Balindiwe Jn Sishi; Annie Sittler; Parco M Siu; Efthimios Sivridis; Anna Skwarska; Ruth Slack; Iva Slaninová; Nikolai Slavov; Soraya S Smaili; Keiran Sm Smalley; Duncan R Smith; Stefaan J Soenen; Scott A Soleimanpour; Anita Solhaug; Kumaravel Somasundaram; Jin H Son; Avinash Sonawane; Chunjuan Song; Fuyong Song; Hyun Kyu Song; Ju-Xian Song; Wei Song; Kai Y Soo; Anil K Sood; Tuck Wah Soong; Virawudh Soontornniyomkij; Maurizio Sorice; Federica Sotgia; David R Soto-Pantoja; Areechun Sotthibundhu; Maria João Sousa; Herman P Spaink; Paul N Span; Anne Spang; Janet D Sparks; Peter G Speck; Stephen A Spector; Claudia D Spies; Wolfdieter Springer; Daret St Clair; Alessandra Stacchiotti; Bart Staels; Michael T Stang; Daniel T Starczynowski; Petro Starokadomskyy; Clemens Steegborn; John W Steele; Leonidas Stefanis; Joan Steffan; Christine M Stellrecht; Harald Stenmark; Tomasz M Stepkowski; Stęphan T Stern; Craig Stevens; Brent R Stockwell; Veronika Stoka; Zuzana Storchova; Björn Stork; Vassilis Stratoulias; Dimitrios J Stravopodis; Pavel Strnad; Anne Marie Strohecker; Anna-Lena Ström; Per Stromhaug; Jiri Stulik; Yu-Xiong Su; Zhaoliang Su; Carlos S Subauste; Srinivasa Subramaniam; Carolyn M Sue; Sang Won Suh; Xinbing Sui; Supawadee Sukseree; David Sulzer; Fang-Lin Sun; Jiaren Sun; Jun Sun; Shi-Yong Sun; Yang Sun; Yi Sun; Yingjie Sun; Vinod Sundaramoorthy; Joseph Sung; Hidekazu Suzuki; Kuninori Suzuki; Naoki Suzuki; Tadashi Suzuki; Yuichiro J Suzuki; Michele S Swanson; Charles Swanton; Karl Swärd; Ghanshyam Swarup; Sean T Sweeney; Paul W Sylvester; Zsuzsanna Szatmari; Eva Szegezdi; 
Peter W Szlosarek; Heinrich Taegtmeyer; Marco Tafani; Emmanuel Taillebourg; Stephen Wg Tait; Krisztina Takacs-Vellai; Yoshinori Takahashi; Szabolcs Takáts; Genzou Takemura; Nagio Takigawa; Nicholas J Talbot; Elena Tamagno; Jerome Tamburini; Cai-Ping Tan; Lan Tan; Mei Lan Tan; Ming Tan; Yee-Joo Tan; Keiji Tanaka; Masaki Tanaka; Daolin Tang; Dingzhong Tang; Guomei Tang; Isei Tanida; Kunikazu Tanji; Bakhos A Tannous; Jose A Tapia; Inmaculada Tasset-Cuevas; Marc Tatar; Iman Tavassoly; Nektarios Tavernarakis; Allen Taylor; Graham S Taylor; Gregory A Taylor; J Paul Taylor; Mark J Taylor; Elena V Tchetina; Andrew R Tee; Fatima Teixeira-Clerc; Sucheta Telang; Tewin Tencomnao; Ba-Bie Teng; Ru-Jeng Teng; Faraj Terro; Gianluca Tettamanti; Arianne L Theiss; Anne E Theron; Kelly Jean Thomas; Marcos P Thomé; Paul G Thomes; Andrew Thorburn; Jeremy Thorner; Thomas Thum; Michael Thumm; Teresa Lm Thurston; Ling Tian; Andreas Till; Jenny Pan-Yun Ting; Vladimir I Titorenko; Lilach Toker; Stefano Toldo; Sharon A Tooze; Ivan Topisirovic; Maria Lyngaas Torgersen; Liliana Torosantucci; Alicia Torriglia; Maria Rosaria Torrisi; Cathy Tournier; Roberto Towns; Vladimir Trajkovic; Leonardo H Travassos; Gemma Triola; Durga Nand Tripathi; Daniela Trisciuoglio; Rodrigo Troncoso; Ioannis P Trougakos; Anita C Truttmann; Kuen-Jer Tsai; Mario P Tschan; Yi-Hsin Tseng; Takayuki Tsukuba; Allan Tsung; Andrey S Tsvetkov; Shuiping Tu; Hsing-Yu Tuan; Marco Tucci; David A Tumbarello; Boris Turk; Vito Turk; Robin Fb Turner; Anders A Tveita; Suresh C Tyagi; Makoto Ubukata; Yasuo Uchiyama; Andrej Udelnow; Takashi Ueno; Midori Umekawa; Rika Umemiya-Shirafuji; Benjamin R Underwood; Christian Ungermann; Rodrigo P Ureshino; Ryo Ushioda; Vladimir N Uversky; Néstor L Uzcátegui; Thomas Vaccari; Maria I Vaccaro; Libuše Váchová; Helin Vakifahmetoglu-Norberg; Rut Valdor; Enza Maria Valente; Francois Vallette; Angela M Valverde; Greet Van den Berghe; Ludo Van Den Bosch; Gijs R van den Brink; F Gisou van der Goot; Ida J 
van der Klei; Luc Jw van der Laan; Wouter G van Doorn; Marjolein van Egmond; Kenneth L van Golen; Luc Van Kaer; Menno van Lookeren Campagne; Peter Vandenabeele; Wim Vandenberghe; Ilse Vanhorebeek; Isabel Varela-Nieto; M Helena Vasconcelos; Radovan Vasko; Demetrios G Vavvas; Ignacio Vega-Naredo; Guillermo Velasco; Athanassios D Velentzas; Panagiotis D Velentzas; Tibor Vellai; Edo Vellenga; Mikkel Holm Vendelbo; Kartik Venkatachalam; Natascia Ventura; Salvador Ventura; Patrícia St Veras; Mireille Verdier; Beata G Vertessy; Andrea Viale; Michel Vidal; Helena L A Vieira; Richard D Vierstra; Nadarajah Vigneswaran; Neeraj Vij; Miquel Vila; Margarita Villar; Victor H Villar; Joan Villarroya; Cécile Vindis; Giampietro Viola; Maria Teresa Viscomi; Giovanni Vitale; Dan T Vogl; Olga V Voitsekhovskaja; Clarissa von Haefen; Karin von Schwarzenberg; Daniel E Voth; Valérie Vouret-Craviari; Kristina Vuori; Jatin M Vyas; Christian Waeber; Cheryl Lyn Walker; Mark J Walker; Jochen Walter; Lei Wan; Xiangbo Wan; Bo Wang; Caihong Wang; Chao-Yung Wang; Chengshu Wang; Chenran Wang; Chuangui Wang; Dong Wang; Fen Wang; Fuxin Wang; Guanghui Wang; Hai-Jie Wang; Haichao Wang; Hong-Gang Wang; Hongmin Wang; Horng-Dar Wang; Jing Wang; Junjun Wang; Mei Wang; Mei-Qing Wang; Pei-Yu Wang; Peng Wang; Richard C Wang; Shuo Wang; Ting-Fang Wang; Xian Wang; Xiao-Jia Wang; Xiao-Wei Wang; Xin Wang; Xuejun Wang; Yan Wang; Yanming Wang; Ying Wang; Ying-Jan Wang; Yipeng Wang; Yu Wang; Yu Tian Wang; Yuqing Wang; Zhi-Nong Wang; Pablo Wappner; Carl Ward; Diane McVey Ward; Gary Warnes; Hirotaka Watada; Yoshihisa Watanabe; Kei Watase; Timothy E Weaver; Colin D Weekes; Jiwu Wei; Thomas Weide; Conrad C Weihl; Günther Weindl; Simone Nardin Weis; Longping Wen; Xin Wen; Yunfei Wen; Benedikt Westermann; Cornelia M Weyand; Anthony R White; Eileen White; J Lindsay Whitton; Alexander J Whitworth; Joëlle Wiels; Franziska Wild; Manon E Wildenberg; Tom Wileman; Deepti Srinivas Wilkinson; Simon Wilkinson; Dieter Willbold; Chris 
et al.
Journal:  Autophagy       Date:  2016       Impact factor: 16.016

8.  NiftyNet: a deep-learning platform for medical imaging.

Authors:  Eli Gibson; Wenqi Li; Carole Sudre; Lucas Fidon; Dzhoshkun I Shakir; Guotai Wang; Zach Eaton-Rosen; Robert Gray; Tom Doel; Yipeng Hu; Tom Whyntie; Parashkev Nachev; Marc Modat; Dean C Barratt; Sébastien Ourselin; M Jorge Cardoso; Tom Vercauteren
Journal:  Comput Methods Programs Biomed       Date:  2018-01-31       Impact factor: 5.428

9.  nucleAIzer: A Parameter-free Deep Learning Framework for Nucleus Segmentation Using Image Style Transfer.

Authors:  Reka Hollandi; Abel Szkalisity; Timea Toth; Ervin Tasnadi; Csaba Molnar; Botond Mathe; Istvan Grexa; Jozsef Molnar; Arpad Balind; Mate Gorbe; Maria Kovacs; Ede Migh; Allen Goodman; Tamas Balassa; Krisztian Koos; Wenyu Wang; Juan Carlos Caicedo; Norbert Bara; Ferenc Kovacs; Lassi Paavolainen; Tivadar Danka; Andras Kriston; Anne Elizabeth Carpenter; Kevin Smith; Peter Horvath
Journal:  Cell Syst       Date:  2020-05-07       Impact factor: 10.304

  7 in total

1.  Open microscopy in the life sciences: quo vadis?

Authors:  Johannes Hohlbein; Benedict Diederich; Barbora Marsikova; Emmanuel G Reynaud; Séamus Holden; Wiebke Jahr; Robert Haase; Kirti Prakash
Journal:  Nat Methods       Date:  2022-09       Impact factor: 47.990

2.  DeepBacs for multi-task bacterial image analysis using open-source deep learning approaches.

Authors:  Christoph Spahn; Estibaliz Gómez-de-Mariscal; Romain F Laine; Pedro M Pereira; Lucas von Chamier; Mia Conduit; Mariana G Pinho; Guillaume Jacquemet; Séamus Holden; Mike Heilemann; Ricardo Henriques
Journal:  Commun Biol       Date:  2022-07-09

3.  Omnipose: a high-precision morphology-independent solution for bacterial cell segmentation.

Authors:  Kevin J Cutler; Carsen Stringer; Teresa W Lo; Luca Rappez; Nicholas Stroustrup; S Brook Peterson; Paul A Wiggins; Joseph D Mougous
Journal:  Nat Methods       Date:  2022-10-17       Impact factor: 47.990

Review 4.  Deep learning – promises for 3D nuclear imaging: a guide for biologists.

Authors:  Guillaume Mougeot; Tristan Dubos; Frédéric Chausse; Emilie Péry; Katja Graumann; Christophe Tatout; David E Evans; Sophie Desset
Journal:  J Cell Sci       Date:  2022-04-14       Impact factor: 5.235

Review 5.  Prospects of Surface-Enhanced Raman Spectroscopy for Biomarker Monitoring toward Precision Medicine.

Authors:  Javier Plou; Pablo S Valera; Isabel García; Carlos D L de Albuquerque; Arkaitz Carracedo; Luis M Liz-Marzán
Journal:  ACS Photonics       Date:  2022-02-02       Impact factor: 7.529

6.  DetecDiv, a generalist deep-learning platform for automated cell division tracking and survival analysis.

Authors:  Théo Aspert; Didier Hentsch; Gilles Charvin
Journal:  Elife       Date:  2022-08-17       Impact factor: 8.713

7.  Fast DNA-PAINT imaging using a deep neural network.

Authors:  Kaarjel K Narayanasamy; Johanna V Rahm; Siddharth Tourani; Mike Heilemann
Journal:  Nat Commun       Date:  2022-08-27       Impact factor: 17.694


北京卡尤迪生物科技股份有限公司 (Beijing Coyote Bioscience Co., Ltd.) © 2022-2023.