
Deep learning approach for quantification of organelles and misfolded polypeptide delivery within degradative compartments.

Diego Morone1,2, Alessandro Marazza1,2,3, Timothy J Bergmann1,2, Maurizio Molinari1,2,4.   

Abstract

Endolysosomal compartments maintain cellular fitness by clearing dysfunctional organelles and proteins from cells. Modulation of their activity offers therapeutic opportunities. Quantification of cargo delivery to and/or accumulation within endolysosomes is instrumental for characterizing lysosome-driven pathways at the molecular level and monitoring consequences of genetic or environmental modifications. Here we introduce LysoQuant, a deep learning approach for segmentation and classification of fluorescence images capturing cargo delivery within endolysosomes for clearance. LysoQuant is trained for unbiased and rapid recognition with human-level accuracy, and the pipeline informs on a series of quantitative parameters such as endolysosome number, size, shape, position within cells, and occupancy, which report on activity of lysosome-driven pathways. In our selected examples, LysoQuant successfully determines the magnitude of mechanistically distinct catabolic pathways that ensure lysosomal clearance of a model organelle, the endoplasmic reticulum, and of a model protein, polymerogenic ATZ. It does so with accuracy and velocity compatible with those of high-throughput analyses.

Year:  2020        PMID: 32401604      PMCID: PMC7359569          DOI: 10.1091/mbc.E20-04-0269

Source DB:  PubMed          Journal:  Mol Biol Cell        ISSN: 1059-1524            Impact factor:   4.138


INTRODUCTION

Lysosomes (hereafter endolysosomes, EL) have long been considered static organelles ensuring degradation and recycling of cellular wastes. However, they are anything but static: their activity, intracellular distribution, and number, as well as their capacity to welcome cargo, are regulated by various signaling pathways and cellular needs (Huotari and Helenius, 2011; Bright ; Ballabio and Bonifacino, 2019). By clearing damaged organelles and proteins from cells, they make a substantial contribution to tissue and organ homeostasis. Accumulating knowledge expands the number of diseases directly and indirectly linked to their dysfunction, from rare lysosomal storage disorders (Marques and Saftig, 2019) to more frequent cancers, metabolic and neurodegenerative diseases (Fraldi ; Kimmelman and White, 2017; Gilleron ). Quantitative approaches that monitor the magnitude of delivery to EL of proteins or organelles to be removed from cells, or the accumulation of unprocessed material within EL lumina, are expected to contribute to understanding the mechanistic details of lysosome-driven pathways and may find application for diagnostic and therapeutic purposes. Here, we used confocal laser scanning microscopy (CLSM) to monitor delivery and/or luminal accumulation of cargo within LAMP1- or RAB7-positive EL. As cargo, we selected a model organelle (the endoplasmic reticulum [ER]) and a model disease-causing aberrant gene product (the polymerogenic ATZ variant of the secretory protein α1-antitrypsin), since their lysosomal turnover has a clear connection with human diseases (Marciniak ; Bergmann ; Hubner and Dikic, 2019). 
As such, the catabolic pathways under quantitative investigation were recov-ER-phagy, which ensures lysosomal removal of excess ER during resolution of ER stresses (Fumagalli, Noack, Bergmann, Presmanes, ; Loi ), ER-to-lysosome-associated degradation (ERLAD) (Fregno and Molinari, 2019), which removes proteasome-resistant misfolded proteins from cells by delivering the ER portions containing them to the EL (Forrester, De Leonibus, Grumati, Fasana, , Fregno, Fasana, ), as well as conditions that mimic starvation-induced, FAM134B-driven ER-phagy (Khaminets, Heinrich, , Liang, Lingeman, ). Quantitative analyses of CLSM images were first performed with classic segmentation algorithms, which rely on a set of manually defined features and on the training of machine learning classifiers. Despite careful optimization, these tended to fail with objects of heterogeneous signal intensity and morphology (Kan, 2017), as EL in mammalian cells are. The performance of machine learning approaches remained well below human accuracy, for example, when the quantification tasks had to distinguish empty EL from EL capturing select material to be cleared from cells, such as an organelle (e.g., the ER) or a disease-causing polypeptide. We therefore turned to deep learning (DL), a set of machine learning techniques that involves neural networks with many layers of abstraction (LeCun ). Supervised DL methods are computational models that extract relevant features from sets of human-defined training data and progressively adapt over multiple iterations to perform a specific task (Marx, 2019). DL has been applied to many cellular image analysis tasks (Moen ), and the application of DL techniques to morphometric studies of subcellular structures (in our study, organelles, organelle portions, and protein aggregates) is sought after (Moen ). Among the neural network architectures, U-Net has been used for detection and segmentation of medical and light microscopy images (Ronneberger ; Falk ). 
U-Net is especially useful because it can be trained with limited annotated data sets (Van Valen ; Bulten ; Nguyen ; Oktay and Gurses, 2019; van der Heyden ; Zhuang ). Additionally, U-Net automatically finds the representation of an image that optimizes segmentation performance, using the typical architecture of a convolutional network over subsequent abstraction levels while adding an up-sampling path that improves network optimization by propagating context information to higher-resolution layers (Ronneberger ). In this study, we developed a novel DL approach to perform detection and segmentation of endolysosomal degradative compartments. For this, we constructed a dataset of images fully annotated manually by operators and built an optimized neural network architecture based on U-Net. We then integrated this DL framework into an ImageJ plugin, which we provide, to streamline the segmentation and analysis tasks. We challenged the neural network architecture for its capacity to quantitatively assess three catabolic pathways with proven involvement in maintenance of cellular homeostasis and proteostasis (recov-ER-phagy, ERLAD, and ER-phagy). We evaluated the network performance with different metrics: Intersection over Union (IoU), Receiver Operating Characteristic (ROC) curve, and F1 scores for segmentation and detection tasks (Sokolova and Lapalme, 2009; Moen ). LysoQuant determined the magnitude of recov-ER-phagy, ERLAD, and ER-phagy and the consequences of gene editing inactivating crucial regulatory elements of these pathways. It did so with high accuracy and provided information on a series of parameters such as number, size, shape, and position of empty and loaded EL that are instrumental to understanding the pathways under investigation and the consequences of their modulation in the finest detail. Its analytic speed makes LysoQuant useful for high-content screening.

RESULTS

Manual detection of endolysosomes

Ectopic expression of the ER-phagy receptor SEC62 in mouse embryonic fibroblasts (MEF) triggers delivery of ER portions within LAMP1-positive EL for clearance (Fumagalli, Noack, Bergmann, Presmanes, ; Loi ). SEC62-labeled ER portions accumulate within EL upon inhibition of lysosomal activity with bafilomycin A1 (BafA1) (Klionsky ). We reasoned that this is a paradigmatic experiment to set up an approach that performs automated, image-based, quantitative analyses of EL features and function. We first established a gold standard of EL detection accuracy in confocal laser scanning microscopy (CLSM) by asking three operators to manually annotate 1170 LAMP1-positive EL from 10 cells (in four biological replicates). We generated a lab consensus image, defined as the pixelwise regions annotated by at least two out of three operators. We then calculated the average IoU for each operator. Here, we were interested in segmentation differences between single operators, so we quantified the performance in drawing all EL irrespective of their content. This resulted in an average IoU of 0.850 ± 0.040, or a relative standard error of 12.9 ± 3.2%.
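The consensus and IoU computations described above can be sketched as follows. This is an illustrative Python/NumPy reimplementation, not the code used in the study; function names and the toy annotations are ours.

```python
import numpy as np

def consensus_mask(annotations):
    """Pixelwise lab consensus: keep pixels annotated by >= 2 of 3 operators."""
    votes = np.sum([a.astype(bool) for a in annotations], axis=0)
    return votes >= 2

def iou(mask_a, mask_b):
    """Intersection over Union between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

# Toy 3x3 annotations of the same EL region by three operators
op1 = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
op2 = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
op3 = np.array([[0, 1, 1], [1, 1, 0], [0, 0, 0]])

consensus = consensus_mask([op1, op2, op3])
# Score each operator against the consensus, as done for the gold standard
per_operator_iou = [iou(op, consensus) for op in (op1, op2, op3)]
```

Averaging `per_operator_iou` over all cells would yield the study's gold-standard figure (0.850 ± 0.040 on the real annotations).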

Automatized detection of endolysosomes: available approaches

Next, we assessed the capacity of available image analysis approaches to faithfully recognize individual EL dispersed in the mammalian cells’ cytosol. EL are revealed in CLSM with antibodies to the endogenous surface protein LAMP1 (Figure 1A, green and its inset). Their size, the intensity of their surface signal, their intracellular distribution, and the distance between individual EL vary within a cell and among different cells (Figure 1A, inset). Thus, automatized analyses aiming at establishing the number of EL (detection task) and defining their shape (segmentation task) often fail when applied to biological samples. For example, common standard image analysis approaches that rely on automatic thresholding with the IsoData algorithm followed by watershed segmentation (Soille and Vincent, 1990; IoU 0.641, F1 score 0.782. Figure 1B, Supplemental Figure 1B), machine learning approaches such as iLastik Random Forest (Berg ; IoU 0.599, F1 score 0.749. Figure 1C, Supplemental Figure 1C), Support Vector Machine (Sommer ; IoU 0.512, F1 score 0.677. Figure 1D, Supplemental Figure 1D), Trainable Weka Segmentation (Arganda-Carreras ; IoU 0.487, F1 score 0.655. Figure 1E, Supplemental Figure 1E), but also state-of-the-art algorithms previously developed for quantifying endosomes, such as Squassh (Helmuth ; IoU 0.696, F1 score 0.821. Figure 1F, Supplemental Figure 1F), the Icy spot detection plugin (de Chaumont ; IoU 0.187, F1 score 0.315. Figure 1G, Supplemental Figure 1G), cmeAnalysis (Aguet ; IoU 0.504, F1 score 0.671. Figure 1H, Supplemental Figure 1H), and object-based colocalization algorithms, such as the Interaction Analysis plugin (Helmuth ; IoU 0.229, F1 score 0.372. Figure 1I, Supplemental Figure 1I) or the Colocalisation Pipeline plugin (Woodcroft ; IoU 0.651, F1 score 0.789. Figure 1J, Supplemental Figure 1J), fail to satisfactorily distinguish individual EL if these are located in close proximity to each other. The Circular Hough Transform (IoU 0.573, F1 score 0.728. 
Figure 1K, Supplemental Figure 1K; Atherton and Kerbyson, 1999), when applied to detect circular bright (EL membrane) or circular dark regions (EL lumen), fails to detect EL in samples with heterogeneous intensity of their surface signal and to segment their shape (Figure 1K, Supplemental Figure 1K, inset). Finally, the application of Stardist star-convex polygons, which involves the training of a deep learning network (Schmidt, Weigert, ; Weigert ), correctly recognizes the positions of most of the individual EL with strong membrane signals but fails to recognize those with low intensity (IoU 0.351, F1 score 0.719. Figure 1L, Supplemental Figure 1L). Stardist also exaggerates EL size, resulting in poor shape definition, which prevents the recognition of differences in shape factors and causes EL boundaries to overlap when EL lie in close proximity.
FIGURE 1:

Standard segmentation approaches. (A) CLSM image where LAMP1 decorates the limiting membrane of EL and its inset. (B) Automatic IsoData thresholding followed by watershed segmentation; (C) random forest machine learning with iLastik; (D) Support Vector Machine (SVM); (E) trainable Weka segmentation; (F) Squassh region competition segmentation; (G) Icy spot detection plugin; (H) cmeAnalysis; (I) Interaction Analysis ImageJ plugin; (J) Colocalisation Pipeline ImageJ plugin; (K) circular Hough transform (CHT); (L) Stardist star-convex polygons ImageJ plugin. In all images, true positives are in white, false negatives in yellow, and false positives in blue with respect to manual segmentation. F1 score for segmentation and Intersection over Union (IoU) are indicated.

The unsatisfactory results obtained with the approaches presented above prove the need for more reliable and accurate methods for rapid, automatic segmentation and quantification of EL shape and activity.

Detection and classification of endolysosomes: LysoQuant workflow

To tackle the weaknesses of these approaches in identifying EL and offering quantitative information on their number, size, and other properties, we developed a supervised DL approach that we named LysoQuant. We present here the analysis workflow, initially established on the same cells analyzed in Figure 1, while leaving the details of the DL architecture and performance to the next sections. The image shows eight cells, where the EL have been labeled with an antibody to endogenous LAMP1 (Figures 1A and 2Aa). Three cells express variable levels of ectopic SEC62, which is labeled with an antibody to the HA epitope at the SEC62 C-terminus. Cell 1 produces high levels of SEC62, and cells 2 and 3 low levels (Figure 2Aa, SEC62). After drawing multiple regions of interest (ROIs), each delimiting one of the three cells to be analyzed (Figure 2Ab), we select the channels corresponding to LAMP1 immunoreactivity (green) to visualize individual EL and to HA immunoreactivity to show ectopically expressed SEC62 (red). All signal outside the ROIs is cleared (Figure 2Ac). The image is then normalized with min and max in the range [0, 1] (Falk ; Figure 2Ad), linearly rescaled to a pixel size of 0.025 µm, and sent to the U-Net DL framework (either local or remote) for segmentation to generate a 16-bit image. Pixels corresponding to empty and loaded EL (defined as not containing or containing SEC62, respectively) take values 1 and 2, respectively (Figure 2A, e and f). In Figure 2Ae (and the inset), empty EL are colored in cyan and loaded EL in magenta. Given the optical sectioning properties of confocal microscopy, quantification relies on a good choice of the acquisition focal plane, where most of the EL are luminally cut. For this reason, we added an area threshold step corresponding to our minimum annotated EL size. 
To set this, we plotted a histogram of all annotated sizes from training images and set the filter threshold to the minimum value of this dataset (EL area larger than 0.13 µm², Figure 2Ag, corresponding to an approximate EL diameter of 0.4 µm). This threshold is configurable.
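A minimal sketch of the normalization and area-threshold steps, assuming the 0.025 µm working pixel size and the 0.13 µm² minimum EL area (illustrative Python/SciPy code, not the plugin's actual implementation):

```python
import numpy as np
from scipy import ndimage

PIXEL_SIZE_UM = 0.025      # pixel size after linear rescaling
MIN_EL_AREA_UM2 = 0.13     # smallest annotated EL area (configurable threshold)

def normalize(img):
    """Min-max normalization of the input image to the range [0, 1]."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def filter_small_objects(seg):
    """Remove segmented objects below the minimum EL area.
    0.13 um^2 / (0.025 um)^2 = 208 pixels at the working pixel size."""
    min_pixels = MIN_EL_AREA_UM2 / PIXEL_SIZE_UM ** 2
    out = seg.copy()
    labeled, n = ndimage.label(seg > 0)
    for i in range(1, n + 1):
        region = labeled == i
        if region.sum() < min_pixels:
            out[region] = 0   # object too small to be a luminally cut EL
    return out
```

The 208-pixel cut-off corresponds to an EL of approximately 0.4 µm diameter, matching the configurable threshold described above.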
FIGURE 2:

Analysis workflow and deep learning architecture. (A) Analysis workflow. (a) Multichannel CLSM image. LAMP1 decorates the limiting membrane of EL. SEC62 stains the cargo. (b) Cells to be analyzed are identified as regions of interest (ROIs). (c) Signal outside ROIs is cleared and image is converted to RGB color image. (d) This RGB is then normalized in the range [0, 1] and rescaled to a pixel size of 0.025 µm. (e, f) Image is segmented into two classes: empty (cyan) and loaded EL (magenta). Classes are filtered for a configurable minimum size, which in our case was equal to the minimum of all annotated EL (dotted line, n = 1573). Diameter scale was also added as a reference. (g–i) Total number of EL for each class and each ROI is listed with a configurable number of individual EL parameters (e.g., average size, fluorescence intensity, circularity). (B) Deep learning architecture is a seven–resolution level 2D U-net fully convolutional network with 16 base feature channels that takes RGB images as input. Green channel shows the EL structure, red channel the protein or the ER subdomain delivered within EL.

For each of the previously defined ROIs, the software identifies the segmented objects and quantifies total numbers for each class (Figure 2Ah). This workflow has been implemented in an ImageJ plugin that performs all the steps described above. Segmentation is instrumental for the assessment of morphological measurements for each EL, which include average size, shape descriptors such as circularity, position, and fluorescence intensity corresponding to cargo load within EL (Figure 2Ai). Furthermore, and instrumental in analyzing more complex phenotypes, sequential quantification of multiple protein markers on the same cell and simple Boolean algebra between segmentation masks can be used to identify multiple positivity combinations.
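The per-ROI class counting and the Boolean algebra between masks can be sketched as follows (illustrative Python/SciPy code with hypothetical function names; the actual implementation is an ImageJ plugin):

```python
import numpy as np
from scipy import ndimage

def count_el_per_class(seg, roi):
    """Count segmented objects per class inside an ROI mask
    (1 = empty EL, 2 = cargo-loaded EL, following the workflow's convention)."""
    counts = {}
    for cls in (1, 2):
        _, n = ndimage.label((seg == cls) & roi)
        counts[cls] = n
    return counts

def double_positive(mask_a, mask_b):
    """Boolean algebra between marker segmentation masks:
    pixels positive for both markers."""
    return np.logical_and(mask_a, mask_b)

# Toy 6x6 segmentation: two empty EL and one loaded EL inside a full-frame ROI
seg = np.zeros((6, 6), dtype=int)
seg[0, 0] = 1
seg[0, 3:5] = 1
seg[4:6, 4:6] = 2
roi = np.ones((6, 6), dtype=bool)
counts = count_el_per_class(seg, roi)
```

Combining masks from sequential stainings with `double_positive` (or other Boolean operations) identifies the multiple positivity combinations mentioned above.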

Insufficient performance of a five-level neural network

Deep learning segmentation with U-Net architecture was first implemented with a 2D five-level and 64-feature-channel fully convolutional neural network (Falk ). We trained this network with confocal images of 10 single cells, in which two operators fully annotated, at pixel level, 1573 individual EL and defined whether they were empty or loaded with select cargo (organelle portions or misfolded proteins). Training data were augmented with random rotations, elastic deformations, and smooth intensity curve transformation (Falk ). To prevent model overfitting due to data leakage (Riley, 2019), we acquired and manually annotated two separate image sets for training and test, so that validation always occurred on unseen data (Jones, 2019). Low signal-to-noise ratio, size, and position with respect to the acquisition plane may hamper recognition of individual EL. To minimize this, two operators verified the annotated images before the learning process and compared them with the output segmentations after a learning test in order to spot and correct inaccurate or missing regions. The pixel size can also determine the accuracy of manual annotation. To provide consistency in this phase, we linearly scaled all training images to a pixel size of 0.025 µm. Segmented images were then linearly downscaled to the original pixel size for comparison with the original images. These steps greatly improved the segmentation ability of the network and did not introduce any noticeable artifact in the final segmented image (Figure 3).
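The augmentation step can be sketched as follows. This is an illustrative Python/SciPy approximation: elastic deformations are omitted, and a gamma-like transform stands in for the smooth intensity curve transformation; the parameter ranges are our own assumptions.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def augment(image, mask):
    """One augmentation draw: a random rotation plus a smooth, gamma-like
    intensity transform. Elastic deformations, also used for training,
    are omitted from this sketch."""
    angle = rng.uniform(0, 360)
    # order=0 for the mask so that class labels are never interpolated
    img_aug = ndimage.rotate(image, angle, reshape=False, order=1, mode="reflect")
    mask_aug = ndimage.rotate(mask, angle, reshape=False, order=0, mode="reflect")
    gamma = rng.uniform(0.7, 1.4)
    img_aug = np.clip(img_aug, 0.0, 1.0) ** gamma  # assumes image already in [0, 1]
    return img_aug, mask_aug

image = np.linspace(0.0, 1.0, 64).reshape(8, 8)
mask = (image > 0.5).astype(np.uint8)
img_aug, mask_aug = augment(image, mask)
```

Using nearest-neighbor interpolation (order=0) for the mask preserves the integer class labels (empty vs. loaded EL) under rotation.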
FIGURE 3:

Computational performance of the machine learning architecture. (A) Fully annotated and segmented image of wild-type mouse embryonic fibroblasts (MEF) transfected with ATZ-HA (red). LAMP1-positive EL are stained in green. The annotated RGB and its inset show empty EL (cyan ROIs) and ATZ-loaded EL (magenta ROIs). The segmented image and its inset show empty EL (cyan) and ATZ-loaded EL (magenta). False positives (not annotated but segmented) are in blue. False negatives (annotated but not segmented) are in yellow. (B) Single channels of input image, inset of LAMP1 channel, and its overlay with annotated ROIs. Scale bars 10 µm. Computational performance is evaluated with different metrics. After 3 × 10⁵ iterations, (C) IoU is 0.881 ± 0.012 and 0.877 ± 0.014 for empty and cargo-loaded EL classes, respectively (average ± SD of three validation images). (D) ROC curves for both classes show AUCs of 99.43% and 99.47% for empty and loaded EL, respectively. (E) F1 scores for segmentation task 0.752 ± 0.134 and 0.814 ± 0.100, respectively. (F) F1 scores for detection task 0.777 ± 0.136 and 0.790 ± 0.055, respectively.

To evaluate the network performance, we first assessed the IoU, as a measure of the area overlap, between the automatically segmented EL and human-annotated EL. Both upon transfer learning from previously published 2D U-Net network weights (Falk ) and if trained from scratch (Supplemental Figure 2), after 60,000 iterations, the network reached a plateau corresponding to IoUs of 0.722 ± 0.176 and 0.686 ± 0.093 for empty and cargo-loaded EL, respectively (Supplemental Figure 2B; metrics expressed as average ± SD of three test images). 
These values are in the range of those delivered by Squassh (Figure 1F) and the Colocalisation Pipeline (Figure 1J), the best performers among the algorithms tested in Figure 1, which were not suited for our quantitative analyses because they failed to properly distinguish individual EL (both comparisons, one-sample Student’s t test, p > 0.05), and were much worse than the IoU of 0.850 ± 0.040 for manual segmentation (Student’s t test, p < 0.05 and p < 0.001 for empty and loaded EL classes, respectively). We assessed the network’s ability to avoid false classifications of empty vs. loaded EL by drawing the receiver operating characteristic (ROC) curve and calculating its area under the curve (AUC; Sokolova and Lapalme, 2009), which was 98.40% and 99.83% for empty and cargo-loaded EL, respectively (Supplemental Figure 2C). However, assessment of the F1 score (a value in the range [0, 1] expressing the harmonic mean between precision, the algorithm’s ability to avoid false positives, and recall, its ability to avoid false negatives) highlighted insufficient performance of the neural network (F1 scores for segmentation 0.361 ± 0.211 and 0.404 ± 0.215 for empty and cargo-loaded EL, respectively; F1 scores for detection 0.323 ± 0.104 and 0.370 ± 0.132, respectively, Supplemental Figure 2, D and E). In this case, the performance of the neural network was worse than those of Squassh and the Colocalisation Pipeline (one-sample Student’s t test, p < 0.05). The low IoU values with respect to manual segmentation, together with the low F1 values, reveal that the performance of the 2D five-level, 64-feature-channel network is below human accuracy. We reasoned that the limiting factor was the network architecture.
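The F1 score used throughout can be computed as follows (a minimal sketch; the example counts are hypothetical):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision (ability to avoid false positives) and
    recall (ability to avoid false negatives); equals 2*TP / (2*TP + FP + FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g., 8 correctly detected EL, 2 spurious detections, 4 missed EL
score = f1_score(8, 2, 4)
```

For the detection task, TP/FP/FN are counted per EL object; for the segmentation task, the same formula is applied at the pixel level.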

Setting-up LysoQuant, a seven-level fully convolutional neural network

To improve segmentation accuracy, we increased the neural network depth to seven levels with 16 base features (Figure 2B). As for the five-level neural network, the training data were augmented with random rotations, elastic deformations, and smooth intensity curve transformation (Falk ). To accelerate the learning process, we initially ran the iterations with a learning rate of 1 × 10⁻⁴ and then switched in the last 50,000 steps to 5 × 10⁻⁵ for refinement (Figure 3, A and B, shows an output example with inset and comparison with manual annotation; Figure 3, C–F, the performance evolution over training iterations). After a total of 3 × 10⁵ iterations, the resulting trained network provided IoUs of 0.881 ± 0.012 and 0.877 ± 0.014 for empty and cargo-loaded EL classes, respectively (Figure 3C, average ± SD of three validation images), which fall within the manual annotation accuracy (IoU of 0.850 ± 0.040 as reported above, Student’s t test, p > 0.05), thus validating the performance of LysoQuant and largely surpassing the performance of Squassh and the Colocalisation Pipeline (one-sample Student’s t test, p < 0.01). F1 scores for the segmentation task (0.752 ± 0.134 and 0.814 ± 0.100, respectively) and for the detection task (0.777 ± 0.136 and 0.790 ± 0.055, respectively; Figure 3, E and F) were in line with those of Squassh and the Colocalisation Pipeline. LysoQuant achieved AUC values of 99.43% and 99.47% for empty and cargo-loaded EL, respectively (Figure 3D). All in all, the seven-level architecture globally improved the performance of the network, as confirmed by the close correspondence of segmented images with manual annotation, and performed significantly better than the other tested approaches. Thus, LysoQuant provides accurate, efficient, and scalable classification and segmentation of lysosome-driven pathways in living cells.
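The comparison against the manual benchmark can be sketched as a one-sample Student's t test (illustrative Python/SciPy code; the per-image IoU values are made up for the example, not the study's raw data):

```python
from scipy import stats

# Illustrative per-image IoU values (NOT the paper's raw data): three
# validation images scored against the manual-annotation benchmark of 0.850
lysoquant_ious = [0.86, 0.88, 0.90]
manual_benchmark_iou = 0.850

t_stat, p_value = stats.ttest_1samp(lysoquant_ious, manual_benchmark_iou)
# p > 0.05 here: on these values, the network's IoU is statistically
# indistinguishable from the manual annotation accuracy
```

The same test, run against the IoUs of the classic algorithms, yields the significance levels reported above.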

Testing LysoQuant on biological, lysosomal-driven pathways

recov-ER-phagy.

To challenge the consistency of our automated analysis, we applied LysoQuant to quantitatively assess recov-ER-phagy, an ER turnover pathway that cells activate to recover from acute ER stress (Fregno and Molinari, 2018). Recov-ER-phagy relies on ER fragmentation and formation of ER-derived vesicles displaying the ER-resident LC3-binding protein SEC62 at the limiting membrane. These are eventually engulfed by RAB7/LAMP1-positive EL in microER-phagy pathways relying on intervention of the ESCRT-III machinery (Fumagalli, Noack, Bergmann, Presmanes, ; Loi ). Recov-ER-phagy is faithfully recapitulated in MEF ectopically expressing SEC62 (Fumagalli, Noack, Bergmann, Presmanes, ; Loi ). On inhibition of lysosomal activity with BafA1, ER portions decorated with SEC62 (Figure 4A, red signal) accumulate within LAMP1-positive EL (Figure 4A, green circles). As a negative control, cells were transfected with a plasmid for expression of SEC62LIR, where the LC3-interacting motif in the cytosolic domain of SEC62 (-FEMI-) has been replaced by an -AAAA- tetrapeptide (Fumagalli, Noack, Bergmann, Presmanes, ; Loi ). This abolishes the association of SEC62 with LC3 and substantially reduces ER delivery within EL (Fumagalli ; Loi ; Figure 4B, inset, where the green EL do not contain the red ER).
FIGURE 4:

Mimicking ER delivery within EL during recov-ER-phagy. (A) CLSM shows MEF transfected transiently with SEC62-HA and (B) with SEC62LIR-HA, subsequently segmented and quantified with LysoQuant. (C) Quantification of the same set of images by three different operators (with the Lab mean) and by LysoQuant to establish the fraction of EL containing SEC62-labeled ER in both SEC62- and SEC62LIR-expressing MEF cells. (D) Same as C to compare the time required for manual and LysoQuant-operated detection and segmentation tasks.

We analyzed 15 cells expressing ectopic SEC62 and 18 cells expressing ectopic SEC62LIR. The images were segmented into SEC62-positive and SEC62-negative EL (magenta and cyan, respectively, in Figure 4, A and B, Segmentation). Three different operators and LysoQuant were then asked to establish the occupancy of EL with ER portions in cells expressing active (SEC62) or inactive (SEC62LIR) forms of the ER-phagy receptor. In cells expressing ectopic SEC62, the three operators reported an EL occupancy of 85 ± 9% (Figure 4C, blue column, Lab mean), which substantially decreases to 28 ± 8% in cells expressing ectopic SEC62LIR (Figure 4C, red column, Lab mean). LysoQuant revealed an EL occupancy of 68 ± 9% (Figure 4C, blue column, LysoQuant), which substantially decreases to 15 ± 7% in cells expressing ectopic SEC62LIR (Figure 4C, red column, LysoQuant). Compared with the operators, LysoQuant has two major advantages: 1) the variety of information that it can offer in addition to EL occupancy, which includes, for example, the size, shape, and intracellular distribution of the EL (Figure 2Ai); 2) the much reduced time to complete the analyses. In our setup, where an analysis computer is remotely connected to a Caffe U-Net framework, LysoQuant completes analysis of each image in 0.53 ± 0.04 min (Figure 4D) while informing on size, shape, occupancy, intensity of the surface and of the luminal signal, and intracellular distribution of the EL. 
The average time for operators, who can only inform on number and occupancy of individual EL, is highly variable and about 30-fold higher (14.10 ± 3.42 min, Lab mean, Figure 4D). The gain in time and in accuracy offered by LysoQuant is expected to increase substantially when large numbers of cells are counted, where operators’ performance inevitably decreases due to fatigue, and in high-throughput screenings. In fact, the time required by LysoQuant corresponds to 0.008 CPU hours/image, which is consistent with the rate required for high-throughput studies (Carpenter ). To test whether the segmentation output was dependent on the confocal acquisition settings, we acquired the same set of cells with different pixel sizes and compared the quantification output. Variations up to twofold in pixel size had no significant effect on the measured occupancy and total EL number (one-way ANOVA, N = 5 cells; Supplemental Figure 3, A and B), while the analysis speed in the same pixel size range increased from 0.58 ± 0.05 min to 1.45 ± 0.04 min (Supplemental Figure 3C). Interestingly, this makes it possible to optimize the acquisition settings by taking larger fields of view with a relatively small increase in analysis time, helping to reduce both acquisition times and potential cell selection bias. Thus, LysoQuant performs fast and accurate classification of EL, enabling clear, unbiased discrimination of two different phenotypes associated with functional and impaired recov-ER-phagy, respectively. LysoQuant achieves segmentation of EL with operator-like accuracy within a fraction of the time required for manual annotation. It offers additional information concerning size, geometrical parameters, and distribution and intensities of single EL to further dissect unclear or particular phenotypes and to perform high-throughput microscopy studies.
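The occupancy readout quantified above can be sketched as follows (illustrative Python/SciPy code on a toy label image; class values follow the workflow's 1 = empty, 2 = loaded convention):

```python
import numpy as np
from scipy import ndimage

def el_occupancy(seg):
    """EL occupancy: fraction of segmented EL that contain cargo,
    counting connected objects (1 = empty EL, 2 = loaded EL)."""
    _, n_empty = ndimage.label(seg == 1)
    _, n_loaded = ndimage.label(seg == 2)
    total = n_empty + n_loaded
    return n_loaded / total if total else 0.0

# Toy segmentation: one empty EL and three loaded EL
seg = np.zeros((8, 8), dtype=int)
seg[0, 0] = 1
seg[2, 0] = seg[2, 2] = seg[2, 4] = 2
occupancy = el_occupancy(seg)   # 3 of 4 EL are loaded
```

Averaging this per-cell fraction over all analyzed cells yields the occupancy percentages reported in Figure 4C.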

ERLAD.

We next challenged LysoQuant to acquire quantitative information on another biological pathway, ERLAD, which mammalian cells use to remove misfolded proteins that cannot be degraded by cytosolic proteasomes. These proteins are segregated in ER subdomains that are eventually shed from the bulk ER and delivered to EL for clearance (Fregno and Molinari, 2019). The polymerogenic Z variant of α1-antitrypsin (ATZ) is a classical ERLAD substrate (Fregno, Fasana, ), and defective degradation of ATZ polymers results in clinically significant hepatotoxicity, the leading inherited cause of pediatric liver disease and liver transplantation (Sharp ; Eriksson ; Wu ; Hidvegi ; Perlmutter, 2011; Roussel ; Marciniak ). Lysosomal delivery of ATZ polymers relies on the ER-phagy receptor FAM134B, on the LC3 lipidation machinery, and on fusion of ER-derived vesicles containing ATZ with LAMP1-positive EL, which depends on the SNARE complex STX17/VAMP8 (Fregno, Fasana, ; Fregno and Molinari, 2019). To evaluate the performance of LysoQuant and compare it with manual operation, we monitored ATZ delivery to EL in 21 wild-type MEF (Figure 5A, WT) and in 15 MEF that we generated by CRISPR/Cas9 gene editing to delete STX17 (Figure 5B, STX17KO). This case is particularly challenging for automated quantification because in WT cells exposed to BafA1 to inactivate hydrolytic enzymes, the cargo under investigation (ATZ) accumulates within EL (LysoQuant must identify these as "loaded" EL). In cells lacking STX17, ATZ remains in vesicles that dock at the cytosolic face of the EL membrane (Fregno, Fasana, ), which requires accurate quantification of the EL membrane for a correct result. As reported above for recov-ER-phagy, LysoQuant correctly reported the drastic reduction of ATZ delivery within EL upon deletion of STX17 (Figure 5C).
In particular, it correctly classified LAMP1-positive EL displaying ATZ docked at their membrane (Figure 5B and Fregno, Fasana, ) as empty EL. It did so with high accuracy and about 20× faster than manual operators (Figure 5D, 0.81 ± 0.16 min/image vs. 13.18 ± 4.82 min/image). Note that this operation could not have been performed without accurate segmentation of individual EL.
FIGURE 5:

Quantification of misfolded protein delivery to lysosomes. (A) CLSM shows delivery of ATZ within LAMP1-positive EL in wild-type MEF. (B) Same as A in STX17KO MEF. (C) Quantification of ATZ delivery within EL in both wild-type and STX17KO MEF. (D) Time required for manual and LysoQuant-operated detection and segmentation tasks.

Mimicking starvation-induced ER-phagy in a different cellular model.

To further prove the versatility of LysoQuant, quantification of lysosomal activity was also performed in human embryonic kidney 293 (HEK293) cells, which are smaller than the MEF used so far and less adherent to surfaces. We labeled the EL with ectopically expressed GFP-RAB7 (another difference from the experiments described so far, where EL were identified with antibodies to endogenous LAMP1). ER delivery within EL was induced by ectopic expression of FAM134B to mimic starvation-induced ER-phagy (Khaminets, Heinrich, ; Liang, Lingeman, ). Here, we analyzed 16 HEK293 cells expressing ectopic FAM134B and 25 HEK293 cells expressing ectopic FAM134BLIR, which is inactive in driving ER fragments within EL (Khaminets, Heinrich, ; Liang, Lingeman, ). The confocal images were segmented to reveal FAM134B-positive and FAM134B-negative EL (magenta and cyan, respectively, in Figure 6, A and B, segmentation). In HEK293 cells expressing ectopic FAM134B, two operators independently established an EL occupancy ranging between 65% and 78% (average value 72 ± 9%, Figure 6C, blue column, lab mean). The occupancy dropped to 31 ± 12% in HEK293 cells expressing the inactive FAM134BLIR (Figure 6C, red column, lab mean). LysoQuant established values of 69 ± 10% (Figure 6C, blue column, LysoQuant) in HEK293 cells expressing ectopic FAM134B and of 30 ± 14% in HEK293 cells expressing the inactive FAM134BLIR (Figure 6C, red column, LysoQuant). In this case as well, LysoQuant, at 0.32 ± 0.04 min per image, was much faster than the operators (4.18 ± 3.08 min per cell, Figure 6D). Thus, LysoQuant maintains high performance even when quantifying lysosome-driven pathways in small and poorly adherent cells that are a suboptimal choice for imaging analyses.
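The EL occupancy readout used throughout these experiments reduces to a simple fraction: the percentage of segmented EL classified as cargo-positive. The snippet below is an illustrative sketch of that readout, not LysoQuant code; the class labels "loaded" and "empty" are invented names for the two output classes:

```python
# Illustrative sketch of the occupancy readout: percentage of segmented EL
# classified as cargo-positive. Class labels are invented for this example.
def el_occupancy(el_classes):
    loaded = sum(1 for c in el_classes if c == "loaded")
    return 100.0 * loaded / len(el_classes)

# e.g. 69 cargo-positive EL out of 100 segmented EL -> 69.0% occupancy
print(el_occupancy(["loaded"] * 69 + ["empty"] * 31))
```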
FIGURE 6:

Quantification of ER remodeling in HEK293 cells. (A) CLSM showing HEK293 cotransfected transiently with the late endosomal/lysosomal marker GFP-RAB7 and FAM134B-HA. (B) Same as A in cells transfected for expression of FAM134BLIR-HA. Nuclei are shown to identify transfected and nontransfected HEK293 cells. (C) Same as 4C to quantify ER delivery within GFP-RAB7-positive EL in cells expressing FAM134B and FAM134BLIR, respectively. (D) Same as 4D.

DISCUSSION

Quantitative analysis of biological samples is time-consuming when a statistically significant number of cells and conditions must be collected and when operator errors inherent to manual operations must be minimized (Grams, 1998). Here, we report on a deep learning-based approach, LysoQuant, that segments and quantifies CLSM images capturing lysosomal delivery of subcellular entities such as organelles and proteins. Direct comparison with classic machine learning approaches clearly shows the superiority of LysoQuant, which was trained to evaluate images rapidly with human-level performance (Figures 1–3). LysoQuant performance was validated on the quantification of catabolic pathways that maintain cellular homeostasis and proteostasis by ensuring lysosomal clearance of excess ER during recovery from ER stress (Figure 4), by removing ER portions containing aberrant, disease-causing gene products (Figure 5), and under conditions that mimic starvation-induced ER-phagy (Figure 6). LysoQuant, tested here in MEF and HEK293 cells, is of course applicable to all adherent cell lines. Here, we used endogenous and ectopically expressed protein markers to monitor and quantify lysosomal properties (size, number, distribution, occupancy, shape) and delivery of the ER or of misfolded, disease-causing polypeptides within EL for clearance. However, LysoQuant is also applicable to quantitative investigation of biological processes involving other organelles and subcellular structures, provided that high-quality confocal images and appropriate tools to fluorescently label actors in the pathway under investigation are available. Moreover, with appropriate training, this approach is applicable to images obtained with other techniques, such as superresolution or electron microscopy. Calls for standardization of deep learning methods in biology have recently emerged (Jones, 2019).
Indeed, quantitative image analysis is well suited to DL approaches because of the large number of variables (pixels) and the clear definition of the classified objects. Given the ability of U-Net to work with a limited set of images, we first addressed the question of training dataset size by adding training and validation images in sequential steps until maximal performance was reached. Imbalances in the number of data points (in this case, EL) between classes can lead to misleading accuracy values (Moen ). Although U-Net balances classes through a loss-weighting function, we chose training images so that the two classes globally contained a similar number of EL. Data leakage was avoided by training the network on a dataset different from those used for measuring the metrics and for phenotypical validation. Attention has also been drawn to so-called "adversarial images," i.e., slight alterations in images that may fool an otherwise correct recognition (Serre, 2019). Confocal images of cells with varying levels of intensity have inherently different signal-to-noise ratios, and even in cells with low expression levels and higher acquisition noise, we did not detect alterations in segmentation accuracy. Looking forward, LysoQuant's automatic segmentation, with human-level performance and much faster operation, will be instrumental in increasing the number of images analyzed and will thus enable the identification of subtler phenotypes. The reported analysis time per image makes LysoQuant applicable to high-throughput studies, rivaling other reported high-throughput analysis techniques in speed (Collinet ; Aguet ; Rizk ).
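The class-balancing idea mentioned above can be illustrated with inverse-frequency pixel weights for a weighted loss. This is an assumed, simplified scheme for illustration only; it is not the actual U-Net loss-weighting function used by LysoQuant:

```python
import numpy as np

# Sketch of inverse-frequency class weighting (an assumed scheme, not the
# exact U-Net weighting function): rarer classes receive larger weights,
# counteracting imbalance between e.g. "loaded" and "empty" EL pixels.
def class_weights(label_mask, n_classes):
    counts = np.bincount(label_mask.ravel(), minlength=n_classes)
    freq = counts / counts.sum()
    w = 1.0 / np.maximum(freq, 1e-8)  # inverse frequency
    return w / w.sum()                # normalize so weights sum to 1

labels = np.array([[0, 0, 0, 1], [0, 0, 1, 1], [0, 0, 0, 2]])  # toy mask
print(class_weights(labels, 3))
```

The rarest class (label 2, one pixel) receives the largest weight, the dominant background class the smallest.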

MATERIALS AND METHODS

Antibodies, expression plasmids, cell lines, and inhibitors

Commercial antibodies used in our studies were the following: anti-HA (Sigma) and anti-LAMP1 (DSHB). Alexa Fluor-conjugated secondary antibodies (Invitrogen, Jackson ImmunoResearch, Thermo Fisher) were used for immunofluorescence (IF) analyses. Plasmids encoding SEC62, ATZ, and FAM134B were subcloned into a pcDNA3.1 vector, and a C-terminal hemagglutinin (HA) tag was added. SEC62LIR was generated by site-directed mutagenesis of the LIR motif, replacing the -FEMI- residues with -AAAA-. These plasmids and GFP-RAB7 are described in Fumagalli, Noack, Bergmann, Presmanes, . FAM134B-HA and FAM134BLIR-HA (DDFELL to AAAAAA) expression plasmids were purchased from GenScript. STX17KO MEF were generated in our lab using CRISPR-Cas9 technology as described in Fregno, Fasana, . BafA1 was administered to MEF for 12 h at a final concentration of 50 nM and to HEK293 cells for 6 h at 100 nM.

Cell culture and transient transfection

MEF and HEK293 cells were grown at 37°C and 5% CO2 in DMEM supplemented with 10% FCS. Transient transfections were performed using JetPrime transfection reagent (PolyPlus) according to the manufacturer’s instructions. Twenty-four hours after transfection and after a 12-h treatment with BafA1, MEF cells were fixed to perform IF. HEK293 cells were treated for 6 h with 100 nM BafA1 24 h after transfection and then fixed for IF.

Confocal laser scanning microscopy

Either 1.5 × 10^5 MEF or 3 × 10^5 HEK293 cells were seeded on alcian blue-coated (MEF) or uncoated (HEK293) glass coverslips in a 12-well plate, transfected, and exposed to BafA1 as specified above. After two washes with phosphate-buffered saline (PBS) supplemented with calcium and magnesium (PBS++), cells were fixed with 3.7% paraformaldehyde at room temperature for 20 min, washed three times with PBS++, and incubated for 15 min at room temperature with a permeabilization solution (PS) composed of 0.05% saponin, 10% goat serum, 10 mM HEPES, and 15 mM glycine for intracellular staining. The cells were then incubated with the aforementioned antibodies diluted 1:100 (HA) and 1:50 (LAMP1) in PS for 90 min at room temperature. They were washed three times (5 min each) with PS and subsequently incubated with Alexa Fluor-conjugated secondary antibodies diluted 1:300 in PS for 30 min at room temperature. After three washes with PS and deionized water, coverslips were mounted using Vectashield (Vector Laboratories) supplemented with 4′,6-diamidino-2-phenylindole (DAPI). Confocal images were acquired using a Leica TCS SP5 microscope with a 63×/1.40 oil UV objective. Fiji/ImageJ was used for image analysis and processing. These protocols are explained in more detail in Fumagalli, Noack, Bergmann, Presmanes, and Fregno, Fasana, .

Manual annotation accuracy

Three operators manually annotated 10 cells from four different experiments, for a total of 1170 EL. These ROIs were used to create annotation masks in which annotated regions took the value 1 and background regions 0. Masks from the three operators were then summed to obtain images in the range [0, 3]. Lab consensus was defined as the regions with values greater than or equal to 2, that is, pixelwise regions drawn by at least two of the three operators. We then computed the intersection and union masks and calculated the intersection over union (IoU). For the relative standard error, we calculated the difference mask between the lab consensus mask and each operator's mask and measured the areas of the lab consensus and of the differences. From these values we calculated the variance, standard error, and relative error for each cell. All image processing was performed in Fiji/ImageJ.
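The consensus and IoU computation described above (performed in Fiji/ImageJ) can be sketched in a few lines. This is a minimal illustration with toy 4 × 4 masks, assuming three binary annotation masks of equal shape:

```python
import numpy as np

# Minimal sketch of the lab-consensus/IoU computation described above:
# sum three binary masks and keep pixels drawn by at least 2 of 3 operators.
def lab_consensus(masks):
    summed = np.sum(masks, axis=0)  # pixel values in [0, 3]
    return summed >= 2              # consensus: >= 2 operators agree

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# toy 4x4 masks from three operators (1 = annotated EL pixel)
m1 = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]], bool)
m2 = np.array([[1,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]], bool)
m3 = np.array([[1,1,1,0],[1,1,0,0],[0,0,0,0],[0,0,0,0]], bool)
consensus = lab_consensus([m1, m2, m3])
print(iou(m3, consensus))  # 0.8: 4 shared pixels, 5 in the union
```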

Classic and machine-learning segmentation

Automatic thresholding was performed by comparing all the thresholding algorithms available in ImageJ (Supplemental Figure 4). Results were then compared with manual segmentation to evaluate F1 scores and intersection over union for segmentation, choosing the algorithm closest to ground truth. Trainable Weka Segmentation was performed with Random Forest, a widely adopted algorithm for image and data classification, in the Fast Random Forest implementation provided with the plugin (Schroff ; Glory-Afshar ; Liang ). Training was performed with the LAMP1 channels of the same dataset used for deep learning training and with the same annotations. After balancing the classes, we selected all available features and performed the training. Random Forest segmentation was also performed with ilastik version 1.3.3rc2; training again used the LAMP1 channels of the same dataset and the same annotations. As the classification algorithm, we used the VIGRA Parallel Random Forest. Features were selected with an automatic feature selection method based on the default algorithm (Jakulin, 2005). Support vector machine segmentation was performed with the Weka LibSVM library (Fan ) and the Trainable Weka Segmentation ImageJ plugin; training used the LAMP1 channels of the same dataset and annotations used for deep learning, and, after balancing the classes, all available features were selected for training. The circular Hough transform was performed in MATLAB R2019a by recognizing bright regions on a dark background. We extrapolated the range of sizes from the sizes of the annotated data, as shown in the histogram in Figure 2Ag. Processing the images with background subtraction did not improve the results in this case, owing to the high signal-to-noise ratio of the original images. We performed the analysis with the Two Stage and Phase Code methods.
Two parameters can be controlled, the sensitivity and edge recognition factors, both in the range [0, 1]. We selected the parameter values giving the lowest error, evaluated as false negatives and false positives. To assess this, we scanned both parameters with a step size of 0.1 and evaluated the F1 score and intersection over union for each condition. We then chose the parameters corresponding to the best F1 score and IoU (Supplemental Table 1). Squassh segmentation was performed on the LAMP1 channel following the method previously described (Rizk ). After background subtraction with a rolling-ball radius of 10 pixels, segmentation was performed with subpixel accuracy and automatic local intensity estimation; the PSF model was derived from the acquisition settings. Regions below two pixels in size were excluded. Segmentation with the Icy spot detection plugin was performed on the LAMP1 channel with the UnDecimated Wavelet Transform detector, recognizing bright spots on a dark background with spot scales 2, 3, and 4. CmeAnalysis was run in MATLAB R2019a on the LAMP1 channel, with the wavelength and camera pixel size estimated from the acquisition settings. The StarDist convolutional neural network was trained on the same training dataset used for the deep learning techniques presented in this paper, with the StarDist Docker container based on tensorflow/tensorflow:1.13.2-gpu-py3-jupyter (Ubuntu 18.04 with CUDA 10.0 and Python 3.6.8; GitHub: mpicbg-csbd/stardist). Labels were imported from the annotation masks generated for the DL computations described in the Results. After normalization, eight images were randomly selected for training and two for model validation. The network was set with 32 rays, one channel, and a 2 × 2 grid. The model was trained for 400 epochs.
We then calculated the threshold optimization factors for this training (probability threshold: 0.4, NMS threshold: 0.5) and exported the TensorFlow model to the StarDist Fiji plugin, with which we performed the segmentation. Interaction analysis segmentation was performed with radii ranging from three to seven pixels and a cutoff of 0.001; percentile values were tested in the range 0.3 to 0.9. The Colocalisation Pipeline was run with a watershed tolerance of 5, a threshold of 1, and a local autothreshold method.
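The parameter sweep described above (scanning sensitivity and edge factors in [0, 1] with step 0.1 and keeping the pair with the best F1 score) can be sketched as follows. The `detect` callable is a hypothetical stand-in for the MATLAB circular Hough detection step; it is assumed to return true-positive, false-positive, and false-negative counts against the ground-truth annotations:

```python
import itertools

# F1 score from detection counts (precision/recall harmonic mean).
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Scan both parameters in [0, 1] with step 0.1; keep the best-F1 pair.
def best_parameters(detect):
    grid = [round(0.1 * i, 1) for i in range(11)]
    scores = {}
    for sens, edge in itertools.product(grid, grid):
        tp, fp, fn = detect(sens, edge)
        scores[(sens, edge)] = f1_score(tp, fp, fn)
    return max(scores, key=scores.get)

# toy stand-in detector whose performance peaks at (0.6, 0.3)
toy = lambda s, e: (int(100 - 80 * abs(s - 0.6) - 60 * abs(e - 0.3)), 10, 10)
print(best_parameters(toy))  # (0.6, 0.3)
```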

ImageJ plugin and deep learning computations

The ImageJ plugin takes as input a CLSM image (Figure 2Aa), the ROIs of the selected cells (Figure 2Ab), and the channels to analyze. It then performs the conversion to RGB, clears the signal outside the selected ROIs (Figure 2Ac), and calls the U-Net plugin, which normalizes the image (Figure 2Ad) and performs segmentation with the specified weight file. The segmented image (Figure 2Ae) is then read back by the plugin, and for each class (Figure 2Af) the objects above the minimum size (Figure 2Ag) are quantified (Figure 2A, h and i) using the Analyze Particles class. Deep learning computations were performed on a single graphics processing unit (GPU, NVIDIA GTX 1080 with 8 GB of VRAM). The Caffe framework was patched with U-Net version 99bd99_20190109 and compiled on a Linux CentOS remote server with CUDA 8.1 and cuDNN 7.1.

Statistical analysis

Statistical analysis was performed using GraphPad Prism 8 software. An unpaired two-tailed t test, ordinary one-way ANOVA with Tukey's multiple comparisons test, or two-way ANOVA with Sidak's multiple comparisons test was used to assess statistical significance. A p value of 0.05 or less was considered statistically significant. All values are expressed as average ± SD unless otherwise stated.
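For reference, the unpaired two-tailed t test used above reduces to the pooled-variance t statistic. The sketch below shows the computation; the occupancy values are invented for illustration only (the actual analyses used GraphPad Prism):

```python
import statistics as st

# Minimal sketch of an unpaired (pooled-variance) two-tailed t statistic.
def t_statistic(a, b):
    na, nb = len(a), len(b)
    va, vb = st.variance(a), st.variance(b)          # sample variances
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = (pooled * (1 / na + 1 / nb)) ** 0.5         # standard error
    return (st.mean(a) - st.mean(b)) / se

wt  = [85, 80, 92, 76, 88, 81, 79, 90]  # invented % EL occupancy, active
mut = [28, 35, 22, 30, 26, 33, 24, 31]  # invented, inactive receptor
t = t_statistic(wt, mut)
# ~2.14 is the two-tailed critical value for df = 14 at alpha = 0.05
print(round(t, 2), abs(t) > 2.14)
```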

Data availability

All data needed to evaluate the conclusions in the paper are present in the paper and/or in the Supplemental Information. The described plugin, the training network, and the model are available on the github platform (https://github.com/irb-imagingfacility/lysoquant) and through the ImageJ update site. Training datasets of the experiments will be available on request. Additional data related to this paper may be requested from the authors.
References (first 10 of 51)

1. Kan A (2017). Machine learning applications in cell image analysis. Immunol Cell Biol.
2. Fumagalli F, Noack J, Bergmann TJ, et al. (2016). Translocon component Sec62 acts in endoplasmic reticulum turnover during stress recovery. Nat Cell Biol.
3. Glory-Afshar E, Osuna-Highley E, Granger B, Murphy RF (2010). A graphical model to determine the subcellular protein location in artificial tissues. Proc IEEE Int Symp Biomed Imaging.
4. Marques ARA, Saftig P (2019). Lysosomal storage disorders: challenges, concepts and avenues for therapy: beyond rare diseases. J Cell Sci.
5. Fregno I, Fasana E, Bergmann TJ, et al. (2018). ER-to-lysosome-associated degradation of proteasome-resistant ATZ polymers occurs via receptor-mediated vesicular transport. EMBO J.
6. Helmuth JA, Paul G, Sbalzarini IF (2010). Beyond co-localization: inferring spatial interactions between sub-cellular structures from microscopy images. BMC Bioinformatics.
7. Fraldi A, Klein AD, Medina DL, Settembre C (2016). Brain disorders due to lysosomal dysfunction. Annu Rev Neurosci.
8. Ballabio A, Bonifacino JS (2019). Lysosomes as dynamic regulators of cell and organismal homeostasis. Nat Rev Mol Cell Biol.
9. Zhuang Z, Li N, Joseph Raj AN, Mahesh VGV, Qiu S (2019). An RDAU-NET model for lesion segmentation in breast ultrasound images. PLoS One.
10. Fregno I, Molinari M (2018). Endoplasmic reticulum turnover: ER-phagy and other flavors in selective and non-selective ER clearance. F1000Res.
Garcia-Macia; Diana García-Moreno; Carmen Garcia-Ruiz; Patricia García-Sanz; Abhishek D Garg; Ricardo Gargini; Tina Garofalo; Robert F Garry; Nils C Gassen; Damian Gatica; Liang Ge; Wanzhong Ge; Ruth Geiss-Friedlander; Cecilia Gelfi; Pascal Genschik; Ian E Gentle; Valeria Gerbino; Christoph Gerhardt; Kyla Germain; Marc Germain; David A Gewirtz; Elham Ghasemipour Afshar; Saeid Ghavami; Alessandra Ghigo; Manosij Ghosh; Georgios Giamas; Claudia Giampietri; Alexandra Giatromanolaki; Gary E Gibson; Spencer B Gibson; Vanessa Ginet; Edward Giniger; Carlotta Giorgi; Henrique Girao; Stephen E Girardin; Mridhula Giridharan; Sandy Giuliano; Cecilia Giulivi; Sylvie Giuriato; Julien Giustiniani; Alexander Gluschko; Veit Goder; Alexander Goginashvili; Jakub Golab; David C Goldstone; Anna Golebiewska; Luciana R Gomes; Rodrigo Gomez; Rubén Gómez-Sánchez; Maria Catalina Gomez-Puerto; Raquel Gomez-Sintes; Qingqiu Gong; Felix M Goni; Javier González-Gallego; Tomas Gonzalez-Hernandez; Rosa A Gonzalez-Polo; Jose A Gonzalez-Reyes; Patricia González-Rodríguez; Ing Swie Goping; Marina S Gorbatyuk; Nikolai V Gorbunov; Kıvanç Görgülü; Roxana M Gorojod; Sharon M Gorski; Sandro Goruppi; Cecilia Gotor; Roberta A Gottlieb; Illana Gozes; Devrim Gozuacik; Martin Graef; Markus H Gräler; Veronica Granatiero; Daniel Grasso; Joshua P Gray; Douglas R Green; Alexander Greenhough; Stephen L Gregory; Edward F Griffin; Mark W Grinstaff; Frederic Gros; Charles Grose; Angelina S Gross; Florian Gruber; Paolo Grumati; Tilman Grune; Xueyan Gu; Jun-Lin Guan; Carlos M Guardia; Kishore Guda; Flora Guerra; Consuelo Guerri; Prasun Guha; Carlos Guillén; Shashi Gujar; Anna Gukovskaya; Ilya Gukovsky; Jan Gunst; Andreas Günther; Anyonya R Guntur; Chuanyong Guo; Chun Guo; Hongqing Guo; Lian-Wang Guo; Ming Guo; Pawan Gupta; Shashi Kumar Gupta; Swapnil Gupta; Veer Bala Gupta; Vivek Gupta; Asa B Gustafsson; David D Gutterman; Ranjitha H B; Annakaisa Haapasalo; James E Haber; Aleksandra Hać; Shinji Hadano; Anders J Hafrén; 
Mansour Haidar; Belinda S Hall; Gunnel Halldén; Anne Hamacher-Brady; Andrea Hamann; Maho Hamasaki; Weidong Han; Malene Hansen; Phyllis I Hanson; Zijian Hao; Masaru Harada; Ljubica Harhaji-Trajkovic; Nirmala Hariharan; Nigil Haroon; James Harris; Takafumi Hasegawa; Noor Hasima Nagoor; Jeffrey A Haspel; Volker Haucke; Wayne D Hawkins; Bruce A Hay; Cole M Haynes; Soren B Hayrabedyan; Thomas S Hays; Congcong He; Qin He; Rong-Rong He; You-Wen He; Yu-Ying He; Yasser Heakal; Alexander M Heberle; J Fielding Hejtmancik; Gudmundur Vignir Helgason; Vanessa Henkel; Marc Herb; Alexander Hergovich; Anna Herman-Antosiewicz; Agustín Hernández; Carlos Hernandez; Sergio Hernandez-Diaz; Virginia Hernandez-Gea; Amaury Herpin; Judit Herreros; Javier H Hervás; Daniel Hesselson; Claudio Hetz; Volker T Heussler; Yujiro Higuchi; Sabine Hilfiker; Joseph A Hill; William S Hlavacek; Emmanuel A Ho; Idy H T Ho; Philip Wing-Lok Ho; Shu-Leong Ho; Wan Yun Ho; G Aaron Hobbs; Mark Hochstrasser; Peter H M Hoet; Daniel Hofius; Paul Hofman; Annika Höhn; Carina I Holmberg; Jose R Hombrebueno; Chang-Won Hong Yi-Ren Hong; Lora V Hooper; Thorsten Hoppe; Rastislav Horos; Yujin Hoshida; I-Lun Hsin; Hsin-Yun Hsu; Bing Hu; Dong Hu; Li-Fang Hu; Ming Chang Hu; Ronggui Hu; Wei Hu; Yu-Chen Hu; Zhuo-Wei Hu; Fang Hua; Jinlian Hua; Yingqi Hua; Chongmin Huan; Canhua Huang; Chuanshu Huang; Chuanxin Huang; Chunling Huang; Haishan Huang; Kun Huang; Michael L H Huang; Rui Huang; Shan Huang; Tianzhi Huang; Xing Huang; Yuxiang Jack Huang; Tobias B Huber; Virginie Hubert; Christian A Hubner; Stephanie M Hughes; William E Hughes; Magali Humbert; Gerhard Hummer; James H Hurley; Sabah Hussain; Salik Hussain; Patrick J Hussey; Martina Hutabarat; Hui-Yun Hwang; Seungmin Hwang; Antonio Ieni; Fumiyo Ikeda; Yusuke Imagawa; Yuzuru Imai; Carol Imbriano; Masaya Imoto; Denise M Inman; Ken Inoki; Juan Iovanna; Renato V Iozzo; Giuseppe Ippolito; Javier E Irazoqui; Pablo Iribarren; Mohd Ishaq; Makoto Ishikawa; Nestor Ishimwe; Ciro Isidoro; 
Nahed Ismail; Shohreh Issazadeh-Navikas; Eisuke Itakura; Daisuke Ito; Davor Ivankovic; Saška Ivanova; Anand Krishnan V Iyer; José M Izquierdo; Masanori Izumi; Marja Jäättelä; Majid Sakhi Jabir; William T Jackson; Nadia Jacobo-Herrera; Anne-Claire Jacomin; Elise Jacquin; Pooja Jadiya; Hartmut Jaeschke; Chinnaswamy Jagannath; Arjen J Jakobi; Johan Jakobsson; Bassam Janji; Pidder Jansen-Dürr; Patric J Jansson; Jonathan Jantsch; Sławomir Januszewski; Alagie Jassey; Steve Jean; Hélène Jeltsch-David; Pavla Jendelova; Andreas Jenny; Thomas E Jensen; Niels Jessen; Jenna L Jewell; Jing Ji; Lijun Jia; Rui Jia; Liwen Jiang; Qing Jiang; Richeng Jiang; Teng Jiang; Xuejun Jiang; Yu Jiang; Maria Jimenez-Sanchez; Eun-Jung Jin; Fengyan Jin; Hongchuan Jin; Li Jin; Luqi Jin; Meiyan Jin; Si Jin; Eun-Kyeong Jo; Carine Joffre; Terje Johansen; Gail V W Johnson; Simon A Johnston; Eija Jokitalo; Mohit Kumar Jolly; Leo A B Joosten; Joaquin Jordan; Bertrand Joseph; Dianwen Ju; Jeong-Sun Ju; Jingfang Ju; Esmeralda Juárez; Delphine Judith; Gábor Juhász; Youngsoo Jun; Chang Hwa Jung; Sung-Chul Jung; Yong Keun Jung; Heinz Jungbluth; Johannes Jungverdorben; Steffen Just; Kai Kaarniranta; Allen Kaasik; Tomohiro Kabuta; Daniel Kaganovich; Alon Kahana; Renate Kain; Shinjo Kajimura; Maria Kalamvoki; Manjula Kalia; Danuta S Kalinowski; Nina Kaludercic; Ioanna Kalvari; Joanna Kaminska; Vitaliy O Kaminskyy; Hiromitsu Kanamori; Keizo Kanasaki; Chanhee Kang; Rui Kang; Sang Sun Kang; Senthilvelrajan Kaniyappan; Tomotake Kanki; Thirumala-Devi Kanneganti; Anumantha G Kanthasamy; Arthi Kanthasamy; Marc Kantorow; Orsolya Kapuy; Michalis V Karamouzis; Md Razaul Karim; Parimal Karmakar; Rajesh G Katare; Masaru Kato; Stefan H E Kaufmann; Anu Kauppinen; Gur P Kaushal; Susmita Kaushik; Kiyoshi Kawasaki; Kemal Kazan; Po-Yuan Ke; Damien J Keating; Ursula Keber; John H Kehrl; Kate E Keller; Christian W Keller; Jongsook Kim Kemper; Candia M Kenific; Oliver Kepp; Stephanie Kermorgant; Andreas Kern; Robin Ketteler; Tom G 
Keulers; Boris Khalfin; Hany Khalil; Bilon Khambu; Shahid Y Khan; Vinoth Kumar Megraj Khandelwal; Rekha Khandia; Widuri Kho; Noopur V Khobrekar; Sataree Khuansuwan; Mukhran Khundadze; Samuel A Killackey; Dasol Kim; Deok Ryong Kim; Do-Hyung Kim; Dong-Eun Kim; Eun Young Kim; Eun-Kyoung Kim; Hak-Rim Kim; Hee-Sik Kim; Jeong Hun Kim; Jin Kyung Kim; Jin-Hoi Kim; Joungmok Kim; Ju Hwan Kim; Keun Il Kim; Peter K Kim; Seong-Jun Kim; Scot R Kimball; Adi Kimchi; Alec C Kimmelman; Tomonori Kimura; Matthew A King; Kerri J Kinghorn; Conan G Kinsey; Vladimir Kirkin; Lorrie A Kirshenbaum; Sergey L Kiselev; Shuji Kishi; Katsuhiko Kitamoto; Yasushi Kitaoka; Kaio Kitazato; Richard N Kitsis; Josef T Kittler; Ole Kjaerulff; Peter S Klein; Thomas Klopstock; Jochen Klucken; Helene Knævelsrud; Roland L Knorr; Ben C B Ko; Fred Ko; Jiunn-Liang Ko; Hotaka Kobayashi; Satoru Kobayashi; Ina Koch; Jan C Koch; Ulrich Koenig; Donat Kögel; Young Ho Koh; Masato Koike; Sepp D Kohlwein; Nur M Kocaturk; Masaaki Komatsu; Jeannette König; Toru Kono; Benjamin T Kopp; Tamas Korcsmaros; Gözde Korkmaz; Viktor I Korolchuk; Mónica Suárez Korsnes; Ali Koskela; Janaiah Kota; Yaichiro Kotake; Monica L Kotler; Yanjun Kou; Michael I Koukourakis; Evangelos Koustas; Attila L Kovacs; Tibor Kovács; Daisuke Koya; Tomohiro Kozako; Claudine Kraft; Dimitri Krainc; Helmut Krämer; Anna D Krasnodembskaya; Carole Kretz-Remy; Guido Kroemer; Nicholas T Ktistakis; Kazuyuki Kuchitsu; Sabine Kuenen; Lars Kuerschner; Thomas Kukar; Ajay Kumar; Ashok Kumar; Deepak Kumar; Dhiraj Kumar; Sharad Kumar; Shinji Kume; Caroline Kumsta; Chanakya N Kundu; Mondira Kundu; Ajaikumar B Kunnumakkara; Lukasz Kurgan; Tatiana G Kutateladze; Ozlem Kutlu; SeongAe Kwak; Ho Jeong Kwon; Taeg Kyu Kwon; Yong Tae Kwon; Irene Kyrmizi; Albert La Spada; Patrick Labonté; Sylvain Ladoire; Ilaria Laface; Frank Lafont; Diane C Lagace; Vikramjit Lahiri; Zhibing Lai; Angela S Laird; Aparna Lakkaraju; Trond Lamark; Sheng-Hui Lan; Ane Landajuela; Darius J R Lane; Jon D 
Lane; Charles H Lang; Carsten Lange; Ülo Langel; Rupert Langer; Pierre Lapaquette; Jocelyn Laporte; Nicholas F LaRusso; Isabel Lastres-Becker; Wilson Chun Yu Lau; Gordon W Laurie; Sergio Lavandero; Betty Yuen Kwan Law; Helen Ka-Wai Law; Rob Layfield; Weidong Le; Herve Le Stunff; Alexandre Y Leary; Jean-Jacques Lebrun; Lionel Y W Leck; Jean-Philippe Leduc-Gaudet; Changwook Lee; Chung-Pei Lee; Da-Hye Lee; Edward B Lee; Erinna F Lee; Gyun Min Lee; He-Jin Lee; Heung Kyu Lee; Jae Man Lee; Jason S Lee; Jin-A Lee; Joo-Yong Lee; Jun Hee Lee; Michael Lee; Min Goo Lee; Min Jae Lee; Myung-Shik Lee; Sang Yoon Lee; Seung-Jae Lee; Stella Y Lee; Sung Bae Lee; Won Hee Lee; Ying-Ray Lee; Yong-Ho Lee; Youngil Lee; Christophe Lefebvre; Renaud Legouis; Yu L Lei; Yuchen Lei; Sergey Leikin; Gerd Leitinger; Leticia Lemus; Shuilong Leng; Olivia Lenoir; Guido Lenz; Heinz Josef Lenz; Paola Lenzi; Yolanda León; Andréia M Leopoldino; Christoph Leschczyk; Stina Leskelä; Elisabeth Letellier; Chi-Ting Leung; Po Sing Leung; Jeremy S Leventhal; Beth Levine; Patrick A Lewis; Klaus Ley; Bin Li; Da-Qiang Li; Jianming Li; Jing Li; Jiong Li; Ke Li; Liwu Li; Mei Li; Min Li; Min Li; Ming Li; Mingchuan Li; Pin-Lan Li; Ming-Qing Li; Qing Li; Sheng Li; Tiangang Li; Wei Li; Wenming Li; Xue Li; Yi-Ping Li; Yuan Li; Zhiqiang Li; Zhiyong Li; Zhiyuan Li; Jiqin Lian; Chengyu Liang; Qiangrong Liang; Weicheng Liang; Yongheng Liang; YongTian Liang; Guanghong Liao; Lujian Liao; Mingzhi Liao; Yung-Feng Liao; Mariangela Librizzi; Pearl P Y Lie; Mary A Lilly; Hyunjung J Lim; Thania R R Lima; Federica Limana; Chao Lin; Chih-Wen Lin; Dar-Shong Lin; Fu-Cheng Lin; Jiandie D Lin; Kurt M Lin; Kwang-Huei Lin; Liang-Tzung Lin; Pei-Hui Lin; Qiong Lin; Shaofeng Lin; Su-Ju Lin; Wenyu Lin; Xueying Lin; Yao-Xin Lin; Yee-Shin Lin; Rafael Linden; Paula Lindner; Shuo-Chien Ling; Paul Lingor; Amelia K Linnemann; Yih-Cherng Liou; Marta M Lipinski; Saška Lipovšek; Vitor A Lira; Natalia Lisiak; Paloma B Liton; Chao Liu; Ching-Hsuan Liu; 
Chun-Feng Liu; Cui Hua Liu; Fang Liu; Hao Liu; Hsiao-Sheng Liu; Hua-Feng Liu; Huifang Liu; Jia Liu; Jing Liu; Julia Liu; Leyuan Liu; Longhua Liu; Meilian Liu; Qin Liu; Wei Liu; Wende Liu; Xiao-Hong Liu; Xiaodong Liu; Xingguo Liu; Xu Liu; Xuedong Liu; Yanfen Liu; Yang Liu; Yang Liu; Yueyang Liu; Yule Liu; J Andrew Livingston; Gerard Lizard; Jose M Lizcano; Senka Ljubojevic-Holzer; Matilde E LLeonart; David Llobet-Navàs; Alicia Llorente; Chih Hung Lo; Damián Lobato-Márquez; Qi Long; Yun Chau Long; Ben Loos; Julia A Loos; Manuela G López; Guillermo López-Doménech; José Antonio López-Guerrero; Ana T López-Jiménez; Óscar López-Pérez; Israel López-Valero; Magdalena J Lorenowicz; Mar Lorente; Peter Lorincz; Laura Lossi; Sophie Lotersztajn; Penny E Lovat; Jonathan F Lovell; Alenka Lovy; Péter Lőw; Guang Lu; Haocheng Lu; Jia-Hong Lu; Jin-Jian Lu; Mengji Lu; Shuyan Lu; Alessandro Luciani; John M Lucocq; Paula Ludovico; Micah A Luftig; Morten Luhr; Diego Luis-Ravelo; Julian J Lum; Liany Luna-Dulcey; Anders H Lund; Viktor K Lund; Jan D Lünemann; Patrick Lüningschrör; Honglin Luo; Rongcan Luo; Shouqing Luo; Zhi Luo; Claudio Luparello; Bernhard Lüscher; Luan Luu; Alex Lyakhovich; Konstantin G Lyamzaev; Alf Håkon Lystad; Lyubomyr Lytvynchuk; Alvin C Ma; Changle Ma; Mengxiao Ma; Ning-Fang Ma; Quan-Hong Ma; Xinliang Ma; Yueyun Ma; Zhenyi Ma; Ormond A MacDougald; Fernando Macian; Gustavo C MacIntosh; Jeffrey P MacKeigan; Kay F Macleod; Sandra Maday; Frank Madeo; Muniswamy Madesh; Tobias Madl; Julio Madrigal-Matute; Akiko Maeda; Yasuhiro Maejima; Marta Magarinos; Poornima Mahavadi; Emiliano Maiani; Kenneth Maiese; Panchanan Maiti; Maria Chiara Maiuri; Barbara Majello; Michael B Major; Elena Makareeva; Fayaz Malik; Karthik Mallilankaraman; Walter Malorni; Alina Maloyan; Najiba Mammadova; Gene Chi Wai Man; Federico Manai; Joseph D Mancias; Eva-Maria Mandelkow; Michael A Mandell; Angelo A Manfredi; Masoud H Manjili; Ravi Manjithaya; Patricio Manque; Bella B Manshian; Raquel Manzano; 
Claudia Manzoni; Kai Mao; Cinzia Marchese; Sandrine Marchetti; Anna Maria Marconi; Fabrizio Marcucci; Stefania Mardente; Olga A Mareninova; Marta Margeta; Muriel Mari; Sara Marinelli; Oliviero Marinelli; Guillermo Mariño; Sofia Mariotto; Richard S Marshall; Mark R Marten; Sascha Martens; Alexandre P J Martin; Katie R Martin; Sara Martin; Shaun Martin; Adrián Martín-Segura; Miguel A Martín-Acebes; Inmaculada Martin-Burriel; Marcos Martin-Rincon; Paloma Martin-Sanz; José A Martina; Wim Martinet; Aitor Martinez; Ana Martinez; Jennifer Martinez; Moises Martinez Velazquez; Nuria Martinez-Lopez; Marta Martinez-Vicente; Daniel O Martins; Joilson O Martins; Waleska K Martins; Tania Martins-Marques; Emanuele Marzetti; Shashank Masaldan; Celine Masclaux-Daubresse; Douglas G Mashek; Valentina Massa; Lourdes Massieu; Glenn R Masson; Laura Masuelli; Anatoliy I Masyuk; Tetyana V Masyuk; Paola Matarrese; Ander Matheu; Satoaki Matoba; Sachiko Matsuzaki; Pamela Mattar; Alessandro Matte; Domenico Mattoscio; José L Mauriz; Mario Mauthe; Caroline Mauvezin; Emanual Maverakis; Paola Maycotte; Johanna Mayer; Gianluigi Mazzoccoli; Cristina Mazzoni; Joseph R Mazzulli; Nami McCarty; Christine McDonald; Mitchell R McGill; Sharon L McKenna; BethAnn McLaughlin; Fionn McLoughlin; Mark A McNiven; Thomas G McWilliams; Fatima Mechta-Grigoriou; Tania Catarina Medeiros; Diego L Medina; Lynn A Megeney; Klara Megyeri; Maryam Mehrpour; Jawahar L Mehta; Alfred J Meijer; Annemarie H Meijer; Jakob Mejlvang; Alicia Meléndez; Annette Melk; Gonen Memisoglu; Alexandrina F Mendes; Delong Meng; Fei Meng; Tian Meng; Rubem Menna-Barreto; Manoj B Menon; Carol Mercer; Anne E Mercier; Jean-Louis Mergny; Adalberto Merighi; Seth D Merkley; Giuseppe Merla; Volker Meske; Ana Cecilia Mestre; Shree Padma Metur; Christian Meyer; Hemmo Meyer; Wenyi Mi; Jeanne Mialet-Perez; Junying Miao; Lucia Micale; Yasuo Miki; Enrico Milan; Małgorzata Milczarek; Dana L Miller; Samuel I Miller; Silke Miller; Steven W Millward; Ira 
Milosevic; Elena A Minina; Hamed Mirzaei; Hamid Reza Mirzaei; Mehdi Mirzaei; Amit Mishra; Nandita Mishra; Paras Kumar Mishra; Maja Misirkic Marjanovic; Roberta Misasi; Amit Misra; Gabriella Misso; Claire Mitchell; Geraldine Mitou; Tetsuji Miura; Shigeki Miyamoto; Makoto Miyazaki; Mitsunori Miyazaki; Taiga Miyazaki; Keisuke Miyazawa; Noboru Mizushima; Trine H Mogensen; Baharia Mograbi; Reza Mohammadinejad; Yasir Mohamud; Abhishek Mohanty; Sipra Mohapatra; Torsten Möhlmann; Asif Mohmmed; Anna Moles; Kelle H Moley; Maurizio Molinari; Vincenzo Mollace; Andreas Buch Møller; Bertrand Mollereau; Faustino Mollinedo; Costanza Montagna; Mervyn J Monteiro; Andrea Montella; L Ruth Montes; Barbara Montico; Vinod K Mony; Giacomo Monzio Compagnoni; Michael N Moore; Mohammad A Moosavi; Ana L Mora; Marina Mora; David Morales-Alamo; Rosario Moratalla; Paula I Moreira; Elena Morelli; Sandra Moreno; Daniel Moreno-Blas; Viviana Moresi; Benjamin Morga; Alwena H Morgan; Fabrice Morin; Hideaki Morishita; Orson L Moritz; Mariko Moriyama; Yuji Moriyasu; Manuela Morleo; Eugenia Morselli; Jose F Moruno-Manchon; Jorge Moscat; Serge Mostowy; Elisa Motori; Andrea Felinto Moura; Naima Moustaid-Moussa; Maria Mrakovcic; Gabriel Muciño-Hernández; Anupam Mukherjee; Subhadip Mukhopadhyay; Jean M Mulcahy Levy; Victoriano Mulero; Sylviane Muller; Christian Münch; Ashok Munjal; Pura Munoz-Canoves; Teresa Muñoz-Galdeano; Christian Münz; Tomokazu Murakawa; Claudia Muratori; Brona M Murphy; J Patrick Murphy; Aditya Murthy; Timo T Myöhänen; Indira U Mysorekar; Jennifer Mytych; Seyed Mohammad Nabavi; Massimo Nabissi; Péter Nagy; Jihoon Nah; Aimable Nahimana; Ichiro Nakagawa; Ken Nakamura; Hitoshi Nakatogawa; Shyam S Nandi; Meera Nanjundan; Monica Nanni; Gennaro Napolitano; Roberta Nardacci; Masashi Narita; Melissa Nassif; Ilana Nathan; Manabu Natsumeda; Ryno J Naude; Christin Naumann; Olaia Naveiras; Fatemeh Navid; Steffan T Nawrocki; Taras Y Nazarko; Francesca Nazio; Florentina Negoita; Thomas Neill; Amanda 
L Neisch; Luca M Neri; Mihai G Netea; Patrick Neubert; Thomas P Neufeld; Dietbert Neumann; Albert Neutzner; Phillip T Newton; Paul A Ney; Ioannis P Nezis; Charlene C W Ng; Tzi Bun Ng; Hang T T Nguyen; Long T Nguyen; Hong-Min Ni; Clíona Ní Cheallaigh; Zhenhong Ni; M Celeste Nicolao; Francesco Nicoli; Manuel Nieto-Diaz; Per Nilsson; Shunbin Ning; Rituraj Niranjan; Hiroshi Nishimune; Mireia Niso-Santano; Ralph A Nixon; Annalisa Nobili; Clevio Nobrega; Takeshi Noda; Uxía Nogueira-Recalde; Trevor M Nolan; Ivan Nombela; Ivana Novak; Beatriz Novoa; Takashi Nozawa; Nobuyuki Nukina; Carmen Nussbaum-Krammer; Jesper Nylandsted; Tracey R O'Donovan; Seónadh M O'Leary; Eyleen J O'Rourke; Mary P O'Sullivan; Timothy E O'Sullivan; Salvatore Oddo; Ina Oehme; Michinaga Ogawa; Eric Ogier-Denis; Margret H Ogmundsdottir; Besim Ogretmen; Goo Taeg Oh; Seon-Hee Oh; Young J Oh; Takashi Ohama; Yohei Ohashi; Masaki Ohmuraya; Vasileios Oikonomou; Rani Ojha; Koji Okamoto; Hitoshi Okazawa; Masahide Oku; Sara Oliván; Jorge M A Oliveira; Michael Ollmann; James A Olzmann; Shakib Omari; M Bishr Omary; Gizem Önal; Martin Ondrej; Sang-Bing Ong; Sang-Ging Ong; Anna Onnis; Juan A Orellana; Sara Orellana-Muñoz; Maria Del Mar Ortega-Villaizan; Xilma R Ortiz-Gonzalez; Elena Ortona; Heinz D Osiewacz; Abdel-Hamid K Osman; Rosario Osta; Marisa S Otegui; Kinya Otsu; Christiane Ott; Luisa Ottobrini; Jing-Hsiung James Ou; Tiago F Outeiro; Inger Oynebraten; Melek Ozturk; Gilles Pagès; Susanta Pahari; Marta Pajares; Utpal B Pajvani; Rituraj Pal; Simona Paladino; Nicolas Pallet; Michela Palmieri; Giuseppe Palmisano; Camilla Palumbo; Francesco Pampaloni; Lifeng Pan; Qingjun Pan; Wenliang Pan; Xin Pan; Ganna Panasyuk; Rahul Pandey; Udai B Pandey; Vrajesh Pandya; Francesco Paneni; Shirley Y Pang; Elisa Panzarini; Daniela L Papademetrio; Elena Papaleo; Daniel Papinski; Diana Papp; Eun Chan Park; Hwan Tae Park; Ji-Man Park; Jong-In Park; Joon Tae Park; Junsoo Park; Sang Chul Park; Sang-Youel Park; Abraham H Parola; Jan 
B Parys; Adrien Pasquier; Benoit Pasquier; João F Passos; Nunzia Pastore; Hemal H Patel; Daniel Patschan; Sophie Pattingre; Gustavo Pedraza-Alva; Jose Pedraza-Chaverri; Zully Pedrozo; Gang Pei; Jianming Pei; Hadas Peled-Zehavi; Joaquín M Pellegrini; Joffrey Pelletier; Miguel A Peñalva; Di Peng; Ying Peng; Fabio Penna; Maria Pennuto; Francesca Pentimalli; Cláudia Mf Pereira; Gustavo J S Pereira; Lilian C Pereira; Luis Pereira de Almeida; Nirma D Perera; Ángel Pérez-Lara; Ana B Perez-Oliva; María Esther Pérez-Pérez; Palsamy Periyasamy; Andras Perl; Cristiana Perrotta; Ida Perrotta; Richard G Pestell; Morten Petersen; Irina Petrache; Goran Petrovski; Thorsten Pfirrmann; Astrid S Pfister; Jennifer A Philips; Huifeng Pi; Anna Picca; Alicia M Pickrell; Sandy Picot; Giovanna M Pierantoni; Marina Pierdominici; Philippe Pierre; Valérie Pierrefite-Carle; Karolina Pierzynowska; Federico Pietrocola; Miroslawa Pietruczuk; Claudio Pignata; Felipe X Pimentel-Muiños; Mario Pinar; Roberta O Pinheiro; Ronit Pinkas-Kramarski; Paolo Pinton; Karolina Pircs; Sujan Piya; Paola Pizzo; Theo S Plantinga; Harald W Platta; Ainhoa Plaza-Zabala; Markus Plomann; Egor Y Plotnikov; Helene Plun-Favreau; Ryszard Pluta; Roger Pocock; Stefanie Pöggeler; Christian Pohl; Marc Poirot; Angelo Poletti; Marisa Ponpuak; Hana Popelka; Blagovesta Popova; Helena Porta; Soledad Porte Alcon; Eliana Portilla-Fernandez; Martin Post; Malia B Potts; Joanna Poulton; Ted Powers; Veena Prahlad; Tomasz K Prajsnar; Domenico Praticò; Rosaria Prencipe; Muriel Priault; Tassula Proikas-Cezanne; Vasilis J Promponas; Christopher G Proud; Rosa Puertollano; Luigi Puglielli; Thomas Pulinilkunnil; Deepika Puri; Rajat Puri; Julien Puyal; Xiaopeng Qi; Yongmei Qi; Wenbin Qian; Lei Qiang; Yu Qiu; Joe Quadrilatero; Jorge Quarleri; Nina Raben; Hannah Rabinowich; Debora Ragona; Michael J Ragusa; Nader Rahimi; Marveh Rahmati; Valeria Raia; Nuno Raimundo; Namakkal-Soorappan Rajasekaran; Sriganesh Ramachandra Rao; Abdelhaq Rami; Ignacio 
Ramírez-Pardo; David B Ramsden; Felix Randow; Pundi N Rangarajan; Danilo Ranieri; Hai Rao; Lang Rao; Rekha Rao; Sumit Rathore; J Arjuna Ratnayaka; Edward A Ratovitski; Palaniyandi Ravanan; Gloria Ravegnini; Swapan K Ray; Babak Razani; Vito Rebecca; Fulvio Reggiori; Anne Régnier-Vigouroux; Andreas S Reichert; David Reigada; Jan H Reiling; Theo Rein; Siegfried Reipert; Rokeya Sultana Rekha; Hongmei Ren; Jun Ren; Weichao Ren; Tristan Renault; Giorgia Renga; Karen Reue; Kim Rewitz; Bruna Ribeiro de Andrade Ramos; S Amer Riazuddin; Teresa M Ribeiro-Rodrigues; Jean-Ehrland Ricci; Romeo Ricci; Victoria Riccio; Des R Richardson; Yasuko Rikihisa; Makarand V Risbud; Ruth M Risueño; Konstantinos Ritis; Salvatore Rizza; Rosario Rizzuto; Helen C Roberts; Luke D Roberts; Katherine J Robinson; Maria Carmela Roccheri; Stephane Rocchi; George G Rodney; Tiago Rodrigues; Vagner Ramon Rodrigues Silva; Amaia Rodriguez; Ruth Rodriguez-Barrueco; Nieves Rodriguez-Henche; Humberto Rodriguez-Rocha; Jeroen Roelofs; Robert S Rogers; Vladimir V Rogov; Ana I Rojo; Krzysztof Rolka; Vanina Romanello; Luigina Romani; Alessandra Romano; Patricia S Romano; David Romeo-Guitart; Luis C Romero; Montserrat Romero; Joseph C Roney; Christopher Rongo; Sante Roperto; Mathias T Rosenfeldt; Philip Rosenstiel; Anne G Rosenwald; Kevin A Roth; Lynn Roth; Steven Roth; Kasper M A Rouschop; Benoit D Roussel; Sophie Roux; Patrizia Rovere-Querini; Ajit Roy; Aurore Rozieres; Diego Ruano; David C Rubinsztein; Maria P Rubtsova; Klaus Ruckdeschel; Christoph Ruckenstuhl; Emil Rudolf; Rüdiger Rudolf; Alessandra Ruggieri; Avnika Ashok Ruparelia; Paola Rusmini; Ryan R Russell; Gian Luigi Russo; Maria Russo; Rossella Russo; Oxana O Ryabaya; Kevin M Ryan; Kwon-Yul Ryu; Maria Sabater-Arcis; Ulka Sachdev; Michael Sacher; Carsten Sachse; Abhishek Sadhu; Junichi Sadoshima; Nathaniel Safren; Paul Saftig; Antonia P Sagona; Gaurav Sahay; Amirhossein Sahebkar; Mustafa Sahin; Ozgur Sahin; Sumit Sahni; Nayuta Saito; Shigeru Saito; 
Tsunenori Saito; Ryohei Sakai; Yasuyoshi Sakai; Jun-Ichi Sakamaki; Kalle Saksela; Gloria Salazar; Anna Salazar-Degracia; Ghasem H Salekdeh; Ashok K Saluja; Belém Sampaio-Marques; Maria Cecilia Sanchez; Jose A Sanchez-Alcazar; Victoria Sanchez-Vera; Vanessa Sancho-Shimizu; J Thomas Sanderson; Marco Sandri; Stefano Santaguida; Laura Santambrogio; Magda M Santana; Giorgio Santoni; Alberto Sanz; Pascual Sanz; Shweta Saran; Marco Sardiello; Timothy J Sargeant; Apurva Sarin; Chinmoy Sarkar; Sovan Sarkar; Maria-Rosa Sarrias; Surajit Sarkar; Dipanka Tanu Sarmah; Jaakko Sarparanta; Aishwarya Sathyanarayan; Ranganayaki Sathyanarayanan; K Matthew Scaglione; Francesca Scatozza; Liliana Schaefer; Zachary T Schafer; Ulrich E Schaible; Anthony H V Schapira; Michael Scharl; Hermann M Schatzl; Catherine H Schein; Wiep Scheper; David Scheuring; Maria Vittoria Schiaffino; Monica Schiappacassi; Rainer Schindl; Uwe Schlattner; Oliver Schmidt; Roland Schmitt; Stephen D Schmidt; Ingo Schmitz; Eran Schmukler; Anja Schneider; Bianca E Schneider; Romana Schober; Alejandra C Schoijet; Micah B Schott; Michael Schramm; Bernd Schröder; Kai Schuh; Christoph Schüller; Ryan J Schulze; Lea Schürmanns; Jens C Schwamborn; Melanie Schwarten; Filippo Scialo; Sebastiano Sciarretta; Melanie J Scott; Kathleen W Scotto; A Ivana Scovassi; Andrea Scrima; Aurora Scrivo; David Sebastian; Salwa Sebti; Simon Sedej; Laura Segatori; Nava Segev; Per O Seglen; Iban Seiliez; Ekihiro Seki; Scott B Selleck; Frank W Sellke; Joshua T Selsby; Michael Sendtner; Serif Senturk; Elena Seranova; Consolato Sergi; Ruth Serra-Moreno; Hiromi Sesaki; Carmine Settembre; Subba Rao Gangi Setty; Gianluca Sgarbi; Ou Sha; John J Shacka; Javeed A Shah; Dantong Shang; Changshun Shao; Feng Shao; Soroush Sharbati; Lisa M Sharkey; Dipali Sharma; Gaurav Sharma; Kulbhushan Sharma; Pawan Sharma; Surendra Sharma; Han-Ming Shen; Hongtao Shen; Jiangang Shen; Ming Shen; Weili Shen; Zheni Shen; Rui Sheng; Zhi Sheng; Zu-Hang Sheng; Jianjian Shi; 
Xiaobing Shi; Ying-Hong Shi; Kahori Shiba-Fukushima; Jeng-Jer Shieh; Yohta Shimada; Shigeomi Shimizu; Makoto Shimozawa; Takahiro Shintani; Christopher J Shoemaker; Shahla Shojaei; Ikuo Shoji; Bhupendra V Shravage; Viji Shridhar; Chih-Wen Shu; Hong-Bing Shu; Ke Shui; Arvind K Shukla; Timothy E Shutt; Valentina Sica; Aleem Siddiqui; Amanda Sierra; Virginia Sierra-Torre; Santiago Signorelli; Payel Sil; Bruno J de Andrade Silva; Johnatas D Silva; Eduardo Silva-Pavez; Sandrine Silvente-Poirot; Rachel E Simmonds; Anna Katharina Simon; Hans-Uwe Simon; Matias Simons; Anurag Singh; Lalit P Singh; Rajat Singh; Shivendra V Singh; Shrawan K Singh; Sudha B Singh; Sunaina Singh; Surinder Pal Singh; Debasish Sinha; Rohit Anthony Sinha; Sangita Sinha; Agnieszka Sirko; Kapil Sirohi; Efthimios L Sivridis; Panagiotis Skendros; Aleksandra Skirycz; Iva Slaninová; Soraya S Smaili; Andrei Smertenko; Matthew D Smith; Stefaan J Soenen; Eun Jung Sohn; Sophia P M Sok; Giancarlo Solaini; Thierry Soldati; Scott A Soleimanpour; Rosa M Soler; Alexei Solovchenko; Jason A Somarelli; Avinash Sonawane; Fuyong Song; Hyun Kyu Song; Ju-Xian Song; Kunhua Song; Zhiyin Song; Leandro R Soria; Maurizio Sorice; Alexander A Soukas; Sandra-Fausia Soukup; Diana Sousa; Nadia Sousa; Paul A Spagnuolo; Stephen A Spector; M M Srinivas Bharath; Daret St Clair; Venturina Stagni; Leopoldo Staiano; Clint A Stalnecker; Metodi V Stankov; Peter B Stathopulos; Katja Stefan; Sven Marcel Stefan; Leonidas Stefanis; Joan S Steffan; Alexander Steinkasserer; Harald Stenmark; Jared Sterneckert; Craig Stevens; Veronika Stoka; Stephan Storch; Björn Stork; Flavie Strappazzon; Anne Marie Strohecker; Dwayne G Stupack; Huanxing Su; Ling-Yan Su; Longxiang Su; Ana M Suarez-Fontes; Carlos S Subauste; Selvakumar Subbian; Paula V Subirada; Ganapasam Sudhandiran; Carolyn M Sue; Xinbing Sui; Corey Summers; Guangchao Sun; Jun Sun; Kang Sun; Meng-Xiang Sun; Qiming Sun; Yi Sun; Zhongjie Sun; Karen K S Sunahara; Eva Sundberg; Katalin Susztak; 
Peter Sutovsky; Hidekazu Suzuki; Gary Sweeney; J David Symons; Stephen Cho Wing Sze; Nathaniel J Szewczyk; Anna Tabęcka-Łonczynska; Claudio Tabolacci; Frank Tacke; Heinrich Taegtmeyer; Marco Tafani; Mitsuo Tagaya; Haoran Tai; Stephen W G Tait; Yoshinori Takahashi; Szabolcs Takats; Priti Talwar; Chit Tam; Shing Yau Tam; Davide Tampellini; Atsushi Tamura; Chong Teik Tan; Eng-King Tan; Ya-Qin Tan; Masaki Tanaka; Motomasa Tanaka; Daolin Tang; Jingfeng Tang; Tie-Shan Tang; Isei Tanida; Zhipeng Tao; Mohammed Taouis; Lars Tatenhorst; Nektarios Tavernarakis; Allen Taylor; Gregory A Taylor; Joan M Taylor; Elena Tchetina; Andrew R Tee; Irmgard Tegeder; David Teis; Natercia Teixeira; Fatima Teixeira-Clerc; Kumsal A Tekirdag; Tewin Tencomnao; Sandra Tenreiro; Alexei V Tepikin; Pilar S Testillano; Gianluca Tettamanti; Pierre-Louis Tharaux; Kathrin Thedieck; Arvind A Thekkinghat; Stefano Thellung; Josephine W Thinwa; V P Thirumalaikumar; Sufi Mary Thomas; Paul G Thomes; Andrew Thorburn; Lipi Thukral; Thomas Thum; Michael Thumm; Ling Tian; Ales Tichy; Andreas Till; Vincent Timmerman; Vladimir I Titorenko; Sokol V Todi; Krassimira Todorova; Janne M Toivonen; Luana Tomaipitinca; Dhanendra Tomar; Cristina Tomas-Zapico; Sergej Tomić; Benjamin Chun-Kit Tong; Chao Tong; Xin Tong; Sharon A Tooze; Maria L Torgersen; Satoru Torii; Liliana Torres-López; Alicia Torriglia; Christina G Towers; Roberto Towns; Shinya Toyokuni; Vladimir Trajkovic; Donatella Tramontano; Quynh-Giao Tran; Leonardo H Travassos; Charles B Trelford; Shirley Tremel; Ioannis P Trougakos; Betty P Tsao; Mario P Tschan; Hung-Fat Tse; Tak Fu Tse; Hitoshi Tsugawa; Andrey S Tsvetkov; David A Tumbarello; Yasin Tumtas; María J Tuñón; Sandra Turcotte; Boris Turk; Vito Turk; Bradley J Turner; Richard I Tuxworth; Jessica K Tyler; Elena V Tyutereva; Yasuo Uchiyama; Aslihan Ugun-Klusek; Holm H Uhlig; Marzena Ułamek-Kozioł; Ilya V Ulasov; Midori Umekawa; Christian Ungermann; Rei Unno; Sylvie Urbe; Elisabet Uribe-Carretero; Suayib 
Journal:  Autophagy       Date:  2021-02-08       Impact factor: 13.391

2.  A high-throughput protocol for monitoring starvation-induced autophagy in real time in mouse embryonic fibroblasts.

Authors:  Ada Nowosad; Arnaud Besson
Journal:  STAR Protoc       Date:  2021-11-17

3.  Quantitative and time-resolved monitoring of organelle and protein delivery to the lysosome with a tandem fluorescent Halo-GFP reporter.

Authors:  M Rudinskiy; T J Bergmann; M Molinari
Journal:  Mol Biol Cell       Date:  2022-02-02       Impact factor: 3.612

  3 in total
