Tal M Dankovich1,2, Silvio O Rizzoli1,3. 1. University Medical Center Göttingen, Institute for Neuro- and Sensory Physiology, Göttingen 37073, Germany. 2. International Max Planck Research School for Neuroscience, Göttingen, Germany. 3. Biostructural Imaging of Neurodegeneration (BIN) Center & Multiscale Bioimaging Excellence Center, Göttingen 37075, Germany.
Abstract
Optical super-resolution microscopy (SRM) has enabled biologists to visualize cellular structures with near-molecular resolution, giving unprecedented access to details about the amounts, sizes, and spatial distributions of macromolecules in the cell. Precisely quantifying these molecular details requires large datasets of high-quality, reproducible SRM images. In this review, we discuss the unique set of challenges facing quantitative SRM, giving particular attention to the shortcomings of conventional specimen preparation techniques and the necessity for optimal labeling of molecular targets. We further discuss the obstacles to scaling SRM methods, such as lengthy image acquisition and complex SRM data analysis. For each of these challenges, we review the recent advances in the field that circumvent these pitfalls and provide practical advice to biologists for optimizing SRM experiments.
Some 300 years ago, light microscopy guided the discovery that all living organisms consist of individual cells, pioneering the entire discipline of cell biology. Microscopy has been one of the most important and rapidly advancing laboratory techniques, and countless milestones in biology have coincided with the advent of better, more powerful microscopes. Indeed, it is probably not by chance that the image of a researcher looking through the eyepiece of a microscope has become a universal icon for the natural sciences. The introduction of fluorescence microscopy in the early twentieth century marked a major milestone in biology, allowing scientists to visualize previously invisible cellular structures by tagging them with highly specific fluorescent labels. Nevertheless, as with any optical system, the spatial resolution of fluorescence microscopy is limited by the diffraction (or bending) of light as it passes through the circular aperture of the microscope objective. First described by Ernst Abbe in 1873, the diffraction limit implies that the smallest object that can be resolved by an optical microscope is limited to approximately half the wavelength of the light being used (Abbe, 1873), amounting typically to ≥200 nm, or more than twenty times the size of an average protein. In the last few decades, a number of techniques collectively termed “super-resolution microscopy” (SRM) have surpassed this diffraction barrier, extending the capabilities of optical imaging to the nanoscale and yielding unprecedented biological insights (Balzarotti et al., 2017; Betzig et al., 2006; Chen et al., 2015; Gustafsson, 2000; Hell and Wichmann, 1994; Hess et al., 2006; Moerner and Kador, 1989; Rust et al., 2006; Sharonov and Hochstrasser, 2006). As the field progressed, SRM shifted into the realm of quantitative biology, allowing for the accurate measurement of the abundances, distributions, and nanoscale movements of individual molecules. 
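Abbe's limit can be written as d = λ/(2·NA), where NA is the numerical aperture of the objective. As a quick illustration (the wavelength and NA below are typical textbook values, not figures from any particular study):

```python
# Abbe's diffraction limit: d = wavelength / (2 * NA).
def abbe_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable distance (nm) for light of a given wavelength and objective NA."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green emission (~520 nm) through a high-end 1.4-NA oil-immersion objective:
print(f"Diffraction limit: {abbe_limit(520, 1.4):.0f} nm")  # ~186 nm
```

With visible light and the best available objectives, the result always lands in the ∼180–250 nm range quoted above, regardless of how well the optics are engineered.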
However, amassing such high-precision information requires more than mere access to super-resolution microscopes. Crucially, special care must be taken to prepare biological samples with minimal distortions and to select labels that are well suited to the chosen SRM approach. In addition, a good working knowledge of the applied technique is necessary to account for inherent limitations during acquisition and analysis, and to successfully scale up experiments to gather large datasets for robust quantifications. In this review, we will discuss the major obstacles to achieving quantitative SRM, particularly emphasizing the importance of optimal sample preparation and labeling. In addition, we will review the unique challenges facing SRM data analysis and the design of high-throughput SRM experiments. Our aim is to discuss possible solutions to these challenges based on both existing approaches in the field, as well as emerging techniques from recent years.
An overview of common SRM techniques
The key to achieving super resolution is minimizing the number of fluorophores that can be simultaneously detected within a diffraction-limited area. Most major SRM approaches achieve this by tampering with the on/off (bright/dark) state of the molecules to separate their emission in space, time, or both. These so-called “functional” SRM techniques broadly fall into one of two categories, namely, coordinate-targeted and stochastic, distinguished by their strategy for modulating fluorescent states. Coordinate-targeted SRM techniques use directed, focused lasers to actively push targeted fluorophores into a dark state. The best known of these techniques is stimulated emission depletion (STED) microscopy, where two superimposed beams are scanned over the sample in tandem: a focused excitation beam to switch the fluorophores on, and a powerful STED beam that depletes fluorophores through stimulated emission (Figure 1). The depletion beam is typically doughnut-shaped, gradually weakening toward its center, where it has ideally zero intensity. As a result, only fluorophores residing at the center of the beam can emit, whereas the rest of the diffraction-limited area remains dark (Hell and Wichmann, 1994). Although commercial STED setups typically achieve a lateral resolution of ∼60 nm, resolutions down to ∼20 nm have been demonstrated in biological samples, and down to single nanometers in fluorescent nitrogen vacancies in diamond (Göttfert et al., 2013; Rittweger et al., 2009; Wegel et al., 2016). In the axial plane, 3D STED approaches such as isoSTED can achieve a resolution of ∼30 nm (Curdt et al., 2015; Hell et al., 2009). A drawback of STED, however, is the high laser intensities required for efficient depletion of the fluorophores, which limits its applicability to live samples.
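The trade-off between depletion intensity and resolution is captured by the well-known extension of Abbe's equation for STED, d ≈ λ/(2·NA·√(1 + I/Is)), where I is the depletion-beam intensity and Is the saturation intensity of the dye. A minimal numerical sketch (all values illustrative):

```python
import math

# Extended Abbe equation for STED: d ≈ λ / (2 · NA · sqrt(1 + I/I_sat)).
# saturation_factor is I/I_sat, the depletion intensity in units of the dye's
# saturation intensity. Numbers are illustrative, not from a specific instrument.
def sted_resolution(wavelength_nm: float, na: float, saturation_factor: float) -> float:
    return wavelength_nm / (2.0 * na * math.sqrt(1.0 + saturation_factor))

for s in (0, 10, 100):
    print(f"I/I_sat = {s:>3}: d = {sted_resolution(640, 1.4, s):.0f} nm")
```

At I/Is = 0 the formula reduces to the ordinary diffraction limit; pushing below ∼30 nm requires I/Is on the order of 100, which is why resolution gains in STED come at the cost of high laser powers.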
As alternatives, techniques such as ground state depletion (GSD) and reversible saturable/switchable optically linear (fluorescence) transitions (RESOLFT) can be used, which require appreciably lower laser intensities to deplete the targeted fluorophores by transitioning them to additional “meta-stable” dark states (Bretschneider et al., 2007; Hell and Kroug, 1995). Recent applications of RESOLFT have achieved resolutions comparable to those of STED with reduced excitation power, making them well suited for live-cell and large volume imaging (Grotjohann et al., 2011; Masullo et al., 2018).
Figure 1
Overview of confocal and major super-resolution microscopy techniques
(A) Laser scanning confocal microscopy is a diffraction-limited technique, performed by scanning an excitation laser over the sample point-by-point and acquiring the emitted light in the focal plane through a pinhole.
(B) Like confocal microscopy, STED is a laser-scanning technique acquired in a point-by-point manner. In STED, the excitation beam is overlaid with a doughnut-shaped depletion beam that turns off fluorescence emission at the periphery, resulting in an emission area smaller than the diffraction limit.
(C) In SIM, the sample is imaged with widefield illumination overlaid with an interfering grating pattern. Multiple images are acquired by rotating the grating at different angles and then processed to produce the final image.
(D) Single-molecule localization microscopy relies on the stochastic nature of fluorophore emission (“blinking”) to ensure that only small, dispersed subsets of fluorophores are emitting at a given time. Blinking can be achieved by photoactivation/switching (PALM/STORM) or through transient binding to a target structure (PAINT). The sample is illuminated many times, and the data are processed to determine the spatial coordinates of the individual fluorophores.
(E) MINFLUX relies on fluorophore “blinking” (as in SMLM), but the sample is excited with a doughnut-shaped excitation (rather than depletion) beam. As a result, fluorophores close to the center of excitation will emit fewer photons, and emission minima can be used to determine the localizations.
(F) In expansion microscopy, the sample is embedded in a gel, which is expanded isotropically to distance the fluorophores physically. The enlarged sample can be imaged using conventional, diffraction-limited microscopy. Notes: resolutions reflect the typical values achieved with commercial microscope setups; colors refer to the use of multiple fluorophores simultaneously.
Stochastic SRM techniques, collectively termed “single molecule localization microscopy” (SMLM), do not modulate the fluorophores spatially but rely on the stochastic nature of the on/off state transitions. Upon illuminating the entire sample with a widefield light source, only a small, random fraction of the fluorophores will switch states, ideally no more than a single fluorophore per diffraction-limited area. By repeatedly switching on random subsets of fluorophores, or passively measuring “off” fluorophores as they spontaneously return to their bright state, a super-resolved image of all fluorophore localizations can be ultimately reconstructed (Figure 1). Typically, hundreds to thousands of such sequential measurements are required in order to reconstruct an entire image, making acquisition times significantly longer than in targeted laser-scanning approaches. However, a major advantage of these techniques is that they can largely be realized using conventional wide-field setups and cameras, making them relatively simple and cost-effective. To date, numerous SMLM techniques have been devised, but these are mostly derivatives of two prototypical approaches: photoactivated localization microscopy (PALM) (Betzig et al., 2006; Hess et al., 2006), which relies on photoswitchable proteins, and stochastic optical reconstruction microscopy (STORM), which utilizes organic fluorophores (Rust et al., 2006). The main difference between the various techniques lies in their strategy for achieving the on/off switching and the types of photoswitchable probes that are used. A more recent approach, point accumulation in nanoscale topography (PAINT), does not utilize light-mediated activation but instead makes use of labels that fluoresce upon binding a target structure.
By using dyes that transiently bind and quickly dissociate from their targets, a “blinking” effect can be produced, quite similar to that which is observed with photoswitchable dyes (Sharonov and Hochstrasser, 2006). Alternatively, the bound fluorophores may be irreversibly switched off through photobleaching (Burnette et al., 2011; Schoen et al., 2011). Both PALM/STORM and PAINT can reach resolutions on the order of tens of nanometers, but the actual values largely depend on the properties of the fluorophores being used. With sufficiently bright and photostable dyes, resolutions are typically ∼10–50 nm in xy on commercial setups, but can potentially reach single nanometers (Dai et al., 2016; Vaughan et al., 2012; Wegel et al., 2016). SMLM frequently utilizes total internal reflection fluorescence (TIRF) illumination, where the excitation light is shined on the sample at an angle greater than the critical angle and is totally reflected at the glass/water interface. The reflection creates a thin illumination field (known as an “evanescent wave”) that penetrates the sample superficially, selectively exciting fluorophores near the interface, up to ∼100 nm (Diekmann et al., 2017). The resolution in z can be further enhanced to ∼50–70 nm with 3D systems such as 3D-STORM or iPALM (Huang et al., 2008; Lin et al., 2020; Shtengel et al., 2009).

Recently, a ground-breaking technique named MINFLUX was introduced, combining the strengths of both coordinate-targeted and stochastic SRM approaches. In MINFLUX, the fluorophores undergo both stochastic on/off switching, as in PALM/STORM, and simultaneous illumination with a doughnut-shaped excitation (rather than depletion) beam (Figure 1). The closer the emitting fluorophore is to the zero-intensity center of the beam, the smaller the number of photons it will emit, and thus the fluorophore localizations can be deduced from local emission minima.
MINFLUX boasts the highest precision of all SRM techniques to date, achieving resolutions of 1–3 nm in both the lateral and axial planes (Balzarotti et al., 2017; Gwosch et al., 2020).

An additional approach, structured illumination microscopy (SIM), does not rely on fluorescence states but instead exploits wave optics to construct a super-resolution image. In SIM, optical gratings are used to illuminate the sample with patterned light, resulting in the formation of interference patterns, also known as Moiré fringes. Put simply, these fringes are a mixture between the frequency patterns of light emitted by the sample and the frequency of the illumination pattern, and because the latter is known, the former can be retrieved mathematically. In a SIM measurement, the sample is illuminated multiple times with rotations of the grating pattern at different angles (Figure 1). In total, ∼15 rotations are required to reconstruct an image with a lateral resolution of ∼100 nm (Gustafsson, 2000). In the axial plane, the introduction of an additional patterned illumination allows a typical resolution of ∼300–400 nm (Schermelleh et al., 2008). Higher lateral resolutions (<50 nm) can be obtained with the non-linear form of SIM (NL-SIM), which relies on saturation of the fluorophore excited state with high-intensity light to produce a nonlinear response between the excitation and emission intensities (Li et al., 2015).

Lastly, expansion microscopy (ExM) presents a creative solution for overcoming the diffraction limit without any need for optical “tricks”, by physically expanding the sample itself (Chen et al., 2015). In ExM, the fluorescently labeled sample is embedded in a swellable polymer gel and the fluorophores are linked to the gel matrix. The sample is digested, and the gel is expanded in all dimensions, physically distancing the fluorescent labels (Figure 1).
As a result, super-resolution images can be acquired with conventional, diffraction-limited microscopes and with most conventional dyes. Most variants of ExM expand samples by a factor of ∼4, allowing a lateral resolution of ∼70 nm, and iterative ExM, which includes a second gel step, allows 10× expansion, achieving a resolution of ∼25 nm (Truckenbrodt et al., 2018). In contrast to the previously mentioned techniques, ExM is limited to fixed samples only. On a cautionary note, it has been demonstrated that the expansion factor within a single cell may vary and should therefore be validated carefully when performing ExM experiments (Büttner et al., 2020).

Variations on the techniques described earlier as well as the conception of new technologies are continually emerging. Since it is well beyond the scope of this review to describe these here, we refer the reader to more extensive reviews of SRM techniques (Sahl and Hell, 2019; Vangindertael et al., 2018).
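The resolution arithmetic of ExM amounts to dividing the microscope's optical resolution by the expansion factor. A trivial sketch (the 280 nm starting resolution is an assumed confocal value, used only for illustration):

```python
# Effective resolution in ExM: the microscope's diffraction-limited resolution
# divided by the physical expansion factor. The 280 nm starting value is an
# assumed confocal resolution, not a figure from any particular instrument.
def exm_effective_resolution(optical_resolution_nm: float, expansion_factor: float) -> float:
    return optical_resolution_nm / expansion_factor

print(exm_effective_resolution(280, 4))   # 70.0 nm with ~4x expansion
print(exm_effective_resolution(280, 10))  # 28.0 nm with 10x iterative expansion
```

Note that this arithmetic assumes perfectly uniform expansion; as cautioned above, the local expansion factor can vary within a single cell and should be validated.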
Super-resolution microscopy paves the way for quantitative biological insights
In recent years, improvements in measurement fidelity and novel analyses have allowed biologists to ask not only where a molecule is located but also how many molecules there are. As their name suggests, SMLM techniques are particularly well suited to tackle this question, because the number of molecules can, in theory, be inferred from the discrete single-molecule localizations. Through careful characterization of different experimental variables such as the labeling efficiency of the probes and fluorophore blinking kinetics (see Challenge 4, below), it becomes possible to estimate the true molecule amounts and stoichiometries. For example, PALM was used to count the number of lipid binding sites on endocytic vesicles in yeast, showing how these vary throughout vesicle maturation (Puchner et al., 2013). In a more recent study, quantitative DNA-PAINT was used to determine the stoichiometry between ryanodine receptors (RyRs) and the RyR inhibitory protein junctophilin-1 on the membranes of cardiomyocytes. The authors found that the ratio of expression between the two proteins is highly heterogeneous, implying that other forms of RyR regulation are likely to exist (Jayasinghe et al., 2018). Molecule counting can also be achieved by SRM techniques that do not compute single-molecule localizations. For example, using STED, the number of internalized transferrin receptors could be accurately counted through analysis of photon emission statistics, relying on the idea that fluorophores can only emit a single photon at a given time, and thus a simultaneous detection of multiple photons should indicate the presence of multiple molecules (Ta et al., 2015) (see Challenge 4, below) (Figures 2A-2C).
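The logic of correcting raw localization counts for blinking and labeling efficiency can be sketched as follows. This is a deliberate oversimplification with hypothetical calibration numbers; real analyses (e.g., qPAINT or photon-statistics approaches) model the kinetics far more rigorously:

```python
# Toy correction for SMLM molecule counting: the raw localization count is
# inflated by repeated blinking of each fluorophore and deflated by incomplete
# labeling. The calibration values used below are hypothetical; in practice
# they must be measured for each probe, buffer, and imaging condition.
def estimate_molecule_count(n_localizations: int,
                            mean_blinks_per_fluorophore: float,
                            labeling_efficiency: float) -> float:
    n_fluorophores = n_localizations / mean_blinks_per_fluorophore
    return n_fluorophores / labeling_efficiency

# 12,000 localizations, ~4 blinks per fluorophore, ~60% of targets labeled:
print(round(estimate_molecule_count(12_000, 4.0, 0.6)))  # 5000
```

The point of the sketch is that both correction factors enter multiplicatively, so even modest errors in their calibration propagate directly into the final molecule count.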
Figure 2
Biological insights from quantitative super-resolution microscopy. The scheme shows three types of quantitative SRM analyses and example studies.
(A–C) Counting molecules and determining molecular stoichiometries. (A) PALM images of PI3P binding sites on endocytic vesicles colocalizing with markers of different maturation stages (top to bottom: clathrin, the GTPase Vps21, the GTPase Ypt7). The images show that increasingly mature vesicles have a higher PI3P content. Scale bar: 100 nm. Adapted with permission from (Puchner et al., 2013). (B) Left panel: STED and confocal images of internalized TfRs (axial summation of 0.9 μm). H is the maximal intensity value (number of photon counts) per pixel. Right panel: 3D molecular map generated by photon statistics of STED and confocal recordings. Inset: isosurfaces of the molecular map (corresponding to the boxed region, encompassing ∼70% of these molecules). The colors represent the number of molecules in the corresponding region. Scale bars: 1 μm. Adapted from (Ta et al., 2015), CC-BY-4.0. (C) Exchange qPAINT image of RyR receptors (red) and JPH2 (green) co-clusters. The ratio between the number of JPH2 and RyR molecules is determined for each cluster. Scale bar: 250 nm. Adapted from (Jayasinghe et al., 2018), CC-BY-4.0. (D–F) Spatial organization/clustering of molecules.
(D) 2-color PALM image of TCRs (red) co-clustered with the linker for activation of T cells LAT (green). Scale bar: 2 μm. Adapted from (Sherman et al., 2013), with permission from John Wiley and Sons.
(E) Examples of AMPA receptors (GluA1/2 subunits) clustered in “nanodomains” inside dendritic spines, imaged by PALM, STED, or PAINT (top to bottom panels, respectively). Scale bar: 1 μm. Adapted from (Nair et al., 2013), CC BY-NC-SA 3.0.
(F) Right: PALM images of single clusters of Gag HIV assembly sites (red) colocalizing with ESCRT-I subunits Tsg101 (green) from axial and lateral views (top and bottom panels, respectively). Scale bar: 50 nm. Left: 3D single-cluster averaging of Gag and Tsg101 demonstrates the presence of ESCRT subunits within the interior of the HIV Gag lattice. Adapted from (Van Engelenburg et al., 2014), with permission from AAAS.
(G–I) Determining molecular interactions through colocalization analyses. (G) Top: 3D STORM image of RIM1/2 (red) and PSD-95 (blue) nanoclusters, with a pixel size of 10 nm, compared with a widefield composite (bottom corner) with a pixel size of 100 nm. Scale bar: 2 μm. Bottom: enlargement of the boxed region in the original and rotated angles (left and right, respectively). Scale bar: 200 nm. Adapted from (Tang et al., 2016), with permission from Springer Nature. (H) Left: confocal (top) and 3D SIM (bottom) images of a synapse, labeled for Gephyrin (green), GABAA receptors (red), and the vesicular glutamate transporter VGAT (blue). Scale bars: 1 μm (top) and 500 nm (bottom). Right: 3D reconstruction of a single synapse. Scale bar: 500 nm. Adapted from (Crosby et al., 2019), with permission from Elsevier. (I) Multiplex STORM images showing five different synaptic targets imaged in the same section (out of 16 targets in total). Left, top to bottom: actin marked by phalloidin; the endosome-associated protein Rab5; Golgi marked by GM130. Right, top to bottom: clathrin marked by CHC17; the endosome-associated protein EEA1; overlay of all five channels. Scale bar: 2 μm. Adapted from (Klevanski et al., 2020).
The improved accuracy of SRM also allows more meticulous investigations of spatial organizations of molecular structures. One topic of investigation where this has been particularly instructive is that of receptor clustering at the plasma membrane, which is known to be an important step in the initiation of intracellular signaling events.
Studies employing a variety of SRM techniques have exploited clustering analysis to demonstrate, for example, the formation of heterogeneous clusters of T cell receptors (TCRs) in response to activation (PALM) (Sherman et al., 2013) (Figure 2D), the induction of arrestin clustering in response to G-protein-coupled receptor stimulation (STORM) (Truan et al., 2013), and the clustering of AMPA-type glutamate receptors at the synapse (combined PALM, STED, and PAINT) (Nair et al., 2013) (Figure 2E). The use of similar analyses has also revealed previously unknown molecular mechanisms, for example, in the field of virology and in particular for the replication cycle of human immunodeficiency virus 1 (HIV-1). For example, STED imaging of HIV particles demonstrated that during maturation, the viral envelope protein (Env) forms clusters that assist the entry of the virus into cells (Chojnacki et al., 2017). Another study used 3D PALM to quantify the accumulation of the ESCRT (endosomal sorting complex required for transport) protein complex at the head of the budding virus, showing its function in releasing the virus from infected cells (Van Engelenburg et al., 2014) (Figure 2F). In addition to protein structures, SRM has also shed light on meaningful organizations of RNA and DNA (Boettiger et al., 2016; Moffitt et al., 2016; Ricci et al., 2015).

Finally, advanced multiplexing capabilities, in combination with colocalization analyses, have revealed previously unknown molecular interactions. For example, 3D 3-color STORM imaging of synapses led to the discovery that clusters of pre-synaptic proteins that mediate synaptic vesicle release localize in apposition to clusters of post-synaptic receptors, forming trans-synaptic “nanocolumns” (Tang et al., 2016) (Figure 2G).
In a recent study, 3D SIM revealed similar nanodomains in inhibitory synapses, consisting of post-synaptic clusters of GABAA-type receptors and gephyrin scaffolds in apposition to pre-synaptic clusters of the active-zone protein Rab3-interacting molecule (RIM) (Crosby et al., 2019) (Figure 2H). Beyond the interrogation of a handful of molecular species, sequential SRM techniques can measure a theoretically limitless number of targets by employing repeated steps of labeling and acquisition. For example, a multiplexed implementation of PAINT was used to map the precise distributions of five receptor tyrosine kinases (RTKs) in breast cancer cells, revealing previously unknown ligand-dependent interactions (Werbin et al., 2017). More recently, a sequential implementation of STORM reached unprecedented scales through autonomous labeling and acquisition of over a dozen separate synaptic targets, showing that such approaches can reveal previously unknown functional roles (Klevanski et al., 2020) (Figure 2I).
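Cluster analyses such as those above typically start from a table of localization coordinates and group nearby points. Published studies use density-based algorithms (e.g., DBSCAN or Ripley's K functions) with carefully validated parameters; the core idea can nevertheless be sketched with a simple distance-threshold (single-linkage) grouping on synthetic coordinates:

```python
import math
from collections import defaultdict

# Minimal single-linkage grouping of 2D localizations: points closer than
# `radius` (same units as the coordinates, e.g. nm) end up in one cluster.
# Real SMLM studies use density-based methods (e.g. DBSCAN, Ripley's K)
# with validated parameters; this is only a sketch of the idea.
def cluster_localizations(points, radius):
    parent = list(range(len(points)))

    def find(i):  # union-find root lookup with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Merge every pair of points within `radius` of each other.
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= radius:
                parent[find(i)] = find(j)

    clusters = defaultdict(list)
    for i, p in enumerate(points):
        clusters[find(i)].append(p)
    return list(clusters.values())

# Two synthetic "nanoclusters" ~200 nm apart plus one stray localization:
locs = [(0, 0), (10, 5), (5, 12), (200, 0), (205, 8), (400, 400)]
print(sorted(len(c) for c in cluster_localizations(locs, radius=30)))  # [1, 2, 3]
```

The choice of radius (or density threshold) is the critical parameter: set too large, distinct nanoclusters merge; set too small, a single cluster fragments. This is one reason quantitative SRM demands careful validation of analysis settings.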
Challenges (and solutions) in achieving quantitative SRM
Challenge 1: preparing optimal samples for SRM
Fixation
Subtle flaws in biological samples that are invisible to conventional microscopy become readily apparent at nanoscale resolution. One of the biggest sources of imaging artifacts is the fixation of biological samples, which is intended to immobilize or “fix” the molecules in place for subsequent imaging. One approach uses organic solvents, such as methanol, to precipitate proteins. Although this treatment has been widely used in fluorescence microscopy before labeling cytoskeletal components such as microtubules, SRM has since revealed that these structures are poorly preserved at the nanoscale (Jimenez et al., 2020). Furthermore, treatment with organic solvents can lead to a loss of membranes and soluble cytosolic proteins. Fixation in immunolabeling experiments is more frequently achieved through the use of aldehydes to cross-link neighboring peptides, primarily by bridging between the side chains of lysine residues. Paraformaldehyde (PFA) is generally favored for these applications because it produces a relatively fast and mild reaction, which is less likely to distort cellular epitopes and disrupt their immunogenicity (Hopwood, 1969). However, SRM has revealed latent artifacts of PFA fixation, such as an apparent clustering of membrane receptors, distortion of the cell cytoskeleton, and inefficient immobilization of membrane proteins (Brock et al., 1999; Pereira et al., 2019; Stanly et al., 2016). These effects can be somewhat mitigated by fixing cells with a combination of PFA and glutaraldehyde (GA) at varying concentrations, facilitating more extensive cross-linking by virtue of the two aldehyde groups on the GA molecule (versus the single aldehyde group on PFA). This combination was shown to produce significantly better preservation of native membrane protein distributions and cytoskeletal structures (Tanaka et al., 2010; Whelan and Bell, 2015). 
However, the exhaustive cross-linking can often distort or block epitopes, making GA inapplicable to many immunolabeled samples (Farr and Nakane, 1981). A worthy alternative is the small dialdehyde glyoxal, which was recently shown to outperform PFA fixation for a wide range of cellular targets in SRM. As part of an extensive study, Richter and colleagues tested the performance of glyoxal with STED microscopy and demonstrated superior preservation of a large variety of cellular targets, including membrane-bound and organelle-resident proteins, as well as nucleic acids (Richter et al., 2018). Although glyoxal is not widely applied as of yet, it is likely to become a valuable tool for SRM experiments. As with any approach, however, there is no “one-size-fits-all” reagent, and different cell types and targets are visualized best with different varieties or combinations of fixatives (Figure 3). It should also be noted that besides the choice of fixative, additional aspects of the fixation reaction, such as the length, temperature, and fixative concentrations, should also be optimized to maximize structural preservation. It is also strongly advised to follow up the fixation reaction with a treatment of sodium borohydride, glycine, or ammonium chloride solution to “quench” the unreacted aldehydes and avoid morphological distortions due to overfixation. Fortunately, several studies have undertaken the task of comparing and optimizing sample preparation procedures for SRM applications (e.g., Black, 2016; Halpern et al., 2015; Richter et al., 2018; Whelan and Bell, 2015), and where literature is lacking, it is advisable to invest the time in performing such optimizations.
Figure 3
Importance of optimal sample fixation for super-resolution
(A) Top: ground state depletion (GSD) images of actin marked by phalloidin, following fixation with PFA in PBS or PEM buffer (a cytoskeleton-protective buffer). The latter results in more uniform staining. Scale bars: 10 μm, and 1 μm for close-ups. Bottom: GSD images of the actin-binding protein WAVE2 after fixation with GA or PFA in PEM. The number of localized particles per unit area is higher for PFA. Scale bar: 1 μm. Adapted from (Leyton-Puig et al., 2016).
(B) STED images of the ER chaperone calreticulin following fixation with 4% PFA or 3% glyoxal. As shown in the plot, more organelle-like structures are visible with glyoxal fixation. Scale bar: 1 μm. Adapted from (Richter et al., 2018).
(C) STORM images of CD4 receptors following fixation with 4% PFA in PEM buffer at 4°C, 23°C, or 37°C (left to right). Plot: CD4 cluster density is highest for fixation at 4°C, demonstrating that suboptimal fixation can induce artificial clustering. Scale bar: 5 μm. Adapted from (Pereira et al., 2019).
(D) STORM images of microtubules following fixation with 4% PFA, −20°C methanol, or 3% GA after pre-extraction using 0.3% Triton X-100 (left to right). White arrows indicate a discontinuity in the filaments. The blue arrow indicates abnormal curvature of the filaments. Scale bar: 1 μm. Adapted from (Whelan and Bell, 2015).
(E) STORM images of mitochondria marked by TOM20. Top: fixation with 4% GA. Some clustering is observed as well as shrinkage of the overall structure. Bottom: fixation with 3% PFA and 0.5% GA. Little clustering is observed, and the staining distribution appears homogeneous. Scale bar: 1 μm. Adapted from (Whelan and Bell, 2015).
(F) Airyscan images of the centromere marker CENP-A following fixation with 3% glyoxal, 3.5% PFA, or 3.5% PFA with 0.1% Triton X-100 (left to right). Glyoxal fixation results in a strong non-specific cytoplasmic staining, which is also visible (to a lesser extent) with PFA fixation alone. Fixation with PFA/Tx results in the highest staining specificity. Scale bar: 10 μm. Adapted from (Celikkan et al., 2020), with permission from Springer Nature.
As a possible alternative to chemical fixation, samples can also be vitrified by rapid-freezing, which acts to fix the molecules in place, while preventing the formation of ice crystals that might damage the cells. This technique is routinely used for high-resolution imaging methods such as electron and X-ray microscopy because it preserves cellular structures in their native states (Vénien-Bryan et al., 2017). Recently, a number of groups have implemented specialized low-temperature cryo-stages and cryo-immersion objectives (Faoro et al., 2018), which enable fluorescence imaging of vitrified samples (Hoffman et al., 2020; Tuijtel et al., 2019). One caveat of this technique, however, is that standard immunolabeling, which necessitates permeabilization of the cell membrane, is no longer a possibility. That being said, intracellular targets can be visualized with genetically encoded fluorescent proteins (see Challenge 2, below), provided that they are both sufficiently small and are reliable at cryogenic temperatures (Johnson et al., 2015; Tuijtel et al., 2019). The success of these preliminary experiments, as well as the ongoing development of cryo-compatible optics and probes, make cryo-SRM an exciting possibility for the future.
Labeling and mounting
For subsequent labeling of aldehyde-fixed samples, membrane permeabilization for access to intracellular targets is frequently achieved with detergents such as Triton X-100, Tween 20, digitonin, and saponin. The type and concentration of detergent should be optimized, because harsher treatments can lead to a loss of membrane-bound proteins. When available, small probes such as nanobodies can be used without the need for the permeabilization step, because the fixation process itself creates small ruptures in the membrane. When performing immunostainings for SRM, it is often beneficial to use a higher-than-normal concentration of antibody and longer incubation times to achieve a sufficient labeling density. Nevertheless, titrations of antibody amounts should be performed to avoid artifacts due to an overlabeling of the target (Lau et al., 2012; Whelan and Bell, 2015). Because a higher antibody concentration can potentially increase the amount of unspecific staining, it is crucial to incubate the cells prior to the staining in a “blocking” buffer, containing serum from the source species of the secondary antibody or proteins such as bovine serum albumin (BSA). Beyond these general recommendations, specific criteria such as the temperature and length of incubation, as well as the precise concentrations of detergents and blocking agents, should be optimized for the desired target and technique. An additional consideration for SRM sample preparation is the uniformity of the light path between the sample and the objective lens, to ensure that a maximal amount of light reaches the objective lens. For fixed samples, it is necessary to use a mounting medium that has a refractive index similar to that of the glass and the objective and contains an effective anti-fade reagent. A thorough comparison of mounting media for SRM can be found in Birk (2017).
It is also strongly advised to use high-performance glass coverslips with a low variance in thickness and to mount specimens as close as possible to the coverslip.
Challenge 2: super-resolution requires super probes
Choosing a labeling strategy can be particularly challenging for quantitative applications. It is important to bear in mind that the measured signals do not arise from the target molecules themselves but from fluorophores that are positioned a full label's length away. If the labels are larger than the resolution of the technique, any assertion of molecule locations cannot be entirely accurate. When the labeled molecules are to be counted, the number of fluorophores labeling a target molecule should be known, as well as the properties of the fluorescence emission. The researcher should therefore have a good working knowledge of the chosen labeling strategy and perform careful calibrations to account for its inherent variables (Table 1).
Table 1
Comparison of fluorescent proteins and affinity probes for quantitative SRM

Columns: (1) Fluorescent proteins; (2) Protein/peptide tags; (3) Affinity probes + organic dyes

Selection of labels suitable for SRM: (1) limited selection; (2) large selection; (3) large selection
Specificity: (1) high (genetically encoded); (2) variable [a]; (3) variable [b]
Binding affinity: (1) covalent; (2) covalent; (3) variable
Labeling efficiency: (1) all expressed proteins are labeled (exogenous for transient expression, endogenous for genome editing with CRISPR/Cas); (2) variable [c]; (3) variable, higher for smaller probes [c]
Label:target stoichiometry: (1) controlled, 1:1 [d]; (2) controlled, 1:1 [d, e]; (3) difficult to control
Typical photophysical properties, brightness: (1) dim; (2) bright; (3) bright
Typical photophysical properties, photostability: (1) low; (2) high; (3) high
Overcounting, Σ(labels) > Σ(molecules): (1) multiple blinking events [f]; (2) multiple blinking events [f]; (3) multiple blinking events [f], and overlabeling when using secondary and/or polyclonal antibodies
Undercounting, Σ(labels) < Σ(molecules): (1) co-presence of unlabeled endogenous protein [g], misfolding [h], failure to mature [h]; (2) co-presence of unlabeled endogenous protein [g], incomplete labeling reaction [c]; (3) incomplete labeling due to low affinity or low accessibility to epitopes [c]
Primary advantage: (1) suitable for quantification; (2) suitable for quantification, wide choice of dyes; (3) flexible labeling

Table footnotes:
[a] Fluorescent substrates that are quenched in their non-binding state alleviate signals from non-specific deposition of substrates (Hori and Kikuchi, 2013).
[b] Selection of antibodies that have been verified not to cross-react with other species.
[c] Novel calibration standards (Cella Zanacchi et al., 2019; Thevathasan et al., 2019).
[d] Does not account for endogenous unlabeled proteins.
[e] Does not account for incomplete labeling of protein tags.
[f] Calibration experiments to model fluorophore blinking kinetics (Golfetto et al., 2018).
[g] Endogenous protein amounts can be assessed with biochemical methods.
[h] Use reported values or measure (Dunsing et al., 2018; Köker et al., 2018).
The direct option—genetically encoded labels
Cellular targets can be visualized directly by fusing them covalently to fluorescent proteins (FPs) through targeted genetic modifications. Aside from the obvious advantage of enabling live-cell imaging, genetic tagging with FPs is an ideal approach for quantitative applications (specifically molecule counting) because the invariable label:target stoichiometry of 1:1 means that there is no risk of molecules being “overcounted.” That being said, several considerations should be kept in mind when using FPs for quantitative measurements. First, although most FPs are not particularly large (∼25 kDa), it should be verified that their addition does not change the functionality and localization of the target proteins or affect the stoichiometry of complex formation (especially in relation to molecule counting) (Weill et al., 2019). An additional consideration is that the imaged regions are likely to include endogenous, unlabeled proteins that will not be accounted for in fluorescence measurements. Biochemical techniques such as quantitative immunoblotting can be used to determine the ratio between the labeled and unlabeled proteins and thereby correct the quantifications derived from the FPs alone (Wood, 1983; Wu, 2005). Alternatively, the need for exogenous protein expression can be eliminated altogether by generating knock-in fluorescent fusion proteins that label the entire endogenous population (Ratz et al., 2015). Lastly, any quantification should also take into account the “skipped” emissions of FPs due to misfolding or failure to mature, in order to avoid undercounting the number of molecules. Several approaches exist for measuring the maturation and folding efficiencies of FPs, and these values have already been reported for a number of proteins under various experimental conditions (Dunsing et al., 2018; Köker et al., 2018).

Rather than expressing an FP, target proteins can be genetically conjugated to a scaffold that binds free fluorescent substrates in solution.
The most commonly used “self-labeling” proteins are the SNAP (Keppler et al., 2003) and Halo (Los et al., 2008) tags, which can be covalently reacted with benzylguanine (BG) and chloroalkane (CA) substrates, respectively. The major advantage is that these substrates are conjugated to organic dyes, which are often brighter, more photostable, and significantly more diverse than FPs. Nevertheless, the tags themselves are often as large as directly encoded FPs (∼20 kDa and ∼30 kDa for SNAP and Halo, respectively) and can similarly influence protein function or localization. To mitigate this, smaller peptide tags have been developed, such as a short tetracysteine motif that binds to the fluorescein arsenical hairpin binder (FlAsH) (Adams et al., 2002). Because fluorescein can undergo photoswitching, FlAsH tags can be readily applied to STORM or PALM (“FlAsH-PALM”) (Lelek et al., 2012). Another approach is the incorporation of non-canonical amino acids (ncAAs) bearing small functional groups, such as strained alkenes, into a chosen position in the target protein, using amber codon suppression (Brown et al., 2018). The ncAAs can subsequently be bound to SRM-compatible fluorophores containing complementary functional groups, such as tetrazines (Beliu et al., 2019; Kozma et al., 2017). A major pitfall of this technique is the difficulty of targeting multiple proteins, because only the amber codon can be efficiently employed in mammalian cells. However, this problem can be circumvented by expressing the target proteins in separate sets of cells and subsequently fusing these cells, allowing the proteins to intermix (Saal et al., 2018). Despite the appeal of this approach, the process of incorporating ncAAs can be technically challenging and depends on a multitude of factors, including the properties of the target protein and the location of the mutation site.
Therefore, further improvements in the efficiency of ncAA incorporation are needed in order to make this technique more widely applicable (Chemla et al., 2018). An important point to note when using self-labeling tags is that the addition of a second tagging step introduces new variables to be accounted for: an incomplete reaction may result in molecule undercounting, whereas an unspecific deposition of the substrate may result in overcounting (Bosch et al., 2014). The latter is particularly problematic for live imaging applications, because unbound substrate cannot be readily washed out. This can be mitigated to a certain extent by using fluorescent substrates that are significantly dimmer or quenched in their non-binding state, reducing the amount of unspecific background emissions (Carlson et al., 2013; Hori and Kikuchi, 2013). For peptide tags such as FlAsH, a high background would rather indicate an unspecific binding of the substrate, because by the nature of the reaction, non-bound fluorophores are significantly dimmer. If this becomes an issue, alternative binding motifs may perform better (Martin et al., 2005). In any case, the degree of labeling should be determined under the specific experimental conditions and accounted for during analyses. Recent efforts have been made to develop easily applicable calibration methods for quantifying fluorescent substrate labeling efficiencies. For example, Zanacchi and colleagues generated a DNA origami structure carrying a known number of binding sites, which can be implemented across all imaging and experimental conditions (Cella Zanacchi et al., 2019). Alternatively, existing biological structures with known stereotypical arrangements can be exploited to yield accurate reference data (e.g., nuclear pores (Thevathasan et al., 2019)).

Overall, FPs and protein tags are favorable for quantitative experiments because the level of FP expression and protein labeling can largely be controlled.
Nevertheless, this approach is severely limited by the need for laborious genetic modification, and many researchers resort to more flexible labeling techniques.
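The undercounting factors described above (the unlabeled endogenous pool, FP misfolding, and failure to mature) amount to a simple rescaling of the detected counts. The following is a minimal sketch of that correction; the function name and all numerical values are illustrative assumptions, not from the cited studies:

```python
# Sketch: correcting FP-based molecule counts for undercounting.
# Assumed inputs (hypothetical values): the fraction of FPs that fold and
# mature into a fluorescent state (e.g. from Dunsing et al., 2018), and the
# labeled fraction of the protein pool (e.g. from quantitative immunoblotting).

def corrected_molecule_count(n_detected, p_mature, labeled_fraction=1.0):
    """Estimate the true number of target molecules from detected FP labels.

    n_detected       -- molecules counted from FP signals
    p_mature         -- probability that an FP folds and matures (0-1]
    labeled_fraction -- fraction of the protein pool carrying the FP tag
                        (1.0 for a complete knock-in)
    """
    if not (0 < p_mature <= 1 and 0 < labeled_fraction <= 1):
        raise ValueError("fractions must lie in (0, 1]")
    # A molecule is detected only if it is tagged AND its FP matured,
    # so the detected count underestimates the total by this product.
    return n_detected / (p_mature * labeled_fraction)

# Example: 600 detected labels, 80% maturation, 75% of the pool tagged
print(corrected_molecule_count(600, 0.8, 0.75))  # 1000.0
```

For a knock-in line (the Ratz et al., 2015 scenario), `labeled_fraction` can be left at 1.0 and only the maturation term remains.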
The flexible option—affinity probes
A flexible approach for fluorescent labeling relies on the use of labels with a high binding affinity for the target molecule. The most commonly used labels are antibodies derived by immunizing animals against a structural motif in the target molecule (Stadler et al., 2013). However, additional labels can be derived from synthetic or naturally occurring peptides with a high affinity for specific proteins, lipids, or nucleic acids. When applying affinity probes to SRM, it is especially important to use validated labels that have been proven not to cross-react with similar molecules, in order to minimize unspecific background signals. Another point to consider is that affinity probes can often be very large, thus limiting the effective resolution in SRM. For example, the most commonly used immunoglobulin (IgG)-type antibodies are typically ∼150 kDa, or ∼12 nm in length. Because most immunolabeling experiments make use of additional “secondary” antibodies to bring the fluorophores near the labeling antibodies, each fluorophore is ultimately distanced ∼20 nm away from its target molecule, which is already on the order of the attainable resolution. Furthermore, the use of large labels can also cause steric hindrances that diminish the labeling density, potentially resulting in an undercounting of molecules. As a result, SRM experiments often utilize much smaller probes such as antibody fragments (∼50 kDa/∼9 nm), small recombinant fragments (∼30 kDa/∼6 nm), aptamers, and nanobodies (∼15 kDa/∼3 nm) (Sahl et al., 2017). Nanobodies are particularly suitable for quantitative measurements because they possess a single antigen-binding site and can be labeled with a known number of fluorophores, allowing a large amount of control over the labeling stoichiometry (Pleiner et al., 2018). They have already been thoroughly demonstrated to fulfill the needs of most super-resolution applications (Ries et al., 2012).
Unfortunately, nanobodies are as yet available for only a limited number of cellular targets. A good alternative is the use of small non-antibody probes such as RNA aptamers or affimers (peptide aptamers). Aptamers are produced rapidly in vitro in identical copies with precise fluorophore stoichiometry, making them ideal for quantitative applications (Ta et al., 2015). However, even with a known stoichiometry for the labeled molecules, there is still an inevitable pool of molecules that remains unlabeled. Moreover, affinity probes do not bind their targets covalently and can have widely varying labeling efficiencies, stemming from different binding affinities and avidities, as well as steric hindrance in the case of larger probes such as IgGs. As a result, the fraction of unlabeled molecules in an experiment will not always be negligible. In practice, it is not straightforward to estimate the labeling efficiency of a probe, and although values such as the binding affinity and avidity can be determined with various in vitro assays, the results are unlikely to reflect the true values in a biological sample. Due to this limitation, many researchers resort to using FPs for quantitative applications. However, efforts have been made toward the development of dedicated calibration standards that can be used in conjunction with SRM (see earlier discussion). With the gradual incorporation of such invariant calibration standards, affinity probes are likely to become an increasingly common tool for quantitative measurements.
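The calibration-standard strategy mentioned above can be expressed numerically: a reference structure with a known copy number (such as a nucleoporin present in a fixed number of copies per nuclear pore, in the spirit of Thevathasan et al., 2019) yields an effective labeling efficiency, which then rescales counts in the sample of interest. The copy number and all counts below are illustrative assumptions:

```python
# Sketch: calibrating effective labeling efficiency against a reference
# structure with a known copy number, then correcting sample counts.
# All numbers here are hypothetical.

def labeling_efficiency(detected_per_structure, copies_per_structure=32):
    """Mean effective labeling efficiency from calibration images.

    detected_per_structure -- labels detected per reference structure
    copies_per_structure   -- known copy number of the target per structure
    """
    mean_detected = sum(detected_per_structure) / len(detected_per_structure)
    return mean_detected / copies_per_structure

def corrected_count(n_detected, efficiency):
    """Scale a detected molecule count by the calibrated efficiency."""
    return n_detected / efficiency

# Calibration: labels detected per nuclear pore in a reference sample.
calib = [19, 22, 18, 21, 20]       # hypothetical counts, mean = 20
p = labeling_efficiency(calib)      # 20/32 = 0.625
print(corrected_count(500, p))      # 800.0
```

The same two-step logic applies to any invariant standard, including the DNA origami platforms discussed earlier.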
Special considerations for SMLM: fluorophore blinking kinetics
SMLM relies on the stochastic blinking of photoswitchable/activatable proteins (PALM) or organic dyes (STORM). For quantification purposes, it is preferable to use photoactivatable fluorophores whose state transition is irreversible, meaning that each emitter is counted just a single time. Unfortunately, irreversible photoactivation is not always successful, and the fluorophores can often exhibit reversible blinking (Annibale et al., 2010), which may result in an overcounting of molecules. To mitigate this problem, it is preferable to choose fluorophores with a low duty cycle, meaning they spend only a small fraction of their time in the on state (ideally <1%), thus reducing the likelihood of multiple emissions. Nevertheless, a variety of statistical analyses and algorithms modeling blinking kinetics can be used to construct blinking-corrected images (see Challenge 4, below). Alternatively, the blinking can be decoupled from the photophysical properties of the fluorophore altogether by resorting to a quantitative implementation of DNA-PAINT (qPAINT) as an imaging approach (Jungmann et al., 2016). Here, the apparent blinking of the fluorophores is a function of the probe/target binding kinetics rather than a stochastic photoswitching of fluorophores. By measuring the time between sequential blinks at a given location, the number of molecules can be deduced using prior knowledge of the binding kinetics, without any need to resolve the spatial location of the molecules.

An additional requirement for SMLM is a high contrast ratio, or a large difference in brightness between the on and off states of the fluorophore (ideally ∼100 times brighter), in order to ensure an accurate localization among competing background emissions (Li and Vaughan, 2018). This is particularly important when the labeling density is high and the chances of nearby fluorophores simultaneously emitting are increased (van de Linde et al., 2010).
Suffice it to say, there are several considerations for selecting fluorophores for SMLM experiments, and these become particularly pertinent for quantitative measurements. Because a comparison of suitable fluorophores is beyond the scope here, we encourage the reader to refer to more thorough reviews (Li and Vaughan, 2018; Turkowyd et al., 2016). Furthermore, it should be noted that even when adequate fluorophores are selected, their behavior depends on the imaging conditions. A number of studies have attempted to characterize fluorophore blinking kinetics under different imaging conditions, but these have not been applied systematically to cellular models. It is therefore strongly advised to perform calibration experiments, for example, by carrying out a titration series of the fluorescent labels (Baumgart et al., 2016; Ehmann et al., 2014). However, such calibrations require repeated imaging sessions and can therefore be extremely time-consuming. Recently, Golfetto and colleagues developed an assay to precisely determine the photophysical properties of a fluorophore for any given optical setup, without the need for any prior knowledge of the fluorophore blinking kinetics (Golfetto et al., 2018). Such systematic solutions for characterizing fluorophore photoblinking allow for highly accurate molecular quantifications with SMLM.
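The qPAINT approach discussed in this section reduces molecule counting to a simple kinetic calculation: a spot containing N binding sites shows blinks at a rate of N · k_on · c, so the mean dark time between blinks is 1/(N · k_on · c). A minimal sketch, with all kinetic values hypothetical:

```python
# Sketch of the qPAINT counting principle (Jungmann et al., 2016):
# the number of binding sites follows from the mean dark time between
# blinks, given the imager on-rate and concentration. Values are
# hypothetical examples, not measured constants.

def qpaint_binding_sites(mean_dark_time_s, k_on_per_M_s, imager_conc_M):
    """Number of binding sites at a spot from qPAINT kinetics.

    mean_dark_time_s -- mean time between blinks at the spot (s)
    k_on_per_M_s     -- imager/docking-strand on-rate (1/(M*s))
    imager_conc_M    -- free imager strand concentration (M)
    """
    # Blinks per site occur at rate k_on * c; N sites shorten the
    # mean dark time N-fold, so N = 1 / (tau_dark * k_on * c).
    return 1.0 / (mean_dark_time_s * k_on_per_M_s * imager_conc_M)

# Hypothetical values: k_on = 1e6 /(M*s), 5 nM imager, 50 s mean dark time
print(qpaint_binding_sites(50.0, 1e6, 5e-9))  # 4.0 sites
```

Note that this estimate uses only the timing of the blinks, which is why qPAINT needs no knowledge of the fluorophore's photophysics and no spatial resolution of the individual molecules.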
Challenge 3: multiplexing
Choosing the right combination of fluorophores
Multicolor SRM opens the door to unprecedented investigations of molecular interactions and organizations. However, these experiments pose stringent limitations on the selection of fluorescent labels. In practical terms, it is necessary to ensure that (1) the emission/excitation spectra of the fluorophores are sufficiently distinct, (2) the photophysical requirements for SRM are met (e.g. bright/photoswitchable/photostable), and (3) the dyes fluoresce optimally under similar imaging conditions. The latter is particularly pertinent to STORM because different organic dyes exhibit optimal blinking in different imaging buffers (determined in large part by the presence or absence of oxygen in solution). Many successful dye pairings have already been reported in the literature for STED, STORM, and additional SRM techniques utilizing organic fluorophores (e.g. (Dempsey et al., 2011; Kolmakov et al., 2014; Yang and Specht, 2020)). Generally, dyes such as rhodamine (most Alexa Fluor dyes) and cyanine derivatives are usually favored because they have narrow emission spectra and are particularly bright (Dempsey et al., 2011). Although they often have very different requirements for imaging buffers, palpable differences in their behavior can be mitigated by using novel buffers that are capable of promoting blinking for a wide range of fluorescent dyes (Nahidiazar et al., 2016). As an alternative, the blinking efficiency of each of the selected dyes can be measured under the specific experimental conditions and accounted for in post-hoc analyses (see Challenge 4, below). For methods such as PALM, which utilize FPs, finding successful fluorophore combinations is an even greater challenge because FPs are dimmer and span a smaller spectral range as compared with organic dyes. Nevertheless, successful pairings have been described (Kozma and Kele, 2019), and new FPs have been introduced to facilitate multicolor imaging (Gunewardene et al., 2011).
That being said, it is nonetheless important to conduct appropriate controls and correct for bleed-through/cross-talk effects in multicolor experiments (Bolte and Cordelieres, 2006), for which various approaches have been described (Kim et al., 2013; Maddipatla and Tankam, 2020). When an adequate combination of fluorophores cannot be found, as is often the case when imaging three or more colors, different techniques can be used to separate the overlapping emissions. One common approach is spectral imaging, which relies on measuring the full emission spectrum, rather than a narrow band of wavelengths, at every pixel. Different fluorescent dyes have unique patterns of emission at different wavelengths (known as spectral “signatures”), making it possible to algorithmically decipher which fluorophore(s) is present in a given pixel. The most commonly used algorithm is linear unmixing, a method originally developed for analyzing multiband satellite images (Zimmermann, 2005). More recent approaches rely on machine learning-based algorithms to differentiate overlapping spectra, even when the spectral signatures of the dyes are not known in advance (McRae et al., 2019). Most major SRM techniques have successfully applied spectral imaging to two or more colors simultaneously (MINFLUX (Gwosch et al., 2020); STED (Winter et al., 2017); PALM (Dong et al., 2016); STORM (Zhang et al., 2019); SIM (Liao et al., 2019)). Recently, a purely computational approach applied a deep-learning-based algorithm to differentiate multiple colors imaged simultaneously, without the need for any spectral filters or prior knowledge about the emitters (Hershko et al., 2019). As an alternative to complex experiment planning or spectral separation algorithms, expansion microscopy offers an attractive solution for fixed samples (Cho et al., 2018). In this case, the standard limitations of conventional microscopy apply, affording a significantly longer list of possible color combinations.
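Linear unmixing, as described above, treats each pixel's measured spectrum as a linear combination of known dye signatures and solves for the per-dye abundances. A minimal least-squares sketch with two hypothetical dye signatures measured in four spectral bins:

```python
import numpy as np

# Hypothetical reference spectra ("signatures") of two dyes, measured
# in four spectral detection bins; each column is one dye, normalized
# so that each column sums to 1.
signatures = np.array([
    [0.7, 0.1],
    [0.2, 0.2],
    [0.1, 0.4],
    [0.0, 0.3],
])

def unmix(pixel_spectrum, signatures):
    """Least-squares estimate of per-dye abundances in one pixel."""
    coeffs, *_ = np.linalg.lstsq(signatures, pixel_spectrum, rcond=None)
    return np.clip(coeffs, 0, None)  # abundances cannot be negative

# Synthetic pixel containing 100 units of dye 1 and 50 of dye 2:
pixel = signatures @ np.array([100.0, 50.0])
print(unmix(pixel, signatures))  # ~[100. 50.]
```

In a real experiment the signatures would be measured from single-dye control samples, and noise would make the recovery approximate rather than exact; constrained solvers (non-negative least squares) are then preferable to simple clipping.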
Or (ideally): multiplexing with a single fluorophore
As an alternative to multicolor imaging, it is possible to reuse the same probe repeatedly for different targets within the same sample. This not only circumvents the need to find adequate color combinations but also eliminates the inaccuracies stemming from photophysical differences between dyes, making it particularly favorable for quantitative applications. Because the number of possible targets is theoretically limitless, these techniques are limited only by the patience of the biologist. Single-probe multiplexing has been implemented with several SRM methods, for example, with STORM in combination with irreversible photobleaching of the fluorophores (Lin et al., 2018; Valley et al., 2015) or elution of antibodies (Yi et al., 2016). However, these techniques can be very lengthy, because they require not only a sequential removal of the fluorophores but also successive immunostaining steps. An improved technique is a multiplexed implementation of DNA-PAINT, where the targets are pre-labeled with antibodies linked to different DNA docking strands, and the complementary imager strands are sequentially washed into and out of the imaging solution (Exchange-PAINT) (Werbin et al., 2017). As an alternative to successive washing steps, “quencher” strands that prevent the binding of the imager strands to their targets can be added to the solution without any need for repeated fluid exchanges (Lutz et al., 2018) (Figure 4). Because several hundred non-interacting DNA labeling sequences can be utilized for imager/docking strand binding, the number of targets is limited only by the length of the imaging steps. However, recent improvements in the design of the DNA sequences (Strauss and Jungmann, 2020) and imaging buffer composition (Civitci et al., 2020) have increased the acquisition speed up to 100-fold.
Furthermore, improvements on existing DNA-conjugated probes such as the replacement of antibodies with nanobodies (Sograte-Idrissi et al., 2019) and the use of signal-amplifying DNA concatemers capable of binding multiple imaging strands (Saka et al., 2019) have significantly improved the localization precision.
Figure 4
Single fluorophore multiplexing with Exchange-PAINT. Multiplexing with a single fluorophore is the ideal choice for quantitative applications
(A–C) Classic Exchange-PAINT implementation, which involves sequentially washing imager strands into and out of the imaging solution, is demonstrated in fixed BT20 cells. (A) Scheme of the experiment. Each target is labeled with an antibody conjugated to a unique docking strand (a, b, c, d, e). Five orthogonal imaging strands labeled with the fluorophore Atto655 (a∗, b∗, c∗, d∗, e∗) are sequentially washed in, imaged, and washed out. (B) Exchange-PAINT images from each sequential imaging step. Scale bar: 5 μm. (C) Merged image for all targets and enlargements of boxed regions (inset). Scale bars: 5 μm for the full image and 1 μm for the insets. Adapted from (Werbin et al., 2017), CC-BY-4.0.
(D and E) Quencher-Exchange PAINT implementation, which involves sequentially imaging and then quenching the imager strands. (D) Middle: the blinking event rate is proportional to the concentration of free imager strands. Top: in conventional Exchange-PAINT, the concentration is tuned by exchanging the imaging solution. Bottom: in quencher-exchange, the concentration is tuned by adding competitive complementary strands (“quenchers”). (E) Top left: a full Exchange-PAINT cycle using the P1+ imager and quencher for β-tubulin and the P2+ imager for TOM20 in fixed COS-7 cells. Top right: widefield and Exchange-PAINT image of TOM20 (red) and β-tubulin (green). Scale bar: 2 μm. Bottom: scheme of the experiment. These steps can theoretically be repeated with any number of imager–quencher pairs. Adapted from (Lutz et al., 2018), CC-BY-4.0.
Preventing and correcting registration errors between channels
A crucial requirement for multiplexing is the precise alignment of images from different channels/probes. Samples can drift as a result of small instabilities in the microscope stage or experimental manipulations such as buffer changes (e.g. in the case of exchange-PAINT), which can lead to a misalignment of sequential images. Furthermore, the use of multiple fluorescence filters and cameras can lead to disparate aberrations for the different color channels, further increasing the difficulty of image alignment. Besides stabilizing the microscope setup and maintaining a constant ambient temperature, it is often necessary to correct for drift during and/or after image acquisition. Several approaches have been employed for this, the most straightforward being to simply secure the sample in place more effectively, for example, by mounting it in a sturdy support (Holden et al., 2014). Nevertheless, additional techniques are frequently used to correct for misalignments. One common approach is the use of fiducial markers such as fluorescent beads, quantum dots, gold nanoparticles, or fluorescent nanodiamonds, which are tracked in all channels throughout the recording (Betzig et al., 2006; Georgieva et al., 2016; Kukulski et al., 2012; Yi et al., 2016). For example, the use of fluorescent nanodiamonds as markers achieved an alignment error of only 2 nm in STORM imaging (Yi et al., 2016). An important consideration when using fiducials is that their amount should be optimized for a given sample: too few, and the efficiency of the drift correction will be reduced; too many, and the labeled structures of interest may be masked. Ideally, one should aim to record ∼2–3 beads per field of view (FOV) (Yang and Specht, 2020). As an alternative to fiducials, it is possible to use the imaged structures themselves as information for drift compensation. This is most applicable to images of well-known, repeating structures such as cytoskeletal components.
Image-based drift correction can be implemented with approaches such as image cross-correlation (Mlodzianoski et al., 2011) or Bayesian inference (Elmokadem and Yu, 2015). Alternatively, some technologies have been developed that eliminate the need for drift correction altogether. For example, by using lifetime-based separation rather than spectral separation, Bückers and colleagues imaged two fluorescent dyes simultaneously with a single STED beam, rendering the measurement insensitive to drift (Bückers et al., 2011). All things considered, the approach for drift correction depends on the nature of the experiment and the data. For large-scale experiments involving significant stage movement, it would be practical to both physically secure the sample in place and, if possible, to perform real-time drift correction (e.g. (Grover et al., 2015)). Furthermore, in all circumstances, it is strongly advised to perform a posteriori drift correction for increased precision, especially when examining co-localizations, either with content-based correction when the images are highly structured or with fiducial markers.
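To make the fiducial-based approach concrete, the sketch below (array layout and function names are our own, not from any cited implementation) estimates a per-frame drift trajectory from tracked bead centroids and subtracts it from the localization coordinates:

```python
import numpy as np

def drift_trajectory(fiducials):
    """Per-frame drift estimated from fiducial markers.

    fiducials: array of shape (n_frames, n_beads, 2) holding the (x, y)
    centroid of each tracked bead in every frame.
    Returns an (n_frames, 2) array of displacements relative to frame 0.
    """
    # Averaging over beads suppresses the localization noise of any
    # single bead -- one reason to track a few beads per FOV.
    return (fiducials - fiducials[0]).mean(axis=1)

def correct_localizations(locs, frames, drift):
    """Subtract the estimated drift from a set of localizations.

    locs: (n_locs, 2) array of (x, y) coordinates.
    frames: (n_locs,) array giving the frame index of each localization.
    """
    return locs - drift[frames]
```

The same two functions apply unchanged to cross-channel registration if the beads are visible in all channels, since the per-channel trajectories can then be brought to a common reference.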
Challenge 4: mining SRM data
The new information afforded by SRM calls for novel analysis approaches as well as careful consideration of experimental factors. Most analyses require the spatial coordinates of the emitters as input, rather than a rendered super-resolution image. As the name suggests, molecule localizations are the primary output of SMLM measurements. These are typically computed by Gaussian fitting (Deschout et al., 2014), but additional approaches such as wavelet segmentation (Izeddin et al., 2012) or deep-learning-based localization (Nehme et al., 2020) can be used for improved accuracy and/or computation speed. For non-SMLM techniques, the spatial localizations must first be deduced from the super-resolution image with segmentation algorithms. A detailed description of localization and segmentation approaches is beyond the scope here, but a thorough comparison of various localization approaches can be found in the following review (Sage et al., 2019).
Counting molecules: avoiding over- and undercounting
Although molecule counting in SMLM experiments seems conceptually straightforward (at any given position, Σmolecules = Σlocalizations), several practical matters complicate this equivalence. First, a fluorophore may blink multiple times throughout a recording, leading to an overestimation of the number of molecules. Second, incomplete labeling or failure of a fluorophore to blink will lead to an underestimation of the number of molecules. Therefore, knowledge of the fluorophore blinking kinetics under the specific experimental conditions, as well as prior knowledge of the labeling efficiency, is needed to avoid over- or undercounting of the labeled molecules. To account for a possible overcounting of molecules, a calibration experiment should be performed to determine how many times a fluorophore will typically blink within a given time frame (tcutoff) (Figure 5A). This should be performed on a sparsely distributed control sample under the same imaging conditions as the experimental sample. During the experiment, all emissions at a given location occurring within a time frame t < tcutoff will be deemed a single localization (e.g. (Annibale et al., 2011; Coltharp et al., 2012)). Alternatively, this cutoff can also be drawn from pre-existing models of the photoblinking of different fluorophores. A problem arises, however, when the sample is particularly dense, and the blinking of one molecule overlaps in time with the new photoactivation of another. In such cases, tcutoff can also be adjusted to be a function of the protein density, rather than an absolute value (Lee et al., 2012). Statistical analyses can also be used to correct for overcounting, for example, by calculating a pair correlation function (PCF), which gives the probability of finding a certain localization at a given distance from another localization. First, the PCF is calculated for multiple appearances of the same probe, which can be computed from a control sample in which the probe is randomly distributed.
When blinking occurs, it will lead to higher correlations at small distances. This can then be used to correct the PCF of the correlations of the real protein distribution in the experimental data (Veatch et al., 2012).
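The tcutoff merging described above can be sketched as a simple greedy grouping: any localization that appears near an already-accepted molecule within the temporal cutoff extends that molecule's blink train rather than being counted anew. Parameter names and the O(n·m) loop are illustrative, not from a cited implementation:

```python
import numpy as np

def merge_blinks(locs, frames, r_max, t_cutoff):
    """Estimate the molecule count after merging repeated blinks.

    Localizations closer than r_max in space whose frame gap is at most
    t_cutoff are treated as one molecule. locs: (n, 2); frames: (n,).
    """
    order = np.argsort(frames)
    locs, frames = locs[order], frames[order]
    # Each accepted molecule: [x, y, frame of its most recent blink].
    molecules = []
    for (x, y), t in zip(locs, frames):
        merged = False
        for m in molecules:
            if np.hypot(x - m[0], y - m[1]) <= r_max and t - m[2] <= t_cutoff:
                m[2] = t          # extend this molecule's blink train
                merged = True
                break
        if not merged:
            molecules.append([x, y, t])
    return len(molecules)
```

Note that making t_cutoff density-dependent, as suggested above, would simply mean passing a different cutoff per region instead of a single global value.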
Figure 5
Approaches for molecule counting
(A) In standard SMLM techniques (PALM/STORM), the number of blinking events indicates the number of molecules in a diffraction-limited area. To avoid under-/overcounting, the value should be corrected by the stoichiometry of the labeling and multiple blinking events (e.g., by merging events close in time).
(B) In qPAINT, the number of molecules can be determined from the time in between sequential blinking events, together with knowledge of the docking/imager strand binding kinetics.
(C) In STED, the number of molecules can be calculated by photon statistics. Because a molecule can only emit one photon at a given time, detecting multiple photons simultaneously (coincidence) indicates that multiple molecules are located in the diffraction-limited area. The number of molecules is determined with a confocal scan, and the spatial locations are assigned using STED imaging.
To account for molecule undercounting, it is important to determine the label:target stoichiometry when this value is not known (see Challenge 2, above). Furthermore, undercounting can be caused by failed photoactivations of the fluorophores, which can be accounted for by measuring the percentage of successful activation on calibration standards with a known number of fluorophores (e.g. (Durisic et al., 2014)). Lastly, because the number of activatable fluorophores within a diffraction-limited area decreases throughout the measurement, it is necessary to adapt the laser intensity, decreasing it gradually, to avoid the simultaneous emission of two nearby fluorophores, which would incorrectly be classified as a single localization. An alternative technique that avoids the need to account for stochastic blinking behavior is qPAINT, in which the number of molecules is determined based on the predictable DNA binding kinetics of the imager and docking strands.
In this approach, the number of molecules is determined by comparing the time interval between sequential blinks with the value expected from the binding kinetics (Jungmann et al., 2016) (Figure 5B). Besides the localization of blinking events, the number of emitters can be determined by quantifying the photon statistics using sensitive detectors. Simply put, because a fluorophore can only emit a single photon during an excitation cycle, the coincident arrival of multiple photons at multiple detectors must mean they arise from different fluorophores. Performing this measurement with STED/RESOLFT microscopy and delivering short pulses of excitation can determine the number of fluorophores within a diffraction-limited area (Keller-Findeisen et al., 2020) (Figure 5C).
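A toy version of the qPAINT estimate is sketched below, assuming the standard relation that the mean dark time between binding events is τ_dark = 1/(k_on · c · N), and ignoring corrections for bright times and missed events; all parameter names are ours:

```python
def qpaint_site_count(event_frames, frame_time_s, k_on, c_imager):
    """Estimate the number of docking sites from qPAINT dark times.

    event_frames: sorted frame indices at which binding events started.
    frame_time_s: camera exposure per frame, in seconds.
    k_on: imager-docking association rate constant (per molar per second).
    c_imager: free imager-strand concentration (molar).
    Assumes dark times dominate the trace (low duty cycle).
    """
    gaps = [(b - a) * frame_time_s
            for a, b in zip(event_frames, event_frames[1:])]
    tau_dark = sum(gaps) / len(gaps)           # mean dark time
    # tau_dark = 1 / (k_on * c * N)  =>  N = 1 / (tau_dark * k_on * c)
    return 1.0 / (tau_dark * k_on * c_imager)
```

Because the estimate rests on well-characterized DNA hybridization kinetics rather than stochastic photophysics, the same calibration transfers across targets imaged with the same imager concentration.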
Spatial organization: clustering analysis
Several approaches exist for detecting molecular clusters and analyzing their spatial properties. Perhaps the simplest metric to determine whether a given distribution is clustered is the mean nearest-neighbor distance (NND) between molecules, which should be significantly lower for clustered versus random distributions. Although not very informative about the cluster properties (particularly when they are heterogeneous), NND was found to be especially accurate for discriminating clustering in electron microscopy (EM) images and should be equally applicable to SRM. A more informative approach is the use of Ripley's K statistic, which calculates the average number of molecules that exist within a given radius r from another molecule. For a random distribution, this number scales predictably with the area covered (K(r) ∝ r²), and a departure from this curve indicates clustering (Dixon, 2006). A similar statistic is the PCF (see above), a conceptual derivative of Ripley's K. Similarly, a deviation from the PCF curve expected for uniformly distributed molecules would indicate molecular clustering (Sengupta et al., 2011). As with molecule counting, these calculations should also account for overcounted localizations, which can lead to artificial clustering (see earlier discussion). Furthermore, Ripley's K and the PCF exhibit edge effects when r exceeds the boundaries of the image, leading to an underestimation of clustering. Therefore, they should not be used for radii larger than ∼1/3 of the smallest image dimension, or else edge correction should be applied (Haase, 1995). Lastly, using these functions is only recommended when the molecular clusters are expected to be homogeneous, because clusters with varying properties in a single image can lead to functions that are difficult to interpret (Kiskowski et al., 2009).
When homogeneity cannot be assumed, a different approach can be used, which first segments the image into individual clusters and then analyzes each cluster individually. For example, density-based spatial clustering of applications with noise (DBSCAN) (Khater et al., 2020) segments clusters in the data based on two user-defined parameters: a neighborhood radius and a minimum number of localizations within this radius. As with Ripley's K and the PCF, this also requires some prior knowledge about the properties of the clusters. In cases where the molecules form more complex structures that cannot be easily predicted, parameter-free approaches, such as Bayesian cluster analysis, can be used. For a more detailed explanation of this approach, as well as an extensive list of clustering analyses for SRM data, we refer the reader to the following review (Khater et al., 2020). It should be noted that although the majority of these long-standing approaches are successful at analyzing 2D clusters, they are not always sufficient for extracting meaningful information from 3D data. Recently, a deep-learning-based approach achieved unprecedented accuracy in identifying and analyzing 3D clusters in SMLM data (Khater et al., 2019), presenting a promising new direction for clustering analysis.
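For modest point counts, the NND and a naive (uncorrected) Ripley's K can be computed directly; the O(n²) sketch below is illustrative only and, lacking edge correction, should only be trusted for radii well inside the field of view, as cautioned above:

```python
import numpy as np

def mean_nnd(points):
    """Mean nearest-neighbor distance of a 2D point set (n, 2)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # ignore self-distances
    return d.min(axis=1).mean()

def ripley_k(points, r, area):
    """Naive Ripley's K at radius r, without edge correction.

    Under complete spatial randomness K(r) is approximately pi * r**2;
    values substantially above that indicate clustering.
    """
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    lam = n / area                     # mean point density
    return (d < r).sum() / (n * lam)
```

In practice one compares these statistics against values computed on randomized (CSR) surrogates of the same density, rather than reading them in isolation.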
Colocalization analysis
Standard colocalization analyses for diffraction-limited microscopy largely rely on the physical overlap between fluorescence signals (Bolte and Cordelieres, 2006; Manders et al., 1993). Although these can successfully be applied to SRM (e.g. (Bielopolski et al., 2014; Zhao et al., 2013)), it is often more instructive to assess whether two molecules can be found in the vicinity of each other, rather than truly overlapping (for resolutions approaching the molecular scale, there should be practically no overlap for any given pair of molecules). Ripley's K and PCF (see earlier discussion) can be used to analyze colocalizations as well as clustering. In this case, the value computed is the probability of finding a given molecule from one species at a certain distance from a molecule of another species (Lagache et al., 2015). In the case of colocalization analyses, these measures are not prone to artifacts caused by overcounting, because they do not correlate one type of molecule with itself. This also implies that by labeling one molecular species with two different fluorophores, it is possible to analyze whether this molecule self-clusters.
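A minimal sketch of such a vicinity-based (rather than overlap-based) colocalization measure is given below; the cutoff r_coloc is an illustrative parameter that should reflect the label size and localization precision:

```python
import numpy as np

def cross_nnd(species_a, species_b):
    """Distance from each molecule of species A to the nearest B molecule.

    species_a: (n_a, 2) array; species_b: (n_b, 2) array.
    """
    d = np.linalg.norm(species_a[:, None, :] - species_b[None, :, :], axis=-1)
    return d.min(axis=1)

def colocalized_fraction(species_a, species_b, r_coloc):
    """Fraction of A molecules with at least one B molecule within r_coloc.

    Because A is only ever compared against B, repeated blinks of the
    same A molecule do not inflate the statistic, in line with the
    overcounting robustness noted in the text.
    """
    return float((cross_nnd(species_a, species_b) <= r_coloc).mean())
```

As with the clustering statistics, the measured fraction should be judged against a randomized control (e.g. B positions shuffled within the mask of the cell) to assess significance.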
Exploratory analyses
Advances in deep learning and larger data volumes allow for more exploratory analyses of SRM images with artificial neural networks, such as the detection of subtle visual phenotypes associated with genetic or pharmacological manipulations. Such approaches are already widely applied in the field of biomedical imaging, for example in the diagnosis and prognosis of tumors, for disease prediction, and for drug discovery (Lee et al., 2017). It is therefore conceivable that the high level of information contained in SRM images could be exploited in a similar manner. Such applications are beginning to be realized with conventional fluorescence microscopy and only very recently with SRM (Kraus et al., 2017; Laine et al., 2018; Long et al., 2020; Lu et al., 2018). In one such study, SIM was combined with deep learning to image and classify a large population of viruses. By measuring the morphological features of the various classification groups, new connections could be drawn between viral structures and their mechanisms of action (Laine et al., 2018). In another study, a neural network was trained to differentiate 3D STED images of healthy versus Zika-virus-infected cells, and identified morphological changes to the ER associated with viral infection (Long et al., 2020). Although promising, a major limitation of deep learning applications is that the training of a neural network requires large datasets of high-quality images, which are not always readily available. In the future, efforts to enhance the transferability of pre-trained networks to new and/or different microscopy datasets would make such analyses more feasible.
Challenge 5: increasing throughput
Acquiring large imaging datasets is essential for averaging out the high variability of noisy biological processes, and it further enables unique analyses such as high-throughput screening (e.g. for drug profiling) (Beghin et al., 2017) or “electron-microscopy-like” structural reconstructions (Sigal et al., 2015). However, the longer acquisition times and larger data volumes inherent to SRM experiments make it particularly challenging to scale up such experiments.
Acquire larger areas faster
In laser-scanning techniques such as STED and RESOLFT, there is an inherent trade-off between the size of the FOV and the overall acquisition time. Nevertheless, recent improvements in STED optics have, to a certain extent, circumvented this problem. For example, spatial light modulators (SLMs) have been applied to correct chromatic aberrations that would otherwise lead to a misalignment of the excitation and depletion beams at the periphery of the FOV and limit its size, making it possible to acquire a ∼100 × 100 μm² area in only 30 min (Görlitz et al., 2018; Gould et al., 2013; Lenz et al., 2014). Further optical improvements accelerate acquisition by increasing the speed at which the laser beam is scanned over the sample. In conventional laser-scanning microscopy, the beam is typically moved using galvanometer-controlled mirrors, whose speed is largely limited by inertia. Several recently implemented alternatives, such as fast-oscillating resonant mirrors (Wu et al., 2015), acousto-optic deflectors (AODs) that deflect the beam with sound waves (Chen et al., 2011), or electro-optical deflectors that deflect the beam with electric current (Schneider et al., 2015), have significantly increased the scanning speed. The latter approach was shown to permit a speed of >1,000 frames per second for standard FOV sizes, a rate so high that the pixel dwell times approach the lifetime of the fluorescent states (Schneider et al., 2015). Another approach to decrease the overall acquisition time is the implementation of parallelized scanning to image multiple FOVs simultaneously.
This has been realized with a variety of methods, including the use of optical lattices (Yang et al., 2014), the employment of multiple simultaneous STED beams (Bingen et al., 2011), the use of widefield excitation in combination with a patterned illumination for off-switching created by two interfering beams (orthogonally and incoherently crossed standing waves) (Bergermann et al., 2015), and the use of electro-optical phase modulators (Girsault and Meller, 2020). One disadvantage of highly parallelized STED, however, is that the number of effective doughnuts is limited by the available laser power. Because RESOLFT requires ∼10⁵-fold less intensity, it can be parallelized over far greater areas (>100,000 RESOLFT doughnuts), making it possible to acquire a >100 × 100 μm² field in <1 s in living cells (Chmyrov et al., 2013). Recently, Masullo and colleagues developed this approach even further, allowing for parallelized RESOLFT in three dimensions. Their technique, termed Molecular Nanoscale Live Imaging with Sectioning Ability (MoNaLISA), enables optical sectioning by using different light patterns with optimized shapes and periodicities for on/off switching and for reading out FP emissions. Using this approach, they were able to record entire cell volumes at sub-50 nm resolution in <2 min (Masullo et al., 2018). In contrast to point-scanning techniques, widefield microscopy is inherently parallelized because the entire FOV is illuminated simultaneously, meaning that the FOV size is limited solely by the size of the illumination area and the camera readout time. Techniques such as flat-field illumination have been developed to supply uniform and high-intensity light to the sample, achieving an FOV of over 200 × 200 μm² (Zhao et al., 2017).
Another approach for illuminating samples is the use of waveguides: silicon chips that are geometrically molded in such a way that a focused beam of light that enters will propagate by total internal reflection, creating an evanescent wave at the surface of the chip (as is the case in a TIRF microscope). Placing the sample on a waveguide chip, rather than a glass coverslip, makes it possible to implement TIRF-based SRM measurements such as STORM/PALM and PAINT with lower-magnification objectives, which significantly increases the FOV size (Figures 6A and 6B) (Archetti et al., 2019; Helle et al., 2019). That said, an increase in the FOV size is inconsequential without a parallel increase in readout speed. The gold-standard charge-coupled device (CCD) cameras used for widefield microscopy have a relatively slow readout time, meaning that imaging very large FOVs can be a lengthy process. Therefore, for high-throughput applications, it is desirable to use faster scientific complementary metal-oxide-semiconductor (sCMOS) cameras, which can potentially increase the imaged area by ∼15-fold (Almada et al., 2015).
Figure 6
Fast large field-of-view super-resolution imaging
(A and B) Waveguide PAINT. (A) With classical objective TIRF, the FOV size (red) is limited by the size of the objective lens and the magnification. In waveguide TIRF, the light undergoes total internal reflection at the interface with the solution, which produces an optical sectioning illumination that can reach up to 2000 μm. (B) Left: a reconstructed single-FOV image of COS-7 cells cultured on a waveguide, labeled for α-tubulin. Scale bar: 10 μm. Scale bar for magnifications of boxed regions: 500 nm. Right: intensity profiles along the lines in the magnified images show two peaks, indicating that the microtubules are distinguished. Adapted from (Archetti et al., 2019), CC-BY-4.0.
(C) ExLLSM. Top: maximum intensity projection of the adult Drosophila brain, labeled for pre-synapses with the marker Bruchpilot. The subset of pre-synapses associated with dopaminergic neurons (DAN) is shown, color-coded by the local density. Scale bar: 100 μm. Inset, top: all pre-synapses, color-coded by the local density. Scale bar: 100 μm. Inset, middle: local density distributions pre-synapses associated with DAN (green), and those not associated with DAN (orange). Inset, bottom: distribution of nearest neighbor distances between DAN-associated pre-synapses (magenta), DAN-associated pre-synapses and all pre-synapses (green), and non-DAN-associated synapses and all pre-synapses (orange). Bottom: maximum intensity projection of DAN and non-DAN-associated pre-synapses in 13 representative brain regions (colors indicate regions). Scale bar: 100 μm. Insets show magnified views of the protocerebral bridge (top) and the ellipsoid body. Brain region acronyms: ATL: antler; CA: calyx; EB: ellipsoid body; FB: fan-shaped body; LAL: lateral accessory lobe; LH: lateral horn; LO: lobula; LOP: lobula plate; MB: mushroom body; ME: medulla; NO: noduli; OTU: optical tubercle; PB: protocerebral bridge; VLPR: ventrolateral protocerebrum; SP: superior protocerebrum. L/R indicate the left and right hemispheres. Adapted from (Gao et al., 2019), with permission from AAAS.
A creative solution for achieving high-throughput SRM without the need for expensive or complex imaging setups is ExM (Figure 1) (Chang et al., 2017; Cho et al., 2018; Truckenbrodt et al., 2018). Although the sample is physically larger and thus takes longer to image, it benefits from the speed of conventional diffraction-limited widefield microscopes (Tillberg and Chen, 2019). For example, ExM was recently combined with lattice light-sheet microscopy to image both the entire Drosophila brain and the full width of the mouse cortex in ∼2–3 days at an effective resolution of 60 × 60 × 90 nm (Gao et al., 2019) (Figure 6C). As an alternative to improvements in microscope setups, imaging throughput can be increased with computational approaches. One such strategy is to reconstruct super-resolution images from sparser, noisier data, which can be acquired in a shorter amount of time. By applying artificial neural networks to the reconstruction of images from sparse, rapidly acquired PALM measurements (ANNA-PALM), the acquisition time was significantly reduced, making it possible to image >1,000 cells in ∼3 h (Ouyang et al., 2018). Similarly, deep-learning-assisted SIM (DL-SIM) allows reconstruction of super-resolution images from a significantly lower number of raw images with no visible decrease in resolution (Jin et al., 2020). In a conceptually similar manner, deep learning has been used to successfully “forge” super-resolution images out of diffraction-limited equivalents.
Using a generative adversarial network (GAN) model to train a deep neural network, Wang and colleagues were able to transform confocal images into super-resolution images matching those acquired with STED (Wang et al., 2019). The further possibility of applying such an approach to widefield images would lead to substantially faster acquisition times.
Automating the SRM workflow
Automatic acquisition: commercial software can often have limited flexibility in terms of experiment planning and real-time updates of imaging parameters. At the same time, it is becoming increasingly common to include a software development kit (SDK) for writing custom routines and/or an application programming interface (API) for communicating with the hardware using common programming languages, such as MATLAB (Mathworks) or Python. In addition, several microscope-control interfaces have been developed that are compatible with a wide range of commercial microscopes (e.g. micromanager (Edelstein et al., 2014)). For high-throughput experiments, it is often desirable to automate the microscope stage movement to acquire a large number of images hands-free. Nowadays, most setups have motorized stages that can easily be programmed through the commercial software (whereas more elaborate routines can be further tweaked through APIs). Unfortunately, a downside of constant stage movement is that the sample can easily be shifted out of focus, because the high-N.A. objectives used in SRM have a very limited depth of field. The focal drift is further exacerbated by the fact that moving to a new region of a sample often means that the focal plane itself has changed due to small variations in the coverslip glass or uneven mounting (Bravo-Zanoguera et al., 1998). For these reasons, it is necessary to use real-time autofocusing systems to re-find the focal plane for each sequential image. Many commercial software packages offer inbuilt hardware-based autofocus systems that work by measuring the distance from the objective lens to the sample (e.g. measuring the reflection of a laser off the coverslip glass) (Nikon's Perfect Focus, Zeiss' Definite Focus, Leica's Adaptive Focus Control, and Olympus's Z-Drift Compensator) (Bathe-Peters et al., 2018; Zhang et al., 2018).
Although these techniques are fast, they can be highly sensitive to small irregularities in the sample, such as varying size or uneven mounting. Therefore, it is also advised to follow up with software-based autofocusing, which relies on the acquisition of images at multiple focal planes and the extraction of quantitative parameters to determine the location of the focal plane (Firestone et al., 1991). Many such algorithms exist, and common ones are implemented in most commercial microscopy software packages. The software API or SDK can be used to modify the parameters of these algorithms further or to implement different techniques that are more suited to the experimental conditions (e.g., the use of multiwell plates instead of coverslips (Liron et al., 2006)) or to the available hardware (e.g., the use of additional imaging modalities to assist focusing (Shen et al., 2006)). An important point to consider is that software-based autofocus is significantly slower and exposes the sample to large amounts of light, and it is therefore not recommended for easily bleached dyes or for live-cell imaging applications. In such cases, it is better to resort to approaches that require a smaller number of images for auto-focusing (Brázdilová and Kozubek, 2009; Pinkard et al., 2019; Yazdanfar et al., 2008). An additional consideration for user-free imaging is the automatic fine-tuning of imaging parameters. This is particularly important in SMLM, where the rate of photoswitching is largely dependent on the laser intensity. As the measurement progresses, more fluorophores are shifted to the “off” state, resulting in a lower emitter density. In order to maintain a fixed density of active emitters, the laser intensity needs to be adjusted accordingly.
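Implementations differ (the cited works should be consulted for the actual control schemes), but the closed-loop idea can be sketched as a simple proportional controller that nudges the activation power toward a target emitter count per frame; the function name, gain, and clamping range here are purely illustrative:

```python
def update_activation_power(power, n_detected, n_target, gain=0.5,
                            p_min=0.0, p_max=1.0):
    """One step of a toy closed-loop emitter-density controller.

    Multiplicatively adjusts the normalized activation-laser power so
    that the number of emitters localized in the last frame approaches
    n_target. A gain below 1 damps oscillations; the result is clamped
    to the hardware range [p_min, p_max].
    """
    error = (n_target - n_detected) / n_target
    return min(p_max, max(p_min, power * (1 + gain * error)))
```

The emitter count fed into such a controller would come from one of the fast online localization or density-estimation methods discussed in the text.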
This has been achieved with several approaches, such as fast online localization of emitters (Kechkar et al., 2013; Mund et al., 2018), the summation of the “on” time of active emitters in the FOV (Holden et al., 2014), or the use of machine learning to estimate the density of emitters in the FOV (Štefko et al., 2018). For point-scanning techniques, additional parameters such as pixel dwell time and depletion laser intensity must also be optimized. Recently, Durand and colleagues developed a fully automated machine-learning-based system that performs online optimization of imaging parameters for STED, relying on a neural network trained to differentiate between low- and high-quality images (Durand et al., 2018). By implementing such approaches, the acquisition can be carried out without any need for user intervention. Lastly, in order to save time and resources when realizing user-free imaging, it can also be beneficial to avoid imaging uninteresting fields of view that do not contain any cells or structures of interest. This can be done, for example, by acquiring low-quality widefield/bright-field/phase-contrast images followed by an online segmentation to determine if cells or structures of interest are contained within a FOV (Holden et al., 2014). Furthermore, capturing an image of the cell can also assist with relating the molecules/structures in the super-resolved image to the rest of the cell in subsequent analyses.
Automatic analysis: because a single SRM image already contains a substantial amount of data, large-scale experiments would be expected to yield datasets that are too large and too complex to be analyzed with conventional (and largely manual) methods.
ImageJ (http://imagej.nih.gov/ij/) (Schneider et al., 2012), one of the most commonly used packages, is not particularly well suited to handling big data: despite offering a “batch analysis” option, it is geared toward smaller-scale analyses rather than high-throughput projects in which hundreds to thousands of images must be analyzed. There are many alternative open-source options, one of the most noteworthy being Icy, a community-oriented software platform developed by the Institut Pasteur in Paris (http://icy.bioimageanalysis.org) (de Chaumont et al., 2012). Icy allows users to build a custom analysis workflow graphically (code-free) by selecting preset algorithms and procedures to apply to the image and interconnecting them with arrows. Another platform is CellProfiler (http://www.cellprofiler.org) (Carpenter et al., 2006), a Python-based and highly customizable package developed by the Broad Institute of MIT and Harvard. One attractive aspect of CellProfiler is that it can be integrated with high-performance computing (HPC) cluster environments, allowing significantly faster processing of large datasets. Notable commercial options include Imaris (Oxford Instruments), which excels at analyzing 3D and time-series data, and KNIME (http://www.knime.org/). All of the aforementioned packages can perform a multitude of automatic analysis routines and support end-to-end analysis, starting from raw input data and ending with meaningful quantifications. A typical workflow would start by reading the raw image files, preprocessing the images to correct for artifacts such as noise or uneven illumination, and then segmenting the images to find structures of interest or individual localizations (e.g., Soliman, 2015).
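As a minimal, self-contained sketch of these preprocessing and segmentation steps (synthetic data, hand-rolled component labeling; a real pipeline would use CellProfiler modules or libraries such as scikit-image, and the threshold rule below is a simplistic stand-in):

```python
import numpy as np

def label_components(mask):
    """4-connected component labeling via flood fill (pure Python/NumPy,
    standing in for e.g. scipy.ndimage.label)."""
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        n += 1
        stack = [(y, x)]
        while stack:
            cy, cx = stack.pop()
            if (0 <= cy < mask.shape[0] and 0 <= cx < mask.shape[1]
                    and mask[cy, cx] and not labels[cy, cx]):
                labels[cy, cx] = n
                stack += [(cy + 1, cx), (cy - 1, cx),
                          (cy, cx + 1), (cy, cx - 1)]
    return labels, n

def segment_spots(img, k=4.0):
    """Subtract the median background, threshold at mean + k*std,
    and count the remaining connected bright regions."""
    clean = img - np.median(img)
    mask = clean > clean.mean() + k * clean.std()
    return label_components(mask)

# Synthetic field of view: three bright puncta on a noisy background.
rng = np.random.default_rng(1)
fov = rng.normal(100.0, 2.0, (128, 128))
for cy, cx in [(20, 20), (64, 90), (100, 40)]:
    fov[cy - 1:cy + 2, cx - 1:cx + 2] += 50.0   # 3x3-pixel spots
labels, n_spots = segment_spots(fov)
print(n_spots)  # → 3
```

The label map and spot count produced here correspond to the object coordinates and counts that a full pipeline would pass on to downstream statistics.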
Once the objects or spots are found, their coordinates, alongside the raw intensity data, can be fed into a variety of algorithms for statistical analysis of quantitative attributes such as morphology, size, clustering, and colocalization (see Challenge 4, above). In addition, these software packages offer APIs to other common image analysis software, allowing users to mix and match analysis modules as needed or to write their own modules from scratch.

A number of recent studies have successfully realized fully automated high-throughput SRM, leading to interesting biological insights. For example, Mund and colleagues implemented a fully automated acquisition and data analysis pipeline to perform PALM imaging of 23 endocytic proteins from over 100,000 endocytic sites in yeast and found that their nanoscale organization can spatially control the actin nucleation required for endocytosis (Mund et al., 2018). Another group automated not only the acquisition but also the sequential re-staining of samples, using a pipetting robot to acquire 3D STORM images of 16 neuronal tissue samples (Figure 2I), followed by a virtually user-free analysis. With this, they systematically investigated the 3D architecture of synaptic active zones, finding, for example, a potential role of the motor protein myosin Va (MyoVa) in synaptic vesicle trafficking (Klevanski et al., 2020). A high-throughput application was also developed to create electron-microscopy-like 3D reconstructions of multiple proteins, using automated STORM imaging of axial stacks followed by machine-learning-based analysis (Sieben et al., 2018).
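Spatial statistics such as the clustering analyses mentioned above often start from nearest-neighbour distances between localization coordinates. A toy sketch (synthetic coordinates; real analyses would use established estimators such as Ripley's functions or DBSCAN):

```python
import numpy as np

def nn_distances(points):
    """Nearest-neighbour distance per localization (brute force; fine
    for a few thousand points, use a KD-tree for larger datasets)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # ignore each point's distance to itself
    return d.min(axis=1)

# Compare a clustered localization set against a spatially random one
# (all coordinates in nm, within a 1 x 1 um field of view).
rng = np.random.default_rng(2)
uniform = rng.uniform(0, 1000, (500, 2))
centers = rng.uniform(0, 1000, (25, 2))
clustered = (centers[rng.integers(0, 25, 500)]
             + rng.normal(0, 10, (500, 2)))   # ~10 nm cluster spread
print(np.median(nn_distances(clustered)),
      np.median(nn_distances(uniform)))
# clustered localizations sit far closer to their neighbours
```

Comparing the measured distribution against such a spatially random reference is the basic logic behind most cluster-detection statistics.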
Conclusion and outlook
The precision of super-resolution imaging has reached unprecedented levels, with techniques such as MINFLUX achieving resolutions in the single-nanometer range. Accordingly, a shift in attitude toward more quantitative approaches has been observed, and quantitative metrics have been introduced for the majority of SRM techniques. While the optical capabilities of the techniques have proven robust, other practical obstacles have now become apparent. One crucial aspect of SRM experiments is that the samples themselves must be preserved near-perfectly; still, a thorough comparison of different fixation techniques is lacking. It would therefore be beneficial to corroborate fine structural analyses with additional high-resolution techniques such as transmission electron microscopy (TEM) or atomic force microscopy (AFM). Nowadays, such comparisons can even be made on the same biological sample, thanks to recent advances in correlative SRM technologies (Cosentino et al., 2019). Along the same lines, artifacts arising from the use of suboptimal labels should also be avoided, and it is expected that small probes such as nanobodies will become the gold standard for SRM experiments.

With regard to measurement quantification, one of the major goals of quantitative SRM is counting molecule copy numbers. A number of techniques, such as qSMLM, qPAINT, and photon-counting statistics, have demonstrated proof of principle, but their application is not straightforward. Although molecule-counting experiments should ideally yield absolute copy numbers, most techniques (with the exception of photon-counting statistics) require extensive calibration experiments or standards.
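To make the counting logic concrete, here is a deliberately simplified toy sketch of a qPAINT-style estimate (ignoring bright times, drift, and the calibration of the influx rate itself, which is exactly the kind of calibration such experiments depend on):

```python
import numpy as np

def qpaint_copy_number(event_times, xi):
    """Estimate the number of binding sites from the start times (s) of
    imager binding events: N = 1 / (xi * mean dark time), where xi is
    the calibrated single-site influx rate (k_on * imager concentration)."""
    dark_times = np.diff(np.sort(event_times))
    return 1.0 / (xi * dark_times.mean())

# Toy data: 8 independent sites, each binding imagers as a Poisson
# process with rate xi = 0.01 per second, pooled into one event stream.
rng = np.random.default_rng(3)
xi, n_sites = 0.01, 8
events = rng.exponential(1.0 / (xi * n_sites), 5000).cumsum()
print(round(qpaint_copy_number(events, xi)))  # → 8
```

The accuracy of the recovered copy number hinges entirely on how well xi is calibrated, which is why standardizing such calibrations matters.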
So far, little effort has been made to standardize such calibrations, and we hope that simple and easily integrated approaches will be introduced in the near future.

The shift toward quantitation has incentivized an increase in the scale of SRM experiments, and efforts are underway to develop fast, autonomous image acquisition and analysis systems. Deep learning is an emerging technology that has been applied, with considerable success, to a wide range of biomedical image processing tasks (Shen et al., 2017). Although deep learning approaches in SRM are, as of yet, few and far between, applications such as SMLM emitter localization, image denoising, image reconstruction, and spatial analyses have proven faster and more accurate than standard approaches. In the coming years, we expect that deep learning will also be applied to accelerate and automate image acquisition, as well as to guide more exploratory analyses. That being said, we would like to caution against an over-reliance on sophisticated (and oftentimes uninterpretable) methods for processing SRM data without a good understanding of the underlying assumptions of these approaches and whether they are applicable to the data at hand.

SRM is rapidly being established as a robust and quantitative research tool, allowing researchers to tackle increasingly complex biological questions. Although the research possibilities are seemingly endless, it is important to remember that more complexity means more opportunities for artifacts and misinterpretations. Nevertheless, with a good knowledge of the applied techniques and their limitations, the informed researcher will gain access to a wealth of data that promise new and exciting biological discoveries.