Literature DB >> 33376795

Single-Nanoparticle Orientation Sensing by Deep Learning.

Jingtian Hu1, Tingting Liu1, Priscilla Choo1, Shengjie Wang2, Thaddeus Reese3, Alexander D Sample1, Teri W Odom1,3.   

Abstract

This paper describes a computational imaging platform to determine the orientation of anisotropic optical probes under differential interference contrast (DIC) microscopy. We established a deep-learning model based on data sets of DIC images collected from metal nanoparticle optical probes at different orientations. This model predicted the in-plane angle of gold nanorods with an error below 20°, the inherent limit of the DIC method. Using low-symmetry gold nanostars as optical probes, we demonstrated the detection of in-plane particle orientation in the full 0-360° range. We also showed that orientation predictions of the same particle were consistent even with variations in the imaging background. Finally, the deep-learning model was extended to enable simultaneous prediction of in-plane and out-of-plane rotation angles for a multibranched nanostar by concurrent analysis of DIC images measured at multiple wavelengths.
© 2020 American Chemical Society.


Year:  2020        PMID: 33376795      PMCID: PMC7760486          DOI: 10.1021/acscentsci.0c01252

Source DB:  PubMed          Journal:  ACS Cent Sci        ISSN: 2374-7943            Impact factor:   14.553


Automated single-particle tracking techniques[1−3] have played key roles in studying biological processes ranging from cellular motion[4] to targeted drug delivery.[5−7] These methods analyze cellular dynamics by first recording a video of an optical probe (fluorescent molecules[8−10] or semiconductor quantum dots[11,12]) and then extracting translational trajectories with analysis algorithms.[13,14] Although rotational dynamics have not been studied in such tracking processes, they can provide additional molecular-level information on cellular activities such as protein diffusion and cytoskeleton formation.[15−18] One major obstacle to visualizing rotational motion is that common fluorescent probes have either orientation-invariant emission or limited signal intensity.[19] Also, existing algorithms based on Gaussian fitting or cross-correlation cannot identify probe orientation.[13,14,20] The development of next-generation particle-tracking platforms for rotational dynamics therefore requires simultaneous advances in both the optical probe and the analysis method.

Optical imaging approaches using anisotropic metal nanoparticle probes can resolve rotational motion because of their polarization-dependent optical responses.[21−23] For example, gold nanorods show orientation-dependent scattering intensity under polarized dark-field microscopy[24,25] but cannot be used in cellular environments with strong background scattering.[26] Alternatively, differential interference contrast (DIC) microscopy[27,28] can generate orientation-dependent patterns (bright and dark pixels) from anisotropic plasmonic nanoparticles.[23] However, because of the symmetry of gold nanorods, their orientation can only be tracked during in-plane rotation; DIC image signals decrease dramatically with out-of-plane rotation.[29] Gold nanostars[30−32] (AuNS) are three-dimensional (3D) optical probes with multiple branches oriented in different directions.
Compared to nanorods, AuNS show more complicated DIC patterns[33,34] that can be correlated with their 3D nanostructure and orientation.[35] Data-driven machine-learning algorithms can assist single-particle tracking in physiological environments when fast and accurate pattern analyses are needed.[36−39] Statistical-learning approaches determine particle trajectories from optical microscopy images by locating Haar-like features[36,37,40] (developed for object detection) that identify particle position but not orientation. To solve the inverse problem, a different set of features linking DIC image patterns to particle orientation must be established. Deep convolutional networks[41,42] are universal machine-learning tools that can automatically identify robust features associated with a response or category in an image data set.[43,44] To reduce data acquisition time, data-augmentation strategies[45,46] have been developed so that data sets of sufficient size (>10^4 labeled images) can be obtained from only a small set of original images. Therefore, deep-learning models may become an effective tool to predict the orientation of nanoparticle probes from their diffraction-limited microscopy images.

Here we show a deep-learning platform that can identify the 3D orientation of optical probes used in DIC microscopy. We constructed DIC image data sets of both Au nanorods and AuNS with labeled orientations to establish the deep-learning models. The optimized model predicted the in-plane orientation of nanorods from their DIC images with an accuracy limited only by the inherent angular resolution of the imaging-and-probe system. The angular range of in-plane orientation sensing was expanded from 0–180° to 0–360° using anisotropic AuNS probes with lower structural symmetry than nanorods. We further confirmed the robustness of our deep-learning model by showing that predictions were accurate even with different backgrounds.
Finally, we determined the 3D orientation of a multibranched AuNS by the simultaneous detection of in-plane and out-of-plane angles.

Figure 1 depicts a scheme of the prediction process for nanoparticle orientations based on DIC microscopy in the de Sénarmont configuration[47,48] and convolutional neural networks. An unpolarized light source is converted into elliptically polarized light by a linear polarizer and a quarter-wave plate; this light is then split into two orthogonally polarized beams that are spatially separated by 120 nm at the first Nomarski prism (Figure 1a). The beams experience different phase changes at the nanoparticle before being combined by a second Nomarski prism to form either bright or dark pixels in the DIC images. At each in-plane orientation of the nanoparticle probe, we collected raw DIC images to produce a data set labeled by the corresponding angle (Figure 1b). Convolutional neural networks were then constructed with the PyTorch toolbox[49,50] based on the library of DIC images (Figure 1c).
Figure 1

Intelligent orientation sensing by differential interference contrast (DIC) microscopy images and deep learning. Schemes depicting (a) DIC imaging setups, (b) libraries of DIC images, and (c) convolutional networks for deep learning.

Our deep-learning model was optimized iteratively using a training data set to minimize the prediction errors defined by a cost function.[51,52] Figure 2a summarizes our procedure to prepare DIC data sets for model training and testing. We processed raw DIC images by a multistep thresholding method that produced clean black–white patterns from both calculated and measured DIC data (Figures S1–S2). By a data-augmentation process,[45,46] the images corresponding to each particle orientation were converted into a data set class (∼1500 images) labeled by angle (Figure S3). In this step, the DIC patterns in the original images were randomly resized and spatially shifted to produce image copies in the class. The image-scaling process ensured that the optimized models are insensitive to changes in DIC pattern size, so that different imaging setups can use this deep-learning approach. Random noise was also added to the images to improve the tolerance of the model to pixel-level imaging defects. A fraction of classes with evenly distributed orientations was reserved as the testing data set, while the remaining classes were split into training and validation data sets at a 4:1 ratio. The training data set was used to optimize the weights in the convolutional network during model training, and the validation data set was used to monitor overfitting errors. The testing data set evaluated the accuracy of the optimized models.
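The augmentation step described above (random resizing, random spatial shifting, and added pixel noise) can be sketched in a few lines of NumPy. This is only an illustrative sketch: the canvas size, scale range, noise level, and function names are assumptions, not the authors' implementation.

```python
import numpy as np

def nn_resize(img, scale):
    """Nearest-neighbor rescale of a 2D pattern (no external dependencies)."""
    h, w = img.shape
    nh, nw = max(1, int(round(h * scale))), max(1, int(round(w * scale)))
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    return img[np.ix_(rows, cols)]

def augment(pattern, n_copies=1500, canvas=64, rng=None):
    """Expand one thresholded DIC pattern into a class of labeled copies
    by random rescaling, random placement on a fixed canvas, and noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = np.zeros((n_copies, canvas, canvas), dtype=np.float32)
    for i in range(n_copies):
        scaled = nn_resize(pattern, rng.uniform(0.7, 1.3))  # random size
        h, w = scaled.shape
        r = rng.integers(0, canvas - h + 1)                 # random shift
        c = rng.integers(0, canvas - w + 1)
        out[i, r:r + h, c:c + w] = scaled
        out[i] += rng.normal(0.0, 0.05, (canvas, canvas))   # pixel noise
    return out
```

All copies produced from one original image inherit its orientation label, which is how a small set of raw images becomes a class of ∼1500 labeled examples.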
Figure 2

Deep-learning models for orientation prediction of anisotropic gold nanoparticles based on DIC microscopy images. (a) Scheme of data set preparation from raw DIC images with labeled nanoparticle orientations. (b) Definition of the cost function based on labeled and predicted angles. (c) Training and testing of the artificial neural networks in the deep-learning model by batches of images in the training data set. Data set preparation steps in (a) include (1) extracting DIC patterns from raw images by computer vision methods (Python scikit-image package) and (2) expanding the images at each orientation into a data set by randomly varying the pattern size, position, and noise level. In each epoch described in (c), the artificial neural network calculated the predicted angle for batches of images in the training data set, evaluated the corresponding error based on (b), and adjusted the weights in the model in a back-propagation process before the next batch.
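The epoch loop in (c), in which batches are predicted, errors evaluated, and weights updated until a validation criterion is met, can be illustrated with a toy NumPy model standing in for the convolutional network. The linear model, learning rate, batch size, and synthetic data below are illustrative assumptions; only the loop structure mirrors the procedure described in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the convolutional network: a linear model trained by
# batched gradient descent with the same epoch/validation structure.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = X @ w_true
X_train, y_train = X[:160], y[:160]          # training data set
X_val, y_val = X[160:], y[160:]              # validation data set

w = np.zeros(8)
lr, batch = 0.05, 32
val_err0 = np.mean((X_val @ w - y_val) ** 2)  # error at the initial epoch

for epoch in range(500):
    order = rng.permutation(len(X_train))
    for start in range(0, len(X_train), batch):
        idx = order[start:start + batch]
        pred = X_train[idx] @ w                                  # predict
        grad = 2 * X_train[idx].T @ (pred - y_train[idx]) / len(idx)
        w -= lr * grad                        # weight update ("backprop")
    val_err = np.mean((X_val @ w - y_val) ** 2)
    if val_err < 0.10 * val_err0:             # validation-based stopping
        break
```

The stopping test implements the convergence criterion stated later in the text (validation error below 10% of the initial-epoch error); in the actual platform the inner update is PyTorch back-propagation through the convolutional layers.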

Figure 2b shows the cost function to be minimized by the deep-learning model. For a general prediction model of in-plane rotations in the range 0–360°, the prediction error was defined in polar coordinates (eq 1), where φ and φ0 are the predicted and labeled angles corresponding to each image in the training set, respectively. Figure 2c depicts the training process of the deep-learning model starting from a convolutional network with randomly initialized weights (Code section S1). In each epoch of the optimization, the model-training function (1) predicts the particle orientations for all images in the training data set with the model; (2) calculates the corresponding errors by eq 1; and (3) adjusts the model weights to reduce the total error (Figures S4–S5). After each training epoch, the model also makes orientation predictions for images in the validation data set, and the average validation error (per image) is compared to the training error.
If the validation error fell below 10% of the error at the initial epoch, the convergence criterion was satisfied, and the model was evaluated using the testing data set. We first tested the deep-learning method for tracking the in-plane rotation of a gold nanorod (length l = 90 nm, width w = 40 nm) with a localized surface plasmon (LSP) resonance at λ = 620 nm (Figure S6). Because of contrast inversion[33] when the imaging wavelength is tuned across the LSP, DIC images are typically collected at a wavelength shorter or longer than this resonance. We collected the raw images at λ = 700 nm and constructed a library of DIC patterns of the nanorod at 36 in-plane angles (relative to the x axis), φ0 = 0°, 5°, ..., 175°, by the data-augmentation process. Among the 36 angles, images at φ0 = 0°, 20°, ..., 160° were selected as the testing data set, and the remaining angles were split randomly into training (∼70% of images) and validation (∼30% of images) data sets. Because of the 2-fold symmetry of the nanorod, the cost function was modified from eq 1 to eq 2 so that the error is a periodic function of φ between 0 and 180°. Figure 3a,b shows the separate training processes of the neural network for calculated and measured DIC images, respectively, which are consistent in bright–dark contrast but can differ in their patterns.[33] A model using a three-layer convolutional network reached convergence in 50 epochs for the simulated data sets (Figure S7). In comparison, training with the (noisy) measured data required four convolutional layers to converge within the same time (Figure S8). Figure 3c,d shows the performance of these optimized networks in predicting in-plane particle orientation from images in their corresponding testing data sets (φ0 = 0°, 20°, ..., 160°).
Without learning directly from DIC patterns acquired at these angles, the model could determine particle orientations with errors below ±20° for the simulated data set, which is the experimental angular resolution of DIC and plasmonic optical probes (Figures S9–S10). These results indicate that convolutional networks can predict the in-plane orientation of nanoparticles.
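The cost-function expressions referenced above are not reproduced in this excerpt. One form consistent with the description, assumed here for illustration, is the squared chord distance between unit vectors at the predicted and labeled angles, with the angles doubled for the 2-fold-symmetric nanorod so that the error has a 180° period:

```python
import numpy as np

def err_360(phi, phi0):
    """Polar-coordinate error for full 0-360° sensing (assumed form):
    squared chord distance between unit vectors at phi and phi0 (degrees)."""
    p, p0 = np.radians(phi), np.radians(phi0)
    return (np.cos(p) - np.cos(p0)) ** 2 + (np.sin(p) - np.sin(p0)) ** 2

def err_180(phi, phi0):
    """Nanorod variant (assumed form): doubling the angles makes the error
    periodic with a 180° period, matching the rod's 2-fold symmetry."""
    return err_360(2 * phi, 2 * phi0)
```

With this form, err_180 cannot distinguish a rod at φ from one at φ + 180°, which is exactly the degeneracy that motivates the lower-symmetry AuNS probes introduced next.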
Figure 3

Deep-learning models can predict in-plane angles of gold nanorods. Mean-square errors of the neural networks during the training process based on (a) simulated and (b) measured DIC images. Prediction of nanorod in-plane orientations by the deep-learning model based on (c) calculated images and (d) experimental results. The image library consisted of DIC images calculated by finite-difference time-domain (FDTD) simulations or measured experimentally for a nanorod (length l = 90 nm, width w = 40 nm) with in-plane angles φ0 = 0°, 5°, ..., 175° at wavelength λ = 700 nm. DIC images at φ0 = 0°, 20°, ..., 160° were used for testing the model, and the images at the remaining angles were used for training and validation. Scanning electron microscopy (SEM) in (d) shows the structure of the nanorod.

To realize in-plane orientation tracking over the full 0–360° range, we tested our deep-learning approach with a low-symmetry AuNS having two long branches positioned in the imaging plane. Compared to the nanorod data sets, the angular separation in the raw data set was increased to 15° to reduce the data collection time. Figure 4a shows our selected AuNS, whose DIC images were collected at three wavelengths (λ = 600, 680, 750 nm) around the LSP resonance at λ = 730 nm. Imaging was not conducted around the other LSP resonance at λ = 880 nm because the silicon detector has poor sensitivity there. To increase the accuracy of the orientation predictions, we developed a model that accounts for the three wavelengths simultaneously based on three convolutional layers (Figures S11–S12). Figure 4b shows the performance of the optimized neural network in predicting in-plane orientation from images at φ0 = 0°, 45°, ..., 315°. The predictions were accurate at all angles except φ0 = 225°, where the DIC pattern was similar to that at φ0 = 45°. Figure 4c shows the accuracy of the multiwavelength model averaged over ∼600 test images at each angle.
At all angles except φ0 = 45° and 225°, the multiwavelength model showed reduced average errors compared to the single-wavelength models that used only the images at λ = 680 nm (Figures S13–S14). At most angles, both models showed an average error below 20°, which indicates that the orientation of appropriately shaped anisotropic probes can be predicted over the 0–360° range. The prediction accuracy of the model can be improved further by accounting for DIC images at multiple wavelengths simultaneously.
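One simple way for a model to account for several wavelengths simultaneously is to stack the per-wavelength patterns as input channels of a single network, analogous to the RGB channels of a color image. The sketch below shows only this input-assembly step; the image shapes and placeholder values are illustrative assumptions.

```python
import numpy as np

# Thresholded patterns extracted from DIC images of the same AuNS at three
# wavelengths (600, 680, 750 nm); zeros here stand in for real patterns.
patterns = {wl: np.zeros((64, 64), dtype=np.float32) for wl in (600, 680, 750)}

# Stack the wavelengths as channels (channels-first, as PyTorch expects),
# so one convolutional network sees all three patterns at once.
x = np.stack([patterns[wl] for wl in sorted(patterns)], axis=0)

# Add a batch dimension before feeding a batch of such stacks to the model.
x_batch = x[None, ...]
```

Because the first convolutional layer mixes all input channels, features can couple the wavelength-dependent contrast patterns, which is what lets the multiwavelength model resolve orientations that look alike at any single wavelength.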
Figure 4

Low-symmetry imaging probe enables full 0–360° orientation sensing. (a) SEM image of the gold nanostar (AuNS) probes and the scattering spectrum measured by dark-field microscopy. (b) The predicted angles based on the testing data sets collected at three wavelengths λ = 600, 680, 750 nm. (c) Prediction errors of the deep-learning models at each angle. The image library consisted of DIC images for the AuNS with in-plane angles φ0 = 0°, 15°, ..., 360°. DIC images at angles φ0 = 0°, 45°, ..., 315° were used for testing the model, and the images at the remaining angles were used for training and validation.

We tested the robustness of the orientation prediction platform under different background imaging conditions. Figure 5a shows the raw DIC images of four AuNS, including our selected optical probe (dashed box) with two LSP resonances at λ = 770 and 910 nm (Figure S15), and its extracted patterns under different background conditions. We prepared microscale patterns in 5 nm Cr films on replaceable coverslips that produced background noise in the imaging field of view (Figure S16). We trained the multiwavelength deep-learning model on four data sets prepared from images with clean or randomly textured backgrounds. Figure 5b,c shows the test results of the optimized model for orientations of the AuNS based on DIC images collected with and without backgrounds. For both tested data sets, the model exhibited average errors below ±20° for 75% of the selected angles; large discrepancies at some angles were observed because of difficulties in distinguishing between orientations separated by 180°. These consistent predictions of in-plane angles, with tolerance to imaging conditions, will be important for imaging in live cells.
Figure 5

Robust orientation sensing under complex imaging backgrounds. (a) Examples of (left) raw DIC images and (right) extracted multiwavelength patterns. (b) The predicted angles from the testing data sets collected at three wavelengths λ = 660, 700, and 750 nm and (c) the corresponding error analysis for testing data sets with clean and random backgrounds.

Finally, we demonstrated that out-of-plane orientation sensing is possible using an anisotropic, multispectral optical probe based on calculated DIC images. Figure 6a,b shows the structure and optical responses of our AuNS probe. This AuNS showed three LSP resonances (λ = 725, 785, 815 nm), each corresponding to a branch at a different spatial orientation (Figure S17). We prepared data sets of calculated DIC images at all combinations of in-plane angles φ0 = 0°, 5°, ..., 355° and out-of-plane angles θ0 = −90°, −85°, ..., 90° for four wavelengths (λ = 710, 750, 800, 825 nm) that spanned the range of the LSP resonances. To account for both rotation angles (φ0 and θ0), the cost function was defined in spherical coordinates (eq 3), where θ and θ0 are the predicted and labeled out-of-plane angles corresponding to each image in the training set, respectively. Images at φ0 = 0°, 20°, ..., 340° were selected as the testing data set, and the remaining angles were used for training and validation of the model. Figure 6c shows the performance of the deep-learning model for all combinations of φ0 and θ0. For 80% of the orientations in the testing data set, the average prediction errors were below 20° (Figure S18). We believe that our multispectral optical probe can be realized experimentally with developments in synthesis[32] and sorting methods[53] that improve control over AuNS shape and homogeneity.
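The spherical-coordinate cost function is likewise not reproduced in this excerpt. One form consistent with the description, assumed here for illustration, is the squared chord distance between the 3D unit vectors defined by (φ, θ) and (φ0, θ0):

```python
import numpy as np

def err_3d(phi, theta, phi0, theta0):
    """Spherical-coordinate error (assumed form): squared chord distance
    between the unit vectors given by in-plane angle phi and out-of-plane
    angle theta (degrees, theta measured from the imaging plane)."""
    p, t = np.radians(phi), np.radians(theta)
    p0, t0 = np.radians(phi0), np.radians(theta0)
    v = np.array([np.cos(t) * np.cos(p), np.cos(t) * np.sin(p), np.sin(t)])
    v0 = np.array([np.cos(t0) * np.cos(p0), np.cos(t0) * np.sin(p0), np.sin(t0)])
    return float(np.sum((v - v0) ** 2))
```

A convenient property of this form is that the in-plane angle contributes less to the error as the probe tilts out of plane (at θ = ±90° the vector points along the optical axis and φ is undefined), which matches the physical ambiguity of the projection.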
Figure 6

Multispectral AuNS enables three-dimensional (3D) orientation sensing. (a) Scattering spectra and (b) near-field electric-field intensity map of the selected AuNS with LSP resonances at λ = 725, 785, and 815 nm. (c) Prediction errors evaluated with a test data set consisting of DIC images at all combinations of in-plane angles φ0 = 0°, 20°, ..., 340° and out-of-plane rotation θ0 = −80°, −60°, ..., 80°.

In summary, we demonstrated a deep-learning approach to determine the orientation of optical nanoparticle probes from their microscopy images. Innovations in anisotropic probe design enabled sensing of in-plane orientations over the full 0–360° range with AuNS probes, with an accuracy at the intrinsic limit of the DIC technique. We also showed the prediction of out-of-plane orientation for a multispectral AuNS by imaging simultaneously at multiple wavelengths. The model is robust against noise in the imaging background and has the potential to achieve fast, fully automated tracking of particle rotations during live-cell interactions. We expect that this deep-learning platform can resolve cellular interactions involving 3D rotational dynamics that are not accessible by existing imaging techniques but are critical for understanding and optimizing next-generation therapeutic systems.

1.  Rotational movement of the formin mDia1 along the double helical strand of an actin filament.

Authors:  Hiroaki Mizuno; Chiharu Higashida; Yunfeng Yuan; Toshimasa Ishizaki; Shuh Narumiya; Naoki Watanabe
Journal:  Science       Date:  2010-12-09       Impact factor: 47.728

2.  Using gold nanorods to probe cell-induced collagen deformation.

Authors:  John W Stone; Patrick N Sisco; Edie C Goldsmith; Sarah C Baxter; Catherine J Murphy
Journal:  Nano Lett       Date:  2007-01       Impact factor: 11.189

3.  Wavelength-Dependent Differential Interference Contrast Inversion of Anisotropic Gold Nanoparticles.

Authors:  Priscilla Choo; Alexander J Hryn; Kayla S Culver; Debanjan Bhowmik; Jingtian Hu; Teri W Odom
Journal:  J Phys Chem C Nanomater Interfaces       Date:  2018-11-01       Impact factor: 4.126

4.  Plasmonic nanorod absorbers as orientation sensors.

Authors:  Wei-Shun Chang; Ji Won Ha; Liane S Slaughter; Stephan Link
Journal:  Proc Natl Acad Sci U S A       Date:  2010-02-01       Impact factor: 11.205

5.  Deep learning.

Authors:  Yann LeCun; Yoshua Bengio; Geoffrey Hinton
Journal:  Nature       Date:  2015-05-28       Impact factor: 49.962

6.  Convolutional neural networks automate detection for tracking of submicron-scale particles in 2D and 3D.

Authors:  Jay M Newby; Alison M Schaefer; Phoebe T Lee; M Gregory Forest; Samuel K Lai
Journal:  Proc Natl Acad Sci U S A       Date:  2018-08-22       Impact factor: 11.205

7.  Direct observation of nanoparticle-cancer cell nucleus interactions.

Authors:  Duncan Hieu M Dam; Jung Heon Lee; Patrick N Sisco; Dick T Co; Ming Zhang; Michael R Wasielewski; Teri W Odom
Journal:  ACS Nano       Date:  2012-03-22       Impact factor: 15.881

8.  Super-resolution differential interference contrast microscopy by structured illumination.

Authors:  Jianling Chen; Yan Xu; Xiaohua Lv; Xiaomin Lai; Shaoqun Zeng
Journal:  Opt Express       Date:  2013-01-14       Impact factor: 3.894

9.  Classification of diffusion modes in single-particle tracking data: Feature-based versus deep-learning approach.

Authors:  Patrycja Kowalek; Hanna Loch-Olszewska; Janusz Szwabiński
Journal:  Phys Rev E       Date:  2019-09       Impact factor: 2.529

10.  Metallic Nanostructures as Localized Plasmon Resonance Enhanced Scattering Probes for Multiplex Dark Field Targeted Imaging of Cancer Cells.

Authors:  Rui Hu; Ken-Tye Yong; Indrajit Roy; Hong Ding; Sailing He; Paras N Prasad
Journal:  J Phys Chem C Nanomater Interfaces       Date:  2009       Impact factor: 4.126

