| Literature DB >> 34322033 |
Melissa L Knothe Tate, Abhilash Srikantha, Christian Wojek, Dirk Zeidler.
Abstract
"Brainless" cells, the living constituents inhabiting all biological materials, exhibit remarkably smart, i.e., stimuli-responsive and adaptive, behavior. The emergent spatial and temporal patterns of adaptation, observed as changes in cellular connectivity and tissue remodeling by cells, underpin neuroplasticity, muscle memory, immunological imprinting, and sentience itself, in diverse physiological systems from brain to bone. Connectomics addresses the direct connectivity of cells and cells' adaptation to dynamic environments through manufacture of extracellular matrix, forming tissues and architectures comprising interacting organs and systems of organisms. There is an imperative to understand the physical renderings of cellular experience throughout life, from the time of emergence, to growth, adaptation and aging-associated degeneration of tissues. Here we address this need through development of technological approaches that incorporate cross length scale (nm to m) structural data, acquired via multibeam scanning electron microscopy, with machine learning and information transfer using network modeling approaches. This pilot case study uses cutting-edge imaging methods for nano- to meso-scale study of cellular inhabitants within human hip tissue resected during the normal course of hip replacement surgery. We discuss the technical approach and workflow and identify the resulting opportunities as well as pitfalls to avoid, delineating a path for cellular connectomics studies in diverse tissue/organ environments and their interactions within organisms and across species. Finally, we discuss the implications of the outlined approach for neuromechanics and the control of physical behavior and neuromuscular training.
Keywords: cell; cell memory; cellular epidemiology; connectomics; imaging; machine learning
Year: 2021 PMID: 34322033 PMCID: PMC8313296 DOI: 10.3389/fphys.2021.647603
Source DB: PubMed Journal: Front Physiol ISSN: 1664-042X Impact factor: 4.566
FIGURE 1. Imaging cellular networks and environments across length scales, from nano to meso, using cross length scale imaging (multi-beam scanning electron microscopy) of the cellular inhabitants of the human femoral neck, i.e., osteocytes, as a case study. (A) Organism (A1) to tissue (A2) to cellular (osteocyte, A3) length scales demonstrating the most prevalent cellular inhabitants of bone, osteocytes. During development, cells manufacture the tissues comprising the femoral head and neck (proximal femur, A1,D); the cellular inhabitants of bone, cartilage and other tissues model (during growth) and remodel (enabling adaptation) the respective tissues of their local environment through up- and down-regulation of structural protein transcription, and secretion into the extracellular matrix. (B) The cellular network of bone's resident osteocytes changes throughout life, in health and disease (B1: healthy; B2,B3: diseased). (B,C) The loss in network connectivity reflects the health status of the cells (C1, live and dead osteocytes visualized using an ethidium bromide assay) as well as the patency of the network (C2, stochastic network model with nodes representing cells; C3, calculation of loss in information transfer with loss in network nodes); loss of viable cells within the network results in loss of network connectivity and subsequently diminished information transfer capacity across and within the network. (D–J) mSEM imaging, combined with image stitching and Google Maps API geonavigation applications, enables high resolution imaging of inhabitant cells within tissues, as well as navigation and analysis of single cells and their complex networks, seamlessly across length scales (D, femoral head and neck; E, section through the femoral head created by stitching together many images, comprising arrays of hexagons (F), themselves made of arrays of electron beams (G)). Rather than using a single electron beam as in traditional electron microscopy, mSEM uses arrays of 61 or more beams (F,G) to capture large areas of tissue (mesoscale, E) with nanoscale resolution (H–J). Through inorganic and organic etching procedures adapted from atomic force microscopy, the third dimension of cellular networks may be captured (H,I), and the local and global environment of tissues' cellular inhabitants (Ot, Osteocyte; BLC, Bone Lining Cell) can be explored within tissue contexts (BV, upper half of oval Blood Vessel, above which bony matrix is seen). Images adapted and used with permission (A, Knothe Tate et al., 2010; B, Knothe Tate et al., 2002; C, Anderson et al., 2008; D–J, Knothe Tate et al., 2016c).
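The network model sketched in panels C2 and C3, in which nodes represent cells and information transfer capacity falls as nodes are lost, can be illustrated in a few lines of code. The snippet below is a minimal sketch, not the authors' model: it assumes a random geometric graph as a stand-in for an osteocyte network and uses networkx's global efficiency as a proxy for transfer capacity.

```python
# Minimal sketch (not the authors' model): "information transfer" falls as
# nodes (cells) are removed from a stochastic network. Assumptions: a random
# geometric graph stands in for an osteocyte network, and networkx's global
# efficiency serves as a proxy for transfer capacity.
import random
import networkx as nx

random.seed(0)

# Stochastic network: 200 "cells", connected when closer than a radius threshold.
G = nx.random_geometric_graph(n=200, radius=0.12, seed=0)

def transfer_capacity(graph):
    """Proxy for information transfer: mean inverse shortest-path length."""
    return nx.global_efficiency(graph)

print(f"baseline capacity: {transfer_capacity(G):.3f}")

# Remove increasing fractions of "dead" cells and recompute capacity.
nodes = list(G.nodes)
random.shuffle(nodes)
for frac in (0.1, 0.2, 0.3, 0.4, 0.5):
    H = G.copy()
    H.remove_nodes_from(nodes[: int(frac * len(nodes))])
    print(f"{int(frac * 100):>2}% nodes removed -> capacity {transfer_capacity(H):.3f}")
```

Plotting capacity against the fraction of removed nodes reproduces the qualitative trend described in the caption: as viable cells are lost, network connectivity and with it transfer capacity degrade.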
TABLE 1. Dataset metrics from three generations of mSEM maps from three different human hip samples obtained with IRB approval.

| Metric | 1st generation | 2nd generation | 3rd generation |
| --- | --- | --- | --- |
| Total area imaged (mm²) | 5.69 | 13.1 | 1,810 |
| Total images in area | 54,717 | 100,589 | 7,335,982 |
| Multibeam FOVs | 897 | 1,649 | 120,262 |
| Pixels (megapixels) | 75,276 | 857,086 | 1.07 × 10¹⁰ |
| Size (terabytes) | 0.08 | 0.87 | 10.98 |
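As a rough orientation for these metrics, total storage scales with the pixel count. The sketch below converts megapixel totals to approximate uncompressed size, assuming 8-bit grayscale pixels (one byte per pixel); the bit depth and absence of compression are assumptions for illustration, not parameters reported in the table.

```python
# Rough storage estimate from pixel counts; assumes 8-bit grayscale
# (1 byte per pixel) and no compression, an illustrative assumption.
BYTES_PER_PIXEL = 1

def estimated_terabytes(megapixels: float) -> float:
    """Approximate uncompressed size in TB for a given megapixel count."""
    return megapixels * 1e6 * BYTES_PER_PIXEL / 1e12

# First two map generations from the table above.
for label, mpx in [("1st generation", 75_276), ("2nd generation", 857_086)]:
    print(f"{label}: ~{estimated_terabytes(mpx):.2f} TB")
```

Under these assumptions the first two generations come out close to the listed 0.08 and 0.87 TB.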
FIGURE 2. Automated detection algorithm to classify osteocytes using manual methods and the You Only Look Once algorithm (YOLO; Redmon et al., 2015). (A,B) Training and testing data for the machine learning algorithm using the 1st generation mSEM map. The manually acquired training data set, comprising 629 examples of osteocytes (A: stitched electron microscopy scan on the left, with osteocytes manually located and "pinned" using the Google Maps API, where green indicates viable and red marks pyknotic cells), was scaled up to an augmented training set of 10⁶ examples (B) using digital permutations of translation, rotation, scale, and illumination. From the augmented dataset, 75% of the data was used for training and 25% for testing of the YOLO algorithm, all using data from the 1st generation map. The algorithm was then run independently on the 2nd (Figure 3) and 3rd generation maps. Note: the red bounding boxes (A) indicate detected cells prior to classification, i.e., they are not indicative of cells' health status.
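For readers unfamiliar with this style of data augmentation, the sketch below shows one plausible way to expand a small set of pinned osteocyte tiles via random translation, rotation, scale, and illumination changes, followed by a 75/25 train/test split. It is a minimal illustration using Pillow; the study's actual augmentation pipeline and file naming are not specified here, and the names used are placeholders.

```python
# Minimal augmentation sketch (not the study's pipeline): random translation,
# rotation, scale, and illumination applied to a cropped osteocyte tile,
# followed by a 75/25 train/test split. File names are placeholders.
import random
from PIL import Image, ImageChops, ImageEnhance

random.seed(0)

def augment(tile: Image.Image) -> Image.Image:
    """Return one randomly perturbed copy of an osteocyte tile."""
    w, h = tile.size
    out = ImageChops.offset(tile,                      # translation (wraps at edges)
                            random.randint(-w // 10, w // 10),
                            random.randint(-h // 10, h // 10))
    out = out.rotate(random.uniform(0, 360))           # rotation
    scale = random.uniform(0.8, 1.2)                   # scale
    out = out.resize((max(1, int(w * scale)), max(1, int(h * scale))))
    return ImageEnhance.Brightness(out).enhance(       # illumination
        random.uniform(0.7, 1.3))

# 75% of the augmented examples for training, 25% held out for testing.
examples = [f"osteocyte_{i:04d}.png" for i in range(1000)]  # placeholder names
random.shuffle(examples)
split = int(0.75 * len(examples))
train_set, test_set = examples[:split], examples[split:]
```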
FIGURE 3. Automated detection algorithm applied to the 2nd generation map (A), where detections with greater than 70% confidence (a quantitative prediction of the match to "ground truth" defined by the training data) are depicted. (B) The entire map, depicting higher resolution details at increasing levels of zoom (C,D) in the Google Maps API. Note: the red circles indicate detected cells, which change to green when the classification criterion (three or more processes) is met. Red circles above the dotted yellow line delineating the edge of the tissue surface (B, upper corner) are cells, vessels, and artifacts outside of the femoral head and neck tissue. The map depicted here can be navigated and explored like Google Maps at http://www.mechbio.org/sites/mechbio/files/maps7/index.html; to access the map, type in user: mechbio, password: #google-maps.
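The thresholding and recoloring logic described in this caption can be summarized in a few lines. The sketch below assumes each detection carries a confidence score and a counted number of cell processes; the field names and data layout are illustrative assumptions, not the study's actual output format.

```python
# Sketch of the Figure 3 display logic: keep detections above 70% confidence
# and mark them green when the viability criterion (>= 3 processes) is met.
# Field names here are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.70
MIN_PROCESSES_FOR_VIABLE = 3

detections = [
    {"x": 1024, "y": 2048, "confidence": 0.91, "n_processes": 5},
    {"x": 3101, "y": 880,  "confidence": 0.64, "n_processes": 2},
]

kept = [d for d in detections if d["confidence"] > CONFIDENCE_THRESHOLD]
for d in kept:
    d["marker"] = ("green" if d["n_processes"] >= MIN_PROCESSES_FOR_VIABLE
                   else "red")
```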
FIGURE 4. Processing of images and application of the machine learning classification algorithm. (A) Module A preprocesses the mSEM output by stitching individual images into region-wide panoramas by virtue of the recorded image coordinates. In the interest of computational efficiency, the resulting image is down-scaled such that individual cells occupy circa 200 × 200 pixels. Here, the 11 TB mSEM output from the 3rd generation map is stitched into 30 image regions amounting to 150 GB of data after downscaling. (B) Module B applies the pretrained object detector. (C) The output is a file listing that includes the location of each detected cell as a bounding box (X,Y,W,H), its class (viable/pyknotic, annotated as "deceased"), and the associated confidence p. Each of N detections results in a set of five predictions for the corresponding bounding box: the x and y coordinates of the center of the bounding box (X,Y), the width and height of the bounding box (W,H), and the confidence of the detection (p, where a confidence or probability of 0 means no object was detected and 1.0 means a perfect match with "ground truth"). The object is then further classified as live or deceased (C). The object detector was pretrained using 600 living and 50 pyknotic examples (Knothe Tate et al., 2016b) and took 12 h to train on a single graphics processing unit (GTX 1080). The testing phase on the 150 GB dataset lasted 100 h. Note: modules A and B can be combined into a single module.
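To make the Module C output concrete, the sketch below reads a detection listing of the kind described above, with one row per detected cell carrying the bounding box (X,Y,W,H), the class label, and the confidence p, and tallies viable versus pyknotic cells above a confidence cutoff. The CSV column names are assumptions for illustration; the study's actual file format is not given here.

```python
# Sketch of consuming a Module C-style detection listing: one row per cell
# with bounding box (X, Y, W, H), class label, and confidence p.
# The CSV column names below are illustrative assumptions.
import csv

def load_detections(path):
    """Yield (x, y, w, h, label, p) tuples from a CSV detection listing."""
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            yield (float(row["X"]), float(row["Y"]),
                   float(row["W"]), float(row["H"]),
                   row["class"], float(row["p"]))

def summarize(path, cutoff=0.70):
    """Count viable vs. pyknotic detections at or above the confidence cutoff."""
    counts = {"viable": 0, "pyknotic": 0}
    for *_box, label, p in load_detections(path):
        if p >= cutoff and label in counts:
            counts[label] += 1
    return counts
```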