
Three-dimensional Imaging and Scanning: Current and Future Applications for Pathology.

Navid Farahani1, Alex Braun1, Dylan Jutt1, Todd Huffman1, Nick Reder2, Zheng Liu3, Yukako Yagi4, Liron Pantanowitz5.   

Abstract

Imaging is vital for the assessment of physiologic and phenotypic details. In the past, biomedical imaging was heavily reliant on analog, low-throughput methods that produced two-dimensional images. However, newer, digital, high-throughput three-dimensional (3D) imaging methods, which rely on computer vision and computer graphics, are transforming the way biomedical professionals practice. 3D imaging has been useful in diagnostic, prognostic, and therapeutic decision-making for the medical and biomedical professions. Herein, we summarize current imaging methods that enable optimal 3D histopathologic reconstruction: serial two-dimensional scanning, 3D scanning, and whole slide imaging. Briefly mentioned are emerging platforms, which combine robotics, sectioning, and imaging in their pursuit to digitize and automate the entire microscopy workflow. Finally, both current and emerging 3D imaging methods are discussed in relation to current and future applications within the context of pathology.

Keywords:  Computational pathology; three-dimensional imaging; three-dimensional reconstruction; three-dimensional scanning; volumetric histopathology

Year:  2017        PMID: 28966836      PMCID: PMC5609355          DOI: 10.4103/jpi.jpi_32_17

Source DB:  PubMed          Journal:  J Pathol Inform


INTRODUCTION

Biomedical practitioners have leveraged a plethora of imaging tools to measure physiologic and phenotypic details. Many of these imaging modalities were highly manual and either very time consuming or cost prohibitive. For example, in the pioneering years of radiology practice, plain X-ray radiography employed time-consuming and arduous chemical processes to develop film-screen images.[1] Today, medical professionals have a large arsenal of advanced imaging tools available. Such imaging tools rely on computer vision, algorithms, and graphics. Imaging methods that dominate the biomedical field include ultrasonic evaluation, X-ray computed tomography (CT), positron emission tomography-CT (PET-CT), and magnetic resonance imaging (MRI). These methods are capable of producing sets of two-dimensional (2D) images and three-dimensional (3D) reconstructions for interpretation.[2,3,4] 3D imagery provides a better way to visualize and accurately measure a patient's phenotypic characteristics.[5,6] Within the context of pathology, volumetric display and mesh reconstruction techniques are particularly alluring for the examination of clinical tissue specimens. 3D imaging could also enhance the study of disease processes, especially those involving structural changes and those in which spatial relationships are relevant.[7,8,9] Currently, there is no clear “best” 3D imaging method, especially with regard to medical imaging. Rather, each of the available methods involves tradeoffs in image size, accuracy, and resolution. Thus, determining which 3D imaging method is most appropriate often depends on the medical question under investigation. Herein, we summarize current 3D imaging and 3D scanning methods, with an emphasis on techniques that enable 3D histopathologic reconstruction, such as serial 2D scanning, 3D scanning, and whole slide imaging (WSI).
Emerging platforms that combine robotics, sectioning, and imaging with the goal of digitizing and automating the entire microscopy workflow are discussed. Future applications of current and novel 3D imaging methods within the context of pathology are also addressed.

THREE-DIMENSIONAL SCANNING AND THREE-DIMENSIONAL IMAGING

It is important to review the semantic differences between 3D imaging and 3D scanning. In computer science, “image scanning,” often abbreviated to just “scanning,” describes the process by which a detector traverses an object, surface, or body part and uses electromagnetic radiation (EMR) to obtain images and convert them into a digital format. 3D scanners typically contain image sensors that capture light reflected off an object as pixel data. In this sense, an image is a 2D arrangement of pixels, which often corresponds to the resolution of the image sensor. Laser technology was introduced in the 1960s; following their invention, lasers were coupled with image sensors and used by computer vision software for image segmentation and reconstruction. Popular usage of the term “3D scanner” denotes a specific type of 3D laser scanner, which relies on nonionizing EMR, primarily visible light. In contrast to these 3D scanners, there are other 3D devices that employ high (X-ray, PET-CT) or low (radio, ultrasound) frequency EMR. The majority of this review is concerned with 3D scanners of the former type, as defined above, since they are becoming increasingly popular, inexpensive, and relatively easy to use. However, before moving on to 3D scanners, it is worthwhile to review more conventional methods of 3D histopathologic analysis, like those offered by WSI platforms.[10]

WHOLE SLIDE IMAGING

Whole slide images are the digital equivalent of traditional glass slides and contain high-resolution representations of the same scanned material found on glass slides.[11] In the past, WSI was mostly focused on 2D analysis at the expense of 3D structural analysis.[12,13] More recently, 3D reconstruction of whole slide histological data has demonstrated value in the visualization and diagnosis of disease.[14] High-resolution 3D histopathologic imagery is especially advantageous in discovering diagnostic patterns because it improves correlation between imaging modalities such as MRI, conventional CT, and WSI.[15,16,17,18] The WSI process begins with the creation of serial, glass slide-mounted tissue sections, obtained either through traditional or automated histology sectioning. Automated robotic microtomes, which automatically trim and section blocks, are particularly useful for 3D reconstruction of tissue sections. Benefits afforded by automated sectioning compared to traditional manual sectioning include the near-uniform thickness of sections, uniform orientation of sections (i.e., alignment of tissue between sections), and fewer sectioning artifacts.[19,20] All these factors facilitate interpolation of structure between sections, resulting in high-fidelity 3D reconstructions. Serial sections are typically acquired at a thickness of 4–6 μm and then mounted on glass slides. While a minimum of fifty sections is recommended, an optimal 3D reconstruction is obtained with at least 100-200 serial sections. The sections are then stained using routine histologic and/or immunohistochemical techniques. Next, the stained glass slides are digitized using a WSI scanner to generate a series of digital images, each corresponding to a different scanned level of the tissue block. These serial digital images are then run through commercially available or custom software to generate 3D models [Figure 1].
Examples of WSI-compatible 3D reconstruction software include Voloom (microDimensions, Munich, Germany) and Image-Pro Premier 3D (Media Cybernetics, Rockville, MD, USA). In general, 3D reconstruction software involves the following steps: registration, segmentation, interpolation, and volumetric rendering.[11]
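The registration and interpolation steps can be sketched in a few lines of NumPy. This is a deliberately simplified illustration, not the workflow of Voloom or Image-Pro Premier 3D: it aligns consecutive sections by a whole-pixel translation estimated with FFT-based cross-correlation, then linearly interpolates between sections along the z-axis (both function names are our own).

```python
import numpy as np

def register_translation(fixed, moving):
    """Estimate the integer (dy, dx) shift aligning two serial sections,
    via FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(fixed) * np.conj(np.fft.fft2(moving))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts into the range [-size/2, size/2)
    if dy > fixed.shape[0] // 2: dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2: dx -= fixed.shape[1]
    return dy, dx

def reconstruct_volume(sections, z_factor=4):
    """Register each section to its predecessor, stack into (z, y, x),
    then linearly interpolate between sections along z to approximate
    isotropic voxels."""
    aligned = [sections[0].astype(float)]
    for sec in sections[1:]:
        dy, dx = register_translation(aligned[-1], sec)
        aligned.append(np.roll(sec.astype(float), (dy, dx), axis=(0, 1)))
    stack = np.stack(aligned)
    zi = np.linspace(0, stack.shape[0] - 1, z_factor * (stack.shape[0] - 1) + 1)
    lo = np.floor(zi).astype(int)
    hi = np.minimum(lo + 1, stack.shape[0] - 1)
    t = (zi - lo)[:, None, None]
    return (1 - t) * stack[lo] + t * stack[hi]
```

Real pipelines use subpixel, nonrigid registration and tissue segmentation before rendering; the sketch only conveys the order of operations.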
Figure 1

Three-dimensional reconstruction of lung adenocarcinoma from serial two-dimensional whole slide images. H and E-stained glass slide-derived whole-slide images are run through commercially available or custom software, in order to generate three-dimensional models through registration, segmentation, interpolation and volumetric rendering or serial two-dimensional sections


LASER SCANNING

3D laser scanning, often abbreviated to just 3D scanning, has been used for reverse engineering and part inspection in the manufacturing industry, as well as for digital actor, prop, and set recreation in the visual effects industry. However, 3D scanners, like other emerging technologies, have experienced a dramatic decrease in equipment costs, making 3D scanning more accessible to a wider audience.[21,22] 3D scanners analyze real-world objects to gather data on their shape and color. Data collected from the scanner is then used to construct a 3D mesh, which can be printed using various additive manufacturing (3D printing) methods. 3D models, which include 3D meshes, are best defined as “numerical description(s) of an object that can be used to render images of the object from arbitrary viewpoints and under arbitrary lighting conditions.”[23] A list of common terminology, within the realm of 3D scanning, is provided in Table 1.
Table 1

Common terminology for three-dimensional scanning


THREE-DIMENSIONAL RECONSTRUCTION

The 3D reconstruction process for laser scanners begins with the conversion of raw data elements into point clouds (or vertices) of geometric samples from the surface of the object, which are viewed and manipulated using graphics applications [Figure 2a]. A meshing process then takes place, whereby the vertices (points) in the point cloud are algorithmically connected to form a manifold surface called a mesh [Figure 2b]. That mesh is then generally stored as a series of components defining each polygon that makes up its surface.[24,25] At this point, the 3D model, which is typically a polygonal mesh, is further refined using one of several different 3D modeling applications. Next, images called textures are mapped onto the surface of the mesh in order to faithfully represent the original color of the object that was scanned [Figure 2c]. This is achieved by mapping each 3D vertex coordinate onto a corresponding coordinate within a 2D parametric (UV) unit plane. During the rendering process, this UV mapping is used to broadcast a 2D texture across the 3D surface of the model.
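As a concrete illustration of how a mesh is stored as a list of polygon-defining components, the minimal sketch below writes vertices, their UV texture coordinates, and faces in the widely used Wavefront OBJ text format (the helper function is our own, not from any specific scanner software).

```python
def write_obj(path, vertices, uvs, faces):
    """Write a textured triangle mesh in Wavefront OBJ format.
    vertices: list of (x, y, z); uvs: one (u, v) per vertex;
    faces: list of vertex-index triples (0-based)."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")      # 3D position
        for u, v in uvs:
            f.write(f"vt {u} {v}\n")         # 2D texture (UV) coordinate
        for a, b, c in faces:
            # OBJ indices are 1-based; each v/vt pair links a vertex's
            # geometry to its location in the texture image
            f.write(f"f {a+1}/{a+1} {b+1}/{b+1} {c+1}/{c+1}\n")

# A single textured triangle: each vertex carries both a 3D coordinate
# and the UV coordinate where its color is sampled from the texture.
write_obj("triangle.obj",
          vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
          uvs=[(0, 0), (1, 0), (0, 1)],
          faces=[(0, 1, 2)])
```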
Figure 2

Three-dimensional model generation pipeline through point clouds, meshes, and texture Maps for three-dimensional scanners. (a) For three-dimensional scanners, the reconstruction process begins with the conversion of the raw data elements into point clouds of geometric samples from the surface of the object. (b) A meshing process then takes place, whereby the points are algorithmically connected to form a manifold surface called a mesh. (c) Next, UV mapping is used to broadcast and map two-dimensional images, called textures, onto the surface of the three-dimensional polygonal mesh in order to faithfully represent the original color of the object scanned

For many 3D scanners, multiple scans are required to produce a high-fidelity and complete 3D representation of the object being scanned. Generally, in between scans, the object is oriented along a different axis to ensure that point clouds are obtained from as many different directions as possible; this ensures that information is obtained from all sides of the object. After multiple scans are obtained, the individual scans are brought into a common reference system through a process usually referred to as alignment or registration. After the scans are registered, they are merged to create a more complete 3D model (i.e., a denser point cloud).[24] There is a wide range of potential approaches to 3D scanning, each with its own advantages and limitations. In this review, we mostly focus on nondestructive methods in which the object is left largely unaltered during digitization. Within this limited scope, 3D scanners fall into one of two broad categories: contact and noncontact data capture methods.[26] Figure 3 contains a broad classification scheme for both contact and noncontact 3D digitization methods. Table 2 provides additional characteristics associated with commonly used noncontact, volumetric, and surface scanning methods.
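The registration step described above is, at its core, a rigid-body fit. The sketch below implements the Kabsch algorithm, which solves that fit when point correspondences between two scans are already known; a full iterative-closest-point (ICP) registration alternates this solve with a nearest-neighbour correspondence search. The function name is our own.

```python
import numpy as np

def rigid_align(source, target):
    """Find the rotation R and translation t minimizing
    ||R @ s_i + t - t_i|| over corresponding points (Kabsch algorithm).
    source, target: (N, 3) arrays of corresponding points."""
    src_c = source - source.mean(axis=0)   # center both clouds
    tgt_c = target - target.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:               # exclude reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t
```

Once R and t are known, applying them to the source cloud brings both scans into the common reference system, after which the point sets can simply be concatenated (merged) into a denser cloud.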
Figure 3

Classification of three-dimensional digitizing methods. Three-dimensional scanners fall into broad contact and noncontact based categories. Contact three-dimensional scanners are sub-classified according to whether the scanning method is a destructive (e.g., knife-edge scanning microscopy, focused ion-beam scanning electron microscopy) or nondestructive process (e.g., coordinate measuring machine, articulated arms with encoders). Noncontact three-dimensional scanners are sub-classified by the type of electromagnetic radiation utilized. Within visible light-based noncontact scanning, methods can be further divided into devices that emit or absorb radiation

Table 2

Characteristics of commonly used noncontact three-dimensional technologies


CONTACT THREE-DIMENSIONAL SCANNING

Contact 3D scanners probe objects through physical touch, usually while the objects are mounted or laid upon a flat surface.[27,28] The touching of the contact probe to various points on the surface of the object results in data capture. This method of data collection is generally more accurate for defining the geometric form of an object than for organic freeform shapes. Mechanical contact-based digitizing is also more suitable for highly reflective, mirroring, or transparent objects and for objects with difficult-to-reach areas.[29] Contact 3D scanners are especially useful for industrial reverse engineering applications when precision is the most important factor. Limitations of contact scanning include the relatively slow scan speed and the necessity for physical contact, which may modify or permanently damage the object. As mentioned above, contact 3D digitization requires physically interacting with the object, such that contact 3D scanners are further split into one of two subtypes: destructive and nondestructive [Figure 3]. Nondestructive scanners require physical touch but leave the object largely intact. Many popular, commercially available 3D scanners, especially those employed for industrial applications, are of the nondestructive type. One example is the coordinate-measuring machine, which is commonly employed for reverse engineering, rapid prototyping, and large-scale part inspection. Destructive scanners, like automated serial block-face or serial section microscopy, produce volumetric data by consecutively removing minute layers of material, while digitizing each layer as it is processed. The process is repeated until the entire object has been fully digitized, and thus fully destroyed. Examples of destructive contact scanning include knife-edge scanning microscopy (KESM), micro-optical serial tomography, light-sheet microscopy, and focused-ion-beam scanning electron microscopy (FIBSEM).
These platforms combine robotics, computer vision, and advanced optics for high-throughput imaging and computational analysis.[30,31,32] For example, 3Scan's (San Francisco, CA, USA) commercial KESM platform [Figure 4] couples automated sectioning with light microscopy for imaging whole-mounted organs and large tissue volumes at speeds that are 1000 times faster than manual histology. These methods are popular among researchers in medical and health science fields such as connectomics, wherein high-resolution images are used to create structural maps of neural connections.[33,34,35]
Figure 4

Mark I knife-edge scanning microscopy platform (3Scan, Inc.)


NONCONTACT THREE-DIMENSIONAL SCANNING

Noncontact methods offer a faster and simpler option for obtaining 3D scans. Since the 1980s, optical (or light-based) noncontact scanners have become the preferred method for certain kinds of objects. These include large, freeform, flexible, or fragile objects; objects with numerous features; and objects where probe contact is not feasible (e.g., rare artifacts).[36] Optical, noncontact 3D scanning is further divided into active and passive subtypes. For both subtypes, the concept is more or less the same: light is reflected off an object's surface through an array of lenses and then onto an image sensor. Passive scanners illuminate objects using an undirected light source, such as ambient light. Passive scanning methods are simple to set up, have rapid measurement times, and some commercial versions provide automated surface matching. In contrast, active scanners employ a directed light source, such as lasers and light patterns. Computers calculate the 3D coordinates of points on an object's surface by comparing the image of an object lit by directed light with what would have been captured under known conditions (i.e., with no object present). Active scanning is the more popular method. Many commonly used noncontact, active 3D scanning microscopes use fluorescence imaging to provide contrast. Confocal, multi-photon, and light-sheet microscopy are often used in research laboratories to image small tissue samples at a limited depth. A fundamental challenge for currently available 3D fluorescence microscopy systems is the need to image large volumes of tissue at high resolution in a reasonable time frame. Confocal and multi-photon microscopy systems provide excellent resolution and contrast but can be prohibitively slow for imaging clinical specimens.
For example, a recent study required 30 h to image a single kidney biopsy specimen.[37] Light-sheet microscopy [Figure 5] offers superior speed of imaging compared to confocal and multi-photon microscopy systems while maintaining good resolution. These properties of light-sheet microscopy have led to high impact studies in neuroscience and developmental biology.[38,39] However, current commercially available light-sheet microscopy systems are ill-suited to imaging larger clinical specimens, which is an area of active research.
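The depth calculation performed by active scanners, described earlier in this section, reduces to similar triangles: the light source sits at a known baseline from the camera, and the lateral shift (disparity) of the projected spot on the sensor, relative to where it would appear with no object present, is inversely proportional to depth. A minimal sketch, with an illustrative function name and numbers:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Active triangulation: depth z = f * b / d, where f is the focal
    length in pixels, b the projector-camera baseline, and d the lateral
    shift of the projected spot compared to its no-object position."""
    return focal_px * baseline_mm / disparity_px

# Illustrative numbers: 1000 px focal length, 50 mm baseline, and a laser
# spot displaced 25 px from its no-object reference position.
depth_mm = depth_from_disparity(25.0, 1000.0, 50.0)  # 2000.0 mm
```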
Figure 5

Three-dimensional light-sheet microscopy image of a prostate biopsy measuring 2 cm in length by 1 mm in diameter. The biopsy specimen was chemically cleared with 2,2'-thiodiethanol to enable three-dimensional imaging, then stained with DRAQ5 (nuclear) and eosin (cytoplasmic) fluorescent dyes. A custom-built light-sheet microscope imaged the biopsy in three dimensions. The total time for clarification, staining, and imaging was <20 min. The nuclear and cytoplasmic channels were false-colored and volume rendered using Imaris software


FILE FORMATS

Once an object is “captured” by a 3D scanner, it is turned into a 3D model through a computer-based reconstruction process. 3D modeling and computer-aided design software can be used for further modification of the model. There are three primary methods of 3D modeling: organic modeling, hard surface modeling, and procedural modeling [Table 3].[40] These models are then formatted in one of many 3D file types, some of which are compatible with 3D printers and commercial 3D printing vendors [Table 4]. It is important to note that only a few file formats support the full gamut of geometry, colors, and textures. For example, the stereolithography format (.stl), which is arguably the most popular 3D file format for 3D printing, only supports geometric features. In addition, many commonly used 3D microscopy visualization software packages, including Vaa3D[41] and ImageJ,[42] use raster image formats such as TIFF. These raster image formats are much more computationally expensive than the vector formats listed in Table 4 and are not accelerated by the rapid advances in graphical processing units.[43]
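To make concrete why the stereolithography format supports only geometric features, the sketch below serializes triangles to ASCII STL: each facet carries just a normal and three vertices, with no fields for color, texture coordinates, or other per-vertex attributes (the helper function is our own).

```python
def write_stl_ascii(path, triangles, name="model"):
    """Serialize triangles to ASCII STL. Each facet records only a
    normal and three vertices -- the format has no slots for color or
    texture, which is why textured models need richer formats."""
    def sub(p, q):
        return (p[0] - q[0], p[1] - q[1], p[2] - q[2])
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0])
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for a, b, c in triangles:
            # Facet normal from the cross product of two edges; many
            # readers recompute it from the vertices anyway.
            n = cross(sub(b, a), sub(c, a))
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n")
            f.write("    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")
```

Compare this with formats such as OBJ or PLY, which pair each vertex with texture coordinates or color values and can therefore round-trip a scanner's full output.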
Table 3

Various types of three-dimensional computer-aided design modeling

Table 4

Popular three-dimensional file formats


PRACTICAL APPLICATIONS

Research and clinical pathology both use 3D reconstruction of whole slide images. Recent clinical examples include the classification of lung adenocarcinomas, the diagnosis of colorectal pathologies from small biopsies, and the assessment of breast cancer metastasis to lymph nodes.[7,44,45] Other applications include the study of anatomical and micro-architectural features of normal tissue, tumor invasion, growth factor expression, and the localization of therapeutic targets in relation to microvasculature.[9,46] The reconstruction of whole slide images is hindered by digital artifacts from tissue sectioning and image capture, inconsistent image quality, and the arduous process of manual tissue sectioning. Serial tissue sectioning is the most significant obstacle, due to the labor- and time-intensive nature of optimizing the alignment of tissue sections.[9] Given these limitations, a careful cost-benefit analysis is needed when considering this method for scientific inquiry. 3D scanning is relatively prevalent across many nonmedical domains. High-end commercial scanners are used by archeologists and preservationists to acquire models of remains, historical artifacts, and large excavations.[47,48,49] The aerospace, mechanical, and structural engineering sectors rely on 3D scanners to document structural dimensions, monitor structural deformations, and reverse engineer objects.[50,51,52] Industrial manufacturing utilizes 3D scanning for quality assurance and inspection.[50,53,54] 3D scanning has also been a major part of the visual effects and gaming industries for over 20 years.[55] In the medical field, 3D scanners are used for several purposes, including the modeling of intricate anatomical structures, the planning of complex surgical procedures, the custom fabrication of medical devices, and the diagnosis of rare medical conditions.[56-68] However, 3D scanners are currently nearly absent from anatomic and clinical pathology.
An exception to this is their use in forensic pathology, for the documentation of specific injuries and as a means of virtual autopsy.[69,70] Virtual autopsies (virtopsies) routinely combine surface (i.e., photogrammetric) and volumetric (i.e., CT or MRI) scans as a means of examining tissues of the deceased in a digital environment.[71] Recent investigations into the use of 3D printing in anatomic pathology have driven the use of 3D scanners for gross surgical specimen capture.[72] The use of 3D scanners for automated capture of gross surgical specimens is potentially feasible for digital archiving of specimens (e.g., in the laboratory information system), telepathology, education, medicolegal documentation, and experimental research. Presumably, 3D scanning of gross surgical pathology specimens can reproduce realistic models of pathologic entities. These models can be used in medical training, clinical research, education, and clinicopathological correlation at multidisciplinary conferences. Furthermore, the application of 3D scanning techniques need not be confined to the macro level. Destructive 3D scanning of entire tissue blocks, through KESM, serial block-face scanning electron microscopy, and FIBSEM, can also provide microscopic and ultrastructural 3D models of patient specimens. Datasets of varying levels of resolution, from sub-millimeter radiographic studies to sub-micron pathologic investigations, can be combined and rendered into an integrated, fully comprehensive 3D model. These models would undoubtedly prove useful for many processes, including tumor staging, margin assessment, pathologic-radiologic correlation, macro-microscopic correlation, and better insight into disease processes.[73]

CONCLUSION

3D scanning and 3D imaging are emerging disruptive technologies whose broad range of applications has the potential to expand into pathology practice. Driven by technological advances, these tools continue to get cheaper, smaller, more reliable, and easier to use. Future 3D scanners will benefit from significant gains in scan rates; currently, low scan rates represent a major technical bottleneck for many low-end desktop and handheld 3D scanners. Recently, Kadambi et al. described an innovative method of 3D scanning that achieves high-quality depth sensing using polarization cues.[74] Their approach allows the creation of high-resolution 3D images from only a sparse number of 2D pictures taken by cameras with polarized lenses. Furthermore, the resolution of the images produced by this form of 3D scanning is much higher than that produced by high-precision laser scanners. While still in development, techniques like polarized 3D scanning will inevitably usher in higher-resolution 3D models that can be acquired rapidly and cost-effectively. Related technologies are also materializing and are poised to profoundly change how we view and interact with 3D data. Chief among them are virtual and augmented reality wearable headsets and controllers (e.g., Oculus Rift, HoloLens).[75]

SUPPLEMENTARY

Downloadable 3D image of murine vasculature from the forebrain, created using the KESM platform: https://sketchfab.com/models/9757aed6ddf14265aaf94f936086c372.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
REFERENCES (40 in total)

1.  Design and fabrication of custom mandible titanium tray based on rapid prototyping.

Authors:  Sekou Singare; Li Dichen; Lu Bingheng; Liu Yanpu; Gong Zhenyu; Liu Yaxiong
Journal:  Med Eng Phys       Date:  2004-10       Impact factor: 2.242

2.  Three-dimensional reconstruction of sentinel lymph nodes with metastatic breast cancer indicates three distinct patterns of tumour growth.

Authors:  E C Paish; A R Green; E A Rakha; R D Macmillan; J R Maddison; I O Ellis
Journal:  J Clin Pathol       Date:  2009-03-19       Impact factor: 3.411

3.  High-resolution whole-brain staining for electron microscopic circuit reconstruction.

Authors:  Shawn Mikula; Winfried Denk
Journal:  Nat Methods       Date:  2015-04-13       Impact factor: 28.547

4.  The big data challenges of connectomics.

Authors:  Jeff W Lichtman; Hanspeter Pfister; Nir Shavit
Journal:  Nat Neurosci       Date:  2014-10-28       Impact factor: 24.884

5.  Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain.

Authors:  Hans-Ulrich Dodt; Ulrich Leischner; Anja Schierloh; Nina Jährling; Christoph Peter Mauch; Katrin Deininger; Jan Michael Deussing; Matthias Eder; Walter Zieglgänsberger; Klaus Becker
Journal:  Nat Methods       Date:  2007-03-25       Impact factor: 28.547

6.  LIVER WHOLE SLIDE IMAGE ANALYSIS FOR 3D VESSEL RECONSTRUCTION.

Authors:  Yanhui Liang; Fusheng Wang; Darren Treanor; Derek Magee; George Teodoro; Yangyang Zhu; Jun Kong
Journal:  Proc IEEE Int Symp Biomed Imaging       Date:  2015-04

7.  Linear accuracy and reliability of cone beam CT derived 3-dimensional images constructed using an orthodontic volumetric rendering program.

Authors:  Danielle R Periago; William C Scarfe; Mazyar Moshiri; James P Scheetz; Anibal M Silveira; Allan G Farman
Journal:  Angle Orthod       Date:  2008-05       Impact factor: 2.079

8.  Introducing 3-Dimensional Printing of a Human Anatomic Pathology Specimen: Potential Benefits for Undergraduate and Postgraduate Education and Anatomic Pathology Practice.

Authors:  Amr Mahmoud; Michael Bennett
Journal:  Arch Pathol Lab Med       Date:  2015-08       Impact factor: 5.534

9.  Improving Patient Care by Incorporation of Multidisciplinary Breast Radiology-Pathology Correlation Conference.

Authors:  Seema Prakash; Shambhavi Venkataraman; Priscilla J Slanetz; Vandana Dialani; Valerie Fein-Zachary; Nancy Littlehale; Tejas S Mehta
Journal:  Can Assoc Radiol J       Date:  2015-11-26       Impact factor: 2.248

10.  Machine-based morphologic analysis of glioblastoma using whole-slide pathology images uncovers clinically relevant molecular correlates.

Authors:  Jun Kong; Lee A D Cooper; Fusheng Wang; Jingjing Gao; George Teodoro; Lisa Scarpace; Tom Mikkelsen; Matthew J Schniederjan; Carlos S Moreno; Joel H Saltz; Daniel J Brat
Journal:  PLoS One       Date:  2013-11-13       Impact factor: 3.240

