| Literature DB >> 30653713 |
Thomas L Semple1, Rod Peakall1, Nikolai J Tatarnic2,3.
Abstract
Methods for 3D-imaging of biological samples are experiencing unprecedented development, with tools such as X-ray micro-computed tomography (μCT) becoming more accessible to biologists. These techniques are inherently suited to small subjects and can simultaneously image both external and internal morphology, thus offering considerable benefits for invertebrate research. However, methods for visualising 3D-data are trailing behind the development of tools for generating such data. Our aim in this article is to make the processing, visualisation and presentation of 3D-data easier, thereby encouraging more researchers to utilise 3D-imaging. Here, we present a comprehensive workflow for manipulating and visualising 3D-data, including basic and advanced options for producing images, videos and interactive 3D-PDFs, from both volume and surface-mesh renderings. We discuss the importance of visualisation for quantitative analysis of invertebrate morphology from 3D-data, and provide example figures illustrating the different options for generating 3D-figures for publication. As more biology journals adopt 3D-PDFs as a standard option, research on microscopic invertebrates and other organisms can be presented in high-resolution 3D-figures, enhancing the way we communicate science.
Keywords: Blender; Drishti; Meshlab; PDF; computed tomography
Year: 2019 PMID: 30653713 PMCID: PMC6590182 DOI: 10.1002/jmor.20938
Source DB: PubMed Journal: J Morphol ISSN: 0022-2887 Impact factor: 1.804
Figure 1. Framework for 3D‐data manipulation and visualisation, including four workflows (two basic and two advanced) for creating PDF figures from 3D‐data using the programs Drishti, MeshLab and Blender. The four images provide examples of the figures achievable from each of the corresponding workflows. Software used at each step is highlighted in bold, and file formats in italics.
Figure 2. 3D‐volume rendering of a female thynnine wasp head (Hymenoptera: Thynninae: Ariphron sp.) produced using the Basic Drishti workflow. This image exemplifies the detail available in μCT scan data, with fine setae and antennal sensory pores clearly visible.
Figure 3. (a) 3D‐rendering of the terminal abdominal segments of a male thynnine wasp (Hymenoptera: Thynninae: Catocheilus sp.). (b) Virtual dissection and segmentation of the male genitalia (red), generated using the Advanced Drishti workflow. Scale bar = 1 mm.
Figure 4. Note: To enable the interactive function of this figure, open the PDF in the Adobe Reader program or web plug‐in.
Interactive 3D‐PDF of a female thynnine wasp head (Thynninae: Ariphron sp.) generated using the Basic MeshLab workflow. This 3D‐mesh has been annotated to provide a self‐contained resource for science communication and education. A key benefit of presenting such data from primary research in the PDF format is that those images can subsequently be used in education or for science communication without specialised software.
Figure 5. Note: To enable the interactive function of this figure, open the PDF in the Adobe Reader program or web plug‐in.
Interactive 3D‐PDF of a female thynnine wasp head (Thynninae: Ariphron sp.) generated using the Advanced Blender workflow. This 3D‐mesh has been segmented and labelled to provide a highly informative scientific figure. Here, we also provide a basic example of how ‘rigging’ can be used to virtually reposition limbs or other components (in this case the mandibles of the wasp head). This can be thought of as ‘virtual insect pinning’: it provides a standardised layout to facilitate examination, much as traditional insect pinning does.
Glossary of terms
| Term | Definition |
| --- | --- |
| 3D‐reconstruction | The process of converting raw image projections (e.g., from X‐ray computed tomography) into cross‐sectional stacks of images, which resemble traditional tomographic sections. |
| 3D‐surface mesh | A series of 2D polygons (typically triangles or quadrangles) linked together to recreate the surface of a 3D‐object. This format is required for creating the interactive 3D‐PDFs described in this article, and for 3D‐printing. |
| Computed tomography | Commonly known as CT or ‘CAT’ scanning, a process for virtually recreating a three‐dimensional object from a series of sequential, cross‐sectional image slices, traditionally with microtome sectioning, but now more commonly with X‐ray imaging. |
| Projections | The raw images recovered from X‐ray CT imaging as the X‐ray source and camera rotate 360° around the subject (or the subject rotates as in μCT scanners). A higher number of projections results in a smaller angle between each projection, and therefore less noise in the reconstructed 3D‐image (but also a longer acquisition time). |
| Rendering | The process of adding colours, textures and lighting to a 3D‐object, which then determine the appearance of the final image or ‘render’. When imaging biological specimens, lighting is particularly important for visualising the true textures (e.g., from X‐ray CT) and colours (e.g., from photogrammetry) of the subject. |
| Rigging | The process of adding a virtual ‘skeleton’ to a 3D‐surface mesh in order to articulate joints and move or animate sections of the mesh independently. Used in this article to virtually reposition the mandibles of a scanned insect. |
| Segmentation (of 3D‐data) | The process of separating different regions of volumetric or surface mesh‐based 3D‐data. Typically used to aid visual differentiation, or for animation, but now also particularly useful for virtual dissections of internal morphology of invertebrates, which cannot easily be isolated by adjusting the visible range of densities (as one would for a vertebrate skeleton). |
| Volumetric 3D‐data | A cloud of ‘voxels’ (three‐dimensional pixels) that make up a virtually reconstructed 3D‐object generated by X‐ray CT scanning or similar, where each voxel contains information about the opacity of the original material (e.g., X‐ray absorption), thus providing measurable volumetric data of any part of the object. |
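The surface area of a 3D‐surface mesh, as defined in the glossary, follows directly from its structure: each triangular face contributes half the magnitude of the cross product of two of its edge vectors. A minimal, stdlib-only sketch (the vertex/face layout is an assumption for illustration, not a format used by the article's software):

```python
import math

def triangle_area(a, b, c):
    """Area of a 3D triangle from half the cross-product magnitude of two edges."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def mesh_surface_area(vertices, faces):
    """Total area of a triangle mesh: vertices is a list of (x, y, z) tuples,
    faces a list of (i, j, k) vertex-index triples (a hypothetical layout)."""
    return sum(triangle_area(vertices[i], vertices[j], vertices[k])
               for i, j, k in faces)

# A unit square built from two triangles; total area is 1.0
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]
area = mesh_surface_area(verts, faces)  # 1.0
```

Mesh-processing tools such as MeshLab report this quantity directly; the sketch only shows where the number comes from.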
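The glossary entries on segmentation and volumetric 3D‐data can be tied together in a toy sketch: threshold a voxel grid on its opacity (X‐ray absorption) values, then convert the voxel count to a physical volume using the voxel edge length. The grid values, threshold and voxel size below are made-up illustrative numbers, not data from the article:

```python
# Synthetic 4x4x4 volume of absorption values; a denser "inclusion"
# occupies one 2x2x2 corner block (8 voxels at 0.9, the rest at 0.1).
volume = [[[0.9 if (x < 2 and y < 2 and z < 2) else 0.1
            for z in range(4)] for y in range(4)] for x in range(4)]

def segment_and_measure(volume, threshold, voxel_size_mm):
    """Count voxels at or above an absorption threshold and convert the
    count to a physical volume, assuming isotropic voxels of edge
    voxel_size_mm (a simplification; real scans may be anisotropic)."""
    count = sum(1 for plane in volume for row in plane for v in row
                if v >= threshold)
    return count, count * voxel_size_mm ** 3

n_voxels, vol_mm3 = segment_and_measure(volume, threshold=0.5,
                                        voxel_size_mm=0.01)
# n_voxels == 8; vol_mm3 == 8 * (0.01 mm)^3 = 8e-6 mm^3
```

Real segmentation of invertebrate internal morphology, as the glossary notes, usually cannot rely on a single density threshold and is done interactively in tools such as Drishti; the sketch only illustrates how voxel counts become measurable volumes.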