Duncan J Irschick1, Fredrik Christiansen2, Neil Hammerschlag3,4,5, Johnson Martin6, Peter T Madsen7,2, Jeanette Wyneken8, Annabelle Brooks9, Adrian Gleiss10,11, Sabrina Fossette12, Cameron Siler13, Tony Gamble14,15,16, Frank Fish17, Ursula Siebert18, Jaymin Patel19, Zhan Xu20, Evangelos Kalogerakis20, Joshua Medina1, Atreyi Mukherji21, Mark Mandica22, Savvas Zotos23,24, Jared Detwiler25, Blair Perot26, George Lauder27. 1. Department of Biology, 221 Morrill Science Center, University of Massachusetts, Amherst, MA 01003, USA. 2. Aarhus Institute of Advanced Studies, Høegh-Guldbergs Gade 6B, Aarhus C, Denmark. 3. Rosenstiel School of Marine and Atmospheric Science, University of Miami, Miami, FL, USA. 4. Leonard and Jayne Abess Center for Ecosystem Science and Policy, University of Miami, Coral Gables, FL, USA. 5. Shark Research and Conservation Program, University of Miami, Miami, FL, USA. 6. 329 E Main Street, Wilmore, KY 40390, USA. 7. Zoophysiology, Department of Biology, Aarhus University, Aarhus, Denmark. 8. Department of Biological Sciences, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431-0991, USA. 9. Cape Eleuthera Institute, PO Box EL-26029, Rock Sound, Eleuthera, The Bahamas. 10. Centre for Sustainable Aquatic Ecosystems, Harry Butler Institute, Murdoch University, 90 South Street, Murdoch, WA 6150, Australia. 11. College of Science, Health, Engineering and Education, Murdoch University, 90 South St, Murdoch, WA 6150, Australia. 12. Biodiversity and Conservation Science, Department of Biodiversity, Conservation and Attractions, 17 Dick Perry Avenue, Kensington, WA 6151, Australia. 13. Sam Noble Oklahoma Museum of Natural History and Department of Biology, University of Oklahoma, 2401 Chautauqua Avenue, Norman, OK 73072-7029, USA. 14. Marquette University, Wehr Life Sciences 109, 1428 W. Clybourn Street, Milwaukee, WI 53233, USA. 15. Milwaukee Public Museum, 800 W. Wells Street, Milwaukee, WI 53233, USA. 16. 
Bell Museum of Natural History, University of Minnesota, 1987 Upper Buford Circle, St. Paul, Minnesota 55108, USA. 17. Department of Biology, West Chester University, West Chester, PA 19383, USA. 18. Institute for Terrestrial and Aquatic Wildlife Research, University of Veterinary Medicine, Hannover, Werftstrasse 6, 25761 Buesum, Germany. 19. Temple University Kornberg School of Dentistry, 3223 North Broad Street, Philadelphia, PA 19140, USA. 20. College of Information and Computer Sciences, University of Massachusetts, 140 Governors Dr., Amherst, MA, USA. 21. University of Texas Southwestern Medical Center, 5323 Harry Hines Boulevard, Dallas, TX 75390, USA. 22. Amphibian Foundation, 4055 Roswell Road NE, Atlanta, GA 30342, USA. 23. Terra Cypria-the Cyprus Conservation Foundation, Agiou Andreou 341, 3035 Limassol, Cyprus. 24. School of Pure and Applied Sciences, Open University of Cyprus, PO Box 12794, 2252 Nicosia, Cyprus. 25. 22 Hooker Avenue Unit 1, Northampton, MA 01060, USA. 26. Department of Mechanical and Industrial Engineering, University of Massachusetts, Amherst, MA 01003-2210, USA. 27. Museum of Comparative Zoology, Harvard University, Cambridge, MA 02138, USA.
Abstract
The study of biological form is a vital goal of evolutionary biology and functional morphology. We review an emerging set of methods that allow scientists to create and study accurate 3D models of living organisms and animate those models for biomechanical and fluid dynamic analyses. The methods for creating such models include 3D photogrammetry, laser and CT scanning, and 3D software. New multi-camera devices can be used to create accurate 3D models of living animals in the wild and in captivity. New websites and virtual reality/augmented reality devices now enable the visualization and sharing of these data. We provide examples of these approaches for animals ranging from large whales to lizards and show applications for several areas: natural history collections, body condition/scaling, bioinspired robotics, computational fluid dynamics (CFD), machine learning, and education. We provide two datasets to demonstrate the efficacy of CFD and machine learning approaches and conclude with a prospectus.
For as long as humans have existed on the earth, we have devised new ways to visualize and depict the living world around us. This impulse is reflected in some of the earliest cave drawings, as well as in the first forays into the formal scientific depiction of organisms in natural history drawings. The earliest studies of microscopy revealed forms of life that were not then known to exist, beginning in earnest with innovations in lens manufacturing in the 1500s in Europe and the Middle East. Further innovations in camera technology enabled new 2D and 3D techniques for visualizing shape and color in living organisms (e.g., Baqersad et al., 2017; Chiari et al., 2008). Today, we boast a wide range of visualization techniques for organisms of varying sizes, from the smallest bacteria to massive whales (e.g., Bot and Irschick, 2019; Gignac and Kley, 2014; Gignac et al., 2016; Kim and Kim, 1999; Laha et al., 2014; Silverstein et al., 2006; Szaflik, 2007). The urgency of visualizing the shape of life is felt perhaps more deeply today than ever before because of the extinction crisis for biodiversity during the Anthropocene (Turvey and Crees, 2019). Simply put, many organisms are disappearing, and thus there is increased interest in their documentation. More broadly, defining the “shape” or “form” of organisms is central to all concepts of biodiversity (Adams et al., 2009; Brown et al., 2013; Falkingham, 2012; Gaston, 2003; Pimm et al., 1995). As one of the primary goals of ecology and evolution is to understand the origins and maintenance of biodiversity, methods for defining the shape of organisms and phenotypic diversity are important.

Several scientific fields have aimed to describe the mechanistic, ecological, and evolutionary causes of variation in body shape.
First, the field of evolutionary developmental biology (i.e., evo-devo) has focused on elucidating the genetic and developmental basis of organismal body form (Albertson et al., 2003; Irschick et al., 2013; Müller, 2007; Raff, 2000). Second, the field of biometry has employed quantitative and statistical methods for describing differences in body shape among and within species (Boyer et al., 2015; Zelditch et al., 2012). Third, evolutionary ecologists and eco-morphologists have examined the origins of diversity in the context of resource use (Bock, 1994; Irschick et al., 1997; Pianka, 2001; Wainwright and Reilly, 1994). The common thread for these lines of inquiry is explaining why and how variation in animal form exists, with the bulk of this interest focused on external features of organisms, such as dimensions of the head, limbs, and body axis. This interest is justified not only because the external shape is what humans most obviously witness, but also because of the importance of external shape for how animals interact with their environment, such as the importance of wing shape for flight, or limb and body dimensions for locomotion (Irschick and Higham, 2016). Although the internal structures and anatomy of organisms also play a vital role in all the above aspects, quantifying external aspects of shape represents a valuable goal and is especially important for emerging areas such as bioinspired robotics and computational analyses of animal locomotion.

Here, we discuss an exciting new approach to the study of organismal shape: the creation of 3D models of living organisms, which provides a new platform for the study of body shape. The focus here is on relatively non-invasive methods of 3D scanning and reconstruction, such as creating a 3D digital photogrammetry scan of an animal while it is eating at a zoo, or in the wild.
This non-invasive angle opens opportunities for working with rare and elusive megafauna that are challenging or impossible to keep in captivity, such as most whales. Our focus is on approaches taken by several of us, but we note that the broader goal of creating life-like 3D organisms is built on a foundation laid by many other scientists whom we discuss and cite. It is important to contrast this scientific goal with the already established practice of computer graphics animal modeling within the entertainment industry, which blends artistic and scientific approaches and is often proprietary. We emphasize that while the techniques used to recreate 3D digital animals, such as CT scanning, 3D digital photogrammetry, and laser scanning, are well established, how these tools are used in combination with software (e.g., Blender, Maya) to recreate living animals in a scientifically accurate manner remains a work in progress.

Rather than supplanting prior scientific methods, such as biometry, this approach provides a new venue in which these and other methods can operate. We argue that in an ideal world, scientists would have access to a digital replica of living animals so that they could iteratively measure their shape and color. However, is such an approach feasible? Furthermore, what additional value do such 3D models provide for ecology, evolution, and functional morphology that is not already being met with current methods? We aim to answer these questions and argue that the above-stated goal is now within the reach of scientists, with far-reaching implications. We discuss how new techniques and conceptual methods in 3D reconstruction are opening new opportunities for scientists in a wide range of disciplines.
New methods for creating digital avatars of living organisms
To appreciate the potential value of creating digital avatars of living animals, let us imagine a few of the numerous scientific applications of such an approach. Before doing so, it is important to state that we focus this review on the external shape of living animals, and not on color or internal anatomy. The quantification of color and internal anatomy each have a long and rich history, and while the 3D modeling approach has implications for each, such treatment is beyond the scope of this perspective. There are ongoing efforts to use 3D photogrammetry approaches to study color. Like any photograph, a 3D model can be analyzed in terms of its color profile in the same manner as a 2D digital photograph, and it is also possible to link the physical position of colors onto a shape to investigate basic ecological and evolutionary questions. Also, our coverage here is germane to all living organisms, although the taxonomic sampling described in this article is relatively limited, and consists mostly of reptiles, amphibians, fish, and marine mammals. Modifications of the techniques and concepts discussed here could, in theory, be applied to the vast bulk of living organisms.

There are several potential scientific applications for digital animal avatars. First, a digital avatar would provide biometricians unprecedented ability to measure body shape with a range of innovative techniques. Instead of relying only on 2D images, a preserved specimen, or perhaps brief access to a live specimen, a digital avatar could enable a nearly infinite range of measurement opportunities at the convenience of the investigator (Figures 1 and 2). Importantly, a digital avatar could comfortably dwell on an investigator’s computer or hard drive for a lifetime, whereas a preserved specimen might have to be returned to a museum.
Second, engineers interested in creating bioinspired robots would have direct access to a digital replica from which they could gain inspiration, and for which they could test the functional value of specific body parts, such as using flow-tank, or computational fluid-dynamics, simulations (Fish and Lauder, 2017; Laforsch et al., 2012; Liu, 2002; Miller et al., 2012). Third, scientists interested in body condition would be able to directly measure volumes and estimate mass, which is especially important for large or elusive megafauna (e.g., marine mammals, sharks, large herbivores, and so forth), which often travel long distances and expend large amounts of energy (Christiansen et al., 2018, 2019; Jakob et al., 1996). Finally, functional morphologists and biomechanists would be able to digitally manipulate the motions and shapes of these digital avatars in computational simulations, which would allow them to better understand the adaptive basis of morphological variation (Biewener, 2003; Biewener and Patek, 2017; Irschick and Higham, 2016; Lauder, 1990). Behaviorists could also use the 3D models as playbacks for manipulating receivers (Fleishman and Endler, 2000).
Figure 1
3D photogrammetry methods for reconstructing live animals
Multi-camera scanning methods for photo-scanning various kinds of animals using 3D digital photogrammetry: (A) a small reptile (a tokay gecko, Gekko gecko, photographed in the Irschick lab at the University of Massachusetts at Amherst, USA); (B) a medium-sized sea turtle (a green sea turtle, Chelonia mydas, photographed at the Loggerhead MarineLife Center, USA); (C) a large mammal (a southern white rhino, Ceratotherium simum, photographed at the Perth Zoo, Australia). (D-F) A typical process for 3D photogrammetry of live specimens, including multiple photos (D), creation of a 3D surface using 3D software (E), and finally a full-color version (F) that closely replicates the original specimen. Photos are of a marine toad (Rhinella marina) taken in the Philippines.
Figure 2
3D shapes of various animal species
3D whole-body meshes of the 14 species described in Table 1: (A) southern right whale (Eubalaena australis), (B) southern white rhino (Ceratotherium simum simum), (C) blacktip shark (Carcharhinus limbatus), (D) harbor porpoise (Phocoena phocoena), (E) loggerhead sea turtle (Caretta caretta), (F) flatback sea turtle (Natator depressus), (G) green sea turtle (Chelonia mydas), (H) Kemp's ridley sea turtle (Lepidochelys kempii), (I) hawksbill sea turtle (Eretmochelys imbricata), (J) Cyprus racer snake (Dolichophis jugularis), (K) sling-tailed agama (Stellagama stellio cypriaca), (L) tokay gecko (Gekko gecko), (M) flying gecko (Ptychozoon kuhli), (N) house gecko (Hemidactylus platyurus).
New elaborations on previously established methods and software now make these applications possible. It is noteworthy that the techniques for creating these digital avatars often do not fall neatly into one method (e.g., 3D photogrammetry), but represent a combination of hardware and software methods that continue to evolve. Indeed, there are exciting new opportunities for combining data from different scanning methods.
On the hardware end, dramatic improvements in computers, cameras, video, and laser and white-light scanning systems now provide an unparalleled set of tools that can be used to create 3D models (Baqersad et al., 2017; Chiari et al., 2008; Egels and Kasser, 2003; Falkingham, 2012; Irschick et al., 2020a, 2020b; Postma et al., 2015; Szaflik, 2007). Three frequently used techniques are computed tomography (CT) scanning, laser/white-light scanning, and 3D photogrammetry. CT scanning is very useful for providing high-resolution 3D scans at a range of animal sizes under standardized laboratory conditions, although it cannot be conducted easily on living animals, and the devices are expensive. This method can potentially provide accurate estimates of body shape for rigid-bodied organisms that undergo little physical alteration after death. Laser and white-light scanning both operate on the same basic principle of projecting either a laser or white light onto a surface and then measuring the degree of reflectance in 3D. The resulting surface map is used to create a 3D point cloud. Additional processing can be used to measure the intensity of color from the object and then project a color map onto the 3D model. One limitation of laser and white-light scanning is that both processes are relatively time-consuming and typically take multiple seconds to several minutes to scan a medium-sized object (∼10 cm). Scanning larger objects (e.g., >500 cm) can take much longer. Thus, laser and white-light scanning are possibly more suitable for immobile living organisms but are less effective for larger or mobile organisms. 3D photogrammetry is the process by which 2D images are translated into 3D data (Evin et al., 2016; Linder, 2009). The history of 3D photogrammetry goes back hundreds of years, but on a practical level, this method has most rapidly matured and developed within the last decade (Amado et al., 2019; Linder, 2009).
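The geometric core of photogrammetry, recovering a 3D point from its 2D projections in multiple calibrated views, can be illustrated with a toy two-camera example. The sketch below implements linear triangulation (the direct linear transform); production tools such as Meshroom or Colmap add automatic feature detection and matching on top of this step, and the camera matrices here are purely illustrative.

```python
# Toy sketch of linear triangulation (DLT): recover a 3D point from its
# pixel coordinates in two calibrated cameras. Camera setups are invented
# for illustration, not taken from any real scanning rig.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares 3D point from two 3x4 projection matrices and pixel coords."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two simple cameras: one at the origin, one translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 2.0])
uv1 = (X_true / X_true[2])[:2]                 # projection in camera 1
uv2 = ((X_true - [1, 0, 0]) / X_true[2])[:2]   # projection in camera 2
X_est = triangulate(P1, P2, uv1, uv2)
```

In a real reconstruction this step is repeated for thousands of matched image features, and the resulting point cloud is then meshed and textured.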
The modern incarnation of this method, 3D digital photogrammetry, consists of the direct conversion of 2D digital images, such as those captured by a handheld camera, into a 3D point cloud that is used to generate a texture map with overlying photographic color maps. Modern user-friendly software such as Meshroom, Colmap, and RealityCapture (e.g., AliceVision, 2018; Community, 2018) enables these reconstructions in relatively short time spans (e.g., minutes to hours). Unlike laser and white-light scanning, 3D photogrammetry works effectively across a range of size scales (e.g., mouse to elephant). Also, unlike CT scanning, 3D photogrammetry can be applied to mobile organisms and in remote areas. Limitations of 3D photogrammetry have been noted elsewhere (Baqersad et al., 2017), and include variability owing to camera resolution, lighting, photographic technique, and the choice of software for post-processing. For these reasons, many of the examples provided herein focus on various uses of 3D photogrammetry, but as noted later, combining various scanning techniques, such as laser/white-light scanning, 3D photogrammetry, and CT scanning, offers exciting possibilities for live animal 3D reconstruction.

Multi-camera systems offer great potential for scanning live animals, and the Digital Life team (www.digitallife3d.org) at the University of Massachusetts at Amherst has used the Beastcam technology platform, a set of portable, flexible multi-camera systems, to successfully scan a large range of organisms. The original Beastcam was a handheld, four-armed system, in which a single camera was mounted on each of four arms. Wires from each camera ran to a central trigger, which fired all four cameras simultaneously. Because of the small number of cameras, however, this system proved too limited for the photocapture of a wider range of organisms.
Other systems include the Beastcam ARRAY (a 15-arm static system), the Beastcam MACRO (a 12-arm static system), and the STAND system (a movable, tripod-based system), which were designed for mid-sized (4-8 inches), smaller (1-4 inches), and larger (>8 inches) organisms, respectively. 3D photogrammetry is highly accurate under certain conditions (Aldridge et al., 2005; Baqersad et al., 2017; Bythell et al., 2001; Dai and Mu, 2010; Huising and Pereira, 1998; Weinberg et al., 2004). These conditions include avoiding extreme wide-angle (fisheye) lenses and creating high-quality (in-focus) images that allow the object to be reconstructed at a high level of detail. Our own analyses confirm this high level of accuracy when comparing live specimens with their digital avatars, with accuracy levels of over 98% (Irschick et al., 2020b).

The Digital Life team has successfully used various kinds of multi-camera rigs to scan a range of live mobile animals, including sea turtles, sharks, lizards, frogs, scorpions, and cockroaches, among others (Table 1, see also https://sketchfab.com/DigitalLife3D). These specimens include reconstructions of whole live specimens, as well as 3D scans of organisms such as frogs or lizards sitting on branches or a leaf. Notably, it was possible to use the same techniques across a range of sizes, as demonstrated through similar methods of multi-camera scanning of animals ranging in size from a 5-g gecko (Hemidactylus frenatus) to a >37,000-kg southern right whale (Eubalaena australis, Table 1, Figure 2). In some cases, a combination of techniques was used to accurately recreate the body shapes of animals. For example, Irschick et al. (2020a) included images from an aerial drone and multiple GoPro cameras to create 3D scans of a live harbor porpoise (Phocoena phocoena). As shown in Irschick et al.
(2020a, 2020b, 2020c), the 3D reconstructions were very accurate, with overall error rates of less than about 1% when the digital avatar was compared with the original live animal.
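This kind of accuracy comparison reduces to a percent-error calculation between measurements taken on the live animal (e.g., with calipers) and the same landmarks measured digitally on the model. The measurement names and values below are hypothetical, for illustration only.

```python
# Hypothetical sketch: validating a digital avatar against its live specimen.
# Paired measurements (cm) of the same landmarks, taken on the live animal
# and digitally on the 3D model; values are illustrative, not real data.
live =    {"snout_vent": 12.40, "head_width": 2.10, "forelimb": 4.85}
digital = {"snout_vent": 12.33, "head_width": 2.13, "forelimb": 4.80}

def percent_error(measured: float, reference: float) -> float:
    """Absolute percent error of a digital measurement vs. the live reference."""
    return abs(measured - reference) / reference * 100.0

errors = {k: percent_error(digital[k], live[k]) for k in live}
mean_error = sum(errors.values()) / len(errors)
```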
Table 1
14 animal species used for 3D reconstruction
Common name | Species | Link
Southern right whale [a] | Eubalaena australis | https://skfb.ly/6Ryut
Southern white rhinoceros | Ceratotherium simum | https://skfb.ly/6WRvC
Blacktip shark | Carcharhinus limbatus | https://skfb.ly/6UpvY
Harbor porpoise [b] | Phocoena phocoena | https://skfb.ly/6VWZx
Loggerhead sea turtle | Caretta caretta | https://skfb.ly/6R7JL
Flatback sea turtle | Natator depressus | https://skfb.ly/6R7JK
Green sea turtle | Chelonia mydas | https://skfb.ly/6R7JJ
Kemp's ridley sea turtle | Lepidochelys kempii | https://skfb.ly/6R7JM
Hawksbill sea turtle | Eretmochelys imbricata | https://skfb.ly/6R7JN
Cyprus racer snake | Dolichophis jugularis | https://skfb.ly/6RMRn
Sling-tailed agama lizard | Stellagama stellio cypriaca | https://skfb.ly/6ROpL
Tokay gecko lizard | Gekko gecko | https://skfb.ly/6SNEn
Flying gecko lizard | Ptychozoon kuhli | https://skfb.ly/6ROpW
House gecko | Hemidactylus platyurus | https://skfb.ly/6ROpX
See links for models and further information on reconstruction methods. With the exception of the harbor porpoise and the southern right whale, all species were reconstructed using multi-camera 3D photogrammetry rigs (see Irschick et al., 2020b, 2020c for details).
[a] Reconstructed using a combination of top and side images from drones.
[b] Reconstructed through a combination of drone photos and a 3D photoscan of the animal while jumping.
Like all digital data, 3D photogrammetry models can be conveniently shared in different ways, such as by exporting .OBJ or .FBX files, which can be viewed in various virtual reality (VR) and augmented reality (AR) software and hardware. Because of their interest to the general public, the issues of licensing and proper data deposition for researchers are especially germane. A general principle of scientific publishing is that raw data should be made publicly accessible for non-profit use, both for verification purposes and so that other scientists and educators can use and learn from the data. Because of the rise of commercial 3D sharing platforms such as Sketchfab (Sketchfab.com), it is now possible to easily share 3D models. However, unlike most scientific data, accurate 3D models of living organisms also have potential commercial value, so finding an appropriate license structure that allows unfettered non-profit use, but not commercial use, is challenging. For example, the standard Creative Commons (CC) licenses typically allow free usage, with the option to stipulate certain restrictions on commercial use. However, because some educational usage is also technically commercial, this is an additional complication. Clearly, scientists as a community need to establish a new kind of license structure tailored for 3D models with commercial value, including 3D organisms. A final issue is a proper repository structure.
Ideally, scientific data should be stored as part of an existing non-profit structure (e.g., a website or a natural history museum collection database) that is stable and supports the mission of sharing scientific data. Especially important are basic standards for metadata, such as methods of model creation, model parameters (e.g., model size, number of triangles, and so forth), and details on the scientific specimen (see Grayburn et al., 2019 for more discussion).
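Some of this metadata can be extracted directly from the model file itself. Below is a minimal sketch of pulling vertex and face counts and a bounding-box size from a Wavefront .OBJ file; the embedded file contents are a hypothetical four-vertex example rather than a real specimen.

```python
# Minimal sketch: extract basic mesh metadata (vertex/face counts, bounding
# box) from a Wavefront .OBJ file, the kind of information a repository might
# require alongside a 3D "digital specimen". The OBJ text below is a
# hypothetical tetrahedron, used only to demonstrate the parsing.
obj_text = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
v 0.0 0.0 1.0
f 1 3 2
f 1 2 4
f 1 4 3
f 2 3 4
"""

def obj_metadata(text: str) -> dict:
    verts, faces = [], 0
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":        # geometric vertex line: v x y z
            verts.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":      # polygonal face line
            faces += 1
    mins = [min(v[i] for v in verts) for i in range(3)]
    maxs = [max(v[i] for v in verts) for i in range(3)]
    return {
        "vertices": len(verts),
        "faces": faces,
        "bbox_size": tuple(maxs[i] - mins[i] for i in range(3)),
    }

meta = obj_metadata(obj_text)
```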
Natural history collections
One of the many applications of life-like 3D models is complementing traditional specimen-based collections. The tradition of using physical museum samples for studies of systematics, anatomy, taxonomy, and ecomorphology dates back hundreds of years (Allmon, 1994; Bradley et al., 2014; Davis, 1996; Medina et al., 2020; Pleijel et al., 2008; Schindel and Cook, 2018; Shaffer et al., 1998; Suarez and Tsutsui, 2004). This practice typically involves the collection of live specimens or portions of dead specimens (e.g., skeletons) alongside the deposition of metadata. Live specimens are typically euthanized and then preserved in some fashion (e.g., formalin, ethanol) (Allmon, 1994; Davis, 1996; Huber, 2007; Suarez and Tsutsui, 2004) and maintained in museums as a resource for scientists and educators. The value of keeping whole physical specimens is clear. First, they typically enable species identification, such as through visual inspection or through measurements of key parameters (e.g., morphometrics). Second, physical specimens can be shared with other scientists, allowing their status to be compared among different individuals. Third, they provide access to genetic information (e.g., tissue) as well as internal anatomy. However, there is also a growing digitization movement within natural history collections, which raises questions about how kinds of data other than physical specimens can be stored in various natural history repositories, such as iDigBio (www.idigbio.org) or Morphosource (www.morphosource.org).

In this regard, life-like 3D digital models offer a new tool for field-based collections, especially in areas where permits for collecting and exporting whole specimens are challenging or impossible to obtain.
Increased effort toward field collection of 3D data would be a welcome addition to the ongoing efforts to catalog preserved museum specimens using 3D digital photogrammetry and CT scanning. Examples of ongoing efforts include the Florida Museum of Natural History (https://sketchfab.com/FloridaMuseum) and the Moore Lab of Zoology at Occidental College (https://sketchfab.com/MooreLab), among others. Medina et al. (2020) discuss in detail how 3D photogrammetry can be used as a tool for digitally archiving preserved field specimens, and many of those same lessons can be applied to field-based 3D life-like specimens. This broader effort of collecting 3D scans of living field specimens would fit in nicely with the priorities of museums for high-throughput digitization and sharing (Blagoderov et al., 2012; Hebert et al., 2013). One possibility would be to combine 3D digital scans of organisms taken in the field, such as a flower, frog, or insect, with the collection of DNA or other tissue. In such a case, specimens could then be released alive, which would allow sampling in areas where traditional collection methods are challenging. Given that the bulk of the characters that define many species can be accurately quantified through inspection of life-like 3D digital models, cross-referenced with DNA and metadata, it should be possible to identify species from each sample. As an example, for a sample of 3D models of various species of Philippine frogs, and frogs from the Atlanta Zoo, we used a set of characters from Watters et al. (2016), which reviewed species identification characters for frogs. We used the 16 most common of the 42 characters that Watters et al. (2016) identified as most useful for frog species descriptions. Using these characters, we were able to correctly identify and measure the vast bulk of them (85%) from the 3D models alone (see Data S1).

We note that the physical and digital 3D scans complement each other.
For example, the physical specimens provide valuable information on anatomy, whereas the living 3D digital specimens provide the ability to more easily share and visualize the specimen, as well as to visualize its overall posture and color. As an example, it was possible to obtain high-quality scans of five different frog species from several islands in the Philippines, and a snake and lizard species from different locations in Cyprus (Irschick et al., 2020c, Data S2, see https://sketchfab.com/DigitalLife3D for models). As the infrastructure, training, and protocols for this kind of 3D field-based documentation mature, it should be possible to achieve higher throughput. Moreover, the creation of 3D “digital specimens” would also open new opportunities for museum-based outreach and education. Although museums regularly house thousands of physical specimens of various species, the ability to create “digital specimens”, especially for larger megafauna (e.g., marine mammals, sharks) that are not well represented in museums, is enticing (Figure 5). Although many scientists are currently creating 3D scans of various kinds of existing museum specimens (e.g., bones), there is also the possibility of a museum hosting purely digital specimens, or working with existing online databases (e.g., www.morphosource.org, www.idigbio.org) to share such digital specimens.
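Many of the linear characters used for species identification reduce to distances between landmark points, which are straightforward to take on a 3D digital specimen. The sketch below computes one such measurement as a Euclidean distance between two landmark coordinates; the landmark positions are invented for illustration.

```python
# Hypothetical sketch: measuring a linear character (e.g., snout-vent length)
# directly on a 3D digital specimen as the straight-line distance between two
# landmark coordinates picked on the mesh. Coordinates are illustrative only.
import math

def distance_3d(p, q):
    """Euclidean distance between two 3D landmarks (same units as the model)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

snout = (0.0, 0.0, 0.0)   # landmark at snout tip (model units, e.g., mm)
vent  = (30.0, 4.0, 0.0)  # landmark at vent

svl = distance_3d(snout, vent)
```

Provided the model is exported at true scale (or scaled against a reference object photographed with the animal), such digital measurements are directly comparable to caliper measurements.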
Figure 5
Swimming parameters for scaled green sea turtle 3D model
Plots of swimming speed (x axis) versus swimming thrust force (A), swimming power (B), and standardized swimming power (C) based on computational fluid dynamics (CFD) simulations for three simulated age classes from 3D models of green sea turtles (Chelonia mydas).
Body condition and scaling
A significant challenge in working with living organisms is estimating basic parameters related to overall body size, which can be important for estimating body condition, and for understanding scaling, which examines how organisms change in various physical parameters as they grow, or as species evolve and become larger over evolutionary time. One challenge relates to the study of body condition, which can be defined as an animal’s overall mass or volume relative to its structural size (often body length, Jakob et al., 1996). In general, wild animals are rarely afforded opportunities to regularly consume more food than they need, and hence, most live on a dietary tightrope, with fuel input being barely sufficient (or insufficient) to cope with energetic demands. This problem is especially important for animals that migrate long distances and consequently, expend large amounts of locomotor energy, such as birds and some marine mammals and fishes (Labocha and Hayes, 2012). Body condition is commonly measured as body mass relative to body length, and while body mass is easily measured for smaller, more common organisms such as reptiles, fish, or insects, for larger megafauna such as marine mammals, sharks, and some terrestrial mammals, estimating mass accurately is challenging. The 3D reconstruction methods described here provide a new tool for scientists to estimate mass in such elusive megafauna, in combination with tools such as drones or remotely operated underwater vehicles (ROVs). For example, Christiansen et al. (2019) used aerial drones and photogrammetry methods to estimate the size and body mass of southern right whales. These images were used to create an accurate 3D model of one whale (https://skfb.ly/6Ryut), from which one can estimate volumetrics. 
Although volume is not typically a direct measure of condition, data on volume could represent an important tool for more accurately measuring condition, especially in animals that cannot be easily weighed. It is important to note that even if one were able to directly weigh a large animal such as a whale, one could not derive estimates of volume or density from mass data alone. Some of the 3D models included in Table 1 are currently being used to investigate similar, new methods of reconstructing body shapes, masses, and volumes, especially for sea turtles, sharks, and marine mammals (Irschick, ms in prep.; Christiansen, ms in prep.). As more data emerge on the body shape and estimated mass of megafauna such as whales, it should be possible to estimate volumetrics and surface areas of a wide range of species (Figure 3).
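For readers implementing such volumetric estimates, both quantities can be computed directly from a watertight, calibrated 3D mesh. The sketch below is a minimal, dependency-free illustration (the unit cube stands in for real vertex and triangle data from a model such as the whale scan): volume follows from summing signed tetrahedra against the origin, and surface area from summing triangle areas.

```python
# Estimating volume and surface area from a closed, triangulated 3D mesh.
# A minimal sketch, assuming a watertight mesh with consistently
# outward-wound faces; the unit cube below is illustrative only.

def mesh_volume_and_area(vertices, triangles):
    """Volume via the divergence theorem (sum of signed tetrahedra formed
    with the origin) and surface area as the sum of triangle areas."""
    def cross(a, b):
        return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    def dot(a, b):
        return sum(x*y for x, y in zip(a, b))
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    volume = 0.0
    area = 0.0
    for i, j, k in triangles:
        p, q, r = vertices[i], vertices[j], vertices[k]
        volume += dot(p, cross(q, r)) / 6.0   # signed tetrahedron volume
        n = cross(sub(q, p), sub(r, p))       # face normal (unnormalized)
        area += dot(n, n) ** 0.5 / 2.0        # triangle area
    return abs(volume), area

# Unit cube with outward-wound faces: volume 1, surface area 6.
verts = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
tris = [(0,2,1),(0,3,2),(4,5,6),(4,6,7),(0,1,5),(0,5,4),
        (1,2,6),(1,6,5),(2,3,7),(2,7,6),(3,0,4),(3,4,7)]
vol, area = mesh_volume_and_area(verts, tris)
```

In practice, tools such as Blender perform the same computation on the calibrated mesh; the value of the sketch is showing that nothing beyond the mesh itself is required.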
Figure 3
Scaled volumetric 3D models of the 14 animal species in Table 1, showing the variation in body size, volume and surface area
Animals are shown from the largest to smallest. The southern right whale is not shown, as it is too large.
With such data in hand, it would then be possible to test basic theories of energetics and life history, such as the theory that metabolic rate should scale in a particular manner with surface area and volume in mammals (Brown and Sibly, 2006; Enquist et al., 1999; Kleiber, 1961). Max Kleiber (1961) argued that for the majority of mammals, basal metabolic rate B tends to scale with the 3/4 power of body mass. Various theories for why this pattern occurs exist, but one popular theory suggests that smaller animals respire more per unit of body mass than larger animals because a greater proportion of their body mass consists of metabolically active tissues that impose higher maintenance costs. This idea in turn rests on assumptions about an organism’s surface area relative to its volume. Although there have been some efforts to estimate these parameters for a range of organisms, there are few good estimates of the surface areas and volumes of living animals, as measurements made on dead animals are often inaccurate. Although some elements of this theory have been tested across a small range of body sizes, extending these data to larger marine mammals, for example, and providing more accurate volumetrics, surface areas, and masses, would provide a stronger test of the theory. As an example of this approach, we estimated volumes and surface areas for 12 of the 14 species listed in Table 1 (masses were not available for the flatback sea turtle, Natator depressus, or the blacktip shark, Carcharhinus limbatus; see Table 2). Volumes and surface areas were calculated from the calibrated 3D meshes within Blender.
By plotting volume versus mass (Figure 4), we can gain an understanding of the basic relationship among these variables for a disparate group of animals ranging from small geckos to a massive whale. In plotting surface area versus mass, one observes an estimated slope of 0.64 (y = 0.64x+1.12, F = 547.8, R2 = 0.98, p < 0.001), whereas for volume versus mass, the estimated slope is 1.01 (y = 1.01x-0.03, F = 959.1, R2 = 0.99, p < 0.001). Although this plot intermixes vastly different organisms, such as endotherms, ectotherms, and terrestrial and aquatic animals, it provides a glimpse of future opportunities for testing theories of scaling and energetics, ideally for a much wider group of species.
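These fits are ordinary least-squares regressions on log10-transformed data. As a check, the slopes can be reproduced from the Table 2 values for the 12 species with known mass; the plain-Python sketch below implements the standard least-squares slope formula (plotting and the F and p statistics are omitted).

```python
# Reproducing the log-log scaling slopes from Table 2 with ordinary least
# squares, using mass (g), volume (cm^3), and surface area (cm^2) for the
# 12 species with known mass.

import math

mass = [37041435, 2225000, 67100, 37000, 14240, 2850, 220, 479, 78, 52, 14, 5]
volume = [49085723, 1499727, 51303, 39824, 17576, 3775, 243, 534, 18, 104, 17, 3]
area = [1414121, 100202, 10675, 9636, 5517, 2053, 335, 1130, 87, 236, 100, 28]

def loglog_slope(x, y):
    """Least-squares slope of log10(y) regressed on log10(x)."""
    lx = [math.log10(v) for v in x]
    ly = [math.log10(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    sxx = sum((a - mx) ** 2 for a in lx)
    return sxy / sxx

# Slopes near 1.01 (volume vs. mass) and 0.64 (surface area vs. mass),
# matching the values reported in the text.
vol_slope = loglog_slope(mass, volume)
area_slope = loglog_slope(mass, area)
```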
Table 2
Masses, volumes, and surface areas for the 14 animal species in Table 1
| Common name | Mass (g) | Volume (cm³) | Surface area (cm²) |
| --- | --- | --- | --- |
| Southern right whale∗ | 37,041,435 | 49,085,723 | 1,414,121 |
| Southern white rhinoceros | 2,225,000 | 1,499,727 | 100,202 |
| Blacktip shark | N/A | 25,108 | 8,086 |
| Harbor porpoise& | 67,100 | 51,303 | 10,675 |
| Loggerhead sea turtle | 37,000 | 39,824 | 9,636 |
| Flatback sea turtle | N/A | 103,319 | 19,334 |
| Green sea turtle | 14,240 | 17,576 | 5,517 |
| Kemp's ridley sea turtle | 2,850 | 3,775 | 2,053 |
| Hawksbill sea turtle | 220 | 243 | 335 |
| Cyprus racer snake | 479 | 534 | 1,130 |
| Sling-tailed agama lizard | 78 | 18 | 87 |
| Tokay gecko lizard | 52 | 104 | 236 |
| Flying gecko lizard | 14 | 17 | 100 |
| House gecko | 5 | 3 | 28 |
Figure 4
Scaling parameters from 3D models
A plot of estimated mass versus surface area (A), and mass versus volume (B) for 12 of the 14 animal species in Table 1 (no mass data were available for the flatback sea turtle or the blacktip shark).
Applications to bioinspiration and robotics
One of the most exciting implications of accurately recreating living animals is for the development of bioinspired materials and robotics (Fish and Lauder, 2017; Kim et al., 2013; Lauder et al., 2011; Vargas et al., 2008; Yang et al., 2016). On the one hand, an important part of bioinspired materials research is the investigation of substrate properties and the structural elements of surfaces at the nanoscale. This aspect will continue to be relevant, but another is the investigation of animal form and movement at the whole-organism level. 3D models allow researchers to test not only the efficacy of biological designs but also "forbidden" phenotypes that do not exist in the natural world, but which could, such as a ten-armed octopus. Over the past decade, there has been large growth in the number and kinds of bioinspired robots, including those designed after manta rays, pike, snakes, flies, and kangaroos, among others (e.g., Vargas et al., 2008). The goal of such mimicry is to recreate some of the performance capacities of these animals, which include high levels of energetic efficiency, velocity, and acceleration. For example, one class of robots is largely inspired by fast-moving fish such as pike (Esox lucius), tuna species, and sharks (Lauder et al., 2011; Lauder, 2015). Extensive research on the interaction between body shape and locomotor ability in fish has shown that certain body shapes are ideal for high-speed movements, such as the fusiform shapes of lamnid sharks (Lauder, 2015). Therefore, recreating the body shapes of sharks provides a platform for scientists to test these shapes in physical (e.g., flow-tank) and computational environments. Indeed, an accurate base 3D model for a particular species of known size, age, and sex can serve as a valuable template for many other studies.
Although there have been some attempts to replicate accurate animal body shapes using CT scanning or 3D photogrammetry, much of the field of bioinspired robotics relies on approximations of animal shape. Although this approach has some validity, animal bodies often contain subtle features whose function may not be obvious on first inspection. For example, computational and flow-tank studies showed that the scalloped edges of the pectoral fins of humpback whales (Megaptera novaeangliae) enable them to move more efficiently through the water (Fish and Lauder, 2017). It is also useful to consider that the widths, depths, and thicknesses of animal bodies and various appendages can confer significant advantages for certain locomotor tasks. Indeed, artistic estimation of body shapes has the potential to alter basic body proportions misleadingly. Furthermore, animal morphology often varies across life stages, and thus creating a "generic" design that is not clearly tied to a particular life stage or sex is problematic. The 3D models discussed here can therefore be used as testing prototypes for further development as possible robot shapes. Because of the flexible nature of these models, their shapes can also be easily altered, allowing investigators to visualize new phenotypes that have not yet evolved and that may possess novel functional capacities.
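As a toy illustration of how such shape alteration might be implemented (the four-vertex fragment, axis threshold, and stretch factor below are all hypothetical, not taken from any model in Table 1), a region of a mesh can be deformed by rescaling vertex coordinates beyond a chosen plane, for example to elongate a tail or fin region:

```python
# Exploring "forbidden" phenotypes by deforming part of a 3D model.
# A minimal sketch: vertices beyond a chosen x-threshold (e.g., a
# hypothetical caudal-fin region) are stretched along the body axis.

def stretch_region(vertices, x_threshold, factor):
    """Scale the x-extent of all vertices beyond x_threshold by `factor`,
    leaving the rest of the mesh untouched."""
    out = []
    for x, y, z in vertices:
        if x > x_threshold:
            x = x_threshold + (x - x_threshold) * factor
        out.append((x, y, z))
    return out

# Hypothetical four-vertex fragment: the last two points lie in the
# "fin" region (x > 1.0) and are elongated by 50%.
verts = [(0.0, 0.0, 0.0), (0.5, 0.2, 0.0), (1.2, 0.1, 0.0), (1.8, 0.0, 0.0)]
stretched = stretch_region(verts, x_threshold=1.0, factor=1.5)
```

The same idea applied to a full mesh, with smooth falloff rather than a hard threshold, lets an investigator generate a family of altered phenotypes for flow-tank or CFD comparison.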
Computational fluid dynamics
One of the most exciting research opportunities for 3D modeling is computational fluid dynamics (CFD) research and, more generally, computer simulation (Fauci and Peskin, 1988; Fish and Lauder, 2017; Laforsch et al., 2012; Lauder, 2015). CFD is widespread within mechanical engineering and animal locomotion research and is viewed as a vital tool for understanding the consequences of variation in body shape for locomotor efficiency in a range of animals, primarily those that move in air or water, such as fish, marine mammals, and birds (Cohen and Cleary, 2010; Dong et al., 2020; Liu, 2002; Nakata et al., 2011; Ravi et al., 2020). It is also critical for better tag designs in biologging research, where the interplay between the animal's body shape and the tag's form factor determines the drag developed when tagged animals move through air or water (Shorter et al., 2014; Vandenabeele et al., 2014). A typical workflow involves creating a basic anatomical shape of interest and then testing how factors such as the speed of air or water flow affect quantities such as drag, either in a computational simulated environment or in a flow-tank. Particular attention may also be paid to specific body parts, such as a fin or a limb. Some investigators go to great lengths to accurately reconstruct the shapes of animal bodies, or their constituent parts, for example by using CT reconstruction or 3D photogrammetry (e.g., Liu et al., 2017; Wang et al., 2020). However, in many cases, an approximate model is created that may not be anatomically correct or may neglect variation related to age or sex. The value of modeling particular individuals, or possibly multiple individuals, as opposed to a generic reconstruction of a species, is that it becomes possible to examine the consequences of variation among individuals, ages, and sexes.
As an example, ontogenetic changes are well-documented for many animals, including lizards, sharks, and sea turtles, and in some cases, some of these changes, such as a relatively enlarged caudal fin in younger sharks, are likely to have a significant impact on locomotor efficiency (Carrier, 1996; Fu et al., 2016). The utility of this approach was demonstrated by Dudley et al. (2016), who used 3D models of various sea turtle species to examine the potential impacts of climate change in nesting leatherback sea turtle females (Dermochelys coriacea). As an example of the potential value of CFD work with our models, we used a 3D scan from a subadult green sea turtle (Chelonia mydas), and then examined locomotor aspects such as drag, power, and power/volume for three sizes (hatchling, subadult, and adult, Data S2). Drag increased markedly with sea turtle size, with a noticeable jump between a hatchling and a subadult (Figure 5). Similarly, the power required for swimming was also noticeably larger for larger sea turtles compared to smaller ones. Interestingly, however, the power/volume was roughly two orders of magnitude higher for the smallest sea turtles than for the largest sea turtle when they are traveling at the same speed. Obviously, it would be beneficial to rerun these analyses in more detail with more specific models (i.e., one hatchling, one subadult, one adult), ideally from the same breeding population. However, the data seem to show that there are additional power requirements for smaller sea turtles for locomotion that could be ecologically relevant. Also, the drag and power estimates now offer some values that could be used in a larger analysis that considers the total distance moved by sea turtles and could result in a total estimated energetic budget. This analysis points toward how one could use the base 3D model and alter basic aspects, such as carapace shape, or limb or head shape. 
Finally, one could perform similar analyses with the 3D models of various sea turtle species, for example comparing the same life stage (e.g., adults, subadults) under various flow regimes (e.g., steady versus unsteady flow dynamics).
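The qualitative pattern above, with much higher mass-specific power requirements for small turtles, can be illustrated with a simple quasi-steady drag estimate. The sketch below uses seawater density, but the drag coefficient, frontal areas, and volumes are illustrative assumptions rather than values from the CFD simulations:

```python
# Back-of-the-envelope analogue of the CFD comparison: steady-swimming
# drag F = 0.5 * rho * Cd * A * v^2 and power P = F * v for three turtle
# size classes, then power per unit body volume.

RHO = 1025.0  # seawater density, kg/m^3
CD = 0.1      # assumed drag coefficient for a streamlined carapace

# (frontal area m^2, body volume m^3) -- illustrative values only
classes = {
    "hatchling": (0.002, 0.0001),
    "subadult": (0.05, 0.01),
    "adult": (0.25, 0.15),
}

def swim_power(area, v, rho=RHO, cd=CD):
    drag = 0.5 * rho * cd * area * v ** 2  # drag force, N
    return drag * v                        # power to overcome drag, W

v = 0.5  # m/s, a common cruising speed used for comparison
power_per_volume = {
    name: swim_power(a, v) / vol for name, (a, vol) in classes.items()
}
```

Even with these rough inputs, power per unit volume falls steeply with size, because frontal area (and hence drag) grows more slowly than body volume as turtles get larger.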
Machine learning applications for locomotion
Studies of animal motion have informed us about the evolution of form and function (Biewener and Patek, 2017; Irschick and Higham, 2016), and most research on animal locomotion has focused on empirical studies of kinematics, kinetics, or anatomy (Alexander, 2003; Biewener and Patek, 2017; Irschick and Higham, 2016; Lauder, 2015). Although such studies are valuable, they are only one way to study animal movement. One complement to this approach is computational research (e.g., Dong et al., 2010; Fauci and Peskin, 1988; Hutchinson et al., 2007; Liu, 2002; Vargas et al., 2008), which can provide scientists, educators, and the public with tools and techniques to analyze and visualize complex datasets and processes related to locomotion. For example, machine learning approaches offer exciting potential for accelerating the pace at which 3D models are created. Another application is to integrate the movement of living organisms into static 3D reconstructions of those organisms. Currently, animated movements of 3D models are created by hand, and while metrics (e.g., frequency, amplitude) can be inserted into these models, they represent only rough surrogates of the motion of live animals. Integration of movement with shape would benefit scientists, educators, and 3D artists. As one example, scientists interested in biodiversity could use such methods to create digital online "museums" of accurate, full-color, moving animals from different regions to compare against one another. For science educators, showcasing accurate, moving 3D models of animals would be useful for demonstrating principles of biomechanics. With the emergence of new virtual reality (VR) and augmented reality (AR) hardware, students could perceive moving animals and their surroundings in immersive environments.
Finally, such a tool would have commercial applications, as there is demand for realistic 3D animal models in the animation industry, especially for films and video games. The typical workflow to create an animated 3D model is for artists and modelers to manually design the geometric representation of a 3D model, texture it (i.e., add color to the geometry), and finally create a rig. A rig is a hierarchical set of interconnected bones, called an animation skeleton in the computer graphics literature (Magnenat-Thalmann et al., 1988), which is used as a control data structure to animate the geometric representation. Machine learning approaches would be valuable here, as manually creating rigs requires expertise and significant training. An alternative approach is to use 3D photogrammetry methods to capture animal body shapes and motion to reconstruct a rig. However, these methods are limited to reconstruction from static snapshots. Although one can capture several different snapshots representing key body shapes and deformations and combine them into a final moving shape (Anguelov et al., 2005), such an approach is time-consuming. As an example of how machine learning approaches can be integrated with the creation and study of life-like 3D models for scientific applications, we describe a neural network-based algorithm that automatically translates animal motion captured on video into animated 3D models that can be readily visualized in graphics, augmented-reality, and virtual-reality environments (see Figures 6 and 7 and Video S1). The contribution of this machine learning software (ANATAR, from ANimal AuTomated Animation and Rigging) is 2-fold (Data S3). First, we introduce an automatic algorithm that creates an internal motion rig for a given body shape. Our algorithm uses the shape of the object (in this case, 3D models of fish) to estimate the rig.
Second, we integrate this method with the harvesting of motion data from live, moving organisms, which are placed into our automated rigs. Together, these methods provide the foundations of a method for creating motion rigs for a variety of living organisms in a standardized, time- and cost-efficient manner. As a case example of our video-to-rig method, we use previously published video of a live chain catshark (Scyliorhinus retifer), whose motion was transferred to a previously created 3D model of a blacktip shark (see Figures 6 and 7). Although these are different species, shark motion shares some commonality across taxa, and this offers a proof-of-concept that could ultimately be applied to motion of the same animals for which 3D models are created. Applying kinematic models measured for one species to a diversity of 3D body shapes in other species allows investigation of the effect of body shape on locomotor performance using a controlled experimental method.
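The animation skeleton the method targets can be pictured as a tree of bones whose rotations compose down the hierarchy. The minimal sketch below (the bone names and planar two-joint chain are illustrative, not ANATAR's internal representation) shows why driving a single root joint sweeps every downstream segment, which is what makes a rig a compact control structure for motion:

```python
# A minimal sketch of an animation rig: a hierarchy of bones whose
# rotations accumulate from root to tip, as in the animation skeletons
# described in the text.

class Bone:
    def __init__(self, name, local_angle=0.0, parent=None):
        self.name = name
        self.local_angle = local_angle  # rotation relative to the parent, degrees
        self.parent = parent

    def world_angle(self):
        """Accumulate rotations down the hierarchy, as a skeleton does
        when a controller drives each joint."""
        if self.parent is None:
            return self.local_angle
        return self.parent.world_angle() + self.local_angle

# Two-joint tail chain: rotating the root sweeps every child with it.
root = Bone("spine")
mid = Bone("tail_base", local_angle=10.0, parent=root)
tip = Bone("tail_tip", local_angle=5.0, parent=mid)

root.local_angle = 20.0  # e.g., one frame of an undulatory swimming cycle
```

A learned controller then only needs to output the per-joint local angles frame by frame; the hierarchy propagates them into a full-body pose.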
Figure 6
Input and output of the machine learning method outlined in Data S3
Given an input video capturing the motion of a live chain catshark (Scyliorhinus retifer):
(A) Our method automatically generates animations of several fish-like models closely following the captured motion.
(B) We also refer to our Video S1 showing the whole motion.
Figure 7
Pipeline of the machine learning method outlined in Data S3
(A) Given an input 3D model, created through photogrammetry or modeled by an artist, (B) our method employs a neural network to rig it with an animation skeleton; then, (C) using an input video capturing the locomotion of a real animal, (D) another neural network controls the skeleton and animates the 3D model so that its motion closely follows the input video.
Our method provides a mechanism for creating an automated rig for a given body shape. Normally, the process of creating rigs for motion is time-consuming and subjective. In the context of the broader workflow of creating 3D models of living animals and automatically creating rigs that represent their actual motion, our method is a first step. Ultimately, a fuller workflow would be to (1) recreate an accurate 3D model of a living animal, (2) use a version of our automated motion rig for that animal based on its 3D shape, (3) film the living animal moving (e.g., swimming, running), and (4) use information from the animal's motion to drive the rig created in step (2). Our method integrates all four steps, with a special focus on steps (2) and (4). Biologists can use this algorithm to create rigs for fish-like organisms for modeling research, such as computational fluid dynamics. As our algorithm continues to develop with new training data, it could be applied to a wider range of shapes and organisms. This represents a good example of how machine learning can aid the creation of 3D models with both accurate shapes and motions.
Educational and conservation uses
The growth of hardware and software for visualizing objects in 3D presents new opportunities for incorporating 3D models into education, both at the K-12 (i.e., early education) and university levels (Donalek et al., 2014; Jang et al., 2017; Lischer-Katz and Cook, 2017; Pantelidis, 2010; Pober and Cook, 2016; Seth et al., 2011). Over the past decade, there have been substantial improvements in virtual reality (VR) headsets, which now allow people to visualize 3D scenes with remarkable clarity and realism. As the cost of these systems falls, they are increasingly accessible to a wider range of entities, including elementary and secondary (K-12) schools. As an example of a public resource tailored to educational needs, one of us (Savvas Zotos) and colleagues have used many of the methods described here to create 3D models of the reptiles of Cyprus (http://3dreptiles.cs.ucy.ac.cy/public/our-reptiles), which offer a tremendous educational and scientific resource. There is also growing evidence that such visualization tools offer a learning benefit for students with some learning disabilities, compared with more traditional teaching methods (e.g., textbooks, PowerPoint slides; Ke and Im, 2013). For natural history museums seeking novel ways to engage visitors, visualizing 3D models using a range of techniques, such as integrated video projectors or VR headsets, offers tantalizing opportunities, especially if visitors can see how a living animal looks and moves, as opposed to an artistic recreation. At a different level, increasing concerns have been raised by the public about keeping megafauna such as killer whales (Orcinus orca), elephants, rhinos, and tigers in aquaria and zoos (Shapiro, 2018), and it is likely that many organizations will forgo the acquisition of new animals once their current individuals have died.
The rise of high-fidelity virtual 3D experiences may take the place of many of these live-animal experiences, as there will likely remain intense interest in these animals moving forward. If supplemented with audio or text-driven content, these kinds of virtual experiences could prove as motivational and educational as witnessing a live animal in a zoo setting.
Prospectus for the future
Over the past 20 years, there have been tremendous advances in computer processing power, 3D scanning techniques, and software for processing 3D data. As discussed here, these methods are now being applied to living animals, but we are only at the beginning of this process. New techniques and workflows are needed both to effectively scan some kinds of organisms, such as plants or small invertebrates, and to do so in a time-efficient manner. Currently, the workflows for 3D scanning of live animals are overly time-consuming, and new efficiencies will be needed before larger taxonomic sampling is possible. The sheer diversity of most animal groups stands as a significant challenge, as the 4,000+ species of lizards and 11,000+ species of birds indicate. As an example of the kind of innovation that is needed, some research groups have shown great progress in 3D scanning methods for preserved insects (Plum and Labonte, 2021), with the potential promise of ultimately scanning live insects. Innovations in 3D techniques for scanning electron microscopy could prove useful for creating 3D scans of various kinds of microorganisms (Reiss and Eulitz, 2015). Plants represent perhaps one of the greatest challenges for proper 3D reconstruction owing to their tremendous variation in size and shape, and developing physical scanning protocols for this group will be important. One of the most promising avenues for accelerating the creation of accurate life-like 3D scans is machine learning (Lawrance, 1994). Such techniques might be able to overcome the time-consuming process of fixing small imperfections in 3D meshes, a process that currently must be done by hand.
Although technical innovation in hardware and software has transformed how we view 3D data, there is a need for more standardization of how the scientific community creates, views, deposits, and shares 3D data, especially in the context of traditional natural history collections, such as museums. Grayburn et al. (2019) discuss these issues in detail in the context of academic libraries, and perhaps some of those lessons can be applied to 3D life-like models. The ability to gather 3D life-like specimens rapidly and then disseminate them in a standardized manner is an exciting opportunity for scientists and natural history collections, and we eagerly anticipate this future.