Literature DB >> 31338446

A comparison between 2D and 3D methods of quantifying facial morphology.

I Y Anas1, B O Bamgbose2,3, Saleh Nuhu4.   

Abstract

OBJECTIVES: Two-dimensional and three-dimensional (2D and 3D) imaging techniques have largely replaced direct anthropometry in the assessment of facial morphology, but the difference between the two techniques has not been quantified. The aim of this study was therefore to compare the two techniques and quantify the difference between them.
MATERIALS AND METHODS: The faces of 150 subjects (75 males, 75 females) from northern Nigeria, predominantly of the Hausa ethnic group, were photographed using a digital camera and scanned using a 3D surface laser scanner. Facial dimensions were generated from the resulting virtual 2D and 3D models. Data were analyzed using the R statistical software; paired-sample t-tests and Pearson correlations were conducted to compare the two methods and to quantify the level of agreement between the two sets of measurements.
RESULTS: The intraclass correlation coefficient (ICC) between the 3D and 2D measurements was very low (0.26), indicating substantial disagreement between the methods. Measurements taken with the laser scanner were consistently larger than those taken with the camera. The mean differences between the 3D and 2D methods of quantifying facial morphology were positive and statistically significant.
CONCLUSION: 2D and 3D anthropometry cannot be used interchangeably, since statistically significant differences exist between the two methods.

Keywords:  2D; 3D; Anatomy; Comparison; Facial morphology; Health profession; Methods

Year:  2019        PMID: 31338446      PMCID: PMC6579906          DOI: 10.1016/j.heliyon.2019.e01880

Source DB:  PubMed          Journal:  Heliyon        ISSN: 2405-8440


Introduction

Currently, 2D and 3D imaging techniques are the main methods used for the assessment of human facial morphology. They have largely replaced direct anthropometry, which in the past was the only method of assessment and the sole source of the large body of literature on human facial anthropometry (Farkas, 1994; Aung et al., 1995; Zankl et al., 2002). Studies on large populations are barely feasible with direct anthropometry, simply because it is time-consuming and unsuitable for infants and children. Its major disadvantage is its inability to provide a digital coordinate record of the participants from which new facial measurements could later be extracted. In addition, the metallic instruments used (e.g., Vernier, sliding or spreading calipers) may press against the measured soft tissues, so the accuracy and reliability of the technique are questionable. The use of two-dimensional (2D) (Langlois and Roggman, 1990; Ferrario et al., 1993; Rhodes, 1998; Rhodes et al., 2005; Rennels et al., 2008; Lee et al., 2010; Hooda and D'Souza, 2012) and three-dimensional (3D) measurement techniques (Burke, 1971; Burke and Healy, 1993; Ras et al., 1995; Heike et al., 2010; Verhoeven et al., 2013; Ladeira et al., 2013) in the quantification of facial morphology has largely replaced direct anthropometry in recent times. However, the relative preference and accuracy of one over the other remain largely equivocal, which has motivated comparative studies (Farkas, 2002; Weinberg et al., 2006; Ghoddousi et al., 2007; Noyan et al., 2011; Joe et al., 2012; Kramer et al., 2012). The 2D method offers many advantages, such as rapid image acquisition, archival capability and low cost.
Although more expensive than 2D methods, 3D methods have simplified craniofacial studies: the availability of 3D cone beam computed tomography (CBCT) (Cevidanes et al., 2009; Nada et al., 2011), 3D surface laser scanning (Toma et al., 2008; Djordjevic et al., 2011), 3D stereophotogrammetry (Maal et al., 2010; Kau et al., 2011) and other 3D techniques allows for the description and comparison of 3D facial images and the quantification of facial morphology (Ghoddousi et al., 2007; Jayaratne and Zwahlen, 2014). Indeed, the development of optical scanning technology has advanced facial morphology research from 2D to three dimensions (3D) (Aung et al., 2000; Toma et al., 2009; Berssenbrugge et al., 2014; Vezzetti et al., 2018). Recently, 3D ultrasonography was used to extract reference points of fetal faces automatically (Moos et al., 2017), and the authors claimed a high level of accuracy. Although interesting, this method is limited to fetuses; it may also be suitable for internal organs such as the heart, liver and kidneys, but physical anthropometry is virtually impossible with it. Similarly, an automatic method for extracting facial morphological features from 3D facial scans, with 97% face classification accuracy, was recently devised (Abbas et al., 2015), thereby simplifying the whole process. Comparison between two methods can be used to evaluate the performance of technical equipment (Enciso et al., 2004; Aung et al., 1995, 2000). Although the 2D and 3D methods of quantifying facial morphology are the most commonly used techniques, studies have yet to quantify the level of variation between them. Therefore, the aim of this study was to determine the level of difference between 2D and 3D methods of quantifying facial dimensions. The paper is organized under the following headings: introduction, materials and methods, results, discussion and conclusion.

Materials and methods

3D image capture (scanning) technique and processing

The study recruited 150 participants, aged 18-25 years, from Kano and Kaduna States in Nigeria, predominantly of the Hausa ethnic group, by simple random sampling. Participants were recruited only after giving informed consent and were informed that their privacy rights would be observed and that the study was purely anonymous. The faces of the participants were scanned using an Exascan 3D laser surface scanner from Creaform® (www.handyscan3d.com), and the scans were saved on a computer for analysis. Prior to the commencement of the study, the scanner was calibrated to correct any optical or electronic distortions, and the sensor was configured for dark skin. Before scanning, positioning targets were placed on the face of each participant, from the hairline down to the chin and along each side of the face, including the ears. Test scans were conducted with the participant lying supine, with and without a doughnut-shaped head rest, and with the participant sitting still. The preliminary results showed better images with participants sitting rather than lying supine. Scanning was therefore done with each participant seated upright on a chair, sitting still with the head facing up (neck extended) at a slight angle of about 45° relative to the floor; this position was found to be the most comfortable to scan, eliminating the need for the researcher to bend over while acquiring images. Participants were instructed to keep their eyes closed to avoid discomfort from the laser beams. During scanning, the 3D digital scan is generated on the computer screen in real time, allowing the researcher to continue scanning until a satisfactory scan has been created (Fig. 1). Good quality 3D facial scans were obtained with the subject maintaining a natural pose with a neutral facial expression (see Peter et al., 2004).
Where the position or pose of the subject distorted the face, or the facial expression was not neutral, the scan was discarded, as the inclusion of non-neutral facial expressions would have affected morphological comparisons between subjects (see Peter et al., 2004). Each of the obtained scans was then trimmed and cleaned of any extraneous mesh (e.g., Fig. 2) before the analyses.
Fig. 1

Un-cleaned scanned face.

Fig. 2

Cleaned scanned face.


3D linear measurements using landmarks

Facial dimensions were acquired by placing standard landmarks (Table 1) at various locations on the face (Figs. 3 and 4) using the Geomagic Studio 12 software. Raw landmark coordinates were exported into Excel and saved as a .csv (comma-delimited) file for each individual. The metrics of each facial scan were acquired from a personally designed template using the coordinates of a single scan. Facial geometric features are extracted from the Cartesian coordinates of the neoclassical canonical landmarks, which serve as reference points (Vezzetti and Marcolin, 2012). The dimensions are extracted as Euclidean distances using the Pythagorean formula: the Euclidean distance between points X and Y is the length of the straight line connecting the two points [see (Moos et al., 2017; Vezzetti et al., 2018; Cirrincione et al., 2018)].
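The landmark-to-dimension step described above can be sketched in a few lines of Python. The coordinates below are hypothetical values (not from the study's data), standing in for two landmark triples exported from the scan:

```python
import math

def euclidean_distance(p, q):
    """3D Euclidean distance between two landmark coordinate triples (x, y, z)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Hypothetical landmark coordinates in mm, e.g. right and left endocanthion
en_r = (12.0, 34.0, 56.0)
en_l = (-18.0, 34.5, 55.0)

# The inter-endocanthal distance (en-en) is the straight-line length
print(round(euclidean_distance(en_r, en_l), 2))  # → 30.02
```

In practice the same formula would be applied column-wise to every landmark pair in the exported .csv template.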
Table 1

Summary of facial landmarks used in this study and their descriptions.

Point | Landmark | Name | Description
P1 | ex (r) | Exocanthion | Outer commissure of the right eye fissure
P2 | so (r) | Supraorbitale | The most prominent point on the right supraorbitale
P3 | en (r) | Endocanthion | Endocanthion of the right eye fissure
P4 | N | Nasion | Midpoint between the eyes, just above the bridge of the nose
P5 | en (l) | Endocanthion | Endocanthion of the left eye fissure
P6 | so (l) | Supraorbitale | The most prominent point on the left supraorbitale
P7 | ex (l) | Exocanthion | Exocanthion of the left eye fissure
P8 | zy (r) | Zygion | The most lateral point on the right cheek
P9 | al (r) | Alar | Most lateral point on the right alar contour
P10 | Sn | Subnasale | Midpoint of the angle at the columella base
P11 | al (l) | Alar | Most lateral point on the left alar contour
P12 | zy (l) | Zygion | The most lateral point on the left cheek
P13 | go (r) | Gonion | The point at the angle of the right mandible
P14 | ch (r) | Chelion | Point located at the right labial commissure
P15 | Ls | L. superior | Midpoint of the border of the upper lip
P16 | ch (l) | Chelion | Point located at the left labial commissure
P17 | go (l) | Gonion | The point at the angle of the left mandible
P18 | Sto | Stomium | Midpoint of the closed lips
P19 | Li | L. inferior | Midpoint of the lower vermilion
P20 | gn | Gnathion | The lower-most point on the mid-anterior of the menton
P21 | pr | Pronasale | The most prominent point on the tip of the nose
P22 | sl | Sublabius | Midpoint of the junction between the lower lip and the chin
Fig. 3

Sexually dimorphic dimensions. Red: dimensions greater in females; blue: dimensions greater in males.

Fig. 4

22 landmarks used for quantifying facial shape.

Pythagorean formula: d = SQRT((X1-X2)^2 + (Y1-Y2)^2 + (Z1-Z2)^2). Each individual measurement acquired from the template was saved as an .xls file, then copied and pasted into the main Excel file in ascending order of questionnaire number. The normality of the facial metric data was tested using the Kolmogorov-Smirnov and Shapiro-Wilk tests to guide the choice of analysis (Schoder et al., 2006; Ghasemi and Zahediasl, 2012).
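The paper does not state which software performed the normality tests; a minimal sketch of both tests using SciPy, on a synthetic sample standing in for one facial metric, might look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical sample of one facial metric: 150 zy-zy widths in mm,
# drawn from a normal distribution for illustration only
widths = rng.normal(loc=139.55, scale=6.50, size=150)

# Shapiro-Wilk test: the null hypothesis is that the sample is normal
sw_stat, sw_p = stats.shapiro(widths)

# Kolmogorov-Smirnov test against a normal distribution fitted to the sample
ks_stat, ks_p = stats.kstest(
    widths, "norm", args=(widths.mean(), widths.std(ddof=1))
)

print(f"Shapiro-Wilk p = {sw_p:.3f}, KS p = {ks_p:.3f}")
```

A p-value above the chosen alpha (commonly 0.05) in both tests would justify parametric analyses such as the paired t-test used later.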

2D linear measurements using landmarks

The facial dimensions were recorded to the nearest millimetre (mm) from the 2D facial images using the Facial Art software with a magnification factor of 0.5 (Fig. 5). The software automatically calculated all the measurements identified on each landmark record, and these were then transferred to Microsoft Excel.
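The paper does not specify how the magnification factor enters the computation. Assuming the conventional definition (factor = image size / real size), a distance measured on the photograph would be rescaled as follows; this is a sketch of that assumption, not the software's documented behavior:

```python
MAGNIFICATION = 0.5  # assumed convention: image size / real size

def image_to_real_mm(image_mm: float) -> float:
    """Rescale a distance measured on the photograph to a real-world
    distance, under the assumed magnification convention above."""
    return image_mm / MAGNIFICATION

# A 20 mm distance on the image corresponds to 40 mm on the face
print(image_to_real_mm(20.0))  # → 40.0
```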
Fig. 5

2D Facial landmarks and Art face software interface.


2D image capture (photo-taking) technique and processing

The face of each of the 150 study subjects was captured using a 24-mm wide-angle-lens camera (Sony DSC-W380, made in China), with a shutter speed of 1/125 s and a primary flash. A 100-mm focal length and a zoom factor of 3.6 were selected in order to maintain natural proportions. Each participant was positioned on a line marked on the floor, about 100 cm (kept constant) from another line marked just in front of it, similar to the method used by Joe et al. (2012). Each participant was asked to stand erect facing the camera and to maintain a neutral face, adopting the posture they normally show during the day; an identification number was placed behind the participant so that each subject could be matched with his or her 3D scans. The operator ensured that the participant's forehead, neck, and ears were clearly visible during the recording.

Ethical approval

This study has been carried out in accordance with The Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans and was approved by the National Health Research Ethics Committee (NHREC) of Nigeria.

Results

Repeatability (Intra-observer error)

Intra-observer analysis was carried out using the method adopted by Osvaldo et al. (2012): the same metrics were re-measured on 25 randomly selected scans two weeks after the first 25 sets of measurements, and the data were analyzed using paired-samples t-tests. Measurement error was below 5% for all metrics and substantially lower for most, although some metrics differed in their mean values between the first and second measurements. The intraclass correlation coefficient (ICC) between the 3D and 2D measurements was very low (0.26) (Table 2), indicating substantial disagreement between the two methods of quantifying facial morphology. Table 3 shows the comparison between the means of the 2D and 3D facial dimensions using paired t-tests. In all thirteen (13) pairs compared, measurements taken with the laser scanner were larger than those taken with the camera. The mean differences between the 3D and 2D methods were positive and statistically significant in all the pairs considered (Table 4).
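The analysis pipeline (paired t-test, Pearson correlation, and a two-way absolute-agreement ICC) can be sketched as follows. The data here are synthetic, generated to mimic the reported pattern of a large systematic 3D-2D offset; the ICC computation is a simplified one-ANOVA sketch of the "average measures, absolute agreement" form (McGraw and Wong's ICC(A,k)), not necessarily the exact variant the authors' software used:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 150
# Hypothetical paired measurements (mm): 3D scanner readings and
# systematically smaller 2D photograph readings for the same subjects
d3 = rng.normal(42.30, 2.95, n)            # e.g. ex-en from 3D scans
d2 = d3 - 16.79 + rng.normal(0, 3.45, n)   # 2D values: offset plus noise

# Paired-samples t-test on the mean 3D-2D difference
t_stat, p_val = stats.ttest_rel(d3, d2)

# Pearson correlation between the two methods
r, r_p = stats.pearsonr(d3, d2)

# Two-way ANOVA decomposition for ICC (absolute agreement, average measures)
scores = np.column_stack([d3, d2])         # rows = subjects, cols = methods
k = scores.shape[1]
grand = scores.mean()
ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)
ms_err = (((scores - scores.mean(axis=1, keepdims=True)
            - scores.mean(axis=0, keepdims=True) + grand) ** 2).sum()
          / ((n - 1) * (k - 1)))
icc_a_k = (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

print(f"t = {t_stat:.2f}, p = {p_val:.4f}, r = {r:.2f}, ICC = {icc_a_k:.2f}")
```

Note how a large systematic offset between the methods drives the absolute-agreement ICC down even when the two measurement series are well correlated, which matches the paper's pattern of low ICC alongside significant positive mean differences.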
Table 2

Intraclass correlation coefficient.

Measure | ICC | 95% CI Lower | 95% CI Upper | F (true value 0) | df1 | df2 | P-value
Single Measures | 0.01 | 0.01 | 0.02 | 16.67 | 148 | 3700 | 0.0001
Average Measures | 0.26 | 0.15 | 0.38 | 16.67 | 148 | 3700 | 0.0001
Table 3

Comparison of 2D and 3D facial dimensions.

Pair | Parameter | Mean ± SD (mm) | Correlation | P-value
1 | exen3D | 42.30 ± 2.95 | 0.33 | 0.0001
  | exen2D | 25.51 ± 3.02 | |
2 | exzy3D | 42.08 ± 5.83 | 0.31 | 0.0001
  | exzy2D | 15.14 ± 1.81 | |
3 | exal3D | 55.01 ± 3.44 | 0.31 | 0.0001
  | exal2D | 37.61 ± 5.11 | |
4 | exch3D | 74.28 ± 4.41 | 0.43 | 0.0001
  | exch2D | 57.07 ± 6.63 | |
5 | enzy3D | 77.32 ± 5.84 | 0.30 | 0.0001
  | enzy2D | 36.25 ± 4.06 | |
6 | enal3D | 38.45 ± 2.77 | 0.37 | 0.0001
  | enal2D | 28.02 ± 3.76 | |
7 | ench3D | 71.21 ± 4.07 | 0.45 | 0.0001
  | ench2D | 51.83 ± 6.39 | |
8 | zyal3D | 83.77 ± 6.92 | 0.71 | 0.037
  | zyal2D | 37.94 ± 5.55 | |
9 | zych3D | 87.77 ± 7.24 | 0.33 | 0.0001
  | zych2D | 51.88 ± 6.18 | |
10 | alal3D | 40.80 ± 4.11 | 0.57 | 0.0001
   | alal2D | 34.16 ± 4.72 | |
11 | zyzy3D | 139.55 ± 6.50 | 0.31 | 0.0001
   | zyzy2D | 98.93 ± 11.15 | |
12 | nsn3D | 48.11 ± 3.32 | 0.31 | 0.0001
   | nsn2D | 40.38 ± 5.10 | |
13 | enen3D | 30.59 ± 3.45 | 0.36 | 0.0001
   | enen2D | 28.65 ± 4.09 | |
Table 4

Mean differences between 3D and 2D.

Pair | Parameters | Paired difference, Mean ± SD (mm) | 95% CI Lower | 95% CI Upper | t-value | P-value
1 | exen3D - exen2D | 16.79 ± 3.45 | 16.23 | 17.35 | 59.31 | 0.0001
2 | exzy3D - exzy2D | 26.94 ± 5.55 | 26.04 | 27.84 | 59.20 | 0.0001
3 | exal3D - exal2D | 17.39 ± 5.22 | 16.55 | 18.24 | 40.67 | 0.0001
4 | exch3D - exch2D | 17.21 ± 6.19 | 16.21 | 18.22 | 33.96 | 0.0001
5 | enzy3D - enzy2D | 41.07 ± 6.02 | 40.09 | 42.05 | 83.22 | 0.0001
6 | enal3D - enal2D | 10.42 ± 3.75 | 9.81 | 11.03 | 33.90 | 0.0001
7 | ench3D - ench2D | 19.38 ± 5.84 | 18.43 | 20.32 | 40.49 | 0.0001
8 | zyal3D - zyal2D | 45.83 ± 8.10 | 44.52 | 47.14 | 69.08 | 0.0001
9 | zych3D - zych2D | 35.88 ± 7.79 | 34.62 | 37.15 | 56.20 | 0.0001
10 | alal3D - alal2D | 6.65 ± 4.14 | 5.98 | 7.32 | 19.58 | 0.0001
11 | zyzy3D - zyzy2D | 40.62 ± 11.04 | 38.83 | 42.40 | 44.91 | 0.0001
12 | nsn3D - nsn2D | 7.73 ± 5.14 | 6.89 | 8.56 | 18.37 | 0.0001
13 | enen3D - enen2D | 1.94 ± 4.29 | 1.24 | 2.63 | 5.50 | 0.0001
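The confidence intervals reported for the paired differences follow from the mean, SD and sample size (n = 150) via the standard t-based formula, CI = mean ± t(0.975, n-1) × SD/√n. A quick check for Pair 1 using SciPy reproduces the tabulated bounds:

```python
import math
from scipy import stats

n = 150
mean_diff, sd_diff = 16.79, 3.45   # Pair 1 (exen3D - exen2D) from Table 4

se = sd_diff / math.sqrt(n)                 # standard error of the mean diff
t_crit = stats.t.ppf(0.975, df=n - 1)       # two-sided 95% critical value
lower = mean_diff - t_crit * se
upper = mean_diff + t_crit * se
t_value = mean_diff / se                    # paired t-statistic

print(f"95% CI [{lower:.2f}, {upper:.2f}], t = {t_value:.2f}")
```

This yields [16.23, 17.35], matching the table; the t-statistic computed from the rounded mean and SD lands near 59.6, close to the reported 59.31 (the small gap is attributable to rounding of the published summary values).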

Discussion

Direct anthropometric measurements have been used for the study of facial morphology for a long time. More recently, 2D photographs have been commonly used for such studies and for the assessment and treatment of facial abnormalities. However, using 2D images to assess facial morphology is subject to error, because the natural 3D facial anatomy is translated into 2D (Enciso et al., 2004). The latest 3D imaging devices are specifically designed to capture and quantify facial morphology and are now very common in research environments (Da Silveira et al., 2003; Fourie, 2010; Plooij et al., 2011; Knoops et al., 2017). These devices were manufactured specifically to address the shortcomings of 2D imaging systems, and several are now available for generating 3D surface facial images, such as laser scanning, stereophotogrammetry, and infrared imaging (Enciso et al., 2004; Weinberg et al., 2006; Tzou et al., 2014). Advances in three-dimensional (3D) surface imaging are having a dramatic impact on the field of craniofacial anthropometry (Bailey and Byrnes, 1990). Relatively few studies have directly compared anthropometric measurements obtained through alternative methods. Previous studies have focused mainly on the concordance between direct and indirect measurement techniques, comparing traditional caliper-based anthropometry with either standard 2D photogrammetry (Guyot et al., 2003) or cephalometry (Guyot et al., 2003; Budai et al., 2003). Along the same lines, a few studies have compared direct anthropometric measurements with those obtained by indirect 3D surface imaging methods, including traditional stereophotogrammetry (Weinberg et al., 2006; Meintjes et al., 2002), surface laser scanning (Baca et al., 1994; Aung et al., 1995), and fully automated digital 3D photogrammetry (Losken et al., 2005; Weinberg et al., 2006).
Our study has, to some extent, quantified the level of difference between 2D and 3D measurement of facial morphology. Our findings agree with the results of van Vlijmen et al. (2009), who compared 2D images obtained from radiographs with cone beam computed tomography (CBCT)-constructed 3D models and reported a statistically significant difference between the 2D and 3D measurements. Although both methods serve as valuable tools in the study of facial morphology for varying purposes, they cannot be used interchangeably, especially in the clinical evaluation of facial form, as pointed out by Zamani (2015). In parts of the developing world where there are no 3D scanners, direct and 2D anthropometry are the only means of assessing and evaluating facial morphology; however, their advantages should not be overstated relative to 3D, since in 2D anthropometry the depth of the face is lost. Despite the statistically significant difference between two-dimensional and three-dimensional images, 2D facial models can still be used, especially when assessing or evaluating facial dimensions with smaller differences, as seen in the tables above.

Conclusion

The 2D method of quantifying facial morphology is one of the easiest and cheapest methods in the field of facial anthropology (a field within biological anthropology), simply because it is non-invasive, less time-consuming, not tedious for the participating subjects and, above all, yields data that are easy to analyze. However, despite these advantages, the 3D method is far better, especially in terms of accuracy and the additional depth of the captured image. Nonetheless, 2D remains a useful tool in bioanthropology despite its limitations and disadvantages.

Declarations

Author contribution statement

Anas I. Y., Bamgbose B. O., Saleh Nuhu: Conceived and designed the experiments; performed the experiments; analyzed and interpreted the data; contributed reagents, materials, analysis tools or data; wrote the paper.

Funding statement

This work was supported by Bayero University Kano, in collaboration with TETFund (Tertiary Education Trust Fund).

Competing interest statement

The authors declare no conflict of interest.

Additional information

No additional information is available for this paper.
References (41 in total; first 10 shown)

1.  Differences between direct (anthropometric) and indirect (cephalometric) measurements of the skull.

Authors:  Leslie G Farkas; Bryan D Tompson; Marko J Katic; Christopher R Forrest
Journal:  J Craniofac Surg       Date:  2002-01       Impact factor: 1.046

2.  Preliminary testing for normality: some statistical aspects of a common concept.

Authors:  V Schoder; A Himmelmann; K P Wilhelm
Journal:  Clin Exp Dermatol       Date:  2006-11       Impact factor: 3.470

3.  Are attractive men's faces masculine or feminine? The importance of type of facial stimuli.

Authors:  Jennifer L Rennels; P Matthew Bronstad; Judith H Langlois
Journal:  J Exp Psychol Hum Percept Perform       Date:  2008-08       Impact factor: 3.332

4.  A comparison between two-dimensional and three-dimensional cephalometry on frontal radiographs and on cone beam computed tomography scans of human skulls.

Authors:  Olivier J C van Vlijmen; Thomas J J Maal; Stefaan J Bergé; Ewald M Bronkhorst; Christos Katsaros; Anne Marie Kuijpers-Jagtman
Journal:  Eur J Oral Sci       Date:  2009-06       Impact factor: 2.612

5.  Three dimensional evaluation of facial asymmetry after mandibular reconstruction: validation of a new method using stereophotogrammetry.

Authors:  T J Verhoeven; C Coppen; R Barkhuysen; E M Bronkhorst; M A W Merkx; S J Bergé; T J J Maal
Journal:  Int J Oral Maxillofac Surg       Date:  2012-08-30       Impact factor: 2.789

6.  Evaluation of facial asymmetry using digital photographs with computer aided analysis.

Authors:  Shivani Hooda; Mariette D' Souza
Journal:  J Indian Prosthodont Soc       Date:  2011-08-10

7.  A serial study of normal facial asymmetry in monozygotic twins.

Authors:  P H Burke; M J Healy
Journal:  Ann Hum Biol       Date:  1993 Nov-Dec       Impact factor: 1.533

8.  Reproducibility of facial soft tissue landmarks on 3D laser-scanned facial images.

Authors:  A M Toma; A Zhurov; R Playle; E Ong; S Richmond
Journal:  Orthod Craniofac Res       Date:  2009-02       Impact factor: 1.826

9.  A lack of sexual dimorphism in width-to-height ratio in white European faces using 2D photographs, 3D scans, and anthropometry.

Authors:  Robin S S Kramer; Alex L Jones; Robert Ward
Journal:  PLoS One       Date:  2012-08-07       Impact factor: 3.240

10.  Normality tests for statistical analysis: a guide for non-statisticians.

Authors:  Asghar Ghasemi; Saleh Zahediasl
Journal:  Int J Endocrinol Metab       Date:  2012-04-20