Wesley Tay 1, Bhupinder Kaur 1, Rina Quek 1, Joseph Lim 1, Christiani Jeyakumar Henry 1,2
Abstract
Obesity is a global health problem with wide-reaching economic and social implications. Nutrition surveillance systems are essential to understanding and addressing poor dietary practices. However, diets are incredibly diverse across populations, and an accurate diagnosis of individualized nutritional issues is challenging. Current tools used in dietary assessment are cumbersome for users and are only able to provide approximations of dietary information. Given the need for technological innovation, this paper reviews various novel digital methods for food volume estimation and explores the potential for adopting such technology in the Southeast Asian context. We discuss the current approaches to dietary assessment, as well as the potential opportunities that digital health can offer to the field. Recent advances in optics, computer vision and deep learning show promise in advancing the field of quantitative dietary assessment. The ease of access to the internet and the availability of smartphones with integrated cameras have expanded the toolsets available, and there is potential for automated food volume estimation to be developed and integrated as part of a digital dietary assessment tool. Such a tool may enable public health institutions to gather effective nutritional insights and combat the rising rates of obesity in the region.
Keywords: deep learning; dietary assessment; digital health; food volume estimation; personalized nutrition; public health
Year: 2020 PMID: 32331262 PMCID: PMC7231293 DOI: 10.3390/nu12041167
Source DB: PubMed Journal: Nutrients ISSN: 2072-6643 Impact factor: 5.717
Figure 1. An illustration of the various steps of image-based automated dietary assessment: segmentation, classification, volume assessment and nutrient derivation.
Figure 2. Measuring relative volumes by pixel density. Adapted from Liang and Li, 2019.
Figure 3. Food volume estimation using geometric modelling. (A) Movable spherical cap, adapted from Jia et al., 2014; (B) projected variable cube using AR technology, adapted from Yang et al., 2019.
Figure 4. Depth map of a mango captured with a structured light system. Adapted from Makhsous et al., 2019.
Figure 5. 3D reconstruction of various food models with deep learning view synthesis. Adapted from Lo et al., 2019.