Weight and volume estimation of poultry and products based on computer vision systems: a review.

Innocent Nyalala1, Cedric Okinda1, Chen Kunjie2, Tchalla Korohou1, Luke Nyalala3, Qi Chao1.   

Abstract

The appearance, size, and weight of poultry meat and eggs are essential to production economics and vital to the poultry sector. These external characteristics influence their market price and consumers' preference and choice. With technological developments, there has been an increase in the application and importance of vision systems in the agricultural sector. Computer vision has become a promising tool for the real-time automation of poultry weighing and processing systems. Owing to its noninvasive and nonintrusive nature and its capacity to present a wide range of information, computer vision can be applied to size, mass, and volume determination and to the sorting and grading of poultry products. This review article gives a detailed summary of the current advances in measuring poultry products' external characteristics based on computer vision systems. An overview of computer vision systems is discussed and summarized. A comprehensive presentation of the application of computer vision-based systems for assessing poultry meat and eggs is provided, that is, weight and volume estimation, sorting, and classification. Finally, the challenges and potential future trends in size, weight, and volume estimation of poultry products are reported.
Copyright © 2021 The Authors. Published by Elsevier Inc. All rights reserved.

Keywords:  classification; computer vision; egg; poultry product; weight estimation

Year:  2021        PMID: 33752071      PMCID: PMC8010860          DOI: 10.1016/j.psj.2021.101072

Source DB:  PubMed          Journal:  Poult Sci        ISSN: 0032-5791            Impact factor:   3.352


Introduction

Owing to an ever-growing world population (Gerland et al., 2014), as well as the rising demand for animal-based food proteins (FAO, 2018), future global meat consumption is projected to increase (Berckmans, 2014). The poultry industry has been the fastest-growing industry (Mallick et al., 2020), particularly in developing countries, owing to growth in urbanization, population, and income (Liu et al., 2010). Over the years, white meat has been steadily favored globally, growing the intake of poultry products, with broiler chicken meat becoming the most desired (Henchion et al., 2014; Okinda et al., 2019). This mushrooming demand is attributed to the high nutritional value of poultry meat and eggs as a protein source, their quality, and their reasonable pricing compared with other kinds of meat (Mallick et al., 2020). With accelerated poultry production and an increased understanding of acceptable conditions for animal welfare, animal health, performance, and sustainable environmental conditions (Berckmans, 2014), human observation is no longer feasible for livestock management (Okinda et al., 2018a). The chicken is the most widely raised poultry species worldwide: about 5 billion chickens are reared yearly as a food source for their meat and eggs (Mallick et al., 2020). Broilers are the most favored type of poultry and are reared for business purposes, meat production, and consumption. The poultry industry is generally categorized into meat and egg production sections (Ren et al., 2020). In 2018, the USDA reported an average intake of 277.7 eggs per person in the United States. Worldwide production of eggs was around 80.1 million metric tons in 2016, twice the production in 1990 (Conway, 2018). With nearly 529 billion eggs, China is the leading egg-producing country, followed by the United States, Mexico, India, and Brazil. Global production of poultry meat reached 120.5 million tons in 2017 and was expected to increase to 123 million tons in 2018.
The largest producers, China, the United States, and Brazil, are predicted to lead poultry meat production through 2028. According to the latest Food and Agriculture Organization and Organization for Economic Cooperation and Development estimates, poultry meat production is projected to rise to 141 million tons by 2028, from 121 million tons over the 2016–2018 average base period (Conway, 2019). Poultry size and BW are critical physical growth attributes used to assess the efficacy of poultry production by comparing the measured weight with the feed consumed (Mortensen et al., 2016). It is also vital for flock managers to estimate in advance the average poultry weight and weight spread at slaughter (Lott et al., 1982; Turner et al., 1983, 1984). An accurate estimation of the weight distribution throughout the flock and the average weight helps schedule broiler chicken pickup and slaughter (Ross and Davis, 1990; Chedad et al., 2003). The most widely known procedure for weight measurement is a manual process, where the broiler is caught and placed on an electronic scale. This approach increases labor and reduces animal welfare; it also affects quality and yield and can lead to broiler deaths (Wang et al., 2017). Traditionally, poultry farmers and human experts have measured these metrics by visual approximation or by manual weighing, which is tedious and can be very exhausting to the birds (Turner et al., 1983; Doyle and Leeson, 1989; Mortensen et al., 2016). In modern poultry houses, automatic weighing systems weigh the birds where they are expected to visit voluntarily. Four methods can be used to estimate livestock weights automatically: 1) foreleg and platform scales, 2) walk-through scales, 3) automated cage scales, and 4) vision-based scales (Tscharke and Banhazi, 2013).
The application of these automated weighing platform systems to poultry has been reported in numerous studies (Lott et al., 1982; Turner et al., 1983, 1984; Newberry et al., 1985; Doyle and Leeson, 1989; Kettlewell, 1989; Lokhorst, 1996; Chedad et al., 2000, 2003; Wang et al., 2018). The major downside of such automated platform systems is that some individual birds, notably heavier ones, are likely to visit the platform less frequently than others, making it a challenge to determine the actual weight of these birds (Newberry et al., 1985; Chedad et al., 2000, 2003; Mortensen et al., 2016). In addition, researchers have reported unconvincing conformity between manual and automatic mean measurements (Chedad et al., 2003). For instance, Klein Wolterink and Meijerhof (1989) noted that although the systems work well with young broilers, weighing platforms are visited less frequently during the finishing phase. Moreover, Newberry et al. (1985) and Blokhuis et al. (1988) ascertained that the average BW predicted by manual weighing was higher than that predicted by automatic weighing systems. Although the automated systems track the relative path of the flock's development, body measurements are doubtful toward the end of the maturing period (Blokhuis et al., 1988). A novel technique, one that is accurate, fast, and noninvasive, is necessary to overcome such challenges and make poultry processing more economical and efficient. Machine vision technology has proven adequate in achieving this goal. In the poultry production industry, computer vision (CV) techniques have been used for poultry and its products (Okinda et al., 2020a).
For instance, CV has been applied in safety inspection (Park et al., 2003); identification, detection, monitoring, and classification of contamination (Lawrence et al., 2001a,b; Nakariyakul and Casasent, 2007; Park et al., 2007); monitoring, evaluation, and prediction of freshness (Grau et al., 2011; Salinas et al., 2012; Xiong et al., 2015); quality inspection (Chao et al., 2002; Barbin et al., 2015, 2016); tenderness classification (Jiang et al., 2018); carcass and live BW estimation (Lotufo et al., 1999; Amraei et al., 2017b; Chen et al., 2017); and wholesomeness and unwholesomeness characterization and inspection (Chao et al., 2008, 2010). Likewise, CV has been used for crack, defect, and dirt detection and grading in eggs (Patel et al., 1998a; Mertens et al., 2005; Leiqing et al., 2007; Dehrouyeh et al., 2010; Wang, 2014); egg weight and volume estimation (Hoyt, 1979; Okinda et al., 2020b); egg freshness estimation (Dutta et al., 2003; Abdel-Nour et al., 2011; Sun et al., 2015); and egg grading and sorting (Omid et al., 2013; Nasiri et al., 2020). Classification by size, volume, and weight is an essential step in grading and sorting most food, agricultural, and meat products. Owing to its nondestructive capability and high efficiency, CV has been applied to assessing weight (mass), volume, and size. The technique has been used to measure the volume and mass of fruits and vegetables in the agriculture and food industries (Koc, 2007; Khojastehnazhand et al., 2009, 2010; Rashidi et al., 2009; Omid et al., 2010; Fellegari and Navid, 2011; Lee et al., 2014; Concha-Meyer et al., 2018; Nyalala et al., 2019), pig weight (Du and Sun, 2006; Yan et al., 2006; Yang and Teng, 2007; Kongsro, 2014; Fernandes et al., 2019), and cow weight (Tasdemir et al., 2011; Hansen et al., 2018). The primary aim of this article is to present a detailed review of the recent publications on image processing applications and CV approaches in the measurement and classification of poultry products.
The articles were selected through an exhaustive search of the primary scientific databases, with the criterion that they use image processing and analysis to solve problems involving poultry products. This review discusses the measurement methods of physical parameters used by computer vision systems (CVS) in the poultry industry. The article further explains the various imaging approaches related to the specific criteria being analyzed and their relevant elements, estimation techniques, and data analysis. This article will benefit consumers, poultry production and processing plants, and researchers interested in and involved with recent progress and advancements in nonintrusive measurement determination, primarily implemented in poultry production and poultry products.

Overview of CV Systems

A CV system's main components are the camera sensor, lighting (illumination), image processing board, software, and hardware. The camera sensor converts photons to electric signals. Visible light–based (charge-coupled device and complementary metal oxide semiconductor), thermal, and infrared depth-based sensors have been applied in mass and volume estimation systems to acquire images in various environments. Ultrasound, magnetic resonance imaging, and computed tomography are other imaging devices and technologies used for vision systems (Yu et al., 2020). Lighting devices produce the light illuminating the target object being inspected; thus, image quality, and the system's efficiency and precision in general, are significantly affected by the lighting system's functionality (Liu et al., 2015). Suitable illumination significantly improves image processing and analysis by refining image contrast and reducing shadow, noise, and reflection (Zhang et al., 2014); thus, consistent illumination should be provided through all-scene lighting when assessing exterior quality using a CV system. In a process referred to as digitization, the image processing board, also known as the digitizer or frame grabber, transforms the pictorial image into numerical form (pixels). The software is the underlying image analysis code that manipulates images to achieve the desired performance; various processing algorithms have been developed and applied to the acquired images within programming frameworks such as MATLAB, ImageJ, and OpenCV. All the connected components that make up the CV system are regarded as the hardware, that is, the camera sensor, connecting cables, and a computer.
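The interaction of these components can be illustrated with a minimal pipeline sketch. The stage functions below are hypothetical placeholders, not taken from any reviewed study: digitized pixels flow from the sensor through segmentation and feature extraction to a prediction model.

```python
def run_cv_pipeline(image, segment, extract_features, predict):
    """Minimal CV-system sketch: digitized image -> segmentation ->
    feature extraction -> prediction. The three stage functions are
    placeholders for the algorithm choices a given study makes."""
    mask = segment(image)              # isolate the object from the background
    features = extract_features(mask)  # describe the object numerically
    return predict(features)           # map features to weight/volume/class

# Toy stages: global threshold, pixel-count area, linear weight model.
segment = lambda img: [[1 if v > 128 else 0 for v in row] for row in img]
area = lambda mask: sum(sum(row) for row in mask)
weight = lambda a: 2.0 * a  # hypothetical calibration: 2 g per pixel
```

For example, `run_cv_pipeline([[0, 200], [200, 200]], segment, area, weight)` returns 6.0 (3 foreground pixels at 2 g per pixel).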

Application of imaging techniques to poultry products

Computer vision systems provide numerous benefits over manual processes, such as accuracy, reliability, and grading and sorting speed. Furthermore, they have stable performance levels and are nondestructive (Okinda et al., 2018b, 2020a; Nyalala et al., 2019; Korohou et al., 2020; Raghavendra et al., 2020; Tan and Xu, 2020; Tian et al., 2020; Xu et al., 2020; Yu et al., 2020). Size is a critical parameter in food and agricultural production. Features such as length, width, area, and perimeter define an object's size. Size measurements can be applied individually or combined with shape features (Du and Sun, 2004). The size of a product usually corresponds to its volume, weight (mass), and surface area. Weight has been used to monitor the growth of fruits, vegetables, and livestock (Omid et al., 2010; Kongsro, 2014; Lee et al., 2014; Amraei et al., 2017a). Weight also determines the grading, packaging, and cost of products. Calculation of volume is critical for density-based sorting of food and agricultural products and for volume-based packaging space optimization. Volume can also be used to aid in determining the weight of a product (Moreda et al., 2009; Nyalala et al., 2019; Okinda et al., 2020b). The following subsections cover the image analysis techniques used for size classification and volume and weight prediction in the poultry industry.

Image Preprocessing

Image preprocessing is vital for improving the measurement accuracy and the validity of the analysis (Amraei et al., 2017a). In chicken live weight estimation techniques, both red, green, blue (RGB) and infrared depth images have been used. RGB is the most commonly used color space, but it is not well suited to object segmentation owing to the high correlation between the R, G, and B channels (Cheng et al., 2001). Several studies have thus explored various transformation techniques aimed at achieving accurate segmentation of image objects. In all the experiments on 2-dimensional (2D) weight estimation systems discussed, image analysis was conducted in the RGB color space with no transformation to other color space models. However, color space transformation was performed in a sick broiler detection system by Zhuang et al. (2018), whereby the RGB color space was converted to hue, saturation, value (HSV) and CIE L∗a∗b (Lab) color spaces. Zhuang et al. (2018) reported that, because the resulting picture intensities were uniformly spaced and divergent, the S and V channels were not conducive to chicken segmentation. The H channel produced a precise, clear broiler body segmentation, although its accuracy was somewhat lower than that of the a-b map. Therefore, Zhuang et al. (2018) applied the a-b map as the primary color space for broiler body segmentation, with the L-a map as an auxiliary. Contrast adjustments may be made during the image acquisition phase or as a preprocessing procedure. To obtain a simple outline of the birds, a contrasting background (e.g., a dark floor for white birds) can be deliberately provided (Amraei et al., 2017a). Furthermore, image filtering of intensity-based images (depth images) may be performed as a precaution against oversegmentation (Mortensen et al., 2016).
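As an illustration of the color-space transformations discussed, the standard-library `colorsys` module can convert RGB pixels to HSV so that segmentation operates on the hue channel alone. This is a minimal sketch; the hue bounds are illustrative, not the values used by Zhuang et al. (2018).

```python
import colorsys

def segment_by_hue(image, h_low, h_high):
    """Binary mask marking pixels whose hue falls in [h_low, h_high].
    `image` is a nested list of (R, G, B) tuples with 8-bit channels."""
    mask = []
    for row in image:
        out = []
        for r, g, b in row:
            # colorsys expects channel values scaled to [0, 1];
            # it returns (hue, saturation, value) with hue in [0, 1].
            h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            out.append(1 if h_low <= h <= h_high else 0)
        mask.append(out)
    return mask
```

Thresholding on hue rather than raw RGB decorrelates color from brightness, which is the motivation behind the HSV and Lab transformations above.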
Table 1 shows the different camera characteristics and data sets by various studies in the application of computer vision to poultry products.
Table 1

Characteristics of type of camera and data set by different studies.

Poultry product | Camera type | Number of images | Samples | Size (pixels) | Author(s)
Broiler chickens | | | | 120 × 120 | De Wet et al. (2003)
 | Sony Cyber-shot, Sony, Japan | 1,200 | | | Mollah et al. (2010)
 | Microsoft Kinect camera | 44,952 | | 640 × 480 | Mortensen et al. (2016)
 | SM-N9005, Samsung, Korea | 2,520 | | | Amraei et al. (2017a)
 | SM-N9005, Samsung, Korea | 2,440 | | | Amraei et al. (2017b)
 | SM-N9005, Samsung, Korea | 2,440 | | | Ab Nasir et al. (2018); Amraei et al. (2018)
 | Microsoft Kinect | | | 640 × 480 | Wang et al. (2017)
Chicken carcass | CCD grayscale camera | | | | Lotufo et al. (1999)
 | | 95 | | | Chen et al. (2017)
 | Ace1300-200uc, Basler, Germany | | n = 100 | 1,280 × 1,024 | Teimouri et al. (2018)
 | ScanBright Archeo 2, Poland | | n = 25 | 2,560 × 1,920 | Adamczak et al. (2018)
 | EOS 5D, Canon Inc, China | | n = 250 | | Qi et al. (2019)
 | Jai BB-141 GE, England | 136,472 | n = 45 | | Jørgensen et al. (2019)
Egg | TMC-7DSP (PULNIX) | | n = 110 | | Cen et al. (2006)
 | UI-2210RE-C-HQ, IDS, Germany | | | 640 × 480 | Duan et al. (2016)
 | PROLINE UK, Model 565 s | | n = 125 | | Soltani et al. (2015)
 | Microsoft Kinect camera | | n = 8 | 424 × 512 | Chan et al. (2018)
 | Microsoft Kinect camera | 7,500 | n = 1,500 | 512 × 424 | Okinda et al. (2020b)
 | SDN-550, Samsung | | n = 200 | 768 × 576 | Javadikia et al. (2011)
 | Canon IXUS 960IS | | | 1,200 × 1,600 | Asadi et al. (2012)
 | HD Webcam c270 h | | | 640 × 480 | Siswantoro et al. (2017)
 | Logitech Webcam C170 | | | 640 × 480 | Widiasri et al. (2019)
 | FUJIFILM camera | | n = 120 | | Ab Nasir et al. (2018)
 | Canon IXUS 960IS | | n = 90 | 1,200 × 1,600 | Raoufat and Asadi (2010)
 | SRC-500HP CCD camera | | n = 100 | | Zhou et al. (2008)
 | Nikon D90 camera | | | 4,288 × 2,848 | Zhang et al. (2016)

Segmentation

Segmentation is performed after preprocessing to separate a digital image into distinct areas. Its primary function is to separate the background from the object under evaluation so that the significant area can be processed. Extracting the meaningful segments, also known as regions of interest (ROI), is the initial step in transforming a color or grayscale image from low-level image processing to a high-level image description (features). Significant discriminating features are fundamental in separating the background from the birds. The main color-based segmentation approaches can be grouped into 3 techniques: background subtraction, threshold-based, and learning-based techniques. The threshold-based technique is the most widely used for foreground detection in chicken weight estimation systems. The Otsu (1979) method, the most classical thresholding technique, determines the threshold value from an image's global intensity histogram. The model-based segmentation approach has not been applied to chicken weight estimation systems in the reviewed articles. The Otsu (1979) threshold has been used by De Wet et al. (2003); Mollah et al. (2010); Wang et al. (2017); Amraei et al. (2017a, 2017b, 2018); Teimouri et al. (2018); Qi et al. (2019); and Koodtalang and Sangsuwan (2019). Mortensen et al. (2016) used the range-based watershed segmentation technique to partition the broiler image into several partitions. To avoid oversegmentation of the broilers, the depth images were smoothed using a Gaussian kernel, followed by a morphologic opening with a circular structuring element. Cen et al. (2006) used threshold-based image segmentation and an indicator to enhance the contrast between the egg and the background. Duan et al. (2016) also used threshold-based segmentation to binarize the egg image after removing the light leakage.
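The global-histogram principle behind the Otsu (1979) threshold can be sketched from scratch as follows (an illustration of the technique, not code from any reviewed system): the threshold is chosen to maximize the between-class variance of the background and foreground intensities.

```python
def otsu_threshold(gray):
    """Otsu (1979): choose the threshold maximizing between-class
    variance of the global intensity histogram. `gray` is a nested
    list of 8-bit intensity values."""
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]              # background pixel count at threshold t
        if w_bg == 0:
            continue
        w_fg = total - w_bg          # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg        # background mean intensity
        mu_fg = (sum_all - sum_bg) / w_fg  # foreground mean intensity
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels above the returned threshold are then labeled foreground, giving the binary mask used in the weight estimation systems described above.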
Likewise, a simple threshold operation was applied to segment the egg region from its background (Thipakorn et al., 2017). In the studies by Soltani et al. (2015), Siswantoro et al. (2017), and Widiasri et al. (2019), to separate eggs from the background, the segmentation was carried out on the images using automatic thresholding. In this method, the program finds the best threshold for each image separately. Chan et al. (2018) used the Otsu (1979) method to separate the egg from the stage based on the assigned infrared intensity values. A summary of different segmentation techniques applied in poultry studies is presented in Table 2.
Table 2

Summary of segmentation techniques by different studies.

Segmentation technique | Product | Image type | Author(s)
Threshold based | Broiler chicken | RGB | De Wet et al. (2003)
 | | | Mollah et al. (2010)
 | | | Amraei et al. (2017a)
 | | | Amraei et al. (2017b)
 | | | Amraei et al. (2018)
Watershed based | | Depth | Mortensen et al. (2016)
Threshold based | | | Wang et al. (2017)
 | | RGB – Grayscale | Lotufo et al. (1999)
 | Chicken portions | | Teimouri et al. (2018)
 | Chicken carcass | | Qi et al. (2019)
 | Chicken legs | | Koodtalang and Sangsuwan (2019)
 | Eggs | RGB | Cen et al. (2006)
 | | Binary | Duan et al. (2016)
 | | | Soltani et al. (2015)
 | | | Asadi et al. (2012)
 | | Depth | Chan et al. (2018)
 | | | Okinda et al. (2020b)
 | | RGB – Grayscale | Thipakorn et al. (2017)
 | | | Siswantoro et al. (2017)
 | | | Widiasri et al. (2019)
 | | | Ab Nasir et al. (2018)
 | | RGB – Grayscale – Binary | Alikhanov et al. (2019)
 | | | Alikhanov et al. (2018)

Features Extraction

The morphologic features (shape and size) are frequently used for weight estimation, automatic sorting, and poultry product classification. Feature size is quantified using 2D features (projected area, perimeter, length, width, radial distance, and major and minor axes). The area (a scalar quantity) is the actual number of pixels in the region. The perimeter (a scalar quantity) is the length of the region's boundary, computed as the sum of distances between neighboring boundary pixels (Korohou et al., 2020). Eccentricity measures the elongation of the ellipse fitted to the region, derived from the ratio of its major and minor axes. The radial distance is the average distance between the boundary points and the region's center of gravity. The major axis length is the pixel distance between the end points of the fitted ellipse's major axis, and the minor axis length is the pixel distance between the end points of its minor axis. Three-dimensional (3D) features (volume, surface area) have also been used to quantify feature size. Mortensen et al. (2016) used 1-dimensional (1D), 2D, and 3D features for broiler weight prediction: age was the 1D feature; the 2D features included projected area, width, perimeter, radius, and eccentricity; and the 3D features, extracted from the depth images, were volume, convex volume, surface area, convex surface area, back width, and back height. Amraei et al. (2017a, 2017b, 2018) extracted 2D feature parameters using the Image Processing Toolbox. Wang et al. (2017) extracted 9 features using a mathematical geometry method for backpropagation neural network model construction. Lotufo et al. (1999) used the area of 3 carcass parts as features for weight prediction. Chen et al. (2017) extracted 6 parameters (projection area, contour length, length, breast width, breast length, and fitting ellipse) from 95 processed images. Similarly, Qi et al.
(2019) obtained the same set of features for automatic chicken carcass classification based on weight. Teimouri et al. (2018) obtained 12 geometrical features, color features, and texture features from chicken portion images. Koodtalang and Sangsuwan (2019) extracted width, length, and contour area features from chicken leg images as inputs for a deep neural network model. Finally, Jørgensen et al. (2019) extracted a total of thirty-five 2D and 3D features for model development. Cen et al. (2006) extracted 4 egg size features (vertical diameter, maximal horizontal diameter, upper horizontal diameter, and nether horizontal diameter) to correlate with egg weight. Duan et al. (2016) extracted the major and minor axis feature parameters representing size and the egg shape index. Waranusast et al. (2016) used 6 features from the geometric properties of the best-fitting ellipse: major axis, minor axis, egg circumference, egg area, axis ratio, and compactness. Thipakorn et al. (2017) extracted 13 geometric features from the acquired egg images for weight prediction. Okinda et al. (2020b) extracted 2D geometric features from depth images and used them to develop 13 regression models. Ab Nasir et al. (2018) extracted 7 geometric features using image processing techniques and principal component analysis. Table 3 summarizes the feature spaces and feature types extracted by different studies.
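Several of the 2D morphologic features described above can be computed directly from a binary mask. The sketch below (pure Python, moments-based; an illustration, not code from any reviewed study) derives area, bounding-box length and width, and the eccentricity of the moment-equivalent ellipse.

```python
import math

def shape_features(mask):
    """2D geometric features from a nonempty binary mask: pixel area,
    bounding-box length/width, and eccentricity of the best-fitting
    ellipse via second-order central image moments."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    area = len(pts)
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    width = max(xs) - min(xs) + 1
    length = max(ys) - min(ys) + 1
    cx = sum(xs) / area
    cy = sum(ys) / area
    # Normalized second-order central moments of the region.
    mu20 = sum((x - cx) ** 2 for x in xs) / area
    mu02 = sum((y - cy) ** 2 for y in ys) / area
    mu11 = sum((x - cx) * (y - cy) for x, y in pts) / area
    # Quantities proportional to the ellipse axes; only their ratio
    # is needed for eccentricity.
    common = math.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    major = math.sqrt(2 * (mu20 + mu02 + common))
    minor = math.sqrt(2 * (mu20 + mu02 - common))
    ecc = math.sqrt(1 - (minor / major) ** 2) if major > 0 else 0.0
    return {"area": area, "length": length, "width": width,
            "eccentricity": ecc}
```

A thin horizontal strip yields eccentricity close to 1 (highly elongated), while a square region yields 0, matching the intuition that eccentricity captures elongation.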
Table 3

Comparison of different types of features used by different studies.

Product | Parameters | Feature space | Feature type | Author(s)
Broiler chickens | Live weight prediction | 1D + 2D + 3D | Morphologic | Mortensen et al. (2016)
 | Mass estimation model | | | Wang et al. (2017)
 | Weight estimation | 2D | | Amraei et al. (2017a)
 | Weight estimation | | | Amraei et al. (2017b)
 | Weight grading | | | Chen et al. (2017)
 | Weight-based classification | | | Qi et al. (2019)
Broiler carcass | Poultry weight estimation | | Area | Lotufo et al. (1999)
Chicken portions | On-line separation and sorting | | Geometrical, color, and texture | Teimouri et al. (2018)
Chicken legs | Size classification | 2D | Geometric | Koodtalang and Sangsuwan (2019)
Broiler carcass | Weight estimation | 2D + 3D | Morphologic | Jørgensen et al. (2019)
Egg | Weight detection | 2D | | Cen et al. (2006)
 | Shape and size grading | | | Duan et al. (2016)
 | Volume prediction | 1D + 2D | | Siswantoro et al. (2017)
 | Mass and volume measurement | 2D | | Widiasri et al. (2019)
 | Mass estimation | | | Asadi et al. (2012)
 | Size classification | | Geometric | Waranusast et al. (2016)
 | Weight prediction and size classification | | | Thipakorn et al. (2017)
 | Volume measurement | | | Chan et al. (2018)
 | Volume estimation | | | Okinda et al. (2020b)
 | Weight measurement | | | Javadikia et al. (2011)
 | Weight estimation | | | Aragua and Mabayo (2018)
 | Weight estimation | | | Asadi and Raoufat (2010)
 | Weight- and shape-based grading | | | Ab Nasir et al. (2018)
 | Weight measurement | | | Alikhanov et al. (2015)
 | Automatic sorting | | | Alikhanov et al. (2019)
 | Weight sorting | | | Alikhanov et al. (2018)
 | Egg weight estimation | | | Raoufat and Asadi (2010)
 | Volume and surface area determination | | | Zhou et al. (2008)

Abbreviations: 1D, 1-dimensional; 2D, 2-dimensional; 3D, 3-dimensional.


Modeling Techniques

Using image processing techniques, poultry product images can be described by a set of features such as size and shape. These features form a training set; classification algorithms are then applied to extract the knowledge base, which is used to decide unknown cases. Different modeling methods have been used in CVS to classify poultry products by weight, volume, and size, including artificial neural networks (ANN), regression, support vector regression (SVR), support vector machines (SVM), and convolutional neural networks/deep learning (Tan and Xu, 2020). Either single independent or multiple variables are used in the development of these models. Lotufo et al. (1999) used multidimensional linear regression with least squares curve fitting to predict the weight from 3 area parameters (breast, legs, and wings regions), achieving an R2 = 0.92 and an SD error of 78 g. Mendeş and Akkartal (2009) used regression trees to predict slaughter weight, and Tyasi et al. (2020) used classification and regression trees to predict the BW of laying hens. Mendeş et al. (2005); Yakubu et al. (2009); Mendes (2009); Yakubu and Salako (2009); and Egena et al. (2014b) used principal component scores and analysis for weight prediction of poultry. Multiple regression models have been developed to predict male chickens' weight (Mendes, 2009; Jesuyon and Oyelola, 2016). In addition, Chen et al. (2017) constructed a simple linear regression model and a multiple linear regression model for carcass grading. Based on the carcass projection area, the simple linear regression model had the highest accuracy of R2 = 0.827. The multiple linear regression model had the highest accuracy of R2 = 0.8880 based on the projection area, breast width, and breast length. After removing the 8 detected outliers, the adjusted multiple linear regression model's accuracy was R2 = 0.933. Similarly, De Wet et al. (2003) used regression equations to determine broiler chickens' daily BW.
The study estimated chicken weight with a 10% relative error (SD of the residuals from the image surface pixels) and 15% for image periphery data. Mollah et al. (2010) developed a linear equation to estimate broilers' weights from their body surface area pixels. The relative error in weight estimation of broiler chicken by image analysis, expressed as the percent error of the residuals from surface area pixels, was between 0.04 and 16.47%. Adamczak et al. (2018) used regression equations to correlate the breast and muscle weight with cross-section areas, reporting standard prediction errors of 36.99 g for the breast and 33.19 g for the m. pectoralis major. Qi et al. (2019) used 3 machine learning methods (random forest, RF; AdaBoost, AB; and gradient boosting, GB) to establish a nonlinear regression model. Results showed that for carcass weight prediction, the gradient boosting model had the highest accuracy, with R2 = 0.996 and a root mean square error (RMSE) of 0.039. The model was also 96% accurate in weight grading. Jørgensen et al. (2019) used linear regression models for the weight estimation of broilers. Using 2D features only, the mean absolute error (MAE) was 47.22 g with a 3.53% mean absolute percentage error (MAPE); using 3D features only, the MAE was 63.49 g with a 4.72% MAPE. The combined 2D–3D model had a 46.47 g MAE and a 3.47% MAPE, a 1.80% relative reduction in MAPE compared with 2D features alone. In the study by Cen et al. (2006), a regression model between an egg's weight and its size was established and used to detect its weight. The results indicated the system's egg weight detection capability, with a correlation coefficient of 0.9781 and an absolute error of less than ±3 g. Chan et al. (2018) calculated egg volume directly from the egg shape parameters estimated with the least squares technique, in which the captured egg point clouds are fitted in 3D space to unique geometric models of an egg.
Consequently, the egg shape parameters were estimated alongside the eggs' orientation and position. The results showed an estimated volume accuracy of 93.3% when compared with reference volumes. In the study by Okinda et al. (2020b), 13 regression models were explored to estimate the volume of an egg: SVR (linear, quadratic, cubic, fine Gaussian, medium Gaussian, and coarse Gaussian), Gaussian process regression (rational quadratic, squared exponential, Matern 5/2, and exponential), and ANN (Levenberg-Marquardt, Bayesian regularization, and scaled conjugate gradient training algorithms). Regression models for egg volume and surface area measurement were also developed (Zhou et al., 2008). Javadikia et al. (2011) used backpropagation and hybrid learning methods in an ANFIS model to measure eggs' weight. The results showed a predicted correlation coefficient of 0.9942 from the ANFIS model, making it a practical and cheap methodology for measuring egg weight. Amraei et al. (2017b) used the SVR algorithm for weight prediction of live broiler chickens. The SVR algorithm's R2, MAPE, and RMSE values were 0.98, 8.63%, and 67.88, respectively. Amraei et al. (2018) used a transfer function model to estimate broiler chicken weight; the MAPE, RMSE, SRE, and RAV were calculated as 21.465, 102.97, 0.240, and 0.0578, respectively. The transfer function model is a dynamic data-based modeling approach used to describe a system's dynamic responses relating its output(s) to its input(s) (Amraei et al., 2018). Artificial neural networks have been used to estimate broiler chickens' weight (Mortensen et al., 2016; Amraei et al., 2017a; Wang et al., 2017). In addition, ANN, partial least squares regression, and linear discriminant analysis were used in the on-line separation and sorting of chicken portions for carcass and cuts applications (Teimouri et al., 2018).
The ANN classifier outperformed the linear models with an overall 93% accuracy at a maximum conveyor speed of 0.2 m s−1, corresponding to a total processing rate of 2,800 samples per hour. Koodtalang and Sangsuwan (2019) developed deep neural network models for the classification of chicken leg size. In egg products, neural networks have been widely used: ANN was used to predict egg volume (Soltani et al., 2015), neural network techniques and algorithms were used to estimate egg weight (Asadi and Raoufat, 2010; Raoufat and Asadi, 2010), and a backpropagation neural network was used to predict egg volume (Siswantoro et al., 2017). The SVM classifier has been applied to the classification of egg size (Waranusast et al., 2016; Thipakorn et al., 2017) and the prediction and classification of egg weight (Zalhan et al., 2016). A summary of the classification techniques applied in different poultry studies is shown in Table 4.
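A pixel-area-to-weight linear model of the kind fitted by De Wet et al. (2003) and Mollah et al. (2010) reduces to ordinary least squares. The sketch below uses made-up numbers purely to show the mechanics, not data from those studies.

```python
def fit_linear(areas, weights):
    """Least-squares fit of weight = a * area + b, the simple
    pixel-area weight model used in several reviewed studies."""
    n = len(areas)
    mean_x = sum(areas) / n
    mean_y = sum(weights) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(areas, weights))
    sxx = sum((x - mean_x) ** 2 for x in areas)
    a = sxy / sxx                 # slope: grams per pixel of projected area
    b = mean_y - a * mean_x       # intercept
    return a, b

def r_squared(areas, weights, a, b):
    """Coefficient of determination of the fitted line."""
    mean_y = sum(weights) / len(weights)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(areas, weights))
    ss_tot = sum((y - mean_y) ** 2 for y in weights)
    return 1 - ss_res / ss_tot
```

The same least-squares machinery, extended to several predictors, gives the multiple linear regression models (e.g., projection area plus breast width and length) discussed above.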
Table 4

Summary of classification techniques by different poultry studies.

Input image | Preprocessing | Feature extraction | Classifier/data analysis | Accuracy | Author(s)
Broiler chicken | Threshold-based segmentation | | Nonlinear regression | RE = 11%, 16% | De Wet et al. (2003)
| | | Linear regression models | RE = 0.04%, 16.47% | Mollah et al. (2010)
| Watershed segmentation, smoothening, morphologic opening | 3D + morphologic features | Linear regression, ANNs | MRE = 7.8%, RSD = 6.6% | Mortensen et al. (2016)
Broiler chicken | Threshold-based segmentation | Morphologic features | BPNN, t test | R2 = 0.98 | Amraei et al. (2017a)
| | | SVR | R2 = 0.98, RMSE = 67.88, MAPE = 8.63% | Amraei et al. (2017b)
| | | TF model | R2 = 0.98 | Amraei et al. (2018)
| | Morphologic + 3D features | BPNN | RMSE = 0.048 kg, MRE = 3.3% | Wang et al. (2017)
Chicken carcass | | Morphologic features | Simple linear and multiple linear regression | R2 = 0.827, R2 = 0.880 | Chen et al. (2017)
Chicken portions | Threshold-based segmentation | Geometrical, color, and texture | PLSR, LDA, and ANNs | Accuracy = 93%, 2,800 samples/h | Teimouri et al. (2018)
Chicken carcass | | Correlation coefficients | Regression equations | SEP = 36.99, 33.19 g | Adamczak et al. (2018)
Chicken carcass | Threshold-based segmentation | Morphologic features | ML classification and regression tree models | R2 = 0.996, RMSE = 0.039 | Qi et al. (2019)
Broiler carcass | | 2D + 3D features | Regression models | R2 = 0.755–0.808 (2D), R2 = 0.833–0.855 (3D) | Jørgensen et al. (2019)
Egg | Threshold-based segmentation | Morphologic features | Regression model | r = 0.9781, AE < ±3 g | Cen et al. (2006)
| | | Statistical analysis | Size grading = 90.5%, shape grading = 89.3% | Duan et al. (2016)
| | Geometrical features | SVM classifier | Accuracy = 80.4%, measurement error = 3.1% | Waranusast et al. (2016)
| | Linear regression and equations | SVM classifier | r = 0.9915, accuracy = 87.58% | Thipakorn et al. (2017)
Egg | | Morphologic features | Statistical analysis | Accuracy > 96% | Zalhan et al. (2016)
| Threshold-based segmentation | Diameter | Statistical analysis, ANN | R2 = 0.99, mean AE = 0.59 cm3, maximum AE = 1.69 cm3; R2 = 0.992, RMSE = 0.66 cm3 | Soltani et al. (2015)
| | Geometric features | Regression analysis | Accuracy = 93.3% | Chan et al. (2018)
| | Regression models | Statistical analysis, t test | R2 = 0.984; RMSE = 1.175 cm3, 1.294 cm3, and 1.080 cm3 | Okinda et al. (2020b)
| | | ANFIS model | MSE = 0.2955, MAE = 0.3285, SSE = 35.4649, r = 0.9942, P = 0 | Javadikia et al. (2011)
| Threshold-based segmentation | | Statistical analysis | Accuracy = 96.31%, error = 3.69% | Aragua and Mabayo (2018)
| | | Neural network | R2 = 96%, absolute error < 2.3 g | Asadi and Raoufat (2010)
| Threshold-based segmentation | | BPNN, statistical analysis | Absolute RE = 2.2078%, R2 = 0.9738 | Siswantoro et al. (2017)
| | Morphologic features | Linear regression, ANOVA | Absolute RE < 5%, CV < 1% | Widiasri et al. (2019)
| | | Regression models | r = 95% | Asadi et al. (2012)
| | Geometric features | K-NN classifier | Accuracy = 94.16% (shape), 44.17% (weight) | Ab Nasir et al. (2018)
| | | Statistical and regression analysis | R2 = 0.9439, R2 = 0.9235 | Alikhanov et al. (2015)
| Threshold-based segmentation | | Regression analysis | Sorting accuracy = 94.6% and 90.3% | Alikhanov et al. (2019)
| | | Regression analysis | r = 0.989, R2 = 0.978; error = 2.5% and 12.5% | Alikhanov et al. (2018)
| | | Neural network algorithms | R2 = 0.96, AE < 2.3 g | Raoufat and Asadi (2010)
| | | Regression analysis | R2 = 0.95 | Shanmugasundaram (2016)
| | | Statistical analysis | Accuracy = 99% | Zhang et al. (2016)
Egg | | Geometric features | Linear regression, statistical analysis | R = 0.88 and 0.86 | Zhou et al. (2008)

Abbreviations: AE, average error; ANFIS, adaptive neuro fuzzy inference system; MAE, mean absolute error; MAPE, mean absolute percentage error; MRE, mean relative error; MSE, mean square error; P, probability; r, correlation coefficient; R2, coefficient of determination; RE, relative error; RMSE, root mean square error; RSD, relative SD; SEP, standard error of prediction; SSE, sum square error; TF, transfer function.
Discussion

Applications to Poultry Live Weight

As with other livestock, the correlation between preslaughter and postslaughter measurements is essential in poultry production. BW is the primary indicator of development in poultry production. Measuring poultry BW using machine vision (MV) techniques is an excellent alternative to weighing scales. This vision-based method ensures that poultry are weighed regularly without causing stress or requiring sizeable human labor. Continuing developments in CV technology offer new combinations of image processing and analysis, machine learning techniques, and electronic hardware for poultry BW determination, especially in broilers, making it a worthy asset to large-scale poultry processing plants. Within the poultry industry, BW is valuable because it provides relevant information on feed conversion efficiency, vitality, disease occurrence, weight uniformity, and growth rate (Flood et al., 1992). BW also indicates management quality (Lott et al., 1982; Turner et al., 1983; De Wet et al., 2003). Measurement of BW is used for body development evaluation in poultry production and livestock, but it is hardly measured in the field (De Brito Ferreira et al., 2020). Consequently, the correlation between BW or body conformation and physical measurements, for example, keel length, body length, thigh length, shank length, and breast girth, has been a primary focus for poultry producers owing to the effect of such traits on broiler efficiency and feed productivity. These interrelationships between body measurements could, therefore, lead to faster selection (Mustafa, 2016). Market value for poultry is also determined by weight. For this reason, poultry breeders have sought to attain a higher BW of chickens at earlier ages in production to achieve better marketing prices for their birds (Malik et al., 1997).
General development and body mass are the primary measurements used to determine the appropriate slaughter age. It is worth noting that, apart from MV, researchers have used other methods for evaluating the live weight of poultry. This article also highlights such approaches, although they are not a critical aspect of this review. Live poultry BW has been predicted from linear and zoometrical body measurements (Raji et al., 2009; Semakula et al., 2011; Ojedapo et al., 2012). Ahmed (2018) studied the interrelationships between linear body measurements and BW of commercial Marshall broilers in Nigeria's semiarid region. Malomane et al. (2014) performed a factor score analysis for BW estimation from the linear body measurements of 3 indigenous southern African chicken breeds. Teguia et al. (2008) used 1-wk-old ducklings' body measurements to evaluate the live BW and body characteristics of the African Muscovy duck. Al-Nedawi (2019) researched body measurements' role as predictors of commercial broilers' final weight using regression procedures. Likewise, Bachev and Lalev (1990) studied the relationship between corporal dimensions and live turkey weight. Tyasi et al. (2017) assessed the relation between body measurement traits and the BW of indigenous Chinese Dagu chickens using path analysis. Mendeş and Akkartal (2009) used regression tree analysis to predict slaughter weight in broilers. Likewise, Tyasi et al. (2020) used classification and regression tree analysis for BW prediction of Potchefstroom Koekoek laying hens. Researchers have often implemented multiple linear regression and principal component analysis in broiler weight prediction. Mendes (2009) predicted male chickens' slaughter weight using principal component scores and developed multiple regression models.
Principal component scores were also used by Yakubu et al. (2009) to measure indigenous chickens' shape and size and predict BW in Nigeria. The results showed a highly significant and positive correlation between the biometric traits and BW. Jesuyon and Oyelola (2016) compared the BW and live weight measurements of broiler strains using multiple regression. Ogah (2011) conducted a principal components analysis of Nigerian indigenous turkey body measurements to predict live weight from both orthogonal and original traits. In another work, Egena et al. (2014b) applied principal component analysis to the body measurement relationships of Nigerian indigenous chickens. Path coefficient analysis has also been used in the estimation of BW in poultry. Yakubu and Salako (2009) analyzed Nigerian indigenous chickens' BW and morphologic attributes using path coefficient analysis. Path analysis was conducted by Mendeş et al. (2005) on the correlation between the live weight and various body measures of American bronze turkeys. Egena et al. (2014a) performed a related analysis on indigenous chickens from Nigeria to establish the association between BW and body measurements. Latshaw and Bishop (2001) estimated chickens' body composition and BW using noninvasive measurements. Cangar et al. (2006) used input-output and single-output models to forecast the slaughter end weight of broiler chickens. De Wet et al. (2003) used image analysis to investigate the possibility of detecting broiler chickens' daily growth rates. Fifty broiler chickens reared under commercial conditions were used; 10 of the 50 were randomly selected and video recorded (upper view) 18 times during the 42-d growing period. The number of surface and periphery pixels from the images was used to derive a relationship between body dimension and live weight. Likewise, Mollah et al. (2010) used digital image analysis for broilers' live weight estimation.
Using a linear equation from image analysis of the broiler body surface area, they estimated broiler weights from body surface area pixels. The degree of fit of the linear equation was 0.999, and the estimated BW was not significantly (P > 0.05) different from manually measured BW up to 35 d of age. This research showed a clear implicit relation between broiler surface area and BW. Mortensen et al. (2016) used 3D CV to predict broiler chicken weight. They tested the system in a commercial broiler house with 48,000 broilers (Ross 308) during the last 20 d of the breeding period. A relative mean error of 7.8% and a relative SD of 6.6% were achieved across all ages and broilers used for the study. They concluded that a better segmentation method could significantly improve prediction because broilers in the images overlap. A comparative study conducted by Amraei et al. (2017b) found that the use of MV with SVR is promising for estimating the weight of live broiler chickens. Amraei et al. (2018) developed a transfer function model and predicted broiler chicken weight using MV; the reported accuracy was R2 = 0.98 for the BW-predicting transfer function, from the correlation between actual and predicted weight. Amraei et al. (2017a) used MV and several ANN techniques to estimate broiler chickens' weight, showing that the suggested approach was practical and useful. Wang et al. (2017) developed a broiler weight estimation model based on depth images and a backpropagation neural network. The results showed an 11% maximum relative error and a 0.5% minimum relative error, with an optimal fitness of 0.994, a mean relative error of 3.3%, and an RMSE of 0.048 kg. This study concluded that estimating broiler weight using the broiler weight estimation models and CV was indeed practical and feasible.
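At its core, the pixel-area approach of De Wet et al. (2003) and Mollah et al. (2010) reduces to fitting a calibration curve from segmented silhouette area to scale weight. A minimal sketch with ordinary least squares follows; the calibration pairs are entirely synthetic and illustrative, not data from the cited studies.

```python
# Minimal sketch of the pixel-area-to-weight idea (De Wet et al., 2003;
# Mollah et al., 2010): fit body weight (g) as a linear function of the
# bird's segmented top-view area (pixels). The numbers below are synthetic.
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration pairs: (silhouette area in pixels, scale weight in g)
areas   = [12000, 15000, 18000, 21000, 24000]
weights = [  800,  1100,  1400,  1700,  2000]

a, b = fit_line(areas, weights)
print(round(a * 20000 + b))  # predicted weight for a 20,000-pixel silhouette
```

In practice the cited studies report near-linear fits (degree of fit 0.999 in Mollah et al.), which is why even this one-feature model can be serviceable once the camera height and segmentation are fixed.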

Applications to Carcass and Cuts

Carcass weight is an essential parameter for production economics and cutting equipment adjustment at any poultry slaughtering plant (Jørgensen et al., 2019). In the poultry industry, nonstandard raw material in terms of the size and weight of slaughtered chickens is a critical problem (Adamczak et al., 2018). The carcass size and weight determine the appropriate cutup station's cutting specifications for a given broiler carcass. If a carcass is larger than the cutting line settings, parts of meat will either be left in the body or carried over into other cuts. Conversely, if a carcass is smaller than the cutting line settings, bones and ribs may be cut together with the fillet (Adamczak et al., 2018). Correct measures of carcass weight and size thus reduce waste, increase cut quality, and maximize profits (Adamczak et al., 2018; Jørgensen et al., 2019). In addition, cutting line settings cannot be adjusted manually for each carcass (Adamczak et al., 2018). The automation of these cutting lines is thus a fundamental factor in the processing of chicken carcasses. The conventional broiler carcass weighing technique uses a conveyor weighing scale installed as part of the processing line (Jørgensen et al., 2019). However, this method suffers from some shortcomings; that is, it requires transferring a carcass off and back between the production line and the conveyor weighing scale (Jørgensen et al., 2019). The conveyor weighing scales are often quite large, and the entire production line needs to be halted during their maintenance, or the weighing scale is entirely bypassed. A study by Hudspeth et al. (1973) sought to establish the relationship between broiler parts weight, carcass weights, and type of cut. Several studies have reported approaches based on various techniques for estimating broiler carcass weight. Scollan et al. (1998) introduced nuclear magnetic resonance imaging for evaluating the pectoralis muscle mass in broilers. Silva et al.
(2006) and Oviedo-Rondon et al. (2007) developed a nondestructive real-time ultrasound system to measure broiler carcass and breast muscle mass. Studies by Yakubu and Idahor (2009), Raji et al. (2010), and Tyasi et al. (2018) correlated body measurement traits and age to the weight of the broiler carcass. Despite the nondestructive nature of these methods, they were invasive, required contact with a live chicken before slaughter, and were time-consuming. Çelik et al. (2018) analyzed the variables that affect white turkeys' carcass weight using regression analysis based on factor analysis scores and ridge regression. Another study by Hidayat and Iskandar (2018) aimed to estimate carcass weight and carcass cuts based on female SenSi-1 Agrinak chickens' live weight and age. They reported a definite relation between live weight and the weights of the carcass and carcass cuts. Lotufo et al. (1999) used MV for bird weight estimation. Digitized silhouette images of the carcass were acquired and divided into 6 regions (the wings, neck, breast, and legs). The study used mathematical morphology algorithms for region-based carcass segmentation, and the areas of those regions were used as parameters for fitting a polynomial curve. Chen et al. (2017) used MV to grade the weight of the chicken carcass and achieved a carcass weight grading rate of 89% on average. In another study, Teimouri et al. (2018) used MV with linear and nonlinear classifiers to automatically sort chicken portions. This study showed that a combination of machine vision and an ANN classifier might be applied to sort chicken portions automatically and accurately into 5 categories (breast, leg, fillet, wing, and drumstick). Adamczak et al. (2018) used 3D scanning for weight determination of whole chicken breast.
In the study, 3D images from 9 scans were obtained from 25 chicken carcasses and split into cross sections across various planes. The study concluded that the reported results were significantly lower compared with the industrial classification method, which is usually based on the entire carcass weight. Qi et al. (2019) used machine vision and machine learning technology to classify and grade chicken carcasses by weight automatically. Koodtalang and Sangsuwan (2019) used digital image processing and a deep neural network to classify chicken leg size. The findings revealed that the object dimension measurement error was smaller than ±0.2 cm and ±0.4 cm2 for length and area, respectively, and the trained model yielded 100% accuracy for chicken leg size classification. Jørgensen et al. (2019) recently used 3D prior knowledge for broiler carcass weight estimation in images. An unpaired t test was carried out to ensure the results were significant. The top 5 2D and 3D features were also correlated with weight, and the results indicated R2 between 0.755 and 0.808 for 2D features and between 0.833 and 0.855 for 3D features. The study concluded that, using 3D prior features, weight can be adequately estimated from images. In most countries, the process of cutting poultry is carried out manually, where human operators are tasked with cutting up poultry into 5 main classifications: drumstick, wing, breast, leg, and fillet. Afterward, the chicken parts are manually sorted into separate containers and finally packaged. However, this manual process of sorting poultry cuts is obsolete, invasive, and has multiple drawbacks in the poultry industry (Teimouri et al., 2018). A challenge in the poultry industry has been the nonstandard weight and size of slaughtered poultry, which leads to heterogeneity in the carcasses obtained (Adamczak et al., 2018).
Such disparity in raw materials leads to various operational and technical difficulties in cutting lines and critically impacts the plant's economic indicators (Brosnan and Sun, 2004; Misimi et al., 2016). Carcass weight is an essential parameter regarding production economics and cutting equipment adjustment at every slaughtering plant (Jørgensen et al., 2019). The carcass weight and size determine the appropriate cutup station's cutting specifications for a given broiler carcass. Carcass value in poultry depends strongly on the leg and breast muscles; therefore, selection may be targeted at these areas, as indicated by Wilkiewicz-Wawro et al. (2003). Most poultry processing plants currently factor in the weight of the entire carcass during classification. Owing to differences in carcass shape, such classification does not consider the exact share and size of individual muscles in the carcass. Automatic cutting of carcasses larger than expected by the cutting-line settings may result in some portions of the meat being left in the body, resulting in economic losses (Adamczak et al., 2018). Most current raw material classification systems are equipped with 2D camera and video systems capable of accurately assigning carcasses into quality classes, allowing appropriate targeting of carcasses with suitable parameters and better use of raw material in further processing stages (Mollah et al., 2010; Adamczak et al., 2018; Teimouri et al., 2018). Automation would also eliminate potential meat contamination by human operators, which poses health hazards, and increase processing speed during sorting, grading, and packaging in the poultry, meat, and food industries in general (Bhattacharya, 2014).
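Nearly every vision system in this section begins with the same threshold-based segmentation step before any weight or grading model is applied: binarize the frame, then take simple morphologic features such as foreground area. A toy sketch of that step follows, on a hand-written 6x6 "frame" standing in for a real camera image.

```python
# Sketch of the threshold-based segmentation step shared by most systems in
# this review: binarize a grayscale image, then take the foreground pixel
# count (area) as the most basic morphologic feature.
def segment(image, threshold):
    """Return a binary mask: 1 where the pixel value exceeds the threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def area(mask):
    """Foreground area in pixels, the simplest carcass-size feature."""
    return sum(sum(row) for row in mask)

# Toy 6x6 grayscale frame: a bright carcass blob on a dark background
frame = [
    [10,  10,  10,  10, 10, 10],
    [10, 200, 210, 205, 10, 10],
    [10, 198, 220, 207, 10, 10],
    [10, 190, 215, 201, 10, 10],
    [10,  10,  10,  10, 10, 10],
    [10,  10,  10,  10, 10, 10],
]
mask = segment(frame, 128)
print(area(mask))  # 9 foreground pixels
```

Production systems use calibrated cameras, adaptive thresholds, and morphologic cleanup (opening, watershed) rather than a fixed cutoff, but the binarize-then-measure structure is the same.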

Applications to Eggs

The production of eggs in poultry farms involves many tasks, such as collecting and harvesting eggs, washing and cleaning, separating cracked eggs from healthy eggs, classifying, sorting, grading, and packaging, most of which are performed manually and therefore involve tedious human labor (Aragua and Mabayo, 2018). In the poultry industry, weighing eggs is an essential requirement, and the information can be used for many applications. The geometrical properties (volume and surface area) of an egg are crucial in both the poultry production industry and biological investigations, as they provide relevant information on poultry weight prediction, internal egg parameters, shell quality inspection, and ecological and population morphology research (Narushin, 2005). Eggs are classified, sorted, graded, and finally packed according to their size, which determines their weight, for market distribution, and are commonly sold by size grade. Egg weight is also used worldwide in specific food recipes (Waranusast et al., 2016). Size and appearance are critical to the purchase of eggs and are among the most critical quality attributes that consumers assess before selecting eggs (Soltani et al., 2015). Because of the benefits of chicken rearing, chicken eggs are preferred over those of other poultry varieties such as turkey, duck, ostrich, and quail (Okinda et al., 2020b). The Archimedes-based water drainage method, also known as the water displacement method, was commonly used to measure and estimate egg volume (Loftin and Bowman, 1978; Rush et al., 2009; Boersma and Rebstock, 2010). However, this approach is destructive because the egg becomes moist, which inhibits the egg's incubation (Narushin, 1997; Rush et al., 2009). Several studies have been conducted on egg size classification, egg weight estimation, and egg volume prediction. For example, Cen et al. (2006) developed an egg-weight detection system using MV. Duan et al.
(2016) developed an on-line egg shape and size detection system based on the convex hull algorithm. The developed system was able to detect 30,000 eggs per h, with 89.3% accuracy for shape classification and 90.5% for size grading. Waranusast et al. (2016) introduced image processing techniques and the SVM classifier to classify egg size using a smartphone camera. The results showed errors of 2.8, 3.4, and 5.9% for the eggs' major axis, the eggs' minor axis, and the radius of the reference coins, respectively, when the measurements automatically acquired by the system were compared with the manually measured properties. The study reported an overall accuracy of 80.4% after performance evaluation, which applied 10-fold cross-validation to the classification model. By counting the number of pixels after detecting the coin in the image, the egg's size can be estimated. The egg pixels were modeled as an ellipse for size estimation purposes, and camera lens distortion was not considered; these conditions limited the egg size estimation accuracy. Similar research by Thipakorn et al. (2017) applied image processing and machine learning to classify egg size and predict egg weight. They used linear regression to predict egg weight and SVM classifiers to classify eggs by size. The results reported an 87.58% accuracy for egg size classification and a correlation coefficient of r = 0.9915 between the real and the predicted weight. Zalhan et al. (2016) proposed a CV-based egg grade classifier. The egg was placed vertically on a stage, and the camera was aligned with the egg's apex. The egg images' pixel size was determined from reference values derived from a coordinate measuring machine. Using a digital camera, the egg radius calculation model gave more than 96% volume accuracy. They concluded that the coordinate measuring machine is generally an expensive method with high operational complexity. Soltani et al.
(2015) also predicted egg volume using MV based on a mathematical model known as the Pappus theorem and artificial neural networks. They developed an image processing algorithm to compute the minor and major diameters of the eggs, which were used as the ANN model's input parameters. The results reported a predicted volume accuracy of R2 = 0.99 from the mathematical Pappus theorem model, with a maximum absolute error of 1.69 cm3 and an absolute mean error of 0.59 cm3. The best ANN model topology reported an R2 = 0.992 and an RMSE of 0.66 cm3. Overall, the mathematical model was superior to the ANN model, and both results were significantly better than those of other studies. In another study, Chan et al. (2018) used the Microsoft Kinect 3D camera to develop a system for measuring an egg's volume; a point cloud postprocessing algorithm was implemented to calculate the egg volume. Okinda et al. (2020b) recently estimated egg volume using image processing, CV, and machine learning techniques, and developed an algorithm to segment occluded eggs. The exponential Gaussian process regression was the highest performing model, with an R2 of 0.984 and an RMSE of 1.175 cm3. The model also estimated egg volume under partial occlusion, with RMSE values of 1.294 cm3 and 1.080 cm3. A t test found no significant difference between the different volume estimation methods used in the research. Javadikia et al. (2011) proposed measuring egg weight using image processing and the adaptive neuro fuzzy inference system (ANFIS) model. The model used weight prediction features extracted from the width and length and consequently found the relationship between them and egg weight. Aragua and Mabayo (2018) estimated egg weight using CV. The approach was reported as cost-effective, as it reduces human involvement over the entire process. The experimental results showed an egg classification and weight estimation accuracy of 96.31%.
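Several of the volume methods above reduce an egg silhouette to a solid of revolution, that is, a stack of thin circular slices (the view behind the Pappus-type and disc-based models in the studies reviewed here). A sketch of that slice-summation computation follows, validated against a sphere whose exact volume is known; the profile values are generated analytically rather than measured from an image.

```python
# Sketch of the disc (slice-summation) view of egg volume used by several
# studies in this review: treat the silhouette as a stack of thin circular
# discs, V ≈ Σ π (d_i / 2)² Δh, where d_i is the slice diameter. Verified
# here on a sphere, whose exact volume is (4/3)πr³.
from math import pi, sqrt

def disc_volume(diameters, dh):
    """Sum the volumes of circular slices of thickness dh."""
    return sum(pi * (d / 2.0) ** 2 * dh for d in diameters)

# Diameter profile of a sphere of radius 10, sampled at slice midpoints
r, n = 10.0, 1000
dh = 2 * r / n
profile = [2 * sqrt(max(r ** 2 - (-r + (i + 0.5) * dh) ** 2, 0.0))
           for i in range(n)]

v = disc_volume(profile, dh)
exact = 4 / 3 * pi * r ** 3
print(abs(v - exact) / exact < 0.001)  # True: under 0.1% error
```

With a real egg, the diameters come from the segmented silhouette row by row, which is why segmentation quality directly bounds the volume accuracy.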
Asadi and Raoufat (2010) applied MV and neural network techniques for fresh egg weight estimation. The study extracted 12 size features that were used to train the various algorithms. The results reported that the multilayer perceptron with scaled conjugate gradient training was superior to the other algorithms, with high accuracy (R2 = 96%) in egg weight estimation. In a later study, Asadi et al. (2012) presented an MV technique for estimating fresh egg mass, where 6 algorithms were used to extract features and the data were used for model establishment. The results showed that an egg's mass could be accurately estimated by using its 2 perpendicular views; the study achieved a correlation coefficient accuracy of 95% for egg mass prediction. Siswantoro et al. (2017) designed a CVS to predict egg volume using a backpropagation neural network. The results indicated an absolute relative error of 2.2078% for the proposed technique and a correlation coefficient of 0.978 between the actual and predicted egg volume, with no significant difference between them. In another study, CV was used by Widiasri et al. (2019) for mass and volume measurement using the disc approach. The extracted features were the major and minor axes; the data were used for volume calculation, while mass was estimated using density and a linear regression model. To test the proposed technique's accuracy, correlation tests, relative absolute error, and ANOVA tests were carried out. Ab Nasir et al. (2018) developed an egg grading system based on CV using egg shape and weight parameters. The k-nearest neighbor classifier was applied for classification. The results showed 94.16% accuracy for shape-based grading and a weight-based accuracy of 44.17%. Alikhanov et al. (2015) proposed an indirect technique using image processing to measure egg weight. From the processed images, geometric parameters were acquired, and egg volume and weight were calculated.
They used regression analysis to relate egg weight to the geometric parameters and exponential regression to approximate the egg volume and weight relationship. This study concluded that, when measuring egg weight using image analysis and indirect measurement, some parameters are insignificant. In addition, eggs have been sorted into various categories using different MV techniques to replace manual sorting, which results in low efficiency (Alikhanov et al., 2017). Egg sorting is usually performed by determining the geometric parameters of the eggs. In their work, Alikhanov et al. (2017) designed an automatic egg sorting and control framework machine that could sort the eggs into specified categories. Later research by Alikhanov et al. (2018) used image processing and an indirect approach to sort eggs by weight into 4 classes. Results showed that the egg area was the most crucial parameter for indirect egg weight estimation, achieving a correlation coefficient of 0.989, with the mathematical model for the egg area and weight relationship achieving a coefficient of determination of 0.978. The test and training sets had 2.5% and 12.5% classification errors, respectively. In recent work, Alikhanov et al. (2019) designed an automatic egg-sorting system using CV. The system was capable of indirectly sorting eggs by shape and weight. Image processing algorithms were used to obtain geometric features from the eggs, and a regression model was developed to sort the eggs by weight. The experimental findings showed that the system was feasible, sorting 2 to 3 eggs per s with accuracies of 94.6 and 90.3%, respectively, for the 2 transport conveyor speeds assessed. Raoufat and Asadi (2010) used neural network techniques with MV for egg weight estimation.
From the experimental results, the multilayer perceptron with the scaled conjugate gradient training algorithm was superior to the other 2 algorithms for egg weight estimation, with an accuracy of R2 = 0.96 and an absolute error of <2.3 g for an average egg size of 60 g. Shanmugasundaram (2016) used an image processing algorithm to determine the surface area and volume of eggs. The study correlated the volumes obtained using the tape and water displacement methods with the image processing technique, and the differences were not statistically significant. The findings showed a strong correlation of R2 = 0.95 between the measured egg mass and computed volume. In addition, a novel photogrammetric reconstruction method was used to measure the egg's surface area and volume by Zhang et al. (2016). A convex hull algorithm and a Monte Carlo approach were used to estimate the volume of the reconstructed 3D egg and the volume calibration factor. The results showed a high 99% accuracy compared with the drainage method. Although this method is accurate, it requires many images, taken from different positions and orientations, for the photogrammetric bundle adjustment operation. In addition, a target field is required to provide tie points between the images. Zhou et al. (2008) used MV and linear models to calculate egg surface area and volume. They developed a special stage to keep a leveled egg under lighting so a digital camera could capture the egg's shadow; the length and breadth of the egg could then be calculated. The study's R-value results were 0.88 for the volume model and 0.86 for the surface area model. Qiaohua and Youxian (2007) developed a Newton dichotomy–based egg weight prediction model and image detection method. The model captured the correlation between the egg shape index and its weight. The findings revealed that the procedure could predict egg weight with a correlation coefficient of 0.980. In research by Georgieva-Nikolova et al.
(2020), the weight of quail and hen eggs was indirectly calculated from shape and spectral indices. Rashidi et al. (2008) proposed that egg volume can be estimated using image processing and the spheroid approximation method. In later research, Rashidi and Gholami (2011) used linear regression models to predict egg mass based on geometrical attributes; the models included mass models based on length and diameter, mean geometric diameter, and projected areas. The prediction of egg volume and surface area was conducted by Narushin (2005) based on egg breadth and length measurements. Multiple equations were used for the calculations and comparisons between the measured and predicted geometrical properties, and about 90% of the estimates yielded a volume error within 2 mL. Nonetheless, this method requires manually positioning the egg in an egg-shaped hole on the system's stage, making it hard to automate; besides, there was no guarantee that the egg would be leveled faultlessly. In other research, Dehrouyeh et al. (2010) used an MVS to grade defective eggs, and Omid et al. (2013) graded eggs by size using MV and artificial intelligence approaches, developing an egg size algorithm that classified eggs by size with 95% detection accuracy. In another study, Kunrui et al. (2015) proposed an automatic grading system for salted eggs using machine vision, with a multiple linear regression equation for grading and classification; the system processed 5,400 eggs/h with 93% egg-grading accuracy. Similarly, research by Buyukarikan (2018) used MV and image processing techniques for egg classification. Eggs are a vital poultry commodity, widely consumed worldwide because they are healthy, readily available, and affordable, being the cheapest animal protein source compared with other protein sources.
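The spheroid approximation mentioned for Rashidi et al. (2008) has a simple closed form: treating the egg as a prolate spheroid gives V = (π/6)·L·B² for length L and maximum breadth B, both measurable from a single silhouette. A small sketch follows; the sanity check uses a sphere, where L = B = 2r recovers (4/3)πr³, and the example dimensions are illustrative, not from the cited study.

```python
# The spheroid approximation (as in Rashidi et al., 2008) treats the egg as
# a prolate spheroid, so V = (π/6)·L·B², with length L and maximum breadth
# B taken from the image. For L = B = 2r this reduces to the sphere volume.
from math import pi, isclose

def spheroid_volume(length, breadth):
    """Prolate-spheroid volume from the egg's length and breadth."""
    return pi / 6.0 * length * breadth ** 2

# Sanity check: a "spheroid" with L = B = 20 is a sphere of radius 10
print(isclose(spheroid_volume(20.0, 20.0), 4 / 3 * pi * 10.0 ** 3))  # True

# Illustrative hen-egg dimensions in mm (assumed values, not measured data)
print(spheroid_volume(57.0, 42.0))  # volume in mm³
```

The approximation overstates true egg volume slightly because eggs are asymmetric about their equator, which is one motivation for the profile-based models cited above.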
Egg grading generally involves sorting the product by weight, consistency, size, and other factors that influence its relative value (USDA, 2005). It involves grouping eggs of the same weight and quality into categories. Factors such as weight, quality, and soundness determine the egg rating, but the lowest-graded element determines the overall egg grade (Jacob et al., 2000). Egg weight is a crucial requirement in the poultry sector and has multiple applications. Egg weight can be used to predict eggshell features and quality (Narushin, 1994; Narushin et al., 2004), and there is a high correlation between egg weight and shell weight (Paganelli et al., 1974). Other studies found a correlation between hatchability and egg weight: as egg weight increased, hatchability decreased (Wilson, 1991; Gonzalez et al., 1999). Besides, egg weight may be used to predict chick weight (Wilson, 1991). Egg weight also influences egg quality, chick yield, and chick length (Iqbal et al., 2017). The volume and surface area of eggs are the standard measures of external egg properties. They can be used to estimate chick mass, internal egg parameters, shell quality attributes, and hatchability, and can also be used in ecological morphology and population analysis (Zhou et al., 2008). Egg size is one of the essential quality metrics consumers use when selecting and evaluating eggs; in general, consumers prefer eggs of similar shapes and sizes (Rashidi and Gholami, 2011). The tape method has been widely used for determining the surface area of eggs: the tape is split into tiny parts covering the object's surface, then stripped off, and the total area is measured by an area meter or by hand (Zhang et al., 2016). 
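Surface-area prediction from breadth and length, as in the models discussed here, can be illustrated with the generic prolate-spheroid surface-area formula. This is a standard geometric approximation, not the specific equations of Narushin or Zhou et al.

```python
import math

def spheroid_surface_area(length_cm: float, breadth_cm: float) -> float:
    """Surface area (cm^2) of a prolate spheroid with polar semi-axis
    c = length / 2 and equatorial semi-axis a = breadth / 2:
    S = 2*pi*a^2 * (1 + (c / (a*e)) * asin(e)),  e = sqrt(1 - a^2/c^2)."""
    a = breadth_cm / 2.0
    c = length_cm / 2.0
    if math.isclose(a, c):          # degenerate case: a sphere
        return 4.0 * math.pi * a * a
    e = math.sqrt(1.0 - (a / c) ** 2)
    return 2.0 * math.pi * a * a * (1.0 + (c / (a * e)) * math.asin(e))

area = spheroid_surface_area(5.8, 4.4)  # roughly 74 cm^2 for a large egg
```

Egg-specific regression equations of the kind reviewed above replace this idealized geometry with coefficients fitted to real eggs, which deviate from a true spheroid at the pointed end.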
This method's accuracy depends heavily on calculating the area of the tape strips and on how precisely they cover the object, rendering the process time-consuming, vulnerable to human error, and labor-intensive (Sabliov et al., 2002). Consequently, researchers have used the horizontal and vertical diameters of the egg to establish prediction models of surface area and volume (Narushin, 1997, 2001, 2005; Labaque et al., 2007; Zhou et al., 2008; Boersma and Rebstock, 2010). In the poultry egg industry, most egg grading is performed manually by a human operator. For this reason, numerous studies have been carried out to develop expert egg-grading systems to automate this process and improve egg quality. Patel et al. (1998b) and Omid et al. (2013) conducted studies on the grading and sorting of eggs. The critical criterion when designing egg-grading systems is the classification of egg weight; classification is usually performed and defined by the eggs' weight range (Patel et al., 1998a). Most egg-grading applications were conducted on defective eggs using MV and image processing techniques (Öztürk and Gangal, 2014; Mota-Grajales et al., 2019). The basic geometric parameters used for egg sorting are the shape factor, shape index, area, perimeter, and major and minor axes, extracted using image processing algorithms and used for model creation (Alikhanov et al., 2015). An overall summary of all the studies is presented in Table 5, categorized by poultry product and application.
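The weight-range classification that grading systems implement can be sketched as a simple threshold lookup. The gram thresholds below are illustrative approximations of common retail size classes, stated here as assumptions rather than values from the cited standards.

```python
# Illustrative minimum per-egg weights (g) for each size class;
# these thresholds are assumptions for demonstration, not a standard.
GRADE_THRESHOLDS = [
    ("jumbo", 70.9),
    ("extra large", 63.8),
    ("large", 56.7),
    ("medium", 49.6),
    ("small", 42.5),
]

def grade_egg(weight_g: float) -> str:
    """Return the first size class whose minimum weight the egg meets;
    anything lighter than 'small' falls through to 'peewee'."""
    for label, minimum in GRADE_THRESHOLDS:
        if weight_g >= minimum:
            return label
    return "peewee"

grades = [grade_egg(w) for w in (72.0, 60.1, 45.3)]
# -> ['jumbo', 'large', 'small']
```

In a vision-based grader, weight_g would be the output of a weight prediction model rather than a scale reading, so classification accuracy is bounded by the accuracy of that upstream model.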
Table 5

Overall summary of all studies by application category.

Category | Poultry product | Parameters | Methods | No. of studies | Citation
Weight | Broiler chickens | BW estimation | Regression equations | 7 | De Wet et al. (2003)
 | | Live weight estimation | Linear equation, regression model | | Mollah et al. (2010)
 | | Weight prediction | ANN, multivariate linear regression model | | Mortensen et al. (2016)
 | | Weight prediction | SVR | | Amraei et al. (2017b)
 | | Weight prediction | Transfer function model | | Amraei et al. (2018)
 | | Weight estimation | ANN, t test | | Amraei et al. (2017a)
 | | Weight determination | BPNN | | Wang et al. (2017)
 | Carcass | Carcass weight estimation | Linear regression and equation | 5 | Lotufo et al. (1999)
 | | Carcass weight estimation | Regression models | | Jørgensen et al. (2019)
 | | Carcass weight classification | ML classification, regression tree models | | Qi et al. (2019)
 | | Carcass weight grading | Simple, multiple, and adjusted multiple linear regression | | Chen et al. (2017)
 | | Breast weight determination | Regression analysis | | Adamczak et al. (2018)
 | Egg | Weight detection | Regression models | 11 | Cen et al. (2006)
 | | Weight prediction | Linear regression and equations | | Thipakorn et al. (2017)
 | | Weight measurement | ANFIS model | | Javadikia et al. (2011)
 | | Weight measurement | Statistical, regression analysis | | Alikhanov et al. (2015)
 | | Weight measurement | ANOVA, linear regression | | Widiasri et al. (2019)
 | | Weight estimation | Statistical analysis | | Aragua and Mabayo (2018)
 | | Weight estimation | Neural network | | Asadi and Raoufat (2010)
 | | Weight estimation | Regression models | | Asadi et al. (2012)
 | | Weight estimation | Neural network algorithms | | Raoufat and Asadi (2010)
 | | Weight grading | K-NN classifier | | Ab Nasir et al. (2018)
 | | Weight sorting | Regression analysis | | Alikhanov et al. (2018)
Volume | Egg | Volume prediction | Statistical analysis, ANN | 8 | Soltani et al. (2015)
 | | Volume prediction | BPNN, statistical analysis | | Siswantoro et al. (2017)
 | | Volume measurement | Regression analysis | | Chan et al. (2018)
 | | Volume measurement | Linear regression, ANOVA | | Widiasri et al. (2019)
 | | Volume measurement | Statistical analysis | | Zhang et al. (2016)
 | | Volume estimation | SVR, GPR, ANN, statistical, t-test analysis | | Okinda et al. (2020b)
 | | Volume determination | Regression analysis | | Shanmugasundaram (2016)
 | | Volume calculation | Linear regression, statistical | | Zhou et al. (2008)
Surface area | Egg | Surface area determination | Regression analysis | 3 | Shanmugasundaram (2016)
 | | Surface area measurement | Statistical analysis | | Zhang et al. (2016)
 | | Surface area calculation | Linear regression, statistical | | Zhou et al. (2008)
Shape and size | Chicken | Leg size classification | DNN models | 5 | Koodtalang and Sangsuwan (2019)
 | Egg | Shape and size grading | Statistical analysis | | Duan et al. (2016)
 | | Size classification | SVM classifier | | Waranusast et al. (2016)
 | | Size classification | Linear regression, equations, and SVM classifier | | Thipakorn et al. (2017)
 | | Shape-based grading | K-NN classifier | | Ab Nasir et al. (2018)
Sorting and grading | Chicken portions | On-line separation and sorting | ANN, PLSR, and LDA analysis | 3 | Teimouri et al. (2018)
 | Egg | Grade classifier | Statistical analysis | | Zalhan et al. (2016)
 | | Automatic sorting | Regression analysis | | Alikhanov et al. (2019)

Abbreviations: ANFIS, adaptive neuro fuzzy inference system; ANN, artificial neural network; ANOVA, analysis of variance; BPNN, backpropagation neural network; DNN, deep neural network; GPR, Gaussian process regression; K-NN, k-nearest neighbors; LDA, linear discriminant analysis; ML, machine learning; PLSR, partial least squares regression; SVM, support vector machine; SVR, support vector regression.


Challenges and Future Perspectives

In recent years, leading research priorities have covered developments in image processing and vision-based approaches for food quality assurance. CVS has been increasingly used in the poultry sector for assessment and inspection because it can provide a consistent, economical, fast, environmentally friendly, and impartial evaluation. Researchers use CV technologies as effective nondestructive means to eliminate manual processes and achieve greater precision. Most studies reported reasonable weight and volume estimates for poultry, cuts, and eggs, as well as accurate size and grading classifications. Despite the increasing use of CVS in the food and agricultural industries, the poultry industry still faces difficulties in taking up this technology. There have been few studies on the determination of poultry's live BW and the weight and volume of carcasses and cuts. This is because poultry has an irregular shape, making it challenging to develop techniques for accurately estimating its weight and, possibly, its volume. However, the current literature provides a sufficient framework for researchers to improve the methodologies already in use. New CV applications are expected to focus on these areas, as significant progress is being made in CV applications in egg production. The studies reviewed here mainly concern 2D imaging techniques; the future trend will be toward more complex 3D CV systems with a 3D data focus. Three-dimensional CV will significantly help the technology retain the quality and accuracy required in the agricultural and food industries. Although there has been substantial development of accurate and efficient algorithms, computing speeds do not yet conform to modern production requirements. Much of the literature reviewed in this article was focused on a laboratory scale, and only a few systems have been used in commercial poultry processing and production. The use of machine learning techniques with CV in poultry production is also worthy of mention. 
Most reviewed literature has used regression techniques to develop predictive models; few studies have implemented other approaches such as SVM and neural networks. Thus, adopting these novel machine learning techniques will ensure faster computation and better accuracy in future applications. Neural networks have many benefits, such as requiring less statistical training, implicitly identifying dynamic and nonlinear associations between independent and dependent variables, identifying possible relations between predictor variables, and offering several training algorithms (Tu, 1996; Nyalala et al., 2019). The significant strengths of SVM are as follows: training is relatively easy and delivers good performance (accuracy); the trade-off between model complexity and error can be controlled explicitly; there are no local optimal solutions; the mathematics is elegantly tractable; overfitting is avoided because many training samples are not needed; and the models offer a direct geometric interpretation (Nyalala et al., 2019). We found limited information on CV-based techniques for size and weight estimation of live poultry, carcasses, and cuts. We can also presume that models developed from animals of the same species under one management system can be unreliable and may not estimate BW in another population. Age-specific models developed elsewhere cannot estimate animal weight in another setting owing to variations in rearing techniques, diet, and housing that influence the magnitude of linear body measurements. Genetic variation and environmental factors acting on an individual may be associated with variation in BW within a population; therefore, morphometric measurements are possible features for use in animal selection.
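Most of the regression models cited in this review reduce, at their core, to fitting a line between an image feature and a scale reading. A minimal ordinary least-squares sketch on hypothetical projected-area and weight pairs (the numbers below are invented for illustration, not data from any cited study):

```python
def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical projected areas (cm^2) and corresponding scale weights (g):
areas = [18.0, 19.5, 21.0, 22.4, 24.1]
weights = [48.0, 52.5, 57.0, 61.2, 66.3]

slope, intercept = fit_line(areas, weights)

def predict_weight(area_cm2: float) -> float:
    """Predict egg weight (g) from a segmented projected area (cm^2)."""
    return slope * area_cm2 + intercept
```

SVR or a neural network would replace fit_line with a nonlinear estimator over several image features at once, which is precisely the shift in approach advocated above.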

Conclusion

This review article provides a detailed overview of current CV application studies and advances in the nondestructive measurement, classification, grading, and sorting of poultry products in the poultry sector and the food industry as a whole. The reviewed results showed the efficacy of these techniques for live poultry, carcasses and cuts, and egg production systems. The review began with the concept of an MVS and its critical components, namely a camera, illumination, an image grabber, and compatible software and hardware, and assessed the advantages and constraints associated with this approach. The vision-based method is currently the most efficient methodology for estimating and classifying size, weight, and volume because it is detailed, cost-effective, nondestructive, reliable, and fast. Owing to the high processing speed of its algorithms and computer-based technologies, the present article concludes that CV has the ability, and indeed the necessity, to become a fundamental element of the poultry production industry. Different research results and more nondestructive, rapid, and advanced approaches have gradually addressed the challenges encountered with the manual (destructive) weighing, grading, and sorting of poultry products. This study aims to contribute an overview of recent work on developing food processing and nondestructive detection technologies for agricultural production. This review will be of value in helping researchers implement the latest machine learning techniques for accurate body measurement estimates and other livestock meat classification. It will also contribute to the advancement of more accurate, efficient, and reliable automated sorting and grading systems for practical use in in-line poultry processing plants; as the industry becomes increasingly competitive, machine vision will provide an advantage in processing speed, cost, and labor reduction. 
In addition, CV will help provide consumers with better-quality poultry foods and significantly improve the poultry industry's productivity. Poultry measurement and classification systems may be integrated into poultry meat systems for quality inspection, identification, contamination, safety, and disease detection. Furthermore, MV techniques based on 2D CV systems can be applied nonintrusively to production lines.