Manuel Vázquez-Arellano, Hans W. Griepentrog, David Reiser, Dimitris S. Paraforos.
Abstract
Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Three-dimensional (3-D) sensors are now economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on recent progress in optical 3-D sensors. The review first gives an overview of the different optical 3-D vision techniques, based on their basic principles; afterwards, their applications in agriculture are reviewed, with the main focus on vehicle navigation and on crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.
Keywords: 3-D sensors; agricultural automation; agricultural robotics; interferometry; optical triangulation; time-of-flight
Year: 2016 PMID: 27136560 PMCID: PMC4883309 DOI: 10.3390/s16050618
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. The 3-D image generation techniques are critical for generating robust raw data for useful information extraction.
Figure 2. Schematic representation of light beam (left) and stereo vision (right) triangulation. “Z”, depth; “b”, baseline length; “d”, position of the incoming light beam on the image sensor; and “f”, focal length.
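The stereo-vision relation in the caption (depth from baseline, focal length, and the measured image offset, i.e., disparity) can be sketched as a minimal helper; the function name and the sample values are illustrative, not taken from the paper:

```python
def stereo_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from stereo triangulation: Z = f * b / d.

    f_px: focal length "f" expressed in pixels,
    baseline_m: baseline length "b" between the two cameras in metres,
    disparity_px: offset "d" of the same scene point between the two images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point visible in both views)")
    return f_px * baseline_m / disparity_px

# e.g., 700 px focal length, 0.12 m baseline, 20 px disparity
print(stereo_depth(700, 0.12, 20))  # prints 4.2 (metres)
```

Note how depth resolution degrades with distance: far points produce small disparities, so a one-pixel error there shifts Z far more than it does for nearby points.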
Some triangulation techniques for 3-D image generation reported in the literature, grouped by visual cue.
| Triangulation Approach | Visual Cue | 3-D Image Generation Techniques |
|---|---|---|
| Digital photogrammetry | Stereopsis | Stereo vision; multi-view stereo; multiple-baseline stereo |
| | Motion | Structure-from-motion; shape-from-zooming; optical flow |
| | Silhouette | Shape-from-silhouette; shape-from-photoconsistency; shape-from-shadow |
| Structured light | Texture | Shape-from-texture |
| Shading | Shading | Shape-from-shading; photometric stereo |
| Focus | Focus | Shape-from-focus; shape-from-defocus |
| Theodolite | Stereopsis | Trigonometry |
Figure 3. Schematic representation of the basic principle of time-of-flight measurement, where the distance “Z” depends on the time “t” that a light pulse takes to travel forth and back.
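The pulse time-of-flight principle in the caption reduces to Z = c·t/2, with the factor 1/2 accounting for the round trip. A minimal sketch (function name and sample timing are illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance from pulse time-of-flight: Z = c * t / 2.

    round_trip_time_s: time "t" for the pulse to reach the target and return;
    halved because the light covers the distance twice.
    """
    return C * round_trip_time_s / 2.0

# A round trip of ~33.36 ns corresponds to a target about 5 m away
print(tof_distance(33.356e-9))  # ≈ 5.0 (metres)
```

The nanosecond scale of t for field-relevant ranges is why pulse-modulated LIDARs need very fast timing electronics, and why many TOF cameras instead measure phase shift of continuous-wave modulation.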
Figure 4. Schematic representation of a Michelson interferometer, where the relative depth “Z” is directly proportional to the wavelength of the light source “λ” and to the number of fringes.
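In a Michelson interferometer each full fringe corresponds to a half-wavelength change in the optical path difference, so the caption's proportionality can be written as Z = N·λ/2. A minimal sketch under that assumption (names and sample values are illustrative):

```python
def fringe_depth(wavelength_m: float, fringe_count: float) -> float:
    """Relative depth from fringe counting: Z = N * lambda / 2.

    wavelength_m: light source wavelength "λ" in metres,
    fringe_count: number of fringes "N" observed while the target moves.
    Each fringe corresponds to a half-wavelength path-length change.
    """
    return fringe_count * wavelength_m / 2.0

# 100 fringes with a 633 nm He-Ne laser
print(fringe_depth(633e-9, 100))  # ≈ 3.165e-05 m, i.e. 31.65 µm
```

The sub-micrometre sensitivity implied by this relation is what makes interferometric techniques (white-light, holographic, speckle) attractive for fine surface reconstruction, at the price of a very small measurement range.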
Advantages and disadvantages of the most common sensor implementations, based on the basic principles for 3-D vision.
| Basic Principle | Sensor/Technique | Advantages | Disadvantages |
|---|---|---|---|
| Triangulation | Consumer triangulation sensor (CTS) | Off-the-shelf | Vulnerable to sunlight, where no depth information is produced |
| Triangulation | Stereo vision | Good community support, good documentation | Low texture produces correspondence problems |
| Triangulation | Structure-from-motion | Digital cameras are easily and economically available | Camera calibration and field references are required for reliable measurements |
| Triangulation | Light sheet triangulation | High precision | High cost |
| Time-of-flight | TOF camera | Active illumination independent of an external lighting source | Most have low pixel resolution |
| Time-of-flight | Light sheet (pulse modulated) LIDAR | Emitted light beams are robust against sunlight | Poor performance in edge detection due to the spacing between the light beams |
| Interferometry | Optical coherence tomography (OCT) | High accuracy | High cost |
Autonomous platforms for reducing time-consuming and repetitive phenotyping work.
| Platform | Basic Principle | Shadowing Device | Environment | Institution | Type |
|---|---|---|---|---|---|
| Becam | Triangulation | √ | Open field | UMR-ITAP | Research |
| BoniRob | TOF | √ | Open field | Deepfield Robotics | Commercial |
| BredVision | TOF | √ | Open field | University of Applied Sciences Osnabrück | Research |
| Heliaphen | Triangulation | × | Greenhouse | Optimalog | Research |
| Ladybird | TOF and triangulation | √ | Open field | University of Sydney | Research |
| Marvin | Triangulation | √ | Greenhouse | Wageningen University | Research |
| PhenoArch | Triangulation | √ | Greenhouse | INRA-LEPSE (by LemnaTec) | Research |
| Phenobot | TOF and triangulation | × | Greenhouse | Wageningen University | Research |
| PlantEye | Triangulation | × | Greenhouse | Phenospex | Commercial |
| Robot gardener | Triangulation | × | Indoor | GARNICS project | Research |
| SAS | Triangulation | × | Greenhouse | Alci | Commercial |
| Scanalyzer | Triangulation | √ | Open field, greenhouse | LemnaTec | Commercial |
| Spy-See | TOF and triangulation | × | Greenhouse | Wageningen University | Research |
| Zea | Triangulation | √ | Open field | Blue River | Commercial |
Figure 5. RGB (left) and depth image (right) using a light field camera (reproduced from Polder and Hofstee [103]).
Figure 6. Reconstruction of maize plants using a CTS mounted on a field robot in different agricultural environments (reproduced from [114]).
Figure 7. 3-D reconstruction of melon seeds based on interferometry (reproduced from [115]).
Summary of the technical difficulties of the 3-D techniques used in agricultural applications.
| Basic Principle | Technique | Application | Technical Difficulties |
|---|---|---|---|
| Triangulation | Stereo vision | Autonomous navigation | Blank pixels at some locations, especially those further away from the camera |
| Triangulation | Multi-view stereo | Crop husbandry | Surface integration from multiple views is the main obstacle |
| Triangulation | Multiple-baseline stereo | Autonomous navigation | Handling rich 3-D data is computationally demanding |
| Triangulation | Structure-from-motion | Crop husbandry | Occlusion of leaves |
| Triangulation | Shape-from-silhouette | Crop husbandry | 3-D reconstruction results strongly depend on good image pre-processing |
| Triangulation | Structured light (light volume), sequentially coded | Crop husbandry | Limited projector depth of field |
| Triangulation | Structured light (light volume), pseudo-random pattern | Autonomous navigation | Strong sensitivity to natural light |
| Triangulation | Shape-from-shading | Crop husbandry | A zigzag effect at the target object’s boundary is generated (in interlaced video) if it moves at high speed |
| Triangulation | Structured light, shadow Moiré | Crop husbandry | Sensitive to disturbances (e.g., surface reflectivity) that become a source of noise |
| Triangulation | Shape-from-focus | Crop husbandry | Limited depth of field decreases the accuracy of the 3-D reconstruction |
| Time-of-flight | Pulse modulation (light sheet) | Autonomous navigation | Limited perception of the surrounding structures |
| Time-of-flight | Pulse modulation (light volume) | Autonomous navigation and crop husbandry | Limited pixel resolution |
| Time-of-flight | Continuous wave modulation (light sheet) | Crop husbandry | Poor distance range measurement (up to 3 m) |
| Time-of-flight | Continuous wave modulation (light volume) | Crop husbandry | Small field of view |
| Interferometry | White-light | Crop husbandry | The scattering surface of the plant forms speckles that affect the accuracy |
| Interferometry | Holographic | Crop husbandry | Need for a reference object in the image to detect disturbances |
| Interferometry | Speckle | Crop husbandry | Agricultural products with rough surfaces can be difficult to reconstruct |