
Semantic Evidential Grid Mapping Using Monocular and Stereo Cameras.

Sven Richter, Yiqun Wang, Johannes Beck, Sascha Wirges, Christoph Stiller.

Abstract

Accurately estimating the current state of the local traffic scene is one of the key problems in developing software components for automated vehicles. In addition to free space and drivability, the desired representation may also include static and dynamic traffic participants as well as semantic information. Multi-layer grid maps allow all of this information to be combined in a common representation. However, most existing grid mapping approaches only process range sensor measurements such as lidar and radar, and model occupancy alone without semantic states. To add sensor redundancy and diversity, it is desirable to integrate vision-based sensor setups into a common grid map representation. In this work, we present a semantic evidential grid mapping pipeline, including estimates for eight semantic classes, that is designed for straightforward fusion with range sensor data. Unlike other publications, our representation explicitly models uncertainties in the evidential model. We present results of our grid mapping pipeline based on a monocular vision setup and a stereo vision setup. Our maps are accurate and dense due to the incorporation of a disparity- or depth-based ground surface estimation in the inverse perspective mapping. We conclude the paper with a detailed quantitative evaluation on real traffic scenarios from the KITTI odometry benchmark dataset, demonstrating the advantages over other semantic grid mapping approaches.
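Evidential grid mapping of the kind the abstract describes is commonly built on Dempster-Shafer theory: each grid cell carries a belief mass function over sets of hypotheses (e.g., semantic classes plus an "unknown" mass on the full frame), and new measurements are fused per cell with Dempster's rule of combination. The sketch below is illustrative only, not the authors' implementation; the class names and mass values are hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Fuse two mass functions with Dempster's rule of combination.

    Each mass function is a dict mapping a frozenset of hypotheses
    (a focal element) to its belief mass; masses should sum to 1.
    Pairs of focal elements with an empty intersection contribute
    to the conflict mass K, and the result is renormalized by 1 - K.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    norm = 1.0 - conflict
    if norm <= 0.0:
        raise ValueError("total conflict: mass functions are incompatible")
    return {focal: mass / norm for focal, mass in combined.items()}

# Hypothetical per-cell example over a two-class frame {road, vehicle}:
# each source assigns some mass to "road" and leaves the rest as
# uncommitted mass on the whole frame (the "unknown" state).
theta = frozenset({"road", "vehicle"})
m_cam1 = {frozenset({"road"}): 0.6, theta: 0.4}
m_cam2 = {frozenset({"road"}): 0.7, theta: 0.3}
fused = dempster_combine(m_cam1, m_cam2)
# Agreeing sources reinforce the "road" hypothesis and shrink the
# uncommitted mass: fused == {road: 0.88, theta: 0.12}.
```

Keeping explicit mass on the full frame is what lets such a representation model uncertainty, as the abstract emphasizes: a cell no camera has observed simply retains its mass on the unknown state rather than being forced into a class.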


Keywords:  autonomous driving; environment perception; grid mapping; monocular vision; stereo vision

Year:  2021        PMID: 34066256      PMCID: PMC8152009          DOI: 10.3390/s21103380

Source DB:  PubMed          Journal:  Sensors (Basel)        ISSN: 1424-8220            Impact factor:   3.576


  2 in total

1.  Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

Authors:  Fayao Liu; Chunhua Shen; Guosheng Lin; Ian Reid
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2015-12-03       Impact factor: 6.226

2.  Deep Ordinal Regression Network for Monocular Depth Estimation.

Authors:  Huan Fu; Mingming Gong; Chaohui Wang; Kayhan Batmanghelich; Dacheng Tao
Journal:  Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit       Date:  2018-12-17
  1 in total

1.  Crowdsourcing-Based Indoor Semantic Map Construction and Localization Using Graph Optimization.

Authors:  Chao Li; Wennan Chai; Xiaohui Yang; Qingdang Li
Journal:  Sensors (Basel)       Date:  2022-08-20       Impact factor: 3.847

