Arnadi Murtiyoso 1, Eugenio Pellis 1,2, Pierre Grussenmeyer 1, Tania Landes 1, Andrea Masiero 2.
Abstract
Developments in the field of artificial intelligence have made great strides in automatic semantic segmentation, both in the 2D (image) and 3D spaces. Within the context of 3D recording technology, it has also seen application in several areas, most notably in creating semantically rich point clouds, a task that is usually performed manually. In this paper, we propose the introduction of deep learning-based semantic image segmentation into the photogrammetric 3D reconstruction and classification workflow. The main objective is to introduce semantic classification at the beginning of the classical photogrammetric workflow in order to automatically create classified dense point clouds by the end of the said workflow. In this regard, automatic image masking depending on pre-determined classes was performed using a previously trained neural network. The image masks were then employed during dense image matching in order to constrain the process to the respective classes, thus automatically creating semantically classified point clouds as the final output. Results show that the developed method is promising, with automation of the whole process feasible from input (images) to output (labelled point clouds). Quantitative assessment gave good results for specific classes, e.g., building facades and windows, with IoU scores of 0.79 and 0.77, respectively.
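The core of the described workflow, turning the segmentation output into class-dependent binary masks and applying them to the input images before dense image matching, can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the label encoding, class list, file layout and function names are assumptions, and the masked images would then be passed per class to a dense matcher such as MicMac or Metashape.

```python
import numpy as np
from pathlib import Path
from PIL import Image

# Hypothetical class encoding: the label image stores one integer ID per pixel.
CLASSES = {1: "facade", 2: "window", 3: "door"}  # 0 = background (assumed)

def masks_from_labels(label_png: Path, out_dir: Path) -> None:
    """Split a per-pixel label image into one binary mask per class."""
    labels = np.array(Image.open(label_png))
    for class_id, name in CLASSES.items():
        mask = (labels == class_id).astype(np.uint8) * 255
        Image.fromarray(mask).save(out_dir / f"{label_png.stem}_{name}_mask.png")

def apply_mask(image_png: Path, mask_png: Path, out_png: Path) -> None:
    """Black out everything outside the class mask; the masked image is then
    fed to dense image matching so the reconstructed points belong to one class."""
    image = np.array(Image.open(image_png))
    mask = np.array(Image.open(mask_png)) > 0
    image[~mask] = 0
    Image.fromarray(image).save(out_png)
```

Running the dense matcher once per class mask yields one point cloud per class, which can then be merged into a single semantically labelled cloud.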
Keywords: automation; classification; deep learning; dense matching; photogrammetry; point cloud; semantic segmentation
Year: 2022 PMID: 35161712 PMCID: PMC8840648 DOI: 10.3390/s22030966
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Developed workflow for the proposed semantic photogrammetry process.
Figure 2. Creation of class-dependent image masks from the segmented image and their application in dense matching to generate semantically classified point clouds.
Figure 3. Visual illustration of some results from the experiment: (a) raw unclassified dense point cloud generated by MicMac, (b) manually segmented ground truth, (c) result of the semantic segmentation on the MicMac dense point cloud and (d) result of the same procedure applied to the Metashape dense point cloud.
Confusion matrix for the semantic segmentation on the MicMac dense point cloud.
[Table values not recoverable: the row and column class labels were lost in extraction.]
Confusion matrix for the semantic segmentation on the Metashape dense point cloud.
[Table values not recoverable: the row and column class labels were lost in extraction.]
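The per-class scores summarised in Figure 4 and compared in Figures 5 and 6 can be derived from such confusion matrices. The sketch below is an illustrative computation of per-class IoU, precision and recall from a square confusion matrix; the row/column convention and the toy numbers are assumptions, not the paper's data or code.

```python
import numpy as np

def per_class_metrics(cm: np.ndarray) -> dict:
    """Per-class IoU, precision and recall from a square confusion matrix.

    Assumed convention: cm[i, j] counts points with ground-truth class i
    that were predicted as class j.
    """
    tp = np.diag(cm).astype(float)   # true positives per class
    fp = cm.sum(axis=0) - tp         # predicted as the class but wrong
    fn = cm.sum(axis=1) - tp         # belongs to the class but missed
    eps = 1e-9                       # avoid division by zero
    return {
        "iou": tp / (tp + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),
    }

# Toy example with made-up counts (not the paper's data):
cm = np.array([[90, 5, 5],
               [10, 80, 10],
               [0, 15, 85]])
print(per_class_metrics(cm)["iou"])
```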
Figure 4. Performance statistics for the proposed method applied to dense point clouds generated by (a) MicMac and (b) Metashape.
Figure 5. Comparison of the IoU scores of the proposed method with previous work.
Figure 6. Comparison to other studies for the classes (a) “window” and (b) “facade”.
Figure 7. Example of a concrete application of the proposed method to photogrammetric point cloud cleaning: (a) original image, (b) mask of all classes except “background”, (c) mask applied to the original image and (d) 3D point cloud from dense image matching using the masked image.
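Figure 7 suggests a straightforward by-product of the same masks for point cloud cleaning: merging all non-background class masks into one foreground mask so that dense matching ignores clutter such as sky or vegetation. A minimal sketch of that merge step is given below; the mask file naming follows the hypothetical example given after the abstract.

```python
import numpy as np
from pathlib import Path
from PIL import Image

def cleaning_mask(mask_paths: list[Path], out_png: Path) -> None:
    """Union of all per-class masks, i.e. everything except "background".

    Applying this combined mask to an image before dense matching keeps
    only foreground points in the reconstruction (cf. Figure 7b-d).
    """
    combined = None
    for p in mask_paths:
        mask = np.array(Image.open(p)) > 0
        combined = mask if combined is None else (combined | mask)
    Image.fromarray(combined.astype(np.uint8) * 255).save(out_png)
```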