| Literature DB >> 33786087 |
Zi Yin, Valentin Yiu, Xiaolin Hu, Liang Tang.
Abstract
Face parsing is an important computer vision task that requires accurate pixel-level segmentation of facial parts (such as eyes, nose, and mouth), providing a basis for further face analysis, modification, and other applications. The Interlinked Convolutional Neural Network (iCNN) has been shown to be an effective two-stage model for face parsing. However, the original iCNN was trained separately in two stages, limiting its performance. To solve this problem, we introduce a simple, end-to-end face parsing framework: STN-aided iCNN (STN-iCNN), which extends the iCNN by adding a Spatial Transformer Network (STN) between the two isolated stages. The STN-iCNN uses the STN to provide a trainable connection between the two stages of the original iCNN pipeline, making end-to-end joint training possible. Moreover, as a by-product, the STN also produces more precise cropped parts than the original cropper. Due to these two advantages, our approach significantly improves the accuracy of the original model. Our model achieved competitive performance on the Helen Dataset, the standard face parsing benchmark. It also achieved superior performance on the CelebAMask-HQ dataset, demonstrating good generalization. Our code has been released at https://github.com/aod321/STN-iCNN. © Springer Nature B.V. 2020.
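The key idea in the abstract is that an STN makes the crop step differentiable, so gradients can flow from the second-stage part segmenters back into the first stage. Below is a minimal NumPy sketch of the mechanism that enables this: an affine sampling grid followed by bilinear interpolation, the two building blocks of an STN. This is an illustrative reconstruction, not the authors' implementation (their PyTorch code is at the repository above); the function names `affine_grid` and `bilinear_sample` are chosen here for clarity.

```python
import numpy as np

def affine_grid(theta, H, W):
    """Build an (H, W, 2) grid of source coordinates from a 2x3 affine matrix.

    Coordinates are normalized to [-1, 1], as is conventional for STNs.
    """
    ys, xs = np.meshgrid(np.linspace(-1, 1, H),
                         np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # (H, W, 3)
    return coords @ theta.T  # (H, W, 2): source (x, y) for each output pixel

def bilinear_sample(img, grid):
    """Sample a 2-D image at the grid's source coordinates with bilinear
    interpolation. Bilinear weights are piecewise-linear in the grid, which
    is what makes the crop differentiable w.r.t. the affine parameters."""
    H, W = img.shape
    # Map normalized coordinates back to pixel indices.
    x = (grid[..., 0] + 1) * (W - 1) / 2
    y = (grid[..., 1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    dx, dy = x - x0, y - y0
    return (img[y0, x0]         * (1 - dx) * (1 - dy)
          + img[y0, x0 + 1]     * dx       * (1 - dy)
          + img[y0 + 1, x0]     * (1 - dx) * dy
          + img[y0 + 1, x0 + 1] * dx       * dy)

# A "crop" is just an affine matrix with scale < 1: this theta samples the
# central half of the image, and its entries could be produced by a small
# localization network and trained end-to-end.
img = np.arange(36, dtype=float).reshape(6, 6)
crop_theta = np.array([[0.5, 0.0, 0.0],
                       [0.0, 0.5, 0.0]])
crop = bilinear_sample(img, affine_grid(crop_theta, 4, 4))
```

With the identity matrix `[[1, 0, 0], [0, 1, 0]]`, the sampler reproduces the input exactly, which is a convenient sanity check when wiring an STN into a pipeline.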
Keywords: End-to-end; Face parsing; STN-iCNN
Year: 2020 PMID: 33786087 PMCID: PMC7947053 DOI: 10.1007/s11571-020-09615-4
Source DB: PubMed Journal: Cogn Neurodyn ISSN: 1871-4080 Impact factor: 5.082