
Look into Person: Joint Body Parsing & Pose Estimation Network and a New Benchmark.

Xiaodan Liang, Ke Gong, Xiaohui Shen, Liang Lin.   

Abstract

Human parsing and pose estimation have recently received considerable interest due to their substantial application potential. However, existing datasets have limited numbers of images and annotations and lack variety in human appearance and coverage of challenging cases in unconstrained environments. In this paper, we introduce a new benchmark named "Look into Person (LIP)" that provides a significant advancement in terms of scalability, diversity, and difficulty, which are crucial for future developments in human-centric analysis. This comprehensive dataset contains over 50,000 elaborately annotated images with 19 semantic part labels and 16 body joints, captured from a broad range of viewpoints, occlusions, and background complexities. Using these rich annotations, we perform detailed analyses of the leading human parsing and pose estimation approaches, thereby obtaining insights into the successes and failures of these methods. To further explore and take advantage of the semantic correlation of these two tasks, we propose a novel joint human parsing and pose estimation network for efficient context modeling, which can simultaneously predict parsing and pose with extremely high quality. Furthermore, we simplify the network to solve human parsing alone by exploring a novel self-supervised structure-sensitive learning approach, which imposes human pose structures on the parsing results without resorting to extra supervision. The datasets, code and models are available at http://www.sysu-hcp.net/lip/.
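The self-supervised structure-sensitive idea described in the abstract can be illustrated with a toy sketch: derive joint-like keypoints (here, simple part centers of mass) from both the predicted and the ground-truth parsing maps, and scale the segmentation loss by how far those structures disagree, so that structurally implausible parsings are penalized more. This is a minimal NumPy illustration under assumed function names, not the authors' implementation (the paper itself generates pose-like heatmaps from the parsing annotations).

```python
import numpy as np

def part_centers(parsing, num_parts):
    """Center of mass (row, col) of each semantic part; NaN if the part is absent."""
    centers = np.full((num_parts, 2), np.nan)
    for p in range(num_parts):
        ys, xs = np.nonzero(parsing == p)
        if len(ys) > 0:
            centers[p] = [ys.mean(), xs.mean()]
    return centers

def structure_weight(pred_parsing, gt_parsing, num_parts):
    """Structure-sensitive weight: 1 + mean distance between part centers
    computed from the predicted vs. ground-truth parsing maps."""
    pc = part_centers(pred_parsing, num_parts)
    gc = part_centers(gt_parsing, num_parts)
    valid = ~np.isnan(pc[:, 0]) & ~np.isnan(gc[:, 0])
    if not valid.any():
        return 1.0
    dists = np.linalg.norm(pc[valid] - gc[valid], axis=1)
    return 1.0 + dists.mean()  # perfect structural agreement -> weight 1.0

def structure_sensitive_loss(ce_loss, pred_parsing, gt_parsing, num_parts):
    """Scale a per-image segmentation loss by the structural disagreement."""
    return structure_weight(pred_parsing, gt_parsing, num_parts) * ce_loss
```

A parsing map identical to the ground truth yields weight 1.0 (the plain segmentation loss), while a spatially shifted prediction yields a larger weight even if per-pixel overlap is still substantial.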

Year:  2018        PMID: 29994083     DOI: 10.1109/TPAMI.2018.2820063

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


Related articles:  7 in total

1.  Improved Graph Convolutional Neural Network for Dance Tracking and Pose Estimation.

Authors:  Liangliang Zhang
Journal:  Comput Intell Neurosci       Date:  2022-06-27

2.  Action Recognition Using Action Sequences Optimization and Two-Stream 3D Dilated Neural Network.

Authors:  Xin Xiong; Weidong Min; Qing Han; Qi Wang; Cheng Zha
Journal:  Comput Intell Neurosci       Date:  2022-06-13

3.  You can try without visiting: a comprehensive survey on virtually try-on outfits.

Authors:  Hajer Ghodhbani; Mohamed Neji; Imran Razzak; Adel M Alimi
Journal:  Multimed Tools Appl       Date:  2022-03-10       Impact factor: 2.577

4.  Template-Aware Transformer for Person Reidentification.

Authors:  Yanwei Zheng; Zengrui Zhao; Xiaowei Yu; Dongxiao Yu
Journal:  Comput Intell Neurosci       Date:  2022-04-01

5.  A Universal Decoupled Training Framework for Human Parsing.

Authors:  Yang Li; Huahong Zuo; Ping Han
Journal:  Sensors (Basel)       Date:  2022-08-09       Impact factor: 3.847

6.  Gait Recognition and Understanding Based on Hierarchical Temporal Memory Using 3D Gait Semantic Folding.

Authors:  Jian Luo; Tardi Tjahjadi
Journal:  Sensors (Basel)       Date:  2020-03-16       Impact factor: 3.576

7.  UVIRT-Unsupervised Virtual Try-on Using Disentangled Clothing and Person Features.

Authors:  Hideki Tsunashima; Kosuke Arase; Antony Lam; Hirokatsu Kataoka
Journal:  Sensors (Basel)       Date:  2020-10-02       Impact factor: 3.576

