Kyungjun Lee, Hernisa Kacorri.
Abstract
Camera manipulation confounds the use of object recognition applications by blind people. This is exacerbated when photos from this population are also used to train models, as with teachable machines, where out-of-frame or partially included objects against cluttered backgrounds degrade performance. Leveraging prior evidence on the ability of blind people to coordinate hand movements using proprioception, we propose a deep learning system that jointly models hand segmentation and object localization for object classification. We investigate the utility of hands as a natural interface for including and indicating the object of interest in the camera frame. We confirm the potential of this approach by analyzing existing datasets from people with visual impairments for object recognition. With a new publicly available egocentric dataset and an extensive error analysis, we provide insights into this approach in the context of teachable recognizers.
Keywords: blind; egocentric; hand; k-shot learning; object recognition
Year: 2019 PMID: 32043091 PMCID: PMC7008716 DOI: 10.1145/3290605.3300566
Source DB: PubMed Journal: Proc SIGCHI Conf Hum Factor Comput Syst