
Hands Holding Clues for Object Recognition in Teachable Machines.

Kyungjun Lee, Hernisa Kacorri

Abstract

Camera manipulation confounds the use of object recognition applications by blind people. This is exacerbated when photos from this population are also used to train models, as with teachable machines, where out-of-frame or partially included objects against cluttered backgrounds degrade performance. Leveraging prior evidence on the ability of blind people to coordinate hand movements using proprioception, we propose a deep learning system that jointly models hand segmentation and object localization for object classification. We investigate the utility of hands as a natural interface for including and indicating the object of interest in the camera frame. We confirm the potential of this approach by analyzing existing datasets from people with visual impairments for object recognition. With a new publicly available egocentric dataset and an extensive error analysis, we provide insights into this approach in the context of teachable recognizers.
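
The approach, as described, couples a hand-segmentation signal with object localization so that the detected hand serves as a prior for where the object of interest lies in the frame. As a rough, minimal sketch only (not the authors' published architecture; the module names, channel sizes, and mask-based fusion below are assumptions for illustration), hand-primed recognition could be wired up as:

```python
# Minimal illustrative sketch, NOT the paper's architecture: a two-branch
# network in which a predicted hand mask "primes" image features before
# classification. All names, sizes, and the fusion rule are assumptions.
import torch
import torch.nn as nn

class HandPrimedRecognizer(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        # Shared image encoder (hypothetical small CNN backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Hand branch: per-pixel hand probability map.
        self.hand_head = nn.Conv2d(64, 1, kernel_size=1)
        # Object branch: hand-primed features pooled into class logits.
        self.obj_head = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, image: torch.Tensor):
        feats = self.encoder(image)
        hand_mask = torch.sigmoid(self.hand_head(feats))  # (B, 1, H/4, W/4)
        # "Priming": up-weight features near the hand; the residual 1.0 term
        # keeps background context instead of hard-masking it out.
        primed = feats * (1.0 + hand_mask)
        logits = self.obj_head(primed)
        return hand_mask, logits

if __name__ == "__main__":
    model = HandPrimedRecognizer(num_classes=5)
    frames = torch.randn(2, 3, 128, 128)  # dummy egocentric frames
    mask, logits = model(frames)
    print(mask.shape, logits.shape)       # (2, 1, 32, 32) and (2, 5)
```

The soft residual fusion here reflects the abstract's framing of hands as clues for locating the object rather than as a hard segmentation mask; in a k-shot teachable setting, one might train only the lightweight classification head on the user's few examples.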


Keywords:  blind; egocentric; hand; k-shot learning; object recognition

Year:  2019        PMID: 32043091      PMCID: PMC7008716          DOI: 10.1145/3290605.3300566

Source DB:  PubMed          Journal:  Proc SIGCHI Conf Hum Factor Comput Syst


References:  8 in total

1.  Knowledge about hand shaping and knowledge about objects.

Authors:  R L Klatzky; B McCloskey; S Doherty; J Pellegrino; T Smith
Journal:  J Mot Behav       Date:  1987-06       Impact factor: 1.328

2.  Evidence for a proprioception-based rapid on-line error correction mechanism for hand orientation during reaching movements in blind subjects.

Authors:  Nadia Gosselin-Kessiby; John F Kalaska; Julie Messier
Journal:  J Neurosci       Date:  2009-03-18       Impact factor: 6.167

Review 3.  The proprioceptive senses: their roles in signaling body shape, body position and movement, and muscle force.

Authors:  Uwe Proske; Simon C Gandevia
Journal:  Physiol Rev       Date:  2012-10       Impact factor: 37.312

4.  (Computer) Vision without Sight.

Authors:  Roberto Manduchi; James Coughlan
Journal:  Commun ACM       Date:  2012-01       Impact factor: 4.654

5.  Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions.

Authors:  Sven Bambach; Stefan Lee; David J Crandall; Chen Yu
Journal:  Proc IEEE Int Conf Comput Vis       Date:  2016-02-18

Review 6.  Computational modelling of visual attention.

Authors:  L Itti; C Koch
Journal:  Nat Rev Neurosci       Date:  2001-03       Impact factor: 34.870

7.  Contact points during multidigit grasping of geometric objects.

Authors:  René Gilster; Constanze Hesse; Heiner Deubel
Journal:  Exp Brain Res       Date:  2011-12-24       Impact factor: 1.972

8.  Delving into Egocentric Actions.

Authors:  Yin Li; Zhefan Ye; James M Rehg
Journal:  Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit       Date:  2015-06
Cited by:  5 in total

1.  Hand-Priming in Object Localization for Assistive Egocentric Vision.

Authors:  Kyungjun Lee; Abhinav Shrivastava; Hernisa Kacorri
Journal:  IEEE Winter Conf Appl Comput Vis       Date:  2020-05-14

2.  Accessing Passersby Proxemic Signals through a Head-Worn Camera: Opportunities and Limitations for the Blind.

Authors:  Kyungjun Lee; Daisuke Sato; Saki Asakawa; Chieko Asakawa; Hernisa Kacorri
Journal:  ASSETS       Date:  2021

3.  Sharing Practices for Datasets Related to Accessibility and Aging.

Authors:  Rie Kamikubo; Utkarsh Dwivedi; Hernisa Kacorri
Journal:  ASSETS       Date:  2021

4.  Revisiting Blind Photography in the Context of Teachable Object Recognizers.

Authors:  Kyungjun Lee; Jonggi Hong; Simone Pimento; Ebrima Jarjue; Hernisa Kacorri
Journal:  ASSETS       Date:  2019-10

Review 5.  A Review of Recent Deep Learning Approaches in Human-Centered Machine Learning.

Authors:  Tharindu Kaluarachchi; Andrew Reis; Suranga Nanayakkara
Journal:  Sensors (Basel)       Date:  2021-04-03       Impact factor: 3.576

