Labeled Graph Kernel for Behavior Analysis.

Ruiqi Zhao, Aleix M Martinez.   

Abstract

Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data.
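The abstract's core idea — that classifying a behavior reduces to measuring similarity between labeled graphs, where nodes are behavioral features and edges carry temporal-order labels — can be sketched with a toy kernel that counts matching (feature, temporal relation, feature) triples. This is only an illustration of the labeled-substructure-counting idea behind graph kernels, not the kernel derived in the paper; all feature names below are hypothetical.

```python
from collections import Counter

def edge_label_kernel(g1, g2):
    """Toy labeled-graph kernel (illustration only, not the paper's kernel).

    Each graph is a list of (feature_a, relation, feature_b) triples,
    where the relation is an Allen-style temporal label such as
    'before', 'overlaps', or 'starts'.  Similarity is the inner product
    of the two graphs' triple-count histograms: a simple instance of
    counting matching labeled substructures.
    """
    c1, c2 = Counter(g1), Counter(g2)
    return sum(c1[t] * c2[t] for t in c1.keys() & c2.keys())

# Two hypothetical signs described by made-up behavioral features
# and the temporal relations between them:
sign_a = [("handshape_X", "before", "contact_cheek"),
          ("contact_cheek", "overlaps", "twist")]
sign_b = [("handshape_X", "before", "contact_eye"),
          ("contact_eye", "overlaps", "twist")]

print(edge_label_kernel(sign_a, sign_a))  # self-similarity: 2
print(edge_label_kernel(sign_a, sign_b))  # no shared triples: 0
```

Because the kernel is a valid inner product in the histogram feature space, it can be plugged directly into any kernel-based classifier (e.g., an SVM or kernel logistic regression), which is how the paper's LGSVM and LGLR are positioned.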

Year:  2015        PMID: 26415154      PMCID: PMC4846576          DOI: 10.1109/TPAMI.2015.2481404

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


References: 5 in total

1.  The humanID gait challenge problem: data sets, performance, and analysis.

Authors:  Sudeep Sarkar; P Jonathon Phillips; Zongyi Liu; Isidro Robledo Vega; Patrick Grother; Kevin W Bowyer
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2005-02       Impact factor: 6.226

2.  Modelling and Recognition of the Linguistic Components in American Sign Language.

Authors:  Liya Ding; Aleix M Martinez
Journal:  Image Vis Comput       Date:  2009-11-01       Impact factor: 2.818

3.  Spontaneous facial expression in unscripted social interactions can be measured automatically.

Authors:  Jeffrey M Girard; Jeffrey F Cohn; Laszlo A Jeni; Michael A Sayette; Fernando De la Torre
Journal:  Behav Res Methods       Date:  2015-12

4.  Compound facial expressions of emotion.

Authors:  Shichuan Du; Yong Tao; Aleix M Martinez
Journal:  Proc Natl Acad Sci U S A       Date:  2014-03-31       Impact factor: 11.205

5.  Discriminant features and temporal structure of nonmanuals in American Sign Language.

Authors:  C Fabian Benitez-Quiroz; Kadir Gökgöz; Ronnie B Wilbur; Aleix M Martinez
Journal:  PLoS One       Date:  2014-02-06       Impact factor: 3.240

Cited by: 2 in total

1.  Computational Models of Face Perception.

Authors:  Aleix M Martinez
Journal:  Curr Dir Psychol Sci       Date:  2017-06-14

2.  [Review] Review of Three-Dimensional Human-Computer Interaction with Focus on the Leap Motion Controller.

Authors:  Daniel Bachmann; Frank Weichert; Gerhard Rinkenauer
Journal:  Sensors (Basel)       Date:  2018-07-07       Impact factor: 3.576

