
Human Movement Representation on Multivariate Time Series for Recognition of Professional Gestures and Forecasting Their Trajectories.

Sotiris Manitsaris1, Gavriela Senteri1, Dimitrios Makrygiannis1, Alina Glushkova1.   

Abstract

Human-centered artificial intelligence is increasingly deployed in professional workplaces in Industry 4.0 to address various challenges related to the collaboration between operators and machines, the augmentation of their capabilities, or the improvement of the quality of their work and life in general. Intelligent systems and autonomous machines need to continuously recognize and follow the professional actions and gestures of the operators in order to collaborate with them and anticipate their trajectories to avoid potential collisions and accidents. Nevertheless, the recognition of patterns of professional gestures is a very challenging task for both research and industry. There are various types of human movements that intelligent systems need to perceive, for example, gestural commands to machines and professional actions with or without the use of tools. Moreover, the interclass and intraclass spatiotemporal variances, together with very limited access to annotated human motion data, constitute a major research challenge. In this paper, we introduce the Gesture Operational Model, which describes how gestures are performed based on assumptions that focus on the dynamic association of body entities, their synergies, and their serial and non-serial mediations, as well as their transitioning over time from one state to another. The assumptions of the Gesture Operational Model are then translated into a simultaneous equation system for each body entity through State-Space modeling. The coefficients of the equations are computed using the Maximum Likelihood Estimation method. The simulation of the model generates a confidence-bounding box for every entity that describes the tolerance of its spatial variance over time. The contribution of our approach is demonstrated for both recognizing gestures and forecasting human motion trajectories.
In recognition, it is combined with continuous Hidden Markov Models to boost recognition accuracy when the likelihoods are not confident. In forecasting, a motion trajectory can be estimated from as few as two observations. The performance of the algorithm has been evaluated on four industrial datasets that contain gestures and actions from a TV assembly line, the glassblowing industry, gestural commands to Automated Guided Vehicles, and human-robot collaboration in automotive assembly lines. The hybrid State-Space/HMM approach outperforms standard continuous HMMs and a 3DCNN-based end-to-end deep architecture.
Copyright © 2020 Manitsaris, Senteri, Makrygiannis and Glushkova.
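The forecasting claim above (estimating a trajectory from as few as two observations) can be illustrated with a minimal linear state-space sketch. Note this is an assumption-laden toy, not the paper's full Gesture Operational Model: it uses a constant-velocity state transition for a single body entity, with a hypothetical noise parameter standing in for the paper's confidence-bounding box.

```python
import numpy as np

# Minimal constant-velocity state-space sketch (an assumption, not the
# paper's Gesture Operational Model): state = [position, velocity].
A = np.array([[1.0, 1.0],   # next position = position + velocity
              [0.0, 1.0]])  # velocity held constant

def forecast(p0, p1, steps, sigma=0.05):
    """Forecast future positions of one body entity from two observations.

    p0, p1 : two consecutive observed positions (floats)
    steps  : number of future time steps to predict
    sigma  : assumed per-step process noise, used to draw a widening
             confidence band (a stand-in for the paper's
             confidence-bounding box)
    """
    x = np.array([p1, p1 - p0])  # initial state: last position, finite-difference velocity
    preds, bounds = [], []
    for k in range(1, steps + 1):
        x = A @ x                       # state transition
        preds.append(x[0])
        bounds.append(sigma * np.sqrt(k))  # uncertainty grows with the horizon
    return np.array(preds), np.array(bounds)

# Two observations of, e.g., a wrist moving +0.2 units per frame:
preds, bounds = forecast(1.0, 1.2, steps=3)
print(preds)  # [1.4 1.6 1.8]
```

In the paper the transition coefficients are estimated by Maximum Likelihood rather than fixed a priori as here, and the simultaneous equation system couples multiple body entities instead of tracking one in isolation.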


Keywords:  differential equations; forecasting; gesture recognition; hidden Markov models; motion trajectory; movement modeling; state-space representation

Year:  2020        PMID: 33501247      PMCID: PMC7805970          DOI: 10.3389/frobt.2020.00080

Source DB:  PubMed          Journal:  Front Robot AI        ISSN: 2296-9144


References (2 in total)

1.  Kinematic models of the upper limb joints for multibody kinematics optimisation: An overview.

Authors:  Sonia Duprey; Alexandre Naaim; Florent Moissenet; Mickaël Begon; Laurence Chèze
Journal:  J Biomech       Date:  2016-12-09       Impact factor: 2.712

2.  OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields.

Authors:  Zhe Cao; Gines Hidalgo Martinez; Tomas Simon; Shih-En Wei; Yaser A Sheikh
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2019-07-17       Impact factor: 6.226

Cited by (1 in total)

1.  Stochastic-Biomechanic Modeling and Recognition of Human Movement Primitives, in Industry, Using Wearables.

Authors:  Brenda Elizabeth Olivas-Padilla; Sotiris Manitsaris; Dimitrios Menychtas; Alina Glushkova
Journal:  Sensors (Basel)       Date:  2021-04-03       Impact factor: 3.576

