
Empirical modeling of human face kinematics during speech using motion clustering.

Jorge C Lucero, Susanne T R Maciel, Derek A Johns, Kevin G Munhall.

Abstract

In this paper we present an algorithm for building an empirical model of facial biomechanics from a set of displacement records of markers located on the face of a subject producing speech. Markers are grouped into clusters; each cluster has a unique primary marker and a number of secondary markers, each with an associated weight. The motion of a secondary marker is computed as the weighted sum of the motions of the primary markers of the clusters to which it belongs. The model may be used to produce facial animations by driving the primary markers with measured kinematic signals.
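The weighted-sum reconstruction described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: marker names, data layout (one (T, 3) displacement array per primary marker), and the `secondary_motion` helper are all assumptions made for the example.

```python
import numpy as np

def secondary_motion(primary_disp, cluster_weights):
    """Reconstruct secondary-marker displacements as weighted sums
    of primary-marker displacements.

    primary_disp: dict mapping primary-marker id -> (T, 3) array of
        x/y/z displacements over T time samples.
    cluster_weights: dict mapping secondary-marker id -> dict of
        {primary-marker id: weight}, one entry per cluster the
        secondary marker belongs to.
    Returns a dict mapping secondary-marker id -> (T, 3) array.
    """
    out = {}
    for sec_id, weights in cluster_weights.items():
        # Weighted sum over all primary markers driving this secondary marker.
        acc = None
        for prim_id, w in weights.items():
            term = w * primary_disp[prim_id]
            acc = term if acc is None else acc + term
        out[sec_id] = acc
    return out

# Toy data: two primary markers tracked over 4 time samples.
primary = {
    "jaw": np.ones((4, 3)),          # constant unit displacement
    "lip": 2.0 * np.ones((4, 3)),    # constant displacement of 2
}
# One secondary marker belonging to both clusters, with weights 0.25 and 0.5.
weights = {"cheek": {"jaw": 0.25, "lip": 0.5}}

motion = secondary_motion(primary, weights)
```

With these weights the reconstructed cheek displacement is 0.25 * 1 + 0.5 * 2 = 1.25 at every sample, showing how a secondary marker inherits motion from every cluster it belongs to.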


Year:  2005        PMID: 16119361     DOI: 10.1121/1.1928807

Source DB:  PubMed          Journal:  J Acoust Soc Am        ISSN: 0001-4966            Impact factor:   1.840


Related articles:  3 in total

1.  Asymmetries in unimodal visual vowel perception: The roles of oral-facial kinematics, orientation, and configuration.

Authors:  Matthew Masapollo; Linda Polka; Lucie Ménard; Lauren Franklin; Mark Tiede; James Morgan
Journal:  J Exp Psychol Hum Percept Perform       Date:  2018-03-08       Impact factor: 3.332

2.  Modulation transfer functions for audiovisual speech.

Authors:  Nicolai F Pedersen; Torsten Dau; Lars Kai Hansen; Jens Hjortkjær
Journal:  PLoS Comput Biol       Date:  2022-07-19       Impact factor: 4.779

3.  Analysis of facial motion patterns during speech using a matrix factorization algorithm.

Authors:  Jorge C Lucero; Kevin G Munhall
Journal:  J Acoust Soc Am       Date:  2008-10       Impact factor: 2.482

