Facial Action Unit Event Detection by Cascade of Tasks
Xiaoyu Ding, Wen-Sheng Chu, Fernando De la Torre, Jeffrey F. Cohn, Qiao Wang.
Abstract
Automatic facial Action Unit (AU) detection from video is a long-standing problem in facial expression analysis. AU detection is typically posed as a classification problem between frames or segments of positive and negative examples, where existing work emphasizes the use of different features or classifiers. In this paper, we propose a method called Cascade of Tasks (CoT) that combines different tasks (i.e., frame, segment and transition) for AU event detection. We train CoT in a sequential manner that embraces diversity, which ensures robustness and generalization to unseen data. In addition to conventional frame-based metrics that evaluate frames independently, we propose a new event-based metric that evaluates detection performance at the event level. We show that the CoT method consistently outperforms state-of-the-art approaches on both frame-based and event-based metrics, across three public datasets that differ in complexity: CK+, FERA and RU-FACS.
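The abstract contrasts frame-based metrics with an event-based one but does not define the latter. A minimal sketch of one plausible event-based formulation, assuming an "event" is a maximal run of consecutive positive frames and a ground-truth event counts as detected if any predicted event overlaps it (the overlap criterion and the F1 formulation here are illustrative assumptions, not necessarily the paper's exact definition):

```python
def to_events(frames):
    """Convert a binary per-frame label sequence into (start, end) events."""
    events, start = [], None
    for i, f in enumerate(frames):
        if f and start is None:
            start = i                      # event begins
        elif not f and start is not None:
            events.append((start, i - 1))  # event ends
            start = None
    if start is not None:
        events.append((start, len(frames) - 1))
    return events

def overlaps(a, b):
    """True if intervals a and b share at least one frame."""
    return a[0] <= b[1] and b[0] <= a[1]

def event_f1(true_frames, pred_frames):
    """Event-level F1: an event is matched if it overlaps one on the other side."""
    t, p = to_events(true_frames), to_events(pred_frames)
    if not t or not p:
        return 0.0
    recall = sum(any(overlaps(te, pe) for pe in p) for te in t) / len(t)
    precision = sum(any(overlaps(pe, te) for te in t) for pe in p) / len(p)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# One true event spanning frames 2-5; the prediction partially covers it
# (frames 3-4) and adds a spurious event (frames 8-9).
truth = [0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
pred  = [0, 0, 0, 1, 1, 0, 0, 0, 1, 1]
print(round(event_f1(truth, pred), 3))  # → 0.667
```

Unlike a frame-based F1, which would penalize every missed frame of the long event, this event-level view credits partial detections and so scores temporal agreement at a coarser, per-occurrence granularity.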
Year: 2013 | PMID: 25264433 | PMCID: PMC4174346 | DOI: 10.1109/ICCV.2013.298
Source DB: PubMed | Journal: Proc IEEE Int Conf Comput Vis | ISSN: 1550-5499