| Literature DB >> 35557988 |
Yanyi Zhang, Xinyu Li, Chunhui Liu, Bing Shuai, Yi Zhu, Biagio Brattoli, Hao Chen, Ivan Marsic, Joseph Tighe.
Abstract
We introduce Video Transformer (VidTr) with separable attention for video classification. Compared with commonly used 3D networks, VidTr is able to aggregate spatio-temporal information via stacked attentions and provide better performance with higher efficiency. We first introduce the vanilla video transformer and show that the transformer module is able to perform spatio-temporal modeling from raw pixels, but with heavy memory usage. We then present VidTr, which reduces the memory cost by 3.3× while keeping the same performance. To further optimize the model, we propose standard-deviation-based topK pooling for attention (pool_topK_std), which reduces the computation by dropping non-informative features along the temporal dimension. VidTr achieves state-of-the-art performance on five commonly used datasets with lower computational requirements, showing both the efficiency and effectiveness of our design. Finally, error analysis and visualization show that VidTr is especially good at predicting actions that require long-term temporal reasoning.
Entities:
Year: 2021 PMID: 35557988 PMCID: PMC9093781 DOI: 10.1109/iccv48922.2021.01332
Source DB: PubMed Journal: Proc IEEE Int Conf Comput Vis ISSN: 1550-5499
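
Illustrative sketch (not the authors' released code): the separable attention described in the abstract factorizes full spatio-temporal attention into a spatial self-attention over patch tokens within each frame, followed by a temporal self-attention across frames at each patch location. Module structure, tensor layout, and hyperparameters below are assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class SeparableAttentionBlock(nn.Module):
    """Minimal sketch of factorized (separable) spatio-temporal attention:
    spatial attention within each frame, then temporal attention across
    frames at each spatial location. Shapes and defaults are assumptions."""

    def __init__(self, dim: int = 768, heads: int = 12):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, space, channels)
        b, t, s, c = x.shape
        # Spatial pass: attend over the S patch tokens of every frame.
        xs = self.norm1(x).reshape(b * t, s, c)
        xs, _ = self.spatial_attn(xs, xs, xs)
        x = x + xs.reshape(b, t, s, c)
        # Temporal pass: attend over the T frames at every patch location.
        xt = self.norm2(x).permute(0, 2, 1, 3).reshape(b * s, t, c)
        xt, _ = self.temporal_attn(xt, xt, xt)
        x = x + xt.reshape(b, s, t, c).permute(0, 2, 1, 3)
        return x
```

Factorizing the two passes is what lets the model avoid attending over all T×S tokens jointly, which is where the memory reduction over the vanilla video transformer comes from.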
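The pool_topK_std operation is described in the abstract only at a high level; the sketch below is one plausible interpretation, scoring each time step by the standard deviation of its features and keeping the K most variable (most informative) steps. The function name, tensor layout, and scoring choice are assumptions, not the paper's exact formulation.

```python
import torch

def topk_std_temporal_pool(x: torch.Tensor, k: int) -> torch.Tensor:
    """Sketch of std-based topK pooling along the temporal dimension.

    Keeps the k time steps whose features vary the most (highest standard
    deviation) and drops the rest, mirroring the abstract's idea of
    discarding non-informative temporal features to save computation.

    x: (batch, time, space, channels); returns (batch, k, space, channels).
    """
    b, t, s, c = x.shape
    # Score each time step by the std of its flattened features.
    scores = x.reshape(b, t, -1).std(dim=-1)           # (batch, time)
    idx = scores.topk(k, dim=1).indices                 # (batch, k)
    idx = idx.sort(dim=1).values                        # preserve temporal order
    # Gather the selected time steps for every sample in the batch.
    idx = idx[:, :, None, None].expand(-1, -1, s, c)    # (batch, k, space, channels)
    return torch.gather(x, dim=1, index=idx)
```

Usage in this sketch would be to insert the pooling between temporal attention blocks so that later layers operate on fewer time steps, which is consistent with the computation savings the abstract claims.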