Multi-dataset Training of Transformers for Robust Action Recognition

Bibliographic Details
Title: Multi-dataset Training of Transformers for Robust Action Recognition
Authors: Liang, Junwei; Zhang, Enwei; Zhang, Jun; Shen, Chunhua
Publication Year: 2022
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Description: We study the task of robust feature representations, aiming to generalize well on multiple datasets for action recognition. We build our method on Transformers for their efficacy. Although we have witnessed great progress for video action recognition in the past decade, it remains challenging yet valuable to train a single model that can perform well across multiple datasets. Here, we propose a novel multi-dataset training paradigm, MultiTrain, with the design of two new loss terms, namely informative loss and projection loss, aiming to learn robust representations for action recognition. In particular, the informative loss maximizes the expressiveness of the feature embedding, while the projection loss for each dataset mines the intrinsic relations between classes across datasets. We verify the effectiveness of our method on five challenging datasets: Kinetics-400, Kinetics-700, Moments-in-Time, ActivityNet, and Something-Something-v2. Extensive experimental results show that our method can consistently improve state-of-the-art performance. Code and models are released.
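The description names the two auxiliary loss terms but not their formulas, so the following is a minimal, hypothetical PyTorch-style sketch of a multi-dataset training step in that spirit. The covariance-based `informative_loss`, the relation-matrix `projection_loss`, and the `MultiDatasetHead` / `training_step` helpers are illustrative assumptions, not the paper's actual MultiTrain formulation (see the authors' released code for that).

```python
# Hypothetical sketch only: the exact MultiTrain losses are defined in the
# paper and released code; the terms below are plausible stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiDatasetHead(nn.Module):
    """One linear classifier per dataset on top of a shared backbone feature."""

    def __init__(self, feat_dim: int, num_classes_per_dataset: list[int]):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, c) for c in num_classes_per_dataset]
        )

    def forward(self, feats: torch.Tensor, dataset_idx: int) -> torch.Tensor:
        return self.heads[dataset_idx](feats)


def informative_loss(feats: torch.Tensor) -> torch.Tensor:
    """Penalize off-diagonal feature covariance so the embedding stays
    expressive (decorrelated) rather than collapsing -- one possible reading
    of "maximizes the expressiveness of the feature embedding"."""
    f = feats - feats.mean(dim=0, keepdim=True)
    cov = (f.T @ f) / max(f.shape[0] - 1, 1)
    off_diag = cov - torch.diag(torch.diagonal(cov))
    return off_diag.pow(2).mean()


def projection_loss(logits_src: torch.Tensor, logits_dst: torch.Tensor,
                    relation: torch.Tensor) -> torch.Tensor:
    """Map the source dataset's predicted distribution into another dataset's
    label space via an assumed row-normalized class-relation matrix
    (shape [C_src, C_dst]) and ask the target head to agree."""
    proj = F.softmax(logits_src, dim=-1) @ relation       # [B, C_dst]
    return F.kl_div(F.log_softmax(logits_dst, dim=-1), proj,
                    reduction="batchmean")


def training_step(backbone, head, clip, label, dataset_idx, relations,
                  lam_info=0.1, lam_proj=0.1):
    feats = backbone(clip)                 # shared video-Transformer features [B, D]
    logits = head(feats, dataset_idx)      # this dataset's classifier
    loss = F.cross_entropy(logits, label) + lam_info * informative_loss(feats)
    # Tie this dataset's predictions to the other datasets' label spaces.
    for j, rel in relations[dataset_idx].items():
        loss = loss + lam_proj * projection_loss(logits, head(feats, j), rel)
    return loss
```

In this sketch, `relations` would be a nested mapping from a dataset index to the class-relation matrices linking it to the other datasets; how such relations are mined is exactly the part the paper's projection loss addresses and is not reproduced here.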
Comment: NeurIPS 2022 Spotlight paper. Supplementary material at https://openreview.net/forum?id=aGFQDrNb-KO. Code and models are available at https://github.com/JunweiLiang/MultiTrain
Document Type: Working Paper
Open Access: http://arxiv.org/abs/2209.12362
Accession Number: edsarx.2209.12362
Database: arXiv