SkelVIT: Consensus of Vision Transformers for a Lightweight Skeleton-Based Action Recognition System

Bibliographic Details
Title: SkelVIT: Consensus of Vision Transformers for a Lightweight Skeleton-Based Action Recognition System
Authors: Karadag, Ozge Oztimur
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Computer Science - Robotics
Description: Skeleton-based action recognition attracts many researchers because it is robust to viewpoint and illumination changes and is much more efficient to process than raw video frames. With the emergence of deep learning models, it has become popular to represent skeleton data in pseudo-image form and apply CNNs for action recognition, and subsequent studies concentrated on finding effective methods for forming pseudo-images. Recently, attention networks, more specifically transformers, have provided promising results in various vision problems. In this study, the effectiveness of ViT for skeleton-based action recognition is examined, and its robustness to the pseudo-image representation scheme is investigated. To this end, a three-level architecture, SkelVit, is proposed, which forms a set of pseudo-images, applies a classifier to each representation, and combines their results to determine the final action class. The performance of SkelVit is examined thoroughly via a set of experiments. First, the sensitivity of the system to the representation is investigated by comparing it with two state-of-the-art pseudo-image representation methods. Then, the classifiers of SkelVit are realized in two experimental setups with CNNs and ViTs, and their performances are compared. In the final setup, the contribution of combining classifiers is examined by running the model with different numbers of classifiers. The experiments reveal that the proposed system, with its lightweight representation scheme, achieves better results than the state-of-the-art methods. It is also observed that the vision transformer is less sensitive to the initial pseudo-image representation than the CNN. Nevertheless, even with the vision transformer, recognition performance can be further improved by the consensus of classifiers.
Document Type: Working Paper
Open Access: http://arxiv.org/abs/2311.08094
Accession Number: edsarx.2311.08094
Database: arXiv
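
Note: The abstract above describes SkelVit's three-level pipeline (pseudo-image generation, one classifier per representation, and a consensus over their outputs) but does not spell out the fusion rule in this record. Below is a minimal illustrative sketch of the consensus step only, assuming soft voting (averaging per-class scores) and using random stand-in encoders and classifiers; the names consensus_predict, encoders, and classifiers are hypothetical and not taken from the paper.

# Minimal sketch (not the authors' code): consensus over several classifiers,
# each scoring a different pseudo-image encoding of the same skeleton clip.
# Soft voting (score averaging) is an assumed fusion rule.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def consensus_predict(skeleton_clip, encoders, classifiers):
    """Encode one skeleton clip into several pseudo-images, score each with
    its own classifier, and fuse the per-class scores by averaging."""
    probs = []
    for encode, classify in zip(encoders, classifiers):
        pseudo_image = encode(skeleton_clip)   # e.g. a joints-by-frames image
        logits = classify(pseudo_image)        # per-class scores from one view
        probs.append(softmax(logits))
    fused = np.mean(probs, axis=0)             # consensus of the classifiers
    return int(np.argmax(fused)), fused

# Toy usage with random stand-ins for the encoders and classifiers.
rng = np.random.default_rng(0)
num_classes, num_views, num_joints, num_frames = 10, 3, 25, 64
encoders = [lambda clip: clip for _ in range(num_views)]   # identity placeholders
classifiers = [
    (lambda img, W=rng.normal(size=(num_joints * num_frames, num_classes)):
        img.reshape(-1) @ W)                               # random linear scorer
    for _ in range(num_views)
]
clip = rng.normal(size=(num_joints, num_frames))           # fake skeleton clip
label, scores = consensus_predict(clip, encoders, classifiers)
print(label, scores.round(3))

In the paper, the per-view classifiers are CNNs or ViTs trained on the pseudo-images; the random linear scorers above merely keep the sketch self-contained and runnable.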