Showing 1 - 10 of 17,803 results for '"Human activity"'. Query time: 1.04s
  1.
    Academic Journal

    Authors: Nan, Hai1 (AUTHOR), Ye, Qilang1,2 (AUTHOR) rikeilong@stu.cqut.edu.cn, Yu, Zitong2 (AUTHOR), An, Kang3 (AUTHOR)

    Source: IET Image Processing (Wiley-Blackwell). Jun2024, Vol. 18 Issue 8, p2000-2010. 11p.

    Abstract: Inference using skeletons to steer RGB videos is applicable to fine-grained activities in indoor human action recognition (IHAR). However, existing methods that explore only spatial alignment are prone to bias, resulting in limited performance. The authors propose a Three-stage Guidance (3sG) framework that leverages skeleton knowledge to promote RGB in three stages. First, a soft shading image is proposed to alleviate background noise in videos, allowing the network to focus directly on the motion region. Second, the authors propose extracting RGB frames of interest to reduce the computational effort; furthermore, to more fully exploit the complementary information between skeletons and RGB, the skeleton is coupled to the frame representation in a different spatial-temporal sharing pattern. Third, the global skeleton and skeleton-guided RGB features are fed into shared classifiers, which approximate the logit distributions of the two to enhance unimodal RGB performance. Finally, a fusion strategy is proposed that uses two learnable parameters to adaptively integrate the skeleton with the RGB. 3sG outperforms the state-of-the-art results on the Toyota Smarthome dataset while being more efficient than similar methods on the NTU RGB+D dataset. [ABSTRACT FROM AUTHOR]
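The final fusion step described in the abstract can be illustrated with a minimal sketch. This is not the 3sG implementation: the function name, the softmax-style normalisation, and the concrete numbers are all assumptions; in the actual framework the two scalars would be learned by backpropagation rather than fixed.

```python
import math

def adaptive_fuse(skel_logits, rgb_logits, a, b):
    """Fuse skeleton and RGB logits with two scalar weights.

    In 3sG the two parameters are learnable; here they are plain
    numbers (hypothetical values) to illustrate the fusion rule only.
    """
    # Normalise the two weights so their contributions sum to 1.
    wa = math.exp(a) / (math.exp(a) + math.exp(b))
    wb = 1.0 - wa
    return [wa * s + wb * r for s, r in zip(skel_logits, rgb_logits)]

# Toy two-class logits from each modality, fused adaptively.
fused = adaptive_fuse([2.0, 0.5], [1.0, 1.5], a=0.3, b=-0.1)
```

The normalisation keeps the fused logits on the same scale as the inputs regardless of the raw parameter values.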

    Copyright of IET Image Processing (Wiley-Blackwell) is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)

  2.
    Conference

    Authors: Narmatha, V.1 (AUTHOR) narmatha19@gmail.com, Ramesh, S.1 (AUTHOR) rameshsundars.sse@saveetha.com, Manikandan, T.1 (AUTHOR) tmcse@gmail.com

    Source: AIP Conference Proceedings. 2024, Vol. 3168 Issue 1, p1-6. 6p.

    Abstract: The proposed work analyzes and compares the effectiveness of two machine learning techniques, Residual Neural Networks and Lasso Regression, for recognizing human actions in videos. By conducting this analysis, the researchers aim to identify which technique is more suitable for this application and to provide insights into how these techniques can be further improved for better accuracy and performance. In the suggested machine learning classifier model, 80% of the UCF101 dataset is used for training and 20% is used for testing. The output from the two classifiers is grouped in SPSS for analysis, with 20 samples in each group. A G-power pretest was conducted with 80% power, a 95% confidence interval, and a p value of 0.001 (p < 0.05). The results show that the selected novel Residual Neural Network (ResNet) effectively classified human actions with an accuracy of 95.63%, while the Lasso Regression (LR) achieved an accuracy of 93.23%. The statistical significance value between ResNet and LR is p = 0.001, indicating a statistically significant difference between the two machine learning structures. The proposed research investigates a range of activities that can be completed in a constrained amount of time. [ABSTRACT FROM AUTHOR]
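The 80/20 split protocol mentioned in the abstract can be sketched generically. The function name and the toy items are assumptions; the actual study splits UCF101 video clips, and the paper does not state its shuffling or seeding procedure.

```python
import random

def split_80_20(samples, seed=42):
    """Shuffle a list of samples and split it 80% train / 20% test.

    A generic sketch of the train/test protocol; the seed and helper
    name are hypothetical, not taken from the paper.
    """
    rng = random.Random(seed)
    items = list(samples)
    rng.shuffle(items)
    cut = int(0.8 * len(items))   # first 80% for training
    return items[:cut], items[cut:]

# 100 toy sample IDs standing in for UCF101 clips.
train, test = split_80_20(range(100))
```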

  3.
    Academic Journal

    Authors: Vishwakarma, Shelly1 (AUTHOR) s.vishwakarma@soton.ac.uk, Chetty, Kevin2 (AUTHOR), Le Kernec, Julien3 (AUTHOR), Chen, Qingchao4 (AUTHOR), Adve, Raviraj5 (AUTHOR), Gurbuz, Sevgi Zubeyde6 (AUTHOR), Li, Wenda7 (AUTHOR), Ram, Shobha Sundar8 (AUTHOR), Fioranelli, Francesco9 (AUTHOR)

    Source: IET Radar, Sonar & Navigation (Wiley-Blackwell). Feb2024, Vol. 18 Issue 2, p235-238. 4p.

    Abstract: This document is a guest editorial from the journal IET Radar, Sonar & Navigation. It discusses advances in AI-assisted radar sensing applications and the challenges that hinder their adoption in this field. The special issue of the journal features nine papers that address these challenges and offer innovative ideas and experimental results. The papers cover a range of topics, including health monitoring, human activity recognition, voice identification, elderly care health monitoring, track-to-track association, signal pre-processing, traffic congestion alleviation, and target recognition. The authors express their gratitude to the contributors and reviewers and believe that the research presented will inspire further exploration and innovation in this field. [Extracted from the article]

    Copyright of IET Radar, Sonar & Navigation (Wiley-Blackwell) is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)

  4.
    Academic Journal

    Authors: Yu, Ran1,2 (AUTHOR), Du, Yaxin2 (AUTHOR), Li, Jipeng3 (AUTHOR), Napolitano, Antonio4 (AUTHOR), Le Kernec, Julien1 (AUTHOR) julien.lekernec@glasgow.ac.uk

    Source: IET Radar, Sonar & Navigation (Wiley-Blackwell). Feb2024, Vol. 18 Issue 2, p277-293. 17p.

    Abstract: Radar-based human activity recognition is considered a competitive solution for elderly care health monitoring, compared to alternative techniques such as cameras and wearable devices. However, raw radar signals are often contaminated with noise, clutter, and other artifacts that significantly degrade recognition performance, which highlights the importance of pre-processing techniques that enhance radar data quality and improve classification model accuracy. In this study, two human activity classification models incorporating pre-processing techniques are proposed. The authors introduce wavelet denoising methods into a cyclostationarity-based classification model, resulting in a substantial improvement in classification accuracy. To address the limitations of conventional pre-processing techniques, a deep neural network model called the Double Phase Cascaded Denoising and Classification Network (DPDCNet) is proposed, which performs end-to-end signal-level classification and achieves state-of-the-art accuracy. The proposed models significantly reduce false detections and would enable robust activity monitoring for older individuals with radar signals, bringing the system closer to a practical implementation for deployment. [ABSTRACT FROM AUTHOR]
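The wavelet denoising idea referenced in the abstract can be illustrated with a minimal one-level Haar transform and soft thresholding. This is a textbook sketch, not the paper's method: the wavelet family, decomposition depth, and threshold value used in the study are not given here, and the helper name and toy signal are assumptions.

```python
import math

def haar_soft_denoise(x, thresh):
    """One-level Haar wavelet denoising with soft thresholding.

    Illustrative only: decompose an even-length signal into
    approximation/detail coefficients, shrink the detail coefficients
    toward zero, and reconstruct.
    """
    s2 = math.sqrt(2.0)
    approx = [(a + b) / s2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / s2 for a, b in zip(x[0::2], x[1::2])]
    # Soft-threshold the detail coefficients (noise lives here).
    soft = [math.copysign(max(abs(d) - thresh, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, soft):   # inverse one-level Haar transform
        out.extend([(a + d) / s2, (a - d) / s2])
    return out

# Small oscillations below the threshold are smoothed away.
clean = haar_soft_denoise([1.0, 1.1, 0.9, 1.0], thresh=0.2)
```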


  5.
    Academic Journal

    Authors: Zhou, Junyu1 (AUTHOR), Le Kernec, Julien1 (AUTHOR) julien.lekernec@glasgow.ac.uk

    Source: IET Radar, Sonar & Navigation (Wiley-Blackwell). Feb2024, Vol. 18 Issue 2, p239-255. 17p.

    Abstract: Millimetre-wave radar has been widely used in health monitoring and human activity recognition owing to its improved range resolution and operation in a variety of environmental conditions. With a MIMO antenna array, 4D radar is increasingly employed in autonomous driving, while its application in assisted living is recent, so the value added compared to the increase in signal processing and hardware requirements is still an open question. A model for a 4D time-division multiplexing (TDM) multiple-input-multiple-output (MIMO) frequency-modulated continuous wave radar is established using human activities from the HDM05 motion capture dataset. The simulator produces an end-to-end simulation, including four human motions (jumping jack, kick, punch, and walk), signal time of flight, noise, MIMO signal processing, and classification. Different pre-processing and point cloud-based methods are compared, obtaining an average classification accuracy of 90% with PointNet. This study simulates a specific 4D TDM MIMO radar configuration to benchmark signal pre-processing algorithms. It can also assist other researchers in generating range-Doppler-time point cloud datasets for human activities, testing different radar configurations, array configurations, and activities, saving valuable time in human resources and hardware development before prototyping to assess expected performance. [ABSTRACT FROM AUTHOR]
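The FMCW signal processing underlying range-Doppler representations can be sketched with the standard two-FFT pipeline. This is generic textbook processing, not the paper's 4D TDM MIMO simulator: the function name, matrix sizes, and the synthetic single-scatterer input are all assumptions for illustration.

```python
import numpy as np

def range_doppler_map(chirp_matrix):
    """Compute a range-Doppler map from de-chirped FMCW samples.

    Rows are slow time (chirps), columns are fast time (range
    samples): a range FFT along fast time, a Doppler FFT along slow
    time, then log-magnitude in dB.
    """
    range_fft = np.fft.fft(chirp_matrix, axis=1)            # range bins
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    return 20.0 * np.log10(np.abs(doppler_fft) + 1e-12)     # dB scale

# A single static scatterer: constant beat frequency, zero Doppler.
n_chirps, n_samples = 32, 64
t = np.arange(n_samples) / n_samples
cube = np.tile(np.exp(2j * np.pi * 8 * t), (n_chirps, 1))
rd = range_doppler_map(cube)
```

For a static target the peak sits at the beat-frequency range bin and at the centre (zero) Doppler bin after the `fftshift`.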


  6.
    Academic Journal

    Authors: Li, Zhenghui1 (AUTHOR), Liu, Yushi1 (AUTHOR), Liu, Bo1 (AUTHOR), Le Kernec, Julien1 (AUTHOR) julien.lekernec@glasgow.ac.uk, Yang, Shufan2 (AUTHOR)

    Source: IET Radar, Sonar & Navigation (Wiley-Blackwell). Feb2024, Vol. 18 Issue 2, p256-265. 10p.

    Company/Entity: UNIVERSITY of Glasgow

    Abstract: Building on previous radar-based human activity recognition (HAR), we expand the micro-Doppler signature to six domains and exploit each domain with a set of handcrafted features derived from the literature and our patents. An adaptive thresholding method is employed to isolate the region of interest, which is then applied in the other domains. To reduce the computational burden and accelerate convergence to an optimal solution for classification accuracy, a holistic approach to HAR optimisation is proposed using a surrogate model-assisted differential evolutionary algorithm (SADEA-I) to jointly optimise signal processing, adaptive thresholding, and classification parameters for HAR. Two distinct classification models are evaluated with holistic optimisation: SADEA-I with support vector machine (SVM) classifiers and SADEA-I with AlexNet. They achieve accuracies of 89.41% and 93.54%, respectively, an improvement of ∼11.3% for SVM and ∼2.7% for AlexNet compared to the performance without SADEA-I. The effectiveness of our holistic approach is validated using the University of Glasgow human radar signatures dataset. This proof of concept is significant for dimensionality reduction and computational efficiency when facing a multiplication of radar representation domains/feature spaces and transmitting/receiving channels that could be individually tuned in modern radar systems. [ABSTRACT FROM AUTHOR]
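The differential evolution underlying SADEA-I can be illustrated with a plain DE/rand/1/bin generation. This sketch deliberately omits the surrogate model that gives SADEA-I its efficiency, and the function names, control parameters (f, cr), and toy sphere objective are all assumptions.

```python
import random

def de_step(population, fitness, f=0.5, cr=0.9, rng=None):
    """One DE/rand/1/bin generation: mutate, crossover, greedy select.

    Plain differential evolution; SADEA-I additionally pre-screens
    candidates with a surrogate model, which is omitted here.
    """
    rng = rng or random.Random(0)
    dim = len(population[0])
    new_pop = []
    for i, target in enumerate(population):
        # Three distinct donors, none equal to the target vector.
        a, b, c = rng.sample([p for j, p in enumerate(population) if j != i], 3)
        mutant = [a[d] + f * (b[d] - c[d]) for d in range(dim)]
        j_rand = rng.randrange(dim)          # guarantee one mutant gene
        trial = [mutant[d] if (rng.random() < cr or d == j_rand) else target[d]
                 for d in range(dim)]
        # Greedy selection: keep whichever vector scores better.
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop

# Minimise the sphere function as a toy stand-in for HAR accuracy loss.
sphere = lambda v: sum(x * x for x in v)
rng = random.Random(1)
pop = [[rng.uniform(-3, 3) for _ in range(2)] for _ in range(8)]
for _ in range(30):
    pop = de_step(pop, sphere, rng=rng)
```

Greedy selection guarantees that no individual's fitness ever worsens between generations.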


  7.
    Academic Journal

    Authors: Singh, Juginder Pal1 (AUTHOR) juginder.singh@gla.ac.in, Kumar, Manoj1 (AUTHOR)

    Source: Journal of Experimental & Theoretical Artificial Intelligence. Feb 2024, Vol. 36 Issue 2, p187-211. 25p.

    Abstract: Activity recognition has gained immense popularity due to the increasing number of surveillance cameras. The purpose of activity recognition is to detect actions from a series of observations under varying environmental conditions. In this paper, a Chaotic Whale Atom Search Optimisation (CWASO)-based Deep Stacked Autoencoder (CWASO-Deep SAE) is proposed for crowd behaviour recognition. The key frames are subjected to a feature descriptor to extract the features, which form the classifier input vector. In this model, statistical features, optical flow features, and visual features are extracted as the important features. Furthermore, the significant features are passed to the deep stacked autoencoder (Deep SAE) for activity recognition, and the training of the Deep SAE is guided by CWASO, which is designed by combining the Atom Search Optimisation (ASO) algorithm and the Chaotic Whale Optimisation Algorithm (CWOA). The proposed system's performance is analysed using two datasets. Considering the training data, the proposed method attains high performance for dataset-1, with maximum precision, sensitivity, and specificity of 96.826%, 96.790%, and 99.395%, respectively. Similarly, considering K-fold validation, the method attains maximum precision of 96.897%, sensitivity of 96.885%, and specificity of 97.245% for dataset-1. [ABSTRACT FROM AUTHOR]

    Copyright of Journal of Experimental & Theoretical Artificial Intelligence is the property of Taylor & Francis Ltd and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)

  8.
    Academic Journal

    Authors: Li, Zhixin1 (AUTHOR), Liu, Hao2 (AUTHOR), Huan, Zhan1 (AUTHOR) hzh@cczu.edu.cn, Liang, Jiuzhen2 (AUTHOR)

    Source: Journal of Intelligent & Fuzzy Systems. 2024, Vol. 46 Issue 2, p3987-3999. 13p.

    Subject terms: HUMAN activity recognition, DATA augmentation

    Abstract: Human activity recognition (HAR) plays a crucial role in remotely monitoring the health of the elderly. Human annotation is time-consuming and expensive, especially for abstract sensor data. Contrastive learning can extract robust features from weakly annotated data to promote the development of sensor-based HAR. However, current research mainly focuses on the exploration of data augmentation methods and pre-trained models, disregarding the impact of data quality on labeling effort for fine-tuning. This paper proposes a novel active contrastive coding model that uses an active query strategy to evenly select small sets of high-quality samples in downstream tasks to complete the update of the pre-trained model. The proposed uncertainty-based balanced query strategy mines the most indistinguishable hard samples according to the data posterior probability in the unlabeled sample pool, and imposes class-balance constraints to ensure equilibrium in the labeled sample pool. Extensive experiments show that the proposed method consistently outperforms several state-of-the-art baselines on four mainstream HAR benchmark datasets (UCI, WISDM, MotionSense, and USCHAD). With only approximately 10% of samples labeled, the method achieves impressive F1-scores of 98.54%, 99.34%, 98.46%, and 87.74%, respectively. [ABSTRACT FROM AUTHOR]
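An uncertainty-plus-balance query of the kind the abstract describes can be sketched as follows. This is a simplified illustration, not the paper's algorithm: predictive entropy stands in for the posterior-probability criterion, the per-class cap stands in for the balance constraint, and all names and numbers are assumptions.

```python
import math

def balanced_uncertainty_query(probs, pseudo_labels, per_class):
    """Pick the most uncertain samples, at most `per_class` per class.

    Uncertainty is predictive entropy; the class-balance constraint
    caps how many samples each (pseudo-)class may contribute.
    """
    def entropy(p):
        return -sum(pi * math.log(pi) for pi in p if pi > 0)

    # Rank the unlabeled pool, most uncertain first.
    order = sorted(range(len(probs)), key=lambda i: entropy(probs[i]),
                   reverse=True)
    taken, selected = {}, []
    for i in order:
        c = pseudo_labels[i]
        if taken.get(c, 0) < per_class:
            taken[c] = taken.get(c, 0) + 1
            selected.append(i)
    return selected

# Four toy samples with two-class posteriors and pseudo-labels.
probs = [[0.5, 0.5], [0.9, 0.1], [0.6, 0.4], [0.55, 0.45]]
labels = [0, 0, 1, 1]
picked = balanced_uncertainty_query(probs, labels, per_class=1)
```

With a cap of one per class, the most uncertain sample of each pseudo-class is selected.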

    Copyright of Journal of Intelligent & Fuzzy Systems is the property of IOS Press and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)

  9.
    Academic Journal

    Authors: Park, Hyunseo1 (AUTHOR) tkf92001@kaist.ac.kr, Lee, Gyeong Ho1 (AUTHOR) gyeongho@kaist.ac.kr, Han, Jaeseob1 (AUTHOR) j89449@kaist.ac.kr, Choi, Jun Kyun1 (AUTHOR) jkchoi59@kaist.edu

    Source: Future Generation Computer Systems. Feb2024, Vol. 151, p71-84. 14p.

    Abstract: Leveraging the enormous amounts of real-world data collected through Internet of Things (IoT) technologies, human activity recognition (HAR) has become a crucial component of numerous human-centric applications, with the aim of enhancing the quality of human life. While recent advancements in deep learning have significantly improved HAR, labeling data remains a significant challenge due to the substantial costs of human annotation for supervised model training. Active learning (AL) addresses this issue by strategically selecting informative samples for labeling during model training, thereby enhancing model performance. Although numerous approaches have been proposed for sample selection that consider aspects of uncertainty and representation, the difficulties of estimating uncertainty and exploiting the distribution of high-dimensional data still pose a major issue. The proposed deep learning-based active learning algorithm, called Multiclass Autoencoder-based Active Learning (MAAL), learns a latent representation leveraging the capacity of Deep Support Vector Data Description (Deep SVDD). With the multiclass autoencoder, which learns the normal characteristics of each activity class in the latent space, MAAL provides informative sample selection for model training by establishing a link between the HAR model and the selection model. The proposed MAAL is evaluated on two publicly available datasets. The performance results demonstrate improvements across the overall active learning rounds, with gains of up to 3.23% in accuracy and 3.67% in F1 score. Furthermore, numerical results and an analysis of sample selection are presented to validate the effectiveness of the proposed MAAL compared to the alternative methods.
    • We present a deep learning-based active learning method for an efficiently labeled dataset.
    • The proposed method extends the autoencoder with SVDD in a multiclass scheme.
    • We evaluate the proposed active learning method in the scenario of HAR applications.
    • Experimental results show improvements in the performance of HAR with a smaller dataset.
    • The selection of informative samples that are difficult for the model to predict is validated. [ABSTRACT FROM AUTHOR]
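The class-wise modeling idea behind MAAL can be illustrated with a deliberately crude analogy. Here class-mean prototypes replace the multiclass autoencoder, and distance to the nearest prototype replaces per-class reconstruction error; every name and number below is an assumption, not taken from the paper.

```python
def maal_like_scores(samples, class_means):
    """Score samples by their distance to the nearest class prototype.

    A crude stand-in for MAAL's per-class reconstruction error: a
    large minimum distance marks a sample that no class models well,
    i.e. an informative candidate for labeling.
    """
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    return [min(dist(s, m) for m in class_means) for s in samples]

# Two class prototypes and three toy samples in 2-D feature space.
means = [[0.0, 0.0], [4.0, 4.0]]
samples = [[0.1, 0.0], [2.0, 2.0], [3.9, 4.1]]
scores = maal_like_scores(samples, means)
# The middle sample lies between both classes and scores highest.
```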

    Copyright of Future Generation Computer Systems is the property of Elsevier B.V. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)

  10.
    Academic Journal

    Authors: Praveenkumar, S. M.1 (AUTHOR) praveenkumar_sm@kletech.ac.in, Patil, Prakashgoud1 (AUTHOR) prakashpatil@kletech.ac.in, Hiremath, P. S.1 (AUTHOR) pshiremath@kletech.ac.in

    Source: International Journal of Pattern Recognition & Artificial Intelligence. Mar2024, Vol. 38 Issue 4, p1-29. 29p.

    Abstract: A novel methodology is proposed for real-time recognition of human activity in the compressed domain of videos, based on motion vectors and a self-attention mechanism using vision transformers, termed motion vectors and vision transformers (MVViT). Videos in MPEG-4 and H.264 compression formats are considered in this study. Any video source can be handled without prior setup by adapting the proposed method to the corresponding video codecs and camera settings. Existing algorithms for recognizing human action in compressed video have limitations in this regard: (i) they require keyframes at a fixed interval, (ii) they use P frames only, and (iii) they normally support a single codec. These limitations are overcome in the proposed method by using arbitrary keyframe intervals, using both P and B frames, and supporting both MPEG-4 and H.264 codecs. Experiments are carried out on the benchmark datasets UCF101, HMDB51, and THUMOS14, and the recognition accuracy in the compressed domain is found to be comparable to that observed on raw video data, but at a reduced computational cost. The proposed MVViT method outperforms other recent methods with 61.0% fewer parameters and 63.7% fewer Giga Floating Point Operations Per Second (GFLOPS), while significantly improving accuracy by 0.8%, 5.9%, and 16.6% for UCF101, HMDB51, and THUMOS14, respectively. Also, speed is increased by 8% in the case of UCF101 compared to the highest speed reported in the literature on the same dataset. An ablation study of the proposed method is done using MVViT variants for different codecs, and performance is analysed in comparison with state-of-the-art network models. [ABSTRACT FROM AUTHOR]

    Copyright of International Journal of Pattern Recognition & Artificial Intelligence is the property of World Scientific Publishing Company and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)