Citation
Wei, So Zheng and Pang, Ying Han and Khoh, Wee How and Ooi, Shih Yin (2025) Multi-head Temporal Learning for Human Activity Recognition Using Smartphone. In: 2nd International Conference on Security and Information Technologies with AI, Internet Computing and Big-data Applications, SITAIBA 2023, 7-9 December 2023, New Taipei City. Full text not available from this repository.

Abstract
Human activities can be grouped into basic activities, complex activities, and postural transitions within or between basic activities. In the literature, fewer works focus on postural transition recognition. Postural transitions, e.g., stand-to-sit and sit-to-stand, are regular functional tasks indicative of muscle balance and power performance. Hence, examining postural transitions provides an objective tool for human mobility assessment. In this paper, a smartphone-based human activity recognition architecture using multi-head temporal convolutional operators is proposed. A custom-tailored learning head is developed for each sensor type, i.e., the accelerometer and the gyroscope. Feature learning is enhanced by attending to each sensor individually rather than to all sensors at once, which allows better characterization of the multichannel inertial data. Empirical results reveal the superiority of the proposed model over existing machine learning and deep learning models, with an accuracy of 98.35%.
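The abstract describes one temporal convolutional head per sensor whose outputs are fused before classification. The following is a minimal sketch of that idea in PyTorch, assuming 3-axis accelerometer and gyroscope inputs, 128-sample windows, 12 activity classes, and simple concatenation for fusion; the layer sizes, kernel widths, and fusion strategy are illustrative assumptions, not the paper's published configuration.

```python
# Hypothetical multi-head temporal model: one 1-D convolutional head per
# sensor (accelerometer, gyroscope), fused before a linear classifier.
# All hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn


class SensorHead(nn.Module):
    """Temporal convolutional head for one 3-axis inertial sensor."""

    def __init__(self, in_channels: int = 3, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, time) -> (batch, hidden)
        return self.net(x).squeeze(-1)


class MultiHeadHAR(nn.Module):
    """Separate head per sensor; features are concatenated, then classified."""

    def __init__(self, num_classes: int = 12, hidden: int = 64):
        super().__init__()
        self.acc_head = SensorHead(3, hidden)
        self.gyro_head = SensorHead(3, hidden)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, acc: torch.Tensor, gyro: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.acc_head(acc), self.gyro_head(gyro)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = MultiHeadHAR(num_classes=12)
    acc = torch.randn(8, 3, 128)   # assumed 128-sample inertial windows
    gyro = torch.randn(8, 3, 128)
    print(model(acc, gyro).shape)  # torch.Size([8, 12])
```

Keeping a dedicated head per sensor, as the abstract suggests, lets each branch learn filters matched to that sensor's signal characteristics before the features are combined.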
Item Type: | Conference or Workshop Item (Paper)
---|---
Uncontrolled Keywords: | Smartphone, temporal deep learning
Subjects: | Q Science > QA Mathematics > QA71-90 Instruments and machines
Depositing User: | Ms Rosnani Abd Wahab
Date Deposited: | 30 May 2025 02:24
Last Modified: | 03 Jun 2025 08:46
URI: | http://shdl.mmu.edu.my/id/eprint/13891