Smartphone-based human activity recognition using lightweight multiheaded temporal convolutional network

Citation

Raja Sekaran, Sarmela and Pang, Ying Han and Ooi, Shih Yin (2023) Smartphone-based human activity recognition using lightweight multiheaded temporal convolutional network. Expert Systems with Applications, 227. p. 120132. ISSN 0957-4174


Abstract

Sensor-based human activity recognition (HAR) has drawn extensive attention from the research community due to its potential applications in various domains, including interactive gaming, activity monitoring, healthcare, etc. Although numerous approaches (i.e., handcrafted feature-based and deep learning methods) have been proposed throughout the years, there are still several challenges in developing an efficient and effective HAR system. For instance, handcrafted feature-based methods rely on manual feature engineering by experts and require time-consuming feature selection procedures. Conversely, deep learning methods can automatically capture salient features without domain experts. However, some deep learning methods, especially Convolutional Neural Networks (CNN), cannot effectively extract temporal features, which are significant to motion analysis. Unlike CNN, recurrent models are exceptional at capturing temporal characteristics, but they contain an enormous number of parameters, requiring tremendous computation. This may limit the deployment of such models, especially on low-spec or embedded devices. Hence, this paper proposes a lightweight deep learning model, Lightweight Multiheaded TCN (Light-MHTCN), for human activity recognition. Light-MHTCN extracts multiscale features of the inertial sensor signals through parallel-organised Convolutional Heads to capture richer information. Further, integrating dilated causal convolutions and residual connections preserves longer-term dependencies, which boosts the overall model performance. The performance of Light-MHTCN is assessed on three popular smartphone-based HAR databases: UCI HAR, WISDM V1 and UniMiB SHAR. With only ∼0.21 million parameters, our lightweight model is able to achieve state-of-the-art performance with recognition accuracies of 96.47%, 99.98% and 98.63% on these databases, respectively.
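The abstract's two core building blocks — dilated causal convolution (so each output depends only on past samples, with an exponentially growing receptive field) and a residual connection around it — can be illustrated with a minimal numpy sketch. This is a hypothetical simplification for intuition only, not the paper's implementation; the function names, kernel sizes and dilations below are illustrative assumptions.

```python
import numpy as np

def causal_dilated_conv1d(x, weights, dilation):
    """Causal dilated 1-D convolution: the output at time t depends only
    on x[t], x[t-d], x[t-2d], ... (inputs are left-padded with zeros,
    so no future sample leaks into the output)."""
    k = len(weights)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(weights[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

def residual_block(x, weights, dilation):
    """Dilated causal convolution wrapped in a residual (skip) connection,
    the TCN-style pattern that helps preserve longer-term dependencies."""
    return x + causal_dilated_conv1d(x, weights, dilation)

# Parallel "heads" with different kernel sizes and dilations capture
# multiscale temporal features from the same signal; here they are simply
# stacked (a hypothetical stand-in for the model's feature-fusion step).
signal = np.sin(np.linspace(0, 4 * np.pi, 128))
heads = [residual_block(signal, np.ones(k) / k, d)
         for k, d in [(3, 1), (3, 2), (5, 4)]]
features = np.stack(heads)  # shape: (3, 128)
```

Note how the dilation rate stretches the kernel over time without adding parameters: a kernel of size 5 with dilation 4 spans 17 time steps, which is why stacked dilated convolutions reach long-range context so cheaply compared with recurrent models.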

Item Type: Article
Uncontrolled Keywords: Lightweight deep learning model; Human activity recognition; Temporal convolutional network; Dilated convolution; Multiscale feature extraction
Subjects: T Technology > TK Electrical engineering. Electronics. Nuclear engineering > TK5101-6720 Telecommunication. Including telegraphy, telephone, radio, radar, television
Divisions: Faculty of Information Science and Technology (FIST)
Depositing User: Ms Nurul Iqtiani Ahmad
Date Deposited: 02 Jun 2023 00:38
Last Modified: 02 Jun 2023 00:38
URI: http://shdl.mmu.edu.my/id/eprint/11435
