HGR-ViT: Hand Gesture Recognition with Vision Transformer

Citation

Tan, Chun Keat and Lim, Kian Ming and Chang, Roy Kwang Yang and Lee, Chin Poo and Alqahtani, Ali (2023) HGR-ViT: Hand Gesture Recognition with Vision Transformer. Sensors, 23 (12). p. 5555. ISSN 1424-8220

Abstract

Hand gesture recognition (HGR) is a crucial area of research that enhances communication by overcoming language barriers and facilitating human-computer interaction. Although previous works in HGR have employed deep neural networks, they fail to encode the orientation and position of the hand in the image. To address this issue, this paper proposes HGR-ViT, a Vision Transformer (ViT) model with an attention mechanism for hand gesture recognition. Given a hand gesture image, it is first split into fixed-size patches. Positional embeddings are added to the patch embeddings to form learnable vectors that capture the positional information of the hand patches. The resulting sequence of vectors is then fed as input to a standard Transformer encoder to obtain the hand gesture representation. A multilayer perceptron head is added to the output of the encoder to classify the hand gesture into the correct class. The proposed HGR-ViT obtains an accuracy of 99.98%, 99.36% and 99.85% on the American Sign Language (ASL) dataset, the ASL with Digits dataset, and the National University of Singapore (NUS) hand gesture dataset, respectively.
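The pipeline described in the abstract (patch splitting, positional embedding, Transformer encoding, MLP classification head) can be sketched in numpy. This is an illustrative toy with made-up dimensions and random weights, not the paper's actual HGR-ViT configuration; the single self-attention step stands in for the full Transformer encoder.

```python
import numpy as np

# Illustrative sizes only -- not the paper's configuration.
IMG, PATCH, DIM, CLASSES = 64, 16, 32, 24

rng = np.random.default_rng(0)
image = rng.standard_normal((IMG, IMG, 3))        # one hand gesture image

# 1. Split the image into fixed-size patches and flatten each patch.
n = IMG // PATCH                                   # patches per side
patches = image.reshape(n, PATCH, n, PATCH, 3).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(n * n, PATCH * PATCH * 3)

# 2. Linear projection plus a positional embedding (learnable in ViT,
#    random here) so each token carries its patch's position.
W_embed = rng.standard_normal((patches.shape[1], DIM)) * 0.02
pos_embed = rng.standard_normal((n * n, DIM)) * 0.02
tokens = patches @ W_embed + pos_embed

# 3. One scaled dot-product self-attention step as a stand-in for the
#    standard Transformer encoder.
scores = tokens @ tokens.T / np.sqrt(DIM)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
encoded = attn @ tokens

# 4. MLP head on the pooled representation -> per-class logits.
W_head = rng.standard_normal((DIM, CLASSES)) * 0.02
logits = encoded.mean(axis=0) @ W_head
print(patches.shape, tokens.shape, logits.shape)
```

With a 64x64 image and 16x16 patches this yields 16 tokens of 768 flattened pixel values each, projected to 32-dimensional vectors, and finally 24 class logits.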

Item Type: Article
Uncontrolled Keywords: Computer interaction
Subjects: Q Science > QA Mathematics > QA71-90 Instruments and machines > QA75.5-76.95 Electronic computers. Computer science
Divisions: Faculty of Information Science and Technology (FIST)
Depositing User: Ms Nurul Iqtiani Ahmad
Date Deposited: 28 Jul 2023 07:16
Last Modified: 28 Jul 2023 07:16
URI: http://shdl.mmu.edu.my/id/eprint/11565
