Citation
Al Farid, Fahmid and Hashim, Noramiza and Abdullah, Junaidi and Bhuiyan, Md Roman and Kairanbay, Magzhan and Yusoff, Zulfadzli and Abdul Karim, Hezerul and Mansor, Sarina and Sarker, Md. Tanjil and Ramasamy, Gobbi (2024) Single Shot Detector CNN and Deep Dilated Masks for Vision-Based Hand Gesture Recognition From Video Sequences. IEEE Access, 12. pp. 28564-28574. ISSN 2169-3536
Text
26.pdf - Published Version, 1MB (Restricted to Repository staff only)
Abstract
With a growing world population, innovative human-computer interaction technologies and approaches may be employed to help individuals lead more fulfilling lives. Gesture-based technology has the potential to improve the safety and well-being of impaired people as well as the general population. Recognizing gestures from video streams is a difficult problem because each motion varies considerably across individuals. In this article, we propose applying deep learning methods to recognize hand gestures automatically from RGB and depth data; either form of data may be used to train neural networks to detect hand gestures. Gesture-based interfaces are more natural, intuitive, and straightforward, and earlier studies have attempted to characterize hand motions in a number of contexts. Our technique is evaluated with a vision-based gesture recognition system. In the proposed technique, image collection starts with RGB video and depth information captured with the Kinect sensor, followed by tracking the hand using a single shot detector Convolutional Neural Network (SSD-CNN). When the convolutional kernel is applied, it produces an output value at each of the m × n locations, and each new feature layer uses a collection of convolutional filters to generate a fixed set of gesture detection predictions. We then apply deep dilation to make the gesture more visible in the image masks. Finally, hand gestures are classified using the well-known Support Vector Machine (SVM). Using deep learning, we recognize hand gestures on the SKIG dataset with accuracies of 93.68% on the RGB modality, 83.45% on the depth modality, and 90.61% on the RGB-D combination, higher than the state-of-the-art. On our own Different Camera Orientation Gesture (DCOG) dataset, we obtain accuracies of 92.78% on the RGB modality, 79.55% on the depth modality, and 88.56% on the RGB-D combination for gestures collected at a 0-degree camera angle. Moreover, the framework aims to use novel methodologies to construct a superior vision-based hand gesture recognition system.
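The abstract describes a pipeline of hand detection (SSD-CNN), mask dilation, and SVM classification. The sketch below is a minimal, illustrative outline of such a pipeline, not the authors' implementation: it assumes OpenCV and scikit-learn, replaces the trained SSD-CNN with a hypothetical placeholder `detect_hand_roi`, and approximates the paper's deep dilated masks with plain morphological dilation.

```python
# Illustrative sketch only: detector is a placeholder, dilation stands in for the
# paper's "deep dilated mask" step, and feature extraction is a simple resize/flatten.
import cv2
import numpy as np
from sklearn.svm import SVC


def detect_hand_roi(frame_bgr):
    """Hypothetical stand-in for the SSD-CNN hand detector.
    A real implementation would run a trained single shot detector and
    return the highest-scoring hand bounding box (x, y, w, h)."""
    h, w = frame_bgr.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)  # dummy central box for illustration


def dilated_hand_mask(frame_bgr, roi, iterations=3):
    """Threshold the detected hand region and apply morphological dilation
    so the gesture silhouette becomes more prominent in the mask."""
    x, y, w, h = roi
    gray = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.dilate(mask, kernel, iterations=iterations)


def mask_to_feature(mask, size=(32, 32)):
    """Flatten a resized mask into a fixed-length feature vector for the SVM."""
    return cv2.resize(mask, size).astype(np.float32).ravel() / 255.0


def train_gesture_svm(features, labels):
    """Fit an RBF-kernel SVM on precomputed mask features (N x D) and labels (N)."""
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(np.asarray(features), np.asarray(labels))
    return clf
```

At inference time, each video frame would pass through `detect_hand_roi`, `dilated_hand_mask`, and `mask_to_feature` before `clf.predict`; the RGB and depth streams could be processed separately or fused, matching the RGB, depth, and RGB-D results reported in the abstract.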
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Gesture recognition, video sequences, SVM, SSD-CNN, deep dilated mask |
| Subjects: | N Fine Arts > NX Arts in general; Q Science > QA Mathematics > QA71-90 Instruments and machines |
| Divisions: | Faculty of Computing and Informatics (FCI) |
| Depositing User: | Ms Nurul Iqtiani Ahmad |
| Date Deposited: | 04 Mar 2024 03:09 |
| Last Modified: | 04 Mar 2024 03:09 |
| URI: | http://shdl.mmu.edu.my/id/eprint/12164 |