Enhancing Audio Classification Through MFCC Feature Extraction and Data Augmentation with CNN and RNN Models

Citation

Rezaul, Karim Mohammed and Jewel, Md. and Islam, Md. Shabiul and Siddiquee, Kazy Noor e Alam and Barua, Nick and Rahman, Muhammad Azizur and Shan-A-Khuda, Mohammad and Sulaiman, Rejwan and Shaikh, Md Sadeque Imam and Hamim, Md Abrar and Tanmoy, F.M and Haque, Afraz Ul and Nipun, Musarrat Saberin and Dorudian, Navid and Kareem, Amer and Farid, Ahmmed Khondokar and Mubarak, Asma and Jannat, Tajnuva and Asha, Umme Fatema Tuj (2024) Enhancing Audio Classification Through MFCC Feature Extraction and Data Augmentation with CNN and RNN Models. International Journal of Advanced Computer Science and Applications, 15 (7). ISSN 2158-107X

Full text (Published Version, PDF, 1MB): Restricted to Repository staff only

Abstract

Sound classification is a multifaceted task that requires gathering and processing large quantities of data, as well as building machine learning models that can accurately distinguish between sounds. In this project, we implemented a novel methodology for classifying both musical instruments and environmental sounds using convolutional and recurrent neural networks. We extracted audio features with the Mel Frequency Cepstral Coefficient (MFCC) method, which emulates the human auditory system and produces highly distinctive features. Recognizing the importance of data processing, we applied a range of data augmentation and cleaning techniques to arrive at an optimized solution. The outcomes were noteworthy: both the convolutional and recurrent neural network models achieved a commendable level of accuracy. As machine learning and deep learning continue to transform image classification, it is timely to develop similarly adaptable models for audio classification. Despite the challenges of a small dataset, we successfully built our models using convolutional and recurrent neural networks. Our approach to sound classification has implications for diverse domains, including speech recognition, music production, and healthcare. We believe that, with further research, this work can pave the way for advances in audio data classification and analysis.
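For readers who want a concrete starting point, below is a minimal sketch of the kind of pipeline the abstract describes: MFCC feature extraction, simple waveform-level augmentations, and a small convolutional classifier. It assumes librosa, NumPy, and TensorFlow/Keras; the sample rate, coefficient count, augmentation parameters, and network layout are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import librosa
import tensorflow as tf

SR = 22050    # sample rate (assumed; the abstract does not specify one)
N_MFCC = 40   # number of MFCC coefficients (illustrative choice)

def extract_mfcc(path, sr=SR, n_mfcc=N_MFCC):
    """Load an audio file and return its MFCC matrix of shape (n_mfcc, frames)."""
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

def add_noise(y, noise_level=0.005):
    """Waveform-level augmentation: additive Gaussian noise."""
    return y + noise_level * np.random.randn(len(y))

def time_shift(y, sr=SR, max_shift_s=0.2):
    """Waveform-level augmentation: random circular shift of up to max_shift_s seconds."""
    return np.roll(y, np.random.randint(int(sr * max_shift_s)))

def build_cnn(input_shape, n_classes):
    """A deliberately small 2-D CNN treating the MFCC matrix as an image (hypothetical layout)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),           # (n_mfcc, frames, 1)
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# Example usage (hypothetical file and class count):
# features = extract_mfcc("clip.wav")[..., np.newaxis]   # add channel axis
# model = build_cnn(features.shape, n_classes=10)
```

A recurrent variant can be sketched analogously by treating the MFCC frames as a time sequence, for example feeding the transposed (frames, n_mfcc) matrix to an LSTM layer in place of the convolutional stack.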

Item Type: Article
Uncontrolled Keywords: Deep learning (artificial intelligence); data augmentation; audio segmentation
Subjects: Q Science > Q Science (General) > Q300-390 Cybernetics
Divisions: Faculty of Engineering (FOE)
Depositing User: Ms Nurul Iqtiani Ahmad
Date Deposited: 02 Sep 2024 07:54
Last Modified: 02 Sep 2024 07:54
URI: http://shdl.mmu.edu.my/id/eprint/12920
