Deep Learning-Based Detection of Inappropriate Speech Content for Film Censorship

Citation

Ba Wazir, Abdulaziz Saleh and Abdul Karim, Hezerul and Hor, Sui Lyn and Ahmad Fauzi, Mohammad Faizal and Mansor, Sarina and Lye Abdullah, Mohd Haris (2022) Deep Learning-Based Detection of Inappropriate Speech Content for Film Censorship. IEEE Access, 10. pp. 101697-101715. ISSN 2169-3536

27.pdf - Published Version (3MB). Restricted to Repository staff only.

Abstract

Audible content has become an effective tool for shaping one's personality and character, owing to the ease of access to a vast amount of audio, whether as independent audio files or as the audio tracks of online videos, movies, and television programs. There is a pressing need to filter the inappropriate audible content of easily accessible videos and films that are likely to contain inappropriate speech. With this in view, broadcasting and online video/audio platform companies hire considerable manpower to detect foul voices prior to censorship, a process that is costly in terms of manpower, time, and financial resources, and that is prone to inaccurate detection owing to fatigue and the limitations of the human visual and auditory systems during long and monotonous tasks. As such, this paper proposes an intelligent deep learning-based system for film censorship through a fast and accurate detection and localization approach using advanced deep Convolutional Neural Networks (CNNs). A foul language dataset containing isolated-word samples and continuous speech was collected, annotated, processed, and analyzed for the development of automated detection of inappropriate speech content. The results indicated the feasibility of the suggested system, which detected a high volume of inappropriate spoken terms. The proposed system outperformed state-of-the-art baseline algorithms on the novel foul language dataset in terms of macro average AUC (93.85%), weighted average AUC (94.58%), and all other metrics, such as the F1-score. The proposed acoustic system also outperformed an ASR-based system for profanity detection on evaluation metrics including AUC, accuracy, precision, and F1-score. Furthermore, the proposed system was shown to be faster than manual human screening and detection of audible content for film censorship.
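To make the kind of pipeline described in the abstract concrete, the sketch below shows a minimal acoustic classification setup of that general shape: log-mel spectrogram features fed to a small CNN, scored with the macro/weighted average AUC and weighted F1 metrics the paper reports. This is an illustrative assumption, not the authors' published architecture or code; the helper names (log_mel, build_cnn, evaluate) and all hyperparameters are hypothetical.

```python
# Hedged sketch only: a small spectrogram CNN for inappropriate-word classification,
# evaluated with macro/weighted AUC and weighted F1. Not the paper's actual model.
import numpy as np
import librosa
import tensorflow as tf
from sklearn.metrics import roc_auc_score, f1_score


def log_mel(path, sr=16000, n_mels=64, duration=1.0):
    """Load a fixed-length clip and return a (n_mels, frames, 1) log-mel spectrogram."""
    y, _ = librosa.load(path, sr=sr, duration=duration)
    y = np.pad(y, (0, max(0, int(sr * duration) - len(y))))  # pad short clips
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)[..., np.newaxis]


def build_cnn(input_shape, n_classes):
    """Small 2-D CNN over spectrograms; the paper uses deeper 'advanced deep CNNs'."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])


def evaluate(model, X_test, y_test):
    """Report metrics in the style of the abstract (assumes multi-class integer labels)."""
    probs = model.predict(X_test)      # (samples, n_classes) class probabilities
    preds = probs.argmax(axis=1)
    return {
        "macro_auc": roc_auc_score(y_test, probs, average="macro", multi_class="ovr"),
        "weighted_auc": roc_auc_score(y_test, probs, average="weighted", multi_class="ovr"),
        "weighted_f1": f1_score(y_test, preds, average="weighted"),
    }
```

In a setup like this, isolated-word clips would be converted to fixed-size log-mel inputs, the CNN trained on the labelled classes, and held-out clips scored with the averaged AUC and F1 metrics; the actual dataset, architecture depth, and localization step in the paper are not reproduced here.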

Item Type: Article
Uncontrolled Keywords: Speech recognition, Acoustics, Computational modeling, Deep learning, Motion pictures, Censorship, Speech processing
Subjects: Q Science > Q Science (General) > Q300-390 Cybernetics
Divisions: Faculty of Engineering (FOE)
Depositing User: Ms Nurul Iqtiani Ahmad
Date Deposited: 31 Oct 2022 07:21
Last Modified: 31 Oct 2022 07:21
URI: http://shdl.mmu.edu.my/id/eprint/10582
