Citation

Suud, Mazliham Mohd and Nazim, Sadia and Alam, Muhammad Mansoor and Rizvi, Syed Safdar Ali and Mustapha, Jawahir Che and Hussain, Syed Shujaa (2025) Advancing malware imagery classification with explainable deep learning: A state-of-the-art approach using SHAP, LIME and Grad-CAM. PLOS One, 20 (5). e0318542. ISSN 1932-6203. Full text not available from this repository.

Abstract
Artificial Intelligence (AI) is being integrated into ever more domains of everyday activity. While AI offers countless benefits, its complex and often opaque internal operations can create difficulties. AI is now widely employed for assessments in cybersecurity, yet such systems often struggle to justify their decisions; this absence of accountability is alarming. Additionally, over the last ten years, the rapid growth in malware variants has led researchers to apply Machine Learning (ML) and Deep Learning (DL) approaches to detection. Although these methods achieve exceptional accuracy, they are also difficult to interpret. The development of interpretable yet powerful AI models is therefore indispensable to their reliability and trustworthiness. The opaque, hard-to-characterize nature of existing AI-based methods undermines user trust in cybersecurity models, especially given the increasingly complex and diverse nature of modern cyberattacks. The present research conducts a comparative analysis of an ensemble deep neural network (DNNW) against ensemble techniques such as RUSBoost, Random Forest, Subspace, AdaBoost, and BagTree for prediction on imagery malware data. It identifies the best-performing model, the ensemble DNNW, for which explainability is then provided. Despite the fact that DL/ML algorithms have revolutionized malware detection, there has been relatively little study of explainability, especially for malware imagery data. Explainability techniques such as SHAP, LIME, and Grad-CAM are employed to provide a comprehensive account of feature significance and of the model's local and global predictive behavior across malware categories. Contributions include a thorough investigation of the significant characteristics and their impact on the model's decision-making process, together with visualizations of multiple query points.
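The ensemble comparison described above can be illustrated in outline. The sketch below is not the authors' pipeline: it uses synthetic data as a hypothetical stand-in for flattened malware-image features and compares three of the named ensemble families available in scikit-learn (Random Forest, AdaBoost, and a bagged-tree ensemble) on held-out accuracy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              BaggingClassifier)
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for flattened malware-image feature vectors
# (hypothetical data; the study uses real malware imagery).
X, y = make_classification(n_samples=400, n_features=64, n_informative=16,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Three of the ensemble families named in the abstract, default-configured.
models = {
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=0),
    "BagTree": BaggingClassifier(n_estimators=100, random_state=0),
}

# Fit each model and record held-out accuracy for comparison.
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```

In the study, the winner of such a comparison (the ensemble DNNW) is the model subsequently subjected to the explainability analysis.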
This strategy promotes transparency and trustworthy cybersecurity applications by improving the comprehension of malware detection techniques and by integrating explainable-AI observations with domain-specific knowledge.
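Of the three explainability techniques named, Grad-CAM has the simplest core computation: channel-importance weights are the global-average-pooled gradients of the class score, and the heat map is the ReLU of the weighted sum of the last convolutional layer's feature maps. The sketch below is a minimal NumPy illustration of that weighting only, with synthetic feature maps and gradients standing in for values a real network would produce; it is not the authors' implementation.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Toy Grad-CAM heat map from precomputed activations and gradients.

    feature_maps: (K, H, W) activations of the last conv layer.
    gradients:    (K, H, W) gradients of the class score w.r.t. those maps.
    """
    # Channel weights alpha_k: global-average-pool the gradients.
    alphas = gradients.mean(axis=(1, 2))               # shape (K,)
    # Weighted sum of feature maps over the channel axis.
    cam = np.tensordot(alphas, feature_maps, axes=1)   # shape (H, W)
    # ReLU keeps only features with a positive influence on the class.
    cam = np.maximum(cam, 0)
    # Normalize to [0, 1] for visualization as a heat map overlay.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Synthetic activations/gradients (hypothetical stand-ins).
rng = np.random.default_rng(0)
heatmap = grad_cam(rng.random((8, 4, 4)), rng.random((8, 4, 4)))
```

In practice the resulting map is upsampled to the input-image resolution and overlaid on the malware image to show which regions drove the prediction.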
| Item Type: | Article |
|---|---|
| Subjects: | Q Science > QA Mathematics > QA71-90 Instruments and machines |
| Depositing User: | Ms Suzilawati Abu Samah |
| Date Deposited: | 26 Jun 2025 07:21 |
| Last Modified: | 26 Jun 2025 07:21 |
| URI: | http://shdl.mmu.edu.my/id/eprint/14105 |