Rough-based Data Sanitization Model for Defending Adversarial Attack in Machine Learning

Citation

Tan, Jun Wei and Goh, Pey Yun and Tan, Shing Chiang (2022) Rough-based Data Sanitization Model for Defending Adversarial Attack in Machine Learning. In: Postgraduate Colloquium December 2022, 1-15 December 2022, Multimedia University, Malaysia. (Unpublished)

Text: 48_TAN JUN WEI_FIST.pdf - Submitted Version (Restricted to Registered users only, 367kB)

Abstract

• What is Machine Learning (ML)? ML learns from data and extracts information to build its own knowledge, which is then applied for prediction.
• Due to the drastically growing dependency on ML, security problems are emerging. This area is called Adversarial Machine Learning (AML).
• What is AML? The ML model is attacked by an adversary, for example by injecting fake samples or noise into the model, causing wrong predictions (refer to Figure 1).
• To protect against the attacker, a defender is needed, for example data sanitization (a minimal illustrative sketch of such a filtering step is given below).
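The sketch below is not the rough-set-based sanitization model described in the poster; it is only a minimal, assumed illustration of the general idea of data sanitization as a poisoning defence: suspected fake or noisy training samples are filtered out before the model is trained. The function name, the k-NN label-agreement heuristic, and the value of k are all assumptions made for demonstration.

# Illustrative sketch only (assumed heuristic, not the poster's method):
# drop training samples whose label disagrees with the majority of their
# nearest neighbours, a simple way to filter suspected poisoned points.
import numpy as np

def sanitize(X, y, k=5):
    """Return indices of samples to keep after a k-NN label-agreement check."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    keep = []
    for i in range(len(X)):
        # distances from sample i to every other training sample
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the sample itself
        neighbours = np.argsort(d)[:k]     # indices of the k nearest neighbours
        # keep the sample only if its label matches the neighbourhood majority
        labels, counts = np.unique(y[neighbours], return_counts=True)
        if labels[np.argmax(counts)] == y[i]:
            keep.append(i)
    return np.array(keep)

# Usage: train the ML model on the sanitized subset instead of the raw data,
# e.g. idx = sanitize(X, y); X_clean, y_clean = X[idx], y[idx].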

Item Type: Conference or Workshop Item (Poster)
Uncontrolled Keywords: Machine learning
Subjects: Q Science > QA Mathematics > QA71-90 Instruments and machines > QA75.5-76.95 Electronic computers. Computer science
Divisions: Faculty of Information Science and Technology (FIST)
Depositing User: Ms Nurul Iqtiani Ahmad
Date Deposited: 19 Dec 2022 05:35
Last Modified: 19 Dec 2022 05:35
URI: http://shdl.mmu.edu.my/id/eprint/10912
