Using the MNIST Dataset for De-Pois Attack and Defense

Abstract

Machine learning (ML) has become a compelling element of diverse systems and applications. Recent research has revealed that ML algorithms are exposed to serious security vulnerabilities, notwithstanding their outstanding performance in various signal-processing and decision-making tasks. ML systems trained on user-provided data are vulnerable to data poisoning attacks, in which attackers inject malicious samples into the training dataset in order to mislead the learned model. For instance, in model extraction attacks an attacker can steal the proprietary parameters of an ML model, and in model inversion attacks they can recover private and sensitive data from the training dataset. The objective of this study is to build a De-Pois-based defence mechanism against attackers who corrupt ML models by poisoning the training dataset. De-Pois is an attack-agnostic defence: it requires no knowledge of the target ML algorithm or the type of poisoning attack, and it can be applied to both classification and regression tasks. The core of the De-Pois approach is training mimic models, whose goal is to replicate the behaviour of the target model; poisoned samples can then be detected by how far their outcomes deviate from the mimic model's predictions.
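The mimic-model idea above can be illustrated with a minimal sketch. The toy data, the logistic-regression "mimic model", and the fixed confidence threshold below are all simplifying assumptions for illustration (the actual De-Pois method uses GAN-based synthetic data and a learned detection boundary); the sketch only shows the detection principle: score each incoming training sample by the mimic model's confidence in its provided label, and flag low-confidence samples as potentially poisoned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MNIST: 200 linearly separable 64-dimensional samples.
n, d = 200, 64
X_clean = rng.normal(0.0, 1.0, (n, d))
w_true = rng.normal(0.0, 1.0, d)
y_clean = (X_clean @ w_true > 0).astype(int)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

def train_mimic(X, y, lr=0.1, epochs=200):
    """Hypothetical mimic model: logistic regression fit on trusted data."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

w_mimic = train_mimic(X_clean, y_clean)

# Incoming training batch: 50 samples, the first 10 label-flipped (poisoned).
X_in = X_clean[:50]
y_in = y_clean[:50].copy()
y_in[:10] = 1 - y_in[:10]

# Confidence of the mimic model in each sample's *provided* label.
# A flipped label disagrees with mimic behaviour, so its confidence is low.
p = sigmoid(X_in @ w_mimic)
conf = np.where(y_in == 1, p, 1.0 - p)
threshold = 0.5  # assumed fixed cut-off; De-Pois derives its boundary from data
flagged = conf < threshold

print("poisoned flagged:", int(flagged[:10].sum()), "of 10")
print("clean flagged:", int(flagged[10:].sum()), "of 40")
```

Most of the label-flipped samples fall below the threshold while the clean ones stay above it, which is the behaviour the abstract attributes to the mimic-model comparison.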

 

Keywords: Machine learning, data poisoning attacks, De-Pois, mimic models, attack-agnostic defence.

 

Published in: 4th International Conference on Recent Trends in Communication and Intelligent Systems (ICRTCI 2023)

AUTHORS

Chetan Niloor


Rashmi Agarwal


Pradeepta Mishra

