Adversarial Robustness for Machine Learning Models Books


Robust Machine Learning in Adversarial Setting with Provable Guarantee
  • Author : Yizhen Wang
  • Publisher :
  • Release : 2020
  • OCLC : 1149141432

Over the last decade, machine learning systems have achieved state-of-the-art performance in many fields and are now used in an increasing number of applications. However, recent research has revealed multiple attacks on machine learning systems that significantly reduce their performance by manipulating the training or test data. As machine learning becomes increasingly involved in high-stakes decision-making processes, the robustness of machine learning systems in adversarial environments is a major concern. This dissertation attempts to build machine learning systems that are robust to such adversarial manipulation, with an emphasis on providing theoretical performance guarantees. We consider adversaries at both test and training time, and make the following contributions. First, we study the robustness of machine learning algorithms and models to test-time adversarial examples. We analyze the distributional and finite-sample robustness of nearest-neighbor classification, and propose a modified 1-Nearest-Neighbor classifier that offers both a theoretical robustness guarantee and an empirical improvement in robustness. Second, we examine the robustness of malware detectors to program transformation. We propose novel attacks that evade existing detectors via program transformation, and then show that program normalization is a provably robust defense against such transformations. Finally, we investigate data-poisoning attacks and defenses for online learning, in which models update and predict over a data stream in real time. We show efficient attacks for general adversarial objectives, analyze the conditions under which filtering-based defenses are effective, and provide practical guidance on choosing defense mechanisms and parameters.
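The kind of guarantee studied here for nearest-neighbor classification rests on a simple geometric fact: by the triangle inequality, a 1-NN prediction at a point x cannot change under any perturbation whose Euclidean norm is less than half the gap between the distance to the nearest differently-labeled training point and the distance to the nearest training point of the predicted class. Below is a minimal sketch of that certificate; the helper name certified_radius_1nn is ours for illustration, and this shows the flavor of the guarantee, not the dissertation's modified classifier.

```python
import numpy as np

def certified_radius_1nn(X, y, x):
    """Certified L2 robustness radius of a 1-NN prediction at x.

    By the triangle inequality, any perturbation delta with
    ||delta||_2 < (d_other - d_same) / 2 cannot change the prediction,
    where d_same is the distance from x to the nearest training point
    of the predicted class and d_other the distance to the nearest
    point of any other class.
    """
    dists = np.linalg.norm(X - x, axis=1)   # distances to all training points
    pred = y[np.argmin(dists)]              # the 1-NN prediction at x
    d_same = dists[y == pred].min()         # nearest same-label point
    d_other = dists[y != pred].min()        # nearest differently-labeled point
    return pred, max(0.0, (d_other - d_same) / 2.0)

# Toy usage: two well-separated classes in the plane.
X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]])
y = np.array([0, 0, 1, 1])
pred, radius = certified_radius_1nn(X, y, np.array([0.1, 0.0]))
print(pred, radius)  # prediction stays 0 for any L2 perturbation smaller than radius
```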

Interpretable Machine Learning
  • Author : Christoph Molnar
  • Publisher : Lulu.com
  • Release : 2019
  • ISBN : 9780244768522

Enhancing Adversarial Robustness of Deep Neural Networks
  • Author : Jeffrey Zhang (M. Eng.)
  • Publisher :
  • Release : 2019
  • OCLC : 1127291827

Logit-based regularization and pretrain-then-tune are two approaches that have recently been shown to enhance the adversarial robustness of machine learning models. In the realm of regularization, Zhang et al. (2019) proposed TRADES, a logit-based regularization objective that improves upon the robust optimization framework developed by Madry et al. (2018) [14, 9], achieving state-of-the-art adversarial accuracy on CIFAR10. In the realm of pretrain-then-tune models, Hendrycks et al. (2019) demonstrated that adversarially pretraining a model on ImageNet and then adversarially tuning it on CIFAR10 greatly improves adversarial robustness. In this work, we propose Adversarial Regularization, another logit-based regularization framework, which surpasses TRADES in adversarial generalization. Furthermore, we explore the impact of different types of adversarial training on the pretrain-then-tune paradigm.
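For context, the TRADES objective referenced above augments the natural cross-entropy loss with a KL-divergence term between the model's predictions on clean and adversarially perturbed inputs, roughly min_theta E[ CE(f_theta(x), y) + beta * KL(f_theta(x) || f_theta(x')) ], where x' approximately maximizes the KL term within an epsilon-ball around x. A minimal PyTorch-style sketch of the loss follows; the helper name trades_loss is ours, and the inner PGD search that produces x_adv is assumed to happen elsewhere.

```python
import torch
import torch.nn.functional as F

def trades_loss(model, x, x_adv, y, beta=6.0):
    """TRADES-style objective: natural cross-entropy plus a KL
    regularizer that pulls the logits on the adversarial input x_adv
    toward the logits on the clean input x.

    x_adv is assumed to come from an inner PGD loop that approximately
    maximizes the KL term within an epsilon-ball (omitted here).
    """
    logits_clean = model(x)
    logits_adv = model(x_adv)
    natural_loss = F.cross_entropy(logits_clean, y)
    # KL(p_clean || p_adv), batch-averaged; F.kl_div takes log-probs first.
    robust_loss = F.kl_div(
        F.log_softmax(logits_adv, dim=1),
        F.softmax(logits_clean, dim=1),
        reduction="batchmean",
    )
    return natural_loss + beta * robust_loss
```

The hyperparameter beta controls the trade-off between natural accuracy (beta small) and adversarial robustness (beta large).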

Deep Learning
  • Author : Ian Goodfellow
  • Publisher : MIT Press
  • Release : 2016-11-10
  • ISBN : 9780262337373

An introduction to a broad range of topics in deep learning, covering mathematical and conceptual background, deep learning techniques used in industry, and research perspectives. “Written by three experts in the field, Deep Learning is the only comprehensive book on the subject.” —Elon Musk, cochair of OpenAI; cofounder and CEO of Tesla and SpaceX Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

Machine Learning with Provable Robustness Guarantees
  • Author : Huan Zhang
  • Publisher :
  • Release : 2020
  • OCLC : 1229055139

Although machine learning has achieved great success in numerous complicated tasks, many machine learning models lack robustness in the presence of adversaries and can be misled by imperceptible adversarial noise. In this dissertation, we first study the robustness verification problem for machine learning, which gives provable guarantees on worst-case performance under arbitrarily strong adversaries. We study two popular machine learning models, deep neural networks (DNNs) and tree ensembles, and design efficient and effective algorithms to provably verify their robustness. For neural networks, we develop a linear relaxation based framework, CROWN, in which we relax the non-linear units in DNNs using linear bounds and propagate these bounds through the network. We generalize CROWN into a linear relaxation based perturbation analysis (LiRPA) algorithm that operates on arbitrary computational graphs and general network architectures, handling the irregular neural networks used in practice, and we release an open-source software package, auto_LiRPA, to facilitate the use of LiRPA by researchers in other fields. For tree ensembles, we reduce robustness verification to a max-clique problem on a specially constructed graph; this is very efficient compared to existing approaches and produces high-quality lower or upper bounds on the output of a tree-ensemble classifier. After developing our robustness verification algorithms, we use them to create a certified adversarial defense for neural networks, in which we explicitly optimize the bounds obtained from verification to greatly improve network robustness in a provable manner. Our LiRPA-based training method is very efficient: it scales to large datasets such as downscaled ImageNet and to modern computer vision models such as DenseNet. Lastly, we study the robustness of reinforcement learning (RL), which is more challenging than the supervised setting. We focus on the robustness of an RL agent's state observations, and develop the state-adversarial Markov decision process (SA-MDP) to characterize the behavior of an RL agent under adversarially perturbed observations. Based on SA-MDP, we develop two orthogonal approaches to improve RL robustness: a state-adversarial regularization that improves the robustness of function approximators, and alternating training with learned adversaries (ATLA) to mitigate intrinsic weaknesses in a policy. Both approaches are evaluated in a variety of simulated environments, where they significantly improve the robustness of RL agents under strong adversarial attacks, including several novel attacks that we propose.
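The auto_LiRPA package mentioned above wraps a standard PyTorch model and propagates linear bounds through its computational graph. A minimal usage sketch based on the package's documented interface follows; exact signatures may vary across versions, and the toy network and epsilon are our own.

```python
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# A small feedforward classifier to verify.
net = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(1, 1, 28, 28)  # one dummy MNIST-shaped input

# Wrap the model so linear bounds can be propagated through its graph.
model = BoundedModule(net, torch.empty_like(x))

# Declare an L-infinity perturbation of radius eps around x.
ptb = PerturbationLpNorm(norm=float("inf"), eps=8 / 255)
x_bounded = BoundedTensor(x, ptb)

# CROWN backward bound propagation: lb/ub provably bound each logit
# over every input inside the eps-ball.
lb, ub = model.compute_bounds(x=(x_bounded,), method="CROWN")
print(lb, ub)
```

A robustness certificate follows when the lower bound of the true class's logit exceeds the upper bounds of all other logits; the certified training described above optimizes such bounds directly.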