
Strengthening Deep Neural Networks. Making AI Less Susceptible to Adversarial Trickery
ebook
Author: Katy Warr
ISBN: 978-14-920-4490-1
Pages: 246; format: ebook
Publication date: 2019-07-03
Bookstore: Helion

Price: 211,65 zł (previously: 246,10 zł)
You save: 14% (-34,45 zł)

Add to cart: Strengthening Deep Neural Networks. Making AI Less Susceptible to Adversarial Trickery

As deep neural networks (DNNs) become increasingly common in real-world applications, the potential to deliberately "fool" them with data that wouldn’t trick a human presents a new attack vector. This practical book examines real-world scenarios where DNNs—the algorithms intrinsic to much of AI—are used daily to process image, audio, and video data.

Author Katy Warr considers attack motivations, the risks posed by this adversarial input, and methods for increasing AI robustness to these attacks. If you’re a data scientist developing DNN algorithms, a security architect interested in how to make AI systems more resilient to attack, or someone fascinated by the differences between artificial and biological perception, this book is for you.

  • Delve into DNNs and discover how they could be tricked by adversarial input
  • Investigate methods used to generate adversarial input capable of fooling DNNs
  • Explore real-world scenarios and model the adversarial threat
  • Evaluate neural network robustness; learn methods to increase resilience of AI systems to adversarial data
  • Examine some ways in which AI might become better at mimicking human perception in years to come
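To give a flavor of the perturbation attacks the book surveys, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a classic white-box technique. The tiny logistic "model", its weights, and the input values are illustrative stand-ins chosen for this sketch, not material from the book.

```python
# Illustrative FGSM sketch: nudge an input in the direction that most
# increases the model's loss, bounded per-feature by epsilon.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, b, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, w, b, y, eps):
    # Gradient of the loss with respect to the *input* (not the weights);
    # for this logistic model it is (p - y) * w.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    # Step in the sign of the gradient, so each feature moves by exactly eps.
    return x + eps * np.sign(grad_x)

# Hypothetical weights and input, for illustration only.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.4, 0.3])  # an input the model classifies correctly
y = 1.0
x_adv = fgsm(x, w, b, y, eps=0.1)
```

For a real DNN the same idea applies, with the input gradient obtained by backpropagation; the perturbation stays small in max-norm while the loss, and hence the chance of misclassification, grows.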

Customers who bought "Strengthening Deep Neural Networks. Making AI Less Susceptible to Adversarial Trickery" also chose:

  • Windows Media Center. Domowe centrum rozrywki
  • Ruby on Rails. Ćwiczenia
  • DevOps w praktyce. Kurs video. Jenkins, Ansible, Terraform i Docker
  • Przywództwo w świecie VUCA. Jak być skutecznym liderem w niepewnym środowisku
  • Scrum. O zwinnym zarządzaniu projektami. Wydanie II rozszerzone


Table of Contents


  • Preface
    • Who Should Read This Book
    • How This Book Is Organized
    • Conventions Used in This Book
    • Using Code Examples
    • The Mathematics in This Book
    • O'Reilly Online Learning
    • How to Contact Us
    • Acknowledgments
  • I. An Introduction to Fooling AI
  • 1. Introduction
    • A Shallow Introduction to Deep Learning
    • A Very Brief History of Deep Learning
    • AI Optical Illusions: A Surprising Revelation
    • What Is Adversarial Input?
      • Adversarial Perturbation
      • Unnatural Adversarial Input
      • Adversarial Patches
      • Adversarial Examples in the Physical World
    • The Broader Field of Adversarial Machine Learning
    • Implications of Adversarial Input
  • 2. Attack Motivations
    • Circumventing Web Filters
    • Online Reputation and Brand Management
    • Camouflage from Surveillance
    • Personal Privacy Online
    • Autonomous Vehicle Confusion
    • Voice Controlled Devices
  • 3. Deep Neural Network (DNN) Fundamentals
    • Machine Learning
    • A Conceptual Introduction to Deep Learning
    • DNN Models as Mathematical Functions
      • DNN Inputs and Outputs
      • DNN Internals and Feed-Forward Processing
      • How a DNN Learns
    • Creating a Simple Image Classifier
  • 4. DNN Processing for Image, Audio, and Video
    • Image
      • Digital Representation of Images
      • DNNs for Image Processing
      • Introducing CNNs
    • Audio
      • Digital Representation of Audio
      • DNNs for Audio Processing
      • Introducing RNNs
      • Speech Processing
    • Video
      • Digital Representation of Video
      • DNNs for Video Processing
    • Adversarial Considerations
    • Image Classification Using ResNet50
  • II. Generating Adversarial Input
  • 5. The Principles of Adversarial Input
    • The Input Space
      • Generalizations from Training Data
      • Experimenting with Out-of-Distribution Data
    • What's the DNN Thinking?
    • Perturbation Attack: Minimum Change, Maximum Impact
    • Adversarial Patch: Maximum Distraction
    • Measuring Detectability
      • A Mathematical Approach to Measuring Perturbation
      • Considering Human Perception
    • Summary
  • 6. Methods for Generating Adversarial Perturbation
    • White Box Methods
      • Searching the Input Space
      • Exploiting Model Linearity
      • Adversarial Saliency
      • Increasing Adversarial Confidence
      • Variations on White Box Approaches
    • Limited Black Box Methods
    • Score-Based Black Box Methods
    • Summary
  • III. Understanding the Real-World Threat
  • 7. Attack Patterns for Real-World Systems
    • Attack Patterns
      • Direct Attack
      • Replica Attack
      • Transfer Attack
      • Universal Transfer Attack
    • Reusable Patches and Reusable Perturbation
    • Bringing It Together: Hybrid Approaches and Trade-offs
  • 8. Physical-World Attacks
    • Adversarial Objects
      • Object Fabrication and Camera Capabilities
      • Viewing Angles and Environment
    • Adversarial Sound
      • Audio Reproduction and Microphone Capabilities
      • Audio Positioning and Environment
    • The Feasibility of Physical-World Adversarial Examples
  • IV. Defense
  • 9. Evaluating Model Robustness to Adversarial Inputs
    • Adversarial Goals, Capabilities, Constraints, and Knowledge
      • Goals
      • Capabilities, Knowledge, and Access
    • Model Evaluation
      • Empirically Derived Robustness Metrics
      • Theoretically Derived Robustness Metrics
    • Summary
  • 10. Defending Against Adversarial Inputs
    • Improving the Model
      • Gradient Masking
      • Adversarial Training
      • Out-of-Distribution Confidence Training
      • Randomized Dropout Uncertainty Measurements
    • Data Preprocessing
      • Preprocessing in the Broader Processing Chain
      • Intelligently Removing Adversarial Content
    • Concealing the Target
    • Building Strong Defenses Against Adversarial Input
      • Open Projects
      • Taking a Holistic View
  • 11. Future Trends: Toward Robust AI
    • Increasing Robustness Through Outline Recognition
    • Multisensory Input
    • Object Composition and Hierarchy
    • Finally
  • A. Mathematics Terminology Reference
  • Index


Code, Publish & WebDesign by CATALIST.com.pl



(c) 2005-2024 CATALIST interactive agency; trademarks belong to the publisher Helion S.A.