
Practicing Trustworthy Machine Learning
ebook
Authors: Yada Pruksachatkun, Matthew McAteer, Subhabrata Majumdar
ISBN: 9781098120238
Pages: 302, Format: ebook
Publication date: 2023-01-03
Bookstore: Helion

Book price: 271.15 zł (previously: 319.00 zł)
You save: 15% (-47.85 zł)

Add to cart: Practicing Trustworthy Machine Learning

With the increasing use of AI in high-stakes domains such as medicine, law, and defense, organizations spend a lot of time and money to make ML models trustworthy. Many books on the subject offer deep dives into theories and concepts. This guide provides a practical starting point to help development teams produce models that are secure, more robust, less biased, and more explainable.

Authors Yada Pruksachatkun, Matthew McAteer, and Subhabrata Majumdar translate best practices in the academic literature for curating datasets and building models into a blueprint for building industry-grade trusted ML systems. With this book, engineers and data scientists will gain a much-needed foundation for releasing trustworthy ML applications into a noisy, messy, and often hostile world.

You'll learn:

  • Methods to explain ML models and their outputs to stakeholders
  • How to recognize and fix fairness concerns and privacy leaks in an ML pipeline
  • How to develop ML systems that are robust and secure against malicious attacks
  • Important systemic considerations, like how to manage trust debt and which ML obstacles require human intervention

Customers who bought "Practicing Trustworthy Machine Learning" also chose:

  • Windows Media Center. Domowe centrum rozrywki
  • Ruby on Rails. Ćwiczenia
  • DevOps w praktyce. Kurs video. Jenkins, Ansible, Terraform i Docker
  • Przywództwo w świecie VUCA. Jak być skutecznym liderem w niepewnym środowisku
  • Scrum. O zwinnym zarządzaniu projektami. Wydanie II rozszerzone

Table of Contents

  • Preface
    • Implementing Machine Learning in Production
    • The Transformer Convergence
    • An Explosion of Large and Highly Capable ML Models
    • Why We Wrote This Book
    • Who This Book Is For
    • AI Safety and Alignment
    • Use of HuggingFace PyTorch for AI Models
    • Foundations
    • Conventions Used in This Book
    • Using Code Examples
    • O'Reilly Online Learning
    • How to Contact Us
    • Acknowledgments
  • 1. Privacy
    • Attack Vectors for Machine Learning Pipelines
    • Improperly Implemented Privacy Features in ML: Case Studies
      • Case 1: Apple's CSAM
      • Case 2: GitHub Copilot
      • Case 3: Model and Data Theft from No-Code ML Tools
    • Definitions
      • Definition of Privacy
      • Proxies and Metrics for Privacy
        • Adversarial success
        • Indistinguishability
        • Data similarity
        • Accuracy and precision
        • Uncertainty
        • Information gain/loss
        • Time spent
      • Legal Definitions of Privacy
      • k-Anonymity
    • Types of Privacy-Invading Attacks on ML Pipelines
      • Membership Attacks
      • Model Inversion
      • Model Extraction
    • Stealing a BERT-Based Language Model
      • Defenses Against Model Theft from Output Logits
      • Privacy-Testing Tools
    • Methods for Preserving Privacy
      • Differential Privacy
      • Stealing a Differentially Privately Trained Model
      • Further Differential Privacy Tooling
      • Homomorphic Encryption
      • Secure Multi-Party Computation
      • SMPC Example
      • Further SMPC Tooling
      • Federated Learning
    • Conclusion
  • 2. Fairness and Bias
    • Case 1: Social Media
    • Case 2: Triaging Patients in Healthcare Systems
    • Case 3: Legal Systems
    • Key Concepts in Fairness and Fairness-Related Harms
      • Individual Fairness
      • Parity Fairness
      • Calculating Parity Fairness
        • Step 1: Divide your test data into cohorts
        • Step 2: Get model performance results
        • Step 3: Evaluate for disparity
    • Scenario 1: Language Generation
    • Scenario 2: Image Captioning
    • Fairness Harm Mitigation
      • Mitigation Methods in the Pre-Processing Stage
      • Mitigation Methods in the In-Processing Stage
        • Adversarial bias mitigation
        • Regularization
      • Mitigation Methods in the Post-Processing Stage
    • Fairness Tool Kits
    • How Can You Prioritize Fairness in Your Organization?
    • Conclusion
    • Further Reading
  • 3. Model Explainability and Interpretability
    • Explainability Versus Interpretability
    • The Need for Interpretable and Explainable Models
    • A Possible Trade-off Between Explainability and Privacy
    • Evaluating the Usefulness of Interpretation or Explanation Methods
    • Definitions and Categories
      • Black Box
      • Global Versus Local Interpretability
      • Model-Agnostic Versus Model-Specific Methods
      • Interpreting GPT-2
    • Methods for Explaining Models and Interpreting Outputs
      • Inherently Explainable Models
        • Linear regression
        • Logistic regression
        • Generalized linear model
        • Generalized additive models
        • Generalized additive models plus interactions
        • Symbolic regression
        • Support vector machines
        • Decision tree
        • Decision rules
        • Beyond intrinsically interpretable models
      • Local Model-Agnostic Interpretability Methods
        • Local interpretable model-agnostic explanation
        • Deep dive example: LIME on Vision Transformer models
        • Shapley and SHAP
        • Deep dive example: SHAP on Vision Transformer models
      • Global Model-Agnostic Interpretability Methods
        • Permutation feature importance
        • Global surrogate models
        • Prototypes and criticisms
      • Explaining Neural Networks
      • Saliency Mapping
      • Deep Dive: Saliency Mapping with CLIP
      • Adversarial Counterfactual Examples
    • Overcome the Limitations of Interpretability with a Security Mindset
    • Limitations and Pitfalls of Explainable and Interpretable Methods
    • Risks of Deceptive Interpretability
    • Conclusion
  • 4. Robustness
    • Evaluating Robustness
    • Non-Adversarial Robustness
      • Step 1: Apply Perturbations
        • Computer vision
        • Language
      • Step 2: Define and Apply Constraints
        • Natural language processing
          • Fluency
          • Preserving semantic meaning
        • Computer vision
      • Deep Dive: Word Substitution with Cosine Similarity Constraints
    • Adversarial Robustness
      • Deep Dive: Adversarial Attacks in Computer Vision
        • The HopSkipJump attack on ImageNet
      • Creating Adversarial Examples
    • Improving Robustness
    • Conclusion
  • 5. Secure and Trustworthy Data Generation
    • Case 1: Unsecured AWS Buckets
    • Case 2: Clearview AI Scraping Photos from Social Media
    • Case 3: Improperly Stored Medical Data
    • Issues in Procuring Real-World Data
      • Using the Right Data for the Modeling Goal
      • Consent
      • PII, PHI, and Secrets
      • Proportionality and Sampling Techniques
      • Undescribed Variation
      • Unintended Proxies
      • Failures of External Validity
      • Data Integrity
      • Setting Reasonable Expectations
      • Tools for Addressing Data Collection Issues
        • Getting consent
        • Identifying PHI, PII, and other sensitive data
        • Proportionality and sampling techniques
        • Tracking unintended variation
        • Tracking unintended proxies
        • Data integrity
        • Improperly organized splits
        • Setting reasonable expectations
    • Synthetically Generated Data
      • DALL·E, GPT-3, and Synthetic Data
      • Improving Pattern Recognition with Synthetic Data
        • Process-driven synthetic data
        • Data-driven synthetic data
      • Deep Dive: Pre-Training a Model with a Process-Driven Synthetic Dataset
      • Facial Recognition, Pose Detection, and Human-Centric Tasks
      • Object Recognition and Related Tasks
      • Environment Navigation
      • Unity and Unreal Environments
      • Limitations of Synthetic Data in Healthcare
      • Limitations of Synthetic Data in NLP
      • Self-Supervised Learned Models Versus Giant Natural Datasets
      • Repurposing Quality Control Metrics for Security Purposes
    • Conclusion
  • 6. More State-of-the-Art Research Questions
    • Making Sense of Improperly Overhyped Research Claims
      • Shallow Human-AI Comparison Antipattern
      • Downplaying the Limitations of the Technique Antipattern
      • Uncritical PR Piece Antipattern
      • Hyperbolic or Just Plain Wrong Antipattern
      • Getting Past These Antipatterns
    • Quantized ML
      • Tooling for Quantized ML
      • Privacy, Bias, Interpretability, and Stability in Quantized ML
    • Diffusion-Based Energy Models
    • Homomorphic Encryption
    • Simulating Federated Learning
    • Quantum Machine Learning
      • Tooling and Resources for Quantum Machine Learning
      • Why QML Will Not Solve Your Regular ML Problems
    • Making the Leap from Theory to Practice
  • 7. From Theory to Practice
    • Part I: Additional Technical Factors
      • Causal Machine Learning
        • Steps to causal inference
        • Tools for causal inference
        • Causality and trust
      • Sparsity and Model Compression
        • Pruning
        • Sparse training
        • Trust elements in sparse models
      • Uncertainty Quantification
        • Aleatoric uncertainty
        • Epistemic uncertainty
        • Confidence intervals
        • Bootstrap resampling
        • Are you certain I can trust you?
    • Part II: Implementation Challenges
      • Motivating Stakeholders to Develop Trustworthy ML Systems
        • Debt management
        • Risk management
      • Trust Debts
        • Technical trust debt
        • Ethical debt
      • Important Aspects of Trust
      • Evaluation and Feedback
      • Trustworthiness and MLOps
        • Scaling challenges
        • Data drift
        • Model monitoring and observability
        • Techniques
          • Anomaly detection
          • Change point detection
          • Control charts
    • Conclusion
  • 8. An Ecosystem of Trust
    • Tooling
      • LiFT
      • Datasheets
      • Model Cards
      • DAG Cards
    • Human-in-the-Loop Steps
      • Oversight Guidelines
      • Stages of Assessment
        • Scoping
        • Data collection
        • Model training
        • Model validation
    • The Need for a Cross-Project Approach
      • MITRE ATLAS
      • Benchmarks
      • AI Incident Database
      • Bug Bounties
    • Deep Dive: Connecting the Dots
      • Data
      • Pre-Processing
      • Model Training
      • Model Inference
      • Trust Components
    • Conclusion
  • A. Synthetic Data Generation Tools
  • B. Other Interpretability and Explainability Tool Kits
    • Interpretable or Fair Modeling Packages
    • Other Python Packages for General Explainability
  • Index
