Machine Learning for High-Risk Applications
Authors: Patrick Hall, James Curtis, Parul Pandey
ISBN: 9781098102395
Pages: 470, Format: ebook
Publication date: 2023-04-17
Bookstore: Helion

Price: 245.65 zł (previously: 285.64 zł)
You save: 14% (-39.99 zł)

Add to cart: Machine Learning for High-Risk Applications

The past decade has witnessed the broad adoption of artificial intelligence and machine learning (AI/ML) technologies. However, a lack of oversight in their widespread implementation has resulted in some incidents and harmful outcomes that could have been avoided with proper risk management. Before we can realize AI/ML's true benefit, practitioners must understand how to mitigate its risks.

This book describes approaches to responsible AI—a holistic framework for improving AI/ML technology, business processes, and cultural competencies that builds on best practices in risk management, cybersecurity, data privacy, and applied social science. Authors Patrick Hall, James Curtis, and Parul Pandey created this guide for data scientists who want to improve real-world AI/ML system outcomes for organizations, consumers, and the public.

  • Learn technical approaches for responsible AI across explainability, model validation and debugging, bias management, data privacy, and ML security
  • Learn how to create a successful and impactful AI risk management practice
  • Get a basic guide to existing standards, laws, and assessments for adopting AI technologies, including the new NIST AI Risk Management Framework
  • Engage with interactive resources on GitHub and Colab


Customers who bought "Machine Learning for High-Risk Applications" also chose:

  • Windows Media Center. Domowe centrum rozrywki
  • Ruby on Rails. Ćwiczenia
  • DevOps w praktyce. Kurs video. Jenkins, Ansible, Terraform i Docker
  • Przywództwo w świecie VUCA. Jak być skutecznym liderem w niepewnym środowisku
  • Scrum. O zwinnym zarządzaniu projektami. Wydanie II rozszerzone


Table of Contents

  • Foreword
  • Preface
    • Who Should Read This Book
    • What Readers Will Learn
    • Alignment with the NIST AI Risk Management Framework
    • Book Outline
      • Part I
      • Part II
      • Part III
    • Example Datasets
      • Taiwan Credit Data
      • Kaggle Chest X-Ray Data
    • Conventions Used in This Book
    • Online Figures
    • Using Code Examples
    • O'Reilly Online Learning
    • How to Contact Us
    • Acknowledgments
      • Patrick Hall
      • James Curtis
      • Parul Pandey
  • I. Theories and Practical Applications of AI Risk Management
  • 1. Contemporary Machine Learning Risk Management
    • A Snapshot of the Legal and Regulatory Landscape
      • The Proposed EU AI Act
      • US Federal Laws and Regulations
      • State and Municipal Laws
      • Basic Product Liability
      • Federal Trade Commission Enforcement
    • Authoritative Best Practices
    • AI Incidents
    • Cultural Competencies for Machine Learning Risk Management
      • Organizational Accountability
      • Culture of Effective Challenge
      • Diverse and Experienced Teams
      • Drinking Our Own Champagne
      • Moving Fast and Breaking Things
    • Organizational Processes for Machine Learning Risk Management
      • Forecasting Failure Modes
        • Known past failures
        • Failures of imagination
      • Model Risk Management Processes
        • Risk tiering
        • Model documentation
        • Model monitoring
        • Model inventories
        • System validation and process auditing
        • Change management
      • Beyond Model Risk Management
        • Model audits and assessments
        • Impact assessments
        • Appeal, override, and opt out
        • Pair and double programming
        • Security permissions for model deployment
        • Bug bounties
        • AI incident response
    • Case Study: The Rise and Fall of Zillow's iBuying
      • Fallout
      • Lessons Learned
    • Resources
  • 2. Interpretable and Explainable Machine Learning
    • Important Ideas for Interpretability and Explainability
    • Explainable Models
      • Additive Models
        • Penalized regression
        • Generalized additive models
        • GA2M and explainable boosting machines
      • Decision Trees
        • Single decision trees
        • Constrained XGBoost models
      • An Ecosystem of Explainable Machine Learning Models
    • Post Hoc Explanation
      • Feature Attribution and Importance
        • Local explanations and feature attribution
          • Shapley values
          • Critical applications of local explanations and feature importance
        • Global feature importance
      • Surrogate Models
        • Decision tree surrogates
        • Linear models and local interpretable model-agnostic explanations
        • Anchors and rules
      • Plots of Model Performance
        • Partial dependence and individual conditional expectation
        • Accumulated local effect
      • Cluster Profiling
    • Stubborn Difficulties of Post Hoc Explanation in Practice
    • Pairing Explainable Models and Post Hoc Explanation
    • Case Study: Graded by Algorithm
    • Resources
  • 3. Debugging Machine Learning Systems for Safety and Performance
    • Training
      • Reproducibility
      • Data Quality
      • Model Specification for Real-World Outcomes
        • Benchmarks and alternatives
        • Calibration
        • Construct validity
        • Assumptions and limitations
        • Default loss functions
        • Multiple comparisons
        • The future of safe and robust machine learning
    • Model Debugging
      • Software Testing
      • Traditional Model Assessment
      • Common Machine Learning Bugs
        • Distribution shifts
        • Epistemic uncertainty and data sparsity
        • Instability
        • Leakage
        • Looped inputs
        • Overfitting
        • Shortcut learning
        • Underfitting
        • Underspecification
      • Residual Analysis
        • Analysis and visualizations of residuals
        • Modeling residuals
        • Local contribution to residuals
      • Sensitivity Analysis
      • Benchmark Models
      • Remediation: Fixing Bugs
    • Deployment
      • Domain Safety
      • Model Monitoring
        • Model decay and concept drift
        • Detecting and addressing drift
        • Monitoring multiple key performance indicators
        • Out-of-range values
        • Anomaly detection and benchmark models
        • Kill switches
    • Case Study: Death by Autonomous Vehicle
      • Fallout
      • An Unprepared Legal System
      • Lessons Learned
    • Resources
  • 4. Managing Bias in Machine Learning
    • ISO and NIST Definitions for Bias
      • Systemic Bias
      • Statistical Bias
      • Human Biases and Data Science Culture
    • Legal Notions of ML Bias in the United States
    • Who Tends to Experience Bias from ML Systems
    • Harms That People Experience
    • Testing for Bias
      • Testing Data
      • Traditional Approaches: Testing for Equivalent Outcomes
        • Statistical significance testing
        • Practical significance testing
      • A New Mindset: Testing for Equivalent Performance Quality
      • On the Horizon: Tests for the Broader ML Ecosystem
      • Summary Test Plan
    • Mitigating Bias
      • Technical Factors in Mitigating Bias
      • The Scientific Method and Experimental Design
      • Bias Mitigation Approaches
      • Human Factors in Mitigating Bias
    • Case Study: The Bias Bug Bounty
    • Resources
  • 5. Security for Machine Learning
    • Security Basics
      • The Adversarial Mindset
      • CIA Triad
      • Best Practices for Data Scientists
    • Machine Learning Attacks
      • Integrity Attacks: Manipulated Machine Learning Outputs
        • Adversarial example attacks
        • Backdoor attacks
        • Data poisoning attacks
        • Impersonation and evasion attacks
        • Attacks on machine learning explanations
      • Confidentiality Attacks: Extracted Information
        • Model extraction and inversion attacks
        • Membership inference attacks
    • General ML Security Concerns
    • Countermeasures
      • Model Debugging for Security
        • Adversarial example searches and sensitivity analysis
        • Auditing for insider data poisoning
        • Bias testing
        • Ethical hacking: model extraction attacks
      • Model Monitoring for Security
      • Privacy-Enhancing Technologies
        • Federated learning
        • Differential privacy
      • Robust Machine Learning
      • General Countermeasures
    • Case Study: Real-World Evasion Attacks
      • Evasion Attacks
      • Lessons Learned
    • Resources
  • II. Putting AI Risk Management into Action
  • 6. Explainable Boosting Machines and Explaining XGBoost
    • Concept Refresher: Machine Learning Transparency
      • Additivity Versus Interactions
      • Steps Toward Causality with Constraints
      • Partial Dependence and Individual Conditional Expectation
      • Shapley Values
      • Model Documentation
    • The GAM Family of Explainable Models
      • Elastic Net-Penalized GLM with Alpha and Lambda Search
      • Generalized Additive Models
      • GA2M and Explainable Boosting Machines
    • XGBoost with Constraints and Post Hoc Explanation
      • Constrained and Unconstrained XGBoost
      • Explaining Model Behavior with Partial Dependence and ICE
      • Decision Tree Surrogate Models as an Explanation Technique
      • Shapley Value Explanations
      • Problems with Shapley values
      • Better-Informed Model Selection
    • Resources
  • 7. Explaining a PyTorch Image Classifier
    • Explaining Chest X-Ray Classification
    • Concept Refresher: Explainable Models and Post Hoc Explanation Techniques
      • Explainable Models Overview
      • Occlusion Methods
      • Gradient-Based Methods
      • Explainable AI for Model Debugging
    • Explainable Models
      • ProtoPNet and Variants
      • Other Explainable Deep Learning Models
    • Training and Explaining a PyTorch Image Classifier
      • Training Data
      • Addressing the Dataset Imbalance Problem
      • Data Augmentation and Image Cropping
      • Model Training
      • Evaluation and Metrics
      • Generating Post Hoc Explanations Using Captum
        • Occlusion
        • Input * gradient
        • Integrated gradients
        • Layer-wise Relevance Propagation
      • Evaluating Model Explanations
      • The Robustness of Post Hoc Explanations
    • Conclusion
    • Resources
  • 8. Selecting and Debugging XGBoost Models
    • Concept Refresher: Debugging ML
      • Model Selection
      • Sensitivity Analysis
      • Residual Analysis
      • Remediation
    • Selecting a Better XGBoost Model
    • Sensitivity Analysis for XGBoost
      • Stress Testing XGBoost
      • Stress Testing Methodology
      • Altering Data to Simulate Recession Conditions
      • Adversarial Example Search
    • Residual Analysis for XGBoost
      • Analysis and Visualizations of Residuals
      • Segmented Error Analysis
      • Modeling Residuals
    • Remediating the Selected Model
      • Overemphasis of PAY_0
      • Miscellaneous Bugs
    • Conclusion
    • Resources
  • 9. Debugging a PyTorch Image Classifier
    • Concept Refresher: Debugging Deep Learning
    • Debugging a PyTorch Image Classifier
      • Data Quality and Leaks
      • Software Testing for Deep Learning
      • Sensitivity Analysis for Deep Learning
        • Domain and subpopulation shift testing
        • Adversarial example attacks
        • Perturbing computational hyperparameters
      • Remediation
        • Data fixes
        • Software fixes
      • Sensitivity Fixes
        • Noise injection
        • Additional stability fixes
    • Conclusion
    • Resources
  • 10. Testing and Remediating Bias with XGBoost
    • Concept Refresher: Managing ML Bias
    • Model Training
    • Evaluating Models for Bias
      • Testing Approaches for Groups
        • Testing performance
        • Traditional testing of outcome rates
      • Individual Fairness
      • Proxy Bias
    • Remediating Bias
      • Preprocessing
      • In-processing
      • Postprocessing
      • Model Selection
    • Conclusion
    • Resources
  • 11. Red-Teaming XGBoost
    • Concept Refresher
      • CIA Triad
      • Attacks
      • Countermeasures
    • Model Training
    • Attacks for Red-Teaming
      • Model Extraction Attacks
      • Adversarial Example Attacks
      • Membership Attacks
      • Data Poisoning
      • Backdoors
    • Conclusion
    • Resources
  • III. Conclusion
  • 12. How to Succeed in High-Risk Machine Learning
    • Who Is in the Room?
    • Science Versus Engineering
      • The Data-Scientific Method
      • The Scientific Method
    • Evaluation of Published Results and Claims
    • Apply External Standards
    • Commonsense Risk Mitigation
    • Conclusion
    • Resources
  • Index

