

Effective Machine Learning Teams
ebook
Authors: David Tan, Ada Leung, David Colls
ISBN: 9781098144593
Pages: 402, Format: ebook
Publication date: 2024-02-29
Bookstore: Helion

Price: 218.26 zł (previously 276.28 zł)
You save: 21% (−58.02 zł)


Gain the valuable skills and techniques you need to accelerate the delivery of machine learning solutions. With this practical guide, data scientists, ML engineers, and their leaders will learn how to bridge the gap between data science and Lean product delivery in a practical and simple way. David Tan, Ada Leung, and David Colls show you how to apply time-tested software engineering skills and Lean product delivery practices to reduce toil and waste, shorten feedback loops, and improve your team's flow when building ML systems and products.

Based on the authors' experience across multiple real-world data and ML projects, the proven techniques in this book will help your team avoid common traps in the ML world, so you can iterate and scale more quickly and reliably. You'll learn how to overcome friction and experience flow when delivering ML solutions.

You'll also learn how to:

  • Write automated tests for ML systems, containerize development environments, and refactor problematic codebases
  • Apply MLOps and CI/CD practices to accelerate experimentation cycles and improve reliability of ML solutions
  • Apply Lean delivery and product practices to improve your odds of building the right product for your users
  • Identify suitable team structures and intra- and inter-team collaboration techniques to enable fast flow, reduce cognitive load, and scale ML within your organization
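As a small taste of the testing practices listed above, a minimal sketch of a "model metrics test" — the kind of automated check the book's model-testing chapters discuss — might look like the following. The dataset, model, and accuracy threshold here are illustrative assumptions, not examples from the book:

```python
# Hypothetical model metrics test: train a small classifier and assert
# that its accuracy on a held-out split clears a minimum threshold.
# Dataset (iris), model, and the 0.9 threshold are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def test_model_meets_accuracy_threshold():
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    # Fail loudly if the trained model underperforms the agreed baseline.
    assert accuracy >= 0.9, f"Accuracy {accuracy:.2f} below threshold 0.9"
```

Run under a test runner such as pytest, a check like this turns a model-quality expectation into a fast, repeatable gate in the delivery pipeline.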


Customers who bought "Effective Machine Learning Teams" also chose:

  • Windows Media Center. Domowe centrum rozrywki
  • Ruby on Rails. Ćwiczenia
  • Przywództwo w świecie VUCA. Jak być skutecznym liderem w niepewnym środowisku
  • Scrum. O zwinnym zarządzaniu projektami. Wydanie II rozszerzone
  • Od hierarchii do turkusu, czyli jak zarządzać w XXI wieku


Table of contents

Effective Machine Learning Teams eBook -- table of contents

  • Preface
    • Who This Book Is For
    • How This Book Is Organized
      • Part I: Product and Delivery
      • Part II: Engineering
      • Part III: Teams
    • Additional Thoughts
    • Conventions Used in This Book
    • Using Code Examples
    • O'Reilly Online Learning
    • How to Contact Us
    • Acknowledgments
      • From David Tan
      • From Ada Leung
      • From David Colls
  • 1. Challenges and Better Paths in Delivering ML Solutions
    • ML: Promises and Disappointments
      • Continued Optimism in ML
      • Why ML Projects Fail
        • High-level view: Barriers to success
        • Microlevel view: Everyday impediments to success
          • Lifecycle of a story in a low-effectiveness environment
          • Lifecycle of a story in a high-effectiveness environment
    • Is There a Better Way? How Systems Thinking and Lean Can Help
      • You Can't MLOps Your Problems Away
      • See the Whole: A Systems Thinking Lens for Effective ML Delivery
      • The Five Disciplines Required for Effective ML Delivery
        • What is Lean, and why should ML practitioners care?
        • The first discipline: Product
          • Discovery
          • Prototype testing
        • The second discipline: Delivery
          • Vertically sliced work
          • Vertically sliced teams, or cross-functional teams
          • Ways of Working
          • Measuring delivery metrics
        • The third discipline: Engineering
          • Automated testing
          • Refactoring
          • Code editor effectiveness
          • Continuous delivery for ML
        • The fourth discipline: ML
          • Framing ML problems
          • ML systems design
          • Responsible AI and ML governance
        • The fifth discipline: Data
          • Closing the data collection loop
          • Data security and privacy
    • Conclusion
  • I. Product and Delivery
  • 2. Product and Delivery Practices for ML Teams
    • ML Product Discovery
      • Discovering Product Opportunities
      • Canvases to Define Product Opportunities
        • Data Product Canvas
        • Hypothesis Canvas
      • Techniques for Rapidly Designing, Delivering, and Testing Solutions
        • Prototypes
          • A range of prototypes
          • Technical prototypes, or proofs of concept (PoCs)
        • Riskiest Assumption Test
    • Inception: Setting Teams Up for Success
      • Inception: What Is It and How Do We Do It?
      • How to Plan and Run an Inception
      • User Stories: Building Blocks of an MVP
        • User stories are vertically sliced
        • Slicing and dicing user stories
        • User story: The promise for a conversation
    • Product Delivery
      • Cadence of Delivery Activities
      • Measuring Product and Delivery
        • Delivery measures
        • Product measures
        • Model measures
        • Discovery measures
        • Commentary on measures
    • Conclusion
  • II. Engineering
  • 3. Effective Dependency Management: Principles and Tools
    • What If Our Code Worked Everywhere, Every Time?
      • A Better Way: Check Out and Go
      • Principles for Effective Dependency Management
        • Dependencies as code
        • Production-like development environments from day one
        • Application-level environment isolation
        • OS-level environment isolation
      • Tools for Dependency Management
        • Managing OS-level dependencies
          • Misconception 1: Docker is overcomplicated and unnecessary
          • Misconception 2: I don't need Docker because I already use X (e.g., conda)
          • Misconception 3: Docker will have a significant performance impact
          • Complicating the picture: Differing CPU chips and instruction sets
        • Managing application-level dependencies
    • A Crash Course on Docker and batect
      • What Are Containers?
      • Reduce the Number of Moving Parts in Docker with batect
        • Benefit 1: Simpler command-line interface
        • Benefit 2: Simple task composition
        • Benefit 3: Local-CI symmetry
        • Benefit 4: Faster builds with caches
        • How to use batect in your projects
    • Conclusion
  • 4. Effective Dependency Management in Practice
    • In Context: ML Development Workflow
      • Identifying What to Containerize
      • Hands-On Exercise: Reproducible Development Environments, Aided by Containers
        • 1. Check out and go: Install prerequisite dependencies
        • 2. Create our local development environment
        • 3. Start our local development environment
        • 4. Serve the ML model locally as a web API
        • 5. Configure our code editor
        • 6. Train model on the cloud
        • 7. Deploy model web API
    • Secure Dependency Management
      • Remove Unnecessary Dependencies
      • Automate Checks for Security Vulnerabilities
    • Conclusion
  • 5. Automated Testing: Move Fast Without Breaking Things
    • Automated Tests: The Foundation for Iterating Quickly and Reliably
      • Starting with Why: Benefits of Test Automation
      • If Automated Testing Is So Important, Why Aren't We Doing It?
        • Reason 1: We think writing automated tests slows us down
        • Reason 2: We have CI/CD
        • Reason 3: We just don't know how to test ML systems
    • Building Blocks for a Comprehensive Test Strategy for ML Systems
      • The What: Identifying Components For Testing
        • Software logic
        • ML models
        • Putting it together: The ML Systems Test Pyramid
      • Characteristics of a Good Test and Pitfalls to Avoid
        • Tests should be independent and idempotent
        • Tests should fail fast and fail loudly
        • Tests should check behavior, not implementation
        • Tests should be runnable in your development environment
        • Tests must be part of feature development
        • Tests let us catch bugs once
      • The How: Structure of a Test
    • Software Tests
      • Unit Tests
        • How to design unit-testable code
        • How do I write a unit test?
      • Training Smoke Tests
        • How do I write these tests?
      • API Tests
        • How do I write these tests?
        • Recommended practice: Assert on the whole elephant
      • Post-deployment Tests
        • How do I write these tests?
    • Conclusion
  • 6. Automated Testing: ML Model Tests
    • Model Tests
      • The Necessity of Model Tests
      • Challenges of Testing ML Models
      • Fitness Functions for ML Models
      • Model Metrics Tests (Global and Stratified)
        • How do I write these tests?
        • Advantages and limitations of metrics tests
      • Behavioral Tests
      • Testing Large Language Models: Why and How
        • Guidelines for designing an LLM test strategy
        • LLM testing techniques
          • Manual exploratory tests
          • Example-based tests
          • Benchmark tests
          • Property-based tests
          • LLM-based tests (aka auto-evaluator tests)
    • Essential Complementary Practices for Model Tests
      • Error Analysis and Visualization
      • Learn from Production by Closing the Data Collection Loop
      • Open-Closed Test Design
      • Exploratory Testing
      • Means to Improve the Model
      • Designing to Minimize the Cost of Failures
      • Monitoring in Production
      • Bringing It All Together
    • Next Steps: Applying What You've Learned
      • Make Incremental Improvements
      • Demonstrate Value
    • Conclusion
  • 7. Supercharging Your Code Editor with Simple Techniques
    • The Benefits (and Surprising Simplicity) of Knowing Our IDE
      • Why Should We Care About IDEs?
      • If IDEs Are So Important, Why Haven't I Learned About Them Yet?
    • The Plan: Getting Productive in Two Stages
      • Stage 1: Configuring Your IDE
        • Install IDE and basic navigation shortcuts
        • Clone code repository
        • Create a virtual environment
        • Configure virtual environment: PyCharm
        • Configure virtual environment: VS Code
        • Testing that we've configured everything correctly
      • Stage 2: The Star of the Show: Keyboard Shortcuts
        • Coding
          • Code completion suggestions
          • Inline documentation / parameter information
          • Auto-fix suggestions
          • Linting
          • Move/copy lines
        • Formatting
          • Reformat code
        • Refactoring
          • Rename variable
          • Extract variable/method/function
        • Navigating code without getting lost
          • Opening things (files, classes, methods, functions) by name
          • Navigating the flow of code
          • Screen real estate management
      • You Did It!
        • Guidelines for setting up a code repository for your team
        • Additional tools and techniques
    • Conclusion
  • 8. Refactoring and Technical Debt Management
    • Technical Debt: The Sand in Our Gears
      • Getting to a Healthy Level of Debt Through Tests, Design, and Refactoring
      • Refactoring 101
    • How to Refactor a Notebook (or a Problematic Codebase)
      • The Map: Planning Your Journey
      • The Journey: Hitting the Road
        • Step 1. Run the notebook or code and ensure it works as expected
        • Step 2. Remove print statements
        • Step 3. List code smells
        • Step 4. Convert the notebook to a Python file
        • Step 5. Adding characterization tests
        • Step 6. Refactor iteratively
          • The first refactoring: Remove dead code
          • The second refactoring: Abstract away implementation details
          • The third refactoring: Abstract away implementation details (again)
      • Looking Back at What We've Achieved
        • Design principles that helped guide us
          • Separation of concerns
          • Open-closed design
          • Prefer obvious over obscure code (or explicit over implicit)
          • Reduce tight coupling (or couple to interfaces, not to implementation)
          • Simple design
    • Technical Debt Management in the Real World
      • Technical Debt Management Techniques
        • Make debt visible
        • The 80/20 rule
        • Make it cheap and safe
        • Demonstrate value of paying off technical debt
      • A Positive Lens on Debt: Systems Health Ratings
    • Conclusion: Make Good Easy
  • 9. MLOps and Continuous Delivery for ML (CD4ML)
    • MLOps: Strengths and Missing Puzzle Pieces
      • MLOps 101
      • Smells: Hints That We Missed Something
        • MLOps smell 1: CI/CD pipelines with no tests
        • MLOps smell 2: Infrequent model deployments to production or preproduction environments
        • MLOps smell 3: Data in production goes to waste
        • MLOps smell 4: X is another team's responsibility
    • Continuous Delivery for Machine Learning
      • Benefits of CD4ML
      • A Crash Course on Continuous Delivery Principles
      • Building Blocks of CD4ML: Creating a Production-Ready ML System
        • Build quality into the product
          • Test automation
          • Shift left on security
        • Work in small batches
          • Practice pair programming
          • Use version control for all production artifacts
          • Implement continuous integration (CI)
          • Apply trunk-based development
        • Automation: Computers perform repetitive tasks, people solve problems
          • Automate development environment setup
          • Automate deployments (minimally to a preproduction environment)
          • Monitoring in production
        • Kaizen: Relentlessly pursue continuous improvement
        • Everyone is responsible: Rationalizing and cultivating ownership by adopting the appropriate team topologies
    • How CD4ML Supports ML Governance and Responsible AI
    • Conclusion
  • III. Teams
  • 10. Building Blocks of Effective ML Teams
    • Common Challenges Faced by ML Teams
    • Effective Team Internals
      • Trust as the Foundational Building Block
        • Daring Greatly
        • Tuckman's stages of group development
        • Belbin's Team Roles
      • Communication
        • Models of communication
        • Crucial Conversations framework
        • Candor in feedback
      • Diverse Membership
        • Primary and secondary dimensions: Sociodemographic diversity
        • Tertiary dimensions: Functional and role diversity
      • Purposeful, Shared Progress
      • Internal Tactics to Build Effective Teams
    • Improving Flow with Engineering Effectiveness
      • Feedback Loops
      • Cognitive Load
      • Flow State
    • Conclusion
  • 11. Effective ML Organizations
    • Common Challenges Faced by ML Organizations
    • Effective Organizations as Teams of Teams
      • The Role of Value-Driven Portfolio Management
      • Team Topologies Model
      • Team Topologies for ML Teams
        • Stream-aligned team: ML product team
        • Complicated subsystem team: ML domain team
        • Platform team: ML and data platform team
        • Enabling team: Specialists in some aspect of ML product development
        • Combining and evolving topologies
        • An example topology
        • Limitations of Team Topologies
      • Organizational Tactics to Build Effective Teams
    • Intentional Leadership
      • Create Structures and Systems for Effective Teams
      • Engage Stakeholders and Coordinate Organizational Resources
      • Cultivate Psychological Safety
      • Champion Continuous Improvement
      • Embrace Failure as a Learning Opportunity
      • Build the Culture We Wish We Had
      • Encourage Teams to Play at Work
    • Conclusion
      • Epilogue: Dana's Journey
  • Index





(c) 2005-2025 CATALIST interactive agency; trademarks belong to the publisher Helion S.A.