The Developer's Playbook for Large Language Model Security
ISBN: 9781098162160
Pages: 200, Format: ebook
Publication date: 2024-09-03
Bookstore: Helion
Price: 254,15 zł (previously: 299,00 zł)
You save: 15% (-44,85 zł)
Large language models (LLMs) are not just shaping the trajectory of AI; they're also unveiling a new era of security challenges. This practical book takes you straight to the heart of these threats. Author Steve Wilson, chief product officer at Exabeam, focuses exclusively on LLMs, eschewing generalized AI security to delve into the unique characteristics and vulnerabilities inherent in these models.
Complete with collective wisdom gained from the creation of the OWASP Top 10 for LLMs list—a feat accomplished by more than 400 industry experts—this guide delivers real-world guidance and practical strategies to help developers and security teams grapple with the realities of LLM applications. Whether you're architecting a new application or adding AI features to an existing one, this book is your go-to resource for mastering the security landscape of the next frontier in AI.
You'll learn:
- Why LLMs present unique security challenges
- How to navigate the many risk conditions associated with using LLM technology
- The threat landscape pertaining to LLMs and the critical trust boundaries that must be maintained
- How to identify the top risks and vulnerabilities associated with LLMs
- Methods for deploying defenses to protect against attacks on top vulnerabilities
- Ways to actively manage critical trust boundaries on your systems to ensure secure execution and risk minimization
Table of Contents
- Preface
- Who Should Read This Book
- Why I Wrote This Book
- Navigating This Book
- Section 1: Laying the Foundation (Chapters 1-3)
- Section 2: Risks, Vulnerabilities, and Remediations (Chapters 4-9)
- Section 3: Building a Security Process and Preparing for the Future (Chapters 10-12)
- Conventions Used in This Book
- O'Reilly Online Learning
- How to Contact Us
- Acknowledgments
- 1. Chatbots Breaking Bad
- Let's Talk About Tay
- Tay's Rapid Decline
- Why Did Tay Break Bad?
- It's a Hard Problem
- 2. The OWASP Top 10 for LLM Applications
- About OWASP
- The Top 10 for LLM Applications Project
- Project Execution
- Reception
- Keys to Success
- This Book and the Top 10 List
- 3. Architectures and Trust Boundaries
- AI, Neural Networks, and Large Language Models: What's the Difference?
- The Transformer Revolution: Origins, Impact, and the LLM Connection
- Origins of the Transformer
- Transformer Architecture's Impact on AI
- Types of LLM-Based Applications
- LLM Application Architecture
- Trust Boundaries
- The Model
- Public APIs: The convenience and the risks
- Privately hosted models: More control, different risks
- Risk considerations
- User Interaction
- Training Data
- Access to Live External Data Sources
- Access to Internal Services
- Conclusion
- 4. Prompt Injection
- Examples of Prompt Injection Attacks
- Forceful Suggestion
- Reverse Psychology
- Misdirection
- Universal and Automated Adversarial Prompting
- The Impacts of Prompt Injection
- Direct Versus Indirect Prompt Injection
- Direct Prompt Injection
- Indirect Prompt Injection
- Key Differences
- Mitigating Prompt Injection
- Rate Limiting
- Rule-Based Input Filtering
- Filtering with a Special-Purpose LLM
- Adding Prompt Structure
- Adversarial Training
- Pessimistic Trust Boundary Definition
- Conclusion
- 5. Can Your LLM Know Too Much?
- Real-World Examples
- Lee Luda
- GitHub Copilot and OpenAI's Codex
- Knowledge Acquisition Methods
- Model Training
- Foundation Model Training
- Security Considerations for Foundation Models
- Model Fine-Tuning
- Training Risks
- Retrieval-Augmented Generation
- Direct Web Access
- Scraping a specific URL
- Using a search engine followed by content scraping
- Example risks
- Accessing a Database
- Relational databases
- Vector databases
- Reducing database risk
- Direct Web Access
- Learning from User Interaction
- Conclusion
- 6. Do Language Models Dream of Electric Sheep?
- Why Do LLMs Hallucinate?
- Types of Hallucinations
- Examples
- Imaginary Legal Precedents
- Airline Chatbot Lawsuit
- Unintentional Character Assassination
- Open Source Package Hallucinations
- Who's Responsible?
- Mitigation Best Practices
- Expanded Domain-Specific Knowledge
- Model fine-tuning for specialization
- RAG for enhanced domain expertise
- Chain of Thought Prompting for Increased Accuracy
- Feedback Loops: The Power of User Input in Mitigating Risks
- Clear Communication of Intended Use and Limitations
- User Education: Empowering Users Through Knowledge
- Conclusion
- 7. Trust No One
- Zero Trust Decoded
- Why Be So Paranoid?
- Implementing a Zero Trust Architecture for Your LLM
- Watch for Excessive Agency
- Excessive permissions
- Excessive autonomy
- Excessive functionality
- Securing Your Output Handling
- Common risks
- Handling toxicity
- Screening for PII
- Preventing unforeseen execution
- Building Your Output Filter
- Looking for PII with Regex
- Evaluating for Toxicity
- Linking Your Filters to Your LLM
- Sanitize for Safety
- Conclusion
- 8. Don't Lose Your Wallet
- DoS Attacks
- Volume-Based Attacks
- Protocol Attacks
- Application Layer Attacks
- An Epic DoS Attack: Dyn
- Model DoS Attacks Targeting LLMs
- Scarce Resource Attacks
- Context Window Exhaustion
- Unpredictable User Input
- DoW Attacks
- Model Cloning
- Mitigation Strategies
- Domain-Specific Guardrails
- Input Validation and Sanitization
- Robust Rate Limiting
- Resource Use Capping
- Monitoring and Alerts
- Financial Thresholds and Alerts
- Conclusion
- 9. Find the Weakest Link
- Supply Chain Basics
- Software Supply Chain Security
- The Equifax Breach
- Impact
- Lessons learned
- The SolarWinds Hack
- Impact
- Lessons learned
- The Log4Shell Vulnerability
- Impact
- Lessons learned
- Understanding the LLM Supply Chain
- Open Source Model Risk
- Training Data Poisoning
- Accidentally Unsafe Training Data
- Unsafe Plug-ins
- Creating Artifacts to Track Your Supply Chain
- Importance of SBOMs
- Model Cards
- Model Cards Versus SBOMs
- Purpose and focus
- Content
- Use in security and compliance
- Industry application
- CycloneDX: The SBOM Standard
- The Rise of the ML-BOM
- Building a Sample ML-BOM
- The Future of LLM Supply Chain Security
- Digital Signing and Watermarking
- Vulnerability Classifications and Databases
- MITRE CVE
- MITRE ATLAS
- Conclusion
- 10. Learning from Future History
- Reviewing the OWASP Top 10 for LLM Apps
- Case Studies
- Independence Day: A Celebrated Security Disaster
- Behind the scenes
- Chain of events
- Vulnerability disclosure
- 2001: A Space Odyssey of Security Flaws
- Behind the scenes
- Chain of events
- Vulnerability disclosure
- Conclusion
- 11. Trust the Process
- The Evolution of DevSecOps
- MLOps
- LLMOps
- Building Security into LLMOps
- Security in the LLM Development Process
- Securing Your CI/CD
- Implementing robust security practices
- Fostering a culture of security awareness
- LLM-Specific Security Testing Tools
- TextAttack
- Garak
- Responsible AI Toolbox
- Giskard LLM Scan
- Integrating security tools into DevOps
- Managing Your Supply Chain
- Protect Your App with Guardrails
- The Role of Guardrails in an LLM Security Strategy
- Input validation
- Output validation
- Open Source Versus Commercial Guardrail Solutions
- Mixing Custom and Packaged Guardrails
- Monitoring Your App
- Logging Every Prompt and Response
- Centralized Log and Event Management
- User and Entity Behavior Analytics
- Build Your AI Red Team
- Advantages of AI Red Teaming
- Red Teams Versus Pen Tests
- Tools and Approaches
- Red team automation tooling
- Red team as a service
- Continuous Improvement
- Establishing and Tuning Guardrails
- Managing Data Access and Quality
- Leveraging RLHF for Alignment and Security
- Conclusion
- 12. A Practical Framework for Responsible AI Security
- Power
- GPUs
- Cloud
- Open Source
- Multimodal
- Autonomous Agents
- Responsibility
- The RAISE Framework
- Limit your domain
- Balance your knowledge base
- Implement zero trust
- Manage your supply chain
- Build an AI red team
- Monitor continuously
- The RAISE Checklist
- Conclusion
- Index