Prompt Engineering for LLMs - Helion
ISBN: 9781098156114
Pages: 282, Format: ebook
Publication date: 2024-11-04
Bookstore: Helion
Book price: 237.15 zł (previously: 285.72 zł)
You save: 17% (-48.57 zł)
Large language models (LLMs) are revolutionizing the world, promising to automate tasks and solve complex problems. A new generation of software applications is using these models as building blocks to unlock new potential in almost every domain, but reliably accessing these capabilities requires new skills. This book will teach you the art and science of prompt engineering: the key to unlocking the true potential of LLMs.
Industry experts John Berryman and Albert Ziegler share how to communicate effectively with AI, transforming your ideas into a language model-friendly format. By learning both the philosophical foundation and practical techniques, you'll be equipped with the knowledge and confidence to build the next generation of LLM-powered applications.
- Understand LLM architecture and learn how to best interact with it
- Design a complete prompt-crafting strategy for an application
- Gather, triage, and present context elements to make an efficient prompt
- Master specific prompt-crafting techniques like few-shot learning, chain-of-thought prompting, and RAG
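As a flavor of the techniques listed above, few-shot prompting amounts to assembling worked examples ahead of the new query so the model completes the established pattern. Below is a minimal, hypothetical sketch (not taken from the book) of how such a prompt might be assembled; the function name and prompt layout are illustrative assumptions.

```python
# Illustrative sketch of few-shot prompt assembly (hypothetical example,
# not the book's implementation).

def build_few_shot_prompt(instruction, examples, query):
    """Assemble a prompt: instruction, worked examples, then the new query."""
    parts = [instruction, ""]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}")
        parts.append(f"Output: {example_output}")
        parts.append("")  # blank line between examples
    parts.append(f"Input: {query}")
    parts.append("Output:")  # the model is expected to continue from here
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [
        ("Great book, very practical.", "positive"),
        ("Dry and repetitive.", "negative"),
    ],
    "Clear explanations and useful examples.",
)
print(prompt)
```

The trailing "Output:" line leaves the completion point open, so an autoregressive model naturally continues the pattern set by the examples.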
Table of Contents
- Preface
- Who Is This Book For?
- What You Will Learn
- Conventions Used in This Book
- O'Reilly Online Learning
- How to Contact Us
- Acknowledgments
- From John
- From Albert
- I. Foundations
- 1. Introduction to Prompt Engineering
- LLMs Are Magic
- Language Models: How Did We Get Here?
- Early Language Models
- GPT Enters the Scene
- Prompt Engineering
- Conclusion
- 2. Understanding LLMs
- What Are LLMs?
- Completing a Document
- Human Thought Versus LLM Processing
- Hallucinations
- How LLMs See the World
- Difference 1: LLMs Use Deterministic Tokenizers
- Difference 2: LLMs Can't Slow Down and Examine Letters
- Difference 3: LLMs See Text Differently
- Counting Tokens
- One Token at a Time
- Auto-Regressive Models
- Patterns and Repetitions
- Temperature and Probabilities
- The Transformer Architecture
- Conclusion
- 3. Moving to Chat
- Reinforcement Learning from Human Feedback
- The Process of Building an RLHF Model
- Supervised fine-tuning model
- Reward model
- RLHF model
- Keeping LLMs Honest
- Avoiding Idiosyncratic Behavior
- RLHF Packs a Lot of Bang for the Buck
- Beware of the Alignment Tax
- Moving from Instruct to Chat
- Instruct Models
- Chat Models
- The Changing API
- Chat Completion API
- Comparing Chat with Completion
- Moving Beyond Chat to Tools
- Prompt Engineering as Playwriting
- Conclusion
- 4. Designing LLM Applications
- The Anatomy of the Loop
- The User's Problem
- Converting the User's Problem to the Model Domain
- Example: Converting the user's problem into a homework problem
- Chat models versus completion models
- Using the LLM to Complete the Prompt
- Transforming Back to User Domain
- Zooming In to the Feedforward Pass
- Building the Basic Feedforward Pass
- Context retrieval
- Snippetizing context
- Scoring and prioritizing snippets
- Prompt assembly
- Exploring the Complexity of the Loop
- Persisting application state
- External context
- Increasing reasoning depth
- Tool usage
- Evaluating LLM Application Quality
- Offline Evaluation
- Online Evaluation
- Conclusion
- II. Core Techniques
- 5. Prompt Content
- Sources of Content
- Static Content
- Clarifying Your Question
- Few-Shot Prompting
- Drawback 1: Few-shotting scales poorly with context
- Drawback 2: Few-shotting biases the model toward the examples
- Drawback 3: Few-shotting can suggest spurious patterns
- Dynamic Content
- Finding Dynamic Context
- Retrieval-Augmented Generation
- Lexical retrieval
- Neural retrieval
- Snippetizing documents
- Embedding models
- Vector storage
- Building a simple RAG application
- Neural versus lexical retrieval
- Summarization
- Hierarchical summarization
- General and specific summaries
- Conclusion
- 6. Assembling the Prompt
- Anatomy of the Ideal Prompt
- What Kind of Document?
- The Advice Conversation
- The Analytic Report
- The Structured Document
- Formatting Snippets
- More on Inertness
- Formatting Few-Shot Examples
- Elastic Snippets
- Relationships Among Prompt Elements
- Position
- Importance
- Dependency
- Putting It All Together
- Conclusion
- 7. Taming the Model
- Anatomy of the Ideal Completion
- The Preamble
- Recognizable Start and End
- Postscript
- Beyond the Text: Logprobs
- How Good Is the Completion?
- LLMs for Classification
- Critical Points in the Prompt
- Choosing the Model
- Conclusion
- III. An Expert of the Craft
- 8. Conversational Agency
- Tool Usage
- LLMs Trained for Tool Usage
- Defining and using tools
- Take a look under the hood
- Guidelines for Tool Definitions
- Selecting the right tools
- Naming tools and arguments
- Defining tools
- Dealing with arguments
- Dealing with tool outputs
- Dealing with tool errors
- Executing dangerous tools
- Reasoning
- Chain of Thought
- ReAct: Iterative Reasoning and Action
- Beyond ReAct
- Context for Task-Based Interactions
- Sources for Context
- Selecting and Organizing Context
- Building a Conversational Agent
- Managing Conversations
- User Experience
- Conclusion
- 9. LLM Workflows
- Would a Conversational Agent Suffice?
- Basic LLM Workflows
- Tasks
- Implementing LLM-based tasks
- Templated prompt approach
- Tool-based approach
- Adding more sophistication to tasks
- Add variety to your task
- Evaluation starts at the task level
- Assembling the Workflow
- Example Workflow: Shopify Plug-in Marketing
- Tasks
- Advanced LLM Workflows
- Allowing an LLM Agent to Drive the Workflow
- Stateful Task Agents
- Roles and Delegation
- Conclusion
- 10. Evaluating LLM Applications
- What Are We Even Testing?
- Offline Evaluation
- Example Suites
- Finding Samples
- Evaluating Solutions
- Gold standard
- Functional testing
- LLM assessment
- SOMA Assessment
- Specific questions
- Ordinal scaled answers
- Multi-aspect coverage
- SOMA mastery
- Online Evaluation
- A/B Testing
- Metrics
- Conclusion
- 11. Looking Ahead
- Multimodality
- User Experience and User Interface
- Intelligence
- Conclusion
- Index