Developing Apps with GPT-4 and ChatGPT. 2nd Edition - Helion
ISBN: 9781098168063
Pages: 272, Format: ebook
Publication date: 2024-07-10
Bookstore: Helion
Book price: 144.18 zł (previously: 208.96 zł)
You save: 31% (-64.78 zł)
This book provides an ideal guide for Python developers who want to learn how to build applications with large language models. Authors Olivier Caelen and Marie-Alice Blete cover the main features and benefits of GPT-4 and GPT-3.5 models and explain how they work. You'll also get a step-by-step guide for developing applications using the OpenAI Python library, including text generation, Q&A, and smart assistants.
Written in clear and concise language, Developing Apps with GPT-4 and ChatGPT includes easy-to-follow examples to help you understand and apply the concepts to your projects. Python code examples are available in a GitHub repository, and the book includes a glossary of key terms. Ready to harness the power of large language models in your applications? This book is a must-read.
You'll learn:
- Fundamentals and benefits of GPT-4 and GPT-3.5 models, including the main features and how they work
- How to integrate these models into Python-based applications, leveraging natural language processing capabilities and overcoming specific LLM-related challenges
- Examples of applications demonstrating the OpenAI API in Python for tasks including text generation, question answering, content summarization, classification, and more
- Advanced LLM topics such as prompt engineering, fine-tuning models for specific tasks, RAG, plug-ins, LangChain, LlamaIndex, GPTs, and assistants
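As a taste of the "Hello World"-style material covered in Chapter 2, the Chat Completion endpoint consumes a list of role-tagged messages. A minimal sketch of such a request payload follows; the model name and parameter values here are illustrative assumptions, not taken from the book:

```python
# Sketch of a Chat Completion request payload (model name is an assumption;
# check the OpenAI API reference for currently available models).
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        # A system message sets the assistant's behavior...
        {"role": "system", "content": "You are a helpful assistant."},
        # ...and user messages carry the actual prompts.
        {"role": "user", "content": "Hello, world!"},
    ],
    "temperature": 0.7,  # optional: higher values give more varied output
}
```

In the book's examples this payload is passed to the OpenAI Python library, which returns the model's reply along with token-usage details.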
Olivier Caelen is a machine learning researcher at Worldline and teaches machine learning courses at the University of Brussels.
Marie-Alice Blete, a software architect and data engineer in Worldline's R&D department, is interested in performance and latency issues associated with AI solutions.
Table of Contents
- Preface
- Conventions Used in This Book
- Using Code Examples
- O'Reilly Online Learning
- How to Contact Us
- Acknowledgments
- 1. GPT-4 and ChatGPT Essentials
- Introducing Large Language Models
- Exploring the Foundations of Language Models and NLP
- Understanding the Transformer Architecture and Its Role in LLMs
- Demystifying the Tokenization and Prediction Steps in GPT Models
- Integrating Vision into an LLM
- A Brief History: From GPT-1 to GPT-4
- GPT-1
- GPT-2
- GPT-3
- From GPT-3 to InstructGPT
- GPT-3.5, ChatGPT, Codex
- GPT-4
- The Evolution of AI Toward Multimodality
- Image generation with DALL-E
- Voice recognition and synthesis
- Video generation with Sora
- LLM Use Cases and Example Products
- Be My Eyes
- Morgan Stanley
- Khan Academy
- Duolingo
- Yabble
- Waymark
- Inworld AI
- Beware of AI Hallucinations: Limitations and Considerations
- Unlocking GPT Potential with Advanced Features
- Summary
- 2. A Deep Dive into the OpenAI API
- Essential Concepts
- Models Available in the OpenAI API
- GPT Base
- InstructGPT (Legacy)
- GPT-3.5
- GPT-4
- Trying GPT Models with the OpenAI Playground
- Getting Started: The OpenAI Python Library
- OpenAI Access and API Key
- Hello World Example
- Using Chat Completion Models
- Input Options for the Chat Completion Endpoint
- Required input parameters
- Length of conversations and tokens
- Additional optional parameters
- Playing with temperature and top_p
- Output Result Format for the Chat Completion Endpoint
- Vision
- Requiring a JSON Output
- JSON output format
- Tools and functions
- Using Other Text Completion Models
- Input Options for the Text Completion Endpoint
- Main input parameters
- Length of prompts and tokens
- Additional optional parameters
- Output Result Format for the Text Completion Endpoint
- Considerations
- Pricing and Token Limitations
- Security and Privacy: Caution!
- Other OpenAI APIs and Functionalities
- Embeddings
- Moderation
- Text-to-Speech
- Speech-to-Text
- Images API
- Image generations
- Image edits
- Image variations
- Summary (and Cheat Sheet)
- 3. Navigating LLM-Powered Applications: Capabilities and Challenges
- App Development Overview
- API Key Management
- The user provides the API key
- You provide the API key
- Security and Data Privacy
- Software Architecture Design Principles
- Integrating LLM Capabilities into Your Projects
- Conversational Capabilities
- Language Processing Capabilities
- Human-Computer Interaction Capabilities
- Combining Capabilities
- Example Projects
- Project 1: Building a News Generator Solution (Language Processing)
- Project 2: Summarizing YouTube Videos (Language Processing)
- Project 3: Creating an Expert for Zelda BOTW (Language Processing and Conversations)
- Redis
- Information retrieval service
- Intent service
- Response service
- Putting it all together
- Project 4: Having a Personal Assistant (Human-Computer Interface)
- Speech-to-text with Whisper
- Assistant with GPT-3.5 Turbo
- UI with Gradio
- Demonstration
- Project 5: Organizing Documents (Language Processing)
- Project 6: Analyzing Sentiments (Language Processing)
- Evaluation of classification model
- Cost Management
- LLM-Powered App Vulnerabilities
- Analyzing Inputs and Outputs
- The Inevitability of Prompt Injection
- Working with an External API
- Handling Errors and Unexpected Latency Issues
- Rate Limits
- Improving Responsiveness and User Experience
- Streaming
- Asynchronous programming
- Other design strategies
- Summary
- 4. Advanced LLM Integration Strategies with OpenAI
- Prompt Engineering
- Designing Effective Prompts with Roles, Contexts, and Tasks
- The context
- The task
- The role
- Thinking Step by Step
- Implementing Few-Shot Learning
- Iterative Refinement with User Feedback
- Improving Prompt Effectiveness
- Instruct the model to ask more questions
- Format the output
- Repeat the instructions
- Use negative prompts
- Add length constraints
- Prompt chaining
- Shadow prompting
- Fine-Tuning
- Getting Started
- Adapting GPT models for domain-specific needs
- Fine-tuning versus few-shot learning
- Fine-Tuning with the OpenAI API
- Preparing your data
- Making your data available
- Creating a fine-tuned model
- Listing fine-tuning jobs
- Canceling a fine-tuning job
- Getting status updates for a fine-tuning job
- Getting info about a fine-tuning job
- Fine-Tuning with the Web Interface of OpenAI
- Fine-Tuning Applications
- Legal document analysis
- Automated code review
- Financial document summarization
- Technical document translation
- News article generation for niche topics
- Generating and Fine-Tuning Synthetic Data for an Email Marketing Campaign
- Creating a synthetic dataset
- Fine-tuning a model with the synthetic dataset
- Evaluating the fine-tuned model
- Using the fine-tuned model for text completion
- Cost of Fine-Tuning
- RAG
- Naive RAG
- Advanced RAG
- Preprocess the user's query
- Preprocess the knowledge base
- Improving search
- Postprocessing
- RAG Limitations
- Choosing Between Strategies
- Strategy Comparison
- Evaluations
- From a Standard Application to an LLM-Powered Solution
- Prompt Sensitivity
- Nondeterminism
- Hallucinations
- Summary
- 5. Advancing LLM Capabilities with Frameworks, Plug-Ins, and More
- The LangChain Framework
- LangChain Libraries
- Dynamic Prompts
- Agents and Tools
- Memory
- Embeddings
- The LlamaIndex Framework
- Demonstration: RAG in 10 Lines of Code
- LlamaIndex Principles
- Customization
- GPT-4 Plug-Ins
- Overview
- The API
- The Plug-In Manifest
- The OpenAPI Specification
- Descriptions
- GPTs
- The Assistant API
- Creating an Assistant API
- Managing a Conversation with Your Assistant API
- Function Calling
- The Assistants on the OpenAI Web Platform
- Testing an assistant on the OpenAI website
- Creating an assistant on the OpenAI website
- Summary
- 6. Putting It All Together
- Key Takeaways
- Putting It All Together: The Assistant Use Case
- Step 1: Ideation
- Step 2: Defining the Requirements
- Step 3: Building a Prototype
- Step 4: Improving, Iterating
- Step 5: Making the Solution Robust
- Lessons Learned
- Glossary of Key Terms
- A. Tools, Libraries, and Frameworks
- Index