Designing Large Language Model Applications - Helion

Designing Large Language Model Applications
ebook
Author: Suhas Pai
ISBN: 9781098150464
Pages: 366, Format: ebook
Publication date: 2025-03-06
Bookstore: Helion

Price: 228,65 zł (previously: 278,84 zł)
You save: 18% (-50,19 zł)

Add to cart: Designing Large Language Model Applications

Large language models (LLMs) have proven themselves to be powerful tools for solving a wide range of tasks, and enterprises have taken note. But transitioning from demos and prototypes to full-fledged applications can be difficult. This book helps close that gap, providing the tools, techniques, and playbooks that practitioners need to build useful products that incorporate the power of language models.

Experienced ML researcher Suhas Pai offers practical advice on harnessing LLMs for your use cases and dealing with commonly observed failure modes. You’ll take a comprehensive deep dive into the ingredients that make up a language model, explore various techniques for customizing them such as fine-tuning, learn about application paradigms like RAG (retrieval-augmented generation) and agents, and more.

  • Understand how to prepare datasets for training and fine-tuning
  • Develop an intuition about the Transformer architecture and its variants
  • Adapt pretrained language models to your own domain and use cases
  • Learn effective techniques for fine-tuning, domain adaptation, and inference optimization
  • Interface language models with external tools and data and integrate them into an existing software ecosystem


Customers who bought "Designing Large Language Model Applications" also chose:

  • Biologika Sukcesji Pokoleniowej. Sezon 3. Konflikty na terytorium
  • Windows Media Center. Domowe centrum rozrywki
  • Podręcznik startupu. Budowa wielkiej firmy krok po kroku
  • Ruby on Rails. Ćwiczenia
  • Prawa ludzkiej natury

Table of Contents

  • Preface
    • Who This Book Is For
    • How This Book Is Structured
    • What This Book Is Not About
    • How to Read the Book
    • Conventions Used in This Book
    • Using Code Examples
    • O'Reilly Online Learning
    • How to Contact Us
    • Acknowledgments
  • I. LLM Ingredients
  • 1. Introduction
    • Defining LLMs
    • A Brief History of LLMs
      • Early Years
      • The Modern LLM Era
    • The Impact of LLMs
    • LLM Usage in the Enterprise
    • Prompting
      • Zero-Shot Prompting
      • Few-Shot Prompting
      • Chain-of-Thought Prompting
      • Prompt Chaining
      • Adversarial Prompting
    • Accessing LLMs Through an API
    • Strengths and Limitations of LLMs
    • Building Your First Chatbot Prototype
    • From Prototype to Production
    • Summary
  • 2. Pre-Training Data
    • Ingredients of an LLM
    • Pre-Training Data Requirements
    • Popular Pre-Training Datasets
    • Synthetic Pre-Training Data
    • Training Data Preprocessing
      • Data Filtering and Cleaning
      • Selecting Quality Documents
        • Token-distribution K-L divergence
        • Classifier-based approaches
        • Perplexity for quality selection
      • Deduplication
      • Removing Personally Identifiable Information
        • PII detection
        • PII remediation
      • Training Set Decontamination
      • Data Mixtures
    • Effect of Pre-Training Data on Downstream Tasks
    • Bias and Fairness Issues in Pre-Training Datasets
    • Summary
  • 3. Vocabulary and Tokenization
    • Vocabulary
    • Tokenizers
    • Tokenization Pipeline
      • Normalization
      • Pre-Tokenization
      • Tokenization
      • Byte Pair Encoding
        • Training stage
        • Inference stage
      • WordPiece
        • Postprocessing
      • Special Tokens
    • Summary
  • 4. Architectures and Learning Objectives
    • Preliminaries
    • Representing Meaning
    • The Transformer Architecture
      • Self-Attention
      • Positional Encoding
      • Feedforward Networks
      • Layer Normalization
    • Loss Functions
    • Intrinsic Model Evaluation
    • Transformer Backbones
      • Encoder-Only Architectures
      • Encoder-Decoder Architectures
      • Decoder-Only Architectures
      • Mixture of Experts
    • Learning Objectives
      • Full Language Modeling
      • Prefix Language Modeling
      • Masked Language Modeling
      • Which Learning Objectives Are Better?
    • Pre-Training Models
    • Summary
  • II. Utilizing LLMs
  • 5. Adapting LLMs to Your Use Case
    • Navigating the LLM Landscape
      • Who Are the LLM Providers?
      • Model Flavors
        • Instruct-models
        • Chat-models
        • Long-context models
        • Domain-adapted or task-adapted models
      • Open Source LLMs
    • How to Choose an LLM for Your Task
      • Open Source Versus Proprietary LLMs
      • LLM Evaluation
        • Eleuther AI LM Evaluation Harness
        • Hugging Face Open LLM Leaderboard
        • HELM
        • Elo Rating
        • Interpreting benchmark results
    • Loading LLMs
      • Hugging Face Accelerate
      • Ollama
      • LLM Inference APIs
    • Decoding Strategies
      • Greedy Decoding
      • Beam Search
      • Top-k Sampling
      • Top-p Sampling
    • Running Inference on LLMs
    • Structured Outputs
    • Model Debugging and Interpretability
    • Summary
  • 6. Fine-Tuning
    • The Need for Fine-Tuning
    • Fine-Tuning: A Full Example
      • Learning Algorithms Parameters
        • Optimizers
        • Learning rates
        • Learning schedules
      • Memory Optimization Parameters
        • Gradient checkpointing
        • Gradient accumulation
        • Quantization
      • Regularization Parameters
        • Label smoothing
        • Noise Embeddings
      • Batch Size
      • Parameter-Efficient Fine-Tuning
      • Working with Reduced Precision
      • Putting It All Together
    • Fine-Tuning Datasets
      • Utilizing Publicly Available Instruction-Tuning Datasets
      • LLM-Generated Instruction-Tuning Datasets
    • Summary
  • 7. Advanced Fine-Tuning Techniques
    • Continual Pre-Training
      • Replay (Memory)
      • Parameter Expansion
    • Parameter-Efficient Fine-Tuning
      • Adding New Parameters
        • Bottleneck adapters
        • Prefix-tuning
        • Prompt tuning
      • Subset Methods
    • Combining Multiple Models
      • Model Ensembling
        • PairRanker
        • GenFuser
      • Model Fusion
      • Adapter Merging
    • Summary
  • 8. Alignment Training and Reasoning
    • Defining Alignment Training
    • Reinforcement Learning
      • Types of Human Feedback
      • RLHF Example
    • Hallucinations
    • Mitigating Hallucinations
      • Self-Consistency
      • Chain-of-Actions
      • Recitation
      • Sampling Methods for Addressing Hallucination
      • Decoding by Contrasting Layers
    • In-Context Hallucinations
    • Hallucinations Due to Irrelevant Information
    • Reasoning
      • Deductive Reasoning
      • Inductive Reasoning
      • Abductive Reasoning
      • Common Sense Reasoning
    • Inducing Reasoning in LLMs
      • Verifiers for Improving Reasoning
        • Iterative backprompting
        • Top-k guessing
      • Inference-Time Computation
        • Repeated sampling
        • Search
      • Fine-Tuning for Reasoning
    • Summary
  • 9. Inference Optimization
    • LLM Inference Challenges
    • Inference Optimization Techniques
    • Techniques for Reducing Compute
      • K-V Caching
      • Early Exit
        • Sequence-level early exit
        • Token-level early exit
      • Knowledge Distillation
        • Distillation data preparation
        • Distillation
    • Techniques for Accelerating Decoding
      • Speculative Decoding
      • Parallel Decoding
    • Techniques for Reducing Storage Needs
      • Symmetric Quantization
      • Asymmetric Quantization
    • Summary
  • III. LLM Application Paradigms
  • 10. Interfacing LLMs with External Tools
    • LLM Interaction Paradigms
      • Passive Approach
      • The Explicit Approach
      • The Autonomous Approach
    • Defining Agents
    • Agentic Workflow
    • Components of an Agentic System
      • Models
      • Tools
        • Web search
        • API connectors
        • Code interpreter
        • Database connectors
      • Data Stores
        • Prompt repository
        • Session memory
        • Tools data
      • Agent Loop Prompt
        • ReAct
        • Reflection
      • Guardrails and Verifiers
        • Safety Guardrails
        • Verification modules
      • Agent Orchestration Software
    • Summary
  • 11. Representation Learning and Embeddings
    • Introduction to Embeddings
    • Semantic Search
    • Similarity Measures
    • Fine-Tuning Embedding Models
      • Base Models
      • Training Dataset
      • Loss Functions
    • Instruction Embeddings
    • Optimizing Embedding Size
      • Matryoshka Embeddings
      • Binary and Integer Embeddings
      • Product Quantization
    • Chunking
      • Sliding Window Chunking
      • Metadata-Aware Chunking
      • Layout-Aware Chunking
      • Semantic Chunking
      • Late Chunking
    • Vector Databases
    • Interpreting Embeddings
    • Summary
  • 12. Retrieval-Augmented Generation
    • The Need for RAG
    • Typical RAG Scenarios
    • Deciding When to Retrieve
    • The RAG Pipeline
      • Rewrite
      • Retrieve
        • Generative retrieval
        • Tightly-coupled retrievers
        • GraphRAG
      • Rerank
        • Query likelihood model (QLM)
        • LLM distillation for ranking
      • Refine
        • Summarization
        • Chain-of-note
      • Insert
      • Generate
    • RAG for Memory Management
    • RAG for Selecting In-Context Training Examples
    • RAG for Model Training
    • Limitations of RAG
    • RAG Versus Long Context
    • RAG Versus Fine-Tuning
    • Summary
  • 13. Design Patterns and System Architecture
    • Multi-LLM Architectures
      • LLM Cascades
      • Routers
      • Task-Specialized LLMs
    • Programming Paradigms
      • DSPy
        • Modules
        • Optimizers
      • LMQL
    • Summary
  • Index
