

Learning LangChain
ebook
Authors: Mayo Oshin, Nuno Campos
ISBN: 9781098167240
Pages: 296, Format: ebook
Publication date: 2025-02-13
Bookstore: Helion

Price: 237.15 zł (previously: 285.72 zł)
You save: 17% (-48.57 zł)


If you're looking to build production-ready AI applications that can reason and retrieve external data for context-awareness, you'll need to master LangChain—a popular development framework and platform for building, running, and managing agentic applications. LangChain is used by several leading companies, including Zapier, Replit, Databricks, and many more. This guide is an indispensable resource for developers who understand Python or JavaScript but are beginners eager to harness the power of AI.

Authors Mayo Oshin and Nuno Campos demystify the use of LangChain through practical insights and in-depth tutorials. Starting with basic concepts, this book shows you step-by-step how to build a production-ready AI agent that uses your data.

  • Harness the power of retrieval-augmented generation (RAG) to enhance the accuracy of LLMs using external up-to-date data
  • Develop and deploy AI applications that interact intelligently and contextually with users
  • Make use of the powerful agent architecture with LangGraph
  • Integrate and manage third-party APIs and tools to extend the functionality of your AI applications
  • Monitor, test, and evaluate your AI applications to improve performance
  • Understand the foundations of LLM app development and how they can be used with LangChain (see the short sketch after this list)
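
To give a sense of the style of code the book works toward, here is a minimal, illustrative sketch of LangChain's declarative composition (the prompt | model pattern covered in Chapter 1). It assumes the langchain-core and langchain-openai packages are installed and an OPENAI_API_KEY environment variable is set; the model name and prompt text are placeholder examples, not taken from the book.

    # Minimal sketch: compose a reusable prompt template and a chat model into one chain.
    # Assumes langchain-core / langchain-openai are installed and OPENAI_API_KEY is set.
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    # Reusable prompt with two input variables.
    prompt = ChatPromptTemplate.from_template(
        "Answer the question using only this context:\n{context}\n\nQuestion: {question}"
    )
    model = ChatOpenAI(model="gpt-4o-mini")  # placeholder model name

    # Declarative composition: the | operator chains runnables together.
    chain = prompt | model

    response = chain.invoke({
        "context": "LangChain is a framework for building applications with LLMs.",
        "question": "What is LangChain?",
    })
    print(response.content)

The same chain can later be extended with retrievers and vector stores, which is the progression the RAG chapters (2 and 3) follow.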


 

Customers who bought "Learning LangChain" also chose:

  • Biologika Sukcesji Pokoleniowej. Sezon 3. Konflikty na terytorium
  • Windows Media Center. Domowe centrum rozrywki
  • Podręcznik startupu. Budowa wielkiej firmy krok po kroku
  • Ruby on Rails. Ćwiczenia
  • Prawa ludzkiej natury


Table of contents

Learning LangChain eBook -- table of contents

  • Preface
    • Brief Primer on LLMs
      • Instruction-Tuned LLMs
      • Dialogue-Tuned LLMs
      • Fine-Tuned LLMs
    • Brief Primer on Prompting
      • Zero-Shot Prompting
      • Chain-of-Thought
      • Retrieval-Augmented Generation
      • Tool Calling
      • Few-Shot Prompting
    • LangChain and Why It's Important
    • What to Expect from This Book
    • Conventions Used in This Book
    • Using Code Examples
    • O'Reilly Online Learning
    • How to Contact Us
    • Acknowledgments
  • 1. LLM Fundamentals with LangChain
    • Getting Set Up with LangChain
    • Using LLMs in LangChain
    • Making LLM Prompts Reusable
    • Getting Specific Formats out of LLMs
      • JSON Output
      • Other Machine-Readable Formats with Output Parsers
    • Assembling the Many Pieces of an LLM Application
      • Using the Runnable Interface
      • Imperative Composition
      • Declarative Composition
    • Summary
  • 2. RAG Part I: Indexing Your Data
    • The Goal: Picking Relevant Context for LLMs
    • Embeddings: Converting Text to Numbers
      • Embeddings Before LLMs
      • LLM-Based Embeddings
      • Semantic Embeddings Explained
    • Converting Your Documents into Text
    • Splitting Your Text into Chunks
    • Generating Text Embeddings
    • Storing Embeddings in a Vector Store
      • Getting Set Up with PGVector
      • Working with Vector Stores
    • Tracking Changes to Your Documents
    • Indexing Optimization
      • MultiVectorRetriever
      • RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
      • ColBERT: Optimizing Embeddings
    • Summary
  • 3. RAG Part II: Chatting with Your Data
    • Introducing Retrieval-Augmented Generation
      • Retrieving Relevant Documents
      • Generating LLM Predictions Using Relevant Documents
    • Query Transformation
      • Rewrite-Retrieve-Read
      • Multi-Query Retrieval
      • RAG-Fusion
      • Hypothetical Document Embeddings
    • Query Routing
      • Logical Routing
      • Semantic Routing
    • Query Construction
      • Text-to-Metadata Filter
      • Text-to-SQL
    • Summary
  • 4. Using LangGraph to Add Memory to Your Chatbot
    • Building a Chatbot Memory System
    • Introducing LangGraph
    • Creating a StateGraph
    • Adding Memory to StateGraph
    • Modifying Chat History
      • Trimming Messages
      • Filtering Messages
      • Merging Consecutive Messages
    • Summary
  • 5. Cognitive Architectures with LangGraph
    • Architecture #1: LLM Call
    • Architecture #2: Chain
    • Architecture #3: Router
    • Summary
  • 6. Agent Architecture
    • The Plan-Do Loop
    • Building a LangGraph Agent
    • Always Calling a Tool First
    • Dealing with Many Tools
    • Summary
  • 7. Agents II
    • Reflection
    • Subgraphs in LangGraph
      • Calling a Subgraph Directly
      • Calling a Subgraph with a Function
    • Multi-Agent Architectures
      • Supervisor Architecture
    • Summary
  • 8. Patterns to Make the Most of LLMs
    • Structured Output
      • Intermediate Output
      • Streaming LLM Output Token-by-Token
      • Human-in-the-Loop Modalities
        • Resume
        • Restart
        • Edit state
        • Fork
      • Multitasking LLMs
        • Refuse concurrent inputs
        • Handle independently
        • Queue concurrent inputs
        • Interrupt
        • Fork and merge
    • Summary
  • 9. Deployment: Launching Your AI Application into Production
    • Prerequisites
      • Install Dependencies
      • Large Language Model
      • Vector Store
      • Backend API
      • Create a LangSmith Account
    • Understanding the LangGraph Platform API
      • Data Models
        • Assistants
        • Threads
        • Runs
        • Cron jobs
      • Features
        • Streaming
        • Human-in-the-loop
        • Double texting
        • Stateless runs
        • Webhooks
    • Deploying Your AI Application on LangGraph Platform
      • Create a LangGraph API Config
      • Test Your LangGraph App Locally
      • Deploy from the LangSmith UI
        • Deployment details
        • Development type
        • Environment variables
      • Launch LangGraph Studio
    • Security
    • Summary
  • 10. Testing: Evaluation, Monitoring, and Continuous Improvement
    • Testing Techniques Across the LLM App Development Cycle
    • The Design Stage: Self-Corrective RAG
    • The Preproduction Stage
      • Creating Datasets
      • Defining Your Evaluation Criteria
        • Improving LLM-as-a-judge evaluators' performance
        • Pairwise evaluation
      • Regression Testing
      • Evaluating an Agent's End-to-End Performance
        • Testing an agent's final response
        • Testing a single step of an agent
        • Testing an agent's trajectory
    • Production
      • Tracing
      • Collect Feedback in Production
      • Classification and Tagging
      • Monitoring and Fixing Errors
    • Summary
  • 11. Building with LLMs
    • Interactive Chatbots
    • Collaborative Editing with LLMs
    • Ambient Computing
    • Summary
  • Index




