

Applied AI for Enterprise Java Development. Leveraging Generative AI, LLMs, and Machine Learning in the Java Enterprise
ebook
Authors: Alex Soto Bueno, Markus Eisele, Natale Vinto
ISBN: 9781098174460
Pages: 430, Format: ebook
Publication date: 2025-11-07
Bookstore: Helion

Price: 186,15 zł (previously: 216,45 zł)
You save: 14% (-30,30 zł)

As a Java enterprise developer or architect, you know that embracing AI isn't just optional—it's critical to keeping your competitive edge. The question is, how can you skillfully incorporate these groundbreaking AI technologies into your applications without getting mired in complexity?

Enter this clear-cut, no-nonsense guide to integrating generative AI into your Java enterprise ecosystem. With insights from authors Alex Soto Bueno, Markus Eisele, and Natale Vinto, you'll learn to marry the robustness of Java's enterprise world with the dynamism of AI. It's more than just a how-to—it's a way to elevate enterprise software with savvy AI integrations, ensuring your skills and your applications remain on the cutting edge. Inside, you'll unlock the power to:

  • Demystify GenAI's role and impact on contemporary software development
  • Craft actionable, AI-driven applications using Java's rich ecosystem of open source frameworks
  • Implement field-tested AI patterns tailored for prod-ready, enterprise-strength applications
  • Access and integrate top-tier open source AI models with Java's Inference APIs
  • Navigate the Java framework landscape with AI-centric agility and confidence
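As a flavor of the kind of material the table of contents below covers, here is a minimal, self-contained sketch of cosine similarity between embedding vectors — the comparison measure discussed in Chapter 5 ("Measuring Similarity: Cosine Similarity and Distance"). This is our own illustration, not code from the book:

```java
// Hypothetical illustration (not from the book): cosine similarity,
// the standard measure for comparing embedding vectors.
public class CosineDemo {

    // cosine(a, b) = (a . b) / (|a| * |b|); 1.0 means same direction,
    // 0.0 means orthogonal (semantically unrelated embeddings).
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double[] v1 = {1.0, 2.0, 3.0};
        double[] v2 = {2.0, 4.0, 6.0};  // same direction as v1
        System.out.println(cosine(v1, v2));  // prints 1.0
    }
}
```

Real applications compute such similarities over model-generated embeddings stored in a vector database, which the book covers in Chapters 5 and 9.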

Customers who bought "Applied AI for Enterprise Java Development. Leveraging Generative AI, LLMs, and Machine Learning in the Java Enterprise" also chose:

  • Jak zhakowa
  • Biologika Sukcesji Pokoleniowej. Sezon 3. Konflikty na terytorium
  • Windows Media Center. Domowe centrum rozrywki
  • Podręcznik startupu. Budowa wielkiej firmy krok po kroku
  • Ruby on Rails. Ćwiczenia

Table of contents

Applied AI for Enterprise Java Development. Leveraging Generative AI, LLMs, and Machine Learning in the Java Enterprise eBook -- table of contents

  • Preface
    • Beyond Prototypes: Building Resilient AI-Infused Applications with Java
    • Who Should Read This Book
    • How the Book Is Organized
    • Prerequisites and Software
    • Conventions Used in This Book
    • Using Code Examples
    • O'Reilly Online Learning
    • How to Contact Us
    • Acknowledgments
      • Alex
      • Markus
      • Natale
  • 1. The Enterprise AI Conundrum
    • The AI Landscape: A Technical Perspective All the Way to GenAI
      • Machine Learning: The Foundation of Today's AI
      • Deep Learning: A Powerful Tool in the AI Arsenal
      • Generative AI: The Future of Content Generation
    • Open Source Models and Training Data
      • Why Open Source Is an Important Driver for GenAI
      • The Hidden Cost of Bad Data: Understanding Model Behavior Through Training Inputs
      • Adding Company-Specific Data to LLMs
      • Explainable and Transparent AI Decisions
    • Ethical and Sustainability Considerations
    • The Lifecycle of LLMs and Ways to Influence Their Behavior
    • MLOps Versus DevOps (and the Rise of AIOps and GenAIOps)
    • Conclusion
  • 2. The New Types of Applications
    • Understanding Large Language Models
      • Key Elements of a Large Language Model
        • How LLMs generate responses
        • Model architectures
          • Encoder-only models
          • Decoder-only models
          • Encoder-decoder models
        • Size and complexity
          • Wait, what does 7 billion parameters even mean here?
          • Optimizing model size with quantization and compression
      • Deployment of Models
        • Popular inference engines
        • Key hyperparameters for model inference
        • Model tuning: Beyond tweaking the output
          • Prompt tuning
          • Prompt learning
          • PEFT and LoRA
          • Full fine-tuning
          • Alignment tuning
    • Choosing the Right LLM for Your Application
      • Model Type
      • Model Size and Efficiency
      • Deployment Approaches
      • Supported Precision and Hardware Optimization
      • Ethical Considerations and Bias
      • Community and Documentation Support
      • Closed Versus Open Source
      • Example Categorization
      • Foundation Models or Expert Models: Where Are We Headed?
        • Industry perspectives: Large, small, task oriented, or domain specific
        • Mixture of experts, multimodal models, model chaining, and so on
        • DeepSeek and the future of model architectures
    • Using Supporting Technologies
      • Embedding Models and Vector Databases
      • Caching and Performance Optimization
      • AI Agent Frameworks
      • Model Context Protocol
      • API Integration
      • Model Security, Compliance, and Access Control
    • Conclusion
  • 3. Prompts for Developers: Why Prompts Matter in AI-Infused Applications
    • Types of Prompts
      • User Prompts: Direct Input from the User
      • System Prompts: Instructions That Guide Model Behavior
      • Contextual Prompts: Prepopulated or Dynamically Generated Inputs
    • Principles of Writing Effective Prompts
    • Prompting Techniques
      • Zero-Shot Prompting: Asking Without Context
      • Few-Shot Prompting: Providing Examples to Guide Responses
      • Chain-of-Thought Prompting: Encouraging Step-by-Step Reasoning
      • Self-Consistency: Improving Accuracy by Generating Multiple Responses
      • Instruction Prompting: Directing the Model Explicitly
      • Retrieval-Augmented Generation: Enhancing Prompts with External Data
    • Advanced Strategies
      • Constructing Dynamic Prompts: Combining Static and Generated Inputs
      • Using Prompt Chaining to Maintain Context
      • Using Guardrails and Validations for Safer Outputs
      • Leveraging APIs for Prompt Customization
      • Optimizing for Performance Versus Cost
      • Debugging Prompts: Troubleshooting Poor Responses
    • Tool Use and Function Calling
    • Context Engineering as the New Prompt Engineering
    • Designing Memory and Storage for Context
      • Fast Access with In-Memory Caches
      • Hot Memory for Short-Term Context
      • Vector Databases for Long-Term Semantic Memory
      • Cold Storage for Archival Data and Large Repositories
      • Combining Storage Tiers for Effective Context Delivery
    • Conclusion
  • 4. AI Architectures for Applications
    • Beyond Traditional Architectures: Why AI-Infused Systems Require a New Approach
    • Overview of Core Architectural Pillars: A Roadmap for the Chapter
    • Application Components
      • Queries and Data: Managing Application Inputs
      • The AI Gateway: Managing Inputs and Outputs
      • Context and Memory
      • Interaction and Transport: Using Tools and Agents
        • Complementing agents with a rules engine
        • Routing MCP traffic with Wanaku
    • Discovery and Access Control
    • Model Serving
    • The Data Preparation Pipeline
    • Observability and Monitoring: The End-to-End AI Stack
    • Conclusion
  • 5. Embedding Vectors, Vector Stores, and Running Models Locally
    • Embedding Vectors and Their Role
      • Why Are Embeddings Needed?
      • Structure of an Embedding Vector
      • Measuring Similarity: Cosine Similarity and Distance
      • Common Embedding Models
      • How Are Embeddings Used in AI Applications?
        • Clustering and classification
        • Personalization and recommendations
        • Anomaly detection
        • Conversational context and memory
      • Other Similarity Methods
        • Dot product
        • Euclidean distance
        • Manhattan distance
        • Hamming distance (for binary embeddings)
        • When to choose what?
      • Uncommon Uses of Embedding Vectors
        • Behavior-based identity
        • Code similarity and pattern matching
        • Model drift or concept change detection
        • Creative blending with centroid embeddings
        • Tracking meaning over time
    • Vector Stores and Querying Mechanisms
      • How Vector Databases Store and Retrieve Embeddings
      • Examples of Common Vector Stores
    • Retrieval-Augmented Generation
    • Indexing or Generating Vector Embeddings at Scale
    • Why Run Models Locally?
      • Ollama: Local Inferencing with a Simple Interface
        • Installing and running a model with Ollama
        • Interacting with Ollama
      • Podman Desktop: Using Containerized Environments for AI Workloads
        • Introduction to Podman Desktop
        • Model deployment with the Podman Desktop AI extension
        • AI recipes in Podman AI Lab
        • Calling a Podman AI model from curl
      • Jlama: Java-Native Model Inferencing for JVM-Based Applications
        • Setting up Jlama in a Java project
        • Processing model outputs in Java
      • Comparing Local Inferencing Methods
    • Using OpenAI's REST API
      • Overview of OpenAI's Models and Endpoints
        • API key authentication and rate limits
        • OpenAI Java SDK
      • Generating Embeddings with OpenAI's API
        • Making raw API requests without the SDK
        • Handling API responses and errors
    • Conclusion
  • 6. Inference APIs
    • What Is an Inference API?
      • Benefits of an Inference API
      • Examples of Inference APIs
        • OpenAI
        • Ollama
    • Deploying Inference Models in Java
      • Inferencing Models with DJL
        • Adding the dependencies
        • Creating the POJOs
        • Loading the model
        • Implementing the transformer
        • Predicting
        • Creating the REST controller
        • Testing the example
      • Looking Under the Hood
      • Inferencing Models with gRPC
        • Using Protocol Buffers
        • Implementing the gRPC server
    • Conclusion
  • 7. Accessing the Inference Model with Java
    • Connecting to an Inference API with Quarkus
      • The Architecture
      • The Fraud Inference API
      • The Quarkus Project
      • The REST Client Interface
      • The REST Resource
      • Testing the Example
    • Connecting to an Inference API with Spring Boot WebClient
      • Adding WebClient Dependency
      • Using the WebClient
    • Connecting to the Inference API with the Quarkus gRPC Client
      • Adding gRPC Dependencies
      • Implementing the gRPC Client
    • Conclusion
  • 8. LangChain4j
    • What Is LangChain4j?
      • Unified APIs
      • Prompt Templates
      • Structured Outputs
      • Memory
      • Data Augmentation
      • Tools
      • High-Level API
        • Prompting
        • Memory
        • Data augmentation and tooling
    • LangChain4j with Plain Java
      • Extracting Information from Unstructured Text
      • Performing Text Classification
      • Generating Images and Descriptions
    • Spring Boot Integration
      • Adding Spring Boot Dependencies
      • Defining the AI Service
      • Creating a REST Controller
    • Quarkus Integration
      • Quarkus Dependencies
      • Frontend
      • The AI Service
      • WebSocket
      • Optical Character Recognition
    • Tools
      • Dependencies
      • Rides Persistence
      • Waiting Times Service
      • AI Service
      • REST Endpoint
      • Dynamic Tooling
      • Final Notes About Tooling
    • Memory
      • Dependencies
      • Changes to Code
    • Conclusion
  • 9. Vector Embeddings and Stores
    • Calculating Vector Embeddings
      • Vector Embeddings Using DJL
        • Adding DJL dependencies
        • Inferencing the model
      • Vector Embeddings Using In-Process LangChain4j
        • Setting in-process LangChain4j dependencies
        • Calculating vectors
        • Plotting vectors
        • Using supported models
      • Vector Embeddings Using Remote Models with LangChain4j
        • Adding dependencies
        • Calculating vector embedding
    • Text Classifier
      • Embedding Text-Classification Dependencies
      • Providing Examples and Categorizing Inputs
    • Text Clustering
      • Adding Text Clustering Dependencies
      • Reading Headline News
      • Calculating the Vector Embedding
      • Clustering News
      • Summarizing News Headlines
    • Semantic Search
      • Adding Semantic Search Dependencies
      • Importing Movies
      • Querying for Similarities
    • Semantic Cache
    • RAG
      • Ingestion
        • Generating text files
        • Using a document parser
        • Splitting documents
        • Using the embedding store ingestor
      • Retrieval
      • Reranking
      • Query Router
        • Adding the Tavily dependency
        • Implementing WebSearchContentRetriever
        • Configuring Tavily
        • Modifying the AI service
      • Ingestion Splitting Window
      • Filtering Results
    • Conclusion
  • 10. LangGraph4j
    • Understanding Graphs in LangGraph4j
      • Nodes
      • Edges
      • State
    • Using LangGraph4j
      • Defining a State
      • Defining a Node
      • Defining a Graph
      • Adding Conditional Edges
      • Appending Values
    • Using LangChain4j with LangGraph4j
      • Routing Agents
        • Defining the AI services
        • Defining the graph
      • Human Interaction with LangGraph4j
        • Configuring the graph
        • Setting the identification parameter
        • Resuming the execution
        • Setting the currency exchange rates
          • Defining the state
          • Defining the graph
          • Defining the node actions
          • Defining the REST API
      • Advanced RAG Schema with Self-Reflection
    • Exploring Additional Features
      • Subgraphs
      • Parallel Execution
      • Time Travel
    • Conclusion
  • 11. Image Processing
    • OpenCV
      • Initializing the Library
        • Manual installation
        • Bundled installation
      • Loading and Saving Images
      • Performing Basic Transformations
        • Converting to grayscale
        • Resizing
        • Cropping
      • Overlaying Elements
        • Drawing boundaries
        • Overlapping images
      • Image Processing
        • Gaussian blur
        • Binarization
        • Noise reduction
        • Edge detection
        • Perspective correction
      • Reading Barcodes and QR Codes
        • Barcodes
        • QR codes
    • Stream Processing
      • Processing Videos
      • Processing Webcam Images
    • OpenCV and Java
    • OCR
    • Conclusion
  • 12. Advanced Topics in AI Java Development
    • Streaming
      • Streaming with a Low-Level API
      • Streaming with AI Services
      • Using LangChain4j and Streaming Integrations
    • Guardrails
      • Input Guardrail
      • Output Guardrail
      • Guardrail Use Cases
        • Input guardrails
        • Output guardrails
    • Model Context Protocol
      • MCP Architecture
        • stdio transport
        • Streamable HTTP
      • MCP Client with Java
        • LangChain4j dependencies
        • MCP client configuration
        • Application using MCP client
        • Execution workflow
      • MCP Client with Quarkus
        • Quarkus dependencies
        • Tool provider
        • AI service
        • MCP client injection
      • MCP Server with Quarkus
        • Adding the Quarkus MCP server dependency
        • Implementing the Quarkus MCP server logic
        • Packaging the application
        • Using MCP Inspector to test the MCP server
        • Using the MCP server with Quarkus and streamable transport
      • Key Benefits of MCP
    • Next Steps
  • Index




(c) 2005-2025 CATALIST interactive agency; trademarks belong to the publisher Helion S.A.