v0.4.9 - LangGraph Integration
🕸️ LangGraph Integration - Build Smarter Multi-Step Agents
This release introduces seamless integration with LangGraph, enabling you to train sophisticated ReAct-style agents that improve through reinforcement learning without manual prompt engineering.
✨ Major Features
- 🆕 LangGraph Integration - Drop-in replacement for LangGraph's LLM initialization with automatic logging and trajectory capture
- 🔄 Multi-Step Agent Training - Train agents that reason, use tools, and adapt their behavior over time
- 📊 Auto Trajectory Generation - Automatic conversion of LangGraph agent executions into ART training data
- ⚡ RULER Compatibility - Use ART's general-purpose reward function without hand-crafted rewards (see the sketch just below this list)
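For a concrete picture of the RULER point above, here is a minimal sketch, assuming ART's `ruler_score_group` helper from `art.rewards` accepts a trajectory group plus a LiteLLM-style judge-model name; the judge model string and the `score_with_ruler` wrapper are illustrative placeholders, not APIs introduced in this release.

```python
import art
from art.rewards import ruler_score_group


async def score_with_ruler(group: art.TrajectoryGroup) -> art.TrajectoryGroup:
    # RULER asks an LLM judge to rank the trajectories in the group relative
    # to one another and writes the resulting rewards onto each trajectory,
    # so no task-specific reward function has to be written by hand.
    return await ruler_score_group(group, "openai/o4-mini")  # judge model is a placeholder
```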
🔧 Improvements
- Type Safety - Enhanced type annotations and fixes for LangGraph integration
- Memory Management - Better CUDA cache management and garbage-collection utilities (a generic sketch follows this list)
- Dependencies - Pinned litellm to version 1.74.1 for stability
- Code Quality - Refactored logger imports and async tokenizer methods
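The memory-management utilities themselves aren't named in these notes; the sketch below is only a generic illustration of the kind of CUDA cache cleanup involved, using plain PyTorch and Python garbage collection rather than ART's own helpers.

```python
import gc

import torch


def free_gpu_memory() -> None:
    # Drop unreachable Python objects first, then release cached CUDA blocks
    # back to the driver so the next training or inference step can use the
    # freed memory.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```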
📚 Documentation & Examples
- New Documentation - Comprehensive LangGraph integration guide with examples
- Updated README - Featured LangGraph integration in main project description
- Example Notebook - ART•E LangGraph notebook for training email search agents
- License Updates - Updated third-party notices and licensing information
🔧 Code Example
```python
import art
from art.langgraph import wrap_rollout, init_chat_model
from langgraph.prebuilt import create_react_agent

# `model` is an art.TrainableModel and `tools` is a list of LangChain tools,
# both defined elsewhere (a setup sketch follows the example).

@wrap_rollout(model)
async def run_agent(scenario: str) -> art.Trajectory:
    # init_chat_model() is the drop-in LLM initializer: every call the agent
    # makes through it is logged and captured as training data.
    agent = create_react_agent(init_chat_model(), tools)
    result = await agent.ainvoke({"messages": [("user", scenario)]})
    return art.Trajectory()  # Automatically captured

# Train with RULER - no reward engineering needed!
await art.train(model, reward_function="ruler")
```
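The example above leaves `model` undefined. Here is a minimal setup sketch, assuming the usual ART pattern of a `TrainableModel` registered with a `LocalBackend`; the name, project, and base model below are placeholders, not values shipped in this release.

```python
import art
from art.local import LocalBackend


async def setup_model() -> art.TrainableModel:
    # Placeholder identifiers - substitute your own project values.
    model = art.TrainableModel(
        name="email-agent-001",
        project="langgraph-demo",
        base_model="Qwen/Qwen2.5-7B-Instruct",
    )
    # Registering attaches the model to a backend that handles inference
    # and training for it.
    await model.register(LocalBackend())
    return model
```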