feat: Add optional Langfuse observability integration #2298
Conversation
This contribution adds optional Langfuse support for LLM observability and tracing. Langfuse provides a drop-in replacement for the OpenAI client that automatically tracks all LLM interactions without requiring code changes.

Features:
- Optional Langfuse integration with graceful fallback
- Automatic LLM request/response tracing
- Token usage tracking
- Latency metrics
- Error tracking
- Zero code changes required for existing functionality

Implementation:
- Modified lightrag/llm/openai.py to conditionally use Langfuse's AsyncOpenAI
- Falls back to the standard OpenAI client if Langfuse is not installed
- Logs observability status on import

Configuration:
To enable Langfuse tracing, install the observability extras and set environment variables:

```bash
pip install lightrag-hku[observability]
export LANGFUSE_PUBLIC_KEY="your_public_key"
export LANGFUSE_SECRET_KEY="your_secret_key"
export LANGFUSE_HOST="https://cloud.langfuse.com"  # or your self-hosted instance
```

If Langfuse is not installed or the environment variables are not set, LightRAG uses the standard OpenAI client with no change in functionality.

Changes:
- Modified lightrag/llm/openai.py (added optional Langfuse import)
- Updated pyproject.toml with optional 'observability' dependencies

Dependencies (optional):
- langfuse>=3.8.1
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
lightrag/llm/openai.py
Outdated
```python
# Try to import Langfuse for LLM observability (optional)
# Falls back to standard OpenAI client if not available
try:
    from langfuse.openai import AsyncOpenAI

    LANGFUSE_ENABLED = True
    logger.info("Langfuse observability enabled for OpenAI client")
except ImportError:
    from openai import AsyncOpenAI

    LANGFUSE_ENABLED = False
    logger.debug("Langfuse not available, using standard OpenAI client")
```
Avoid referencing logger before it is imported
The new Langfuse fallback block calls logger.info/logger.debug before logger is imported from lightrag.utils. Because module-level code executes top-down, the NameError raised in either branch prevents lightrag.llm.openai from importing at all, breaking every call site that relies on the OpenAI client. Import logger before this block, or defer logging until after the import has completed.
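The fix amounts to making sure `logger` exists before either branch runs. A minimal sketch of the corrected ordering — `importlib.util.find_spec` is used here purely so the snippet runs even without `langfuse` or `openai` installed; the real module imports `logger` from `lightrag.utils` and keeps the try/except import shown in the diff:

```python
import importlib.util
import logging

# In the real module this line is `from lightrag.utils import logger`;
# the point is that it must execute before the optional-import block below.
logger = logging.getLogger("lightrag")

# Optional Langfuse detection with graceful fallback. find_spec only checks
# availability, so this sketch works whether or not the package is installed.
if importlib.util.find_spec("langfuse") is not None:
    LANGFUSE_ENABLED = True
    logger.info("Langfuse observability enabled for OpenAI client")
else:
    LANGFUSE_ENABLED = False
    logger.debug("Langfuse not available, using standard OpenAI client")
```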
Fixed the linting issues identified by the code review. All pre-commit checks now pass locally. Ready for CI approval and review.
Because this PR includes RAGAS-related changes, it can no longer be merged independently.
danielaskdd left a comment
Waiting for PR #2297 to be merged before further review can proceed.
Force-pushed e0f8de5 to bf97b5f
RAGAS evaluation files have been removed from this branch. The PR now contains only the Langfuse observability changes and can be merged independently. Ready for review.
Moved logger import before Langfuse block to fix NameError.
Force-pushed bf97b5f to 9495778
@codex review
Codex Review: Didn't find any major issues. Nice work!
Description
This PR adds optional Langfuse support for LLM observability and tracing in LightRAG. Langfuse provides a drop-in replacement for the OpenAI client that automatically tracks all LLM interactions, enabling developers to monitor, debug, and optimize their RAG systems without code changes.
Related Issues
Addresses the need for production-ready LLM observability and monitoring in LightRAG deployments.
Changes Made
Modified Files
- lightrag/llm/openai.py
  - Uses langfuse.openai.AsyncOpenAI if available
  - Falls back to openai.AsyncOpenAI if Langfuse is not installed
  - Adds a LANGFUSE_ENABLED flag for runtime detection
- pyproject.toml
  - Adds an optional observability dependencies group: langfuse>=3.8.1 (LLM observability platform)
- env.example
  - LANGFUSE_SECRET_KEY - Secret key from the Langfuse dashboard
  - LANGFUSE_PUBLIC_KEY - Public key from the Langfuse dashboard
  - LANGFUSE_HOST - Cloud or self-hosted instance URL
  - LANGFUSE_ENABLE_TRACE - Enable/disable tracing

Implementation Details
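As described under Changes Made, pyproject.toml gains an optional dependency group. A sketch of how that group might look — the group name and version bound come from the PR description, but the exact table layout here is an assumption:

```toml
[project.optional-dependencies]
observability = [
    "langfuse>=3.8.1",  # LLM observability platform
]
```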
Before:
After:
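The Before/After code blocks did not survive extraction here; the following is a sketch of the change as described in the PR. A nested guard is added only so this snippet runs in an environment with neither package installed — the real module's fallback branch simply imports openai's client:

```python
# Before: unconditional import of the standard client.
#   from openai import AsyncOpenAI
#
# After: prefer Langfuse's drop-in wrapper, fall back gracefully.
try:
    from langfuse.openai import AsyncOpenAI
    LANGFUSE_ENABLED = True
except ImportError:
    try:
        from openai import AsyncOpenAI  # the real fallback branch ends here
    except ImportError:
        AsyncOpenAI = None  # guard so this sketch runs without openai installed
    LANGFUSE_ENABLED = False
```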
Installation
Standard Installation (no observability)
pip install lightrag-hku  # Uses standard OpenAI client

With Observability

pip install lightrag-hku[observability]
Configuration Example
From env.example:

Usage
No Code Changes Required
Once installed and configured, Langfuse automatically traces all OpenAI LLM calls:
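A sketch of the idea: the calling code is identical whether or not Langfuse wraps the client. A stand-in client is used here so the snippet runs without the openai package installed; the call shape mirrors the real `client.chat.completions.create(...)`:

```python
import asyncio

class FakeAsyncOpenAI:
    """Stand-in for AsyncOpenAI so this sketch runs without the openai package.
    With the PR applied, AsyncOpenAI is either openai's client or Langfuse's
    drop-in wrapper -- the calling code below is the same in both cases."""
    class chat:
        class completions:
            @staticmethod
            async def create(**kwargs):
                # A real client would send the request; Langfuse's wrapper
                # would additionally record the trace, tokens, and latency.
                return {"choices": [{"message": {"content": "ok"}}]}

AsyncOpenAI = FakeAsyncOpenAI

async def main():
    client = AsyncOpenAI()
    resp = await client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "hi"}],
    )
    return resp["choices"][0]["message"]["content"]

result = asyncio.run(main())
```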
Langfuse Dashboard Features

- Request/response tracing for every LLM call
- Token usage tracking
- Latency metrics
- Error tracking
Checklist
Additional Notes
Design Principles
Why Langfuse?
- AsyncOpenAI wraps the standard client

Use Cases
Performance Impact
Privacy & Security
Getting Started with Langfuse
Option 1: Cloud (Easiest)
- Add the Langfuse keys to your .env file
- pip install lightrag-hku[observability]

Option 2: Self-Hosted

- Set LANGFUSE_HOST to your instance
- Add the Langfuse keys to your .env file

Testing
Tested with:
Future Enhancements
This implementation enables future observability features:
Thank you for reviewing this contribution!