First Check
- I added a very descriptive title to this issue.
- I searched existing issues and documentation.
Memori Version
3.0.6
OS / Python Version
macOS
LLM Provider
OpenAI
LLM Model & Version
gpt-5-mini
Database
SQLite
Description
When using Memori (v3.0.6) with Agno (v2.3.8) and running an agent with OpenAIChat in streaming mode (`agent.arun(..., stream=True)`), the process consistently throws:

```
Error from OpenAI API: cannot pickle '_thread.RLock' object
Model provider error after 1 attempts: cannot pickle '_thread.RLock' object
```

This issue does not occur when switching the model provider to Gemini, suggesting the bug is specific to the OpenAIChat integration.
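For context, this message is exactly what CPython's `pickle` raises for any object that holds a lock; a minimal standalone reproduction (unrelated to Agno/Memori internals, just illustrating the exception) is:

```python
import pickle
import threading

# Locks wrap OS-level state and cannot be serialized, so pickling any
# object that holds one fails with exactly the message from the traceback.
try:
    pickle.dumps(threading.RLock())
    msg = "no error"
except TypeError as err:
    msg = str(err)
print(msg)  # cannot pickle '_thread.RLock' object
```

This suggests the OpenAIChat model object (or whatever Memori wraps around it) carries an `RLock` and is being pickled or deep-copied somewhere in the streaming path.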
Environment
- Memori: 3.0.6
- Agno: 2.3.8
- Python: same environment, run via `uv run`
- Model provider: OpenAIChat → ❌ error
- Alternative provider: Gemini → ✅ works correctly
- Running the example from the Agno documentation: https://docs.agno.com/integrations/memory/memori#memori
Steps to Reproduce
1. Install latest Memori (3.0.6) and Agno (2.3.8).
2. Use the code example from Agno's documentation and modify the call to:

   ```python
   async for chunk in agent.arun(
       "Hi, I'd like to order a large pepperoni pizza with extra cheese", stream=True
   ):
       if chunk.event == RunEvent.run_content:
           print(chunk.content, end="", flush=True)
   print("\n")
   ```

3. Run the file: `uv run python main.py`
4. Observe the error after the first user message.
Expected Behavior
- Agent should stream responses normally when using OpenAIChat.
- Behavior should match Gemini provider (which works as expected).
Actual Behavior
Execution fails with:

```
Error from OpenAI API: cannot pickle '_thread.RLock' object
Model provider error after 1 attempts: cannot pickle '_thread.RLock' object
Traceback (most recent call last):
...
```

The error appears immediately after the agent receives the first instruction.
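I haven't traced where in the OpenAIChat path the pickling happens, but if it turns out to be a `copy.deepcopy` of a client object holding a lock, one common fix pattern (purely a sketch with a hypothetical `Client` class, not Memori's or Agno's actual code) is to implement `__deepcopy__` so the lock is recreated rather than copied:

```python
import copy
import threading

class Client:
    """Hypothetical stand-in for a model client that holds an RLock."""

    def __init__(self):
        self._lock = threading.RLock()
        self.config = {"model": "gpt-5-mini"}

    def __deepcopy__(self, memo):
        # Copy the picklable state, but create a fresh lock instead of
        # copying the original (which would raise the TypeError above).
        new = type(self).__new__(type(self))
        memo[id(self)] = new
        new.config = copy.deepcopy(self.config, memo)
        new._lock = threading.RLock()
        return new

clone = copy.deepcopy(Client())
print(clone.config)  # {'model': 'gpt-5-mini'}
```

Without `__deepcopy__`, `copy.deepcopy` falls back on the pickle protocol and fails with the same `cannot pickle '_thread.RLock' object` error.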
Additional Notes
Here's the full code I use:
```python
import asyncio
import os

from agno.agent import Agent, RunOutput
from agno.models.google import Gemini
from agno.models.openai import OpenAIChat
from agno.run.agent import RunEvent
from dotenv import load_dotenv
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from memori import Memori

load_dotenv()

db_path = os.getenv("DATABASE_PATH", "memori_agno.db")
engine = create_engine(f"sqlite:///{db_path}")
Session = sessionmaker(bind=engine)

model = Gemini(
    id="gemini-2.5-flash",
)
model = OpenAIChat(
    id="gpt-5-mini",
)

# mem = Memori(conn=Session).agno.register(gemini=model)
mem = Memori(conn=Session).agno.register(openai_chat=model)
mem.attribution(entity_id="customer-456", process_id="support-agent")
mem.config.storage.build()

agent = Agent(
    model=model,
    instructions=[
        "You are a helpful customer support agent.",
        "Remember customer preferences and history from previous conversations.",
    ],
    markdown=True,
)


async def main():
    print("Customer: Hi, I'd like to order a large pepperoni pizza with extra cheese")
    print("Agent: ", end="")
    async for chunk in agent.arun(
        "Hi, I'd like to order a large pepperoni pizza with extra cheese", stream=True
    ):
        if chunk.event == RunEvent.run_content:
            print(chunk.content, end="", flush=True)
    print("\n")

    print("Customer: Actually, can you remind me what I just ordered?")
    print("Agent: ", end="")
    async for chunk in agent.arun(
        "Actually, can you remind me what I just ordered?", stream=True
    ):
        if chunk.event == RunEvent.run_content:
            print(chunk.content, end="", flush=True)
    print("\n")

    print("Customer: Perfect! And what size was that again?")
    print("Agent: ", end="")
    async for chunk in agent.arun(
        "Perfect! And what size was that again?", stream=True
    ):
        if chunk.event == RunEvent.run_content:
            print(chunk.content, end="", flush=True)
    print()


if __name__ == "__main__":
    asyncio.run(main())
```