Description
What happened?
When using the OpenAI Agents SDK with GPT reasoning models (e.g., gpt-5), the SDK correctly returns the reasoning.summary items.
However, when routing the same agent through LiteLLM using LitellmModel(model="gpt-5"), the reasoning summary is missing, even though the same model, same instructions, and same ModelSettings(reasoning={"summary": "auto"}) are used.
This results in inconsistent behavior:
- OpenAI models directly → return ReasoningItem + summary
- Same model via LiteLLM → no ReasoningItem and no summary
Steps to Reproduce
- Install the Agents SDK and LiteLLM.
- Create an agent using model="gpt-5" and enable reasoning summaries:
  ModelSettings(reasoning={"summary": "auto"})
- Run:
  result = await Runner.run(agent, prompt)
  → The result contains a ReasoningItem with a summary.
- Now swap the model for:
  model = LitellmModel(model="gpt-5")
- Run the same code again.
- The result contains neither the ReasoningItem nor the summary.
Expected Behavior
LiteLLM should forward all reasoning-related parameters (e.g., reasoning={"summary": "auto"}) to the upstream OpenAI API exactly as the Agents SDK does (see the illustrative request sketch below).
The response returned through LiteLLM should include the same ResponseReasoningItem.summary objects so that:
- reasoning.summary appears in the result
- the agent's output structure remains consistent
- reasoning-enabled models via LiteLLM behave the same as direct OpenAI calls
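For illustration, a minimal sketch of the request shape the Agents SDK sends when calling gpt-5 directly via the OpenAI Responses API; the exact payload LiteLLM builds internally is an assumption here, the point is only that the reasoning block should survive the round trip unchanged:

# Illustrative request body (Responses API style); not LiteLLM's actual internals.
expected_request_body = {
    "model": "gpt-5",
    "input": "Largest city in the 3rd largest country in the world?",
    "reasoning": {"summary": "auto"},  # should be forwarded unchanged by LiteLLM
}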
Actual Behavior
When using LiteLLM:
- The reasoning parameter is not forwarded or not mapped correctly.
- The OpenAI response returned through LiteLLM contains no reasoning summary.
- The Agents SDK receives an incomplete response and produces no ResponseReasoningItem.summary.
When using OpenAI models directly:
- Reasoning summaries appear correctly.
Root Cause
LiteLLM does not pass reasoning-related fields (e.g., reasoning, reasoning_effort, response_format for reasoning) through to the OpenAI API.
OpenAI reasoning models require these fields to be transmitted; otherwise the API will not return reasoning summaries.
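One way to check this (a sketch, assuming LiteLLM's documented debug logging; the exact log format may differ across versions):

# Turn on LiteLLM debug logging, re-run the LitellmModel agent, and inspect the
# logged outgoing request body for a "reasoning" key.
import litellm

litellm._turn_on_debug()  # prints raw requests/responses sent by LiteLLM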
Suggested Fix
Forward the reasoning block exactly as provided to LiteLLM, including:
- reasoning.summary
- reasoning.effort
- any other reasoning-related OpenAI parameters
Ensure the request body passed to OpenAI supports the reasoning schema used by GPT-5 and other reasoning models.
Add tests verifying that reasoning summaries appear when using LitellmModel in the Agents SDK (a rough sketch follows).
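A rough sketch of the kind of test suggested above; the ReasoningItem import path, the pytest-asyncio marker, and the assertion style are assumptions, not existing test code:

import pytest
from agents import Agent, ModelSettings, Runner
from agents.extensions.models.litellm_model import LitellmModel
from agents.items import ReasoningItem


@pytest.mark.asyncio  # assumes pytest-asyncio is available
async def test_litellm_model_returns_reasoning_summary():
    agent = Agent(
        name="Reasoning Agent with Litellm",
        instructions="Answer concisely.",
        model=LitellmModel(model="gpt-5"),
        model_settings=ModelSettings(reasoning={"summary": "auto"}),
    )
    result = await Runner.run(
        agent, "Largest city in the 3rd largest country in the world?"
    )
    reasoning_items = [
        item for item in result.new_items if isinstance(item, ReasoningItem)
    ]
    assert reasoning_items, "expected a ReasoningItem with a summary via LitellmModel"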
Relevant log output
import asyncio

from agents import Agent, ModelSettings, Runner
from agents.extensions.models.litellm_model import LitellmModel


async def main():
    # Works – returns reasoning summary
    reasoning_agent = Agent(
        name="Reasoning Agent",
        instructions="Largest city in the 3rd largest country in the world?",
        model="gpt-5",
        model_settings=ModelSettings(reasoning={"summary": "auto"}),
    )
    result = await Runner.run(
        reasoning_agent,
        "Largest city in the 3rd largest country in the world?",
    )
    for item in result.new_items:
        print(item)

    # Fails – summary missing
    reasoning_agent_with_litellm = Agent(
        name="Reasoning Agent with Litellm",
        instructions="Largest city in the 3rd largest country in the world?",
        model=LitellmModel(model="gpt-5"),
        model_settings=ModelSettings(reasoning={"summary": "auto"}),
    )
    result_litellm = await Runner.run(
        reasoning_agent_with_litellm,
        "Largest city in the 3rd largest country in the world?",
    )
    for item in result_litellm.new_items:
        print(item)


asyncio.run(main())
Output without LiteLLM:
ReasoningItem(agent=Agent(name='Reasoning Agent', ... status=None), type='reasoning_item')
MessageOutputItem(agent=Agent(name='Reasoning Agent', ..., type='message'), type='message_output_item')
Output with LiteLLM:
MessageOutputItem(agent=Agent(name='Reasoning Agent with Litellm', handoff_description=None, ..., type='message'), type='message_output_item')
Are you a ML Ops Team?
No
What LiteLLM version are you on ?
v1.77.3
Twitter / LinkedIn details
No response