What happened?
Describe the bug
When using AutoGen with the Anthropic Sonnet model in a multi-agent setup, if an agent's output message is split into multiple parts because it hits the output token limit (max_tokens), the next agent in the workflow receives an empty input for its function tool call. The intended input (the full content from the previous agent) is lost.
To Reproduce
Set up a group of agents in the standard AgentChat way, with AnthropicChatCompletionClient as the model client:
1. Configure a multi-agent AutoGen workflow with:
   - agent_analyst (Anthropic Sonnet model) producing a large markdown output without streaming.
   - agent_publisher with a function tool that converts agent_analyst's output into an HTML file.
2. Trigger the workflow with a large input so that agent_analyst produces a message exceeding the output token limit.
3. Observe that the agent_analyst output is split into multiple messages/chunks.
4. Check the input received by the agent_publisher function tool (a repro sketch follows this list).
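A minimal repro sketch, not the exact setup from this report: the agent prompts, model id, max_tokens value, and tool body are illustrative, and since the report uses AWS Bedrock, AnthropicBedrockChatCompletionClient from autogen-ext could be substituted for the client below.

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.anthropic import AnthropicChatCompletionClient


def convert_markdown_to_html(markdown: str) -> str:
    """Write the analyst's markdown report out as an HTML file."""
    with open("report.html", "w") as f:
        f.write(f"<html><body><pre>{markdown}</pre></body></html>")
    return "Saved report.html"


async def main() -> None:
    # A small max_tokens forces the analyst's long reply to hit the output limit.
    model_client = AnthropicChatCompletionClient(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=1024,
    )

    agent_analyst = AssistantAgent(
        name="agent_analyst",
        model_client=model_client,
        system_message="Produce a long, detailed markdown report.",
    )
    agent_publisher = AssistantAgent(
        name="agent_publisher",
        model_client=model_client,
        tools=[convert_markdown_to_html],
        system_message="Convert the analyst's markdown to HTML with the tool.",
    )

    team = RoundRobinGroupChat(
        [agent_analyst, agent_publisher],
        termination_condition=MaxMessageTermination(max_messages=6),
    )
    result = await team.run(task="Write a very long markdown report.")
    print(result.messages[-1])


asyncio.run(main())
```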
Observed Behavior
The agent_publisher function call is triggered as follows:
agent_publisher: [FunctionCall(id='toolu_bdrk_019ARoa5gsqetn5uwUsLKt8j', arguments='{}', name='convert_markdown_to_html')]
Arguments are empty ({}) instead of containing the full markdown content from agent_analyst.
The workflow cannot continue correctly because the downstream agent loses its input.
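A simple guard inside the tool makes the failure visible immediately (a sketch; the parameter name and empty-string default are assumptions):

```python
def convert_markdown_to_html(markdown: str = "") -> str:
    """Fail loudly when the upstream content was dropped."""
    if not markdown:
        # With a split/truncated analyst message the call arrives as
        # arguments='{}', so the parameter falls back to its empty default.
        raise ValueError("convert_markdown_to_html received no markdown content")
    return f"<html><body><pre>{markdown}</pre></body></html>"
```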
Expected Behavior
agent_publisher receives the complete markdown content, even if the agent_analyst output is split into multiple chunks.
Multi-agent workflows should handle split messages gracefully for tool calls.
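For comparison, a correct invocation would carry the full content in the arguments, along these lines (argument name and content are hypothetical, id elided):
agent_publisher: [FunctionCall(id='toolu_...', arguments='{"markdown": "# Full report\n..."}', name='convert_markdown_to_html')]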
Which packages was the bug in?
Python AgentChat (autogen-agentchat>=0.4.0)
AutoGen library version.
Python 0.7.5
Other library version.
No response
Model used
Anthropic Sonnet 4
Model provider
AWS Bedrock
Other model provider
No response
Python version
3.10
.NET version
None
Operating system
CentOS