[Bug]: When using the anthropic endpoint of deepseek-v3.1, the async_data_generator log reports an error. #13867

@toomanyopenfiles

Description

What happened?

When I use the LiteLLM proxy to access the Anthropic endpoint of deepseek-v3.1, an error is reported in the DEBUG log. Because of it, the request is not recorded in the Logs page of the UI, and other callbacks such as the OTEL logs are not recorded either.

However, the main request flow does not seem to be affected, and the proxy keeps serving responses normally.

The error message shows that the last line of the collected SSE stream is the "data: [DONE]\n\n" terminator, and passing that line to the JSON decoder is what makes the decode fail.
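
For reference, the failing parse can be reproduced in isolation. This is a minimal sketch based only on the traceback below, which shows convert_str_chunk_to_generic_chunk calling json.loads(str_line[5:]) on each "data:" line:

```python
import json

# Last line of the collected Anthropic SSE stream, as seen in the DEBUG log.
str_line = "data: [DONE]"

# Mirrors the call shown in the traceback (handler.py, convert_str_chunk_to_generic_chunk):
# str_line[5:] is " [DONE]", which is not valid JSON, so the decoder raises
# json.decoder.JSONDecodeError: Expecting value: line 1 column 3 (char 2)
json.loads(str_line[5:])
```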

Relevant log output

litellm                     | 18:40:59 - LiteLLM Proxy:DEBUG: common_request_processing.py:742 - async_data_generator: received streaming chunk - b'event: content_block_delta\ndata: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":" Request"}}\n\n'
litellm                     | 18:40:59 - LiteLLM Proxy:DEBUG: common_request_processing.py:742 - async_data_generator: received streaming chunk - b'event: content_block_delta\ndata: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"\\"}"}}\n\nevent: content_block_stop\ndata: {"type":"content_block_stop","index":0}\n\nevent: message_delta\ndata: {"type":"message_delta","delta":{"stop_reason":"end_turn","stop_sequence":null},"usage":{"output_tokens":14}}\n\nevent: message_stop\ndata: {"type":"message_stop"}\n\ndata: [DONE]\n\n'
litellm                     | Task exception was never retrieved
litellm                     | future: <Task finished name='Task-1119' coro=<PassThroughStreamingHandler._route_streaming_logging_to_handler() done, defined at /usr/lib/python3.13/site-packages/litellm/proxy/pass_through_endpoints/streaming_handler.py:63> exception=JSONDecodeError('Expecting value: line 1 column 3 (char 2)')>
litellm                     | Traceback (most recent call last):
litellm                     |   File "/usr/lib/python3.13/site-packages/litellm/proxy/pass_through_endpoints/streaming_handler.py", line 90, in _route_streaming_logging_to_handler
litellm                     |     anthropic_passthrough_logging_handler_result = AnthropicPassthroughLoggingHandler._handle_logging_anthropic_collected_chunks(
litellm                     |         litellm_logging_obj=litellm_logging_obj,
litellm                     |     ...<6 lines>...
litellm                     |         end_time=end_time,
litellm                     |     )
litellm                     |   File "/usr/lib/python3.13/site-packages/litellm/proxy/pass_through_endpoints/llm_provider_handlers/anthropic_passthrough_logging_handler.py", line 175, in _handle_logging_anthropic_collected_chunks
litellm                     |     AnthropicPassthroughLoggingHandler._build_complete_streaming_response(
litellm                     |     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
litellm                     |         all_chunks=all_chunks,
litellm                     |         ^^^^^^^^^^^^^^^^^^^^^^
litellm                     |         litellm_logging_obj=litellm_logging_obj,
litellm                     |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
litellm                     |         model=model,
litellm                     |         ^^^^^^^^^^^^
litellm                     |     )
litellm                     |     ^
litellm                     |   File "/usr/lib/python3.13/site-packages/litellm/proxy/pass_through_endpoints/llm_provider_handlers/anthropic_passthrough_logging_handler.py", line 223, in _build_complete_streaming_response
litellm                     |     transformed_openai_chunk = anthropic_model_response_iterator.convert_str_chunk_to_generic_chunk(
litellm                     |         chunk=_chunk_str
litellm                     |     )
litellm                     |   File "/usr/lib/python3.13/site-packages/litellm/llms/anthropic/chat/handler.py", line 936, in convert_str_chunk_to_generic_chunk
litellm                     |     data_json = json.loads(str_line[5:])
litellm                     |   File "/usr/lib/python3.13/json/__init__.py", line 346, in loads
litellm                     |     return _default_decoder.decode(s)
litellm                     |            ~~~~~~~~~~~~~~~~~~~~~~~^^^
litellm                     |   File "/usr/lib/python3.13/json/decoder.py", line 345, in decode
litellm                     |     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
litellm                     |                ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
litellm                     |   File "/usr/lib/python3.13/json/decoder.py", line 363, in raw_decode
litellm                     |     raise JSONDecodeError("Expecting value", s, err.value) from None
litellm                     | json.decoder.JSONDecodeError: Expecting value: line 1 column 3 (char 2)
litellm                     | 18:40:59 - LiteLLM Proxy:DEBUG: common_request_processing.py:742 - async_data_generator: received streaming chunk - b'event: content_block_delta\ndata: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"aude"}}\n\nevent: content_block_stop\ndata: {"type":"content_block_stop","index":0}\n\nevent: message_delta\ndata: {"type":"message_delta","delta":{"stop_reason":"end_turn","stop_sequence":null},"usage":{"output_tokens":2}}\n\nevent: message_stop\ndata: {"type":"message_stop"}\n\ndata: [DONE]\n\n'
litellm                     | Task exception was never retrieved
litellm                     | future: <Task finished name='Task-1120' coro=<PassThroughStreamingHandler._route_streaming_logging_to_handler() done, defined at /usr/lib/python3.13/site-packages/litellm/proxy/pass_through_endpoints/streaming_handler.py:63> exception=JSONDecodeError('Expecting value: line 1 column 3 (char 2)')>
litellm                     | Traceback (most recent call last):
litellm                     |   File "/usr/lib/python3.13/site-packages/litellm/proxy/pass_through_endpoints/streaming_handler.py", line 90, in _route_streaming_logging_to_handler
litellm                     |     anthropic_passthrough_logging_handler_result = AnthropicPassthroughLoggingHandler._handle_logging_anthropic_collected_chunks(
litellm                     |         litellm_logging_obj=litellm_logging_obj,
litellm                     |     ...<6 lines>...
litellm                     |         end_time=end_time,
litellm                     |     )
litellm                     |   File "/usr/lib/python3.13/site-packages/litellm/proxy/pass_through_endpoints/llm_provider_handlers/anthropic_passthrough_logging_handler.py", line 175, in _handle_logging_anthropic_collected_chunks
litellm                     |     AnthropicPassthroughLoggingHandler._build_complete_streaming_response(
litellm                     |     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
litellm                     |         all_chunks=all_chunks,
litellm                     |         ^^^^^^^^^^^^^^^^^^^^^^
litellm                     |         litellm_logging_obj=litellm_logging_obj,
litellm                     |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
litellm                     |         model=model,
litellm                     |         ^^^^^^^^^^^^
litellm                     |     )
litellm                     |     ^
litellm                     |   File "/usr/lib/python3.13/site-packages/litellm/proxy/pass_through_endpoints/llm_provider_handlers/anthropic_passthrough_logging_handler.py", line 223, in _build_complete_streaming_response
litellm                     |     transformed_openai_chunk = anthropic_model_response_iterator.convert_str_chunk_to_generic_chunk(
litellm                     |         chunk=_chunk_str
litellm                     |     )
litellm                     |   File "/usr/lib/python3.13/site-packages/litellm/llms/anthropic/chat/handler.py", line 936, in convert_str_chunk_to_generic_chunk
litellm                     |     data_json = json.loads(str_line[5:])
litellm                     |   File "/usr/lib/python3.13/json/__init__.py", line 346, in loads
litellm                     |     return _default_decoder.decode(s)
litellm                     |            ~~~~~~~~~~~~~~~~~~~~~~~^^^
litellm                     |   File "/usr/lib/python3.13/json/decoder.py", line 345, in decode
litellm                     |     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
litellm                     |                ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
litellm                     |   File "/usr/lib/python3.13/json/decoder.py", line 363, in raw_decode
litellm                     |     raise JSONDecodeError("Expecting value", s, err.value) from None
litellm                     | json.decoder.JSONDecodeError: Expecting value: line 1 column 3 (char 2)
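
It looks like the DeepSeek endpoint appends an OpenAI-style "data: [DONE]" terminator after the Anthropic "message_stop" event, and the logging handler then tries to JSON-decode that line. A guard along the following lines would presumably avoid the error. This is only a sketch, not the actual LiteLLM code, and parse_sse_data_line is a made-up helper name for illustration:

```python
import json
from typing import Optional

def parse_sse_data_line(str_line: str) -> Optional[dict]:
    """Decode one 'data: ...' SSE line, ignoring the '[DONE]' sentinel.

    Illustration only; the real parsing lives in
    litellm/llms/anthropic/chat/handler.py (convert_str_chunk_to_generic_chunk).
    """
    payload = str_line[len("data:"):].strip()
    if payload == "[DONE]":
        return None  # provider end-of-stream marker, not JSON
    return json.loads(payload)
```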

Are you a ML Ops Team?

No

What LiteLLM version are you on?

v1.75.7

Twitter / LinkedIn details

No response
