What happened?
Trying to use partial images with streaming image generation, like:
import base64
from openai import OpenAI

client = OpenAI()  # assumed: base_url/api_key point at the LiteLLM proxy

stream = client.images.generate(
    model="gpt-image-1",
    prompt="A cute sea otter",
    n=1,
    size="1024x1024",
    stream=True,
    partial_images=2,
)

for event in stream:
    if event.type == "image_generation.partial_image":
        idx = event.partial_image_index
        image_base64 = event.b64_json
        image_bytes = base64.b64decode(image_base64)
        with open(f"river{idx}.png", "wb") as f:
            f.write(image_bytes)

Yields this:
openai.BadRequestError: Error code: 400 - {'error': {'message': 'litellm.BadRequestError: AzureException BadRequestError - {\n "error": {\n "message": "Unknown parameter: \'extra_body\'.",\n "type": "invalid_request_error",\n "param": "extra_body",\n "code": "unknown_parameter"\n }\n}. Received Model Group=openai_gpt_image_1\nAvailable Model Group Fallbacks=None', 'type': None, 'param': None, 'code': '400'}}

This is because the request body sent to Azure has the field extra_body, like:
{'model': 'gpt-image-1', 'prompt': 'A cute sea otter', 'n': 1, 'size': '1024x1024', 'extra_body': {'partial_images': 2, 'stream': True}}

(see the data variable at litellm/llms/azure/azure.py:1130), but Azure doesn't know what extra_body is and expects:
{'model': 'gpt-image-1', 'prompt': 'A cute sea otter', 'n': 1, 'size': '1024x1024', 'partial_images': 2, 'stream': True}

This request body is originally created in get_optional_params_image_gen() at litellm/utils.py:2535.
So the data should probably be flattened into the expected shape in get_optional_params_image_gen() (or at a later point in the request path). Unfortunately I don't have the time right now to understand get_optional_params_image_gen() well enough to propose a concrete change, so I'll leave that to you; a rough sketch of the kind of flattening I mean is below.
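For illustration only, here is a minimal sketch of that flattening. The helper name flatten_extra_body and the shape of the data dict are assumptions made for this example (the dict just mirrors the body shown above); this is not the actual litellm code.

import copy

def flatten_extra_body(request_body: dict) -> dict:
    # Hypothetical helper: merge any nested 'extra_body' keys into the
    # top level of the request body. Explicit top-level keys win on conflict.
    body = copy.deepcopy(request_body)
    extra = body.pop("extra_body", None) or {}
    return {**extra, **body}

# Using the body from this report:
data = {
    "model": "gpt-image-1",
    "prompt": "A cute sea otter",
    "n": 1,
    "size": "1024x1024",
    "extra_body": {"partial_images": 2, "stream": True},
}
print(flatten_extra_body(data))
# {'partial_images': 2, 'stream': True, 'model': 'gpt-image-1',
#  'prompt': 'A cute sea otter', 'n': 1, 'size': '1024x1024'}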
What LiteLLM version are you on?
v1.77.5
Twitter / LinkedIn details
No response