Description
LiteLLM's cost calculation for OpenAI's gpt-image-1 model is incomplete and significantly underestimates actual costs by completely ignoring input token costs.
Current behavior:
The cost calculator computes only the cost of the generated output images, using a fixed per-pixel rate:

```python
# litellm/cost_calculator.py:1166
return cost_info["input_cost_per_pixel"] * height * width * n
```

What's missing:
LiteLLM completely ignores input token costs:
- Input text token costs: Every gpt-image-1 call includes a text prompt that should be billed at $1.00-$5.00/1M tokens (depending on tier)
- Input image token costs: When processing input images (for editing/variation), should be billed at $10.00/1M tokens
Expected behavior:
Cost calculation should be:
```python
total_cost = (
    prompt_tokens * text_token_rate          # Missing: input prompt costs
    + input_image_tokens * image_input_rate  # Missing: input image costs
    + current_output_image_cost              # Keep existing per-pixel calculation
)
```

Impact:
This causes significant cost underestimation since input token costs are often much higher than output image costs. Users get completely inaccurate billing estimates.
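The expected calculation above can be sketched as a small helper. This is an illustrative sketch only: the function name, parameter shape, and default rates are assumptions, not LiteLLM's actual API (the real fix would need to read tiered rates from the model cost map).

```python
# Hypothetical sketch of the proposed calculation -- names and default
# rates are illustrative, not LiteLLM's actual implementation.
def gpt_image_1_cost(
    prompt_tokens: int,
    input_image_tokens: int,
    output_image_cost: float,  # result of the existing per-pixel calculation
    text_rate_per_token: float = 5.00 / 1_000_000,    # $5.00 / 1M text tokens
    image_rate_per_token: float = 10.00 / 1_000_000,  # $10.00 / 1M image tokens
) -> float:
    # Sum all three cost components instead of only the output image cost.
    return (
        prompt_tokens * text_rate_per_token
        + input_image_tokens * image_rate_per_token
        + output_image_cost
    )

# Example: 100-token prompt, no input image, generated image billed at $0.04
cost = gpt_image_1_cost(100, 0, 0.04)  # 0.0405
```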
Root cause:
LiteLLM treats gpt-image-1 like DALL-E 3 (pure image generation model) instead of recognizing it's a multimodal model that processes both text and image inputs.
Reference:
According to OpenAI's official pricing (https://platform.openai.com/docs/pricing), gpt-image-1 has separate pricing sections for:
- Text tokens: $1.00-$5.00/1M tokens
- Image tokens: $10.00/1M input tokens
- Image generation: $0.011-$0.25 per image (current implementation covers this)
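Plugging these published rates into a hypothetical image-edit call shows the size of the gap (the token counts below are invented for illustration; real counts come from the API's usage object):

```python
# Hypothetical edit call: 50-token prompt, an input image that tokenizes
# to 2_000 image tokens, and one generated image billed at $0.04.
text_cost = 50 * (5.00 / 1_000_000)             # $0.00025
input_image_cost = 2_000 * (10.00 / 1_000_000)  # $0.02
output_image_cost = 0.04                        # all LiteLLM currently reports

actual = text_cost + input_image_cost + output_image_cost
undercount = actual - output_image_cost         # portion LiteLLM drops
```

With these assumed numbers, LiteLLM would report $0.04 against an actual $0.06025, silently dropping roughly a third of the bill.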
LiteLLM version: v1.75.9