
Commit 76d2592

[Feat] New provider - Azure AI Flux Image Generation (#13592)
* init files
* add AzureFoundryModelInfo
* fix api_version property
* add azure_ai img gen
* use AzureFoundryModelInfo
* get_base_image_generation_call_args
* add azure_ai/FLUX-1.1-pro
* add util for route_image_generation_cost_calculator
* docs azure ai flux
* fixes for flux
* fixes for AzureFoundryFluxImageGenerationConfig
* ruff fix
1 parent fb325cb commit 76d2592

File tree

17 files changed: +617 additions, -64 deletions

Lines changed: 266 additions & 0 deletions
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Azure AI Image Generation

Azure AI provides image generation with FLUX models from Black Forest Labs, producing high-quality images from text descriptions.

## Overview

| Property | Details |
|----------|---------|
| Description | Azure AI Image Generation uses FLUX models to generate high-quality images from text descriptions. |
| Provider Route on LiteLLM | `azure_ai/` |
| Provider Doc | [Azure AI FLUX Models ↗](https://techcommunity.microsoft.com/blog/azure-ai-foundry-blog/black-forest-labs-flux-1-kontext-pro-and-flux1-1-pro-now-available-in-azure-ai-f/4434659) |
| Supported Operations | [`/images/generations`](#image-generation) |
## Setup

### API Key & Base URL

```python showLineNumbers
# Set your Azure AI API credentials
import os
os.environ["AZURE_AI_API_KEY"] = "your-api-key-here"
os.environ["AZURE_AI_API_BASE"] = "your-azure-ai-endpoint"  # e.g., https://your-endpoint.eastus2.inference.ai.azure.com/
```

Get your API key and endpoint from [Azure AI Studio](https://ai.azure.com/).
## Supported Models

| Model Name | Description | Cost per Image |
|------------|-------------|----------------|
| `azure_ai/FLUX-1.1-pro` | Latest FLUX 1.1 Pro model for high-quality image generation | $0.04 |
| `azure_ai/FLUX.1-Kontext-pro` | FLUX 1 Kontext Pro model with enhanced context understanding | $0.04 |
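A quick sanity check on the pricing table above: cost scales linearly with image count. This is a simple arithmetic sketch; `estimated_cost` is a hypothetical helper, not a LiteLLM API.

```python
# Simple arithmetic sketch based on the pricing table above.
# `estimated_cost` is a hypothetical helper, not part of LiteLLM.
FLUX_PRICE_PER_IMAGE = 0.04  # USD per image for both FLUX models listed

def estimated_cost(num_images: int) -> float:
    """Estimated USD spend for generating `num_images` FLUX images."""
    return round(num_images * FLUX_PRICE_PER_IMAGE, 2)

print(estimated_cost(4))  # -> 0.16
```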
## Image Generation

### Usage - LiteLLM Python SDK

<Tabs>
<TabItem value="basic" label="Basic Usage">

```python showLineNumbers title="Basic Image Generation"
import litellm
import os

# Set your API credentials
os.environ["AZURE_AI_API_KEY"] = "your-api-key-here"
os.environ["AZURE_AI_API_BASE"] = "your-azure-ai-endpoint"

# Generate a single image
response = litellm.image_generation(
    model="azure_ai/FLUX.1-Kontext-pro",
    prompt="A cute baby sea otter swimming in crystal clear water",
    api_base=os.environ["AZURE_AI_API_BASE"],
    api_key=os.environ["AZURE_AI_API_KEY"]
)

print(response.data[0].url)
```

</TabItem>

<TabItem value="flux11" label="FLUX 1.1 Pro">

```python showLineNumbers title="FLUX 1.1 Pro Image Generation"
import litellm
import os

# Set your API credentials
os.environ["AZURE_AI_API_KEY"] = "your-api-key-here"
os.environ["AZURE_AI_API_BASE"] = "your-azure-ai-endpoint"

# Generate image with FLUX 1.1 Pro
response = litellm.image_generation(
    model="azure_ai/FLUX-1.1-pro",
    prompt="A futuristic cityscape at night with neon lights and flying cars",
    api_base=os.environ["AZURE_AI_API_BASE"],
    api_key=os.environ["AZURE_AI_API_KEY"]
)

print(response.data[0].url)
```

</TabItem>

<TabItem value="async" label="Async Usage">

```python showLineNumbers title="Async Image Generation"
import litellm
import asyncio
import os

async def generate_image():
    # Set your API credentials
    os.environ["AZURE_AI_API_KEY"] = "your-api-key-here"
    os.environ["AZURE_AI_API_BASE"] = "your-azure-ai-endpoint"

    # Generate image asynchronously
    response = await litellm.aimage_generation(
        model="azure_ai/FLUX.1-Kontext-pro",
        prompt="A beautiful sunset over mountains with vibrant colors",
        api_base=os.environ["AZURE_AI_API_BASE"],
        api_key=os.environ["AZURE_AI_API_KEY"],
        n=1,
    )

    print(response.data[0].url)
    return response

# Run the async function
asyncio.run(generate_image())
```

</TabItem>

<TabItem value="advanced" label="Advanced Parameters">

```python showLineNumbers title="Advanced Image Generation with Parameters"
import litellm
import os

# Set your API credentials
os.environ["AZURE_AI_API_KEY"] = "your-api-key-here"
os.environ["AZURE_AI_API_BASE"] = "your-azure-ai-endpoint"

# Generate image with additional parameters
response = litellm.image_generation(
    model="azure_ai/FLUX-1.1-pro",
    prompt="A majestic dragon soaring over a medieval castle at dawn",
    api_base=os.environ["AZURE_AI_API_BASE"],
    api_key=os.environ["AZURE_AI_API_KEY"],
    n=1,
    size="1024x1024",
    quality="standard"
)

for image in response.data:
    print(f"Generated image URL: {image.url}")
```

</TabItem>
</Tabs>
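The snippets above print `response.data[0].url`. Depending on the deployment, an OpenAI-compatible images endpoint may return either a hosted `url` or inline base64 in `b64_json`; the helper below (hypothetical, not part of LiteLLM) persists either shape to disk.

```python
# Hypothetical utility, not part of LiteLLM: saves one item of
# `response.data` whether the endpoint returned a URL or base64 bytes.
import base64
import urllib.request

def save_image(image, path: str) -> str:
    """Write one item of `response.data` to `path` and return the path."""
    b64 = getattr(image, "b64_json", None)
    if b64:
        # Inline base64 payload: decode and write bytes directly
        with open(path, "wb") as f:
            f.write(base64.b64decode(b64))
    else:
        # Hosted URL payload: download the image
        urllib.request.urlretrieve(image.url, path)
    return path

# e.g. save_image(response.data[0], "otter.png")
```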
### Usage - LiteLLM Proxy Server

#### 1. Configure your config.yaml

```yaml showLineNumbers title="Azure AI Image Generation Configuration"
model_list:
  - model_name: azure-flux-kontext
    litellm_params:
      model: azure_ai/FLUX.1-Kontext-pro
      api_key: os.environ/AZURE_AI_API_KEY
      api_base: os.environ/AZURE_AI_API_BASE
    model_info:
      mode: image_generation

  - model_name: azure-flux-11-pro
    litellm_params:
      model: azure_ai/FLUX-1.1-pro
      api_key: os.environ/AZURE_AI_API_KEY
      api_base: os.environ/AZURE_AI_API_BASE
    model_info:
      mode: image_generation

general_settings:
  master_key: sk-1234
```

#### 2. Start LiteLLM Proxy Server

```bash showLineNumbers title="Start LiteLLM Proxy Server"
litellm --config /path/to/config.yaml

# RUNNING on http://0.0.0.0:4000
```

#### 3. Make requests to the proxy
<Tabs>
<TabItem value="openai-sdk" label="OpenAI SDK">

```python showLineNumbers title="Azure AI Image Generation via Proxy - OpenAI SDK"
from openai import OpenAI

# Initialize client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # Your proxy URL
    api_key="sk-1234"  # Your proxy API key
)

# Generate image with FLUX Kontext Pro
response = client.images.generate(
    model="azure-flux-kontext",
    prompt="A serene Japanese garden with cherry blossoms and a peaceful pond",
    n=1,
    size="1024x1024"
)

print(response.data[0].url)
```

</TabItem>

<TabItem value="litellm-sdk" label="LiteLLM SDK">

```python showLineNumbers title="Azure AI Image Generation via Proxy - LiteLLM SDK"
import litellm

# Configure LiteLLM to use your proxy
response = litellm.image_generation(
    model="litellm_proxy/azure-flux-11-pro",
    prompt="A cyberpunk warrior in a neon-lit alleyway",
    api_base="http://localhost:4000",
    api_key="sk-1234"
)

print(response.data[0].url)
```

</TabItem>

<TabItem value="curl" label="cURL">

```bash showLineNumbers title="Azure AI Image Generation via Proxy - cURL"
curl --location 'http://localhost:4000/v1/images/generations' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer sk-1234' \
--data '{
    "model": "azure-flux-kontext",
    "prompt": "A cozy coffee shop interior with warm lighting and rustic wooden furniture",
    "n": 1,
    "size": "1024x1024"
}'
```

</TabItem>
</Tabs>
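For scripts that cannot shell out to cURL, the same proxy request can be made with the Python standard library. This uses the endpoint, payload, and placeholder key shown in the cURL example; `build_payload` and `generate_via_proxy` are illustrative helpers, not LiteLLM APIs.

```python
# Illustrative stdlib client for the proxy endpoint shown above.
# Not a LiteLLM API; uses the guide's placeholder URL and key.
import json
import urllib.request

def build_payload(prompt, model="azure-flux-kontext", n=1, size="1024x1024"):
    """Build the JSON body for POST /v1/images/generations."""
    return {"model": model, "prompt": prompt, "n": n, "size": size}

def generate_via_proxy(prompt, base_url="http://localhost:4000", api_key="sk-1234"):
    """Send the request to a running LiteLLM proxy and return parsed JSON."""
    req = urllib.request.Request(
        f"{base_url}/v1/images/generations",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```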
## Supported Parameters

Azure AI Image Generation supports the following OpenAI-compatible parameters:

| Parameter | Type | Description | Default | Example |
|-----------|------|-------------|---------|---------|
| `prompt` | string | Text description of the image to generate | Required | `"A sunset over the ocean"` |
| `model` | string | The FLUX model to use for generation | Required | `"azure_ai/FLUX.1-Kontext-pro"` |
| `n` | integer | Number of images to generate (1-4) | `1` | `2` |
| `size` | string | Image dimensions | `"1024x1024"` | `"512x512"`, `"1024x1024"` |
| `api_base` | string | Your Azure AI endpoint URL | Required | `"https://your-endpoint.eastus2.inference.ai.azure.com/"` |
| `api_key` | string | Your Azure AI API key | Required | Environment variable or direct value |
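The constraints in the table can be checked client-side before spending an API call. A minimal sketch follows (hypothetical helper; the server remains the source of truth for what it accepts):

```python
# Hypothetical client-side check mirroring the parameter table above.
# The server is authoritative; this only catches obvious mistakes early.
SUPPORTED_SIZES = {"512x512", "1024x1024"}  # sizes shown in the table

def validate_image_params(prompt: str, n: int = 1, size: str = "1024x1024") -> dict:
    """Validate request parameters before calling litellm.image_generation."""
    if not prompt:
        raise ValueError("prompt is required")
    if not 1 <= n <= 4:
        raise ValueError("n must be between 1 and 4")
    if size not in SUPPORTED_SIZES:
        raise ValueError(f"unsupported size: {size}")
    return {"prompt": prompt, "n": n, "size": size}
```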
## Getting Started

1. Create an account at [Azure AI Studio](https://ai.azure.com/)
2. Deploy a FLUX model in your Azure AI Studio workspace
3. Get your API key and endpoint from the deployment details
4. Set your `AZURE_AI_API_KEY` and `AZURE_AI_API_BASE` environment variables
5. Start generating images using LiteLLM
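Step 4 above can be verified programmatically before the first request. A simple sketch, using the variable names from the Setup section:

```python
# Simple sketch: confirm the environment variables from the Setup
# section are present before calling LiteLLM.
import os

def check_azure_ai_env() -> list:
    """Return the names of any required Azure AI variables that are unset."""
    required = ["AZURE_AI_API_KEY", "AZURE_AI_API_BASE"]
    return [name for name in required if not os.environ.get(name)]

missing = check_azure_ai_env()
if missing:
    print(f"Set these before calling LiteLLM: {missing}")
```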
## Additional Resources

- [Azure AI Studio Documentation](https://docs.microsoft.com/en-us/azure/ai-services/)
- [FLUX Models Announcement](https://techcommunity.microsoft.com/blog/azure-ai-foundry-blog/black-forest-labs-flux-1-kontext-pro-and-flux1-1-pro-now-available-in-azure-ai-f/4434659)

docs/my-website/sidebars.js

Lines changed: 8 additions & 1 deletion

```diff
@@ -372,7 +372,14 @@ const sidebars = {
         "providers/azure/azure_embedding",
       ]
     },
-    "providers/azure_ai",
+    {
+      type: "category",
+      label: "Azure AI",
+      items: [
+        "providers/azure_ai",
+        "providers/azure_ai_img",
+      ]
+    },
     {
       type: "category",
       label: "Vertex AI",
```

litellm/cost_calculator.py

Lines changed: 9 additions & 50 deletions

```diff
@@ -32,9 +32,6 @@
 from litellm.llms.bedrock.cost_calculation import (
     cost_per_token as bedrock_cost_per_token,
 )
-from litellm.llms.bedrock.image.cost_calculator import (
-    cost_calculator as bedrock_image_cost_calculator,
-)
 from litellm.llms.databricks.cost_calculator import (
     cost_per_token as databricks_cost_per_token,
 )
@@ -60,9 +57,6 @@
     cost_per_token as google_cost_per_token,
 )
 from litellm.llms.vertex_ai.cost_calculator import cost_router as google_cost_router
-from litellm.llms.vertex_ai.image_generation.cost_calculator import (
-    cost_calculator as vertex_ai_image_cost_calculator,
-)
 from litellm.responses.utils import ResponseAPILoggingUtils
 from litellm.types.llms.openai import (
     HttpxBinaryResponseContent,
@@ -768,50 +762,15 @@ def completion_cost(  # noqa: PLR0915
         )
     if CostCalculatorUtils._call_type_has_image_response(call_type):
         ### IMAGE GENERATION COST CALCULATION ###
-        if custom_llm_provider == "vertex_ai":
-            if isinstance(completion_response, ImageResponse):
-                return vertex_ai_image_cost_calculator(
-                    model=model,
-                    image_response=completion_response,
-                )
-        elif custom_llm_provider == "bedrock":
-            if isinstance(completion_response, ImageResponse):
-                return bedrock_image_cost_calculator(
-                    model=model,
-                    size=size,
-                    image_response=completion_response,
-                    optional_params=optional_params,
-                )
-            raise TypeError(
-                "completion_response must be of type ImageResponse for bedrock image cost calculation"
-            )
-        elif custom_llm_provider == litellm.LlmProviders.RECRAFT.value:
-            from litellm.llms.recraft.cost_calculator import (
-                cost_calculator as recraft_image_cost_calculator,
-            )
-
-            return recraft_image_cost_calculator(
-                model=model,
-                image_response=completion_response,
-            )
-        elif custom_llm_provider == litellm.LlmProviders.GEMINI.value:
-            from litellm.llms.gemini.image_generation.cost_calculator import (
-                cost_calculator as gemini_image_cost_calculator,
-            )
-
-            return gemini_image_cost_calculator(
-                model=model,
-                image_response=completion_response,
-            )
-        else:
-            return default_image_cost_calculator(
-                model=model,
-                quality=quality,
-                custom_llm_provider=custom_llm_provider,
-                n=n,
-                size=size,
-                optional_params=optional_params,
-            )
+        return CostCalculatorUtils.route_image_generation_cost_calculator(
+            model=model,
+            custom_llm_provider=custom_llm_provider,
+            completion_response=completion_response,
+            quality=quality,
+            n=n,
+            size=size,
+            optional_params=optional_params,
+        )
     elif (
         call_type == CallTypes.speech.value
         or call_type == CallTypes.aspeech.value
```
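The refactor above collapses a per-provider `if/elif` chain into a single routing helper. A simplified sketch of that dispatch-table pattern follows; it is illustrative only, not LiteLLM's actual `route_image_generation_cost_calculator` implementation.

```python
# Illustrative dispatch-table sketch of the routing pattern above.
# Not LiteLLM's actual implementation; prices and names are placeholders.
from typing import Callable, Dict

def _default_cost(model: str, **kwargs) -> float:
    """Fallback calculator: flat placeholder price per image."""
    return kwargs.get("n", 1) * 0.04

# Provider name -> provider-specific cost calculator
_IMAGE_COST_ROUTES: Dict[str, Callable[..., float]] = {
    "azure_ai": _default_cost,
}

def route_image_cost(model: str, custom_llm_provider: str, **kwargs) -> float:
    """Dispatch to a provider-specific calculator, falling back to the default."""
    calculator = _IMAGE_COST_ROUTES.get(custom_llm_provider, _default_cost)
    return calculator(model=model, **kwargs)
```

Centralizing the routing keeps `completion_cost` free of provider-specific imports, which is exactly what the deleted import lines in the diff reflect.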
