Releases: BerriAI/litellm
v1.28.1
What's Changed
- (feat) add user_api_key_alias in litellm_params metadata by @lucebert in #2217
- [FEAT] mistral azure - use with env vars + added pricing by @ishaan-jaff in #2247
- build(ui): fix admin viewer issue by @krrishdholakia in #2249
- [FEAT] proxy add pagination on /user/info endpoint (Admin UI does not load all users) by @ishaan-jaff in #2255
New Contributors
- @lucebert made their first contribution in #2217
Full Changelog: v1.28.0...v1.28.1
v1.28.0
Full Changelog: v1.27.15...v1.28.0
v1.27.15
What's Changed
- [FIX] Proxy - Set different locations per vertex ai deployment on litellm proxy by @ishaan-jaff in #2234
- fix(proxy_server.py): introduces a beta endpoint for admin to view global spend by @krrishdholakia in #2236
- [FEAT] Track which models support function calling by @ishaan-jaff in #2241
- [FIX] Race Condition with Custom Callbacks where Async Streaming got triggered twice by @ishaan-jaff in #2240
- [WIP] Allow proxy admin to add others to view global spend by @krrishdholakia in #2231
- 👉 Support for Mistral AI tool calling is live now: https://docs.litellm.ai/docs/providers/mistral
- Check whether a model supports function calling / parallel function calling (see the sketch below): https://docs.litellm.ai/docs/completion/function_call
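A minimal sketch of that check using `litellm.supports_function_calling` (the helper the linked docs describe); the model names are just examples and must exist in litellm's model map:

```python
import litellm

# Returns True if litellm's model map marks the model as supporting
# function/tool calling; raises if the model isn't in the map.
print(litellm.supports_function_calling(model="gpt-3.5-turbo"))                 # expected True
print(litellm.supports_function_calling(model="mistral/mistral-large-latest"))  # expected True
```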
Full Changelog: v1.27.14...v1.27.15
v1.27.14
What's Changed
- [Docs] Proxy - Pass vertex_params by @ishaan-jaff in #2229
- [FEAT] Clickhouse - Create Analytics Tables by @ishaan-jaff in #2223
- Drop None values from server streaming response by @krrishdholakia in #2225
- [FEAT] GET /daily_metrics by @ishaan-jaff in #2226
- feat(proxy_server.py): adds ui_access_mode to control access to proxy ui by @krrishdholakia in #2230
Full Changelog: v1.27.10...v1.27.14
v1.27.10
What's Changed
- [FIX] using mistral on azure ai studio by @ishaan-jaff in #2216
- [FIX] Minor fix for logging proxy logs on clickhouse by @ishaan-jaff in #2219
- fix(ui/create_key_button.tsx): enable user to set key budget duration by @krrishdholakia in #2220
Full Changelog: v1.27.9...v1.27.10
v1.27.9
What's Changed
- Litellm enforce team limits by @krrishdholakia in #2208
- fix(utils.py): fix compatibility between together_ai and openai-python by @zu1k in #2213
- 🐛 fix: Ollama vision model call arguments (e.g., llava) by @Lunik in #2201
New Contributors
- @zu1k made their first contribution in #2213
- @Lunik made their first contribution in #2201
Full Changelog: v1.27.8...v1.27.9
v1.27.8
What's Changed
- fix(utils.py): support returning caching streaming response for function calling streaming calls by @krrishdholakia in #2203
- build(proxy_server.py): fix /spend/logs query bug by @krrishdholakia in #2212
Full Changelog: v1.27.7...v1.27.8
v1.27.7
Use ClickHouse DB for low-latency LLM Analytics / Spend Reports
(sub-1s analytics with 100M logs)
Getting started with ClickHouse DB + LiteLLM Proxy
Docs + Docker compose for getting started with clickhouse: https://docs.litellm.ai/docs/proxy/logging#logging-proxy-inputoutput---clickhouse
Step 1: Create a config.yaml file and set litellm_settings: success_callback

```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
litellm_settings:
  success_callback: ["clickhouse"]
```

Step 2: Set required env variables for ClickHouse
Env variables for self-hosted ClickHouse:

```shell
export CLICKHOUSE_HOST="localhost"
export CLICKHOUSE_PORT="8123"
export CLICKHOUSE_USERNAME="admin"
export CLICKHOUSE_PASSWORD="admin"
```

Step 3: Start the proxy, make a test request
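A sketch of Step 3, assuming the proxy is started with `litellm --config config.yaml` and listens on the default port (4000 on recent proxy versions; adjust for your deployment). The test request goes through the OpenAI client pointed at the proxy:

```python
import openai

# Point the OpenAI client at the local LiteLLM proxy.
# "sk-1234" is a placeholder; use your proxy's master key if one is set.
client = openai.OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

# Routed to gpt-3.5-turbo per config.yaml; the "clickhouse"
# success_callback logs the request/response to ClickHouse.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello from litellm"}],
)
print(response)
```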
New Models
Mistral on Azure AI Studio
Sample Usage
Ensure you have the /v1 in your api_base
```python
from litellm import completion

response = completion(
    model="mistral/Mistral-large-dfgfj",
    api_base="https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1",
    api_key="JGbKodRcTp****",  # your Azure AI Studio key
    messages=[{"role": "user", "content": "hello from litellm"}],
)
print(response)
```

[LiteLLM Proxy] Using Mistral Models
Set this on your litellm proxy config.yaml
Ensure you have the /v1 in your api_base
```yaml
model_list:
  - model_name: mistral
    litellm_params:
      model: mistral/Mistral-large-dfgfj
      api_base: https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1
      api_key: JGbKodRcTp****
```

What's Changed
- [Docs] use azure ai studio + mistral large by @ishaan-jaff in #2205
- [Feat] Start Self hosted clickhouse server by @ishaan-jaff in #2206
- [FEAT] Admin UI - View /spend/logs from clickhouse data by @ishaan-jaff in #2210
- [Docs] Use Clickhouse DB + Docker compose by @ishaan-jaff in #2211
Full Changelog: v1.27.6...v1.27.7
v1.27.6
New Models
- azure/text-embedding-3-large
- azure/text-embedding-3-small
- mistral/mistral-large-latest
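As a minimal sketch, one of the new embedding models can be called through `litellm.embedding`; the Azure endpoint, key, deployment name, and api_version below are placeholders:

```python
import litellm

# Placeholders: swap in your Azure resource's endpoint, key, and the
# deployment name you gave text-embedding-3-small.
response = litellm.embedding(
    model="azure/text-embedding-3-small",
    input=["hello from litellm"],
    api_base="https://my-endpoint.openai.azure.com",
    api_key="my-azure-api-key",
    api_version="2023-07-01-preview",
)
print(response)
```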
Log LLM Output in ClickHouse DB
```python
import asyncio
import litellm

# Requires the CLICKHOUSE_* env variables from the v1.27.7 notes above.
litellm.success_callback = ["clickhouse"]

async def main():
    await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "This is a test"}],
        max_tokens=10,
        temperature=0.7,
        user="ishaan-2",
    )

asyncio.run(main())
```

What's Changed
- [FEAT] add cost for azure/text-embedding-3-large, azure/text-embedding-3-small by @ishaan-jaff in #2198
- [FEAT] Use Logging on clickhouse by @ishaan-jaff in #2187
- Litellm custom callback fix by @krrishdholakia in #2202
Full Changelog: v1.27.4...v1.27.6
v1.27.4
What's Changed
- Allow end-users to opt out of llm api calls by @krrishdholakia in #2174
- [Docs] open router - clarify we support all models by @ishaan-jaff in #2186
- (docs) using openai compatible endpoints by @ishaan-jaff in #2189
- [Fix] Fix health check when API base set for OpenAI compatible models by @ishaan-jaff in #2188
- fix(proxy_server.py): allow user to set team tpm/rpm limits/budget/models by @krrishdholakia in #2183 (see the sketch after this list)
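A hedged sketch of setting those team limits through the proxy's `/team/new` endpoint; the port, keys, and limit values below are placeholder assumptions:

```python
import requests

# Create a team with spend/rate limits via the proxy admin API.
# Port 4000 and "sk-1234" are assumptions; use your proxy's address
# and master key.
resp = requests.post(
    "http://0.0.0.0:4000/team/new",
    headers={"Authorization": "Bearer sk-1234"},
    json={
        "team_alias": "my-team",
        "tpm_limit": 1000,    # tokens per minute
        "rpm_limit": 10,      # requests per minute
        "max_budget": 10.0,   # USD
        "models": ["gpt-3.5-turbo"],
    },
)
print(resp.json())
```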
Full Changelog: v1.27.1...v1.27.4

