Releases: BerriAI/litellm

v1.28.1

29 Feb 21:48
e044d63

Full Changelog: v1.28.0...v1.28.1

v1.28.0

29 Feb 06:00

Full Changelog: v1.27.15...v1.28.0

v1.27.15

29 Feb 03:43

What's Changed

Check if a model supports function calling, parallel function calling https://docs.litellm.ai/docs/completion/function_call

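LiteLLM ships helper functions for this check. A minimal sketch using the two helpers documented at the link above:

import litellm

# True if the model supports function calling
assert litellm.supports_function_calling(model="gpt-3.5-turbo") == True

# True if the model supports parallel function calling
assert litellm.supports_parallel_function_calling(model="gpt-4-turbo-preview") == True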

Full Changelog: v1.27.14...v1.27.15

v1.27.14

28 Feb 05:33
0591b4b

Full Changelog: v1.27.10...v1.27.14

v1.27.10

27 Feb 18:52
b78a6d8

Full Changelog: v1.27.9...v1.27.10

v1.27.9

27 Feb 15:43
62efbd7

What's Changed

  • Litellm enforce team limits by @krrishdholakia in #2208
  • fix(utils.py): fix compatibility between together_ai and openai-python by @zu1k in #2213
  • 🐛 fix: Ollama vision models call arguments (like llava) by @Lunik in #2201

Full Changelog: v1.27.8...v1.27.9

v1.27.8

27 Feb 06:41

What's Changed

  • fix(utils.py): support returning caching streaming response for function calling streaming calls by @krrishdholakia in #2203
  • build(proxy_server.py): fix /spend/logs query bug by @krrishdholakia in #2212

Full Changelog: v1.27.7...v1.27.8

v1.27.7

27 Feb 03:33

Use ClickHouse DB for low-latency LLM Analytics / Spend Reports

(sub-1s analytics with 100M logs)

Getting started with ClickHouse DB + LiteLLM Proxy

Docs + Docker Compose for getting started with ClickHouse: https://docs.litellm.ai/docs/proxy/logging#logging-proxy-inputoutput---clickhouse

Step 1: Create a config.yaml file and set litellm_settings: success_callback

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo

litellm_settings:
  success_callback: ["clickhouse"]

Step 2: Set required env variables for ClickHouse

Env variables for a self-hosted ClickHouse instance:

export CLICKHOUSE_HOST="localhost"
export CLICKHOUSE_PORT="8123"
export CLICKHOUSE_USERNAME="admin"
export CLICKHOUSE_PASSWORD="admin"

Step 3: Start the proxy, make a test request
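
Start the proxy with: litellm --config config.yaml

A minimal sketch of a test request through the proxy, using the OpenAI Python client; the base_url assumes the proxy is listening on port 4000 locally (adjust to wherever yours runs):

import openai

# point the OpenAI client at the LiteLLM proxy
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello from litellm"}],
)
print(response)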

New Models

Mistral on Azure AI Studio

Sample Usage

Ensure you have the /v1 in your api_base

from litellm import completion

response = completion(
    model="mistral/Mistral-large-dfgfj",
    api_base="https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1",
    api_key="JGbKodRcTp****",
    messages=[
        {"role": "user", "content": "hello from litellm"}
    ],
)
print(response)

[LiteLLM Proxy] Using Mistral Models

Set this on your litellm proxy config.yaml

Ensure you have the /v1 in your api_base

model_list:
  - model_name: mistral
    litellm_params:
      model: mistral/Mistral-large-dfgfj
      api_base: https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1
      api_key: JGbKodRcTp****
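
Once the proxy is running, requests naming the configured model ("mistral") are routed to the Azure AI Studio deployment. A minimal sketch, with the same assumption about the proxy address as in the ClickHouse section above:

import openai

client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="mistral",  # the model_name from config.yaml
    messages=[{"role": "user", "content": "hello from litellm"}],
)
print(response)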

Full Changelog: v1.27.6...v1.27.7

v1.27.6

26 Feb 21:32

New Models

  • azure/text-embedding-3-large
  • azure/text-embedding-3-small
  • mistral/mistral-large-latest
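
A minimal sketch of calling one of the new Azure embedding models; the AZURE_* env variables are LiteLLM's standard Azure settings, and the resource and key values below are placeholders:

import os
from litellm import embedding

os.environ["AZURE_API_KEY"] = "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "https://your-resource.openai.azure.com/"
os.environ["AZURE_API_VERSION"] = "2023-07-01-preview"

# model format for Azure is azure/<your-deployment-name>
response = embedding(
    model="azure/text-embedding-3-large",
    input=["hello from litellm"],
)
print(response)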

Log LLM Output in ClickHouse DB

import asyncio
import litellm

litellm.success_callback = ["clickhouse"]

async def main():
    await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "This is a test"}],
        max_tokens=10,
        temperature=0.7,
        user="ishaan-2",
    )

asyncio.run(main())

Full Changelog: v1.27.4...v1.27.6

v1.27.4

25 Feb 11:20

Full Changelog: v1.27.1...v1.27.4