@@ -14,6 +14,7 @@ import TabItem from '@theme/TabItem';
| Meta/Llama | `vertex_ai/meta/{MODEL}` | [Vertex AI - Meta Models](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/llama) |
| Mistral | `vertex_ai/mistral-*` | [Vertex AI - Mistral Models](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/mistral) |
| AI21 (Jamba) | `vertex_ai/jamba-*` | [Vertex AI - AI21 Models](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/ai21) |
| Qwen | `vertex_ai/qwen/*` | [Vertex AI - Qwen Models](https://cloud.google.com/vertex-ai/generative-ai/docs/maas/qwen) |
| Model Garden | `vertex_ai/openai/{MODEL_ID}` or `vertex_ai/{MODEL_ID}` | [Vertex Model Garden](https://cloud.google.com/model-garden?hl=en) |

## Vertex AI - Anthropic (Claude)
@@ -571,6 +572,92 @@ curl --location 'http://0.0.0.0:4000/chat/completions' \
</Tabs>


## VertexAI Qwen API

| Property | Details |
|----------|---------|
| Provider Route | `vertex_ai/qwen/{MODEL}` |
| Vertex Documentation | [Vertex AI - Qwen Models](https://cloud.google.com/vertex-ai/generative-ai/docs/maas/qwen) |

**LiteLLM supports all Vertex AI Qwen models.** Use the `vertex_ai/qwen/` prefix for all Vertex AI Qwen models.

| Model Name | Usage |
|------------------|------------------------------|
| vertex_ai/qwen/qwen3-coder-480b-a35b-instruct-maas | `completion('vertex_ai/qwen/qwen3-coder-480b-a35b-instruct-maas', messages)` |
| vertex_ai/qwen/qwen3-235b-a22b-instruct-2507-maas | `completion('vertex_ai/qwen/qwen3-235b-a22b-instruct-2507-maas', messages)` |

#### Usage

<Tabs>
<TabItem value="sdk" label="SDK">

```python
from litellm import completion
import os

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = ""

model = "qwen/qwen3-coder-480b-a35b-instruct-maas"

vertex_ai_project = "your-vertex-project" # can also set this as os.environ["VERTEXAI_PROJECT"]
vertex_ai_location = "your-vertex-location" # can also set this as os.environ["VERTEXAI_LOCATION"]

response = completion(
    model="vertex_ai/" + model,
    messages=[{"role": "user", "content": "hi"}],
    vertex_ai_project=vertex_ai_project,
    vertex_ai_location=vertex_ai_location,
)
print("\nModel Response", response)
```
</TabItem>
<TabItem value="proxy" label="Proxy">

**1. Add to config**

```yaml
model_list:
  - model_name: vertex-qwen
    litellm_params:
      model: vertex_ai/qwen/qwen3-coder-480b-a35b-instruct-maas
      vertex_ai_project: "my-test-project"
      vertex_ai_location: "us-east1"
  - model_name: vertex-qwen
    litellm_params:
      model: vertex_ai/qwen/qwen3-coder-480b-a35b-instruct-maas
      vertex_ai_project: "my-test-project"
      vertex_ai_location: "us-west1"
```

**2. Start proxy**

```bash
litellm --config /path/to/config.yaml

# RUNNING at http://0.0.0.0:4000
```

**3. Test it!**

```bash
curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data '{
    "model": "vertex-qwen", # 👈 the `model_name` in config
    "messages": [
        {
            "role": "user",
            "content": "what llm are you"
        }
    ]
}'
```

</TabItem>
</Tabs>

## Model Garden

:::tip