Description
What happened?
I'm trying to define a guardrail rule in my LiteLLM proxy config to deny usage of a specific tool (web_search_preview) for Azure OpenAI models.
According to the documentation, it should be possible to specify guardrails (tool_permission) in the config.yaml file.
However, no matter how I structure the configuration, the tool is still allowed.
I have tried multiple variations of defining the guardrail, including regex and wildcard patterns that should match any string as the tool_name, but the tool is still allowed. Below is one of the variations of my config file.
config.yaml:

```yaml
model_list:
  - model_name: gpt-5-mini
    litellm_params:
      model: azure/responses/gpt-5-mini
      api_key: os.environ/AZURE_RESPONSES_OPENAI_API_KEY
      api_base: url
      api_version: version
  - model_name: gpt-4o
    litellm_params:
      model: azure/responses/gpt-5.1
      api_key: os.environ/AZURE_RESPONSES_OPENAI_API_KEY
      api_base: url
      api_version: version

guardrails:
  - guardrail_name: block_web_search
    litellm_params:
      guardrail: tool_permission
      mode: "pre_call"
      rules:
        - id: "block_web_search_tool"
          tool_name: "web_search_preview"
          decision: "deny"
      default_action: "allow"
      on_disallowed_action: "block"
      default_on: true
```

Does tool_name match the tool "type" field, or should it match something else?
Has anyone successfully restricted a specific tool for Azure OpenAI models? Am I defining the guardrail correctly?
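For reference, this is the shape of the request I would expect the rule to have to match. This is a sketch, not taken from LiteLLM internals: the payload follows the public OpenAI Responses API, where built-in tools like web search are identified only by a `type` field, with no separate `name` field, which is why I suspect `tool_name` may need to match `type` here.

```python
# Hypothetical request body sent to the proxy's Responses endpoint.
# Built-in Responses API tools are identified by "type" alone, so a
# tool_permission rule would presumably have to match that string.
payload = {
    "model": "gpt-5-mini",
    "input": "What happened in the news today?",
    "tools": [
        {"type": "web_search_preview"}  # the tool the guardrail should deny
    ],
}

# The only identifying string available for the rule to match:
print(payload["tools"][0]["type"])  # web_search_preview
```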
Environment:
Running in: LiteLLM Proxy Docker container
Model: Azure OpenAI (gpt-4o-mini)
Tool being restricted: web_search_preview
Relevant log output
Are you a ML Ops Team?
No
What LiteLLM version are you on?
v1.80.0