docs/my-website/docs/proxy/call_hooks.md
**Understanding Callback Hooks?** Check out our [Callback Management Guide](../observability/callback_management.md) to understand the differences between proxy-specific hooks like `async_pre_call_hook` and general logging hooks like `async_log_success_event`.
:::
## Which Hook Should I Use?
| Hook | Use Case | When It Runs |
|------|----------|--------------|
| `async_pre_call_hook` | Modify the incoming request before it is sent to the model | Before the LLM API call is made |
| `async_moderation_hook` | Run checks on the input in parallel with the LLM API call | In parallel with the LLM API call |
| `async_post_call_success_hook` | Modify the outgoing response (non-streaming) | After a successful LLM API call, for non-streaming responses |
| `async_post_call_streaming_hook` | Modify the outgoing response (streaming) | After a successful LLM API call, for streaming responses |
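As a minimal sketch of the first row above: a pre-call hook receives the request body as a mutable `data` dict and returns the (possibly modified) dict before the model is called. The class name, the injected metadata key, and the placeholder arguments below are all illustrative, not the proxy's actual wiring; in a real deployment the method would live on a registered custom handler class.

```python
import asyncio


class MyHookSketch:
    """Hypothetical stand-in for a custom proxy handler class."""

    async def async_pre_call_hook(self, user_api_key_dict, cache, data, call_type):
        # `data` is the incoming request body; modify it and return it,
        # and the proxy forwards the modified body to the model.
        data.setdefault("metadata", {})["hook_ran"] = True
        return data


async def main():
    hook = MyHookSketch()
    request = {"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "hi"}]}
    # None placeholders stand in for the auth/cache objects the proxy would pass.
    modified = await hook.async_pre_call_hook(
        user_api_key_dict=None, cache=None, data=request, call_type="completion"
    )
    print(modified["metadata"]["hook_ran"])  # → True


asyncio.run(main())
```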
For a complete example, see our [parallel request rate limiter](https://github.com/BerriAI/litellm/blob/main/litellm/proxy/hooks/parallel_request_limiter.py).
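The parallel behaviour of `async_moderation_hook` can be sketched with plain `asyncio`: the moderation check and a stand-in model call run concurrently, and a failed check aborts the request. Every name here (`fake_llm_call`, `moderation_hook`, `call_with_moderation`) is hypothetical, illustrating the concurrency model rather than LiteLLM internals.

```python
import asyncio


async def fake_llm_call(data):
    # Stand-in for the real model call.
    await asyncio.sleep(0.01)
    return {"choices": [{"message": {"content": "ok"}}]}


async def moderation_hook(data):
    # Reject the request if the latest user message contains a banned term.
    text = data["messages"][-1]["content"]
    if "forbidden" in text:
        raise ValueError("blocked by moderation")


async def call_with_moderation(data):
    # Both coroutines run concurrently; if the moderation hook raises,
    # gather propagates the error and the model response is discarded.
    response, _ = await asyncio.gather(fake_llm_call(data), moderation_hook(data))
    return response


data = {"messages": [{"role": "user", "content": "hello"}]}
result = asyncio.run(call_with_moderation(data))
print(result["choices"][0]["message"]["content"])  # → ok
```

Because the check overlaps the model call instead of preceding it, moderation adds no latency on the happy path, at the cost of doing model work that is thrown away when a request is blocked.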