Dev/steven/return object #59
base: main
Conversation
Pull request overview
This PR makes the OpenAI response object directly accessible through the GuardrailsResponse wrapper, eliminating the need to access llm_response and making it a true drop-in replacement for OpenAI clients.
- Implemented a transparent proxy pattern using `__getattr__` to delegate attribute access to the underlying OpenAI response
- Added a deprecation warning for backward compatibility when `llm_response` is accessed (warns once per instance using a `WeakValueDictionary`)
- Updated all examples and documentation to use the direct attribute access pattern (`response.output_text` instead of `response.llm_response.output_text`)
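The proxy pattern described above can be sketched roughly as follows. This is an illustrative reconstruction, not the library's actual code: the PR tracks warned instances with a `WeakValueDictionary`, which is simplified here to a per-instance flag.

```python
import warnings
from typing import Any


class GuardrailsResponse:
    """Sketch of a transparent proxy over an OpenAI response object."""

    def __init__(self, llm_response: Any) -> None:
        # Stored via object.__setattr__ under a private name so that
        # __getattr__ is never triggered while looking it up.
        object.__setattr__(self, "_llm_response", llm_response)
        object.__setattr__(self, "_warned", False)

    @property
    def llm_response(self) -> Any:
        # Kept for backward compatibility; warn at most once per instance.
        if not self._warned:
            warnings.warn(
                "Accessing llm_response is deprecated; access attributes "
                "directly on the response instead.",
                DeprecationWarning,
                stacklevel=2,
            )
            object.__setattr__(self, "_warned", True)
        return self._llm_response

    def __getattr__(self, name: str) -> Any:
        # Invoked only when normal attribute lookup fails, so the wrapper's
        # own attributes (including the llm_response property) win, and
        # everything else falls through to the OpenAI response.
        return getattr(self._llm_response, name)
```

Because `__getattr__` is only consulted after normal lookup fails, `response.output_text` resolves on the wrapped OpenAI object while the wrapper's own members stay untouched, which is what makes the wrapper behave as a drop-in replacement.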
Reviewed changes
Copilot reviewed 21 out of 21 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| src/guardrails/_base_client.py | Implemented transparent proxy pattern in GuardrailsResponse with __getattr__, added llm_response property with deprecation warning, and changed internal field to _llm_response |
| tests/unit/test_response_flattening.py | Comprehensive test suite covering direct attribute access, deprecation warnings, hasattr/getattr behavior, and backward compatibility |
| examples/internal_examples/custom_context.py | Updated to use direct attribute access pattern (response.choices[0].message.content) |
| examples/implementation_code/streaming/streaming_responses.py | Updated streaming examples to access response attributes directly |
| examples/implementation_code/streaming/streaming_completions.py | Updated streaming completions to use flattened attribute access |
| examples/implementation_code/blocking/blocking_responses.py | Updated to access output_text and id directly on response |
| examples/implementation_code/blocking/blocking_completions.py | Updated to use direct attribute access for message content |
| examples/hallucination_detection/run_hallucination_detection.py | Updated to access response attributes directly |
| examples/basic/suppress_tripwire.py | Updated to use response.output_text and response.id directly |
| examples/basic/structured_outputs_example.py | Updated to access output_parsed and id directly |
| examples/basic/pii_mask_example.py | Updated to use direct attribute access pattern |
| examples/basic/multiturn_chat_with_alignment.py | Updated to access choices directly on response |
| examples/basic/multi_bundle.py | Updated streaming example with flattened attribute access and improved comment |
| examples/basic/local_model.py | Updated to use direct attribute access for message content |
| examples/basic/hello_world.py | Updated to access output_text and id directly, removed extra blank lines |
| examples/basic/azure_implementation.py | Updated to use direct attribute access for message content |
| docs/tripwires.md | Updated documentation to show direct attribute access pattern |
| docs/ref/checks/hallucination_detection.md | Updated documentation example to use response.output_text |
| docs/quickstart.md | Updated quickstart guide to demonstrate direct attribute access and clarified drop-in replacement behavior |
| docs/index.md | Updated index documentation to use direct attribute access |
| README.md | Updated README examples to show direct attribute access pattern |
@codex review
Pull request overview
Copilot reviewed 21 out of 21 changed files in this pull request and generated no new comments.
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
@codex review
@codex review
Codex Review: Didn't find any major issues. Breezy!
This PR makes the OpenAI response object directly accessible through the GuardrailsResponse wrapper. Previously, users had to reach into the `llm_response` object, making the wrapper not a truly drop-in replacement. With this change, users access attributes directly (`response.output_text` instead of `response.llm_response.output_text`), requiring zero changes in their existing OpenAI client code. The `llm_response` attribute is retained for backwards compatibility, but we emit a deprecation warning if that pattern is used.

This resolves Issue 49 and will be merged instead of draft PR 50. Thank you to @fletchersarip93 for the suggestion.
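The "warns once per instance using WeakValueDictionary" bookkeeping mentioned in the overview could look like the sketch below. The names are illustrative assumptions, not the library's actual internals; the point is that keying on `id(instance)` with a weak value lets the "already warned" record disappear automatically when the wrapper is garbage-collected.

```python
import warnings
import weakref
from typing import Any

# Maps id(instance) -> instance. Values are held weakly, so an entry
# vanishes as soon as the wrapper instance itself is collected.
_warned_instances: "weakref.WeakValueDictionary[int, Any]" = (
    weakref.WeakValueDictionary()
)


def warn_llm_response_deprecated(instance: Any) -> None:
    """Emit the llm_response deprecation warning at most once per instance."""
    if id(instance) not in _warned_instances:
        _warned_instances[id(instance)] = instance
        warnings.warn(
            "llm_response is deprecated; access attributes directly "
            "on the response instead.",
            DeprecationWarning,
            stacklevel=2,
        )
```

Compared with a plain set of `id()` values, the weak mapping avoids two problems: the set would grow without bound, and a recycled `id()` from a new object could be mistaken for an already-warned instance.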