
Conversation

@steven10a (Collaborator) commented Nov 25, 2025

This PR makes the OpenAI response object directly accessible

  • Previously, the OpenAI response object was only reachable through the llm_response attribute, so the client was not a true drop-in replacement
  • Users can now access the OpenAI response object directly (response.output_text instead of response.llm_response.output_text), requiring zero changes to their existing code
  • The old llm_response attribute is still supported for backwards compatibility, but a deprecation warning is emitted when that pattern is used:

"Accessing 'llm_response' is deprecated. Access response attributes directly instead (e.g., use 'response.output_text' instead of 'response.llm_response.output_text'). The 'llm_response' attribute will be removed in future versions."
  • All docs, tests, and examples have been updated to stop using llm_response

This resolves Issue 49 and will be merged instead of draft PR 50. Thank you to @fletchersarip93 for the suggestion.
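
For illustration, a minimal before/after sketch of the access pattern. The client construction is omitted (only the way the returned response is accessed changes in this PR), and the model name and prompt are placeholders, not taken from this PR:

```python
# `client` stands for an already-configured guardrails-wrapped OpenAI client;
# its constructor is not shown here because this PR only changes response access.
response = client.responses.create(
    model="gpt-4o-mini",   # illustrative model name
    input="Hello world",   # illustrative prompt
)

# Old pattern: still works, but now emits a DeprecationWarning.
print(response.llm_response.output_text)

# New pattern: identical to a plain OpenAI client, so existing OpenAI code needs no changes.
print(response.output_text)
print(response.id)
```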

Copilot AI review requested due to automatic review settings November 25, 2025 16:49
Copilot finished reviewing on behalf of steven10a November 25, 2025 16:52

Copilot AI left a comment


Pull request overview

This PR makes the OpenAI response object directly accessible through the GuardrailsResponse wrapper, eliminating the need to access llm_response and making it a true drop-in replacement for OpenAI clients.

  • Implemented transparent proxy pattern using __getattr__ to delegate attribute access to the underlying OpenAI response (a minimal sketch follows this list)
  • Added deprecation warning for backward compatibility when llm_response is accessed (warns once per instance using WeakValueDictionary)
  • Updated all examples and documentation to use direct attribute access pattern (response.output_text instead of response.llm_response.output_text)
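
As a rough sketch of that proxy pattern (not the PR's actual implementation: a simple per-instance flag stands in for the WeakValueDictionary bookkeeping mentioned above, and the guardrail_results handling is illustrative):

```python
import warnings
from typing import Any


class GuardrailsResponse:
    """Wraps an OpenAI response and forwards unknown attribute lookups to it."""

    def __init__(self, llm_response: Any, guardrail_results: Any = None) -> None:
        self._llm_response = llm_response
        self._guardrail_results = guardrail_results
        self._warned = False  # sketch-only: track whether this instance has warned

    @property
    def llm_response(self) -> Any:
        # Backward-compatible accessor; the deprecation warning fires once per instance.
        if not self._warned:
            warnings.warn(
                "Accessing 'llm_response' is deprecated. "
                "Access response attributes directly instead (e.g., use 'response.output_text' "
                "instead of 'response.llm_response.output_text'). "
                "The 'llm_response' attribute will be removed in future versions.",
                DeprecationWarning,
                stacklevel=2,
            )
            self._warned = True
        return self._llm_response

    @property
    def guardrail_results(self) -> Any:
        # Illustrative: guardrail results stay available on the wrapper itself.
        return self._guardrail_results

    def __getattr__(self, name: str) -> Any:
        # Called only when normal lookup fails, so wrapper attributes keep priority and
        # everything else (output_text, id, choices, ...) comes from the OpenAI response.
        return getattr(self._llm_response, name)
```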

Reviewed changes

Copilot reviewed 21 out of 21 changed files in this pull request and generated 1 comment.

Summary of changes per file:

  • src/guardrails/_base_client.py: Implemented the transparent proxy pattern in GuardrailsResponse with __getattr__, added an llm_response property with a deprecation warning, and renamed the internal field to _llm_response
  • tests/unit/test_response_flattening.py: Comprehensive test suite covering direct attribute access, deprecation warnings, hasattr/getattr behavior, and backward compatibility (a test sketch follows this list)
  • examples/internal_examples/custom_context.py: Updated to use the direct attribute access pattern (response.choices[0].message.content)
  • examples/implementation_code/streaming/streaming_responses.py: Updated streaming examples to access response attributes directly
  • examples/implementation_code/streaming/streaming_completions.py: Updated streaming completions to use flattened attribute access
  • examples/implementation_code/blocking/blocking_responses.py: Updated to access output_text and id directly on the response
  • examples/implementation_code/blocking/blocking_completions.py: Updated to use direct attribute access for message content
  • examples/hallucination_detection/run_hallucination_detection.py: Updated to access response attributes directly
  • examples/basic/suppress_tripwire.py: Updated to use response.output_text and response.id directly
  • examples/basic/structured_outputs_example.py: Updated to access output_parsed and id directly
  • examples/basic/pii_mask_example.py: Updated to use the direct attribute access pattern
  • examples/basic/multiturn_chat_with_alignment.py: Updated to access choices directly on the response
  • examples/basic/multi_bundle.py: Updated streaming example with flattened attribute access and an improved comment
  • examples/basic/local_model.py: Updated to use direct attribute access for message content
  • examples/basic/hello_world.py: Updated to access output_text and id directly, removed extra blank lines
  • examples/basic/azure_implementation.py: Updated to use direct attribute access for message content
  • docs/tripwires.md: Updated documentation to show the direct attribute access pattern
  • docs/ref/checks/hallucination_detection.md: Updated documentation example to use response.output_text
  • docs/quickstart.md: Updated quickstart guide to demonstrate direct attribute access and clarified drop-in replacement behavior
  • docs/index.md: Updated index documentation to use direct attribute access
  • README.md: Updated README examples to show the direct attribute access pattern
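
Relating to the tests/unit/test_response_flattening.py row above, a sketch of what such tests can look like. This is not the PR's test code: it exercises a compressed stand-in class rather than the real GuardrailsResponse, so no import path or constructor of the real class is assumed.

```python
# Illustrative pytest sketch only; _StandInResponse mimics the proxy behaviour
# described in the review summary and is not the real GuardrailsResponse.
import warnings
from types import SimpleNamespace

import pytest


class _StandInResponse:
    def __init__(self, wrapped):
        self._wrapped = wrapped

    @property
    def llm_response(self):
        warnings.warn("Accessing 'llm_response' is deprecated.", DeprecationWarning, stacklevel=2)
        return self._wrapped

    def __getattr__(self, name):
        return getattr(self._wrapped, name)


def test_direct_access_matches_wrapped_response():
    stub = SimpleNamespace(output_text="hello", id="resp_123")
    response = _StandInResponse(stub)

    # Direct access is forwarded to the wrapped object; no deprecation path is hit.
    assert response.output_text == "hello"
    assert response.id == "resp_123"
    assert hasattr(response, "output_text")
    assert not hasattr(response, "missing_attribute")


def test_llm_response_still_works_but_warns():
    response = _StandInResponse(SimpleNamespace(output_text="hello"))

    # The legacy attribute keeps working but emits the deprecation warning.
    with pytest.warns(DeprecationWarning, match="llm_response"):
        assert response.llm_response.output_text == "hello"
```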


@steven10a (Collaborator, Author) commented:

@codex review

@steven10a steven10a requested a review from Copilot November 25, 2025 17:05
Copilot finished reviewing on behalf of steven10a November 25, 2025 17:08

Copilot AI left a comment


Pull request overview

Copilot reviewed 21 out of 21 changed files in this pull request and generated no new comments.



@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


@steven10a (Collaborator, Author) commented:

@codex review

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


@steven10a (Collaborator, Author) commented:

@codex review

@chatgpt-codex-connector (bot) commented:

Codex Review: Didn't find any major issues. Breezy!




Development

Successfully merging this pull request may close these issues:

  • Not as "drop-in" yet because the "guardrail client" returns GuardrailsResponse instead of standard OpenAI response objects
