Releases: getsentry/sentry-python
2.43.0
Various fixes & improvements
- Pydantic AI integration (#4906) by @constantinius

  Enable the new Pydantic AI integration with the code snippet below, and you can use the Sentry AI dashboards to observe your AI calls:

  ```python
  import sentry_sdk
  from sentry_sdk.integrations.pydantic_ai import PydanticAIIntegration

  sentry_sdk.init(
      dsn="<your-dsn>",
      # Set traces_sample_rate to 1.0 to capture 100%
      # of transactions for tracing.
      traces_sample_rate=1.0,
      # Add data like inputs and responses;
      # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
      send_default_pii=True,
      integrations=[
          PydanticAIIntegration(),
      ],
  )
  ```
- MCP Python SDK (#4964) by @constantinius

  Enable the new Python MCP integration with the code snippet below:

  ```python
  import sentry_sdk
  from sentry_sdk.integrations.mcp import MCPIntegration

  sentry_sdk.init(
      dsn="<your-dsn>",
      # Set traces_sample_rate to 1.0 to capture 100%
      # of transactions for tracing.
      traces_sample_rate=1.0,
      # Add data like inputs and responses;
      # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
      send_default_pii=True,
      integrations=[
          MCPIntegration(),
      ],
  )
  ```
- fix(strawberry): Remove autodetection, always use sync extension (#4984) by @sentrivana

  Previously, `StrawberryIntegration` would try to guess whether it should install the sync or async version of itself. This auto-detection was very brittle and could lead to us auto-enabling async code in a sync context. With this change, `StrawberryIntegration` remains an auto-enabling integration, but it'll enable the sync version by default. If you want to enable the async version, pass the option explicitly:

  ```python
  import sentry_sdk
  from sentry_sdk.integrations.strawberry import StrawberryIntegration

  sentry_sdk.init(
      # ...
      integrations=[
          StrawberryIntegration(
              async_execution=True,
          ),
      ],
  )
  ```
- fix(google-genai): Set agent name (#5038) by @constantinius
- fix(integrations): Hook into the error tracing function to determine whether an execute-tool span should be set to error (#4986) by @constantinius
- fix(django): Improve logic for classifying cache hits and misses (#5029) by @alexander-alderman-webb
- chore(metrics): Rename `_metrics` to `metrics` (#5035) by @alexander-alderman-webb
- fix(tracemetrics): Bump metric buffer size to 1k (#5031) by @k-fish
- build(deps): bump actions/upload-artifact from 4 to 5 (#5032) by @dependabot
- fix(ai): Truncate messages for Google GenAI (#4992) by @shellmayr
- fix(ai): Add message truncation to LiteLLM (#4973) by @shellmayr
- feat(langchain): Support v1 (#4874) by @sentrivana
- ci: Run `common` test suite on Python 3.14t (#4969) by @alexander-alderman-webb
- feat: Officially support 3.14 & run integration tests on 3.14 (#4974) by @sentrivana
- Make logger template format safer to missing kwargs (#4981) by @sl0thentr0py (see the logging sketch after this list)
- tests(huggingface): Support 1.0.0rc7 (#4979) by @alexander-alderman-webb
- feat: Enable HTTP request code origin by default (#4967) by @alexander-alderman-webb
- ci: Run `common` test suite on Python 3.14 (#4896) by @sentrivana
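The "safer to missing kwargs" change concerns the structured-logging API, where a `str.format`-style template is combined with keyword arguments. A minimal sketch of the behavior, assuming the `sentry_sdk.logger` interface and the `enable_logs` init option from the Sentry logs docs (not part of this changelog):

```python
import sentry_sdk
from sentry_sdk import logger as sentry_logger

sentry_sdk.init(
    dsn="<your-dsn>",
    enable_logs=True,  # assumption: structured logs are switched on via this option
)

# Placeholders in the template are filled from the keyword arguments.
sentry_logger.info("Cache miss for user {user_id}", user_id=42)

# With this release, a template that references a kwarg which was never
# passed (user_id below) should no longer break formatting; the log is
# still emitted.
sentry_logger.info("Cache miss for user {user_id}")
```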
3.0.0a7
We are discontinuing development on 3.0. Please see
https://github.com/getsentry/sentry-python/discussions/4955 for more information.
3.0.0a7 is a maintenance release that adds a warning that there won't be
a stable 3.0 release. If you are on a 3.0 alpha release, please switch back
to 2.x to get the newest features and fixes.
2.42.1
Various fixes & improvements
- fix(gcp): Inject scopes in TimeoutThread exception with GCP (#4959) by @alexander-alderman-webb
- fix(aws): Inject scopes in TimeoutThread exception with AWS lambda (#4914) by @alexander-alderman-webb
- fix(ai): add message truncation to anthropic (#4953) by @shellmayr
- fix(ai): add message truncation to langgraph (#4954) by @shellmayr
- fix: Default breadcrumbs value for events without breadcrumbs (#4952) by @alexander-alderman-webb
- fix(ai): add message truncation in langchain (#4950) by @shellmayr
- fix(ai): correct size calculation, rename internal property for message truncation & add test (#4949) by @shellmayr
- fix(ai): introduce message truncation for openai (#4946) by @shellmayr
- fix(openai): Use non-deprecated Pydantic method to extract response text (#4942) by @JasonLovesDoggo
- ci: 🤖 Update test matrix with new releases (10/16) (#4945) by @github-actions
- Handle ValueError in scope resets (#4928) by @sl0thentr0py
- fix(litellm): Classify embeddings correctly (#4918) by @alexander-alderman-webb
- Generalize NOT_GIVEN check with omit for openai (#4926) by @sl0thentr0py
- ⚡️ Speed up function `_get_db_span_description` (#4924) by @misrasaurabh1
2.42.0
Various fixes & improvements
- feat: Add source information for slow outgoing HTTP requests (#4902) by @alexander-alderman-webb
- tests: Update tox (#4913) by @sentrivana
- fix(Ray): Retain the original function name when patching Ray tasks (#4858) by @svartalf
- feat(ai): Add `python-genai` integration (#4891) by @vgrozdanic

  Enable the new Google GenAI integration with the code snippet below, and you can use the Sentry AI dashboards to observe your AI calls:

  ```python
  import sentry_sdk
  from sentry_sdk.integrations.google_genai import GoogleGenAIIntegration

  sentry_sdk.init(
      dsn="<your-dsn>",
      # Set traces_sample_rate to 1.0 to capture 100%
      # of transactions for tracing.
      traces_sample_rate=1.0,
      # Add data like inputs and responses;
      # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
      send_default_pii=True,
      integrations=[
          GoogleGenAIIntegration(),
      ],
  )
  ```
2.41.0
Various fixes & improvements
- feat: Add `concurrent.futures` patch to threading integration (#4770) by @alexander-alderman-webb

  The SDK now automatically preserves span relationships when using `ThreadPoolExecutor` (see the sketch after this list).

- chore: Remove old metrics code (#4899) by @sentrivana

  Removed all code related to the deprecated experimental metrics feature (`sentry_sdk.metrics`).

- ref: Remove "experimental" from log function name (#4901) by @sentrivana
- fix(ai): Add mapping for gen_ai message roles (#4884) by @shellmayr
- feat(metrics): Add trace metrics behind an experiments flag (#4898) by @k-fish
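A minimal sketch of what the `concurrent.futures` patch gives you, assuming a simple `ThreadPoolExecutor` workload (the span names and the `square` helper are made up for illustration):

```python
import sentry_sdk
from concurrent.futures import ThreadPoolExecutor

sentry_sdk.init(dsn="<your-dsn>", traces_sample_rate=1.0)

def square(n):
    # Runs on a worker thread; with the threading integration's
    # concurrent.futures patch, this span is parented to the span that was
    # active when the task was submitted instead of being orphaned.
    with sentry_sdk.start_span(op="task", name=f"square-{n}"):
        return n * n

with sentry_sdk.start_transaction(op="demo", name="thread-pool-demo"):
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(square, range(4)))
```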
2.40.0
Various fixes & improvements
- Add LiteLLM integration (#4864) by @constantinius

  Once you've enabled the new LiteLLM integration, you can use Sentry AI Agents Monitoring, a dashboard that helps you understand what's going on with your AI requests:

  ```python
  import sentry_sdk
  from sentry_sdk.integrations.litellm import LiteLLMIntegration

  sentry_sdk.init(
      dsn="<your-dsn>",
      # Set traces_sample_rate to 1.0 to capture 100%
      # of transactions for tracing.
      traces_sample_rate=1.0,
      # Add data like inputs and responses;
      # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
      send_default_pii=True,
      integrations=[
          LiteLLMIntegration(),
      ],
  )
  ```
- Litestar: Copy request info to prevent cookies mutation (#4883) by @alexander-alderman-webb
- Also emit spans for MCP tool calls done by the LLM (#4875) by @constantinius
- Option to not trace HTTP requests based on status codes (#4869) by @alexander-alderman-webb

  You can now disable transactions for incoming requests with specific HTTP status codes. The new `trace_ignore_status_codes` option accepts a `set` of status codes as integers. If a transaction wraps a request that results in one of the provided status codes, the transaction will be unsampled.

  ```python
  import sentry_sdk

  sentry_sdk.init(
      trace_ignore_status_codes={301, 302, 303, *range(305, 400), 404},
  )
  ```

- Move `_set_agent_data` call to `ai_client_span` function (#4876) by @constantinius
- Add script to determine lowest supported versions (#4867) by @sentrivana
- Update `CONTRIBUTING.md` (#4870) by @sentrivana
2.39.0
Various fixes & improvements
- Fix(AI): Make agents integrations set the span status in case of error (#4820) by @antonpirker
- Fix(dedupe): Use weakref in dedupe where possible (#4834) by @sl0thentr0py
- Fix(Django): Avoid evaluating complex Django object in span.data/span.attributes (#4804) by @antonpirker
- Fix(Langchain): Don't record tool call output if not include_prompt / should_send_default_pii (#4836) by @shellmayr
- Fix(OpenAI): Don't swallow userland exceptions in openai (#4861) by @sl0thentr0py
- Docs: Update contributing guidelines with instructions to run tests with tox (#4857) by @alexander-alderman-webb
- Test(Spark): Improve `test_spark` speed (#4822) by @mgaligniana
Note: This is my last release. So long, and thanks for all the fish! by @antonpirker
2.38.0
Various fixes & improvements
- Feat(huggingface_hub): Update HuggingFace Hub integration (#4746) by @antonpirker
- Feat(Anthropic): Add proper tool calling data to Anthropic integration (#4769) by @antonpirker
- Feat(openai-agents): Add input and output to `invoke_agent` span. (#4785) by @antonpirker
- Feat(AI): Create transaction in AI agents frameworks when no transaction is running. (#4758) by @constantinius
- Feat(GraphQL): Support gql 4.0-style execute (#4779) by @sentrivana
- Fix(logs): Expect `log_item` as rate limit category (#4798) by @sentrivana
- Fix: CI for mypy, gevent (#4790) by @sentrivana
- Fix: Correctly check for a running transaction (#4791) by @antonpirker
- Fix: Use float for sample rand (#4677) by @sentrivana
- Fix: Avoid reporting false-positive StopAsyncIteration in the asyncio integration (#4741) by @vmarkovtsev
- Fix: Add log message when `DedupeIntegration` is dropping an error. (#4788) by @antonpirker
- Fix(profiling): Re-init continuous profiler (#4772) by @Zylphrex
- Chore: Reexport module `profiler` (#4535) by @zen-xu
- Tests: Update tox.ini (#4799) by @sentrivana
- Build(deps): bump actions/create-github-app-token from 2.1.1 to 2.1.4 (#4795) by @dependabot
- Build(deps): bump actions/setup-python from 5 to 6 (#4774) by @dependabot
- Build(deps): bump codecov/codecov-action from 5.5.0 to 5.5.1 (#4773) by @dependabot
2.37.1
Various fixes & improvements
- Fix(langchain): Make Langchain integration work with just langchain-core (#4783) by @shellmayr
- Tests: Move quart under toxgen (#4775) by @sentrivana
- Tests: Update tox.ini (#4777) by @sentrivana
- Tests: Move chalice under toxgen (#4766) by @sentrivana
2.37.0
- New Integration (BETA): Add support for `langgraph` (#4727) by @shellmayr

  We can now instrument AI agents that are created with LangGraph out of the box. For more information, see the LangGraph integrations documentation (and the sketch after this list for how to enable it).

- AI Agents: Improve rendering of input and output messages in AI agents integrations. (#4750) by @shellmayr
- AI Agents: Format span attributes in AI integrations (#4762) by @antonpirker
- CI: Fix celery (#4765) by @sentrivana
- Tests: Move asyncpg under toxgen (#4757) by @sentrivana
- Tests: Move beam under toxgen (#4759) by @sentrivana
- Tests: Move boto3 tests under toxgen (#4761) by @sentrivana
- Tests: Remove openai pin and update tox (#4748) by @sentrivana
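For the new LangGraph integration above, enabling it should look much like the other AI integrations in this changelog. The import path and class name below (`sentry_sdk.integrations.langgraph.LanggraphIntegration`) are an assumption based on the SDK's naming pattern; check the LangGraph integration documentation for the canonical spelling:

```python
import sentry_sdk
# Assumption: module and class follow the SDK's usual naming convention.
from sentry_sdk.integrations.langgraph import LanggraphIntegration

sentry_sdk.init(
    dsn="<your-dsn>",
    # Set traces_sample_rate to 1.0 to capture 100%
    # of transactions for tracing.
    traces_sample_rate=1.0,
    # Add data like inputs and responses;
    # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
    send_default_pii=True,
    integrations=[
        LanggraphIntegration(),
    ],
)
```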