File: docs/testing/examples/langgraph.md (1 addition, 1 deletion)
@@ -19,7 +19,7 @@ pip install langgraph
## Agent code

-You can view the agent code [here](https://github.com/invariantlabs-ai/invariant/blob/main/testing/sample_tests/langgraph/weather_agent/weather_agent.py).
+You can view the agent code [here](https://github.com/invariantlabs-ai/testing/blob/main/invariant_testing/testing/sample_tests/langgraph/weather_agent/weather_agent.py).
-You can view the full code example of the example agent [here](https://github.com/invariantlabs-ai/invariant/blob/main/invariant/testing/sample_tests/swarm/capital_finder_agent/capital_finder_agent.py)
+You can view the full code example of the example agent [here](https://github.com/invariantlabs-ai/testing/blob/main/invariant_testing/testing/sample_tests/swarm/capital_finder_agent/capital_finder_agent.py)
File: docs/testing/index.md (4 additions, 4 deletions)
@@ -4,7 +4,7 @@ title: Overview
# Invariant `testing`: helps you build better AI agents through debuggable unit testing

-Invariant `testing` is a lightweight library to write and run AI agent tests. It provides helpers and assertions that enable you to write robust tests for your agentic applications.
+Invariant [`testing`](https://github.com/invariantlabs-ai/testing) is a lightweight library to write and run AI agent tests. It provides helpers and assertions that enable you to write robust tests for your agentic applications.

Using [**localized assertions**](writing/traces.ipynb), `testing` always points you to the exact part of the agent's behavior that caused a test to fail, making it easy to debug and resolve issues (_think: stacktraces for agents_).
@@ -15,7 +15,7 @@ Using [**localized assertions**](writing/traces.ipynb), `testing` always points
File: docs/testing/writing/matchers.md (4 additions, 4 deletions)
@@ -13,18 +13,18 @@ To accommodate this, `testing` includes several different `Matcher` implementati
Beyond that, `Matcher` is also a simple base class that allows you to write your own custom matchers, if the provided ones are not sufficient for your needs (e.g. custom properties).
Matcher for checking if a lambda function returns True on the underlying value. This can be useful to check for custom properties of outputs, while maintaining [addresses to localize failing](./tests.md) assertions.
Checks for factual equality / entailment of two sentences or words. This can be used to check if two sentences are factually equivalent, or subset/superset of each other.
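To make the custom-matcher option concrete, here is a minimal sketch of what a user-defined matcher might look like. The import path, the `matches()` hook, and the `ContainsGreeting` class are illustrative assumptions, not the library's documented API:

```python
# Hypothetical custom matcher. The import path and the `matches()` hook are
# assumptions made for illustration; consult the matchers reference for the
# actual base-class interface.
from invariant.testing import Matcher  # assumed import path


class ContainsGreeting(Matcher):
    """Matches string values that contain a greeting word."""

    GREETINGS = ("hello", "hi", "hey")

    def matches(self, value) -> bool:
        # Return True if the underlying value looks like a greeting.
        return isinstance(value, str) and any(
            greeting in value.lower() for greeting in self.GREETINGS
        )
```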
File: docs/testing/writing/parameterized-tests.md (1 addition, 1 deletion)
@@ -12,7 +12,7 @@ In some cases, a certain agent functionality should generalize to multiple scena
In `testing`, instead of writing a separate test for each city, you can use parameterized tests to test multiple scenarios. This ensures robustness and generalization of your agent's behavior.
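As a rough illustration of the idea (not necessarily the exact mechanism these docs use), such a city test could be parameterized with pytest's `parametrize` marker; `query_capital_agent` below is a stand-in for a real agent call:

```python
import pytest


def query_capital_agent(question: str) -> str:
    """Stand-in for the real agent; replace with your own agent invocation."""
    capitals = {"France": "Paris", "Japan": "Tokyo", "Brazil": "Brasília"}
    country = question.removeprefix("What is the capital of ").rstrip("?")
    return f"The capital of {country} is {capitals.get(country, 'unknown')}."


@pytest.mark.parametrize(
    ("country", "capital"),
    [("France", "Paris"), ("Japan", "Tokyo"), ("Brazil", "Brasília")],
)
def test_agent_finds_capital(country: str, capital: str) -> None:
    # The same expectation is checked once per (country, capital) pair.
    answer = query_capital_agent(f"What is the capital of {country}?")
    assert capital in answer
```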
File: docs/testing/writing/tests.md (3 additions, 3 deletions)
@@ -15,7 +15,7 @@ This chapter first discusses how _localized assertions_ work and then provides e
A test case in `testing` looks a lot like a regular unit test, except that it always makes use of a `Trace` object and the corresponding `.as_context()` method. This is required to enable _localized assertions_, which map assertions to specific ranges in the provided trace:
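A minimal sketch of such a test is shown below. Only `Trace`, `.as_context()`, and `assert_true` are taken from the surrounding text; the import path, the `Trace(trace=[...])` constructor, and the `messages(...)`/`contains(...)` selectors are assumptions made for illustration:

```python
# Sketch of a localized test case. The import path, the Trace constructor,
# and the messages()/contains() selectors are assumptions; only Trace,
# as_context(), and assert_true are named in the text above.
from invariant.testing import Trace, assert_true  # assumed import path


def test_weather_agent_mentions_city():
    trace = Trace(trace=[
        {"role": "user", "content": "What is the weather like in Paris?"},
        {"role": "assistant", "content": "It is currently sunny in Paris."},
    ])
    with trace.as_context():
        # On failure, the report points at the exact assistant message
        # (its range in the trace) that violated the expectation.
        assert_true(trace.messages(role="assistant")[0]["content"].contains("Paris"))
```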
To make hard (leading to test failure) assertions, you can use the `assert_true`, `assert_that`, and `assert_equals` functions. These functions are similar to the ones you might know from unit testing frameworks like `unittest` or `pytest`, but they add support for localization.
@@ -126,7 +126,7 @@ Next to hard assertions, `testing` also supports _soft assertions_ that do not l
Instead, they are logged as warnings only and can be used to check (non-functional) agent properties that may not be critical to ensure functional correctness (e.g. number of tool calls, runtime, etc.), but are still important to monitor.
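As a hypothetical sketch of the soft-assertion style (the helper name `expect_true` and the `tool_calls()` selector are assumed by analogy with the hard assertions, not confirmed by this excerpt):

```python
# Hypothetical soft assertion: failures are reported as warnings rather than
# failing the test. expect_true and tool_calls() are assumed names.
from invariant.testing import Trace, expect_true  # assumed import path


def test_agent_stays_within_tool_budget():
    trace = Trace(trace=[
        {"role": "user", "content": "Book a table for two tonight."},
        {"role": "assistant", "content": "", "tool_calls": [
            {"type": "function", "function": {"name": "find_restaurant", "arguments": {}}},
        ]},
    ])
    with trace.as_context():
        # Non-functional property: the agent should not need more than
        # three tool calls for a simple booking request.
        expect_true(len(trace.tool_calls()) <= 3)
```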