Location: Sidebar → Automated Testing
Automated Testing turns your saved collection requests into executable test plans with configurable pass/fail criteria.
Creating a Test Plan
- Click New Test Plan and give it a name.
- Click Add Test Case.
- Select a request from your collections, or use a generated request.
- Optionally override the auth context for this specific test case.
- Choose a pass criterion (see below).
- Repeat for each test case.
Pass Criteria
| Criterion | Passes When |
|---|---|
| No error | The request completes without a Supabase error. Pair with an error-expecting test for the same endpoint to build positive/negative test pairs. |
| Rows returned | The result contains at least one row. |
| Exact row count | The result contains exactly N rows (configure N). |
| Empty result | The result contains no rows. |
| Expect error (any) | The request returns any error (for negative / access-denied tests). An expected error counts as a pass. |
| Expect error code | The request returns an error containing a specific code — e.g. an HTTP status (403), a PostgREST code (PGRST301), or a Postgres error code (23505). An expected error counts as a pass. |
| Error message matches | The request returns an error whose message matches a regex or substring (case-insensitive). |
| File returned | A file or blob is present in the response (for storage downloads). |
| Response contains | The response body includes a specific string or value. |
| Response contains rows | The response contains all the specified rows (order-independent, partial field matching). Provide a JSON array of expected row objects. |
| Response row equals | The response contains exactly one row matching all specified fields. Provide a JSON object of expected key-value pairs. |
| Response time under | The request completes within N milliseconds (inclusive). Note that this time includes client, DNS, TLS, and network delays, not just SQL execution time; factor this in when setting response-time thresholds. |
Pass Criteria Examples
Below are realistic examples for every criterion showing the configured value, a sample API response, and the expected result. These match the exact evaluation logic used by Supatester.
No error — no value required
| Detail | Value |
|---|---|
| Response | [{ "id": 1, "name": "Alice" }] |
| Error | null |
| Result | ✓ PASS — no error was returned |

| Detail | Value |
|---|---|
| Response | null |
| Error | "permission denied for table users" |
| Result | ✗ FAIL — error present |
Rows returned — no value required
| Detail | Value |
|---|---|
| Response | [{ "id": 1 }, { "id": 2 }] |
| Error | null |
| Result | ✓ PASS — array contains ≥ 1 row |

| Detail | Value |
|---|---|
| Response | [] |
| Error | null |
| Result | ✗ FAIL — empty array (0 rows) |
Note: The response must be an array. A single object (e.g. { "id": 1 }) will fail because it is not an array.
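As an illustration, the check described above can be sketched as follows (the function name and exact shape are illustrative assumptions, not Supatester's shipped code):

```typescript
// Illustrative sketch of the "Rows returned" check.
// Assumes the response body has already been parsed from JSON.
function rowsReturned(data: unknown): boolean {
  // Must be an array with at least one element; a bare object fails.
  return Array.isArray(data) && data.length >= 1;
}
```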
Exact row count — value: the expected number of rows
| Detail | Value |
|---|---|
| Value | 3 |
| Response | [{ "id": 1 }, { "id": 2 }, { "id": 3 }] |
| Result | ✓ PASS — array length equals 3 |

| Detail | Value |
|---|---|
| Value | 3 |
| Response | [{ "id": 1 }, { "id": 2 }, { "id": 3 }, { "id": 4 }, { "id": 5 }] |
| Result | ✗ FAIL — expected 3 rows, got 5 |

| Detail | Value |
|---|---|
| Value | 0 |
| Response | [] |
| Result | ✓ PASS — expected 0 rows, got 0 |
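A minimal sketch of this check, under the same assumption that the response is already parsed (the helper name is hypothetical):

```typescript
// Illustrative sketch of "Exact row count": the response must be an
// array whose length equals the configured N (N = 0 matches an empty result).
function exactRowCount(data: unknown, n: number): boolean {
  return Array.isArray(data) && data.length === n;
}
```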
Empty result — no value required
| Detail | Value |
|---|---|
| Response | [] |
| Error | null |
| Result | ✓ PASS — empty array |

| Detail | Value |
|---|---|
| Response | null |
| Error | null |
| Result | ✓ PASS — null counts as empty |

| Detail | Value |
|---|---|
| Response | [{ "id": 1 }] |
| Error | null |
| Result | ✗ FAIL — response contains data |
Expect error (any) — no value required
| Detail | Value |
|---|---|
| Response | null |
| Error | "new row violates row-level security policy for table \"orders\"" |
| Result | ✓ PASS — an error was returned |

| Detail | Value |
|---|---|
| Response | [{ "id": 1 }] |
| Error | null |
| Result | ✗ FAIL — the operation succeeded (no error) |
Expect error code — value: the code to look for (substring match against the error string)
| Detail | Value |
|---|---|
| Value | 42501 |
| Error | "Error code: 42501 — insufficient_privilege" |
| Result | ✓ PASS — error string contains 42501 |

| Detail | Value |
|---|---|
| Value | PGRST301 |
| Error | "PGRST301: JWSError JWSInvalidSignature" |
| Result | ✓ PASS — error string contains PGRST301 |

| Detail | Value |
|---|---|
| Value | 403 |
| Error | "FetchError: 403 Forbidden" |
| Result | ✓ PASS — error string contains 403 |

| Detail | Value |
|---|---|
| Value | 42501 |
| Error | "Error code: 42P01 — undefined_table" |
| Result | ✗ FAIL — error string does not contain 42501 |
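The substring semantics above can be sketched in a few lines (the helper name is an illustrative assumption, not Supatester's actual code):

```typescript
// Illustrative sketch of "Expect error code": a plain substring match
// against the stringified error. No error at all means the test fails.
function expectErrorCode(error: string | null, code: string): boolean {
  return error !== null && error.includes(code);
}
```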
Error message matches — value: a regex pattern or substring (case-insensitive)
| Detail | Value |
|---|---|
| Value | permission denied |
| Error | "permission denied for table users" |
| Result | ✓ PASS — substring match (case-insensitive) |

| Detail | Value |
|---|---|
| Value | ^permission |
| Error | "permission denied" |
| Result | ✓ PASS — regex matches start of string |

| Detail | Value |
|---|---|
| Value | user_\d+ |
| Error | "user_123 not found" |
| Result | ✓ PASS — regex matches user_123 |

| Detail | Value |
|---|---|
| Value | not found |
| Error | "record deleted" |
| Result | ✗ FAIL — neither regex nor substring match |
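One way to implement a regex-then-substring check like the one described above (a sketch under stated assumptions; Supatester's exact regex handling may differ):

```typescript
// Illustrative sketch of "Error message matches": try the value as a
// case-insensitive regex, then fall back to a case-insensitive
// substring match (also covering values that are not valid regex syntax).
function errorMessageMatches(error: string | null, pattern: string): boolean {
  if (error === null) return false;
  try {
    if (new RegExp(pattern, "i").test(error)) return true;
  } catch {
    // Invalid regex syntax; fall through to the substring check.
  }
  return error.toLowerCase().includes(pattern.toLowerCase());
}
```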
File returned — no value required
| Detail | Value |
|---|---|
| Response | <binary data — 24 KB PNG> |
| Error | null |
| Result | ✓ PASS — non-null data returned |

| Detail | Value |
|---|---|
| Response | null |
| Error | "Object not found" |
| Result | ✗ FAIL — error returned, no file data |
Response contains — value: a string to search for in the JSON-serialised response
| Detail | Value |
|---|---|
| Value | alice |
| Response | [{ "name": "alice", "role": "admin" }] |
| Result | ✓ PASS — "alice" appears in the serialised response |

| Detail | Value |
|---|---|
| Value | active |
| Response | [{ "id": 1, "status": "active" }, { "id": 2, "status": "inactive" }] |
| Result | ✓ PASS — "active" appears in the serialised response |

| Detail | Value |
|---|---|
| Value | pending |
| Response | [{ "id": 1, "status": "active" }] |
| Result | ✗ FAIL — "pending" not found anywhere in the response |
Note: The entire response is serialised to a JSON string before searching, so this matches values inside any field at any nesting level.
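The serialise-then-search behaviour can be sketched as follows (helper name is illustrative, not the shipped implementation):

```typescript
// Illustrative sketch of "Response contains": serialise the entire
// response to a JSON string, then do a plain substring search, so the
// value is found at any nesting level.
function responseContains(data: unknown, value: string): boolean {
  const serialised = JSON.stringify(data) ?? "";
  return serialised.includes(value);
}
```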
Response contains rows — value: a JSON array of expected row objects (order-independent, partial field matching, extra rows in the response are ignored)
| Detail | Value |
|---|---|
| Value | [{"title": "The Silent Coast", "genre": "Drama"}, {"title": "Beyond the Reef", "genre": "Adventure"}] |

Response (3 rows, different order):

```json
[
  {
    "id": "1b06342d-91e7-43ea-9f79-13d7e1d60f3a",
    "author_id": "a1111111-bbbb-4ccc-8ddd-eeeeeeee0001",
    "title": "The Silent Coast",
    "genre": "Drama",
    "published_year": 2015,
    "created_at": "2026-02-15T10:33:58.564366+00:00"
  },
  {
    "id": "d4b2b5c3-cb78-4434-a4df-47b07c749429",
    "author_id": "a1111111-bbbb-4ccc-8ddd-eeeeeeee0001",
    "title": "Beyond the Reef",
    "genre": "Adventure",
    "published_year": 2019,
    "created_at": "2026-02-15T10:33:58.564366+00:00"
  },
  {
    "id": "cac3eeb5-c540-401b-96d4-fb3f0f3b4fba",
    "author_id": "a1111111-bbbb-4ccc-8ddd-eeeeeeee0002",
    "title": "Clockwork Fields",
    "genre": "Sci-Fi",
    "published_year": 2020,
    "created_at": "2026-02-15T10:33:58.564366+00:00"
  }
]
```

| Detail | Value |
|---|---|
| Result | ✓ PASS — both expected rows are found (row order does not matter, extra rows are ignored, and only the specified fields need to match) |

| Detail | Value |
|---|---|
| Value | [{"name": "Alice"}] |
| Response | [{ "name": "Bob", "age": 25 }] |
| Result | ✗ FAIL — no row has name equal to "Alice" |
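The matching rule can be sketched like this; the helper name and the use of JSON.stringify for field-level comparison are assumptions for illustration:

```typescript
// Illustrative sketch of "Response contains rows": every expected row
// must match at least one response row on the specified fields only.
// Order is irrelevant, and extra rows and extra fields are ignored.
function responseContainsRows(
  data: Record<string, unknown>[],
  expected: Record<string, unknown>[],
): boolean {
  return expected.every((want) =>
    data.some((row) =>
      Object.entries(want).every(
        ([field, value]) => JSON.stringify(row[field]) === JSON.stringify(value),
      ),
    ),
  );
}
```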
Response row equals — value: a JSON object of expected field values (response must contain exactly one row)
| Detail | Value |
|---|---|
| Value | {"name": "Alice", "age": 30} |
| Response | [{ "name": "Alice", "age": 30, "email": "alice@example.com" }] |
| Result | ✓ PASS — exactly 1 row, and all specified fields match (extra fields are ignored) |

| Detail | Value |
|---|---|
| Value | {"name": "Alice"} |
| Response | [{ "name": "Alice" }, { "name": "Bob" }] |
| Result | ✗ FAIL — expected exactly 1 row, got 2 |

| Detail | Value |
|---|---|
| Value | {"name": "Alice"} |
| Response | [{ "name": "Bob" }] |
| Result | ✗ FAIL — field name expected "Alice", got "Bob" |
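A sketch of this check under the same illustrative assumptions (hypothetical helper, not the shipped code):

```typescript
// Illustrative sketch of "Response row equals": the response must hold
// exactly one row, and every specified field must match it; extra
// fields on the row are ignored.
function responseRowEquals(
  data: Record<string, unknown>[],
  expected: Record<string, unknown>,
): boolean {
  if (!Array.isArray(data) || data.length !== 1) return false;
  const row = data[0];
  return Object.entries(expected).every(
    ([field, value]) => JSON.stringify(row[field]) === JSON.stringify(value),
  );
}
```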
Response time under — value: maximum allowed time in milliseconds (inclusive)
| Detail | Value |
|---|---|
| Value | 500 |
| Execution time | 200 ms |
| Result | ✓ PASS — 200 ms ≤ 500 ms |

| Detail | Value |
|---|---|
| Value | 100 |
| Execution time | 150 ms |
| Result | ✗ FAIL — took 150 ms (limit: 100 ms) |

| Detail | Value |
|---|---|
| Value | 1000 |
| Execution time | 1000 ms |
| Result | ✓ PASS — exactly at the limit (inclusive) |
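In other words, the check is an inclusive comparison of the measured duration against the limit (a sketch; the helper name is made up):

```typescript
// Illustrative sketch of "Response time under": the measured duration
// must be at or below the configured limit (inclusive).
function responseTimeUnder(elapsedMs: number, limitMs: number): boolean {
  return elapsedMs <= limitMs;
}
```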
Positive / Negative Test Pairs
The No error and Expect error code criteria are designed to be used together. For the same endpoint, create one test case asserting success (No error) and another asserting that an unauthorised or invalid request returns the expected error code. This mirrors the lives_ok() / throws_ok() pattern used by tools such as pgTAP.
Running a Test Plan
Click Run. Test cases execute sequentially and results appear in real time — each test shows a pass (✓) or fail (✗) result, the response time, and any error message on failure.
Click Stop at any time to abort the run.
Variables in Test Plans
If any request in the plan uses {{variable}} syntax, the variable extraction configuration on that request determines how values flow into later steps. Variables can be extracted from responses using JSON Path (e.g. data[0].id) or a Regex capture group.
Built-in variables are also available for auto-generated values: timestamps, UUIDs, and ULIDs — no extraction step required.
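For intuition, a toy extractor for simple dotted and indexed paths like data[0].id could look like this (purely illustrative; real JSON Path engines support far more syntax than this sketch):

```typescript
// Illustrative toy extractor for simple paths such as "data[0].id".
// Not Supatester's implementation.
function extractVariable(response: any, path: string): unknown {
  // Normalise "data[0].id" to "data.0.id", then walk each segment.
  const parts = path.replace(/\[(\d+)\]/g, ".$1").split(".").filter(Boolean);
  let current = response;
  for (const part of parts) {
    if (current == null) return undefined;
    current = current[part];
  }
  return current;
}
```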
Supatester validates variable dependencies before running and will warn you if a variable is referenced before it is defined, if variables are unused, or if a circular dependency exists.
Click a variable tag on a test case to manually enter a value and check that the test case responds to that value as expected. This provides only a temporary value that exists while you are on the page; when running with supatester-cli, the variable is expected to be extracted by a previous test case.
Run History
Every execution is saved to the Run History tab. Each entry includes a timestamp, overall pass/fail status, and individual test results. Configure how many historical runs to retain in the plan settings.
Use run history to compare results between schema changes and spot regressions.
Reports
Click Generate Report on any completed run to produce a formatted text summary suitable for sharing with stakeholders or attaching to a pull request.
File Attachments
For test cases that exercise storage uploads, click Attach File on the test case to select a file from disk. The file is used as the upload payload when that test case runs.
Import and Export
Export any test plan as a JSON file to share with your team or commit to source control. Import a previously exported plan with conflict resolution for duplicate names.
Snapshots (Result-Set Comparison)
Snapshots let you capture the results of a test-plan run and compare future runs against them, which makes pre/post database-change comparisons straightforward.
- Run your test plan so that results are available.
- In the Snapshots tab, click Save Current Results.
- Up to 5 snapshots can be stored per test plan.
- Click the eye icon on any snapshot to compare it against the current results.
- Comparison highlights matches, mismatches, new tests (not in snapshot), and missing tests (in snapshot but not executed).
Ignore Ordering of Response
Snapshots have the “Ignore ordering” checkbox enabled by default, so the comparison disregards the order of rows in each response: rows may appear in a different order, but all rows must still be present. Clear this checkbox if you want an exact comparison of the JSON responses.
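The order-insensitive comparison can be pictured as sorting each response's rows by their serialised form before comparing (a simplified sketch; the helper name is made up):

```typescript
// Illustrative sketch of an order-insensitive row comparison: sort both
// row sets by their serialised form, then compare. Row order becomes
// irrelevant, but every row must still be present in both sets.
function sameRowsIgnoringOrder(a: unknown[], b: unknown[]): boolean {
  const key = (rows: unknown[]) => rows.map((r) => JSON.stringify(r)).sort();
  return JSON.stringify(key(a)) === JSON.stringify(key(b));
}
```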
Exclude Fields From Snapshot Comparison
In some cases, you may want to exclude certain fields from the snapshot comparison (for example, fields containing random values, timestamps, or other non‑deterministic data). To remove a field or column from comparison:
- Open the Snapshot tab.
- Click the snapshot item to expand it and display all test cases and results.
- For each relevant test case, click Response comparison selection to view all fields and columns included in the response. Select the fields or columns you want the snapshot comparison to ignore.
Snapshots are automatically included when you export a test plan and restored when you import it.