
Automated Testing

Location: Sidebar → Automated Testing

Automated Testing turns your saved collection requests into executable test plans with configurable pass/fail criteria.

Creating a Test Plan

  1. Click New Test Plan and give it a name.
  2. Click Add Test Case.
  3. Select a request from your collections, or use a generated request.
  4. Optionally override the auth context for this specific test case.
  5. Choose a pass criterion (see below).
  6. Repeat for each test case.

Pass Criteria

No error: The request completes without a Supabase error. Pair it with an error-expecting test on the same endpoint to build positive/negative test pairs.
Rows returned: The result contains at least one row.
Exact row count: The result contains exactly N rows (you configure N).
Empty result: The result contains no rows.
Expect error (any): The request returns any error. Use this for negative or access-denied tests, where an error is the expected, passing outcome.
Expect error code: The request returns an error containing a specific code, e.g. an HTTP status (403), a PostgREST code (PGRST301), or a Postgres error code (23505).
Error message matches: The request returns an error whose message matches a regex or substring (case-insensitive).
File returned: A file or blob is present in the response (for storage downloads).
Response contains: The response body includes a specific string or value.
Response contains rows: The response contains all of the specified rows (order-independent, partial field matching). Provide a JSON array of expected row objects.
Response row equals: The response contains exactly one row matching all specified fields. Provide a JSON object of expected key-value pairs.
Response time under: The request completes within N milliseconds (inclusive). Note: this time includes client, DNS, TLS, and network delays, not just SQL execution time; factor that in when setting limits.

Pass Criteria Examples

Below are realistic examples for every criterion showing the configured value, a sample API response, and the expected result. These match the exact evaluation logic used by Supatester.


No error — no value required

Response: [{ "id": 1, "name": "Alice" }]
Error: null
Result: ✓ PASS — no error was returned

Response: null
Error: "permission denied for table users"
Result: ✗ FAIL — error present

Rows returned — no value required

Response: [{ "id": 1 }, { "id": 2 }]
Error: null
Result: ✓ PASS — array contains ≥ 1 row

Response: []
Error: null
Result: ✗ FAIL — empty array (0 rows)

Note: The response must be an array. A single object (e.g. { "id": 1 }) will fail because it is not an array.
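The check above can be sketched as a small predicate. This is an illustrative approximation of the rule, not Supatester's actual code:

```python
def rows_returned(response):
    """Pass only when the response is a list containing at least one row."""
    return isinstance(response, list) and len(response) > 0

rows_returned([{"id": 1}, {"id": 2}])  # a non-empty array passes
rows_returned([])                      # an empty array fails
rows_returned({"id": 1})               # a single object is not an array
```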


Exact row count — value: the expected number of rows

Value: 3
Response: [{ "id": 1 }, { "id": 2 }, { "id": 3 }]
Result: ✓ PASS — array length equals 3

Value: 3
Response: [{ "id": 1 }, { "id": 2 }, { "id": 3 }, { "id": 4 }, { "id": 5 }]
Result: ✗ FAIL — expected 3 rows, got 5

Value: 0
Response: []
Result: ✓ PASS — expected 0 rows, got 0

Empty result — no value required

Response: []
Error: null
Result: ✓ PASS — empty array

Response: null
Error: null
Result: ✓ PASS — null counts as empty

Response: [{ "id": 1 }]
Error: null
Result: ✗ FAIL — response contains data
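Because null counts as empty, the criterion can be sketched as a two-branch predicate (illustrative only):

```python
def empty_result(response):
    """Pass when the response is null (None) or an empty list."""
    return response is None or response == []

empty_result([])            # empty array passes
empty_result(None)          # null counts as empty
empty_result([{"id": 1}])   # any data fails
```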

Expect error (any) — no value required

Response: null
Error: "new row violates row-level security policy for table \"orders\""
Result: ✓ PASS — an error was returned

Response: [{ "id": 1 }]
Error: null
Result: ✗ FAIL — the operation succeeded (no error)

Expect error code — value: the code to look for (substring match against the error string)

Value: 42501
Error: "Error code: 42501 — insufficient_privilege"
Result: ✓ PASS — error string contains 42501

Value: PGRST301
Error: "PGRST301: JWSError JWSInvalidSignature"
Result: ✓ PASS — error string contains PGRST301

Value: 403
Error: "FetchError: 403 Forbidden"
Result: ✓ PASS — error string contains 403

Value: 42501
Error: "Error code: 42P01 — undefined_table"
Result: ✗ FAIL — error string does not contain 42501
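Since the criterion is a plain substring match against the stringified error, it can be sketched in one line (a minimal approximation, not the real implementation):

```python
def error_code_matches(error, code):
    """Pass when the stringified error contains the configured code."""
    return error is not None and str(code) in str(error)

error_code_matches("Error code: 42501 - insufficient_privilege", "42501")  # passes
error_code_matches("FetchError: 403 Forbidden", 403)                       # passes
error_code_matches("Error code: 42P01 - undefined_table", "42501")         # fails
```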

Error message matches — value: a regex pattern or substring (case-insensitive)

Value: permission denied
Error: "permission denied for table users"
Result: ✓ PASS — substring match (case-insensitive)

Value: ^permission
Error: "permission denied"
Result: ✓ PASS — regex matches start of string

Value: user_\d+
Error: "user_123 not found"
Result: ✓ PASS — regex matches user_123

Value: not found
Error: "record deleted"
Result: ✗ FAIL — neither regex nor substring match
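One way to read "regex or substring, case-insensitive" is: try the value as a case-insensitive regex first, and fall back to a plain substring check. The sketch below follows that reading; it is an assumption about the evaluation order, not Supatester's confirmed logic:

```python
import re

def message_matches(error, pattern):
    """Pass when the error message matches the pattern as a
    case-insensitive regex, or contains it as a plain substring."""
    text = str(error)
    try:
        if re.search(pattern, text, re.IGNORECASE):
            return True
    except re.error:
        pass  # invalid regex: rely on the substring fallback below
    return pattern.lower() in text.lower()
```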

File returned — no value required

Response: <binary data — 24 KB PNG>
Error: null
Result: ✓ PASS — non-null data returned

Response: null
Error: "Object not found"
Result: ✗ FAIL — error returned, no file data

Response contains — value: a string to search for in the JSON-serialised response

Value: alice
Response: [{ "name": "alice", "role": "admin" }]
Result: ✓ PASS — "alice" appears in the serialised response

Value: active
Response: [{ "id": 1, "status": "active" }, { "id": 2, "status": "inactive" }]
Result: ✓ PASS — "active" appears in the serialised response

Value: pending
Response: [{ "id": 1, "status": "active" }]
Result: ✗ FAIL — "pending" not found anywhere in the response

Note: The entire response is serialised to a JSON string before searching, so this matches values inside any field at any nesting level.
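The serialise-then-search behaviour described in the note can be sketched as (illustrative only):

```python
import json

def response_contains(response, value):
    """Serialise the entire response to a JSON string, then search it."""
    return str(value) in json.dumps(response)

response_contains([{"name": "alice", "role": "admin"}], "alice")  # passes
response_contains([{"id": 1, "status": "active"}], "pending")     # fails
```

Note that because the search runs over the serialised string, a value like "active" also matches inside "inactive".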


Response contains rows — value: a JSON array of expected row objects (order-independent, partial field matching, extra rows in the response are ignored)

Value: [{"title": "The Silent Coast", "genre": "Drama"}, {"title": "Beyond the Reef", "genre": "Adventure"}]

Response (3 rows, different order):

[
  {
    "id": "1b06342d-91e7-43ea-9f79-13d7e1d60f3a",
    "author_id": "a1111111-bbbb-4ccc-8ddd-eeeeeeee0001",
    "title": "The Silent Coast",
    "genre": "Drama",
    "published_year": 2015,
    "created_at": "2026-02-15T10:33:58.564366+00:00"
  },
  {
    "id": "d4b2b5c3-cb78-4434-a4df-47b07c749429",
    "author_id": "a1111111-bbbb-4ccc-8ddd-eeeeeeee0001",
    "title": "Beyond the Reef",
    "genre": "Adventure",
    "published_year": 2019,
    "created_at": "2026-02-15T10:33:58.564366+00:00"
  },
  {
    "id": "cac3eeb5-c540-401b-96d4-fb3f0f3b4fba",
    "author_id": "a1111111-bbbb-4ccc-8ddd-eeeeeeee0002",
    "title": "Clockwork Fields",
    "genre": "Sci-Fi",
    "published_year": 2020,
    "created_at": "2026-02-15T10:33:58.564366+00:00"
  }
]

Result: ✓ PASS — both expected rows are found (row order does not matter, extra rows are ignored, and only the specified fields need to match)

Value: [{"name": "Alice"}]
Response: [{ "name": "Bob", "age": 25 }]
Result: ✗ FAIL — no row has name equal to "Alice"
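Order-independent, partial-field matching amounts to: every expected row must match at least one response row on the fields it specifies. A minimal sketch of that rule (not Supatester's actual code):

```python
def contains_rows(response, expected_rows):
    """Every expected row must match some response row on the fields it
    specifies; row order is ignored and extra rows/fields are allowed."""
    if not isinstance(response, list):
        return False
    return all(
        any(
            all(row.get(field) == want for field, want in expected.items())
            for row in response
        )
        for expected in expected_rows
    )
```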

Response row equals — value: a JSON object of expected field values (response must contain exactly one row)

Value: {"name": "Alice", "age": 30}
Response: [{ "name": "Alice", "age": 30, "email": "alice@example.com" }]
Result: ✓ PASS — exactly 1 row, and all specified fields match (extra fields are ignored)

Value: {"name": "Alice"}
Response: [{ "name": "Alice" }, { "name": "Bob" }]
Result: ✗ FAIL — expected exactly 1 row, got 2

Value: {"name": "Alice"}
Response: [{ "name": "Bob" }]
Result: ✗ FAIL — field name expected "Alice", got "Bob"
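This criterion combines a row-count check with partial-field matching. A sketch of the rule as described above (illustrative, not the real implementation):

```python
def row_equals(response, expected):
    """Pass only when the response holds exactly one row and every
    expected field matches it (extra fields in the row are ignored)."""
    if not isinstance(response, list) or len(response) != 1:
        return False
    row = response[0]
    return all(row.get(field) == want for field, want in expected.items())
```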

Response time under — value: maximum allowed time in milliseconds (inclusive)

Value: 500
Execution time: 200 ms
Result: ✓ PASS — 200 ms ≤ 500 ms

Value: 100
Execution time: 150 ms
Result: ✗ FAIL — took 150 ms (limit: 100 ms)

Value: 1000
Execution time: 1000 ms
Result: ✓ PASS — exactly at the limit (inclusive)
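The inclusive comparison is simply (illustrative):

```python
def response_time_under(elapsed_ms, limit_ms):
    """Inclusive comparison: a run exactly at the limit still passes."""
    return elapsed_ms <= limit_ms

response_time_under(200, 500)    # well under the limit
response_time_under(1000, 1000)  # exactly at the limit still passes
```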

Positive / Negative Test Pairs

The No error and Expect error code criteria are designed to be used together. For the same endpoint, create one test case asserting success (No error) and another asserting that an unauthorised or invalid request returns the expected error code. This mirrors the lives_ok() / throws_ok() pattern familiar from tools such as pgTAP.

Running a Test Plan

Click Run. Test cases execute sequentially and results appear in real time — each test shows a pass (✓) or fail (✗) result, the response time, and any error message on failure.

Click Stop at any time to abort the run.

Variables in Test Plans

If any request in the plan uses {{variable}} syntax, the variable extraction configuration on that request determines how values flow into later steps. Variables can be extracted from responses using JSON Path (e.g. data[0].id) or a Regex capture group.
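The two extraction modes can be sketched as follows. This is a deliberately minimal illustration: real JSON Path supports far more syntax than this dotted-path walker, and the function names are hypothetical:

```python
import re

# Tokenises paths like 'data[0].id' into names and [n] indexes.
PATH_TOKEN = re.compile(r"([A-Za-z_]\w*)|\[(\d+)\]")

def extract_json_path(document, path):
    """Resolve a simple dotted path against a parsed JSON document."""
    value = document
    for name, index in PATH_TOKEN.findall(path):
        value = value[name] if name else value[int(index)]
    return value

def extract_regex(text, pattern):
    """Return the first capture group of the first match, or None."""
    match = re.search(pattern, text)
    return match.group(1) if match else None

extract_json_path({"data": [{"id": "abc-123"}]}, "data[0].id")  # "abc-123"
```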

Built-in variables are also available for auto-generated values: timestamps, UUIDs, and ULIDs — no extraction step required.

Supatester validates variable dependencies before running and will warn you if a variable is referenced before it is defined, if variables are unused, or if a circular dependency exists.

By clicking a variable tag on a test case you can manually enter a value, to verify that the test case responds to the provided variable as expected. This creates a temporary variable that exists only while you are on the page; when running with supatester-cli, the variable is expected to be extracted by an earlier test case.

Run History

Every execution is saved to the Run History tab. Each entry includes a timestamp, overall pass/fail status, and individual test results. Configure how many historical runs to retain in the plan settings.

Use run history to compare results between schema changes and spot regressions.

Reports

Click Generate Report on any completed run to produce a formatted text summary suitable for sharing with stakeholders or attaching to a pull request.

File Attachments

For test cases that test storage uploads, click Attach File on the test case to select a file from disk. The file will be used as the upload payload when that test case runs.

Import and Export

Export any test plan as a JSON file to share with your team or commit to source control. Import a previously exported plan with conflict resolution for duplicate names.

Snapshots (Result-Set Comparison)

Snapshots let you capture the results of a test-plan run and compare future runs against them, which is useful for pre/post database-change comparisons.

  1. Run your test plan so that results are available.
  2. In the Snapshots tab, click Save Current Results.
  3. Up to 5 snapshots can be stored per test plan.
  4. Click the eye icon on any snapshot to compare it against the current results.
  5. Comparison highlights matches, mismatches, new tests (not in snapshot), and missing tests (in snapshot but not executed).

Ignore Ordering of Response

Snapshots have the “Ignore ordering” checkbox enabled by default, so the comparison disregards row order in responses: rows may appear in a different order, but all of them must still be present. Clear the checkbox if you want an exact comparison of the JSON responses.
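Conceptually, ignoring order means comparing list responses as unordered collections of rows rather than positionally. A sketch of that idea (an assumption about the comparison, not Supatester's actual code):

```python
import json

def responses_match(a, b, ignore_order=True):
    """Compare two JSON responses. With ignore_order, list responses are
    compared as unordered collections of serialised rows; otherwise the
    comparison is exact."""
    if ignore_order and isinstance(a, list) and isinstance(b, list):
        def canonical(row):
            return json.dumps(row, sort_keys=True)
        return sorted(map(canonical, a)) == sorted(map(canonical, b))
    return a == b
```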

Exclude Fields From Snapshot Comparison

In some cases, you may want to exclude certain fields from the snapshot comparison (for example, fields containing random values, timestamps, or other non‑deterministic data). To remove a field or column from comparison:

  1. Open the Snapshot tab.
  2. Click the snapshot item to expand it and display all test cases and results.
  3. For each relevant test case, click Response comparison selection to view all fields and columns included in the response. Select the fields or columns you want the snapshot comparison to ignore.
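Excluding a field effectively removes that column from every row before the comparison runs. A minimal sketch of the effect (illustrative, with a hypothetical helper name):

```python
def strip_fields(rows, excluded):
    """Drop the excluded columns from every row before comparing."""
    return [
        {field: value for field, value in row.items() if field not in excluded}
        for row in rows
    ]

# Two runs that differ only in a timestamp compare equal once the
# non-deterministic column is excluded.
strip_fields([{"id": 1, "created_at": "2026-01-01"}], {"created_at"})
```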

Snapshots are automatically included when you export a test plan and restored when you import it.