
# Generate — Create Artifacts

The generate tool creates production-ready artifacts from captured browser data. Tests, reproduction scripts, reports, and exports — all generated from real browser sessions.

Need one runnable call + response shape + failure fix for every mode? See Generate Executable Examples.

```javascript
generate({what: "test"})                    // Playwright regression test
generate({what: "reproduction"})            // Bug reproduction script
generate({what: "har", url: "/api"})        // HTTP Archive export
generate({what: "sarif"})                   // Accessibility SARIF report
generate({what: "csp", mode: "strict"})     // Content Security Policy
generate({what: "sri"})                     // Subresource Integrity hashes
generate({what: "pr_summary"})              // PR performance summary
generate({what: "visual_test", annot_session: "checkout"})  // Visual test from annotations
generate({what: "test_from_context", context: "error"})     // Test from error context
generate({what: "test_heal", action: "analyze", test_file: "tests/login.spec.ts"})  // Heal broken selectors
generate({what: "test_classify", action: "failure", failure: {error: "timeout"}})   // Classify failure
```

## Common Parameters

These parameters are used across multiple generate modes:

| Parameter | Type | Description |
|---|---|---|
| `what` | string (required) | Artifact type to generate |
| `format` | string | Deprecated alias for `what` |
| `telemetry_mode` | string | Telemetry metadata mode: `off`, `auto`, `full` |
| `save_to` | string | Output file path when writing artifacts to disk |
| `test_name` | string | Optional name for generated test artifacts |

## test — Playwright Regression Test

Generates a complete Playwright test from the current browser session. Captures user actions, correlates them with API calls, and produces tests with real assertions — not just click replay.

```javascript
generate({what: "test"})

generate({what: "test",
  test_name: "guest-checkout",
  base_url: "http://localhost:3000",
  assert_network: true,
  assert_no_errors: true,
  assert_response_shape: true})
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `test_name` | string | Derived from URL | Name for the `test()` block |
| `base_url` | string | Captured origin | Replace origin in URLs for portability |
| `assert_network` | boolean | | Include `waitForResponse` + status code assertions |
| `assert_no_errors` | boolean | | Assert `consoleErrors.length === 0` |
| `assert_response_shape` | boolean | | Assert response body structure matches (types only, never values) |
- User actions translated to Playwright commands (`click`, `fill`, `getByRole`, etc.)
- Multi-strategy selectors prioritized: `data-testid` > ARIA role > label > text > ID > CSS
- Network assertions with `waitForResponse` and status code checks
- Response shape validation — field names and types, never actual values
- Console error collection — asserts zero errors during the flow
- Password redaction — passwords replaced with `[user-provided]`
```ts
import { test, expect } from '@playwright/test';

test('submit-form flow', async ({ page }) => {
  const consoleErrors = [];
  page.on('console', msg => {
    if (msg.type() === 'error') consoleErrors.push(msg.text());
  });

  await page.goto('http://localhost:3000/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('[user-provided]');

  const loginResp = page.waitForResponse(r => r.url().includes('/api/auth/login'));
  await page.getByRole('button', { name: 'Sign In' }).click();
  const resp = await loginResp;
  expect(resp.status()).toBe(200);
  expect(consoleErrors).toHaveLength(0);
});
```
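The "types only, never values" check can be pictured as comparing type skeletons of the response body. A minimal sketch — the `shapeOf` helper below is illustrative, not the tool's actual implementation:

```javascript
// Map a parsed JSON value to a skeleton of type names, discarding values.
function shapeOf(value) {
  if (Array.isArray(value)) {
    // Represent an array by the shape of its first element.
    return value.length ? [shapeOf(value[0])] : [];
  }
  if (value !== null && typeof value === 'object') {
    const shape = {};
    for (const key of Object.keys(value)) shape[key] = shapeOf(value[key]);
    return shape;
  }
  return value === null ? 'null' : typeof value; // 'string', 'number', ...
}

const bodyShape = shapeOf({ user: { id: 42, email: 'user@example.com' }, roles: ['admin'] });
// → { user: { id: 'number', email: 'string' }, roles: ['string'] }
```

A generated assertion then compares two such skeletons, so a test never embeds real IDs, emails, or tokens.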

## reproduction — Bug Reproduction Script

Generates a Playwright script that reproduces the user's actions leading up to a bug. Unlike `test`, this focuses on replaying the exact sequence rather than asserting outcomes.

```javascript
generate({what: "reproduction"})

generate({what: "reproduction",
  error_message: "TypeError: Cannot read property 'id' of undefined",
  last_n: 10,
  base_url: "http://localhost:3000",
  include_screenshots: true})
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `error_message` | string | | Error message to include as context in the script |
| `last_n` | number | All | Use only the last N recorded actions |
| `base_url` | string | Captured origin | Replace origin in URLs |
| `include_screenshots` | boolean | | Insert `page.screenshot()` calls between steps |
| `generate_fixtures` | boolean | | Generate fixture files from captured network data |
| `visual_assertions` | boolean | | Add `toHaveScreenshot()` assertions |

## har — HTTP Archive Export

Exports captured network traffic in HAR 1.2 format. HAR files can be imported into Chrome DevTools, Charles Proxy, or any HAR viewer for analysis.

```javascript
generate({what: "har"})

generate({what: "har",
  url: "/api",
  method: "POST",
  status_min: 400,
  save_to: "/tmp/debug.har"})
```
| Parameter | Type | Description |
|---|---|---|
| `url` | string | Filter by URL substring |
| `method` | string | Filter by HTTP method |
| `status_min` | number | Minimum status code |
| `status_max` | number | Maximum status code |
| `save_to` | string | File path to save the HAR file |
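The filters compose as simple conjunctive predicates over HAR entries. A sketch against the HAR 1.2 entry layout — `filterEntries` is a hypothetical helper, not part of the tool:

```javascript
// Keep only HAR entries matching every supplied filter; omitted filters pass.
function filterEntries(entries, { url, method, status_min, status_max } = {}) {
  return entries.filter(e =>
    (url === undefined || e.request.url.includes(url)) &&
    (method === undefined || e.request.method === method) &&
    (status_min === undefined || e.response.status >= status_min) &&
    (status_max === undefined || e.response.status <= status_max)
  );
}

const entries = [
  { request: { url: '/api/login', method: 'POST' }, response: { status: 500 } },
  { request: { url: '/app.js', method: 'GET' }, response: { status: 200 } },
];
const failedApi = filterEntries(entries, { url: '/api', status_min: 400 });
// → just the failed POST /api/login entry
```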

## sarif — Accessibility SARIF Report

Exports accessibility audit results in SARIF format (Static Analysis Results Interchange Format). SARIF files integrate with GitHub Code Scanning, VS Code, and CI/CD pipelines.

```javascript
generate({what: "sarif"})

generate({what: "sarif",
  scope: "#main-content",
  include_passes: true,
  save_to: "/tmp/a11y.sarif"})
```
| Parameter | Type | Description |
|---|---|---|
| `scope` | string | CSS selector to limit audit scope |
| `include_passes` | boolean | Include passing rules (not just violations) |
| `save_to` | string | File path to save the SARIF file |
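The SARIF envelope itself is small: a `runs` array with a tool driver and per-violation `results`. A sketch of the minimal SARIF 2.1.0 structure — the `toSarif` helper and the violation object shape are illustrative:

```javascript
// Wrap accessibility violations in a minimal SARIF 2.1.0 log.
function toSarif(violations) {
  return {
    version: '2.1.0',
    $schema: 'https://json.schemastore.org/sarif-2.1.0.json',
    runs: [{
      tool: { driver: { name: 'a11y-audit', rules: violations.map(v => ({ id: v.ruleId })) } },
      results: violations.map(v => ({
        ruleId: v.ruleId,
        level: 'error',
        message: { text: v.message },
      })),
    }],
  };
}

const log = toSarif([{ ruleId: 'color-contrast', message: 'Insufficient contrast' }]);
```

GitHub Code Scanning ingests exactly this `runs[].results[]` shape, which is why the format travels well through CI.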

## csp — Content Security Policy Generation


Generates a Content Security Policy header from observed network traffic. Gasoline sees which origins your page loads resources from and produces a CSP that allows exactly those origins.

```javascript
generate({what: "csp"})

generate({what: "csp",
  mode: "strict",
  exclude_origins: ["https://analytics.google.com"],
  include_report_uri: true})
```
| Parameter | Type | Description |
|---|---|---|
| `mode` | string | Strictness: `strict`, `moderate`, or `report_only` |
| `exclude_origins` | array | Origins to exclude from the generated CSP |
| `include_report_uri` | boolean | Include a `report-uri` directive |
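Conceptually, observed origins are bucketed into CSP directives by resource type, with exclusions dropped before assembly. A sketch of that assembly — the directive mapping and `buildCsp` helper are illustrative, and the tool's actual mode handling may differ:

```javascript
// Derive a CSP header value from observed {type, origin} resource loads.
function buildCsp(observed, { exclude_origins = [] } = {}) {
  const directives = { 'default-src': new Set(["'self'"]) };
  for (const { type, origin } of observed) {
    if (exclude_origins.includes(origin)) continue; // honor exclusions
    const dir = type === 'script' ? 'script-src'
              : type === 'stylesheet' ? 'style-src'
              : type === 'image' ? 'img-src'
              : 'connect-src';
    (directives[dir] ??= new Set(["'self'"])).add(origin);
  }
  return Object.entries(directives)
    .map(([name, srcs]) => `${name} ${[...srcs].join(' ')}`)
    .join('; ');
}

buildCsp([{ type: 'script', origin: 'https://cdn.example.com' }]);
// → "default-src 'self'; script-src 'self' https://cdn.example.com"
```

Because the policy is derived from traffic the page actually generated, it allows exactly the origins in use and nothing broader.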

## sri — Subresource Integrity Hashes

Generates SRI hashes for external scripts and stylesheets. SRI ensures that fetched resources haven't been tampered with.

```javascript
generate({what: "sri"})

generate({what: "sri",
  resource_types: ["script"],
  origins: ["https://cdn.example.com"]})
```
| Parameter | Type | Description |
|---|---|---|
| `resource_types` | array | Filter: `script`, `stylesheet` |
| `origins` | array | Filter by specific CDN origins |

## pr_summary — PR Performance Summary

Generates a performance impact summary suitable for pull request descriptions. Compares before/after metrics and highlights regressions or improvements.

```javascript
generate({what: "pr_summary"})
```

No additional parameters. Uses the current performance snapshot data.

Output includes:

- Web Vitals comparison (before/after)
- Regression/improvement verdicts
- Network request count changes
- Bundle size impact (if measurable)
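The regression/improvement verdicts can be thought of as thresholded relative deltas on each metric, where an increase is bad because every Web Vital is lower-is-better. A sketch — the 5% cutoff is illustrative, not the tool's actual threshold:

```javascript
// Classify a before/after metric pair by relative change.
function verdict(before, after, threshold = 0.05) {
  const delta = (after - before) / before;
  if (delta > threshold) return 'regression';
  if (delta < -threshold) return 'improvement';
  return 'neutral'; // within the noise band
}

verdict(2500, 3100);  // LCP in ms → 'regression'
verdict(1800, 1750);  // within ±5% → 'neutral'
```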

## visual_test — Visual Regression Test from Annotations


Generates a Playwright visual regression test from a draw mode annotation session. Each annotation becomes a visual assertion.

```javascript
generate({what: "visual_test", annot_session: "checkout-flow"})

generate({what: "visual_test",
  annot_session: "checkout-flow",
  test_name: "checkout-visual-regression"})
```
| Parameter | Type | Description |
|---|---|---|
| `annot_session` | string | Named annotation session from draw mode |
| `test_name` | string | Name for the generated test |

## annotation_report — Annotation Session Report

Generates a report from draw mode annotations — summarizes all user feedback from an annotation session.

```javascript
generate({what: "annotation_report", annot_session: "homepage-review"})
```
| Parameter | Type | Description |
|---|---|---|
| `annot_session` | string | Named annotation session |

## annotation_issues — Extract Issues from Annotations


Extracts structured issues from draw mode annotations, suitable for issue tracker integration.

```javascript
generate({what: "annotation_issues", annot_session: "homepage-review"})
```
| Parameter | Type | Description |
|---|---|---|
| `annot_session` | string | Named annotation session |

## test_from_context — Context-Aware Test Generation


Generates a Playwright test from a specific error, interaction, or regression context. More targeted than test — focuses on reproducing a specific scenario.

```javascript
generate({what: "test_from_context", context: "error"})
generate({what: "test_from_context", context: "interaction"})
generate({what: "test_from_context", context: "regression", include_mocks: true})
```
| Parameter | Type | Description |
|---|---|---|
| `context` | string | Test context: `error`, `interaction`, or `regression` |
| `error_id` | string | Specific error ID (for `error` context) |
| `include_mocks` | boolean | Include network mocks in the generated test |
| `output_format` | string | Output format: `file` or `inline` |

## test_heal — Self-Healing Test Repair

Self-healing for Playwright tests: analyzes test files for broken selectors and suggests or auto-applies fixes based on the current DOM state.

```javascript
generate({what: "test_heal", action: "analyze", test_file: "tests/login.spec.ts"})

generate({what: "test_heal", action: "repair",
  broken_selectors: ["#old-submit-btn", ".deprecated-class"],
  auto_apply: true})

generate({what: "test_heal", action: "batch", test_dir: "tests/"})
```
| Parameter | Type | Description |
|---|---|---|
| `action` | string | `analyze` (find broken selectors), `repair` (fix them), `batch` (process directory) |
| `test_file` | string | Test file path (for `analyze`) |
| `test_dir` | string | Test directory (for `batch`) |
| `broken_selectors` | array | Selectors to repair (for `repair`) |
| `auto_apply` | boolean | Auto-apply high-confidence fixes (for `repair`) |
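Repair follows the same selector priority used for test generation: `data-testid`, then ARIA role, then label, text, ID, and structural CSS as a last resort. A sketch of that fallback order — the element descriptor and `suggestSelector` helper stand in for real DOM queries and are illustrative:

```javascript
// Suggest a replacement selector for an element, most stable strategy first.
function suggestSelector(el) {
  if (el.testId) return `[data-testid="${el.testId}"]`;
  if (el.role && el.name) return `role=${el.role}[name="${el.name}"]`;
  if (el.label) return `label=${el.label}`;
  if (el.text) return `text=${el.text}`;
  if (el.id) return `#${el.id}`;
  return el.css; // last resort: brittle structural CSS
}

suggestSelector({ role: 'button', name: 'Submit', id: 'old-submit-btn' });
// → 'role=button[name="Submit"]' — the stale ID is skipped as lower priority
```

The ordering matters: attributes added deliberately for testing (`data-testid`) and accessibility (role, label) survive refactors far better than generated IDs or CSS paths.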

## test_classify — Test Failure Classification

Classifies test failures by root cause — infrastructure, flaky, regression, environment, etc. Useful for batch triage of CI failures.

```javascript
generate({what: "test_classify", action: "failure",
  failure: {test_name: "login-flow", error: "Timeout 30000ms exceeded", duration_ms: 30500}})

generate({what: "test_classify", action: "batch",
  failures: [
    {error: "Timeout 30000ms exceeded", test_name: "login"},
    {error: "Element not found: #submit", test_name: "checkout"}
  ]})
```
| Parameter | Type | Description |
|---|---|---|
| `action` | string | `failure` (single) or `batch` (multiple) |
| `failure` | object | Single test failure: `{error, test_name?, screenshot?, trace?, duration_ms?}` |
| `failures` | array | Array of failure objects (for `batch`) |
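Classification by root cause can be approximated with pattern matching on the error text. A sketch — these rules and categories are illustrative, not the tool's actual classifier:

```javascript
// Heuristically bucket a test failure by its error message.
function classifyFailure({ error = '', duration_ms = 0 } = {}) {
  if (/ECONNREFUSED|ENOTFOUND|502|503/.test(error)) return 'infrastructure';
  if (/Timeout \d+ms exceeded/.test(error)) return 'flaky';
  if (/not found|not visible|detached/.test(error)) return 'regression';
  if (/missing (env|environment) variable/i.test(error)) return 'environment';
  return 'unknown';
}

classifyFailure({ error: 'Timeout 30000ms exceeded', duration_ms: 30500 }); // → 'flaky'
classifyFailure({ error: 'Element not found: #submit' });                   // → 'regression'
```

A real classifier would weigh more signals (duration, screenshots, trace history across runs); text matching alone is just the cheapest first pass for batch triage.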