Accessibility Testing

Gasoline includes a full accessibility auditing engine powered by axe-core. Your AI assistant can audit pages for WCAG violations, explain what’s wrong and why it matters, scope audits to specific components, and export results in SARIF format for CI/CD integration.

Ask your AI to audit the current page:

"Run an accessibility audit on this page."
analyze({what: "accessibility"})

Returns a list of violations, each with:

| Field | Description |
| --- | --- |
| id | axe-core rule ID (e.g., color-contrast, image-alt, label) |
| impact | Severity: critical, serious, moderate, or minor |
| description | What the rule checks |
| help | How to fix the issue |
| helpUrl | Link to the axe-core documentation with a full explanation |
| nodes | List of failing DOM elements with selectors and HTML snippets |
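To make that shape concrete, here is a hypothetical sketch of triaging such a list by impact. The entries, selectors, and URLs below are invented for illustration, not real audit output:

```javascript
// Hypothetical violation entries shaped like the fields above.
const violations = [
  {
    id: "color-contrast",
    impact: "serious",
    help: "Elements must meet minimum color contrast ratio thresholds",
    nodes: [{ selector: ".btn-secondary", html: '<button class="btn-secondary">Cancel</button>' }],
  },
  {
    id: "image-alt",
    impact: "critical",
    help: "Images must have alternate text",
    nodes: [{ selector: "img.logo", html: '<img class="logo" src="logo.png">' }],
  },
];

// Triage: surface critical and serious issues first.
const order = ["critical", "serious", "moderate", "minor"];
const triaged = [...violations].sort(
  (a, b) => order.indexOf(a.impact) - order.indexOf(b.impact)
);
```

Sorting by impact mirrors the recommended workflow below: fix critical and serious violations before the rest.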

Audit a specific section instead of the entire page:

analyze({what: "accessibility", scope: "#main-content"})
analyze({what: "accessibility", scope: ".checkout-form"})
analyze({what: "accessibility", scope: "[role='navigation']"})

The scope parameter accepts any CSS selector. Only elements within that subtree are audited. Useful for:

  • Testing a specific component you just built
  • Auditing a modal or dialog
  • Checking a form before shipping
  • Excluding third-party widgets you can’t control

Target specific WCAG conformance levels or rule sets:

analyze({what: "accessibility", tags: ["wcag2a"]})
analyze({what: "accessibility", tags: ["wcag2aa"]})
analyze({what: "accessibility", tags: ["wcag21aa"]})
analyze({what: "accessibility", tags: ["best-practice"]})

| Tag | What It Covers |
| --- | --- |
| wcag2a | WCAG 2.0 Level A (minimum compliance) |
| wcag2aa | WCAG 2.0 Level AA (standard compliance target) |
| wcag21a | WCAG 2.1 Level A |
| wcag21aa | WCAG 2.1 Level AA |
| wcag22aa | WCAG 2.2 Level AA |
| best-practice | Non-WCAG rules that improve accessibility |
| section508 | Section 508 requirements |

Combine tags to run multiple rule sets:

analyze({what: "accessibility", tags: ["wcag2aa", "best-practice"]})

Results are cached for performance. To re-run the audit after making changes:

analyze({what: "accessibility", force_refresh: true})

This clears the cache and runs a fresh axe-core scan.


Export results in SARIF (Static Analysis Results Interchange Format) for CI/CD integration:

generate({format: "sarif"})
generate({format: "sarif", save_to: "/path/to/a11y-report.sarif"})
generate({format: "sarif", scope: "#main-content", include_passes: true})

| Parameter | Type | Description |
| --- | --- | --- |
| scope | string | CSS selector to limit the audit scope |
| include_passes | boolean | Include passing rules, not just violations |
| save_to | string | File path to save the SARIF file |
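As a sketch of how a CI step might consume the report, the following assumes the standard SARIF 2.1.0 layout (runs[].results[], each result carrying a level). The sample report object is hand-written for illustration:

```javascript
// Minimal sketch: count SARIF results by level and decide whether to
// fail the build. The report below is a hand-written sample.
const report = {
  version: "2.1.0",
  runs: [
    {
      results: [
        { ruleId: "color-contrast", level: "error" },
        { ruleId: "region", level: "warning" },
      ],
    },
  ],
};

const counts = {};
for (const run of report.runs) {
  for (const result of run.results ?? []) {
    counts[result.level] = (counts[result.level] ?? 0) + 1;
  }
}

// Fail the pipeline if any error-level result is present.
const shouldFail = (counts.error ?? 0) > 0;
```

A real CI step would read the file from disk (e.g., the save_to path) instead of an inline object; the traversal is the same.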

Upload SARIF to GitHub Code Scanning:

.github/workflows/a11y.yml

- name: Upload SARIF
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: a11y-report.sarif

This puts accessibility violations directly in your PR’s “Code scanning alerts” tab — visible to reviewers alongside code changes.

  • VS Code — install the SARIF Viewer extension to browse results in your editor
  • Azure DevOps — upload as a build artifact for security/quality gates
  • SonarQube — import via the SARIF import plugin

Common Violations

color-contrast

What it means: Text doesn’t have enough contrast against its background. Low-vision users can’t read it.

WCAG requirement: 4.5:1 for normal text, 3:1 for large text (AA level).

How the AI fixes it: Reads the computed colors, calculates the contrast ratio, and adjusts either the text or background color to meet the threshold.
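The 4.5:1 threshold is defined over WCAG relative luminance; a minimal JavaScript sketch of that calculation (black on white yields the maximum possible ratio, 21:1):

```javascript
// WCAG 2.x relative luminance and contrast ratio for sRGB colors.
function channel(c) {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance([r, g, b]) {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(fg, bg) {
  // Lighter luminance goes in the numerator.
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

contrastRatio([0, 0, 0], [255, 255, 255]); // → 21
```

For example, #767676 on white comes out at roughly 4.54:1, just above the AA threshold for normal text.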

image-alt

What it means: An <img> element is missing the alt attribute. Screen readers can’t describe the image.

How the AI fixes it: Examines the image context and suggests appropriate alt text. Decorative images get alt="".

label

What it means: A form input doesn’t have an associated <label>. Screen readers can’t tell users what the input is for.

How the AI fixes it: Adds a <label for="..."> element or an aria-label attribute.

button-name

What it means: A button has no accessible name. Screen readers announce it as just “button.”

How the AI fixes it: Adds text content, an aria-label, or an aria-labelledby attribute.

link-name

What it means: A link has no accessible name (e.g., <a href="..."><img src="icon.png"></a>).

How the AI fixes it: Adds aria-label to the link or alt text to the image inside it.
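Both of these name-related violations come down to whether an element ends up with an accessible name. Here is a heavily simplified sketch of that idea (the real ARIA accessible-name computation has many more steps; the objects below are stand-ins for DOM elements):

```javascript
// Simplified accessible-name check: aria-label wins, then text
// content, then alt text on a child image (e.g., an icon in a link).
function accessibleName(el) {
  if (el.ariaLabel?.trim()) return el.ariaLabel.trim();
  if (el.textContent?.trim()) return el.textContent.trim();
  for (const child of el.children ?? []) {
    if (child.alt?.trim()) return child.alt.trim();
  }
  return ""; // no accessible name: a button-name / link-name violation
}

const iconLink = { children: [{ src: "icon.png" }] };            // fails
const fixed = { ariaLabel: "Open settings", children: [{ src: "icon.png" }] };
```

An element for which this returns an empty string is exactly the case the fixes above address: add text, an aria-label, or alt text on the inner image.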

heading-order

What it means: Heading levels skip (e.g., <h1> followed by <h3> with no <h2>). This confuses screen reader navigation.

How the AI fixes it: Adjusts heading levels to maintain proper hierarchy.
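Detecting skipped levels is straightforward given the sequence of heading tags; a small sketch, assuming the tags were collected in document order (e.g., from document.querySelectorAll("h1,h2,h3,h4,h5,h6")):

```javascript
// Report every place a heading level jumps by more than one.
function skippedLevels(tags) {
  const levels = tags.map((t) => Number(t[1]));
  const skips = [];
  for (let i = 1; i < levels.length; i++) {
    if (levels[i] > levels[i - 1] + 1) {
      skips.push({ from: `h${levels[i - 1]}`, to: `h${levels[i]}` });
    }
  }
  return skips;
}

skippedLevels(["h1", "h3", "h4"]); // → [{ from: "h1", to: "h3" }]
```

Going back down (h3 to h2) is fine; only upward jumps that skip a level are flagged.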


"Run a full accessibility audit."
analyze({what: "accessibility"})

Focus on critical and serious violations first. These are the issues that actually prevent people from using your application.

"Show me only the critical and serious violations."

The AI has the exact DOM selector for each failing element. Ask it to fix them:

"Fix the color contrast issue on the login form."

The AI reads your component source, adjusts the colors, and verifies the contrast ratio meets 4.5:1.

"Run the accessibility audit again — did we fix the contrast issues?"
analyze({what: "accessibility", force_refresh: true})

As you build new features, audit just the new component:

"Audit the accessibility of the checkout form."
analyze({what: "accessibility", scope: ".checkout-form"})
"Generate a SARIF report and save it."
generate({format: "sarif", save_to: "a11y-report.sarif"})

If you need more detail about a failing element:

analyze({what: "dom", selector: "#submit-btn"})

Returns the element’s attributes, classes, computed state, and children — more context for understanding why the violation occurs.

Visually identify where violations are on the page:

interact({action: "highlight", selector: "#failing-element", duration_ms: 5000})

Capture the current state before and after fixes:

observe({what: "screenshot"})

Start with Level AA. WCAG 2.1 Level AA is the standard compliance target for most organizations and the legal baseline in many jurisdictions.

Audit early, not late. Running accessibility checks during development is cheaper than retrofitting an entire application. Ask your AI to audit after every new component.

Automated audits catch ~30-40% of issues. axe-core is excellent but can’t detect all accessibility problems (e.g., whether alt text is actually meaningful, whether focus order is logical). Use it as a baseline, not a complete audit.

Use include_passes for compliance documentation. When generating SARIF reports for compliance, including passing rules shows auditors what was tested, not just what failed.

Scope aggressively. Full-page audits on complex applications can return hundreds of violations. Scoping to specific components makes the results actionable.