
DOMAIN:ACCESSIBILITY:TESTING_METHODOLOGY

OWNER: julian (compliance), alexander (design review)
ALSO_USED_BY: floris, floor, antje
UPDATED: 2026-03-26
SCOPE: automated and manual accessibility testing protocols, tools, frequencies, reporting


TESTING:OVERVIEW

PRINCIPLE: automated testing catches ~30-40% of accessibility issues
PRINCIPLE: manual testing (keyboard, visual) catches ~30% more
PRINCIPLE: screen reader testing catches the remaining ~30%
PRINCIPLE: no single method is sufficient — all three are mandatory
CONSEQUENCE: an automated pass does NOT mean the product is accessible

TESTING_LAYERS:

Layer 1: Automated (CI pipeline)     → catches syntax, structure, contrast
Layer 2: Keyboard navigation          → catches focus, interaction, traps
Layer 3: Screen reader                → catches semantics, announcements
Layer 4: Visual inspection            → catches contrast, reflow, spacing
Layer 5: User testing (quarterly)     → catches real-world usability


TESTING:AUTOMATED — CI_PIPELINE

AXE_CORE

WHAT: open-source accessibility rule engine by Deque Systems
STRENGTHS: designed for zero false positives, modular rules, framework-agnostic
COVERAGE: reportedly detects up to ~57% of WCAG issues (Deque's own benchmark)
INTEGRATION: @axe-core/playwright (preferred), cypress-axe, jest-axe

SETUP — PLAYWRIGHT:

import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('page has no accessibility violations', async ({ page }) => {
  await page.goto('/dashboard');
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa', 'wcag22aa'])
    .analyze();
  expect(results.violations).toEqual([]);
});

SETUP — COMPONENT_LEVEL:

import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';

expect.extend(toHaveNoViolations);

test('Button is accessible', async () => {
  const { container } = render(<Button>Submit</Button>);
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});

RULES_TO_ENABLE:
- wcag2a — WCAG 2.0 Level A
- wcag2aa — WCAG 2.0 Level AA
- wcag21a — WCAG 2.1 Level A
- wcag21aa — WCAG 2.1 Level AA
- wcag22aa — WCAG 2.2 Level AA
- best-practice — additional best practices

CI_GATE: axe-core must pass with zero violations before PR merge
CI_GATE: if a violation cannot be fixed immediately, add it to known-issues with a timeline
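A known-issues entry should carry enough detail to stay auditable. The format below is a hypothetical sketch; the file name `known-issues.json` and every field name are illustrative, not prescribed anywhere in this doc:

```json
{
  "known-issues": [
    {
      "rule": "color-contrast",
      "selector": ".legacy-banner",
      "ticket": "A11Y-123",
      "reason": "brand color pending design update",
      "fix-by": "2026-04-30"
    }
  ]
}
```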


LIGHTHOUSE

WHAT: Google's web quality auditing tool with an accessibility category
STRENGTHS: broad coverage; performance + SEO + a11y in one tool
LIMITATIONS: less detailed than axe-core for accessibility specifically
INTEGRATION: Lighthouse CI, @lhci/cli

SETUP — CI:

# GitHub Actions example
- name: Lighthouse CI
  uses: treosh/lighthouse-ci-action@v11
  with:
    urls: |
      http://localhost:3000/
      http://localhost:3000/login
      http://localhost:3000/dashboard
    budgetPath: ./lighthouse-budget.json

THRESHOLD: accessibility score must be >= 95
NOTE: Lighthouse uses axe-core internally but runs fewer rules
USE_FOR: quick health check alongside the full axe-core scan
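The >= 95 gate can be enforced in Lighthouse CI's assertion config rather than checked by eye. A minimal sketch, assuming a `lighthouserc.json` at the repo root:

```json
{
  "ci": {
    "assert": {
      "assertions": {
        "categories:accessibility": ["error", { "minScore": 0.95 }]
      }
    }
  }
}
```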


PA11Y

WHAT: command-line accessibility testing tool
RUNNERS: HTML CodeSniffer (default) + axe-core (must be enabled explicitly)
STRENGTHS: fast, scriptable, good for large-scale testing
INTEGRATION: pa11y-ci for CI pipelines

SETUP — PA11Y_CI:

{
  "defaults": {
    "runners": ["htmlcs", "axe"],
    "standard": "WCAG2AA",
    "timeout": 30000
  },
  "urls": [
    "http://localhost:3000/",
    "http://localhost:3000/login",
    "http://localhost:3000/dashboard"
  ]
}

CRITICAL: an explicit "runners" list replaces the default rather than extending it; when adding "axe", also list "htmlcs"
CRITICAL: use both runners together for maximum coverage

THRESHOLD: zero errors at WCAG2AA level
THRESHOLD: warnings are logged but do not block the build


AUTOMATED_TESTING_FREQUENCY

Trigger       Tool                   Scope
Every PR      axe-core (Playwright)  Changed pages + critical paths
Every PR      jest-axe               Changed components
Nightly       pa11y-ci               Full sitemap
Nightly       Lighthouse CI          Top 10 pages
Pre-release   All tools              Full application

TESTING:MANUAL — KEYBOARD_NAVIGATION

PROTOCOL

TOOL: physical keyboard (no mouse/trackpad)
BROWSER: Chrome (primary), Firefox (secondary), Safari (for macOS/iOS context)
TIME: ~30 minutes per major user journey

CHECKLIST

CHECK: can you reach every interactive element using Tab?
CHECK: is focus order logical (follows visual layout and reading order)?
CHECK: is focus visible at all times (clear indicator, not browser default gray)?
CHECK: can you activate buttons with Enter and Space?
CHECK: can you activate links with Enter?
CHECK: can you operate checkboxes and radio buttons with Space?
CHECK: can you navigate select/dropdown with Arrow keys?
CHECK: can you open and close modals with keyboard?
CHECK: does Escape close modals, popovers, and dropdowns?
CHECK: does focus return to trigger element after modal closes?
CHECK: are there any keyboard traps (focus cannot leave a component)?
CHECK: can you skip navigation with skip link?
CHECK: do custom components (tabs, accordions, menus) have correct keyboard interaction?
CHECK: is focus NOT obscured by sticky headers or fixed elements?
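Several of the checks above (focus order, logical Tab sequence) come down to how browsers derive tab order from tabindex. A minimal sketch of that rule, on hypothetical element records rather than a real DOM, shows why positive tabindex values scramble the natural order and are best avoided:

```typescript
// Hypothetical element record; tabindex 0 = natural (document) order.
interface Focusable {
  name: string;
  tabindex: number;
}

// Approximate browser Tab sequence: positive tabindex values first
// (ascending), then tabindex 0 in document order; negative values
// are reachable only via script, never via Tab.
function tabOrder(elements: Focusable[]): string[] {
  const positive = elements
    .filter((e) => e.tabindex > 0)
    .sort((a, b) => a.tabindex - b.tabindex); // stable sort in modern JS
  const natural = elements.filter((e) => e.tabindex === 0);
  return [...positive, ...natural].map((e) => e.name);
}
```

Note how a single `tabindex="1"` on a late-in-document element would jump it ahead of everything, which is exactly the kind of illogical focus order the manual check catches.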

KEYBOARD_INTERACTION_EXPECTATIONS

Component     Tab              Enter     Space     Escape   Arrow Keys
Link          focus            activate  -         -        -
Button        focus            activate  activate  -        -
Checkbox      focus            -         toggle    -        -
Radio group   first/selected   -         select    -        cycle options
Select        focus            open      open      close    navigate
Tab list      first tab        -         -         -        cycle tabs
Menu          first item       activate  activate  close    navigate
Modal         first focusable  -         -         close    -
Accordion     header           toggle    toggle    -        next/prev header
Slider        focus            -         -         -        adjust value

REPORTING

FORMAT: table per user journey
COLUMNS: step, expected behavior, actual behavior, pass/fail, screenshot
SEVERITY: blocker (cannot complete task), major (workaround exists), minor (cosmetic)


TESTING:MANUAL — SCREEN_READER

PRIMARY_COMBINATIONS

Screen Reader  OS       Browser         Priority
NVDA           Windows  Chrome/Firefox  PRIMARY — most users
VoiceOver      macOS    Safari          PRIMARY — Apple ecosystem
VoiceOver      iOS      Safari          PRIMARY — mobile
TalkBack       Android  Chrome          SECONDARY — Android mobile
JAWS           Windows  Chrome/Edge     SECONDARY — enterprise users

MINIMUM: test with NVDA + VoiceOver (macOS) on every release
IDEAL: test with all five combinations quarterly

NVDA_TESTING_PROTOCOL

SETUP: install NVDA (free, nvaccess.org); use with Chrome or Firefox
KEYBINDING: NVDA key = Insert (desktop layout) or CapsLock (laptop layout)

NAVIGATION_COMMANDS:
- H — next heading
- D — next landmark
- K — next link
- F — next form field
- T — next table
- Tab — next focusable element
- NVDA+F7 — elements list (headings, links, landmarks)

TESTING_STEPS:
STEP: load page — listen to page title announcement
STEP: press H repeatedly — verify heading hierarchy is logical (h1 → h2 → h3)
STEP: press D — verify landmarks (banner, navigation, main, contentinfo)
STEP: press F — verify form fields have labels
STEP: tab through interactive elements — verify each is announced with name and role
STEP: trigger dynamic content (add to cart, form submission) — verify announcement
STEP: open modal — verify focus moves into modal, content is announced
STEP: close modal — verify focus returns to trigger
STEP: navigate data table — verify headers are announced with cell content

VOICEOVER_TESTING_PROTOCOL (macOS)

SETUP: built-in; activate with Cmd+F5
BROWSER: Safari (best VoiceOver support)
KEYBINDING: VO key = Control+Option

NAVIGATION_COMMANDS:
- VO+Right/Left — next/previous element
- VO+Cmd+H — next heading
- VO+Cmd+J — next form control
- VO+Cmd+L — next link
- VO+Space — activate element
- VO+U — rotor (headings, links, landmarks, forms)

TESTING_STEPS:
STEP: open rotor (VO+U) — verify headings list makes sense
STEP: navigate with VO+Right — verify reading order is logical
STEP: interact with forms — verify labels are announced
STEP: test dynamic content — verify aria-live announcements
STEP: test on iOS (VoiceOver) — verify touch gestures work

SCREEN_READER_CHECKLIST

CHECK: page title announced on load
CHECK: headings hierarchy correct (one h1, logical nesting)
CHECK: all images described (or decorative images silent)
CHECK: all form fields have associated labels
CHECK: required fields announced as required
CHECK: error messages announced when they appear
CHECK: buttons announce name and role
CHECK: links announce name (and destination if not obvious from text)
CHECK: tables announce row/column headers
CHECK: dynamic content announced (cart count, notifications, loading state)
CHECK: modals announced on open, trap focus, announce close
CHECK: custom components (tabs, accordions) announce state changes
CHECK: route changes announce new page title
CHECK: no unexpected announcements from hidden content
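The heading-hierarchy check lends itself to a quick script over the levels you hear in the screen reader's elements list. A minimal sketch; the function name and input shape are our own, not from any tool:

```typescript
// Validate the "one h1, logical nesting" rule against heading levels
// in document order (e.g. [1, 2, 3, 2] for h1 > h2 > h3 > h2).
function headingIssues(levels: number[]): string[] {
  const issues: string[] = [];
  if (levels.filter((l) => l === 1).length !== 1) {
    issues.push("expected exactly one h1");
  }
  let prev = 0; // level 0 = before the first heading
  for (const level of levels) {
    // Going deeper by more than one level at a time is a skipped level.
    if (level > prev + 1) {
      issues.push(`skipped level before h${level}`);
    }
    prev = level;
  }
  return issues;
}
```

Jumping back out (h3 followed by h2) is fine; only skipping downward (h1 straight to h3) is flagged, matching the WCAG guidance on heading structure.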


TESTING:MANUAL — VISUAL_INSPECTION

COLOR_CONTRAST

TOOL: TPGi Colour Contrast Analyser (desktop app, free)
TOOL: browser DevTools contrast checker
TOOL: Stark (Figma plugin) — for design review

REQUIREMENTS:
- normal text: 4.5:1 contrast ratio minimum
- large text (at least 18pt, or 14pt bold): 3:1 contrast ratio minimum
- UI components and graphical objects: 3:1 contrast ratio minimum
- focus indicators: 3:1 contrast ratio against adjacent colors

CHECK: every text color against its background
CHECK: placeholder text contrast (often fails)
CHECK: disabled state contrast (exempt from WCAG but should still be perceivable)
CHECK: text over images or gradients
CHECK: link text distinguishable from surrounding text (not by color alone)
CHECK: focus indicator visible against all backgrounds
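For spot-checking a tool's numbers, the WCAG contrast math is small enough to compute directly. A sketch of the relative-luminance and contrast-ratio formulas from WCAG 2.x; the `passesAA` helper and its pixel cutoffs (24px = 18pt, ~18.66px = 14pt) are our own framing of the large-text rule:

```typescript
type RGB = [number, number, number]; // 0-255 per channel

// Linearize one sRGB channel per the WCAG relative-luminance definition.
function channelLuminance(c8: number): number {
  const c = c8 / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance([r, g, b]: RGB): number {
  return (
    0.2126 * channelLuminance(r) +
    0.7152 * channelLuminance(g) +
    0.0722 * channelLuminance(b)
  );
}

// Contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05), from 1 to 21.
function contrastRatio(a: RGB, b: RGB): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// AA threshold: 3:1 for large text (>= 24px, or >= ~18.66px bold), else 4.5:1.
function passesAA(ratio: number, fontSizePx: number, bold = false): boolean {
  const large = fontSizePx >= 24 || (bold && fontSizePx >= 18.66);
  return ratio >= (large ? 3 : 4.5);
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255])); // ≈ 21:1, black on white
```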

TEXT_REFLOW

TEST: set browser zoom to 200%
CHECK: no horizontal scrollbar appears
CHECK: no content is clipped or hidden
CHECK: no content overlaps
CHECK: reading order remains logical
CHECK: all functionality still works

TEST: set viewport to 320px width (the WCAG 1.4.10 reflow breakpoint, equivalent to 1280px at 400% zoom)
CHECK: content reflows to single column
CHECK: no horizontal scroll needed
CHECK: images scale down appropriately

TEXT_SPACING

TEST: apply WCAG text spacing overrides

* {
  line-height: 1.5 !important;
  letter-spacing: 0.12em !important;
  word-spacing: 0.16em !important;
}
p { margin-bottom: 2em !important; }

CHECK: no content is clipped or lost
CHECK: no controls become inoperable
CHECK: no overlapping text

COLOR_INDEPENDENCE

CHECK: links distinguishable from body text without color (underline, weight, icon)
CHECK: form validation errors indicated by more than color (icon, text, border)
CHECK: charts and graphs use patterns or labels in addition to color
CHECK: status indicators use icons or text alongside color


TESTING:ARIA_PATTERNS

ARIA_VALIDATION_CHECKLIST

CHECK: no ARIA where native HTML works (button instead of div[role="button"])
CHECK: all role values are valid WAI-ARIA roles
CHECK: required ARIA properties present (role="tab" must have aria-selected)
CHECK: aria-expanded reflects actual state (true when open, false when closed)
CHECK: aria-live regions announce dynamic content (but not excessively)
CHECK: aria-label and aria-labelledby provide meaningful names
CHECK: aria-describedby provides supplementary description (not duplication)
CHECK: no aria-label on non-interactive elements (div, span)
CHECK: aria-hidden="true" not applied to focusable elements
CHECK: role="presentation" not applied to meaningful content
CHECK: no duplicate IDs referenced by aria-labelledby or aria-describedby
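The last check, duplicate or missing IDs behind aria-labelledby/aria-describedby, is easy to script once you have the document's IDs. A simplified sketch on plain data structures rather than a real DOM (axe-core covers this properly in its ARIA rules):

```typescript
// idCounts: how many elements carry each id attribute in the document.
// refs: every id token referenced by aria-labelledby / aria-describedby.
function ariaRefIssues(idCounts: Map<string, number>, refs: string[]): string[] {
  const issues: string[] = [];
  for (const ref of refs) {
    const count = idCounts.get(ref) ?? 0;
    if (count === 0) issues.push(`reference to missing id "${ref}"`);
    if (count > 1) issues.push(`reference to duplicate id "${ref}"`);
  }
  return issues;
}
```

A duplicate id is worse than it looks: the accessible name resolves against whichever element the browser finds first, so the announced label can silently change when markup is reordered.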

ARIA_TESTING_WITH_ACCESSIBILITY_TREE

TOOL: Chrome DevTools Accessibility Tree (Elements → Accessibility pane)
TOOL: Firefox Accessibility Inspector

CHECK: every interactive element has a computed accessible name
CHECK: computed role matches expected role
CHECK: state properties (expanded, selected, checked) are correct
CHECK: description matches expectations


TESTING:FREQUENCY_AND_OWNERSHIP

Activity                   Frequency     Owner                     Blocks Release?
axe-core (Playwright)      every PR      floris/floor (automated)  YES
jest-axe (components)      every PR      floris/floor (automated)  YES
pa11y-ci (full site)       nightly       CI (automated)            NO (logged)
Lighthouse audit           nightly       CI (automated)            NO (logged)
Keyboard navigation        every sprint  antje                     YES for new features
Screen reader (NVDA)       every sprint  antje                     YES for new features
Screen reader (VoiceOver)  pre-release   antje                     YES
Visual contrast audit      every sprint  antje + alexander         YES for new features
Text reflow/spacing        pre-release   antje                     YES
Full manual audit          quarterly     antje + julian            YES
User testing               quarterly     julian (coordination)     NO (advisory)

TESTING:REPORTING_FORMAT

PER_SPRINT_REPORT

ACCESSIBILITY TEST REPORT
Project: {project_name}
Sprint: {sprint_number}
Date: YYYY-MM-DD
Tester: antje

AUTOMATED RESULTS:
- axe-core: {pass/fail} — {n} violations, {n} warnings
- pa11y: {pass/fail} — {n} errors, {n} warnings
- Lighthouse: {score}/100

KEYBOARD TESTING:
- Journey: {user_journey_name}
- Result: {pass/fail}
- Issues: {list}

SCREEN READER TESTING:
- Tool: {NVDA/VoiceOver}
- Journey: {user_journey_name}
- Result: {pass/fail}
- Issues: {list}

VISUAL TESTING:
- Contrast: {pass/fail}
- Reflow: {pass/fail}
- Issues: {list}

BLOCKERS: {list of issues that must be fixed before release}
WARNINGS: {list of issues to fix in next sprint}

CONFORMANCE_REPORT (Julian)

SEE: eaa-requirements.md → EAA:DOCUMENTATION_REQUIREMENTS
FORMAT: VPAT EU edition
FREQUENCY: at each delivery milestone
AUDIENCE: client, potentially national enforcement authority


TESTING:AGENT_INSTRUCTIONS

FOR antje:
- you own manual accessibility testing — keyboard, screen reader, visual
- test every new feature with keyboard + NVDA before sprint end
- produce the sprint test report in the format above
- escalate blockers to julian and the team PM (faye/sytske)

FOR floris, floor:
- you own automated accessibility testing — axe-core and jest-axe in CI
- zero violations policy — fix or explicitly document exceptions
- if the pa11y nightly run finds new issues, triage within 2 business days

FOR julian:
- you own the conformance report and compliance sign-off
- compile antje's sprint reports + automated results into the milestone conformance report
- schedule quarterly full audits

FOR alexander:
- review contrast ratios and target sizes during design QA
- use the Stark plugin in Figma for early contrast validation


READ_ALSO: domains/accessibility/index.md, domains/accessibility/wcag-2-2.md, domains/accessibility/component-patterns.md, domains/accessibility/pitfalls.md