
DOMAIN:VISUAL_PRODUCTION

OWNER: felice
UPDATED: 2026-03-24
SCOPE: AI image generation, video production, animation, responsive delivery
AGENTS: felice (Visual Asset Producer)
ALSO_USED_BY: alexander (design direction), floris/floor (frontend integration), martijn/valentin (iOS assets), karel (CDN delivery), tjarda (marketing assets)

SUB_PAGES

Page | Scope
image-generation.md | DALL-E 3, Midjourney, Flux Pro — prompt engineering, quality evaluation, batch patterns, tool decision tree
video-production.md | Runway Gen-3, Synthesia/HeyGen, ElevenLabs voiceover, FFmpeg post-processing, pipeline stages
remotion-patterns.md | Remotion deep dive — project structure, core concepts, animation patterns, rendering, CI/CD
asset-optimization.md | WebP/AVIF/PNG decision tree, responsive images, video compression, Lottie, SVG, BunnyCDN
delivery-specs.md | Per-context specifications — web, iOS, Android, social media, App Store, email, video
accessibility-media.md | Captions, audio descriptions, alt text, reduced motion, keyboard navigation, color independence

VISUAL:AI_IMAGE_GENERATION

DALL_E_3

TOOL: OpenAI DALL-E 3 API
ENDPOINT: POST https://api.openai.com/v1/images/generations

PARAMS:
- model: "dall-e-3"
- prompt: max 4000 chars
- size: "1024x1024" | "1024x1792" (portrait) | "1792x1024" (landscape)
- quality: "standard" (faster, cheaper) | "hd" (more detail, 2x cost)
- style: "vivid" (hyper-real, dramatic) | "natural" (less exaggerated)
- n: always 1 (DALL-E 3 only generates 1 at a time)
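The parameter list above can be wrapped in a small request-body builder. A minimal sketch (TypeScript) — the parameter names and constraints come from the list; the function name, defaults, and validation are our own conventions, not part of the OpenAI SDK:

```typescript
// Sketch: build a DALL-E 3 request body per the params above.
// Defaults (1024x1024, standard, natural) are our house conventions.
type DalleSize = "1024x1024" | "1024x1792" | "1792x1024";

interface DalleOpts {
  size?: DalleSize;
  quality?: "standard" | "hd";
  style?: "vivid" | "natural";
}

function buildDalleRequest(prompt: string, opts: DalleOpts = {}) {
  if (prompt.length > 4000) {
    throw new Error("prompt exceeds DALL-E 3's 4000-char limit");
  }
  return {
    model: "dall-e-3",
    prompt,
    size: opts.size ?? "1024x1024",
    quality: opts.quality ?? "standard",
    style: opts.style ?? "natural",
    n: 1, // DALL-E 3 only generates one image per request
  };
}
```

POST the returned object as JSON to the endpoint above with a Bearer token.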

PROMPT_ENGINEERING:
- RULE: be extremely specific — DALL-E 3 rewrites vague prompts internally
- RULE: state the visual style explicitly ("flat vector illustration", "photorealistic", "watercolor")
- RULE: specify composition ("centered", "rule of thirds", "bird's eye view", "close-up")
- RULE: specify lighting ("soft studio lighting", "golden hour", "dramatic side lighting")
- RULE: for text in images: spell it out exactly and keep it short (max 3-4 words reliable)
- RULE: negative instructions ("no text", "no watermark") work better in DALL-E 3 than they did in DALL-E 2

ANTI_PATTERN: prompt says "a nice website design"
FIX: "a clean SaaS dashboard design with a left sidebar navigation, white background, blue accent color, showing a data table with 5 rows and a line chart in the top right, flat design style, no text"

ANTI_PATTERN: requesting multiple unrelated subjects in one image
FIX: one clear subject per generation — composite in post-processing

PRICING (as of 2025):
- standard 1024x1024: $0.040/image
- standard 1792x1024 or 1024x1792: $0.080/image
- hd 1024x1024: $0.080/image
- hd 1792x1024 or 1024x1792: $0.120/image
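For batch budgeting, the pricing table reduces to a two-input lookup. A sketch derived directly from the table above (update it if OpenAI's prices change; the function itself is our own helper):

```typescript
// Per-image cost in USD, per the 2025 pricing table above.
function dalleCost(quality: "standard" | "hd", size: string): number {
  const nonSquare = size === "1792x1024" || size === "1024x1792";
  if (quality === "standard") return nonSquare ? 0.08 : 0.04;
  return nonSquare ? 0.12 : 0.08; // hd
}
```

Useful when estimating briefs: generating the recommended 3-4 variations per asset multiplies these numbers accordingly.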

LICENSING: outputs owned by the user per OpenAI terms (commercial use allowed)

MIDJOURNEY

TOOL: Midjourney API (via Discord bot or official API)
NOTE: official API launched 2025 — check current endpoint at docs.midjourney.com

PARAM_SYNTAX (appended to prompt):
- --ar 16:9 — aspect ratio (flexible, not limited to preset sizes)
- --v 6.1 — model version
- --style raw — less Midjourney stylization, more literal
- --stylize 50 — 0-1000, lower = more literal, higher = more artistic (default 100)
- --quality 1 — 0.25, 0.5, 1 (render quality, affects cost)
- --chaos 20 — 0-100, variation between results
- --no text, watermark — negative prompt (things to exclude)
- --sref <url> — style reference image
- --cref <url> — character reference (maintain character consistency)
- --tile — seamless tiling pattern
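Since Midjourney prompts are comma-separated descriptors plus appended flags, assembling them programmatically is straightforward. A sketch — the flag names come from the syntax list above; the helper itself is our own convention:

```typescript
// Sketch: assemble "descriptor, descriptor --flag value --flag" prompts.
// Boolean `true` emits a bare flag (e.g. --tile); other values are appended.
function mjPrompt(
  descriptors: string[],
  flags: Record<string, string | number | true> = {},
): string {
  const flagStr = Object.entries(flags)
    .map(([k, v]) => (v === true ? `--${k}` : `--${k} ${v}`))
    .join(" ");
  return [descriptors.join(", "), flagStr].filter(Boolean).join(" ");
}
```

Example: `mjPrompt(["modern office interior", "natural light"], { ar: "16:9", style: "raw", stylize: 30 })` yields `"modern office interior, natural light --ar 16:9 --style raw --stylize 30"`.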

PROMPT_ENGINEERING:
- RULE: Midjourney responds best to comma-separated descriptors, not sentences
- RULE: put the most important subject first
- RULE: style keywords at the end: "editorial photography", "3d render", "oil painting"
- RULE: --style raw for anything that needs to match a brand guideline precisely
- RULE: use --sref for brand consistency across multiple generations

ANTI_PATTERN: long paragraph-style prompts
FIX: "modern office interior, floor-to-ceiling windows, minimal furniture, warm wood tones, natural light, architectural photography --ar 16:9 --style raw --stylize 30"

LICENSING: users on paid plans own commercial rights to outputs

FLUX_PRO

TOOL: Flux Pro API (via Replicate, fal.ai, or BFL API)
ENDPOINT: varies by provider — api.bfl.ml/v1/flux-pro for BFL direct

STRENGTHS vs competitors:
- superior text rendering in images (best in class as of 2025)
- excellent prompt adherence — less "creative reinterpretation" than DALL-E/Midjourney
- good at technical diagrams and UI mockups
- fast generation (2-5 seconds via optimized providers)

PARAMS:
- prompt: detailed description
- width, height: flexible dimensions (multiples of 8)
- num_inference_steps: 20-50 (higher = better quality, slower)
- guidance_scale: 2.0-10.0 (higher = stricter prompt adherence)
- seed: for reproducibility
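The "multiples of 8" dimension constraint above is easy to violate when deriving sizes from design specs; snapping before the API call avoids provider-side errors. A sketch (the constraint is from the params list; the helper and rounding choice are our own):

```typescript
// Sketch: snap requested Flux dimensions to the nearest multiple of 8.
function fluxDims(width: number, height: number): { width: number; height: number } {
  const snap = (n: number) => Math.round(n / 8) * 8;
  return { width: snap(width), height: snap(height) };
}
```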

PROMPT_ENGINEERING:
- RULE: Flux Pro takes literal prompts well — say exactly what you want
- RULE: specify text content explicitly, e.g. 'with text reading "Growing Europe" in white sans-serif font'
- RULE: technical accuracy is higher — use for diagrams, infographics, UI elements
- RULE: less artistic interpretation than Midjourney — better for brand assets

LICENSING: outputs owned by user (check BFL terms for specific use cases)

IMAGE_QUALITY_EVALUATION

AUTOMATED_CHECKS:
- CHECK: resolution matches requested dimensions
- CHECK: no visible compression artifacts (JPEG quality < 60 = artifacts)
- CHECK: no obvious AI artifacts — extra fingers, melted text, impossible geometry
- CHECK: brand colors present if specified (sample pixels at key positions)
- CHECK: aspect ratio correct for delivery context
- CHECK: file size reasonable (< 5MB for web, < 500KB for thumbnails)
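The mechanical checks in the list (resolution, file size) can run as a pure gate before human review; the perceptual checks (artifacts, brand colors) still need vision tooling or a reviewer. A sketch of the mechanical part — the `ImageMeta` shape and function name are our own, and metadata extraction is assumed to happen upstream:

```typescript
// Sketch: automated resolution + size gate per the checks above.
interface ImageMeta {
  width: number;
  height: number;
  bytes: number;
  isThumbnail: boolean;
}

function sizeCheckErrors(
  meta: ImageMeta,
  requested: { width: number; height: number },
): string[] {
  const errors: string[] = [];
  if (meta.width !== requested.width || meta.height !== requested.height) {
    errors.push("resolution mismatch");
  }
  // Thresholds from the checklist: < 5MB for web, < 500KB for thumbnails.
  const limit = meta.isThumbnail ? 500 * 1024 : 5 * 1024 * 1024;
  if (meta.bytes > limit) {
    errors.push("file too large for delivery context");
  }
  return errors; // empty array = pass
}
```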

COMMON_AI_ARTIFACTS:
- text distortion (misspelled or warped text)
- finger/hand deformities
- asymmetric faces
- background object blending
- inconsistent shadows/lighting direction
- visible seams at the edges of tiling patterns

RULE: always generate 3-4 variations and select best — AI generation is stochastic
RULE: human review required before client delivery — no auto-publish of AI images

FORMAT_OPTIMIZATION

Format | Use Case | Compression | Browser Support
WebP | default for web photos/illustrations | lossy 80% or lossless | all modern browsers
AVIF | maximum compression, newer content | lossy 60% (smaller than WebP) | Chrome 85+, Firefox 93+, Safari 16.4+
PNG | graphics with transparency, screenshots | lossless only | universal
SVG | logos, icons, simple illustrations | N/A (vector) | universal
JPEG | legacy fallback only | lossy 80% | universal

RULE: generate in PNG (lossless source), then convert to WebP + AVIF for delivery
RULE: always provide fallback format: <picture> with WebP/AVIF sources + JPEG/PNG fallback
RULE: never serve AVIF without WebP fallback (Safari < 16.4 has no AVIF support)
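The decision tree above can be encoded so the pipeline picks formats consistently. A sketch — the asset-kind labels and function name are our own classification, not from any library:

```typescript
// Sketch: map an asset category to delivery formats, best-first,
// per the format table and rules above.
type AssetKind = "photo" | "graphic-with-transparency" | "logo" | "icon";

function deliveryFormats(kind: AssetKind): string[] {
  switch (kind) {
    case "logo":
    case "icon":
      return ["svg"]; // vector: resolution independent, universal support
    case "graphic-with-transparency":
      return ["png"]; // lossless with alpha, universal support
    case "photo":
      return ["avif", "webp", "jpeg"]; // modern formats with legacy fallback
  }
}
```

Serving order matters: list AVIF before WebP in `<picture>` sources so capable browsers take the smallest file.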

TOOL: sharp (Node.js) or Pillow (Python)
RUN: node -e "require('sharp')('input.png').webp({ quality: 80 }).toFile('output.webp')"
RUN: node -e "require('sharp')('input.png').avif({ quality: 60 }).toFile('output.avif')"

RESPONSIVE_IMAGE_DELIVERY

TECHNIQUE: srcset with width descriptors

<picture>
  <source type="image/avif"
    srcset="image-400.avif 400w, image-800.avif 800w, image-1200.avif 1200w"
    sizes="(max-width: 600px) 100vw, (max-width: 1200px) 50vw, 33vw">
  <source type="image/webp"
    srcset="image-400.webp 400w, image-800.webp 800w, image-1200.webp 1200w"
    sizes="(max-width: 600px) 100vw, (max-width: 1200px) 50vw, 33vw">
  <img src="image-800.jpg" alt="descriptive alt text" loading="lazy" decoding="async"
    width="800" height="600">
</picture>

RULE: always set width and height on <img> to prevent CLS (Cumulative Layout Shift)
RULE: always set loading="lazy" for below-the-fold images
RULE: always set decoding="async" for non-critical images
RULE: hero images should NOT be lazy-loaded — use fetchpriority="high" instead
RULE: generate at least 3 sizes: 400w, 800w, 1200w (add 1600w/2400w for hero images)
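Generating the `srcset` strings for each format is repetitive enough to automate. A sketch — the `name-400.webp` naming scheme matches the example below, but is an assumption about your pipeline:

```typescript
// Sketch: build a width-descriptor srcset string for one format.
// Assumes files are named "<base>-<width>.<ext>".
function srcset(base: string, ext: string, widths: number[]): string {
  return widths.map((w) => `${base}-${w}.${ext} ${w}w`).join(", ");
}
```

Example: `srcset("image", "webp", [400, 800, 1200])` yields `"image-400.webp 400w, image-800.webp 800w, image-1200.webp 1200w"`.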

ANTI_PATTERN: serving a single 2400px image to mobile devices
FIX: srcset with appropriate breakpoints

ANTI_PATTERN: using CSS background-image for content images
FIX: <picture> element with <img> fallback — better for SEO, accessibility, and lazy loading


VISUAL:REMOTION

PROJECT_SETUP

TOOL: Remotion (React-based video framework)
RUN: npx create-video@latest my-video --template blank

STRUCTURE:

src/
  Root.tsx          -- registers all compositions
  compositions/
    MyVideo.tsx     -- main video component
    Intro.tsx       -- intro sequence
    Scene1.tsx      -- content scene
    Outro.tsx       -- outro sequence
  lib/
    constants.ts    -- fps, dimensions, durations
    spring.ts       -- reusable spring configs
  assets/
    fonts/
    images/
    audio/
remotion.config.ts  -- Remotion config
package.json

CORE_CONCEPTS:

Composition — defines a renderable video with metadata

<Composition
  id="MyVideo"
  component={MyVideo}
  durationInFrames={300}  // 10 seconds at 30fps
  fps={30}
  width={1920}
  height={1080}
/>

useCurrentFrame() — returns current frame number (0-indexed)
useVideoConfig() — returns fps, width, height, durationInFrames

interpolate() — map frame ranges to values

const opacity = interpolate(frame, [0, 30], [0, 1], {
  extrapolateLeft: 'clamp',
  extrapolateRight: 'clamp',
});

spring() — physics-based easing

const scale = spring({
  frame,
  fps: 30,
  config: { damping: 10, stiffness: 100, mass: 0.5 },
});

Sequence — offset child by N frames

<Sequence from={30} durationInFrames={60}>
  <Scene1 />
</Sequence>

<Audio> and <Video> — embed media

<Audio src={staticFile("narration.mp3")} startFrom={0} volume={0.8} />

RULE: all animations MUST use interpolate() or spring() — never CSS transitions (they don't work in render)
RULE: use staticFile() for assets in /public — never dynamic imports
RULE: always set extrapolateLeft: 'clamp' and extrapolateRight: 'clamp' on interpolate
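To see why the clamp rules matter, here is the behavior in miniature. This is an illustrative reimplementation for a single input/output range only — in Remotion code you import the real `interpolate()` from `'remotion'`, which also supports multi-stop ranges and easing:

```typescript
// Illustration only: linear interpolation with both ends clamped,
// mirroring interpolate(frame, [in0, in1], [out0, out1], { clamp, clamp }).
function clampedInterpolate(
  frame: number,
  [in0, in1]: [number, number],
  [out0, out1]: [number, number],
): number {
  const t = (frame - in0) / (in1 - in0);        // normalized progress
  const clamped = Math.min(1, Math.max(0, t));  // the 'clamp' extrapolation
  return out0 + (out1 - out0) * clamped;
}
```

Without clamping, a fade defined on frames [0, 30] keeps extrapolating past frame 30 — opacity climbs above 1 or below 0 — which is why the clamp rule is unconditional.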

ANTI_PATTERN: using setTimeout or setInterval in Remotion components
FIX: use useCurrentFrame() — Remotion renders frame-by-frame, not in real time

ANTI_PATTERN: using CSS transition or animation properties
FIX: all animation via interpolate() / spring() driven by frame number

RENDERING

LOCAL_CLI:
RUN: npx remotion render src/Root.tsx MyVideo out/video.mp4
RUN: npx remotion render src/Root.tsx MyVideo out/video.mp4 --codec h264 --crf 18

FLAGS:
- --codec: h264 (default, wide compat), h265 (smaller, less compat), vp8/vp9 (WebM), prores (Apple)
- --crf: 0-51, lower = better quality/larger file (18 = visually lossless for h264)
- --image-format: jpeg (faster render) or png (better quality for graphics)
- --concurrency: number of frames to render in parallel (default: CPU count / 2)
- --every-nth-frame: render every Nth frame (for previews)
- --gl: "angle" | "egl" | "swangle" — GPU backend (use "angle" on Linux servers)

REMOTION_LAMBDA (cloud rendering):
- deploys rendering to AWS Lambda — 200+ concurrent workers
- 5-minute videos render in ~30 seconds
- cost: ~$0.01-0.05 per render depending on duration
- requires AWS account with Lambda + S3 permissions
RUN: npx remotion lambda render --function-name remotion-render-func --composition MyVideo

RULE: for CI/CD, use Lambda — local rendering blocks the pipeline
RULE: always specify --gl angle on headless Linux to avoid GPU issues
RULE: CRF 18-23 for web delivery, CRF 15-18 for archival quality

AUDIO_INTEGRATION

import { Audio, Sequence, staticFile, interpolate, useCurrentFrame } from 'remotion';

export const MyVideo = () => {
  const frame = useCurrentFrame();
  return (
    <>
      {/* Background music - full duration, low volume */}
      <Audio src={staticFile("bg-music.mp3")} volume={0.15} />

      {/* Narration - starts at frame 30, full volume */}
      <Sequence from={30}>
        <Audio src={staticFile("narration.mp3")} volume={0.8} />
      </Sequence>

      {/* Sound effect with fade-in */}
      <Sequence from={60}>
        <Audio
          src={staticFile("whoosh.mp3")}
          volume={(f) => interpolate(f, [0, 10], [0, 1], {
            extrapolateRight: 'clamp',
          })}
        />
      </Sequence>
    </>
  );
};

RULE: background music volume 0.1-0.2 when narration is active
RULE: audio files must be MP3 or WAV — other formats may not render correctly
RULE: use volume as a callback function for dynamic volume (fades, ducking)
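The ducking rule can live in one volume callback instead of hand-tuned constants per scene. A sketch — `narrationRanges` and the function name are our own conventions; the 0.15 ducked level follows the 0.1-0.2 rule above:

```typescript
// Sketch: duck background music while narration is active.
// narrationRanges: [startFrame, endFrame] windows where narration plays.
function duckedVolume(
  frame: number,
  narrationRanges: Array<[number, number]>,
  base = 0.3,    // music level with no narration
  ducked = 0.15, // music level under narration (rule: 0.1-0.2)
): number {
  const narrating = narrationRanges.some(([s, e]) => frame >= s && frame <= e);
  return narrating ? ducked : base;
}
```

Usage sketch: `<Audio src={staticFile("bg-music.mp3")} volume={(f) => duckedVolume(f, [[30, 300]])} />`. For smooth transitions, wrap the step change in an interpolate() ramp.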

COMMON_PATTERNS

ANIMATED_TEXT:

const AnimatedText = ({ text, delay = 0 }) => {
  const frame = useCurrentFrame();
  const opacity = interpolate(frame - delay, [0, 15], [0, 1], { extrapolateLeft: 'clamp', extrapolateRight: 'clamp' });
  const y = interpolate(frame - delay, [0, 15], [30, 0], { extrapolateLeft: 'clamp', extrapolateRight: 'clamp' });
  return <div style={{ opacity, transform: `translateY(${y}px)` }}>{text}</div>;
};

DATA_VISUALIZATION:

const BarChart = ({ data }) => {
  const frame = useCurrentFrame();
  return (
    <div style={{ display: 'flex', alignItems: 'flex-end', gap: 8 }}>
      {data.map((d, i) => {
        const height = interpolate(frame, [i * 5, i * 5 + 20], [0, d.value], {
          extrapolateLeft: 'clamp', extrapolateRight: 'clamp',
        });
        return <div key={i} style={{ width: 40, height, background: d.color }} />;
      })}
    </div>
  );
};

SCREEN_RECORDING_OVERLAY:

<AbsoluteFill>
  <Video src={staticFile("screen-recording.mp4")} style={{ objectFit: 'contain' }} />
  <div style={{ position: 'absolute', bottom: 40, left: 40 }}>
    <AnimatedText text="Click the settings icon" />
  </div>
</AbsoluteFill>

CI_CD_INTEGRATION

# GitHub Actions example
render-video:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with: { node-version: 20 }
    - run: npm ci
    - run: npx remotion render src/Root.tsx MyVideo out/video.mp4 --gl angle
    - uses: actions/upload-artifact@v4
      with: { name: video, path: out/video.mp4 }

RULE: --gl angle is required on GitHub Actions / headless Linux
RULE: cache node_modules to speed up CI renders
RULE: for long videos, use Remotion Lambda instead of CI rendering


VISUAL:VIDEO_PRODUCTION_PIPELINE

PIPELINE_STAGES

BRIEF → STORYBOARD → ASSET_CREATION → ANIMATION → RENDERING → OPTIMIZATION → DELIVERY

STAGE: BRIEF
- INPUT: client requirements, brand guidelines, target audience
- OUTPUT: structured brief document (purpose, duration, style, key messages, CTA)
- RULE: brief must specify delivery context (web embed, social media, presentation)

STAGE: STORYBOARD
- INPUT: brief
- OUTPUT: frame-by-frame breakdown (scene description, duration, transitions, audio cues)
- RULE: each scene has: visual description, text overlay, narration line, duration in seconds

STAGE: ASSET_CREATION
- INPUT: storyboard
- OUTPUT: images, icons, logos, background music, narration audio
- TOOLS: DALL-E 3, Midjourney, Flux Pro (images), ElevenLabs/Azure TTS (narration)
- RULE: all assets at 2x target resolution (render down, never up)

STAGE: ANIMATION
- INPUT: assets + storyboard
- OUTPUT: Remotion project with all scenes composed
- TOOLS: Remotion (React), Lottie (micro-animations)

STAGE: RENDERING
- INPUT: Remotion project
- OUTPUT: master video file (ProRes or H.264 CRF 15)
- TOOLS: Remotion CLI or Remotion Lambda

STAGE: OPTIMIZATION
- INPUT: master video
- OUTPUT: delivery-optimized variants (web, social, mobile)
- TOOLS: FFmpeg

STAGE: DELIVERY
- INPUT: optimized variants
- OUTPUT: CDN-hosted files with adaptive streaming
- TOOLS: BunnyCDN Stream, custom HLS/DASH

DELIVERY_SPECS

Context | Resolution | Format | Bitrate | FPS
Web embed (hero) | 1920x1080 | H.264 MP4 | 5-8 Mbps | 30
Web embed (section) | 1280x720 | H.264 MP4 | 2-4 Mbps | 30
Social (landscape) | 1920x1080 | H.264 MP4 | 5-8 Mbps | 30
Social (square) | 1080x1080 | H.264 MP4 | 4-6 Mbps | 30
Social (vertical) | 1080x1920 | H.264 MP4 | 4-6 Mbps | 30
Email/thumbnail | 480x270 | GIF or WebP | <2 MB total | 15
Presentation | 1920x1080 | H.264 MP4 | 8-12 Mbps | 30
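The spec table can be kept in code so the optimization stage reads one source of truth. A sketch — the context keys are our own labels; the email/thumbnail row is omitted because it is size-budgeted (GIF/WebP) rather than bitrate-budgeted:

```typescript
// Sketch: delivery specs (from the table above) as a typed lookup.
interface DeliverySpec {
  resolution: string;
  bitrateMbps: [number, number]; // [min, max]
  fps: number;
}

const DELIVERY_SPECS: Record<string, DeliverySpec> = {
  "web-hero":         { resolution: "1920x1080", bitrateMbps: [5, 8],  fps: 30 },
  "web-section":      { resolution: "1280x720",  bitrateMbps: [2, 4],  fps: 30 },
  "social-landscape": { resolution: "1920x1080", bitrateMbps: [5, 8],  fps: 30 },
  "social-square":    { resolution: "1080x1080", bitrateMbps: [4, 6],  fps: 30 },
  "social-vertical":  { resolution: "1080x1920", bitrateMbps: [4, 6],  fps: 30 },
  "presentation":     { resolution: "1920x1080", bitrateMbps: [8, 12], fps: 30 },
};
```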

SYNTHESIA_HEYGEN_API

TOOL: Synthesia API
ENDPOINT: POST https://api.synthesia.io/v2/videos
PURPOSE: avatar-based explainer videos from text script

{
  "title": "Product Demo",
  "input": [{
    "scriptText": "Welcome to Growing Europe...",
    "avatar": "anna_costume1_cameraA",
    "background": "off_white",
    "avatarSettings": { "horizontalAlign": "left", "scale": 0.8 }
  }],
  "aspectRatio": "16:9"
}

RULE: avatar videos need 5-15 min to render (async — poll for completion)
RULE: script must be plain text — no SSML/markup
RULE: max ~10 minutes per video
RULE: cost: ~$0.50-2.00 per minute of output (varies by plan)

TOOL: HeyGen API
ENDPOINT: POST https://api.heygen.com/v2/video/generate
PURPOSE: similar to Synthesia — avatar videos from text

COMPARISON:
- Synthesia: more realistic lip sync, better for corporate content, higher cost
- HeyGen: faster rendering, more avatar customization, lower cost, API-first design

LICENSING (both tools): the user owns the output video; avatar likeness rights are included in the subscription

RUNWAY_GEN_3

TOOL: Runway Gen-3 Alpha API
ENDPOINT: POST https://api.runwayml.com/v1/generate
PURPOSE: text-to-video and image-to-video generation

MODES:
- text-to-video: describe the scene, get 4-16 second clip
- image-to-video: provide start frame, animate it
- image-to-video with end frame: specify start AND end, interpolate between

PARAMS:
- promptText: scene description
- duration: 4 | 10 | 16 seconds
- ratio: "16:9" | "9:16" | "1:1"
- seed: for reproducibility
- watermark: false (paid plans only)

PROMPT_ENGINEERING:
- RULE: describe camera motion explicitly ("slow dolly in", "tracking shot left to right", "static wide shot")
- RULE: describe the action in present continuous ("a person is walking", "leaves are falling")
- RULE: specify lighting and atmosphere ("warm sunset light", "moody overcast")
- RULE: keep prompts under 200 words — shorter = more coherent output

QUALITY_EVALUATION:
- CHECK: temporal consistency (objects don't morph between frames)
- CHECK: no flickering or strobing artifacts
- CHECK: camera motion is smooth (no sudden jumps)
- CHECK: physics are plausible (gravity, motion blur direction)

PRICING: ~$0.05-0.50 per generation depending on duration and plan
LICENSING: user owns output on paid plans

FFMPEG_COMMON_OPERATIONS

TOOL: FFmpeg

TRANSCODE to web-optimized H.264:
RUN: ffmpeg -i input.mov -c:v libx264 -preset slow -crf 20 -c:a aac -b:a 128k -movflags +faststart output.mp4
NOTE: -movflags +faststart moves metadata to front of file — required for web streaming
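When spawning FFmpeg from Node, building the command as an argv array avoids shell-quoting bugs with paths containing spaces. A sketch mirroring the transcode RUN line above — the function name is ours; the flags are exactly those shown:

```typescript
// Sketch: argv array for the web-optimized H.264 transcode above,
// suitable for child_process.spawn("ffmpeg", args).
function webTranscodeArgs(input: string, output: string, crf = 20): string[] {
  return [
    "-i", input,
    "-c:v", "libx264", "-preset", "slow", "-crf", String(crf),
    "-c:a", "aac", "-b:a", "128k",
    "-movflags", "+faststart", // metadata at front: required for web streaming
    output,
  ];
}
```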

RESIZE:
RUN: ffmpeg -i input.mp4 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2" output-720p.mp4

EXTRACT FRAMES (for thumbnails/storyboard):
RUN: ffmpeg -i input.mp4 -vf "select='eq(pict_type\,I)'" -vsync vfr -q:v 2 frames/frame-%03d.jpg

ADD CAPTIONS from SRT:
RUN: ffmpeg -i input.mp4 -vf "subtitles=captions.srt:force_style='FontSize=24,PrimaryColour=&HFFFFFF&'" output-captioned.mp4

BURN-IN CAPTIONS (hard sub):
RUN: ffmpeg -i input.mp4 -vf "subtitles=captions.srt" -c:a copy output.mp4

SOFT CAPTIONS (selectable):
RUN: ffmpeg -i input.mp4 -i captions.srt -c copy -c:s mov_text output.mp4

CREATE GIF PREVIEW:
RUN: ffmpeg -i input.mp4 -ss 0 -t 3 -vf "fps=10,scale=480:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" preview.gif

CONCATENATE CLIPS:

# Create file list
echo "file 'intro.mp4'" > list.txt
echo "file 'main.mp4'" >> list.txt
echo "file 'outro.mp4'" >> list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4

ANTI_PATTERN: re-encoding video unnecessarily (quality loss each time)
FIX: use -c copy when no transformation needed (just concatenating, adding subs, changing container)

ANTI_PATTERN: missing -movflags +faststart for web video
FIX: always include it — without it, browser must download entire file before playing

LOTTIE_ANIMATION

TOOL: Lottie (JSON-based animation format)
SOURCE: After Effects + Bodymovin plugin, or LottieFiles.com, or programmatic generation

FORMAT: JSON describing layers, shapes, keyframes
PLAYER: lottie-web (vanilla JS), @lottiefiles/react-lottie-player (React), lottie-react (React)

import { Player } from '@lottiefiles/react-lottie-player';

<Player
  autoplay
  loop
  src="/animations/loading.json"
  style={{ width: 200, height: 200 }}
/>

USE_CASES:
- loading spinners / micro-interactions
- icon animations (hamburger → X, checkbox tick)
- onboarding illustrations
- success/error state indicators
- scroll-triggered animations

RULE: Lottie files should be < 50KB for micro-interactions, < 200KB for full illustrations
RULE: optimize with lottie-optimizer or LottieFiles editor — remove unused layers
RULE: prefer Lottie over GIF — 10x smaller file size, resolution independent, interactive
RULE: set rendererSettings.preserveAspectRatio: 'xMidYMid slice' to prevent distortion

ANTI_PATTERN: embedding Lottie JSON inline in JavaScript bundle
FIX: load from static file or CDN — Lottie files can be large and should be lazy-loaded

ANTI_PATTERN: using Lottie for complex photographic animation
FIX: Lottie is for vector/shape animation — use video for photographic content

CDN_VIDEO_DELIVERY

TOOL: BunnyCDN Stream
FEATURES:
- automatic transcoding to multiple quality levels
- HLS adaptive bitrate streaming
- global edge delivery
- thumbnail/preview generation
- MP4 fallback for non-HLS clients

INTEGRATION:

1. upload master file via API: PUT https://video.bunnycdn.com/library/{libraryId}/videos/{videoId}
2. BunnyCDN transcodes to 240p/360p/480p/720p/1080p
3. embed player: <iframe src="https://iframe.mediadelivery.net/embed/{libraryId}/{videoId}" />
4. or use direct HLS URL: https://{pullzone}.b-cdn.net/{videoId}/playlist.m3u8

RULE: always use adaptive bitrate streaming for videos > 30 seconds
RULE: provide MP4 fallback URL for email clients and legacy browsers
RULE: set appropriate cache headers (video files: 1 year, manifest files: 10 seconds)
RULE: enable CORS on CDN for cross-origin video players

PRICING: ~$0.005/GB storage + $0.01/GB bandwidth (varies by region)

ACCESSIBILITY

CAPTIONS:
- RULE: ALL videos must have captions — not optional (EAA compliance, WCAG 1.2.2)
- RULE: captions must be accurate (not auto-generated without review)
- RULE: sync tolerance: captions within 100ms of spoken audio
- TOOL: generate initial captions with Whisper API, then human review
- FORMAT: WebVTT preferred for web, SRT for universal compatibility
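When converting Whisper segment timings into WebVTT cues, the timestamp formatting is the usual stumbling block (WebVTT uses `HH:MM:SS.mmm` with a dot; SRT uses a comma). A sketch — the function name is our own:

```typescript
// Sketch: format a time in seconds as a WebVTT timestamp (HH:MM:SS.mmm).
function vttTimestamp(seconds: number): string {
  const totalMs = Math.round(seconds * 1000);
  const pad = (n: number, w: number) => String(n).padStart(w, "0");
  const h = Math.floor(totalMs / 3_600_000);
  const m = Math.floor((totalMs % 3_600_000) / 60_000);
  const s = Math.floor((totalMs % 60_000) / 1000);
  return `${pad(h, 2)}:${pad(m, 2)}:${pad(s, 2)}.${pad(totalMs % 1000, 3)}`;
}
```

Example: `vttTimestamp(61.5)` yields `"00:01:01.500"`. For SRT output, swap the dot for a comma.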

AUDIO_DESCRIPTIONS:
- RULE: required for videos where visual content conveys information not in audio (WCAG 1.2.5)
- TECHNIQUE: extended audio descriptions — pause video to describe visual content
- TECHNIQUE: integrated description — script includes visual descriptions in narration

REDUCED_MOTION:
- CHECK: prefers-reduced-motion: reduce media query
- IF: reduced motion preferred THEN: show static thumbnail with play button, do not autoplay
- RULE: never autoplay video with motion — always require user interaction
- RULE: provide static alternative (image + transcript) for every video

@media (prefers-reduced-motion: reduce) {
  video { display: none; }
  .video-fallback { display: block; }  /* static image + transcript */
}

RULE: video player must be keyboard accessible (play/pause, volume, captions toggle, seek)
RULE: video player must announce state changes to screen readers (aria-live region)
RULE: never rely solely on color to convey information in video (color blindness)