DOMAIN:SECURITY:PITFALLS¶
OWNER: julian
UPDATED: 2026-03-18
READ_BEFORE: every audit
PITFALL:OVER_FLAGGING¶
SYMPTOM: hundreds of findings, mostly false positives. developers ignore ALL findings.
ANTI_PATTERN: running semgrep with every community ruleset simultaneously
ANTI_PATTERN: npm audit flagging devDependencies that never ship to production
ANTI_PATTERN: SAST flagging every innerHTML even with static content
ANTI_PATTERN: accessibility scanner flagging decorative images with correct empty alt
ANTI_PATTERN: dependency scanner alerting on unreachable CVEs
FIX: curate rulesets to specific tech stack and risk profile
FIX: severity-based gating — block on CRITICAL/HIGH with confirmed exploitability only
FIX: use reachability analysis not just CVE presence
FIX: maintain suppression file with documented justifications, review quarterly
METRIC: track signal-to-noise ratio (confirmed true positives / total triaged findings; sketch below)
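EXAMPLE: minimal sketch of the S/N metric and the exploitability-confirmed severity gate above. the Finding shape and verdict values are illustrative assumptions, not any scanner's real schema.
```typescript
// Sketch: signal-to-noise tracking plus the severity gate from the FIX lines
// above. Finding shape and verdict values are illustrative assumptions.
interface Finding {
  id: string;
  severity: "CRITICAL" | "HIGH" | "MEDIUM" | "LOW";
  verdict: "true_positive" | "false_positive" | "untriaged";
}

// signal-to-noise = confirmed true positives / all triaged findings
export function signalToNoise(findings: Finding[]): number {
  const triaged = findings.filter((f) => f.verdict !== "untriaged");
  if (triaged.length === 0) return 0;
  return triaged.filter((f) => f.verdict === "true_positive").length / triaged.length;
}

// Block the merge only on CRITICAL/HIGH with confirmed exploitability.
export function shouldBlockMerge(findings: Finding[]): boolean {
  return findings.some(
    (f) =>
      (f.severity === "CRITICAL" || f.severity === "HIGH") &&
      f.verdict === "true_positive",
  );
}
```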
PITFALL:UNDER_REPORTING¶
SYMPTOM: real vulnerabilities ship to production. compliance gaps undetected.
ANTI_PATTERN: scan results visible only to security team, not developers
ANTI_PATTERN: quarterly scans instead of continuous CI/CD
ANTI_PATTERN: not scanning container base images or IaC
ANTI_PATTERN: "we don't use that function" without reachability proof
ANTI_PATTERN: no zero-day response process
ANTI_PATTERN: GDPR data mapping on paper but not updated when features ship
FIX: continuous scanning in every PR
FIX: findings visible to developer who introduced them + remediation guidance
FIX: SLA-based remediation — CRITICAL=24h, HIGH=7d, MEDIUM=30d, LOW=90d (sketch below)
FIX: quarterly review of suppressions
FIX: tested incident response runbooks
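EXAMPLE: minimal sketch of the SLA FIX above as code, so deadlines are computed rather than remembered. function names are illustrative.
```typescript
// Sketch: SLA deadlines mirroring the FIX above
// (CRITICAL=24h, HIGH=7d, MEDIUM=30d, LOW=90d).
type Severity = "CRITICAL" | "HIGH" | "MEDIUM" | "LOW";

const SLA_HOURS: Record<Severity, number> = {
  CRITICAL: 24,
  HIGH: 7 * 24,
  MEDIUM: 30 * 24,
  LOW: 90 * 24,
};

export function remediationDeadline(severity: Severity, detectedAt: Date): Date {
  return new Date(detectedAt.getTime() + SLA_HOURS[severity] * 3_600_000);
}

export function isOverdue(severity: Severity, detectedAt: Date, now = new Date()): boolean {
  return now.getTime() > remediationDeadline(severity, detectedAt).getTime();
}
```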
PITFALL:CHECKBOX_COMPLIANCE¶
SYMPTOM: certifications obtained but security posture is weak. paper security.
ANTI_PATTERN: password policy says 12 chars, system accepts 6
ANTI_PATTERN: data classification policy exists but no data is classified
ANTI_PATTERN: pen test reports filed and never remediated
ANTI_PATTERN: DPIA conducted after system is built as form-filling exercise
ANTI_PATTERN: access reviews documented but never result in revocation
ANTI_PATTERN: security training is annual 20-minute video
FIX: policies generated from actual system configs (policy-as-code)
FIX: automated evidence collection (audit logs, scan results feed compliance platform)
FIX: DPIAs during design phase, updated on architecture changes
FIX: controls tested continuously, not just before audits (sketch below)
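EXAMPLE: minimal compliance-as-code sketch for the 12-vs-6 password anti-pattern above. it tests what the system enforces, not what the policy document says; the env-var source is a stand-in assumption, point it at the live auth configuration.
```typescript
// Sketch: compliance-as-code for the password-policy anti-pattern above.
// Test the *enforcing* config, not the policy document. The env-var source
// is a stand-in; a real check queries the auth system's live configuration.
import { strict as assert } from "node:assert";

const POLICY_MIN_LENGTH = 12; // from the written policy

function enforcedMinLength(): number {
  // stand-in: replace with a query against the actual auth configuration
  return Number(process.env.PASSWORD_MIN_LENGTH ?? 0);
}

// Run in CI on every deploy, not just before audits.
assert.ok(
  enforcedMinLength() >= POLICY_MIN_LENGTH,
  `policy requires ${POLICY_MIN_LENGTH} chars, system enforces ${enforcedMinLength()}`,
);
```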
PITFALL:DEVELOPER_FRICTION¶
SYMPTOM: developers route around security controls or build resentment.
ANTI_PATTERN: blocking PRs on low-severity informational findings with no auto-fix
ANTI_PATTERN: requiring manual security review for every change (doesn't scale)
ANTI_PATTERN: security tools adding 20+ minutes to CI pipeline
ANTI_PATTERN: compliance docs developers must manually maintain
ANTI_PATTERN: security review as end-of-development gate not continuous feedback
ANTI_PATTERN: accessibility communicated as WCAG success criteria numbers without context
FIX: security tooling in IDE (shift left)
FIX: pre-commit hooks for secrets (fast, local; sketch below)
FIX: CI scans with inline PR comments on affected lines
FIX: auto-fix where possible (Renovate PRs, ESLint auto-fix)
FIX: security champions — one trained dev per team as liaison
FIX: compliance-as-code — requirements as automated tests
FIX: accessibility guidance in design system (component-level not abstract)
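EXAMPLE: minimal sketch of the local secret check behind the pre-commit FIX above. the patterns are a small illustrative subset; a production hook should use a dedicated scanner such as gitleaks with a maintained ruleset.
```typescript
// Sketch: fast local secret check for a pre-commit hook. Patterns are a
// small illustrative subset, not a complete ruleset.
const SECRET_PATTERNS: Array<[string, RegExp]> = [
  ["AWS access key id", /AKIA[0-9A-Z]{16}/],
  ["private key block", /-----BEGIN (?:RSA |EC )?PRIVATE KEY-----/],
  ["hardcoded api key", /api[_-]?key\s*[:=]\s*["'][A-Za-z0-9_\-]{16,}["']/i],
];

export function findSecrets(file: string, content: string): string[] {
  const hits: string[] = [];
  for (const [label, pattern] of SECRET_PATTERNS) {
    if (pattern.test(content)) hits.push(`${file}: possible ${label}`);
  }
  return hits; // any hits => exit non-zero and block the commit
}
```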
PITFALL:GDPR_FAILURES¶
ANTI_PATTERN: cookie banner with pre-checked options or dark patterns (invalid consent per EDPB guidelines)
ANTI_PATTERN: consent collected but no technical mechanism for withdrawal
ANTI_PATTERN: right to erasure = soft-delete but data persists in backups indefinitely
ANTI_PATTERN: legitimate interest claimed without documented balancing test
ANTI_PATTERN: DPAs reference GDPR articles without specifying technical measures
ANTI_PATTERN: PII in application logs shipped to third-party without DPA
ANTI_PATTERN: analytics cookies firing before consent obtained
ANTI_PATTERN: privacy policy last updated 2018
FIX: data minimization enforced architecturally
FIX: consent management integrated into app architecture not bolted on
FIX: automated DSR handling with verified identity
FIX: retention enforced by automated deletion jobs with audit trail (sketch below)
FIX: ROPA reviewed annually and updated whenever data mapping changes
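EXAMPLE: minimal sketch of the retention FIX above. the DataStore interface, table names, and retention windows are illustrative assumptions; note backups need their own expiry so erased data does not outlive this job.
```typescript
// Sketch: automated retention deletion with an audit trail. DataStore,
// table names, and windows are illustrative assumptions.
interface DataStore {
  idsOlderThan(table: string, cutoff: Date): Promise<string[]>;
  hardDelete(table: string, ids: string[]): Promise<void>;
  appendAuditLog(entry: Record<string, unknown>): Promise<void>;
}

const RETENTION_DAYS: Record<string, number> = {
  analytics_events: 90,
  support_tickets: 365,
};

export async function runRetentionJob(store: DataStore, now = new Date()): Promise<void> {
  for (const [table, days] of Object.entries(RETENTION_DAYS)) {
    const cutoff = new Date(now.getTime() - days * 86_400_000);
    const ids = await store.idsOlderThan(table, cutoff);
    if (ids.length === 0) continue;
    await store.hardDelete(table, ids); // hard delete, not soft-delete
    await store.appendAuditLog({
      job: "retention",
      table,
      deleted: ids.length,
      cutoff: cutoff.toISOString(),
      ranAt: now.toISOString(),
    });
  }
}
```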
PITFALL:ACCESSIBILITY_FAILURES¶
ANTI_PATTERN: testing only with automated tools and declaring conformance (automated checks catch <40% of WCAG issues)
ANTI_PATTERN: overlay solutions (AccessiBe, UserWay): condemned by the disability community and prone to breaking assistive tech rather than fixing issues
ANTI_PATTERN: separate "accessible version" instead of making main site accessible
ANTI_PATTERN: ARIA overuse — adding role/aria-label to everything, breaking native semantics
ANTI_PATTERN: focus management ignored in SPAs — route changes not announced
ANTI_PATTERN: color as sole state indicator (error=red, success=green)
ANTI_PATTERN: custom components reinventing native HTML poorly (custom dropdowns, date pickers)
FIX: accessibility built into design system components
FIX: automated CI + regular manual audits with assistive technology
FIX: user testing with people who have disabilities
FIX: semantic HTML first, ARIA only when native HTML insufficient
FIX: focus management in application router (sketch below)
FIX: accessibility statement with known issues and remediation timeline
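EXAMPLE: minimal framework-agnostic sketch of the router focus FIX above; call it from the router's "navigation finished" hook. the #main-heading selector and #route-announcer live region are assumptions about the page markup.
```typescript
// Sketch: route-change focus management for an SPA. Selector and live-region
// id are assumptions about the markup, not framework requirements.
export function announceRouteChange(pageTitle: string): void {
  document.title = pageTitle;

  // Move focus to the new view's heading so screen readers resume there.
  const heading = document.querySelector<HTMLElement>("#main-heading");
  if (heading) {
    heading.setAttribute("tabindex", "-1"); // focusable, but not in tab order
    heading.focus();
  }

  // Also announce through a persistent polite live region (aria-live="polite").
  const announcer = document.getElementById("route-announcer");
  if (announcer) announcer.textContent = `Navigated to ${pageTitle}`;
}
```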
PITFALL:SUPPLY_CHAIN_FAILURES¶
ANTI_PATTERN: no lock file committed (non-deterministic builds)
ANTI_PATTERN: * or latest version ranges in production deps
ANTI_PATTERN: evaluating packages by download count alone (typosquatting risk)
ANTI_PATTERN: no review of dependency update changes
ANTI_PATTERN: build pipeline pulling from public registries without integrity verification
ANTI_PATTERN: SBOM generated but never consumed or analyzed
FIX: lock files committed, npm ci (not npm install) in CI
FIX: dependency review before adoption (maintenance, security history, license)
FIX: automated alerts on maintainer changes for critical deps
FIX: SBOM per build, stored alongside artifacts, queryable on new CVE (sketch below)
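EXAMPLE: minimal sketch of the "queryable on new CVE" FIX above, answering "which builds ship the affected package?". assumes CycloneDX JSON SBOMs and reads only components[].name/version; paths and the lodash usage line are illustrative.
```typescript
// Sketch: query stored per-build SBOMs when a new CVE lands. Assumes
// CycloneDX JSON; only the fields read here are relied on.
import { readFileSync } from "node:fs";

interface CycloneDxComponent { name: string; version: string }
interface CycloneDxSbom { components?: CycloneDxComponent[] }

export function buildsShipping(
  sbomPaths: string[],
  pkg: string,
  affectedVersions: Set<string>,
): string[] {
  return sbomPaths.filter((path) => {
    const sbom = JSON.parse(readFileSync(path, "utf8")) as CycloneDxSbom;
    return (sbom.components ?? []).some(
      (c) => c.name === pkg && affectedVersions.has(c.version),
    );
  });
}

// e.g. buildsShipping(["builds/v1.2/sbom.json"], "lodash", new Set(["4.17.20"]))
```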
SELF_CHECK¶
BEFORE_EVERY_AUDIT:
- [ ] am I flagging real issues or generating noise?
- [ ] am I providing actionable remediation guidance?
- [ ] am I consistent with severity classification?
- [ ] am I checking both automated output AND manual review areas?
- [ ] am I educating not just enforcing?
- [ ] am I considering the developer experience of my report?