DOMAIN:SECURITY:SECURE_PIPELINE_PRACTICES

OWNER: koen, eric
ALSO_USED_BY: alex, tjitte, marije, judith
UPDATED: 2026-03-19
SCOPE: all CI/CD pipelines, all projects
SOURCE: Secure by Design (Johnsson/Deogun/Sawano, 2019), Ch. 8


CORE_PRINCIPLE

RULE: delivery pipeline is a SECURITY TOOL — not just build/deploy automation
RULE: security tests belong in the pipeline, not as afterthought
RULE: pipeline failure = deploy blocked — any override requires explicit, documented human approval


SECURITY:TESTING_CATEGORIES

FOUR_LAYERS_OF_SECURITY_TESTING

layer          | what                          | when in pipeline      | examples
normal input   | expected valid input          | unit tests            | register with valid username
boundary input | edge cases at limits          | unit + integration    | min/max length, empty, null
invalid input  | input that should be rejected | unit tests            | SQL injection, XSS payloads, wrong types
extreme input  | adversarial/stress input      | integration + nightly | ReDoS patterns, billion laughs, huge payloads

NORMAL_INPUT_TESTING

CHECK: do tests verify happy path produces correct output?
CHECK: do tests verify no side effects on valid input?
NOTE: necessary but insufficient — most security bugs are in abnormal paths

BOUNDARY_INPUT_TESTING

RULE: test at EVERY boundary — min, max, min-1, max+1, zero, empty
CHECK: do domain primitives have boundary tests?
CHECK: for numerical: 0, -1, MAX_INT, MIN_INT?
CHECK: for strings: empty, whitespace-only, exactly-at-limit, one-over-limit?
CHECK: for collections: empty, single item, at-capacity?
ANTI_PATTERN: testing only mid-range values → misses off-by-one bugs
FIX: boundary value analysis for every input parameter
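A minimal sketch of boundary value analysis for a domain primitive. The `Username` class and its 4–40 character limits are assumptions for illustration, not a prescribed design:

```python
# Hypothetical domain primitive: username must be 4-40 characters.
class Username:
    MIN_LEN, MAX_LEN = 4, 40

    def __init__(self, value):
        if value is None or not isinstance(value, str):
            raise ValueError("username must be a string")
        if not (self.MIN_LEN <= len(value) <= self.MAX_LEN):
            raise ValueError(f"username length must be {self.MIN_LEN}-{self.MAX_LEN}")
        self.value = value

def accepts(value) -> bool:
    """True if Username accepts the value, False if it rejects it."""
    try:
        Username(value)
        return True
    except ValueError:
        return False

# Boundary value analysis: empty, min-1, min, max, max+1, null.
assert not accepts("")            # empty
assert not accepts("abc")         # one under minimum
assert accepts("abcd")            # exactly at minimum
assert accepts("a" * 40)          # exactly at maximum
assert not accepts("a" * 41)      # one over maximum
assert not accepts(None)          # null
```

The point is that every limit is probed from both sides; a test suite with only mid-range values would pass even if the `<=` were mistakenly written as `<`.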

INVALID_INPUT_TESTING (negative tests)

RULE: test that bad input is REJECTED, not just that good input is accepted
CHECK: does every domain primitive have negative tests?
CHECK: are injection payloads tested? (SQLi, XSS, command injection)
CHECK: are wrong-type inputs tested? (string where number expected, etc.)
ANTI_PATTERN: only positive tests ("it works") without negative tests ("it rejects bad input")
FIX: for every validation rule, test both accept AND reject
WARNING: absence of negative tests in a codebase is a RED FLAG for security
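A sketch of paired positive/negative tests for a validation rule. The order-ID format (exactly 8 digits) and the payload strings are illustrative assumptions:

```python
import re

# Hypothetical validation rule: order IDs are exactly 8 digits.
ORDER_ID = re.compile(r"\A\d{8}\Z")

def is_valid_order_id(value) -> bool:
    # Type check first: a non-string can never be a valid order ID.
    return isinstance(value, str) and bool(ORDER_ID.match(value))

# Positive test: well-formed input is accepted.
assert is_valid_order_id("12345678")

# Negative tests: each rejection reason gets its own case.
assert not is_valid_order_id("1234567")                    # too short
assert not is_valid_order_id("12345678' OR '1'='1")        # SQLi payload
assert not is_valid_order_id("<script>alert(1)</script>")  # XSS payload
assert not is_valid_order_id("1234;rm -rf /")              # command injection
assert not is_valid_order_id(12345678)                     # wrong type (int)
```

Note the anchored regex (`\A`/`\Z` rather than unanchored search): an unanchored match would accept an injection payload that merely *contains* eight digits.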

EXTREME_INPUT_TESTING

RULE: test with adversarial input designed to break parsing
CHECK: does input cause excessive resource consumption? (ReDoS, XML bomb, JSON depth)
CHECK: what happens with 10MB input to a field expecting 100 chars?
CHECK: what happens with deeply nested JSON/XML?
TOOL: use fuzzing tools to generate extreme input automatically
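One way to make extreme input cheap to reject is a pre-parse guard. The sketch below assumes size and depth limits (`MAX_BYTES`, `MAX_DEPTH`) and a naive bracket count that ignores brackets inside strings — it is a pre-filter illustration, not a complete parser guard:

```python
import json

MAX_BYTES = 10_000   # assumed field limit
MAX_DEPTH = 32       # assumed nesting limit

def safe_parse(raw: str):
    """Reject oversized or deeply nested JSON before full parsing."""
    if len(raw.encode()) > MAX_BYTES:
        raise ValueError("payload too large")
    depth = 0
    for ch in raw:  # naive: does not skip brackets inside string literals
        if ch in "[{":
            depth += 1
            if depth > MAX_DEPTH:
                raise ValueError("nesting too deep")
        elif ch in "]}":
            depth -= 1
    return json.loads(raw)

# 10MB of input to a field expecting ~100 chars: rejected cheaply.
try:
    safe_parse("x" * 10_000_000)
    raise AssertionError("should have been rejected")
except ValueError as e:
    assert "too large" in str(e)

# Deep nesting: rejected before the parser recurses.
try:
    safe_parse("[" * 1000 + "]" * 1000)
    raise AssertionError("should have been rejected")
except ValueError as e:
    assert "too deep" in str(e)

assert safe_parse('{"a": [1, 2]}') == {"a": [1, 2]}
```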


SECURITY:FEATURE_TOGGLES

RULE: feature toggles are DEPLOYMENT DECISIONS with security implications
RULE: toggle state must be testable — every toggle combination that reaches production must have test coverage

TOGGLE_SECURITY_RISKS

ANTI_PATTERN: toggle accidentally exposes unfinished feature in production
FIX: default to OFF for new features; explicit activation required
ANTI_PATTERN: old toggles left in code → dead code with potential security holes
FIX: toggle hygiene — remove toggles within 2 sprints of full rollout
ANTI_PATTERN: toggle interaction — feature A safe, feature B safe, A+B together = unsafe
FIX: test toggle combinations, not just individual toggles
CHECK: how many toggles are active? (>10 = exponential complexity, audit needed)
CHECK: are there toggles older than 3 months? (likely dead, remove)

TESTING_TOGGLES

RULE: test with toggle ON and toggle OFF — both are production paths
RULE: if N toggles, test critical combinations (not all 2^N)
CHECK: does CI include toggle-variant test runs?
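A sketch of a toggle-variant test run. The toggle names and `render_nav` function are hypothetical; with only two critical toggles, all 2^2 combinations are cheap to cover:

```python
import itertools

# Hypothetical toggles; both states of each are production paths.
TOGGLE_NAMES = ["new_checkout", "beta_search"]

def render_nav(toggles: dict) -> list:
    nav = ["home", "orders"]
    if toggles.get("new_checkout"):
        nav.append("checkout-v2")
    if toggles.get("beta_search"):
        nav.append("search-beta")
    return nav

# Test every combination of the critical toggles (here 2^2 = 4).
for combo in itertools.product([False, True], repeat=len(TOGGLE_NAMES)):
    toggles = dict(zip(TOGGLE_NAMES, combo))
    nav = render_nav(toggles)
    # An unfinished feature must never leak when its toggle is off,
    # regardless of what the other toggles are set to.
    assert ("checkout-v2" in nav) == toggles["new_checkout"]
    assert ("search-beta" in nav) == toggles["beta_search"]
```

With many toggles, the same loop runs over a curated list of critical combinations instead of the full product.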

TOGGLE_AUDITING

RULE: toggle state changes must be logged with who/when/why
RULE: toggle access must be permission-controlled (not everyone can flip production toggles)
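The two auditing rules can be sketched together: a single choke point that enforces permissions and records who/when/why. The allow-list and in-memory log are illustrative assumptions; a real system would back both with durable, access-controlled storage:

```python
from datetime import datetime, timezone

audit_log = []
ALLOWED = {"koen", "eric"}  # assumed: only these users may flip production toggles

def set_toggle(toggles: dict, name: str, state: bool, who: str, why: str):
    """Flip a toggle through one audited, permission-checked entry point."""
    if who not in ALLOWED:
        raise PermissionError(f"{who} may not change production toggles")
    toggles[name] = state
    audit_log.append({
        "toggle": name, "state": state, "who": who, "why": why,
        "when": datetime.now(timezone.utc).isoformat(),
    })

toggles = {"new_checkout": False}
set_toggle(toggles, "new_checkout", True, who="koen", why="rollout step 1")
assert toggles["new_checkout"] is True
assert audit_log[0]["who"] == "koen" and audit_log[0]["why"] == "rollout step 1"

# Unauthorized flips are refused and leave no state change behind.
try:
    set_toggle(toggles, "new_checkout", False, who="mallory", why="oops")
except PermissionError:
    pass
assert toggles["new_checkout"] is True and len(audit_log) == 1
```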


SECURITY:CONFIGURATION_VALIDATION

RULE: configuration is CODE — validate it like code

COMMON_CONFIG_SECURITY_FLAWS

ANTI_PATTERN: security feature disabled by default, enabled by config flag nobody remembers
FIX: security features ON by default — config can enhance, not disable
ANTI_PATTERN: different config between dev and prod → "works on my machine" security gaps
FIX: minimize config differences; use same defaults everywhere
ANTI_PATTERN: implicit behaviors — framework defaults that change between versions
FIX: make ALL security-relevant config explicit — don't rely on defaults

CONFIGURATION_HOT_SPOTS

CHECK: authentication/authorization config — is it explicit?
CHECK: CORS configuration — too permissive?
CHECK: TLS version and cipher suite config
CHECK: database connection config — SSL enabled?
CHECK: logging level — debug in production leaks sensitive data?

AUTOMATED_CONFIG_TESTING

RULE: write tests that verify configuration is correct
EXAMPLE: test that TLS 1.0/1.1 is disabled
EXAMPLE: test that debug mode is off in production
EXAMPLE: test that default credentials are absent
EXAMPLE: test that all security headers are present
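The four examples above can be written as ordinary tests against the deployed configuration. The `config` dict and its keys are assumptions standing in for however your project loads production config:

```python
# Hypothetical production config, stood in for the real deploy artifact.
config = {
    "tls_min_version": "TLSv1.2",
    "debug": False,
    "admin_password": "generated-at-deploy",
    "security_headers": {
        "Strict-Transport-Security": "max-age=31536000",
        "X-Content-Type-Options": "nosniff",
        "Content-Security-Policy": "default-src 'self'",
    },
}

def test_tls_floor():
    assert config["tls_min_version"] not in ("SSLv3", "TLSv1.0", "TLSv1.1")

def test_debug_off_in_production():
    assert config["debug"] is False

def test_no_default_credentials():
    assert config["admin_password"] not in ("admin", "password", "changeme")

def test_security_headers_present():
    for header in ("Strict-Transport-Security", "X-Content-Type-Options",
                   "Content-Security-Policy"):
        assert header in config["security_headers"]

for t in (test_tls_floor, test_debug_off_in_production,
          test_no_default_credentials, test_security_headers_present):
    t()
```

Run against the real config in the pipeline, these turn a silent misconfiguration into a failed build.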

VERIFY_IMPLICIT_BEHAVIORS

RULE: when upgrading frameworks, verify security-relevant defaults haven't changed
CHECK: did the XML parser default change? (entity expansion enabled/disabled)
CHECK: did the cookie default change? (SameSite, Secure, HttpOnly)
CHECK: did the CORS default change?
NOTE: framework upgrades are a SECURITY EVENT — review changelogs for security-relevant changes
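One way to catch default drift is to pin the defaults you depend on in tests, so an upgrade that changes them fails the build. A sketch using Python's stdlib as the example framework; the specific defaults asserted (hostname checking, certificate verification, no external-entity expansion) hold for current CPython, but treat the exact assertions as something to adapt to your own stack:

```python
import ssl
import sys
import xml.etree.ElementTree as ET

# Pin security-relevant library defaults; an upgrade that weakens
# them should fail here instead of silently reaching production.

ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED
if sys.version_info >= (3, 10):
    # Python 3.10+ disables TLS 1.0/1.1 in the default context.
    assert ctx.minimum_version >= ssl.TLSVersion.TLSv1_2

# The stdlib XML parser must not fetch external entities: a document
# referencing one should fail to parse rather than expand it.
EVIL = '<!DOCTYPE r [<!ENTITY x SYSTEM "file:///etc/passwd">]><r>&x;</r>'
try:
    ET.fromstring(EVIL)
    expanded = True
except ET.ParseError:
    expanded = False
assert expanded is False
```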


SECURITY:AUTOMATED_SECURITY_SCANNING

IN_PIPELINE

TOOL: SAST (static analysis) — run on every commit
TOOL: dependency scanning — run on every commit
TOOL: container image scanning — run on every build
RULE: any CRITICAL finding = pipeline fails
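The CRITICAL-finding rule can be enforced by a small gate step between the scanner and the deploy stage. The report format below (a JSON list with `severity` fields) is an assumption; adapt the parsing to your scanner's actual output (e.g. SARIF or tool-specific JSON):

```python
import json
import sys

def gate(report_json: str) -> int:
    """Return a nonzero exit code (blocking the deploy) on any CRITICAL finding."""
    findings = json.loads(report_json)
    critical = [f for f in findings if f.get("severity") == "CRITICAL"]
    for f in critical:
        print(f"CRITICAL: {f.get('id')} in {f.get('location')}", file=sys.stderr)
    return 1 if critical else 0

report = ('[{"id": "CVE-2024-0001", "severity": "CRITICAL", "location": "libfoo"},'
          ' {"id": "CVE-2024-0002", "severity": "LOW", "location": "libbar"}]')
assert gate(report) == 1   # critical finding present: pipeline fails
assert gate('[{"id": "CVE-2024-0002", "severity": "LOW"}]') == 0
```

In CI, the step simply exits with the gate's return code, so the pipeline's normal failure handling blocks the deploy.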

SCHEDULED

TOOL: DAST (dynamic analysis) — run nightly against staging
TOOL: port scanning — run against deployment targets
TOOL: infrastructure scanning — verify IaC compliance
RULE: run alongside other long-running tests (performance, E2E)

INFRASTRUCTURE_AS_CODE

RULE: infrastructure config in version control — auditable history
BENEFIT: deterministic rebuilds — known-good state
BENEFIT: security review of infrastructure changes via PR
CHECK: is infrastructure defined declaratively?
CHECK: are infrastructure changes reviewed like code changes?


SECURITY:PENETRATION_TEST_RESPONSE

RULE: pen test findings are LEARNING opportunities, not just bug reports

RESPONSE_LEVELS

level                | response                                            | value
0: ignore            | do nothing                                          | zero — waste of pen test budget
1: fix reported      | fix exact vulnerabilities found                     | short-term only — same mistakes repeat
2: find similar      | search for same vulnerability class across codebase | good short-term — fewer of that class remain
3: systemic learning | retrospective, root cause analysis, process change  | short AND long-term — prevents recurrence

RULE: always aim for Level 3 — systemic learning
ANTI_PATTERN: fix only what pen test explicitly found
FIX: treat findings as EXAMPLES of a class — search for and fix all instances
FIX: run themed retrospective with pen test report as input
FIX: add findings to code review checklists and CI/CD pipeline checks
READ_ALSO: domains/security/index.md (A04:INSECURE_DESIGN)


WIKI_REF: domains/security/books/secure-by-design.md (full chapter mapping)
READ_ALSO: domains/security/secure-design-patterns.md, domains/security/secure-failure-handling.md