Security in the SDLC

End-to-End Security

Security is not a stage. It is present at every stage. This page maps GE's security controls to every phase of the Software Development Lifecycle, with ISO 27001 control references throughout.


Overview

Traditional SDLC treats security as a gate before release. GE treats security as a property of every phase.

Requirements → Design → Implementation → Testing → Deployment → Operations
     ↑            ↑           ↑              ↑          ↑            ↑
  Victoria     Victoria     OWASP          Pol      Immutable    Monitoring
  Threat       Secure       Top 10       Pen Test   Containers   Incident
  Model        Patterns     Prevention   Ashley     Three Gates  Response
                                         Chaos

Every phase has:

  • A security owner
  • Defined security activities
  • Exit criteria that must pass before proceeding
  • Traceability to the threat model


Phase 1: Requirements

Security Owner: Victoria
ISO 27001: A.5.8 (Information security in project management)

Activities

  1. Threat Model Creation Victoria produces a STRIDE threat model for the project. See Threat Modeling for full methodology. This happens before any design work begins.

  2. Security Requirements Derivation Each identified threat generates one or more security requirements. Requirements are specific, testable, and traceable.

Bad: "The system shall be secure."
Good: "The system shall enforce RBAC on all API endpoints, returning 403 for unauthorized access attempts within 50ms."

  3. Data Classification All data elements are classified:

     Level         Description          Examples               Controls
     Public        No sensitivity       Marketing content      Integrity only
     Internal      Business-sensitive   Agent configurations   Access control
     Confidential  Client data          PII, project data      Encryption + access control
     Restricted    Credentials, keys    API keys, passwords    Vault, rotation, audit

  4. Compliance Mapping Requirements are mapped to applicable regulations: GDPR, EAA, ISO 27001, SOC 2, and sector-specific requirements.
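
The classification table can be encoded so tooling enforces it rather than relying on memory. This is a minimal sketch; the `DataClass` enum and control names are illustrative, not GE's actual schema.

```python
from enum import IntEnum

class DataClass(IntEnum):
    """Ordered classification levels: higher value means more sensitive."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Minimum control set per level, mirroring the table above.
REQUIRED_CONTROLS = {
    DataClass.PUBLIC: {"integrity"},
    DataClass.INTERNAL: {"integrity", "access_control"},
    DataClass.CONFIDENTIAL: {"integrity", "access_control", "encryption"},
    DataClass.RESTRICTED: {"integrity", "access_control", "encryption",
                           "vault", "rotation", "audit"},
}

def controls_for(level: DataClass) -> set[str]:
    """Return the minimum controls a data element at `level` must carry."""
    return REQUIRED_CONTROLS[level]
```

Because the enum is ordered, a pipeline can also check that an aggregate inherits the highest classification of its parts.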

Exit Criteria

  • [ ] Threat model reviewed and signed off by Victoria
  • [ ] Security requirements documented and traceable to threats
  • [ ] Data classification complete for all data elements
  • [ ] Compliance requirements identified and mapped
  • [ ] Residual risk accepted by human (Dirk-Jan) where applicable

Phase 2: Design

Security Owner: Victoria + Lead Developer
ISO 27001: A.8.25 (Secure development lifecycle)

Activities

  1. Secure Architecture Patterns Developers pull validated patterns from the wiki. GE maintains a library of secure design patterns:

     • Authentication flows (WebAuthn, OAuth 2.0 + PKCE)
     • Authorization models (RBAC, ABAC)
     • API security (rate limiting, input validation, output encoding)
     • Data access patterns (parameterized queries, ORM security)
     • Cryptographic patterns (key management, envelope encryption)
     • Session management (httpOnly cookies, secure flags, rotation)

  2. Trust Boundary Definition Every interface between components is a trust boundary. Each boundary gets explicit security controls:

Client ←→ API Gateway      : TLS 1.3, JWT validation, rate limiting
API Gateway ←→ Service     : mTLS, service mesh auth
Service ←→ Database        : Connection pooling, role credentials
Service ←→ External API    : API key in Vault, egress filtering

  3. Attack Surface Minimization Every exposed endpoint, port, and interface is justified. If it does not need to be exposed, it is not exposed. Default-deny network policies in Kubernetes.

  4. Secure Defaults All configuration starts from the most restrictive setting. Permissions are opened only as needed, with justification.
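
The secure-defaults principle can be sketched as a configuration object whose every field starts at its most restrictive value, so that loosening anything is an explicit, reviewable act. Field names here are illustrative assumptions, not GE's actual configuration schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceConfig:
    """Every default is the most restrictive setting; overrides are explicit."""
    debug: bool = False               # never on by default
    allow_anonymous: bool = False     # deny by default
    cors_origins: tuple = ()          # empty allow-list, never "*"
    open_ports: tuple = (443,)        # TLS only
    session_ttl_seconds: int = 900    # short-lived sessions

restrictive = ServiceConfig()

# Opening a permission is a deliberate diff a reviewer can see and question:
dev = ServiceConfig(cors_origins=("https://dev.example.test",))
```

Freezing the dataclass prevents a handler from quietly widening the config at runtime.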

Exit Criteria

  • [ ] Architecture uses only validated secure patterns
  • [ ] Trust boundaries defined with explicit controls
  • [ ] Attack surface documented and minimized
  • [ ] Security design reviewed by Victoria
  • [ ] No security-by-obscurity in the design

Phase 3: Implementation

Security Owner: Implementing Agent + Piotr (review)
ISO 27001: A.8.28 (Secure coding)

OWASP Top 10:2025 Prevention

Every implementing agent follows these rules:

A01: Broken Access Control

  • Deny by default — every endpoint requires explicit authorization
  • RBAC checks at middleware level, not in business logic
  • No direct object references without ownership validation
  • CORS configuration: explicit allow-list, never wildcard in production
  • Rate limiting on all authenticated endpoints
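
Deny-by-default authorization can be sketched as a guard that wraps every handler, so an endpoint cannot run without an explicit allow-list. A real deployment would enforce this at the middleware layer; the `User` and `Forbidden` names are illustrative.

```python
import functools

class Forbidden(Exception):
    """Maps to an HTTP 403 at the framework edge."""

def require_role(*allowed: str):
    """Guard a handler: absence of a role, or an unlisted role, means denial."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(user, *args, **kwargs):
            role = getattr(user, "role", None)  # missing role -> deny
            if role not in allowed:
                raise Forbidden(f"role {role!r} may not call {handler.__name__}")
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator

class User:
    def __init__(self, role):
        self.role = role

@require_role("admin")
def delete_project(user, project_id):
    return f"deleted {project_id}"
```

Note the default path is denial: only an exact match against the allow-list lets the handler body execute.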

A02: Security Misconfiguration

  • No default credentials anywhere — ever
  • Error messages reveal no internal state
  • Security headers on all responses (CSP, HSTS, X-Frame-Options)
  • Debug mode disabled in all non-development environments
  • Directory listing disabled
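
The header rules above can be centralized so no individual handler can weaken them. A minimal sketch; the specific header values are illustrative and would be tuned per application.

```python
# Baseline applied to every outgoing response.
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "no-referrer",
}

def apply_security_headers(response_headers: dict) -> dict:
    """Merge the baseline in last, so a handler cannot silently override it."""
    merged = dict(response_headers)
    merged.update(SECURITY_HEADERS)  # baseline always wins
    return merged
```
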

A03: Software Supply Chain Failures

  • Dependencies pinned to exact versions in lockfiles
  • Automated vulnerability scanning on every build
  • Software Bill of Materials (SBOM) generated per release
  • No direct use of unvetted third-party code
  • Subresource integrity for CDN assets
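
Subresource integrity values are just a base64-encoded hash of the asset, which the build pipeline can compute and inject. The sketch below follows the SRI convention of sha384; the function name is illustrative.

```python
import base64
import hashlib

def sri_hash(asset_bytes: bytes) -> str:
    """Compute a Subresource Integrity value for a CDN-served asset."""
    digest = hashlib.sha384(asset_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# The result goes into the integrity attribute, e.g.:
#   <script src="https://cdn.example/app.js" integrity="sha384-..."></script>
# The browser refuses to execute the asset if the hash does not match.
```
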

A04: Cryptographic Failures

  • TLS 1.3 for all data in transit
  • AES-256-GCM for data at rest
  • No custom cryptographic implementations
  • Passwords: argon2id with tuned parameters
  • Keys: Vault-managed, auto-rotated, never in code

A05: Injection

  • Parameterized queries only — no string concatenation
  • ORM with escaped parameters for database access
  • Input validation: allowlist over denylist
  • Output encoding: context-aware (HTML, JS, URL, CSS)
  • Content-Security-Policy headers to prevent XSS
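
The parameterized-query rule can be shown concretely: the driver binds user input as data, so it can never become SQL syntax. A self-contained sqlite3 sketch; table and function names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

def find_user(name: str):
    # `name` is bound as a parameter, never concatenated into the SQL string.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchone()

# A classic injection payload is just an unmatched literal string:
#   find_user("alice' OR '1'='1") returns None instead of every row.
```
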

A06: Insecure Design

  • Handled by Phase 1 and Phase 2 — threat model exists
  • Business logic flaws caught in design review
  • Abuse cases documented alongside use cases

A07: Authentication Failures

  • WebAuthn as primary authentication method
  • Multi-factor authentication enforced for all accounts
  • Session tokens: cryptographically random, httpOnly, secure, SameSite
  • Account lockout after configurable failed attempts
  • Credential stuffing protection via rate limiting
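
The session-token rules combine a cryptographically random value with the cookie attributes named above. A minimal sketch; the TTL and dict shape are illustrative, and a real framework would emit these as a Set-Cookie header.

```python
import secrets

def new_session_cookie() -> dict:
    """Random session token plus the required cookie attributes."""
    return {
        "value": secrets.token_urlsafe(32),  # 256 bits from a CSPRNG
        "httponly": True,                    # not readable from JavaScript
        "secure": True,                      # sent over TLS only
        "samesite": "Strict",
        "max_age": 900,                      # short-lived; rotate on privilege change
    }

cookie = new_session_cookie()
```
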

A08: Software/Data Integrity Failures

  • All build artifacts are signed
  • Deployment pipeline verifies signatures
  • No unsigned code reaches production
  • CI/CD pipeline hardened against tampering

A09: Logging & Alerting Failures

  • Structured logging at every security-relevant event
  • Authentication events: success, failure, lockout
  • Authorization failures logged with context
  • Log integrity: append-only, tamper-evident
  • Alerting on anomalous patterns
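
Structured security logging means one machine-parseable record per event, so the aggregator can correlate rather than grep. A minimal sketch; the field names and logger name are illustrative assumptions.

```python
import json
import logging

logger = logging.getLogger("security")

def log_auth_event(event: str, user: str, outcome: str, **context) -> str:
    """Emit one JSON record per security-relevant event.
    Returns the serialized line so it can also be shipped to an aggregator."""
    record = {"event": event, "user": user, "outcome": outcome, **context}
    line = json.dumps(record, sort_keys=True)
    logger.info(line)
    return line

line = log_auth_event("login", "alice", "failure",
                      reason="bad_password", attempt=3)
```

Note what is logged: the event, outcome, and context, never the password itself.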

A10: Mishandling of Exceptional Conditions

  • Fail-closed: errors default to deny
  • No stack traces in production responses
  • Resource exhaustion handled gracefully
  • Timeout handling on all external calls
  • Circuit breakers on downstream dependencies
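
The fail-closed and circuit-breaker rules can be combined in one guard: after repeated failures the dependency is denied outright until a cooldown passes, rather than hammered or silently bypassed. A minimal sketch; the thresholds are illustrative.

```python
import time

class CircuitBreaker:
    """Fail-closed guard for a downstream dependency: after `threshold`
    consecutive failures, calls are refused until `cooldown` elapses."""

    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: failing closed")
            self.opened_at, self.failures = None, 0  # half-open: allow a retry
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the counter
        return result
```

The key property is the default: while the circuit is open, the answer is denial, not a degraded attempt that might skip a security control.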

Secure Coding Rules

  • Never log sensitive data (passwords, tokens, PII)
  • Never trust client-side validation alone
  • Always validate on the server
  • Use constant-time comparison for secrets
  • Sanitize all file paths to prevent traversal

Exit Criteria

  • [ ] OWASP Top 10 prevention verified in code review
  • [ ] No hardcoded secrets (scan with gitleaks or similar)
  • [ ] Input validation on all external inputs
  • [ ] Error handling follows fail-closed pattern
  • [ ] Piotr has reviewed security-critical code paths

Phase 4: Testing

Security Owner: Pol (penetration testing) + Ashley (chaos engineering)
ISO 27001: A.8.29 (Security testing in development and acceptance)

Pol: Penetration Testing

Pol performs structured penetration testing against the threat model:

  1. Reconnaissance — Map the attack surface
  2. Vulnerability Scanning — Automated tools (OWASP ZAP, nuclei)
  3. Manual Testing — Business logic flaws, authentication bypasses
  4. Exploitation — Proof-of-concept for identified vulnerabilities
  5. Reporting — Severity-rated findings with remediation guidance

Every finding is linked back to the threat model. New threats discovered during testing update the threat model.

Ashley: Chaos Engineering

Ashley tests resilience under failure conditions:

  • Network partition between services
  • Resource exhaustion (CPU, memory, disk)
  • Dependency failure (database, cache, external API)
  • Clock skew and time-dependent logic
  • Concurrent access and race conditions

Security relevance: systems must fail safely. A service under resource pressure must not bypass security controls.

Automated Security Testing

Integrated into the CI/CD pipeline:

Tool                Purpose                                     Stage
SAST                Static analysis for vulnerability patterns  Build
DAST                Dynamic testing of running application      Staging
SCA                 Dependency vulnerability scanning           Build
Secret scanning     Detect leaked credentials                   Pre-commit
Container scanning  Image vulnerability analysis                Build

Exit Criteria

  • [ ] Penetration test complete — all critical/high findings resolved
  • [ ] Chaos engineering scenarios passed without security degradation
  • [ ] Automated security scans clean
  • [ ] Regression tests for all fixed vulnerabilities

Phase 5: Deployment

Security Owner: Hugo
ISO 27001: A.8.31 (Separation of development, test, and production environments)

Three Gates

Every deployment passes through three gates:

  1. Build Gate — Artifact signed, SBOM generated, scans clean
  2. Staging Gate — Functional + security tests pass in staging
  3. Production Gate — Canary deployment, health checks, rollback ready

Immutable Containers

  • Container images are built once, promoted through environments
  • No modifications to running containers — ever
  • Read-only root filesystem
  • Non-root user
  • Minimal base image (distroless or Alpine)
  • No shell access in production containers

Infrastructure Security

  • Network policies: default-deny, explicit allow per service
  • Pod security standards: restricted profile
  • Secrets: Kubernetes Secrets backed by Vault
  • TLS certificates: auto-rotated via cert-manager
  • Ingress: WAF-filtered, rate-limited

Rollback

Every deployment has a tested rollback procedure. If security monitoring detects anomalies within the canary window, automatic rollback triggers.

Exit Criteria

  • [ ] All three gates passed
  • [ ] Container image signed and scanned
  • [ ] Network policies in place
  • [ ] Rollback procedure tested
  • [ ] Monitoring and alerting configured

Phase 6: Operations

Security Owner: Hugo + Monitoring Agents
ISO 27001: A.5.24 (Information security incident management planning and preparation)

Continuous Monitoring

  • Security event aggregation and correlation
  • Anomaly detection on authentication patterns
  • Resource usage monitoring for cryptomining indicators
  • Certificate expiry monitoring
  • Vulnerability feed monitoring for deployed dependencies

Incident Response

GE follows a structured incident response process:

  1. Detection — Automated alerting or manual report
  2. Triage — Severity assessment, scope determination
  3. Containment — Isolate affected systems
  4. Eradication — Remove the threat
  5. Recovery — Restore from known-good state
  6. Post-Incident — Root cause analysis, learning extraction

Team Zulu handles incident response. Learnings feed back into the threat model and wiki.

Patch Management

  • Critical vulnerabilities: patch within 24 hours
  • High vulnerabilities: patch within 7 days
  • Medium vulnerabilities: patch within 30 days
  • Low vulnerabilities: patch in next release cycle
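
The patch SLA can be made machine-checkable so overdue vulnerabilities alert automatically. A minimal sketch: the "next release cycle" window for low findings is modeled here as 90 days, which is an assumption, not a number stated in the policy.

```python
from datetime import datetime, timedelta

PATCH_SLA = {
    "critical": timedelta(hours=24),
    "high": timedelta(days=7),
    "medium": timedelta(days=30),
    "low": timedelta(days=90),   # assumption: stand-in for "next release cycle"
}

def patch_deadline(severity: str, disclosed: datetime) -> datetime:
    """Deadline by which the fix must be deployed; fail closed on unknown severity."""
    if severity not in PATCH_SLA:
        raise ValueError(f"unknown severity: {severity}")
    return disclosed + PATCH_SLA[severity]
```
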

Exit Criteria (Ongoing)

  • [ ] Monitoring dashboards reviewed daily
  • [ ] Vulnerability scans run weekly
  • [ ] Incident response procedures tested quarterly
  • [ ] Threat model updated on architecture changes

ISO 27001 Control Mapping

SDLC Phase      ISO 27001 Controls
Requirements    A.5.8, A.5.9, A.5.10
Design          A.8.25, A.8.26, A.8.27
Implementation  A.8.28, A.8.4, A.8.12
Testing         A.8.29, A.8.8, A.8.16
Deployment      A.8.31, A.8.32, A.8.9
Operations      A.5.24, A.5.25, A.5.26, A.5.27

Further Reading