Threat Modeling

Gate Requirement

No development starts without a completed threat model. Victoria produces and owns the threat model. This is the single most important security artifact in the GE pipeline.


Why Threat Model

Finding a vulnerability in design costs 1x. Finding it in implementation costs 10x. Finding it in production costs 100x. Finding it after a breach costs everything.

Threat modeling shifts security left — not to the build phase, but to before the first line of code is written. It forces the question: "What can go wrong?" before the question "How do we build it?"


STRIDE Methodology

GE uses Microsoft's STRIDE framework as the primary threat classification taxonomy. STRIDE maps directly to the CIA triad plus Authentication, Authorization, and Non-Repudiation.

The Six Categories

| Category | Threat | Security Property | Example |
|---|---|---|---|
| Spoofing | Pretending to be someone else | Authentication | Stolen session token used to impersonate a user |
| Tampering | Modifying data or code | Integrity | Man-in-the-middle altering API responses |
| Repudiation | Denying an action occurred | Non-repudiation | User claims they never authorized a transaction |
| Information Disclosure | Exposing data to unauthorized parties | Confidentiality | SQL injection leaking the customer database |
| Denial of Service | Making a system unavailable | Availability | Resource exhaustion through unbounded queries |
| Elevation of Privilege | Gaining unauthorized access | Authorization | IDOR allowing access to another user's data |

STRIDE per Element

Apply STRIDE to each element in the data flow diagram:

| Element Type | S | T | R | I | D | E |
|---|---|---|---|---|---|---|
| External Entity | X | | | | | |
| Process | X | X | X | X | X | X |
| Data Store | | X | ? | X | X | |
| Data Flow | | X | | X | X | |

Processes are exposed to all six categories. Data stores and data flows have reduced exposure. External entities are primarily spoofing vectors.
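The per-element mapping above can be encoded as a lookup so threat enumeration never skips a category. This is a minimal illustrative sketch; the names are hypothetical, not part of any GE tooling.

```python
# STRIDE categories applicable to each DFD element type, mirroring the
# STRIDE-per-element table above. The "?" for data stores (repudiation
# applies only when the store holds audit logs) is included here as "R".
STRIDE_PER_ELEMENT = {
    "external_entity": {"S"},
    "process": {"S", "T", "R", "I", "D", "E"},
    "data_store": {"T", "R", "I", "D"},  # "R" only if the store holds logs
    "data_flow": {"T", "I", "D"},
}

def applicable_threats(element_type: str) -> set:
    """Return the STRIDE categories to enumerate for a DFD element type."""
    try:
        return STRIDE_PER_ELEMENT[element_type]
    except KeyError:
        raise ValueError(f"Unknown DFD element type: {element_type}")
```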


Data Flow Diagrams

The data flow diagram (DFD) is the foundation of the threat model. It maps how data moves through the system and where trust changes.

Elements

┌──────────┐     External Entity (user, external system)
│  Actor   │     Rounded rectangle
└──────────┘

┌───────────┐
│  Process  │    Rectangle — code that transforms data
└───────────┘

╔═══════════╗
║ Data Store║    Double-bordered — database, file, cache
╚═══════════╝

──────────►      Data Flow — arrow showing direction

- - - - - -      Trust Boundary — dashed line

DFD Levels

| Level | Scope | Purpose |
|---|---|---|
| DFD-0 | System context | Shows the system as a single process with external entities |
| DFD-1 | Major components | Breaks the system into subsystems and data stores |
| DFD-2 | Component detail | Shows internal processes within a subsystem |

Victoria starts with DFD-0 for new projects. DFD-1 is required for all projects. DFD-2 is produced for security-critical components.

Example: Client Project Data Flow

                    Trust Boundary: Internet
- - - - - - - - - - - - - - - - - - - - - - - - - - -
                     ┌────────────┐
  ┌──────────┐       │   API      │       ┌──────────────┐
  │  Client  │──────►│  Gateway   │──────►│ Auth Service │
  │  Browser │◄──────│  (nginx)   │       └──────────────┘
  └──────────┘       └────────────┘              │
                           │         Trust Boundary: Internal
- - - - - - - - - - - - - -│- - - - - - - - - - - - - -
                     ┌────────────┐       ╔══════════════╗
                     │  App       │──────►║  PostgreSQL  ║
                     │  Service   │◄──────║  Database    ║
                     └────────────┘       ╚══════════════╝
                           │
                           ▼
                     ┌────────────┐
                     │  External  │
                     │  API       │
                     └────────────┘
                    Trust Boundary: Third Party
- - - - - - - - - - - - - - - - - - - - - - - - - - -

Every arrow crossing a trust boundary is a high-priority analysis point.
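Boundary-crossing flows can be found mechanically once each element is assigned a trust zone. The sketch below is illustrative; the element names and zone assignments mirror the example diagram, not a real GE inventory.

```python
# Assign each DFD element to a trust zone, then flag every flow whose
# endpoints sit in different zones: those are the high-priority analysis points.
ZONES = {
    "client_browser": "internet",
    "api_gateway": "dmz",
    "auth_service": "dmz",
    "app_service": "internal",
    "postgresql": "internal",
    "external_api": "third_party",
}

FLOWS = [
    ("client_browser", "api_gateway"),
    ("api_gateway", "auth_service"),
    ("api_gateway", "app_service"),
    ("app_service", "postgresql"),
    ("app_service", "external_api"),
]

def boundary_crossings(flows, zones):
    """Return the flows that cross a trust boundary."""
    return [(src, dst) for src, dst in flows if zones[src] != zones[dst]]
```

Flows within a single zone (gateway to auth service, app to database) drop out; the three crossings left are exactly the arrows that puncture the dashed lines in the diagram.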


Trust Boundaries

Trust boundaries separate zones with different privilege levels. Data crossing a trust boundary must be validated, authenticated, and potentially re-encrypted.

Common Trust Boundaries in GE Projects

| Boundary | From | To | Controls |
|---|---|---|---|
| Internet → DMZ | Untrusted client | API gateway | TLS, WAF, rate limiting |
| DMZ → Internal | API gateway | Application services | mTLS, JWT validation |
| Internal → Data | Application | Database | Connection auth, parameterized queries |
| Internal → External | Application | Third-party API | API key from Vault, egress filtering |
| Agent → Agent | GE agent | Another GE agent | Redis Stream auth, message validation |
| Human → System | Dirk-Jan | Admin UI | WebAuthn, session management |

Trust Boundary Rules

  1. Never trust data from a lower-trust zone
  2. Always validate at the boundary, not deeper in the stack
  3. Re-validate even if the sending zone claims to have validated
  4. Encrypt data crossing boundaries unless both zones are equally trusted
  5. Log all boundary crossings for audit
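Rules 2 and 3 can be made concrete with a boundary validator that ignores any upstream "already validated" claim and passes through only whitelisted fields. This is a minimal sketch; the payload schema and field names are hypothetical.

```python
# Validate at the boundary (rule 2) and re-validate regardless of what the
# sending zone claims (rule 3). Only explicitly whitelisted fields survive.
def validate_at_boundary(payload: dict) -> dict:
    """Re-validate an inbound payload, ignoring upstream validation claims."""
    # Rule 3: strip any claim that the sender already validated this data
    payload.pop("validated_upstream", None)

    user_id = payload.get("user_id")
    if not isinstance(user_id, str) or not user_id.isalnum():
        raise ValueError("user_id failed boundary validation")

    amount = payload.get("amount")
    if not isinstance(amount, int) or not (0 < amount <= 10_000):
        raise ValueError("amount failed boundary validation")

    # Whitelist: unknown fields from the lower-trust zone never cross over
    return {"user_id": user_id, "amount": amount}
```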

Attack Trees

Attack trees decompose a threat goal into the steps required to achieve it. They complement STRIDE by showing how an attack might be carried out, not just what could go wrong.

Structure

Goal: Access another user's data
├── OR: Exploit Broken Access Control
│   ├── AND: Find IDOR vulnerability
│   │   ├── Enumerate user IDs
│   │   └── Access /api/users/{id}/data without auth check
│   └── AND: Privilege escalation
│       ├── Register normal account
│       └── Modify role claim in JWT
├── OR: Exploit Authentication
│   ├── Credential stuffing attack
│   ├── Session fixation
│   └── AND: Social engineering
│       ├── Phish credentials
│       └── Bypass MFA via SIM swap
└── OR: Exploit Information Disclosure
    ├── SQL injection on search endpoint
    ├── Error message leaking stack trace
    └── Exposed backup file with database dump

When to Use Attack Trees

  • High-value assets (payment data, PII, credentials)
  • Complex multi-step attack scenarios
  • When STRIDE identifies a threat but the path is unclear
  • For communicating risk to non-technical stakeholders

Attack trees are optional for low-risk features. Victoria decides when they are warranted.


When Victoria Triggers Threat Modeling

Threat modeling is not a one-time activity. Victoria triggers a new or updated threat model when:

Mandatory Triggers

| Trigger | Scope | Deliverable |
|---|---|---|
| New project | Full system | Complete threat model (DFD-0 + DFD-1 + STRIDE) |
| New client onboarding | Client-specific components | Threat model addendum |
| Architecture change | Affected components | Updated threat model |
| External integration added | Integration boundary | Boundary threat analysis |
| Security incident | Affected system | Revised threat model + gap analysis |
| New API endpoint | Endpoint + data flow | Lightweight STRIDE per endpoint |
| Dependency upgrade (major) | Dependency surface | Supply chain risk assessment |
| Compliance requirement change | Affected controls | Control gap analysis |
| Quarterly review | Full system | Threat model freshness check |

Threat Model Template

Every threat model follows this structure:

1. Document Header

project: [Project name]
version: [Threat model version]
author: victoria
date: [Creation date]
last_reviewed: [Last review date]
status: [draft | review | approved | superseded]
reviewers: [List of reviewing agents]

2. System Description

  • Business context and purpose
  • Key assets and their classification
  • Users and their roles
  • External dependencies
  • Regulatory requirements

3. Data Flow Diagrams

  • DFD-0: System context
  • DFD-1: Component decomposition
  • DFD-2: Security-critical detail (where applicable)
  • Trust boundaries marked and labeled

4. Threat Enumeration

For each element in the DFD, apply STRIDE:

| ID | Element | Category | Threat Description | Likelihood | Impact | Risk Score | Mitigation |
|---|---|---|---|---|---|---|---|
| T-001 | API Gateway | Spoofing | Attacker forges authentication token | Medium | High | 12 | JWT validation with key rotation |
| T-002 | Database | Information Disclosure | SQL injection extracts data | Low | Critical | 10 | Parameterized queries, WAF rules |
| ... | ... | ... | ... | ... | ... | ... | ... |

5. Attack Trees

For high-risk threats, decompose into attack trees. Show AND/OR relationships between attack steps.

6. Mitigation Strategy

For each threat:

  • Mitigation approach (prevent, detect, respond, accept)
  • Specific controls
  • Agent responsible for implementation
  • Verification method

7. Residual Risk

Threats that cannot be fully mitigated:

  • Documented residual risk level
  • Justification for acceptance
  • Compensating controls
  • Human approval required — Victoria cannot accept residual risk alone


Risk Scoring

GE uses a Likelihood × Impact matrix for risk scoring.

Likelihood Scale

| Score | Level | Description |
|---|---|---|
| 1 | Very Low | Requires insider access + advanced skills + time |
| 2 | Low | Requires advanced skills or insider access |
| 3 | Medium | Exploitable with publicly available tools |
| 4 | High | Exploitable with basic skills and public tools |
| 5 | Very High | Trivially exploitable, automated attacks exist |

Impact Scale

| Score | Level | Description |
|---|---|---|
| 1 | Negligible | No data exposure, no service disruption |
| 2 | Minor | Limited data exposure, brief disruption |
| 3 | Moderate | Significant data exposure or extended disruption |
| 4 | Major | Large-scale data breach or prolonged outage |
| 5 | Critical | Complete system compromise, regulatory violation |

Risk Matrix

Impact
  5 │  5   10   15   20   25
  4 │  4    8   12   16   20
  3 │  3    6    9   12   15
  2 │  2    4    6    8   10
  1 │  1    2    3    4    5
    └─────────────────────────
      1    2    3    4    5   Likelihood

Risk Thresholds

| Score | Level | Action |
|---|---|---|
| 1-4 | Low | Accept with documentation |
| 5-9 | Medium | Mitigate — compensating controls acceptable |
| 10-15 | High | Mitigate — direct controls required |
| 16-25 | Critical | Mitigate before development proceeds |
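The scoring scheme and thresholds above reduce to a few lines of code. A minimal sketch, with function names of my choosing:

```python
# Risk score = Likelihood × Impact, each on a 1-5 scale, per the matrix above.
def risk_score(likelihood: int, impact: int) -> int:
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    return likelihood * impact

# Map a score onto the threshold bands: 1-4 Low, 5-9 Medium, 10-15 High, 16-25 Critical.
def risk_level(score: int) -> str:
    if score <= 4:
        return "Low"
    if score <= 9:
        return "Medium"
    if score <= 15:
        return "High"
    return "Critical"
```

For example, T-001 above (Medium likelihood = 3, High impact = 4) scores 12 and lands in the High band, so direct controls are required.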

Residual Risk Acceptance

Only humans (Dirk-Jan) can accept residual risk. No agent, including Victoria, has authority to accept risk.

The acceptance record includes:

  • Threat ID and description
  • Original risk score
  • Mitigations applied
  • Residual risk score
  • Business justification
  • Acceptance date and signature
  • Review date (maximum 6 months)
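The two hard constraints on an acceptance record (human sign-off, review within 6 months) can be checked automatically. This is an illustrative sketch with hypothetical field names; it is not the actual GE record format.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Assumption: "6 months" approximated as 182 days for the check below.
MAX_REVIEW_INTERVAL = timedelta(days=182)
AGENT_NAMES = {"victoria"}  # agents who must never appear as acceptor

@dataclass
class RiskAcceptance:
    threat_id: str
    residual_score: int
    accepted_by: str      # must be a human (e.g. Dirk-Jan), never an agent
    acceptance_date: date
    review_date: date

    def is_valid(self) -> bool:
        """True only if a human accepted and the review date is within the cap."""
        within_cap = self.review_date - self.acceptance_date <= MAX_REVIEW_INTERVAL
        human_signed = self.accepted_by.lower() not in AGENT_NAMES
        return within_cap and human_signed
```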


How Threat Models Feed the Pipeline

The threat model is not a document that sits in a drawer. It drives the entire development pipeline:

Threat Model
    ├──► Security Requirements → User stories / acceptance criteria
    ├──► Secure Design Patterns → Architecture decisions
    ├──► Test Cases → Pol's penetration test plan
    ├──► Monitoring Rules → Alerting configuration
    └──► Incident Playbooks → Team Zulu response procedures

Traceability

Every security control in the codebase traces back to a threat:

Threat T-003 (SQL Injection)
  → Requirement SR-003 (Parameterized queries)
    → Implementation (Drizzle ORM, no raw SQL)
      → Test TC-003 (SQL injection test suite)
        → Monitoring M-003 (WAF SQL injection alerting)

If a control has no threat, question why it exists. If a threat has no control, that is a gap.
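Both failure modes (a threat with no control, a control with no threat) can be detected from the traceability mapping. A minimal sketch; the IDs follow the T-xxx/SR-xxx convention above but are otherwise invented.

```python
# Given the threat register, the control inventory, and the threat->controls
# mapping, report both directions of traceability gaps.
def trace_gaps(threats, controls, mapping):
    """Return (unmitigated threats, orphaned controls)."""
    # Threats with no mapped control: these are gaps that must be closed.
    unmitigated = [t for t in threats if not mapping.get(t)]
    # Controls that no threat traces to: question why they exist.
    mapped_controls = set().union(*mapping.values()) if mapping else set()
    orphaned = [c for c in controls if c not in mapped_controls]
    return unmitigated, orphaned
```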


ASTRIDE: AI Agent Extensions

GE's 60-agent architecture introduces AI-specific threats. The ASTRIDE extension adds an "A" category:

| Category | Threat | Example |
|---|---|---|
| Agent Manipulation | Prompt injection, context poisoning | Malicious input in a work item tricks an agent into executing harmful code |
| Spoofing | Agent identity forgery | One agent impersonates another to access restricted resources |
| Tampering | Work item modification | Corrupted Redis Stream message alters agent behavior |
| Repudiation | Untraceable agent actions | Agent modifies code without an audit trail |
| Information Disclosure | Knowledge leakage | Agent exposes internal architecture in client-facing output |
| Denial of Service | Agent resource exhaustion | Token burn loop consumes budget |
| Elevation of Privilege | Agent permission escalation | Agent accesses files outside its workspace |

Victoria applies ASTRIDE when modeling agent-to-agent interactions and any system where agents process external input.
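To make the "A" category concrete, here is an illustrative screen for instruction-like patterns in work item text. The patterns are examples of my own choosing, and a denylist like this is a weak control on its own; it only demonstrates the threat, not a complete mitigation.

```python
import re

# Hypothetical denylist of instruction-like phrases that have no business
# appearing in a legitimate work item description.
SUSPECT_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"system prompt",
    r"run the following command",
]

def screen_work_item(text: str) -> bool:
    """Return True if the work item text looks like a prompt-injection attempt."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPECT_PATTERNS)
```

A flagged item would be routed to human review rather than silently dropped, preserving the audit trail that the Repudiation row demands.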


Further Reading