SOC 2 TYPE II — CONTINUOUS COMPLIANCE

OWNER: julian
UPDATED: 2026-03-24
SCOPE: maintaining SOC 2 Type II compliance over time, audit engagement lifecycle
STANDARD: AICPA SOC 2 (AT-C Section 205 / SSAE 18)


TYPE_I_VS_TYPE_II

TYPE_I (Point-in-Time)

WHAT: assesses control DESIGN at a specific date
QUESTION: "are the controls suitably designed on [date]?"
DURATION: typically 1-2 weeks of auditor engagement
USE: first-time SOC 2 engagements, quick wins, proof of concept
LIMITATION: says nothing about whether controls actually WORKED over time
CLIENT_PERCEPTION: acceptable for initial proof, but enterprises want Type II

TYPE_II (Period of Time)

WHAT: assesses control DESIGN and OPERATING EFFECTIVENESS over a period (typically 6-12 months)
QUESTION: "did the controls work consistently from [start date] to [end date]?"
DURATION: observation period 6-12 months + 4-8 weeks audit fieldwork
USE: ongoing compliance demonstration, enterprise client requirement
ADVANTAGE: proves controls are not just designed well but actually function
CLIENT_PERCEPTION: gold standard for trust — this is what enterprises require

RULE: Type II is the target for GE — Type I is acceptable only as a stepping stone
RULE: observation period MUST be continuous — gaps require explanation in the report

WHAT_CHANGES_IN_TYPE_II

TYPE_I_EVIDENCE: policy documents, configuration snapshots, process descriptions
TYPE_II_EVIDENCE: everything in Type I PLUS operating records over the entire period
EXAMPLES:
- Type I: "access review policy exists" → Type II: "4 quarterly access reviews were performed, here are the records"
- Type I: "change management procedure documented" → Type II: "247 changes went through the procedure in 12 months, here is the evidence"
- Type I: "incident response plan exists" → Type II: "3 incidents occurred, all followed the plan, here are the response records"

IF a control existed for part of the period but not all THEN auditor notes the gap
IF a control failed during the period THEN auditor evaluates severity and compensating controls
IF a control was never tested because no events occurred THEN auditor may test it through inquiry + walkthrough


CONTINUOUS_COMPLIANCE_FRAMEWORK

PRINCIPLE

Compliance is NOT a project with a start and end date.
Compliance is a CONTINUOUS OPERATING STATE.
Evidence MUST accumulate as a byproduct of normal operations.
IF you scramble to create evidence before an audit THEN you have already failed.

THREE_PILLARS

PILLAR_1: AUTOMATED_EVIDENCE_COLLECTION
- Scan results generated on schedule (trivy, semgrep, kube-bench)
- Access reviews performed quarterly (amber + piotr)
- Change records auto-generated by git/CI/CD
- Monitoring data continuously captured
- Agent task records in PostgreSQL
GE_ADVANTAGE: AI agents generate evidence automatically through normal operation

PILLAR_2: CONTROL_TESTING
- Continuous automated testing (CI/CD pipeline, Falco, cost gates)
- Periodic manual testing (amber internal audits, pol penetration tests)
- Testing cadence matches control criticality
GE_ADVANTAGE: anti-LLM pipeline IS control testing — every work package goes through review gates

PILLAR_3: EXCEPTION_MANAGEMENT
- Exceptions detected promptly (monitoring agents, automated alerts)
- Exceptions documented immediately (not retroactively)
- Corrective actions tracked to closure
- Root cause analysis prevents recurrence
GE_ADVANTAGE: mira (Incident Commander) + learning cycle ensure exceptions feed improvement
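Pillar 1 can be sketched as a small helper that stamps every operational artifact with the metadata an auditor needs. This is a minimal illustration, not GE's actual evidence schema — the control IDs, field names, and source labels are assumptions for the example.

```python
import hashlib
import json
from datetime import datetime, timezone

def evidence_record(control_id: str, source: str, artifact: bytes) -> dict:
    """Wrap an operational artifact (scan output, task log) as audit evidence.

    The artifact hash lets an auditor verify the stored file was not
    altered after collection. Field names here are illustrative.
    """
    return {
        "control_id": control_id,  # e.g. "CC7.1"
        "source": source,          # e.g. "trivy-daily-scan"
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(artifact).hexdigest(),
    }

# Example: a daily scan result becomes evidence as a byproduct of the scan itself.
record = evidence_record("CC7.1", "trivy-daily-scan", b'{"vulnerabilities": []}')
print(json.dumps(record, indent=2))
```

The point is the timing: the record is created at the moment the operation runs, so the evidence trail accumulates without a separate collection step.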


EVIDENCE_COLLECTION_CADENCE

CONTINUOUS (real-time or per-event)

WHAT_IS_COLLECTED:
- Git commits and PR records (change management)
- CI/CD pipeline execution logs (deployment controls)
- Redis stream events (communication audit trail)
- Agent task execution records (operational records)
- Monitoring alerts and responses (incident detection)
- Cost gate enforcement records (financial controls)
WHERE: PostgreSQL, git, Redis streams, application logs
WHO_GENERATES: all operational agents
WHO_REVIEWS: monitoring agents (annegreet, ron), mira (escalation)

DAILY

WHAT_IS_COLLECTED:
- Backup execution confirmations (otto)
- Vulnerability scan results (trivy)
- System health metrics
WHERE: backup logs, scan result storage, monitoring dashboards
WHO_GENERATES: otto, victoria, automated scans
WHO_REVIEWS: automated alerts for failures

WEEKLY

WHAT_IS_COLLECTED:
- Vulnerability scan summary report (victoria)
- Agent compliance summary (amber)
- Open incident status review (mira)
WHERE: wiki reports, monitoring system
WHO_GENERATES: victoria, amber, mira
WHO_REVIEWS: julian

MONTHLY

WHAT_IS_COLLECTED:
- Backup restore test results (otto)
- Secret rotation compliance report (piotr)
- Certificate expiry report (jette)
- Vulnerability SLA compliance report (victoria)
- Incident summary (mira)
WHERE: wiki, audit records
WHO_GENERATES: otto, piotr, jette, victoria, mira
WHO_REVIEWS: julian, amber

QUARTERLY

WHAT_IS_COLLECTED:
- Access review completion (amber + piotr)
- Risk register update (julian)
- Asset inventory update (julian + arjan)
- Management review package (julian → dirk-jan)
- Internal audit results (amber)
WHERE: wiki, audit records, management review minutes
WHO_GENERATES: amber, julian, piotr, arjan
WHO_REVIEWS: dirk-jan (management review)

ANNUALLY

WHAT_IS_COLLECTED:
- Full policy review cycle (julian)
- Threat landscape assessment (victoria)
- Penetration test report (pol)
- Training programme review (julian)
- Regulatory register update (julian)
- ICT continuity plan test (otto + arjan)
WHERE: wiki, secure document storage
WHO_GENERATES: julian, victoria, pol, otto, arjan
WHO_REVIEWS: dirk-jan, external auditor
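The cadence tiers above lend themselves to an automated staleness check: given when evidence was last collected, flag anything overdue before an auditor finds the gap. A minimal sketch, assuming day-count thresholds that approximate each tier (the exact grace windows are an assumption, not a stated SLA):

```python
from datetime import date

# Maximum days between collections per cadence tier (illustrative thresholds).
CADENCE_DAYS = {
    "daily": 1,
    "weekly": 7,
    "monthly": 31,
    "quarterly": 92,
    "annually": 366,
}

def is_overdue(cadence: str, last_collected: date, today: date) -> bool:
    """True if evidence for this cadence tier has not been collected in time."""
    return (today - last_collected).days > CADENCE_DAYS[cadence]

# A backup restore test last run 45 days ago breaches the monthly cadence.
print(is_overdue("monthly", date(2026, 2, 1), date(2026, 3, 18)))  # True
```

A check like this turns a missed collection into a detected exception (STEP_1 below) rather than a surprise during fieldwork.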


CONTROL_TESTING_SCHEDULE

HIGH_FREQUENCY (continuous or weekly)

CONTROLS_TESTED:
- Change management (every PR = test) — CC8.1
- Access restrictions (k8s RBAC enforcement) — CC6.1
- Vulnerability scanning (daily trivy) — CC7.1
- Cost controls (every execution) — CC9.1
- Monitoring (continuous Falco) — CC7.1, CC7.2
TEST_METHOD: automated
EVIDENCE: pipeline logs, scan results, enforcement records

MEDIUM_FREQUENCY (monthly or quarterly)

CONTROLS_TESTED:
- Backup and recovery (monthly restore test) — A1.3
- Secret rotation (monthly check) — CC6.7
- Access review (quarterly) — CC6.3
- Risk assessment (quarterly update) — CC3.2
- Configuration drift (quarterly kube-bench) — CC5.2
TEST_METHOD: semi-automated + manual review
EVIDENCE: test results, review reports

LOW_FREQUENCY (semi-annual or annual)

CONTROLS_TESTED:
- Incident response (semi-annual tabletop) — CC7.4
- Business continuity (annual test) — A1.3
- Penetration testing (annual) — CC7.1
- Policy effectiveness (annual review) — CC1.1, CC5.3
- Supplier assessment (annual) — CC9.2
TEST_METHOD: manual + structured exercise
EVIDENCE: exercise reports, assessment reports, policy review records


EXCEPTION_HANDLING

WHAT_IS_AN_EXCEPTION

An exception is any deviation from a documented control, including:
- Control not performed when scheduled
- Control performed but not documented
- Control performed but ineffective
- Control bypassed (emergency or otherwise)
- Control not yet implemented for new requirement

EXCEPTION_RESPONSE_PROCEDURE

STEP_1: DETECT
- Automated detection (monitoring agents, Falco, cost gates)
- Manual detection (amber audits, management review)
- External detection (client report, regulator inquiry)
RULE: detection within 24 hours for critical controls

STEP_2: DOCUMENT
- Date and time of exception
- Control affected (CC/A/C/PI reference)
- Description of what happened
- Impact assessment (who/what was affected)
- Root cause (if immediately known)
RULE: document IMMEDIATELY — not after the fact

STEP_3: COMPENSATING_CONTROL
IF control failed THEN identify compensating measures
EXAMPLE: if automated scan missed a run, perform manual scan
EXAMPLE: if access review delayed, perform expedited review + document reason
RULE: compensating controls must be in place within 48 hours for high-criticality exceptions

STEP_4: CORRECTIVE_ACTION
- Root cause analysis
- Corrective action to prevent recurrence
- Implementation timeline
- Verification that corrective action is effective
RULE: corrective actions tracked to closure in remediation register

STEP_5: AUDITOR_NOTIFICATION
IF exception is significant (affects control operating effectiveness) THEN:
- Document in exception log for auditor review
- Include in management review
- May result in qualified opinion if not remediated


GAP_REMEDIATION_TIMELINE

CRITICAL_GAPS (controls fundamentally absent)

TIMELINE: remediate within 30 days
EXAMPLE: no access control policy, no change management process
ESCALATION: immediate management notification

SIGNIFICANT_GAPS (controls partially implemented or ineffective)

TIMELINE: remediate within 60 days
EXAMPLE: access reviews performed but not documented, backup exists but never tested
ESCALATION: next management review

MINOR_GAPS (documentation or minor procedural)

TIMELINE: remediate within 90 days
EXAMPLE: policy not reviewed on schedule, minor documentation updates needed
ESCALATION: internal audit follow-up

RULE: gap remediation deadlines are HARD — auditors check closure dates
IF gap not remediated within timeline THEN escalate and document reason
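The three tiers above reduce to a simple deadline computation, which is worth automating precisely because the deadlines are hard. A minimal sketch with the 30/60/90-day windows from this section (function and register names are illustrative):

```python
from datetime import date, timedelta

# Remediation windows per gap severity, in days (from the tiers above).
REMEDIATION_DAYS = {"critical": 30, "significant": 60, "minor": 90}

def remediation_deadline(severity: str, identified: date) -> date:
    """Hard closure deadline for a gap; auditors check these dates."""
    return identified + timedelta(days=REMEDIATION_DAYS[severity])

def needs_escalation(severity: str, identified: date, today: date) -> bool:
    """True once the deadline has passed without closure (escalate + document)."""
    return today > remediation_deadline(severity, identified)

print(remediation_deadline("critical", date(2026, 3, 1)))            # 2026-03-31
print(needs_escalation("critical", date(2026, 3, 1), date(2026, 4, 2)))  # True
```

Running a check like this in the remediation register means overdue gaps escalate automatically instead of relying on someone remembering the date.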


SOC_2_TYPE_II_AUDIT_ENGAGEMENT

PHASE_1: SCOPING (4-6 weeks before observation period)

ACTIVITIES:
- Select auditor (CPA firm)
- Define scope (systems, Trust Services Criteria)
- Define observation period (recommend 12 months for first engagement)
- Sign engagement letter
- Provide system description for review
GE_DELIVERABLES: system description, scope definition, control matrix

PHASE_2: READINESS_ASSESSMENT (before observation period)

ACTIVITIES:
- Gap analysis against selected criteria
- Control mapping review
- Evidence availability check
- Remediation of identified gaps
GE_APPROACH: amber performs internal readiness assessment 3 months before observation period start

PHASE_3: OBSERVATION_PERIOD (6-12 months)

ACTIVITIES:
- Controls operate normally
- Evidence accumulates as byproduct of operations
- Exceptions documented immediately
- Internal audits continue per schedule
- Periodic check-ins with auditor (optional)
RULE: NO changes to controls during observation period without documentation and impact assessment
RULE: ALL operating-effectiveness evidence must be generated during the observation period — evidence from before the period start does not count

PHASE_4: FIELDWORK (4-8 weeks)

ACTIVITIES:
- Auditor requests evidence samples
- Auditor interviews personnel (including Dirk-Jan for management questions)
- Auditor performs walkthroughs of controls
- Auditor tests operating effectiveness through sampling
- Auditor evaluates exceptions and compensating controls
GE_PREPARATION:
- julian coordinates evidence delivery
- amber provides internal audit reports
- Each control owner provides evidence from their domain
CHECK: evidence organized by criteria (CC1-CC9, A1, C1, PI1) for easy auditor access
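Organizing evidence for auditor delivery can be automated if evidence identifiers embed the criterion they support. A small sketch that groups records by Trust Services family (CC1-CC9, A1, C1, PI1); the `"CC6.1/artifact-name"` naming scheme is an assumption for the example, not GE's actual convention:

```python
from collections import defaultdict

def group_by_family(evidence_ids: list[str]) -> dict[str, list[str]]:
    """Group evidence IDs by Trust Services family for auditor delivery.

    Assumes IDs like "CC6.1/rbac-config-export" (criterion, then artifact).
    """
    grouped: dict[str, list[str]] = defaultdict(list)
    for eid in evidence_ids:
        family = eid.split(".", 1)[0]  # "CC6.1/..." -> "CC6"
        grouped[family].append(eid)
    return dict(grouped)

evidence = [
    "CC6.1/rbac-config-export",
    "CC6.3/2026-Q1-access-review",
    "A1.3/2026-02-restore-test",
]
print(group_by_family(evidence))
```

Grouping by family up front means an auditor's sample request for, say, CC6 maps to one folder instead of a search across systems.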

PHASE_5: REPORTING (2-4 weeks)

ACTIVITIES:
- Auditor drafts report
- GE reviews system description and factual accuracy
- GE provides management assertion
- Final report issued
REPORT_SECTIONS:
- Section I: Management assertion
- Section II: Independent auditor's report (opinion)
- Section III: System description
- Section IV: Trust Services Criteria, controls, tests, and results
- Section V: Other information (optional)

PHASE_6: POST-AUDIT

ACTIVITIES:
- Address any exceptions noted in report
- Plan remediation for findings
- Begin evidence collection for next period
- Share report with clients (under NDA typically)
RULE: SOC 2 report is typically shared under NDA — it is NOT a public document


COMMON_PITFALLS

EVIDENCE_STALENESS

PROBLEM: evidence collected at start of period, not updated
SOLUTION: continuous collection cadence (see schedule above)
CHECK: amber verifies evidence currency during quarterly reviews

SCOPE_CREEP

PROBLEM: trying to cover everything instead of defined scope
SOLUTION: clearly defined scope at engagement start, document exclusions
CHECK: scope statement reviewed at kickoff

PEOPLE_CHANGES

PROBLEM: control owners change, knowledge lost
SOLUTION: wiki brain preserves institutional knowledge, agent profiles ensure continuity
CHECK: AGENT-REGISTRY.json maintained, handoff procedures followed

VENDOR_DEPENDENCE

PROBLEM: relying on vendor SOC 2 reports without understanding gaps
SOLUTION: review vendor reports for relevant criteria, identify complementary user controls
CHECK: vendor report review logged annually

EXCEPTION_FEAR

PROBLEM: hiding exceptions instead of documenting them
SOLUTION: culture of transparency — documented exception with corrective action is BETTER than undocumented compliance
RULE: amber's audit role is to find issues, not to judge — issues found and fixed demonstrate maturity


YEAR-OVER-YEAR_MATURITY

YEAR_1: FOUNDATION

- Type I report (point-in-time baseline)
- Control framework established
- Evidence collection cadence implemented
- Gaps identified and remediation started

YEAR_2: OPERATING_EFFECTIVENESS

- Type II report (first observation period)
- Controls demonstrated over time
- Exception handling refined
- Evidence collection fully automated where possible

YEAR_3: MATURITY

- Type II report (consecutive year)
- Fewer exceptions
- Streamlined audit process
- Controls optimized based on prior findings

ONGOING: CONTINUOUS_IMPROVEMENT

- Annual Type II reports
- Controls evolve with business changes
- Automation increases evidence quality
- Audit preparation effort decreases

SEE_ALSO: soc2-trust-criteria.md, iso27001-overview.md, compliance-automation.md, audit-procedures.md