A client passed their SOC 2 Type I audit — a point-in-time assessment that says "you have the right controls in place right now." Then came SOC 2 Type II — a period assessment that says "you've had the right controls in place consistently for the past six months." They failed. Not because their security degraded, but because they couldn't prove it had been maintained.
The paperwork is more demanding than the technology. Proving you're compliant is different from being compliant. Building the feature is the easy part; audit logs, documentation, and certifications are where the real effort lives.
The Proving Problem
Your engineering team can implement encryption at rest in a day. Proving to an auditor that encryption has been enabled continuously for six months, that the encryption keys are rotated on schedule, and that no unencrypted data exists anywhere in the system — that requires logging, monitoring, and documentation that most teams don't think about when they flip the encryption switch.
Here's what auditors actually look at:
Access control evidence. Not just "we have role-based access control." They want to see: quarterly access reviews (documented, with sign-off), evidence that terminated employees' access was revoked within 24 hours, logs showing who accessed what and when, and proof that production access requires justification and time-limited approval.
Change management evidence. Not just "we use pull requests." They want: a documented change management policy, evidence that every production change went through the defined process (no hotfixes that bypassed review), approval records for each change, and rollback records for changes that caused incidents.
Vulnerability management evidence. Not just "we run Snyk." They want: scan results retained for the audit period, evidence that critical vulnerabilities were addressed within your defined SLA, documentation of accepted risks (vulnerabilities you chose not to fix) with business justification, and penetration test reports from a qualified third party.
Incident response evidence. Not just "we have a runbook." They want: documented incident response procedures, evidence of incident response testing (tabletop exercises), post-incident reports for actual incidents, and proof that lessons learned were implemented.
Building the Evidence Machine
The key insight: evidence collection should be continuous and automated, not a quarterly panic.
Automate access reviews. Use tools that pull your current access grants from your cloud provider and identity system, flag changes since the last review, and require sign-off from each team's manager. This turns a multi-day quarterly exercise into a 30-minute review.
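The diff-and-review flow above can be sketched in a few lines. This is a minimal illustration, assuming you can export grants from your identity provider or cloud account as (user, resource, role) tuples; the `Grant` type and data here are hypothetical, not any vendor's real API.

```python
# Sketch of a quarterly access-review diff. Assumes grants are exported
# as (user, resource, role) tuples; all names here are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class Grant:
    user: str
    resource: str
    role: str


def diff_grants(previous: set[Grant], current: set[Grant]) -> dict:
    """Flag what changed since the last signed-off review."""
    return {
        "added": sorted(current - previous, key=lambda g: g.user),
        "removed": sorted(previous - current, key=lambda g: g.user),
        "unchanged": len(previous & current),
    }


prev = {Grant("alice", "prod-db", "admin"), Grant("bob", "prod-db", "read")}
curr = {Grant("alice", "prod-db", "admin"), Grant("carol", "prod-db", "read")}

review = diff_grants(prev, curr)
# Only the additions and removals need fresh manager sign-off;
# unchanged grants carry forward from the last documented review.
```

The point of the diff is scope reduction: reviewers sign off on deltas, not the entire grant list, which is what makes a 30-minute quarterly review realistic.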
Log everything automatically. Every authentication event, every authorization decision, every data access, every configuration change. Ship logs to a centralized, tamper-evident store (not a log file on the application server). Set retention policies that match your compliance requirements. Most SOC 2 auditors want 12 months of logs.
Attach evidence to changes. Your CI/CD pipeline should automatically capture: who approved the change, what tests were run, what the deployment window was, and whether the change was successful. This turns your deployment pipeline into a compliance evidence generator.
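A pipeline step that does this can be very small. The sketch below is a hypothetical example of the record a deploy job might emit; the field names and inputs are assumptions, and in practice they'd be mapped from whatever your CI system exposes (commit SHA, PR approver, test status).

```python
# Sketch of a per-deploy compliance evidence record. Field names are
# hypothetical; populate them from your CI system's metadata.
import json
from datetime import datetime, timezone


def build_evidence(commit: str, approver: str, tests_passed: bool,
                   deploy_window: str, success: bool) -> str:
    record = {
        "commit": commit,
        "approved_by": approver,       # from the merged PR review
        "tests_passed": tests_passed,  # CI status at merge time
        "deploy_window": deploy_window,
        "deploy_succeeded": success,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, sort_keys=True)


evidence = build_evidence("a1b2c3d", "alice", True, "2024-06-01T14:00Z", True)
# Ship this JSON to the same retained, centralized store as your audit
# logs; six months later it answers "who approved this change?" instantly.
```

One record per deploy, retained for the audit period, is exactly the "evidence that every production change went through the defined process" that auditors ask for.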
Monitor your controls. Don't just implement encryption — monitor that encryption is enabled. Don't just require MFA — monitor that MFA is configured for every account. Cloud security posture management tools (CSPM) like AWS Security Hub, Prisma Cloud, or Wiz continuously verify that your security controls are active and alert when they drift.
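The core of control monitoring is a drift check: compare the state your policy requires against the state you actually observe. A minimal sketch, assuming you can collect observed control states from a CSPM export or cloud API (the control names and observed values here are invented for illustration):

```python
# Minimal control-drift check: policy-required states vs. observed
# states. Control names and the observed snapshot are hypothetical.
REQUIRED = {
    "s3_default_encryption": True,
    "mfa_enforced_all_users": True,
    "public_db_snapshots": False,
}


def find_drift(observed: dict[str, bool]) -> list[str]:
    """Return the controls whose observed state violates policy."""
    return [name for name, required in REQUIRED.items()
            if observed.get(name) != required]


observed = {
    "s3_default_encryption": True,
    "mfa_enforced_all_users": False,  # a new account skipped MFA setup
    "public_db_snapshots": False,
}

drifted = find_drift(observed)  # -> ["mfa_enforced_all_users"]
# Alert on any non-empty result; the timestamped check history itself
# becomes evidence that the control was monitored continuously.
```

CSPM products run essentially this loop at scale, across hundreds of built-in control checks; the value of rolling even a small version yourself is covering controls specific to your stack.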
The Policy-Practice Gap
The second most common audit failure: policies that describe what should happen, not what actually happens.
I've read incident response policies that describe a sophisticated escalation process with a 24/7 security operations center. The company has 12 engineers and no SOC. The policy was copied from a template. The auditor sees the policy, tests it, discovers nobody follows it, and flags a finding.
Write policies that describe your actual process. If your incident response is "whoever's on call gets a PagerDuty alert, triages in Slack, and escalates to the CTO for anything customer-impacting," write that. An honest, followed policy beats a sophisticated, ignored one every time.
Then improve the process — and update the policy to match. The policy should trail the practice, not lead it.
Least Privilege as Default
Giving everyone admin permissions is standard while bootstrapping. But as you mature: scope service accounts narrowly, enforce policies via infrastructure as code, and make broad access the exception that requires justification.
This isn't just good security practice — it's a compliance requirement for every major framework. SOC 2, HIPAA, PCI DSS, and ISO 27001 all require the principle of least privilege. Auditors check by looking at your IAM policies and asking "why does the marketing team have database admin access?"
Start with: separate admin accounts from daily-use accounts, implement break-glass access for production (request, justify, time-limit, auto-revoke), and review service account permissions quarterly. Each of these is a security improvement now, and an audit finding waiting to happen if you skip it.
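The break-glass pattern above (request, justify, time-limit, auto-revoke) can be sketched as a grant record plus a sweeper. This is an illustrative shape, not a real tool's API; actual revocation would call your cloud provider or identity system.

```python
# Sketch of break-glass production access: every grant records who,
# why, and when it expires; a sweeper revokes anything past its limit.
# All names are illustrative.
from datetime import datetime, timedelta, timezone


def grant_access(user: str, reason: str, hours: int = 2) -> dict:
    if not reason:
        raise ValueError("justification is required")  # no silent grants
    now = datetime.now(timezone.utc)
    return {
        "user": user,
        "reason": reason,  # this string is your audit evidence
        "granted_at": now,
        "expires_at": now + timedelta(hours=hours),
    }


def sweep_expired(grants: list[dict], now: datetime) -> list[dict]:
    """Auto-revoke: keep only grants still inside their window."""
    return [g for g in grants if g["expires_at"] > now]


g = grant_access("alice", "incident #4132: db failover", hours=2)
three_hours_later = g["granted_at"] + timedelta(hours=3)
active = sweep_expired([g], now=three_hours_later)  # -> [] auto-revoked
```

The retained grant records double as the access-control evidence from earlier: each one shows that production access required justification and was time-limited.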
Related: Security and Compliance Without a CISO | Tech Debt Translation: Making Your CFO Care | DevOps Fundamentals for Growing Teams