You got the call. Someone found something — an anomaly in the logs, a ransom note on a server, a vendor who noticed unusual traffic. It might be a breach. It might be confirmed. Either way, you’re in the room now and people are looking at you.

Here’s what I want you to know: the first 72 hours are almost never about fixing the technical problem. They’re about doing the right things in the right order so you don’t make this worse.

I’ve been in this room. The instinct to immediately start shutting things down, calling customers, and alerting everyone is the instinct that creates secondary disasters. Let’s slow that down.

Hour 0–4: Contain Without Destroying Evidence

Your first call is to whoever is technically capable of stopping the bleeding — not cleaning up the mess. There’s a critical difference.

Containment means isolating affected systems so the attacker can’t move further. It does not mean wiping servers, restoring from backup, or rebooting everything. If you destroy the compromised environment before you’ve forensically captured it, you’ve just made it nearly impossible to know what was taken, how they got in, and whether they’re still in.

Right now, do these things:

  • Identify the blast radius. What systems are confirmed affected? What systems share credentials or network access with those systems?
  • Change credentials. Every privileged account, every service account, every API key that could have been exposed. Do this now, before the next step.
  • Isolate, don’t destroy. Take compromised systems offline but preserve disk images before any remediation. A sketch of what this can look like follows this list.
  • Get a forensic specialist on the phone. If you don’t have one, your cyber insurance provider does — call them in the first hour, not the first day.
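
What does “isolate, don’t destroy” look like in practice? If the affected host is an EC2 instance, a minimal sketch might look like the following. The instance ID, security group ID, and region are placeholders, and it assumes a pre-built quarantine security group with no inbound or outbound rules; adapt the same idea to whatever your environment actually runs on.

    # containment_sketch.py - isolate an EC2 instance and preserve evidence.
    # Hypothetical IDs throughout; assumes boto3 credentials with EC2
    # permissions and a pre-built deny-all "quarantine" security group.
    import boto3

    INSTANCE_ID = "i-0123456789abcdef0"     # placeholder: the compromised host
    QUARANTINE_SG = "sg-0123456789abcdef0"  # placeholder: deny-all security group

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Isolate: move the instance onto the deny-all group so the attacker
    # can't move laterally. Do NOT stop or terminate it; that discards
    # volatile evidence such as memory contents and running processes.
    ec2.modify_instance_attribute(InstanceId=INSTANCE_ID, Groups=[QUARANTINE_SG])

    # Preserve: snapshot every attached volume before anyone remediates.
    reservations = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"]
    instance = reservations[0]["Instances"][0]
    for mapping in instance.get("BlockDeviceMappings", []):
        volume_id = mapping["Ebs"]["VolumeId"]
        snap = ec2.create_snapshot(
            VolumeId=volume_id,
            Description=f"forensic capture of {volume_id} from {INSTANCE_ID}",
        )
        print(f"snapshot {snap['SnapshotId']} started for {volume_id}")

The same principle applies on-prem: disable the switch port, image the disk, and only then remediate.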

Preserving evidence before remediation is the step most companies skip, and almost all of them regret it.

Hour 4–24: Get Legal Involved Before You Say Anything

Breach notification laws vary by state, by country, by industry, and by what type of data was exposed. HIPAA breaches have different timelines than PCI breaches. GDPR gives you 72 hours to notify regulators. State attorneys general have their own requirements. What you say, to whom, and when is a legal question — not a PR question.
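
To make those windows concrete, here is a minimal sketch that turns a discovery timestamp into notification deadlines. The regimes and windows shown are illustrative examples, not legal advice; which ones apply to you, and when each clock actually starts, is exactly the question your counsel answers.

    # deadline_sketch.py - compute notification deadlines from discovery time.
    from datetime import datetime, timedelta, timezone

    # When you became aware of the breach. The legal definition of
    # "aware" is itself a question for counsel.
    discovered_at = datetime.now(timezone.utc)

    # Illustrative windows only. GDPR Art. 33 runs from awareness;
    # HIPAA's 60 days runs from discovery; state laws vary widely.
    WINDOWS = {
        "GDPR regulator notification": timedelta(hours=72),
        "HIPAA individual notification": timedelta(days=60),
    }

    for regime, window in WINDOWS.items():
        deadline = discovered_at + window
        print(f"{regime}: notify by {deadline:%Y-%m-%d %H:%M} UTC")

Run against a real discovery time, the GDPR line makes the point: the regulator clock can expire before your forensic picture is complete.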

Your lawyer needs to be in the room before your marketing team drafts a customer email. The specific language you use in an incident disclosure can have material legal consequences. “We believe customer data may have been exposed” and “we confirm customer data was accessed” are not interchangeable — one is appropriate at hour 12, the other might not be appropriate until you have forensic confirmation.

Do not send anything to customers, post on social media, or talk to press until your legal counsel has reviewed it. I know this feels slow. Do it anyway.

Hour 24–48: Find Out What Actually Happened

By now you should have forensic help engaged and containment in place. Now you start the investigation.

The questions you need answered:

  • When did this start? Attackers are often in your systems for weeks before anyone notices. The visible event is usually not the entry point.
  • What was accessed? Specifically — what data stores, what record types, what volume. “Customer data” is not an answer. “12,000 records containing name, email, and encrypted payment data stored in our orders database” is.
  • How did they get in? Phished credential, unpatched vulnerability, compromised third-party vendor, misconfigured storage bucket? You cannot prevent the next one without knowing the answer.
  • Are they still in? This is not a rhetorical question. Many breaches involve persistent access that survives the initial remediation. Your forensic team needs to check for backdoors, new accounts, and modified credentials before you declare the environment clean; a starting-point sketch follows this list.
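
The “are they still in” question is partly checkable with tooling. Below is a minimal sketch for a Linux host that diffs current local accounts and cron entries against known-good baseline copies. The baseline paths are hypothetical, and a real forensic review goes much further: SSH authorized keys, startup services, scheduled tasks, kernel modules.

    # persistence_check_sketch.py - flag accounts and cron entries that
    # weren't in a known-good baseline. Baseline files are hypothetical
    # pre-incident copies of /etc/passwd and /etc/crontab; this is a
    # starting point, not a full forensic review.
    from pathlib import Path

    def read_lines(path: str) -> set[str]:
        """Return the non-empty lines of a file as a set."""
        return {ln.strip() for ln in Path(path).read_text().splitlines() if ln.strip()}

    # Accounts that exist now but not in the baseline may be attacker-created.
    baseline_users = read_lines("baseline/passwd")   # hypothetical pre-incident copy
    current_users = read_lines("/etc/passwd")
    for entry in sorted(current_users - baseline_users):
        print(f"NEW ACCOUNT: {entry.split(':')[0]}")

    # Cron entries added since the baseline may be persistence mechanisms.
    baseline_cron = read_lines("baseline/crontab")   # hypothetical pre-incident copy
    current_cron = read_lines("/etc/crontab")
    for entry in sorted(current_cron - baseline_cron):
        print(f"NEW CRON ENTRY: {entry}")

If you have no pre-incident baseline, this approach won’t work, which is one more reason the forensic specialist call in hour one matters.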

Hour 48–72: Communicate — In the Right Order

Once you have legal clarity and a reasonably complete picture of what happened, you communicate. In this order:

First, your board or major investors. They should not read about this in a customer email.

Second, regulatory bodies if required (your lawyer will tell you which ones and when).

Third, affected customers. Direct, factual, specific about what happened and what you’re doing. No corporate-speak. No minimizing. Tell them exactly what data may have been exposed and exactly what they should do to protect themselves.

Fourth, all customers if the situation warrants general notification.

What You Do Not Do

Do not lie about the scope of what happened. Breaches that get managed cleanly are survivable. Cover-ups are not. Uber’s cover-up of its 2016 breach cost the company a $148 million settlement and its chief security officer a criminal conviction.

Do not publicly blame a specific employee, vendor, or system until you have forensic certainty. You will probably be wrong, and it will make the situation worse.

Do not issue the all-clear until your forensic team has signed off on it. Not when the symptoms stop. Not when the visible attack ends. When qualified people have confirmed the environment is clean.

After the 72 Hours

The breach itself is survivable. The six months after it are where companies either rebuild trust or lose it. You’ll need a post-incident review, security hardening, and — if you don’t have one already — a formal incident response plan so the next time this happens, you’re not improvising.

If you’re in the middle of this right now, or you want to know whether your current incident response capability is real or theoretical, book a 15-minute call. In that call we can talk through your specific situation — what systems are involved, what your exposure looks like, and what to prioritize in the next 24 hours.

