Containing the Blast Radius

Apr 1, 2026 · 3 min read
Suga Team

Automated scanners hit every IP on the internet in minutes. The question isn't whether you get scanned, it's how much damage a breach can do.

Origin discovery is typically the first step in any reconnaissance workflow. GreyNoise's 2025 Mass Internet Exploitation Report found that attackers scan the entire internet because it's quick and cheap to do, then immediately go after whatever's exposed.

Tools like masscan can probe every IP address on the internet for a single port at ten million packets per second. That's the entire IPv4 address space covered in about six minutes from a single machine.
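The arithmetic behind that claim is a quick back-of-the-envelope check (the count of routable addresses after excluding reserved ranges is a rough assumption):

```python
# Back-of-the-envelope: how long a full IPv4 sweep takes at masscan's
# advertised rate. The routable-address count is a rough assumption.
RATE_PPS = 10_000_000          # ten million packets per second

full_space = 2**32             # every IPv4 address, ~4.3 billion
routable = 3_700_000_000       # rough count after excluding reserved ranges

full_minutes = full_space / RATE_PPS / 60
routable_minutes = routable / RATE_PPS / 60

print(f"full sweep: {full_minutes:.1f} min, routable only: {routable_minutes:.1f} min")
# → full sweep: 7.2 min, routable only: 6.2 min
```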

Once an open port is found, the next tool in the chain connects and fingerprints exactly what's running: software name, version number, OS. Then a vulnerability scanner like Nuclei matches those versions against databases of known CVEs, publicly catalogued security flaws that tell an attacker exactly how to get in. The whole pipeline runs continuously and autonomously.

The actual exploitation is often embarrassingly simple. Maybe it's an unpatched dependency, an admin panel with default credentials, or a misconfigured environment variable. The scanner already told them what to poke at, so they poke away.

What you can actually control

You can't stop people from scanning the internet, and there's always a chance of something slipping through the cracks. What you can control is the blast radius: how much damage someone can do once they're in.

There are three layers where this matters, and most production setups leave at least one of them wide open:

The edge: making your origin unreachable

Most apps sit behind a CDN or proxy like Cloudflare, which handles DDoS protection, WAF rules, bot detection, and rate limiting. The problem is that the origin server itself is often still directly reachable. Attackers who find it can skip every one of those protections.

Mutual TLS solves this. Normal TLS is one-sided: the browser verifies the server, but the server accepts connections from anyone. With mTLS, both sides have to present a certificate. The gateway only trusts certificates signed by the CDN's origin CA, and refuses the connection outright if one isn't presented.

This means every request that reaches the container has passed through the full filtering pipeline. An attacker can still use the app through legitimate channels, but they can't sidestep the defenses.
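As a sketch of what "require a certificate" means mechanically, here is how a server-side TLS context could be configured in Python to refuse any handshake that doesn't present a certificate signed by the origin CA. The file paths are hypothetical, and a real gateway would do this in its proxy config rather than application code:

```python
import ssl

def make_mtls_server_context(cert_file: str, key_file: str,
                             origin_ca_file: str) -> ssl.SSLContext:
    """Build a server-side TLS context that requires client certificates."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert_file, key_file)          # the server's own identity
    ctx.load_verify_locations(cafile=origin_ca_file)  # trust only the origin CA
    ctx.verify_mode = ssl.CERT_REQUIRED               # no client cert -> refuse handshake
    return ctx
```

Any socket wrapped with a context like this rejects unauthenticated clients during the handshake itself, so the request never reaches application code.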

That said, the app is still exposed to the internet through the CDN. A vulnerability in the code, a compromised dependency, or a supply chain attack can still be exploited through legitimate traffic, and someone can still get a shell in the container.

The network: stopping lateral movement

Once an attacker has a shell, the first thing they do is scan the local network, looking for databases, caches, internal services, and other tenants. On most platforms, those connections go through.

The fix is deny-by-default network policy. Inbound traffic to a container should only come from its own services and traffic that has passed through the edge. A compromised container belonging to another tenant shouldn't even be able to see yours.
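A deny-by-default inbound rule can be sketched as a lookup table where anything not explicitly listed is refused. The service names here are illustrative, not a real policy API:

```python
# Illustrative deny-by-default inbound policy: a destination accepts traffic
# only from the sources it explicitly lists. Service names are made up.
ALLOWED_INBOUND = {
    "app": {"edge-gateway"},   # only traffic that passed through the edge
    "db":  {"app"},            # the database talks only to its own app
}

def inbound_allowed(dest: str, src: str) -> bool:
    # Unknown destinations and unlisted sources fall through to "deny",
    # so another tenant's compromised container simply gets no answer.
    return src in ALLOWED_INBOUND.get(dest, set())
```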

Outbound is trickier. The app needs to reach public APIs like Stripe or Resend, but it shouldn't be able to reach private IP ranges where other tenants and internal cloud services live.

There's one internal address in particular worth calling out: the cloud metadata endpoint. AWS, GCP, and Azure all run one, and it hands out cloud credentials to anything that asks. It's how containers authenticate with cloud services, but it's also one of the first things an attacker queries after getting in. Earlier in 2025, F5 Labs observed a campaign doing exactly this against AWS-hosted applications, stealing credentials and escalating access. Blocking the endpoint entirely and providing credentials explicitly is the safer path.
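The outbound rule amounts to a few lines of decision logic, shown here with Python's `ipaddress` module as a sketch of the policy, not a real firewall. The metadata address `169.254.169.254` is the one AWS, GCP, and Azure actually use:

```python
import ipaddress

# The well-known cloud metadata endpoint (AWS, GCP, and Azure all use it).
METADATA_IP = ipaddress.ip_address("169.254.169.254")

def egress_allowed(dest: str) -> bool:
    ip = ipaddress.ip_address(dest)
    if ip == METADATA_IP:
        return False  # block the metadata endpoint outright
    if ip.is_private or ip.is_link_local or ip.is_loopback:
        return False  # other tenants and internal cloud services
    return True       # public internet: Stripe, Resend, and so on
```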

The container: stripping what shouldn't be there

Even with the network locked down, a default container hands an attacker a lot to work with. Environment variables often map out the entire infrastructure, a ready-made blueprint of the system. Writable filesystems let them drop tools or modify binaries. Long-running containers give them time to explore the wider container network.

Stripping all of it removes most of the tools an attacker would use to escalate from a single container to the broader system: lock the filesystem to read-only, minimize environment variables, and boot fresh from the original image on every restart.


These aren't novel ideas. mTLS at the edge, deny-by-default networking, metadata blocking, and minimal container runtimes are well-understood practices. The problem is that configuring all of them correctly for every deployment is exactly the kind of thing that gets missed when you're moving fast.

That's part of what we've been building into Suga. Every deployment gets these layers by default, not because they're hard to understand, but because they're easy to forget.
