Limiting the Blast Radius of a Data Breach
Legacy security practices leave business data exposed, especially in the cloud. Better cloud security reduces a data breach’s blast radius, easing incident response.
- The amount of damage that a data breach can cause — the blast radius — is likely a lot larger than you think.
- Legacy security approaches centered on a strong firewall can neglect access restrictions for sensitive information inside the organization.
- Coupling policies like zero trust with real-time activity monitoring can improve cloud security and limit the blast radius of a data breach.
Many companies have transitioned business data and applications to the cloud to cut down on infrastructure costs and make it easier for employees to access critical assets.
While such a move makes it easier for companies to support a remote workforce, it can have an unintended consequence. The so-called “blast radius” of a data breach or other type of security incident can be much larger in the cloud, since assets and workloads are more widely dispersed and interconnected. At the same time, security best practices haven’t always kept up with this new way of working.
The latest Cost of a Data Breach Report from IBM and the Ponemon Institute found that 83% of companies experience at least one breach per year. In addition, 45% of all breaches now occur in the cloud. Despite such a clear threat, fewer than one in four companies are deploying cloud security best practices consistently — putting them at risk of a cloud security breach with a wide blast radius.
Organizations can limit the overall impact of a successful breach by using a range of security techniques and tools, including artificial-intelligence-based solutions such as Mimecast’s CyberGraph, designed to detect the most evasive email threats.
Your Blast Radius Is Bigger Than You Think
The blast radius of a security incident is defined as the amount of damage that the incident could potentially cause. It’s every account, file, application, server, or other corporate asset that could be compromised once an attacker gets “inside” the system.
Chances are your organization’s blast radius for a data breach is larger than you think. That’s because of the number of dependencies among workloads, which are the applications or processes that use computing power or memory to complete a task.
Companies run tens of thousands of computing workloads at a time, and each workload depends on others to execute. As a simple example, successfully sending an email requires a server, an application, a network connection, and processing power on both the server and the client device.
The typical workload, though, has dozens of dependencies. That means the number of dependencies in your company could be well into the millions. Unless you take the time and resources to map these dependencies, many will remain unknown. As a result, a data breach in one part of the company will affect countless other parts of the business unless you take the necessary steps to improve security.
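One way to make this concrete: the blast radius of a compromised asset is the set of everything reachable from it through the dependency graph. The sketch below (illustrative only; the asset names and the `dependents` map are invented for the example) walks that graph with a breadth-first search.

```python
from collections import deque

def blast_radius(dependents, compromised_asset):
    """Return every asset reachable from a compromised asset by
    following dependency edges (asset -> workloads that rely on it)."""
    reached = {compromised_asset}
    queue = deque([compromised_asset])
    while queue:
        asset = queue.popleft()
        for dependent in dependents.get(asset, []):
            if dependent not in reached:
                reached.add(dependent)
                queue.append(dependent)
    return reached

# Toy dependency map: each asset maps to the workloads that depend on it.
dependents = {
    "db-server":   ["billing-app", "crm-app"],
    "billing-app": ["invoice-portal"],
    "crm-app":     ["sales-dashboard"],
}

# Compromising one database server puts five assets in the blast radius.
print(sorted(blast_radius(dependents, "db-server")))
```

Even in this five-node toy, one compromised server exposes every downstream workload; at the scale of millions of real dependencies, the reachable set grows far faster than intuition suggests.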
Traditional Security Models Don’t Minimize the Blast Radius
The traditional model of cybersecurity defense has been securing the perimeter. The premise is simple, if not a bit medieval: Build a firewall so thick that no attacker can penetrate it. But no security system keeps out 100% of attacks.
And unfortunately, focusing so much on this “north-south” traffic, as Mimecast partner Palo Alto Networks describes it, means that organizations have been paying little attention to the lateral, “east-west” traffic that occurs among people, applications, servers, and devices inside the firewall.
Basing a security strategy on the assumption that attackers can’t get through the firewall has had two pivotal consequences, both of which are exacerbated by the growing adoption of cloud services.
Workload dependencies are a blind spot. Knowing how workloads depend on each other requires a deep understanding of data (its format, sensitivity, and location), permissions (who or what is authorized to access data or business applications), and general IT operations.
This is a complex engineering problem in and of itself. It’s all the more challenging in today’s computing environments — where data is hosted onsite as well as offsite on various public and private cloud services, where collaboration tools make it easier than ever for employees to share information (and harder than ever to perform version control), and where data increasingly exists in an unstructured format (whether text and visual files or search engine queries and data streamed from smart devices).
Access management is a low priority. If the foremost security goal is to keep attackers out, organizations place their strongest controls around who’s allowed to go through the firewall. This means efforts to restrict access to sensitive data or applications for users or devices inside the organization are limited.
What’s more, restrictions on access are often enforced by corporate policies that must be reviewed and renewed manually. Given the time and resources necessary to review credentials for every user and device, lax enforcement is common. For example, employees transitioning to new roles will receive permissions to access data and systems for their new role but won’t have access taken away for the things they needed for their old role. In addition, companies often grant broad permissions to service accounts (unlike typical user accounts, these are used to operate specific system services and applications). Service account permissions are broad to ensure there’s little disruption to business operations when systems need to be fixed.
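The role-transition problem described above can be sketched in a few lines. This is a simplified illustration (the permission strings and function names are hypothetical), contrasting the common additive anti-pattern with a least-privilege alternative where access derives only from the current role.

```python
def change_role_additive(user_permissions, new_role_permissions):
    """Anti-pattern: grant the new role's permissions without revoking
    the old ones, so access silently accumulates over time."""
    return user_permissions | new_role_permissions

def change_role_least_privilege(new_role_permissions):
    """Least-privilege alternative: permissions are derived solely from
    the current role, so nothing lingers from previous roles."""
    return set(new_role_permissions)

# An employee moves from sales to finance.
perms = {"read:sales_crm", "write:sales_crm"}
perms = change_role_additive(perms, {"read:finance_ledger"})
print(sorted(perms))  # stale sales permissions survive the move
```

Every stale permission that survives a role change is another asset inside the blast radius if that account is ever compromised.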
Shifting this approach to cloud-based services poses additional security concerns and increases the blast radius of a breach. In transitioning workloads to the cloud, many organizations left the same broad permissions in place, with few restrictions on who can access data or applications. Likewise, many paid little attention to default cloud configuration settings, such as those that grant administrative access to all users. This combination of broad permissions and limited oversight into resources hosted offsite leaves organizations especially vulnerable.
5 Strategies to Limit Your Data Breach Blast Radius
Some organizations respond to these cybersecurity challenges by segmenting on-premises networks. This approach can isolate endpoints that host or handle sensitive data, restrict lateral movement across the entire network, and keep data breaches contained to a particular network segment.
However, segmentation requires devoting substantial time and resources to re-architecting and reconfiguring networks. It also maintains the “north-south” approach and doesn’t account for the growing number of corporate IT assets hosted outside the firewall.
As organizations continue their migration to the cloud, the following five policies and strategies will help to improve their cloud security positioning, limit the blast radius of a data breach when it happens, and enable a faster return to business as usual:
- Zero Trust: A zero-trust security framework assumes that assets or user accounts shouldn’t implicitly be granted trust based solely on where they are located or who owns them. It’s based on two underlying principles: Remote users and cloud-based services are outside the traditional perimeter, and devices inside the perimeter can and will be compromised. Zero trust requires authentication and authorization of a user or asset before establishing a connection to an enterprise resource. According to the Cost of a Data Breach Report, about 40% of organizations have deployed a zero-trust security architecture; those that have done so spend an average of 20% less to mitigate a data breach.
- Identity-Based Segmentation: This approach applies the principles of network segmentation while making it possible to isolate individual workloads regardless of where they’re located. Also known as microsegmentation, identity-based segmentation provides better visibility into all network traffic and enables the creation of policies for specific applications. This reduces an organization’s attack surface and strengthens data breach containment, both of which reduce blast radius.
- Asset Management: Organizations greatly improve cloud security when they gain visibility into the data, hardware, and software they own. Asset management involves maintaining an inventory of IT resources, scanning apps and services for the presence of sensitive data, assessing and mitigating security gaps, and updating policies and requirements to minimize future gaps. The National Institute of Standards and Technology (NIST) lists asset management as a core piece of its Cybersecurity Framework, so this should be top of mind for organizations if it isn’t already.
- Activity Monitoring: Similarly, visibility into what’s happening across an organization’s network can help detect attempts at unauthorized access and isolate them when they happen, limiting the blast radius of an attack. DNS security solutions can help to monitor email and web traffic — the source of nearly all data breaches — while email monitoring can detect unusual activity from compromised user accounts or remote access malware, block sensitive data from leaving the organization, and prevent lateral movement from email to other corporate systems.
- Incident Response: Forming an incident response team and creating a cybersecurity playbook help organizations learn to respond quickly, contain a security incident, and reduce the data breach blast radius. Integrating security tools into systems based on extended detection and response (XDR) can shorten the breach lifecycle by nearly a month, according to the Cost of a Data Breach Report, while organizations that regularly test their incident response plans spend 58% less to mitigate a breach.
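The zero-trust principle underlying the first strategy can be illustrated with a minimal per-request policy check. This is only a sketch of the idea, not any vendor's implementation; the user names, resource names, and policy table are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_trusted: bool   # device posture check passed
    mfa_verified: bool     # user completed MFA for this session
    resource: str

# Hypothetical policy table: which identities may reach which resource.
ALLOWED = {
    ("alice", "payroll-db"),
    ("bob", "wiki"),
}

def authorize(req: AccessRequest) -> bool:
    """Zero-trust check: every request is evaluated on its own merits,
    regardless of network location. Identity, device posture, and MFA
    must all pass before a connection is allowed."""
    if not (req.device_trusted and req.mfa_verified):
        return False
    return (req.user, req.resource) in ALLOWED

print(authorize(AccessRequest("alice", True, True, "payroll-db")))   # True
print(authorize(AccessRequest("alice", True, False, "payroll-db")))  # False: no MFA
```

The key design point is that being "inside" the network appears nowhere in the check: an attacker who gets past the perimeter gains nothing, because each resource still requires explicit authentication and authorization.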
The Bottom Line
With data breaches all but inevitable for today’s companies, taking the right steps to shore up cloud security can minimize the blast radius of a breach. An approach that integrates a range of best-of-breed security solutions can help organizations take advantage of the latest threat intelligence, automate repetitive but critical security tasks, gain visibility into IT assets, and speed up incident detection and response. Successfully embracing this approach reduces response time and cost while helping organizations get back to business as usual faster. Read on to learn how Mimecast’s Extensible Security Hooks (MESH) fits into this strategy.
- “Cost of a Data Breach Report 2022,” IBM
- “What Is Microsegmentation?” Palo Alto Networks
- “What Is the Principle of Least Privilege (POLP)?” CrowdStrike
- “Zero Trust Architecture,” NIST Computer Security Resource Center