According to the Cost of a Data Breach Report 2022 published by IBM and the Ponemon Institute, global ransomware attacks were up 41% between 2021 and 2022, costing firms an average of $4.5M. Globally, the average cost of a ransomware attack is higher than the average cost of a data breach, and many ransomware attacks cause the additional pain of shutting down critical infrastructure. As SC Magazine puts it, ransomware is “the gift that keeps on giving,” because criminals can hold organizations to ransom while also having the opportunity to sell the stolen data on the dark web.
One challenge most organizations face when responding to ransomware attacks is the lack of visibility into what data is being held “hostage.” Having good data in the wrong place means that it cannot be utilized or protected effectively. Furthermore, without having a full view of your data, it’s unlikely that you’ll be able to quantify the risk of a breach accurately or take appropriate action during a security incident. For example, the Los Angeles Unified School District disclosed a major ransomware attack in September 2022 and, because of uncertainty related to their data assets, is having to work “with law enforcement to determine what information was impacted and to whom it belongs.”
Internally, however, it’s essential that Chief Information Security Officers (CISOs) and Chief Privacy Officers (CPOs) not only communicate proactively on these types of issues but also have their teams collaborate in real time in the event of an attack.
Typically, CPOs track risk via a process called data mapping, in which data is discovered, assessed, and tracked as it flows throughout the organization, including to third parties. In this way, the CPO determines not just the nature and sources of the data, but also the potential risks associated with it – to the organization itself and to its customers.
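To make that concrete, a single entry in a data map might look something like the sketch below. The field names and risk scale are purely illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class DataMapEntry:
    """One record in a simplified, illustrative data map."""
    data_element: str             # e.g., "customer email address"
    source_system: str            # where the data is collected or generated
    storage_locations: list[str]  # systems where copies of the data live
    third_party_recipients: list[str] = field(default_factory=list)
    contains_personal_data: bool = False
    risk_rating: str = "low"      # illustrative scale: low / medium / high

# Example: tracking a personal data element that flows to a vendor
entry = DataMapEntry(
    data_element="customer email address",
    source_system="web signup form",
    storage_locations=["CRM", "marketing data warehouse"],
    third_party_recipients=["email service provider"],
    contains_personal_data=True,
    risk_rating="medium",
)
print(entry)
```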
That information gives the security team a clear path toward understanding where an attack originated and how to mitigate it.
Data discovery is a foundational capability for reducing the attack surface and footprint of high-value data. You cannot protect your data if you do not know what you have. Personal data is the most valuable and riskiest type of data for many organizations, so we will use it as the basis for this discussion.
Step 1: Discover your data to increase visibility
Looking back at the example above, an investigation with law enforcement might have been avoided had the organization developed a greater understanding of its data. And while ransomware attacks and data breaches are becoming a feature of modern life, understanding your data allows you to take a more informed approach to protecting it.
Data discovery tools allow you to scan major data stores for high-value information and classify the results accordingly. You will often find such data in unexpected locations, like sensitive personal information in documents or Active Directory credentials in source code. To find this type of data, you must scan both structured and unstructured sources at the data level. Metadata scanning is not enough to catch rogue data.
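As a rough illustration of what data-level (rather than metadata-level) scanning means, the sketch below walks a directory of unstructured files and flags contents matching a few common personal data patterns. The patterns and the path are assumptions for demonstration; a real discovery tool covers far more formats, detection techniques, and validation.

```python
import re
from pathlib import Path

# Illustrative patterns only; production discovery uses far richer detection
PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_directory(root: str) -> dict[str, list[str]]:
    """Scan text files under `root` and classify hits by pattern name."""
    findings: dict[str, list[str]] = {name: [] for name in PATTERNS}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                findings[name].append(str(path))
    return findings

if __name__ == "__main__":
    # "./shared-drive" is a placeholder path for demonstration
    for category, files in scan_directory("./shared-drive").items():
        print(f"{category}: {len(files)} file(s) flagged")
```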
Once you know where the potentially vulnerable data resides, you can either delete that data or move it to a hardened location. You will still bear risk, but your riskiest data will now be subject to effective and appropriate protections. And, if rethinking data security is not feasible, your organization will still have a better understanding of the potential risks and the increased protections you might need.
Step 2: Reduce risk by reducing data
Even if your data is locked down in the most secure location, you will still likely have more data than you need. This is an issue on two levels. First, certain privacy laws dictate that organizations should not collect and store more data than is necessary for their purposes. Second, the more data you have, the greater the risk to individuals in the event of a data breach.
One method for reducing data collection is to embed consent signals into your data collection procedures, which helps eliminate excessive or inadvertent collection. Reducing the amount of data you collect not only shrinks your organization’s attack surface but also makes it easier to support consumer requests for data access or deletion.
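One hypothetical way to express that idea in code is to filter each incoming record against the consent purposes the individual has actually granted, so fields without a matching purpose are never stored. The field names and purposes below are invented for illustration.

```python
# Illustrative consent-gated collection: only persist fields tied to a
# purpose the individual has consented to. Field/purpose names are invented.
FIELD_PURPOSES = {
    "email": "account_communications",
    "phone_number": "marketing",
    "date_of_birth": "age_verification",
}

def filter_by_consent(record: dict, granted_purposes: set[str]) -> dict:
    """Drop any field whose purpose the individual has not consented to."""
    return {
        field: value
        for field, value in record.items()
        if FIELD_PURPOSES.get(field) in granted_purposes
    }

submitted = {
    "email": "user@example.com",
    "phone_number": "555-0100",
    "date_of_birth": "1990-01-01",
}
# The user consented to account communications only
stored = filter_by_consent(submitted, {"account_communications"})
print(stored)  # {'email': 'user@example.com'}
```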
Another option for maintaining streamlined data inventories is to set and enforce data retention policies to prevent the accumulation of unnecessary Redundant, Obsolete, and Trivial (ROT) data. Storing ROT data is all risk and no reward.
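As a minimal sketch of how a retention policy might be enforced on a file share, the snippet below flags files whose last modification date falls outside an assumed retention window. In practice, retention periods are driven by record type and legal requirements rather than a single age threshold, and flagged files would be reviewed before deletion.

```python
import time
from pathlib import Path

RETENTION_DAYS = 365 * 3  # assumed three-year retention window

def files_past_retention(root: str, retention_days: int = RETENTION_DAYS):
    """Yield files under `root` not modified within the retention window."""
    cutoff = time.time() - retention_days * 24 * 60 * 60
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            yield path

if __name__ == "__main__":
    # "./archive" is a placeholder path; review hits before deleting anything
    for stale_file in files_past_retention("./archive"):
        print(f"Past retention: {stale_file}")
```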
Step 3: Keep data sprawl contained
The vast quantities of data organizations produce daily have created numerous new threat vectors, expanding the attack surface into a seemingly never-ending landscape. To combat this, data discovery becomes a foundational element of enterprise security architecture. It enables CISOs to better understand, reduce, and protect their data footprint.
Most importantly, data discovery provides visibility into the risk created by data sprawl. With data discovery processes in place, organizations can run a tighter ship and effectively match their data security investments to their risk tolerance.
The key takeaway should be: If you don’t understand your data, you cannot protect it. And, if you cannot protect it, you are creating risk. Take a look at the OneTrust Data Discovery tool and learn more about how you can develop a clear and complete view of your data.