
Erase or Destroy: How to Properly Manage Unwanted Data

Cybersecurity / May 31, 2021

Data erasure and data destruction are key components of a data management strategy and crucial for regulatory and legal compliance for any organization that handles sensitive information. Kathleen Moriarty, Chief Technology Officer of the Center for Internet Security, explains why these practices are important, particularly in the age of ransomware and major third-party and supply chain cyberattacks.

Why is data destruction important? How do you distinguish between erasure and destruction?

Data destruction ensures that data is no longer accessible by making the storage medium unusable. It’s typically employed when an organization disposes of equipment that might contain confidential data such as personally identifiable information (PII), other regulated data types (e.g., financial information), and organizationally sensitive information such as trade secrets or intellectual property.

Having worked in an environment where classified data was kept, [I can say that] it is all too easy to have a “contamination of data,” where sensitive data is unexpectedly discovered on a system. This may be as simple as data being deemed classified or more sensitive at a later date, without the user or administrator knowing. A simple example might be a user storing a credit card number on a work system while processing a transaction.

Data destruction could involve degaussing or physical destruction, which in some cases is performed after a data erasure procedure. Depending on the media, the data erasure technique, and the sensitivity of the stored data, this extra step may be required for assurance that the data cannot be accessed. Data erasure is a software-based technique that overwrites the data to make it inaccessible, while data destruction is physical. Software erasure procedures often make multiple overwrite passes over the medium using several patterns of data.
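
As an illustration of that multi-pass idea, the sketch below overwrites a file with zeros, ones, and random bytes. It is a minimal sketch, not a certified erasure tool: the file path and pass patterns are illustrative, and on SSDs or journaling filesystems in-place overwrites may miss copies of the data, which is one reason the physical destruction step described above can still be required.

```python
import os
import secrets

def overwrite_passes(path: str, passes=(b"\x00", b"\xff", None)) -> None:
    """Overwrite a file in place with several patterns (None = random bytes).

    Illustrative only: wear leveling on SSDs and copies kept by journaling
    filesystems can leave data that in-place overwrites never touch, which
    is why erasure may need to be followed by physical destruction.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in passes:
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(remaining, 1 << 20)  # write in 1 MiB blocks
                data = secrets.token_bytes(chunk) if pattern is None else pattern * chunk
                f.write(data)
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())  # push each pass to the device before starting the next

# Example: three passes (zeros, ones, random) over a hypothetical file.
# overwrite_passes("/tmp/sensitive.db")
```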

What do organizations tend to overlook in their current practices? What could they do better?

The ease with which a mistake leading to data exposure can be made underscores the importance of data erasure and destruction procedures. If equipment is repurposed within an organization, data erasure is important to prevent exposure. If a system compromise occurs, it is often viewed as too difficult to “wipe” (i.e., perform data erasure) and reimage the system. With several recent wide-scale attacks, the best-practice recommendation has been to wipe and reimage systems. Resources are sometimes limited, and environments have not been set up to easily allow for this type of recovery. Without resiliency built into the architecture, data erasure is not feasible and thus is skipped.

What are the regulatory and legal risks of poor data destruction practice? What laws and regulations speak to this?

HIPAA and numerous individual state privacy laws set forth requirements for the destruction of stored personal data. GDPR also sets forth data destruction requirements, with penalties for non-compliance. To satisfy GDPR’s “right to be forgotten” and avoid large fines, organizations must erase existing stored data and also prevent any additional collection of that data. When media that contains regulated data, such as PII, is disposed of, the media must be appropriately destroyed via degaussing, shredding, incineration, or another physical method of destruction.

Any regulation that results in a data retention policy decision is likely to have a requirement for proper data erasure and destruction.

Why is it important to have data erasure procedures in place in the event of a cyberattack?

Recommendations to perform data erasure to ensure no hidden files are left behind are common when handling system compromises. Following attacks such as SolarWinds and the recent Exchange zero-day exploits, the strong recommendation was to follow a wipe-and-reimage procedure to provide assurance that all remnants of the attack were removed. With increasingly sophisticated attacks, the risk that a remnant of an attack survives and allows the attacker to retain access is not worth the time saved by skipping these procedures.

The problem is that data erasure and reimage procedures take time. If an image of the system is not available after installation, the system requires configuration and system hardening according to security recommendations (e.g., CIS Benchmarks). Methods exist to ease this process and avoid downtime when wiping and reimaging systems, but they must have been built into the design and architecture of the service.
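
The fragment below is a minimal sketch of how such a recovery flow might be orchestrated when a pre-hardened “golden” image exists. Everything in it is a hypothetical placeholder: the image path, the drain/restore helpers, and the echoed erase and reimage commands stand in for whatever erasure and provisioning tooling an environment actually uses.

```python
import subprocess

# Hypothetical golden image, built once from a hardened install (e.g.,
# configured against the relevant CIS Benchmark) so recovery does not
# repeat the configuration and hardening work by hand.
GOLDEN_IMAGE = "/images/hardened-base.img"

def drain_workload(host: str) -> None:
    """Placeholder: shift traffic and data to a healthy peer before recovery."""
    print(f"draining {host}")

def restore_workload(host: str) -> None:
    """Placeholder: return the host to service once it is verified clean."""
    print(f"restoring {host}")

def wipe_and_reimage(host: str) -> None:
    """Sketch of the recovery flow: drain, erase, reimage, restore."""
    drain_workload(host)
    # The echo commands are stand-ins for real erasure and provisioning
    # tooling (vendor secure-erase utilities, PXE boot, config management).
    subprocess.run(["echo", "erase-disks", host], check=True)
    subprocess.run(["echo", "reimage", host, GOLDEN_IMAGE], check=True)
    restore_workload(host)

if __name__ == "__main__":
    wipe_and_reimage("app-server-01")  # hypothetical host name
```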

What other tips do you have for better data management practice and risk mitigation?

Stakeholders in policy development must include legal, compliance, security, business owners, and other relevant leaders. Framing data destruction as risk mitigation is important. In some instances, risk decisions may lead to one of several risk treatment options, e.g., accept, transfer, mitigate, defer. Legal or compliance obligations may set requirements for data retention and destruction that include room for variance in policy and implementation details. In other cases, the potential risk to the business may result in more stringent policies for the protection of data and the organization’s reputation.

Ideally, systems will have been architected with resiliency in mind. For servers, the ability to operate in a cloud-native architecture supports the need for resiliency. If a server application is compromised, it is easy to initiate a new instance that incorporates the required vulnerability remediation and move the workload to it, allowing the impacted system to go through data erasure procedures. If a cloud-native architecture or virtual environment cannot be supported, the use of mirrored systems may allow a service to remain up while a wipe-and-reimage procedure takes place on one side of the mirror at a time.
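
As a sketch of that replace-rather-than-repair pattern, the fragment below launches a clean instance from an already-patched image and then retires the suspect one so its storage can go through erasure. It assumes AWS EC2 via boto3 purely for illustration; the AMI and instance IDs are placeholders, and the workload cutover step is environment-specific and omitted.

```python
import boto3  # assumes AWS; the same pattern applies on other providers

ec2 = boto3.client("ec2")

def replace_instance(old_instance_id: str, patched_ami_id: str,
                     instance_type: str = "t3.medium") -> str:
    """Launch a clean instance from an image that already includes the
    vulnerability remediation, then retire the suspect instance so its
    volumes can go through data erasure procedures."""
    # Launch the replacement from the patched, hardened image.
    resp = ec2.run_instances(ImageId=patched_ami_id, InstanceType=instance_type,
                             MinCount=1, MaxCount=1)
    new_id = resp["Instances"][0]["InstanceId"]

    # Workload cutover (load balancer registration, DNS, queue draining)
    # is environment-specific and omitted here.

    # Retire the compromised instance; its storage can then be erased.
    ec2.terminate_instances(InstanceIds=[old_instance_id])
    return new_id

# Example (IDs are illustrative):
# replace_instance("i-0abc123def4567890", "ami-0fedcba9876543210")
```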

Attacks will only increase in sophistication, and it is safe to expect data erasure to be part of the recovery process. Establishing your system architecture in a way that simplifies the recovery process (e.g., data erasure and reimage) while eliminating downtime could be beneficial for other use cases, including patch application. Building in resiliency can aid in eliminating downtime windows for routine maintenance as well as providing an easier recovery path in the event an attack occurs.

