A Few Thoughts on Digital Archeology

“There are two ways that you can delete data from magnetic media, using software or by physically destroying the media.” – Peter Gutmann, “Secure Deletion of Data from Magnetic and Solid-State Memory”, 1996.

“Perhaps it will be nano-devices that slowly eat away the magnetic surface taking measurements of flux density as they go, and perhaps this will yield amazing results. Perhaps not.” – Fred Cohen, “The Science of Digital Forensics: Recovery of Data from Overwritten Areas of Magnetic Media”, 2012

“The truth is boring and nuanced.” – Morgan McSweeny (aka Dr. Noc), 2025

Some say that recovering deleted data is easier than ever in 2025, and that truly destroying it takes secure multi-pass erasure and specialized low-level formatting before you can rest assured your digital data is properly eradicated and unrecoverable. We at CleverFiles have been in the data restoration industry for the last 15 years, and here’s what we think on this matter.

In most cases, OS developers, ATA standards architects, storage firmware engineers, and controller designers all assume stable and predictable behavior from their drives, drivers, and firmware stacks – but real-world conditions have a habit of breaking those assumptions.

The simple answer to our users’ most burning question – is my data recoverable? – is usually: it depends.

Truly so, and here’s what it depends on…

#1 The type of media the data was stored on

💽 Hard Disk Drives (HDD): the hardware technology at the heart of this type of storage device does not, by itself, affect the recoverability of your data. Disk controllers do not implement additional timed erasure mechanisms at the firmware level.

💾 Solid State Drives (SSD): in short, full data erasure is almost inevitable due to the TRIM command, which is issued by the system immediately after a delete or format operation. Additionally, TRIM can be triggered when a certain threshold of space is freed or modified, and it may also run on a schedule. The exact behavior of any given SSD, however, is a black box – only the vendor’s firmware QA team truly knows how it’s implemented under the hood.

Here’s how TRIM works in general terms:

  • When files are deleted from an SSD, the OS identifies the physical blocks they occupied.
  • It sends a list of these now-unused blocks to the SSD controller via the TRIM command.
  • TRIM doesn’t erase the data immediately – it simply gives the SSD permission to reclaim those blocks at a later time.
  • The SSD controller marks the corresponding NAND pages as “invalid.”
  • During scheduled garbage collection, the controller may physically erase these invalid pages and consolidate valid ones to optimize performance and ensure balanced wear leveling.
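
If you’re curious whether your own setup even issues TRIM, standard OS tools will tell you. Here’s a minimal sketch (assuming a Windows or Linux host; command output formats vary by OS version, so treat the parsing as a heuristic, not a definitive check) that wraps those tools in Python:

```python
# Minimal sketch: check whether the OS advertises TRIM/discard support.
# Uses standard OS tools (fsutil on Windows, lsblk on Linux); output formats
# differ between versions, so the parsing below is only a heuristic.
import platform
import subprocess

def trim_enabled_hint() -> str:
    system = platform.system()
    if system == "Windows":
        # "DisableDeleteNotify = 0" means delete (TRIM) notifications are enabled.
        out = subprocess.run(
            ["fsutil", "behavior", "query", "DisableDeleteNotify"],
            capture_output=True, text=True,
        ).stdout
        return "TRIM likely enabled" if "= 0" in out else "TRIM likely disabled"
    if system == "Linux":
        # Non-zero DISC-GRAN / DISC-MAX columns mean the device accepts discards.
        return subprocess.run(
            ["lsblk", "--discard"], capture_output=True, text=True,
        ).stdout
    return "Unsupported platform for this sketch"

print(trim_enabled_hint())
```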

However, that’s the idealized textbook version – the reality is (usually much) messier. Hundreds of variables can throw a wrench into the process: firmware quirks, controller behavior, wear-leveling strategies, OS implementation differences, background tasks, power cycles – you name it.

  • TRIM may work unreliably when an SSD is used in a USB enclosure. If you power off a SATA-connected drive right after deletion and reconnect it via USB fast enough, there’s a decent chance the data hasn’t been physically wiped yet. This trick can also work with SMR drives – if you immediately start backing up and the drive is under heavy I/O load, there’s a variable time frame where TRIM or internal cleanup is deferred, regardless of whether it’s via USB or SATA.
  • The more data is being deleted and the heavier the I/O load on the drive, the longer TRIM execution can be postponed. The SSD controller may deliberately delay handling TRIM commands if it’s busy, prioritizing real-time read/write operations over background maintenance tasks like block cleanup. The same goes for WinAPI glitches and occasional macOS low-level delays that can postpone TRIM execution at the drive level.
  • Another scenario: Windows misidentifies the SSD as a generic disk instead of a solid-state drive. As a result, it may skip issuing TRIM commands during operations like formatting, since it doesn’t recognize the drive as TRIM-capable in the first place (see the sketch after this list).
  • And more: overheating, bad blocks, buggy firmware, lack of TRIM (as well as S.M.A.R.T., SECURITY ERASE UNIT, and ENHANCED SECURITY ERASE) support in USB-to-SATA bridge chips, incompatible file systems, or TRIM disabled by default on OEM-installed Windows systems.
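
Regarding the misidentification scenario above: Windows exposes its own view of every disk through the standard Get-PhysicalDisk cmdlet, so a quick sanity check is possible. Below is a minimal sketch (assuming Windows 8 or later with the Storage PowerShell module; an illustration, not an official diagnostic):

```python
# Minimal sketch: ask Windows how it classifies each physical disk.
# If a real SSD is reported as "Unspecified" or "HDD", Windows may never issue
# TRIM for it during delete/format operations.
import subprocess

result = subprocess.run(
    [
        "powershell", "-NoProfile", "-Command",
        "Get-PhysicalDisk | Select-Object FriendlyName, MediaType, BusType | Format-Table -AutoSize",
    ],
    capture_output=True, text=True,
)
print(result.stdout)
```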

All of these and many other factors can interfere with proper TRIM execution. Additionally, if the SSD enters service (factory/test) mode, it halts all data modification operations entirely. Hence: It depends.

🛢 HDD + Shingled Magnetic Recording (SMR): While TRIM is commonly associated with SSDs, it’s also supported by some SMR HDDs.

🎟️ Flash Memory: This includes SD, SDHC, SDXC, MicroSD, CF (CompactFlash), MMC (MultiMediaCard), and other memory cards. Here we should be looking at the SD ERASE (CMD38) command, which provides similar functionality to the ATA TRIM command, although it requires that erased blocks be overwritten with either zeroes or ones. A DISCARD sub-operation is further defined in eMMC 4.5, and optionally in SDHC and SDXC cards, that more closely matches ATA TRIM in that the contents of discarded blocks can be considered indeterminate (i.e., “don’t care”).

Some camera models (most notably and repeatedly confirmed with SONY A7 series) issue an SD ERASE command during a quick format of memory cards. As a result, the card’s logical translator gets instantly wiped, and any recovery scan will just return zeros. However, at the physical NAND level, the data often remains intact and can be recovered – provided the card doesn’t use encryption or advanced error correction layers like LDPC.
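
You can observe this effect directly by reading the raw device and checking whether it comes back as zeros. The sketch below is illustrative only: the device path is a placeholder, raw access requires administrator/root privileges, and the read itself is non-destructive.

```python
# Minimal sketch: sample the start of a raw card device and report whether it
# reads back as all zeros (what a "scan returns only zeros" case looks like at
# the device level). DEVICE is a placeholder - e.g. /dev/sdX on Linux or
# \\.\PhysicalDriveN on Windows. Read-only, but double-check the device path.
DEVICE = "/dev/sdX"                # placeholder
SAMPLE_BYTES = 64 * 1024 * 1024    # inspect the first 64 MiB
CHUNK = 1024 * 1024

zero_chunks = nonzero_chunks = 0
with open(DEVICE, "rb") as dev:
    remaining = SAMPLE_BYTES
    while remaining > 0:
        chunk = dev.read(min(CHUNK, remaining))
        if not chunk:
            break
        remaining -= len(chunk)
        if chunk.count(0) == len(chunk):
            zero_chunks += 1
        else:
            nonzero_chunks += 1

print(f"all-zero 1 MiB chunks: {zero_chunks}, non-zero chunks: {nonzero_chunks}")
```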

Others, when deleting data from flash storage, perform a software-level zeroing – immediately overwriting the deleted files with zeros. This makes traditional recovery methods ineffective, as the data is actively erased rather than just marked as deleted.

Finally, file fragmentation is often a major factor with memory cards: it significantly complicates data recovery. Reconstructing files from non-contiguous NAND pages or disk sectors requires advanced heuristics, especially when file system metadata is missing or corrupted. The more fragmented the storage, the harder it is to reliably piece files back together. Kudos to Disk Drill 6 with its Advanced Camera Recovery (ACR) mode.

🗂 Hybrid storage devices, SSD+HDD (SHDD, Fusion Drives, etc.): During formatting, the system typically wipes data on an HDD by overwriting file system structures. But with SSDs, it’s a different story: some portions of the data may remain recoverable in a forensic lab. This is because SSDs rely on flash translation layers (FTLs), wear-leveling, and (sometimes) deferred TRIM, which can leave physical NAND pages intact even after a logical format.

🗃️ RAID arrays: even though this type of storage is not technologically separate from the ones mentioned above, it is worth noting that the exact data deletion mechanics may differ considerably between RAID arrays depending on their level (RAID 1, 2, 5, and so on), and on whether a specific array uses HDDs, SSDs, or a mix of both. If the data was only deleted (not overwritten), it may be recoverable with a Quick Scan; Disk Drill includes a specialized RAID recovery (SSH) module that can remotely connect to Linux systems and NAS devices.

#2 How much time has passed since the deletion

Again, depending on the type of storage device in question, time may matter more or less. Obviously, the faster you act after data loss, the higher the chances of recovering deleted data.

#3 Encryption

Encryption dramatically affects data recoverability: if the deleted files or storage medium were encrypted and you don’t have the decryption key or password, recovery is virtually impossible. Even if the raw data is recovered, the chances are high it will be unreadable.
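
A quick, practical way to tell whether a recovered blob is encrypted (or heavily compressed) is to estimate its byte entropy: properly encrypted data looks statistically random, close to 8 bits per byte. A minimal sketch, with the file path as a placeholder:

```python
# Minimal sketch: Shannon entropy of a recovered file or raw dump.
# Values close to 8.0 bits/byte suggest encrypted (or well-compressed) content;
# ordinary documents and media usually score noticeably lower.
import math
from collections import Counter

def byte_entropy(path: str, limit: int = 16 * 1024 * 1024) -> float:
    with open(path, "rb") as f:
        data = f.read(limit)
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(f"{byte_entropy('recovered_blob.bin'):.2f} bits/byte")  # placeholder path
```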

Here are some types of encryption that may get in the way of a successful data restoration session:

  • OS-level encryption: Implemented by the operating system itself (e.g., BitLocker, FileVault). It encrypts data transparently as it’s written to disk and decrypts it on access, often tied to user credentials.
  • Software-based encryption: Performed by third-party tools (like VeraCrypt or AxCrypt). Offers flexible encryption schemes, virtual encrypted volumes, or file-level protection, independent of the OS.
  • Hardware-based encryption: Handled by dedicated chips – either built into the SSD, motherboard (e.g., TPM or Intel PTT), or RAID controller. It’s fast and transparent to the OS but often harder to audit or bypass, especially if proprietary.
  • Self-Encrypting Drives (SEDs) are storage devices (usually SSDs or HDDs) that automatically and continuously encrypt all data on the drive using built-in hardware encryption – without user intervention or system slowdown.
    • Crypto Secure Erase (CSE) is a fast and effective data destruction method used on Self-Encrypting Drives (SEDs) and other encrypted storage devices. Instead of overwriting data, CSE instantly renders all stored data unreadable by deleting or replacing the encryption key.
    • PSID Revert is a special command used to reset a Self-Encrypting Drive (SED) to its factory state by wiping all data and removing access controls – even if the drive is locked and the password is unknown.
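
As far as we know, both mechanisms above are reachable through open command-line tools: nvme-cli can request a cryptographic erase on NVMe drives (Secure Erase Setting 2), and sedutil-cli can perform a PSID revert on TCG Opal drives. The sketch below is an illustration only – device paths and the PSID are placeholders, and both commands irreversibly destroy every byte on the target drive:

```python
# Illustration only - BOTH commands below irreversibly destroy all data on the
# target device. Device paths and the PSID value are placeholders; never run
# this against a drive that holds data you still need.
import subprocess

NVME_DEVICE = "/dev/nvme0n1"             # placeholder
SED_DEVICE = "/dev/sdX"                  # placeholder
PSID = "PSID-PRINTED-ON-DRIVE-LABEL"     # placeholder: found on the drive label

# Crypto Secure Erase on an NVMe drive via nvme-cli (--ses=2 = cryptographic erase).
subprocess.run(["nvme", "format", NVME_DEVICE, "--ses=2"], check=True)

# PSID revert of a TCG Opal self-encrypting drive via sedutil-cli.
subprocess.run(["sedutil-cli", "--PSIDrevert", PSID, SED_DEVICE], check=True)
```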

#4 File system the partition was formatted into

Some file systems have better recovery success rates than others due to the different ways they handle deleted files. For example, some file systems do not support the TRIM command. Here are some basic examples:

  • NTFS (Windows): more favorable to DIY file recovery.
  • FAT/FAT32/exFAT: non-journaled, hard to recover original file structures; do not support TRIM.
  • HFS+/APFS (macOS), ReFS/Btrfs, ext3/ext4 (Linux): journaled (or copy-on-write), higher chances to recover original file structures and properties.

#5 Physical vs logical damage/recovery

Bad blocks can either help or hinder your data recovery efforts. The same is true of unstable disks. When data is deleted or the drive is formatted, the disk controller may skip or fail to overwrite the contents of bad blocks. These sectors are usually (but not always) marked as unusable and excluded from normal I/O operations; as a result, their data may remain physically intact. With some forensic hardware, it may be possible to extract residual data from these bad blocks.

Firmware bugs and drive degradation can both lead to unpredictable behavior during data deletion or formatting. When you format a drive, it may not actually be zeroed out. In some cases, formatting only resets file system structures, leaving the underlying data untouched. This can be confirmed by scanning the drive with data recovery software: files often remain recoverable until they’re actively overwritten or erased at the physical level.

#6 File size and type

It may sound obvious, but smaller files are easier to recover: fragmentation is less frequent, as they are usually stored in a single contiguous chunk.

The file types you are after during a recovery process may directly affect the outcome. Some files store more information about their internal structure and may contain self-healing data that helps in recovery. Signature-based recovery methods, like Disk Drill’s Deep Scan, rely on the vendor’s internal knowledge base of known file signatures, which helps piece files back together when the partition, file system, or MFT information is no longer available.
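
To make the idea of signature-based carving concrete, here is a deliberately naive sketch – not Disk Drill’s actual Deep Scan – that hunts for JPEG start/end markers in a raw image. Real carvers handle fragmentation, embedded thumbnails, and hundreds of formats.

```python
# Minimal sketch of signature-based carving: find JPEG start (FF D8 FF) and
# end (FF D9) markers in a raw image and dump whatever lies between them.
# IMAGE is a placeholder; real tools stream the data instead of reading it all.
IMAGE = "disk.img"                 # placeholder: raw image of the scanned device
SOI, EOI = b"\xff\xd8\xff", b"\xff\xd9"

with open(IMAGE, "rb") as f:
    data = f.read()

count = pos = 0
while True:
    start = data.find(SOI, pos)
    if start == -1:
        break
    end = data.find(EOI, start)
    if end == -1:
        break
    with open(f"carved_{count:04d}.jpg", "wb") as out:
        out.write(data[start:end + 2])   # include the end-of-image marker
    count += 1
    pos = end + 2

print(f"carved {count} JPEG candidates")
```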

#7 Data deletion method

The most obvious factor in measuring the success of a data recovery attempt is the way the data was lost.

Even a so-called “full” format may not truly wipe all data. On some versions of Windows and macOS, default full format settings often just rebuild the file system structures, perform a surface scan, and mark bad blocks without actually zeroing out every data block unless explicitly instructed to do so via advanced options or command-line flags.

When files are deleted from NAS or cloud-connected storage, recovery might still be possible, especially if you have physical or SSH-level access to the underlying disks. Logical deletion doesn’t always imply physical erasure, so direct access opens a recovery window before the data is overwritten or TRIMmed.
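
The generic shape of that approach (not Disk Drill’s SSH module specifically) is to stream a raw image of the remote disk over SSH and then scan the image locally. A minimal sketch, with the host and device path as placeholders – the remote dd needs root, and the resulting image can be very large:

```python
# Minimal sketch: stream a raw copy of a NAS member disk over SSH into a local
# image file, then run recovery scans against the image instead of the live
# device. HOST and REMOTE_DEVICE are placeholders; the read is non-destructive.
import subprocess

HOST = "admin@nas.local"       # placeholder
REMOTE_DEVICE = "/dev/sdb"     # placeholder: disk or md/RAID device to image
LOCAL_IMAGE = "nas_disk.img"

with open(LOCAL_IMAGE, "wb") as image:
    subprocess.run(
        ["ssh", HOST, f"dd if={REMOTE_DEVICE} bs=1M"],
        stdout=image,
        check=True,
    )
```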

#8 Effectiveness of recovery software algorithms

This one seems pretty obvious. When choosing reliable DIY data recovery software, consider its metadata analysis features, file carving (or deep scan) capabilities, as well as its methods of intelligent data reconstruction, like Advanced Camera Recovery.

#9 Specialized hardware

This is slightly outside of the realm of logical data recovery, but you should always keep in mind that depending on a specific scenario and your target drive’s condition, specialized hardware like PC3000, USB Stabilizer, and others may substantially increase your chances of success.

For example, some bad blocks can’t be written to but still allow read access, which means a standard format operation won’t affect them. In such cases, the original data may remain intact and readable (with the help of specialized hardware). Surprisingly, even a basic USB 2.0 or 1.1 docking station can sometimes read from these blocks more reliably due to the lower data transfer rate, effectively simulating the behavior of a primitive USB stabilizer – slower, but more stable reads from marginal media.
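
That “slower but more stable” effect can also be approximated in software by reading in small blocks and retrying failures – roughly what GNU ddrescue automates far more carefully. A minimal sketch, with the device path as a placeholder; reads are non-destructive but need root/administrator rights:

```python
# Minimal sketch of slow, small-block imaging with retries. Unreadable blocks
# are zero-filled so the image stays aligned, and their count is reported.
import os

DEVICE = "/dev/sdX"     # placeholder: the unstable source drive
IMAGE = "rescued.img"
BLOCK = 4096            # small blocks: slower, but each failure costs less data
RETRIES = 3

fd = os.open(DEVICE, os.O_RDONLY)
offset = 0
bad_blocks = 0
with open(IMAGE, "wb") as out:
    while True:
        chunk = None
        for _ in range(RETRIES):
            try:
                os.lseek(fd, offset, os.SEEK_SET)
                chunk = os.read(fd, BLOCK)
                break
            except OSError:
                continue                    # transient read error: retry
        if chunk == b"":                    # clean end of device
            break
        if chunk is None:                   # every retry failed: bad block
            bad_blocks += 1
            chunk = b"\x00" * BLOCK         # keep the image aligned, log the gap
        out.write(chunk)
        offset += BLOCK
os.close(fd)
print(f"unreadable {BLOCK}-byte blocks: {bad_blocks}")
```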

It’s precisely because of these “imperfections” – like firmware quirks, delayed TRIM, recoverable bad blocks, and incomplete formatting – that data recovery remains possible even in cases that seem hopeless at first glance. These edge conditions create loopholes in the data destruction process, leaving remnants that skilled tools and techniques can still extract.

On the other hand, that’s exactly why industry standards like NIST 800-88 and DIN 66399 exist – they define strict protocols for the physical destruction of data when absolute, 100% irrecoverability is required. Logical deletion and even formatting aren’t enough in high-security scenarios; only shredding, degaussing, or incineration guarantees the data is gone beyond recovery.

Let’s summarize…

In 2025, when a user believes a file is deleted, it usually just means the file is no longer visible to that user in their specific circumstances. It doesn’t guarantee that the data (or its artifacts) have been physically and irreversibly removed. Visibility ≠ erasure.

We are asked something like “Can you recover my data?” probably around 500 times a day. And our answer has not changed in the last decade: “We don’t know, you should try and see for yourself.” Every recovery/data loss case is unique. Cases that seem more or less standard and follow regular, predictable patterns may well turn out to be unrecoverable, while we have observed completely unexpected “lucky” SSD misbehaviors where TRIM did not kick in on time and all the deleted data was still there for our software, Disk Drill, to extract and recover. However, in most cases, the probability of recovering deleted data from a modern SSD is very low; it goes up for other types of storage.

Of course, if you’ve just unboxed a brand-new MacBook and deleted files from your desktop, they’re likely physically unrecoverable at the block level thanks to File-Based Encryption, hardware-backed security, and FileVault. However! Even then, copies of those files might still exist in your iCloud trash or local Time Machine backups. True deletion in the modern ecosystem often requires cleaning up everywhere the data might have been replicated.

A Practical Example

Our Response

Updated: June 30, 2025 · Author: CleverFiles Team