Dangerous Assumptions: Solid-State Disk Behavior Underlying Digital Forensics
Forensically capturing a conventional disk is straightforward: power down the system, attach the drive to a portable forensic unit through a protective write-blocking device, and capture the device bit-for-bit. Because the write blocker prevents the host from issuing any modification, the drive's contents are presumed to remain completely intact. Non-conventional mass storage devices (e.g., "solid-state disks," hereafter "SSDs") implement features that invalidate the presumed efficacy of write blockers. This has implications in both the government and corporate worlds.
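The conventional acquisition step can be sketched in a few lines. This is an illustrative model, not a production imaging tool; the device path is hypothetical, and a real acquisition would use validated forensic software behind a hardware write blocker.

```python
# Illustrative sketch of bit-for-bit acquisition with hash verification.
# Assumes the source is exposed read-only (e.g., behind a write blocker)
# as a raw block device; the path "/dev/sdb" below is hypothetical.
import hashlib

def image_device(source_path, image_path, chunk_size=1024 * 1024):
    """Copy a device sector-for-sector and return its SHA-256 digest,
    so the image can later be verified against the original."""
    digest = hashlib.sha256()
    with open(source_path, "rb") as src, open(image_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk)
            digest.update(chunk)
    return digest.hexdigest()

# Usage (illustrative): image_device("/dev/sdb", "evidence.dd")
```

The digest is the linchpin of the procedure: if the drive cannot change while blocked, re-imaging later must reproduce the same hash. It is precisely this invariant that autonomous SSD behavior undermines.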
From a programming and management perspective, SSDs often appear to be nothing more than very high-performance, plug-compatible replacements for conventional disk drives. Conventional drives, however, use magnetic recording: overwriting an existing record takes the same time as writing the original record to previously unoccupied space, so there is no performance reason to clear freed space in advance. Erasing newly freed space requires additional write operations, a significant performance penalty; for this reason, such "security erase" behavior is almost always a settable parameter, and the default setting is almost always off.
SSDs are different. Writing a virgin cell requires only a write cycle; rewriting a cell requires two cycles, an erase cycle followed by a write cycle. The erase cycle is governed by the physics of the flash cell and takes time. Performance is therefore improved by "pre-clearing" no-longer-needed cells (e.g., free space on the disk) during otherwise idle device cycles, the opposite of the situation with magnetic storage. Since positioning delays are negligible on an SSD, pre-clearing becomes the dominant performance issue. Two approaches are in use: an operating system-based implementation, and a controller-based implementation invisible to the operating system. The hardware-based, autonomous implementation is the one of interest to the forensics community.
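The asymmetry between virgin writes and rewrites can be made concrete with a toy cost model. All timings below are invented for illustration; real program and erase times vary by flash generation and geometry.

```python
# Toy model of flash write cost (all numbers invented for illustration):
# a virgin (already-erased) cell costs one write cycle; a programmed cell
# must first be erased, paying the much larger erase penalty in-line.
WRITE_US = 100   # hypothetical program time, microseconds
ERASE_US = 2000  # hypothetical erase time, microseconds

def write_cost(cell_is_erased):
    """Host-visible time to store new data in one cell."""
    return WRITE_US if cell_is_erased else ERASE_US + WRITE_US

# Without pre-clearing, every rewrite pays the erase penalty in-line.
# With pre-clearing, the controller erases freed cells during idle time,
# so the host-visible cost drops back to a bare write.
inline_erase = sum(write_cost(False) for _ in range(1000))  # 1000 rewrites
precleared   = sum(write_cost(True) for _ in range(1000))   # same rewrites
```

Under these invented numbers, the pre-cleared path is 21 times faster from the host's point of view, which is why controllers are strongly motivated to erase "dead" space aggressively and autonomously.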
Some SSD devices take the novel approach of mining the on-device file structure (e.g., Windows NTFS) for data about which areas of storage contain live data and which are free space awaiting reassignment. The device controller "knows" that the first operation on these areas from the host operating system will be a write; thus it is safe to preemptively erase them in preparation. This is a substantial performance improvement and takes place on a controller-determined schedule. No intervention by the host computer is required to activate this activity. Thus, on a volume with a controller-recognized file structure (e.g., NTFS), a write-blocking device is only effective to the extent of protecting areas declared as currently in use.
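The controller's reasoning can be sketched in outline. NTFS records cluster allocation in a bitmap (one bit per cluster, set bits allocated); a controller able to parse that structure can enumerate free runs and queue them for erasure. The code below is a simplified illustration of bitmap interpretation, not actual controller firmware, and the bit ordering shown is an assumption for the sketch.

```python
def free_cluster_runs(bitmap: bytes):
    """Yield (start, length) runs of free clusters from an NTFS-style
    allocation bitmap: one bit per cluster, set bit = allocated,
    least-significant bit of each byte = lowest-numbered cluster
    (bit ordering assumed for illustration)."""
    run_start = None
    total = len(bitmap) * 8
    for cluster in range(total):
        allocated = bitmap[cluster // 8] & (1 << (cluster % 8))
        if not allocated and run_start is None:
            run_start = cluster          # a free run begins
        elif allocated and run_start is not None:
            yield run_start, cluster - run_start  # the run just ended
            run_start = None
    if run_start is not None:
        yield run_start, total - run_start       # run extends to the end

# A controller with this knowledge could schedule each free run for
# erasure during idle cycles, regardless of any write blocker sitting
# on the host interface.
```

The forensic consequence follows directly: everything the bitmap marks as free is, from the controller's perspective, fair game for destruction on its own schedule.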
A recent paper from Graeme Bell and Richard Boddington of Murdoch University in Perth, "Solid State Drives: The Beginning of the End for Current Practice in Digital Forensic Recovery?", documented several consequences of this implementation approach with respect to standard best practices for digital forensic acquisitions. In short, the autonomous pre-clearing function rendered free space unrecoverable in short order once the drive was powered on.
The classic advice given to forensic investigators has been that a “quick format” operation is an ineffective technique for scrubbing data because the actual information could always be recovered. For autonomous SSD devices with knowledge of NTFS, this presumption seems to be extremely questionable.
Documentation for the autonomous erase functionality is not readily available. However, it would logically seem to be limited to standard NTFS volumes on standard partitions. It does not seem applicable to devices used in RAID arrays or to volumes created using whole-disk encryption (e.g., TrueCrypt). In those cases, it is difficult to see how the device controller, which is privy neither to the complete picture (e.g., RAID) nor to a cleartext copy of the data (e.g., TrueCrypt), could determine the necessary information. As Bell and Boddington note, the automatic nature of the resetting function on space the controller determines to be unallocated has several implications for standard forensic procedures.
The paper goes on to note that conventional interpretations of scrubbed drives are almost always negative (e.g., "deliberate erasure," "destruction of evidence"). Since there are a number of legitimate reasons for quick formats and file deletions, the potential for erroneous interpretation is significant.
Going further, there is cause for concern beyond the forensics community. Systems managers are well aware that errors in file system tables are not uncommon, and that historically they have been recoverable. Now, however, an autonomous actor (the SSD controller) interprets the file structure on its own: an error in the file system structures becomes, in effect, an automatic short-term erase order for the affected data.
There is also a long history of coordination difficulties when a file system is accessed from two uncoordinated systems, with significant potential for corruption of both file structures and data. In the past, the damage has primarily involved the file structures themselves; free space management and multiply allocated blocks are frequent trouble spots. Autonomous erasure of deduced dead information would seem to be a previously unencountered risk.
Corporate IT departments should be particularly concerned. What was previously a simple matter of running a recovery utility against a disk with corrupted structures may now involve multiple actors, none of which has any mechanism for synchronization. This is a matter for concern, since the possible risks may invalidate previously sound operating procedures, leading to data loss.
Graeme Bell and Richard Boddington (2010), "Solid State Drives: The Beginning of the End for Current Practice in Digital Forensic Recovery?"
Steve Bunting (2008), EnCE: The Official EnCase® Certified Examiner Study Guide, p. 69.