Abstract

Kickstarted by the Digital Forensic Research Workshop (DFRWS) conference in 2005, modern memory analysis is now one of the most active areas of computer forensics, and it mostly focuses on techniques to locate key operating system data structures and extract high-level information. These techniques rest on the assumption that the information inside a memory dump is consistent and that the copy of the physical memory was obtained in an atomic operation. Unfortunately, this is seldom the case in real investigations, where software acquisition tools record memory while the rest of the system keeps running. Since the content of memory changes very rapidly, the resulting dump may contain inconsistent data. While this problem is known, its consequences are unclear and often overlooked. Worse, errors can be very subtle and can affect the results of an analysis in ways that are difficult to detect. In this article, we argue that memory forensics should also consider the time at which each piece of data was acquired. This new temporal dimension provides a preliminary way to assess the reliability of a given result and opens the door to new research directions that can minimize the effect of the acquisition time or detect inconsistencies. To support our hypothesis, we conducted several experiments showing that inconsistencies are very frequent and can negatively impact an analysis. We then discuss the modifications we made to popular memory forensic tools to make the temporal dimension explicit during the analysis and to minimize its effect by resorting to a locality-based acquisition.
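To make the temporal dimension concrete, the minimal Python sketch below tags every acquired page with its acquisition time and warns when an analysis follows a pointer between pages copied far apart in time. All names here (TimestampedDump, follow_pointer, the one-second max_skew threshold) are hypothetical, chosen only for illustration; they are not the interfaces of the tools modified in the article.

    from dataclasses import dataclass

    PAGE_SIZE = 0x1000  # 4 KiB physical pages

    @dataclass
    class Page:
        """A physical page together with the time it was acquired."""
        data: bytes
        acquired_at: float  # seconds since the start of the acquisition

    class TimestampedDump:
        """A memory dump that records when each page was copied."""

        def __init__(self) -> None:
            # page-aligned physical address -> Page
            self.pages: dict[int, Page] = {}

        def add_page(self, phys_addr: int, data: bytes, acquired_at: float) -> None:
            self.pages[phys_addr & ~(PAGE_SIZE - 1)] = Page(data, acquired_at)

        def read(self, phys_addr: int, size: int) -> tuple[bytes, float]:
            """Return the requested bytes and the acquisition time of their page."""
            page = self.pages[phys_addr & ~(PAGE_SIZE - 1)]
            offset = phys_addr & (PAGE_SIZE - 1)
            return page.data[offset:offset + size], page.acquired_at

    def follow_pointer(dump: TimestampedDump, src_addr: int, dst_addr: int,
                       max_skew: float = 1.0) -> None:
        """Flag pointer traversals that cross a large acquisition-time gap."""
        _, t_src = dump.read(src_addr, 8)
        _, t_dst = dump.read(dst_addr, 8)
        skew = abs(t_dst - t_src)
        if skew > max_skew:
            print(f"warning: pointer {src_addr:#x} -> {dst_addr:#x} spans "
                  f"{skew:.1f}s of acquisition time; result may be inconsistent")

For example, a list head acquired half a second into the dump that points into a page copied forty seconds later would be flagged, since the target structure may have been freed or rewritten in the meantime:

    dump = TimestampedDump()
    dump.add_page(0x1000, b"\x00" * PAGE_SIZE, acquired_at=0.5)
    dump.add_page(0x2000, b"\x00" * PAGE_SIZE, acquired_at=42.0)
    follow_pointer(dump, 0x1010, 0x2020)  # warns: pages ~41.5s apart

A locality-based acquisition, as discussed in the article, aims to shrink exactly this kind of gap by copying related pages close together in time.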
