Sony Building, 4110 — Vanderbilt University
My current research focuses on a specific sub-area of systems security called the debugging transparency problem. In essence, I am developing tools that help engineers analyze malicious code without the analysis itself being detected by the malware.
In order to develop defenses against malicious code, engineers must first understand what a sample does and how it behaves. This includes understanding the underlying vector by which infection occurs, as well as the behavior of the malicious payload that executes once a host becomes infected. Many tools exist for understanding malware's behavior, from bread-and-butter debuggers like GDB to more advanced packages such as IDA Pro. More recently, automated triage tools have been used to classify samples drawn from large corpora of malware.
Unfortunately, while our defenses have improved a great deal, so too has the complexity of malware. State-of-the-art malicious code is capable of detecting the presence of debuggers and other analysis tools. If the malware can detect a tool, it can change its behavior (i.e., hide itself), break specific debuggers, or otherwise conceal its true nature from the analyst. In these cases, we want analysis tools that cannot be detected by the malicious code.
Malware can detect certain eccentricities that analysis tools exhibit or introduce. As a simple example, in Windows, the kernel API provides a method that reports whether a debugger is attached to the calling process. Malware can trivially detect debuggers that naively modify this method's return value. We refer to these telltale behaviors and oddities as artifacts. Thus, the debugging transparency problem concerns the study and development of artifact-free analysis tools.