Kevin Joseph Leach

Sony Building, 4110 — Vanderbilt University

kevin.leach@vanderbilt.edu


Current Research

My research spans the disciplines of software engineering, security, and artificial intelligence. I work with several amazing students and collaborators to build dependable and robust software systems. This includes:

  • Building dynamic analyses for securely executing and analyzing security-critical software and malware, especially stealthy or evasive samples.
  • Understanding how human developers and operators work with, comprehend, or make decisions about critical software systems. In particular, we use human studies and objective measures such as brain imaging to conduct these analyses.
  • Measuring, detecting, and understanding the role and limitations of training data for AI-enabled systems. In particular, we develop metrics and challenge datasets that expose weaknesses in AI models, then improve, augment, or transform training data to make those models more robust.

I do not like having intellectual boundaries or staying fixed in one particular research area or set of venues. I am fortunate to work with several great students whose projects span multiple research domains. I encourage my students to pick projects that motivate them, then submit their best work to the most appropriate venue. If you are interested in working with me as a student or collaborator, please reach out by email.


My work during my PhD was substantially funded by the Cyber Systems Assessments group at MIT Lincoln Laboratory, as part of the LOPHI (Low-artifact Observable Physical Host Instrumentation) project.

Debugging Transparency

My current research focuses on a specific subarea of systems security called the debugging transparency problem. In essence, I am developing tools that help engineers analyze malicious code without the malware detecting that it is being analyzed.

Background

In order to develop defenses against malicious code, engineers must first understand what a sample does and how it behaves. This includes understanding the underlying vector by which infection occurs, as well as the behavior of the malicious payload that executes once a host becomes infected. Many tools are used to understand malware's behavior, from bread-and-butter debuggers like GDB to more advanced packages such as IDA Pro. More recently, automated triage tools have been used to identify samples of interest within large corpora of malware.
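As a toy illustration of one early triage step, the sketch below (written for this page, not drawn from any particular tool) fingerprints samples with a simple content hash so that exact duplicates in a corpus can be grouped before an analyst spends time on them. Production pipelines would use cryptographic or fuzzy hashes (e.g., ssdeep) rather than the FNV-1a hash used here to keep the example self-contained.

    /* Toy sketch of a coarse triage step: hash each sample so exact
     * duplicates in a large corpus can be grouped before deeper analysis. */
    #include <stdio.h>
    #include <stdint.h>

    static uint64_t fnv1a_file(const char *path) {
        FILE *f = fopen(path, "rb");
        if (!f) return 0;
        uint64_t h = 14695981039346656037ULL;  /* FNV-1a 64-bit offset basis */
        int c;
        while ((c = fgetc(f)) != EOF) {
            h ^= (uint64_t)(unsigned char)c;
            h *= 1099511628211ULL;              /* FNV-1a 64-bit prime */
        }
        fclose(f);
        return h;
    }

    int main(int argc, char **argv) {
        /* Print one fingerprint per sample passed on the command line. */
        for (int i = 1; i < argc; i++)
            printf("%016llx  %s\n",
                   (unsigned long long)fnv1a_file(argv[i]), argv[i]);
        return 0;
    }

Running this over a directory of samples prints one line per file; identical fingerprints mark byte-for-byte duplicates that need not be analyzed twice.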

The Problem

Unfortunately, while our defenses have improved a great deal, so too has the complexity of malware. State-of-the-art malicious code is capable of detecting the presence of debuggers and other analysis tools. If the malware can detect a tool, it can change its behavior (i.e., hide itself), break specific debuggers, or otherwise conceal its true nature from the analyst. In these cases, we want analysis tools that the malicious code cannot detect.
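To make this concrete, the sketch below shows one of the simplest anti-debugging tricks available to Linux malware; it is an illustrative example written for this page rather than code from a real sample. Because the kernel allows a process to have only one tracer, a failing ptrace(PTRACE_TRACEME) call suggests that a debugger such as GDB is already attached.

    /* Minimal sketch of a Linux anti-debugging check (illustrative only).
     * A process may have at most one tracer, so if ptrace(PTRACE_TRACEME)
     * fails, a debugger is probably already attached to this process. */
    #include <stdio.h>
    #include <sys/ptrace.h>

    int main(void) {
        if (ptrace(PTRACE_TRACEME, 0, NULL, NULL) == -1) {
            /* A real sample would hide or alter its payload here. */
            printf("debugger detected: executing benign decoy behavior\n");
        } else {
            printf("no debugger observed: executing real behavior\n");
        }
        return 0;
    }

Evasive samples chain many such checks, including timing measurements and environment fingerprinting, which is what makes transparent analysis tooling difficult to build.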

Artifacts and Transparency

Malware can detect certain eccentricities that exist in, or are produced by, analysis tools. As a simple example, the Win32 API on Windows provides a function called IsDebuggerPresent. Analysis tools often force this function to report that no debugger is attached, but malware can trivially detect such tampering. We refer to these telltale behaviors and oddities as artifacts. The debugging transparency problem thus concerns the study and development of artifact-free analysis tools.
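The sketch below, again written purely for illustration, shows why naively patching that function is insufficient: IsDebuggerPresent simply reads the BeingDebugged byte in the Process Environment Block (PEB), so a sample can read that flag itself and compare it against the API's answer.

    /* Illustrative sketch (Windows): cross-check IsDebuggerPresent() against
     * the BeingDebugged byte in the Process Environment Block (PEB) that the
     * API reads.  A tool that only patches the API's return value leaves the
     * PEB flag behind as a detectable artifact. */
    #include <windows.h>
    #include <winternl.h>
    #include <stdio.h>

    int main(void) {
        /* Ask the documented API. */
        BOOL api_says = IsDebuggerPresent();

        /* Read the same flag directly from the PEB. */
        PPEB peb = NtCurrentTeb()->ProcessEnvironmentBlock;
        BOOL peb_says = peb->BeingDebugged;

        if (api_says != peb_says)
            printf("inconsistent results: an analysis tool is likely present\n");
        else if (peb_says)
            printf("debugger detected\n");
        else
            printf("no debugger observed\n");
        return 0;
    }

Either reading alone can reveal an attached debugger, and any mismatch between the two is itself an artifact that betrays the analysis tool.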