I am a (happy!) graduate student in Julie Shah's Interactive Robotics Group (IRG) at MIT. My research focuses primarily on the interpretability of AI systems. This work takes the form of custom-built neural models for particular tasks such as fair classification, probes that examine the linguistic properties of NLP models, and representation learning aimed at human understanding.

Before joining IRG, I worked for two years as a software engineer on the Advanced Projects team at Amazon Robotics. Prior to that, I earned my Master of Engineering in the Robust Robotics Group at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, concentrating in robotics. My research adviser was Professor Nicholas Roy in the Department of Aeronautics and Astronautics.

Before joining CSAIL as a graduate student, I earned my BS in Electrical Engineering and Computer Science and in Aeronautical and Astronautical Engineering from MIT in 2015.


I'm currently studying methods for teaching rules to, and extracting rules from, neural nets; these are two sides of the same coin: control and interpretability. The main methods I employ are causal probing techniques and custom neural architectures that better support human understanding. Much of my work is situated in language-adjacent tasks because of the rich interplay between structure (e.g., linguistics or grammar) and intuition (e.g., how an agent learns the meaning of a word).