I am a (happy!) graduate student in Julie Shah's Interactive Robotics Group (IRG) at MIT. My research focuses primarily on the interpretability of AI systems. This work takes several forms: custom-built neural models for particular tasks like fair classification, probes for understanding the linguistic properties of NLP models, and methods for inducing more human-like emergent communication.
Before joining IRG, I worked for two years as a software engineer on the Advanced Projects team at Amazon Robotics. Prior to that, I earned my Master of Engineering in the Robust Robotics Group at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, concentrating in robotics. My research adviser was Professor Nicholas Roy in the Department of Aeronautics and Astronautics.
Prior to joining CSAIL as a graduate student, I earned my BS in Electrical Engineering and Computer Science and Aeronautical and Astronautical Engineering from MIT in 2015.
I'm currently studying how cognitive principles can be used to understand or to teach neural nets; these are two sides of the same coin, corresponding to interpretability and control. My main methods are causal probing of large language models and complexity-limited emergent communication architectures that induce more human-like communication. Much of my work is situated in language-adjacent tasks because of the rich interplay between structure (e.g., linguistics or grammar) and intuition (e.g., how an agent learns the meaning of a word).
Some of my most recent work explores how trading off utility, informativeness, and complexity can be used to generate human-like emergent communication systems (an example for a color domain is shown on the left).
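As a rough sketch (the notation and weights here are illustrative, not the exact formulation from any particular paper of mine), this tradeoff can be framed as a single weighted objective that the communicating agents are trained to maximize:

\[
\max \;\; \text{Utility} \;+\; \lambda_I \cdot \text{Informativeness} \;-\; \lambda_C \cdot \text{Complexity}
\]

Intuitively, utility rewards task success, informativeness rewards messages that let a listener reconstruct the speaker's meaning, and complexity penalizes the representational cost of the communication system; sweeping the weights \(\lambda_I\) and \(\lambda_C\) traces out a spectrum of emergent languages, some of which resemble human naming systems.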