---
layout: about
title: about
permalink: /
subtitle: >
  Postdoctoral Researcher at Max Planck Institute for Security and Privacy

profile:
  align: right
  image: prof_pic2.png
  image_circular: false
  more_info: >
    <p>Max Planck Institute for Security and Privacy</p>
    <p>Universitaetsstr. 140</p>
    <p>44799 Bochum, Germany</p>

selected_papers: true
social: true

announcements:
  enabled: true
  scrollable: true
  limit: 20

latest_posts:
  enabled: false
---

I am a postdoctoral researcher at the Max Planck Institute for Security and Privacy (MPI-SP), working on machine memory, mechanistic interpretability, and alignment in large language models.

My path to AI began in neuroscience wet labs: recording synaptic responses, inducing LTP and LTD to study memory formation, and using optogenetics to causally link neural circuits to behavior. I then moved upstream, developing AI systems to analyze the complex behaviors these manipulations produced. This arc, from probing biological memory to controlling it to quantifying its behavioral consequences, now shapes how I approach neural networks.

I study AI Engrams: locating where learned knowledge resides within model parameters and building precise methods to edit or erase it. I am also interested in the correspondence between biological and artificial neural networks—reinterpreting transformers as models of hippocampal memory consolidation and studying representational alignment between visual cortex and convolutional networks.

My underlying conviction is that interpretability, steerability, and alignment form a causal chain. If we can precisely locate memory traces within parameters, we gain the ability to steer model behavior at its source; and if we can steer it, we can align it. The path to trustworthy AI runs through understanding how these systems learn and remember.

Ph.D. from Korea University; previously at IBS and KIST.