I study reinforcement learning for social good.
I am a PhD student at Harvard, interested in applying RL to problems of social good. I am also the co-founder of Founding. My other interests include alignment and the principles of intelligence. I am very fortunate to be advised by the wonderful Professor Milind Tambe. I am also grateful to have worked closely with Chuang Gan at MIT-IBM Watson, Amy Zhang at Meta, and William Wang at UCSB.
Regents Scholar (top 2.5% of school)
Relevant coursework: Convex Optimization, Game Theory, Advanced Linear Algebra, Differential Geometry, Statistical Machine Learning, Special Topics in Deep Learning
AlphaGo Zero Reimplementation
Graph Theory w/ UCSB
BERT Lecture Summarization
Predicting Winners in League
3D graphics with React
It's like LinkedIn but Tinder
Connecting HS Students w/ College Students
I really like learning, and thinking about learning. I like spending time with people even more.
I love playing tennis (and losing miserably at it to my superior roommate), riding the BART, hating on Apple (sometimes while riding the BART), watching anime, and hunting dinosaurs. Haha, just kidding about that last one.
The credit assignment problem is an extremely interesting problem that appears in reinforcement learning and AI in general. Say I play a game of chess and make n moves in succession. At the end of the game, I get just one discrete feedback signal: the outcome of the game. How should the credit for that outcome be attributed across the individual moves? This is the credit assignment problem. For a more in-depth introduction to the topic, I would recommend this paper from Minsky, starting from part 3 on page 10.

The reason I mention this here is that very little of my career credit should be attributed to me. I am eternally grateful to the following people for their kindness, support, and guidance. Without them, I would have nothing. In order of recency (not importance): Jiachen Li, Chad Spensky, Shou Chaofan, Derren Slinde.
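To make the problem concrete, here is a minimal illustrative sketch (names and the reward convention are my own, not from any particular paper): the simplest thing one can do with a single end-of-game reward is spread it over the moves via discounted returns, which credits later moves more than earlier ones. This is a crude heuristic that highlights the difficulty, not a solution to it.

```python
def discounted_returns(n_moves, terminal_reward, gamma=0.99):
    """Spread one end-of-game reward over n_moves moves.

    Move i (0-indexed) receives gamma**(n_moves - 1 - i) * terminal_reward,
    so the final move gets full credit and earlier moves get exponentially
    less. Note this says nothing about which moves actually mattered --
    that is exactly what the credit assignment problem asks.
    """
    return [gamma ** (n_moves - 1 - i) * terminal_reward
            for i in range(n_moves)]

# A won game (terminal reward +1) lasting 5 moves:
credits = discounted_returns(5, 1.0, gamma=0.9)
print([round(c, 3) for c in credits])  # earlier moves receive less credit
```

The shortcoming is easy to see: two games with identical outcomes assign identical credit regardless of which moves were good or bad, which is why more sophisticated methods (value functions, advantage estimates) exist.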