
SOCIALS


Twitter

LinkedIn

Google Scholar

I also maintain most of my study notes on LLMs here.

ABOUT ME

I’m currently a 4th-year Ph.D. student in the Department of Computer Science at the University of California, Los Angeles (UCLA), where I am very fortunate to be advised by Prof. Wei Wang. Alongside my studies, I'm a part-time student researcher at Google. Previously, I did research internships at Microsoft Research and Amazon AWS.

I received my B.Sc. from the Department of Mathematics and my M.Sc. from the Department of Computer Science, both at UCLA. During that time, I was a student researcher in the UCLA-NLP group with Prof. Kai-Wei Chang.

My current research revolves around post-training techniques for Large Language Models (LLMs), particularly reinforcement learning (RL/RLHF), synthetic data generation, and enabling self-improvement in LLMs. Additionally, I explore reward hacking mitigation and multi-modal learning.

Quick links:

My Blogs / Notes

My Talks / Presentations

EDUCATION


2021 - Present

PhD (Computer Science) | University of California, Los Angeles (UCLA)

2019 - 2021

MS (Computer Science) | University of California, Los Angeles (UCLA)

2015 - 2019

BS (Mathematics of Computation) | University of California, Los Angeles (UCLA)

PROFESSIONAL EXPERIENCE


Summer 2024

Research Intern | Microsoft Research

LLM Self-training for Math Reasoning.

Paper: Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning

[NeurIPS 2024 Math-AI Workshop] [Media]

Summer 2023

Applied Scientist Intern | Amazon AWS

Large Language Model Reasoning with Knowledge Graphs.

PAPERS (See the full list on my Google Scholar) *Equal Contribution


2025

Pre-prints

2024
