
SOCIALS


<aside> 📄 Google Scholar

</aside>

<aside> <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/27eb5805-53fb-4a73-8248-9cca33adce3c/46a01e87-8cf8-4ca8-98b3-f6bc18fbef39/linkedin_480px.png" alt="LinkedIn icon" width="40px" /> LinkedIn

</aside>

<aside> <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/27eb5805-53fb-4a73-8248-9cca33adce3c/f30304aa-3449-4968-94ed-4711f30b0b92/icons8-twitter-48.png" alt="Twitter icon" width="40px" /> Twitter

</aside>

I also maintain most of my study notes on LLMs here.

$$ \Large \textbf{About Me} $$

I'm a rising 5th-year Ph.D. candidate in the Department of Computer Science at the University of California, Los Angeles (UCLA), where I am very fortunate to be advised by Prof. Wei Wang. Alongside my studies, I'm a student researcher at Google. Earlier, I completed research internships at Microsoft Research and Amazon AWS, and I'm honored to have received the 2025 Amazon Fellowship.

I earned both my B.S. in Mathematics and my M.S. in Computer Science from UCLA. During that time, I was a student researcher in the UCLA-NLP group with Prof. Kai-Wei Chang.

My research interests revolve around post-training strategies for Large Language Models (LLMs), with a particular focus on reinforcement learning with verifiable rewards (RLVR), synthetic data generation, and self-improving LLMs. Most recently, I have been working on multi-modal and agentic reasoning.

Quick links:

My Blogs / Notes

My Talks / Presentations

PROFESSIONAL EXPERIENCE


2025 March – Present


Student Researcher | Google LLC

RL for LLM Agentic Reasoning

Project 1: Ultra-fast Exploration for Scalable Agentic Reasoner [Internal Contribution to Google LLMs]

Project 2: to be released.

2024 Summer


Research Intern | Microsoft Research

LLM Self-Training for Math Reasoning.

Paper: Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning

[NeurIPS 2024 Math-AI Workshop] [Media]

2023 Summer


Applied Scientist Intern | Amazon AWS

Large Language Model Reasoning with Knowledge Graphs.

PAPERS (see the full list on my Google Scholar) *Equal contribution


Multi-modal LLMs


<aside> 📄

Synthetic Data for LLM Improvement


<aside> 📄



Data Curriculum / Scheduling

<aside> 📄