<aside> 📄 Google Scholar
</aside>
<aside> <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/27eb5805-53fb-4a73-8248-9cca33adce3c/46a01e87-8cf8-4ca8-98b3-f6bc18fbef39/linkedin_480px.png" alt="LinkedIn icon" width="40px" /> LinkedIn
</aside>
<aside> <img src="https://prod-files-secure.s3.us-west-2.amazonaws.com/27eb5805-53fb-4a73-8248-9cca33adce3c/f30304aa-3449-4968-94ed-4711f30b0b92/icons8-twitter-48.png" alt="Twitter icon" width="40px" /> Twitter
</aside>
I also maintain most of my study notes on LLMs here.
$$ \Large \textbf{About Me} $$
I’m currently a 4th-year Ph.D. candidate in the Department of Computer Science at the University of California, Los Angeles (UCLA), where I am very fortunate to be advised by Prof. Wei Wang. Alongside my studies, I’m a student researcher at Google. Earlier, I completed research internships at Microsoft Research and Amazon AWS, and I’m honored to have received the 2025 Amazon Fellowship.
I earned both my B.S. in Mathematics and M.S. in Computer Science from UCLA. During that time, I was a student researcher in the UCLA-NLP group with Prof. Kai-Wei Chang.
My current research revolves around post-training strategies for Large Language Models (LLMs), with a particular focus on RLVR, synthetic data generation, and self-improvement in LLMs. Most recently, I have been working on multi-modal and agentic reasoning.
March 2025 – Present
Student Researcher | Google LLC
RL for LLM Agentic Reasoning
Project 1: Ultra-fast Exploration for Scalable Agentic Reasoner [Contribution to Gemma]
Project 2: to be released.
Summer 2024
Research Intern | Microsoft Research
LLM Self-Training for Math Reasoning.
Paper: Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning
[NeurIPS 2024 Math-AI Workshop] [Media]
Summer 2023
Applied Scientist Intern | Amazon AWS
Large Language Model Reasoning with Knowledge Graphs.
Pre-prints
2025
2024
Before 2024