What doesn’t kill you makes you stronger.
I am Runxin Xu (许润昕), a researcher at DeepSeek. I am deeply involved in the development of DeepSeek's model series, including DeepSeek-R1, DeepSeek V1/V2/V3, DeepSeek-Math, DeepSeek-Coder, and DeepSeek-MoE.
Previously, I was a master's student at the Institute of Computational Linguistics in the School of EECS at Peking University, advised by Dr. Baobao Chang and Dr. Zhifang Sui. Before that, I earned my Bachelor's degree at Shanghai Jiao Tong University.
My long-term research interest lies primarily in AGI: continuously pushing the boundaries of AI intelligence with scalable and effective methods. I constantly remind myself to revisit The Bitter Lesson.
Education
Peking University, Sep. 2020 - Jun. 2023
- Master's student at the School of EECS.
- Advised by Dr. Baobao Chang and Dr. Zhifang Sui at the Institute of Computational Linguistics.
Shanghai Jiao Tong University, Sep. 2016 - Jun. 2020
- Bachelor's student at the School of Cyber Science and Engineering.
Experience
DeepSeek AI, Aug. 2023 – present
- LLM post-training.
Metabit Trading, Nov. 2022 – Mar. 2023
- Quantitative researcher.
- Advised by Bowei Ma and An Ju.
ByteDance Search, Jan. 2022 – Sep. 2022
- Search engine for Douyin Mall.
- Advised by Shian Chen, Zhe Chen, and Pengcheng Yang.
Alibaba DAMO Academy, Mar. 2021 – Dec. 2021
- Effective and efficient language models.
- Advised by Dr. Songfang Huang and Fuli Luo.
ByteDance AI Lab, Nov. 2019 – Jan. 2021
- Information extraction and machine translation.
- Advised by Dr. Lei Li, Dr. Mingxuan Wang, and Jun Cao.
Microsoft C+AI, Jul. 2019 – Oct. 2019
- Developed the VS Code extension vscode-maven.
- Advised by Rome Li, Jinbo Wang, and Yan Zhang.
Selected Publications
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. [PDF]
DeepSeek-AI, …, Runxin Xu, …
arXiv 2025
DeepSeek-V3 Technical Report. [PDF]
DeepSeek-AI, …, Runxin Xu, …
arXiv 2024
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model. [PDF]
DeepSeek-AI, …, Runxin Xu, …
arXiv 2024
DeepSeek LLM: Scaling Open-Source Language Models with Longtermism. [PDF]
DeepSeek-AI, …, Runxin Xu, …
arXiv 2024
DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence. [PDF]
DeepSeek-AI, …, Runxin Xu, …
arXiv 2024
DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models. [PDF]
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, Daya Guo
arXiv 2024
Math-Shepherd: Verify and Reinforce LLMs Step-by-Step Without Human Annotations. [PDF]
Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, Zhifang Sui
ACL 2024
DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models. [PDF]
Damai Dai, Chengqi Deng, Chenggang Zhao, Runxin Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Y Wu, Zhenda Xie, YK Li, Panpan Huang, Fuli Luo, Chong Ruan, Zhifang Sui, Wenfeng Liang
ACL 2024
Multimodal ArXiv: A Dataset for Improving Scientific Comprehension of Large Vision-Language Models. [PDF]
Lei Li, Yuqi Wang, Runxin Xu, Peiyi Wang, Xiachong Feng, Lingpeng Kong, Qi Liu
ACL 2024
Omni-MATH: A Universal Olympiad Level Mathematic Benchmark for Large Language Models. [PDF]
Bofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei Li, Chenghao Ma, Liang Chen, Runxin Xu, Zhengyang Tang, Benyou Wang, Daoguang Zan, Shanghaoran Quan, Ge Zhang, Lei Sha, Yichang Zhang, Xuancheng Ren, Tianyu Liu, Baobao Chang
ICLR 2025
BERT Raises a Child: Towards Improving Generalization for Large Language Model Fine-tuning. [PDF] [code] [Report]
Runxin Xu*, Fuli Luo*, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang
EMNLP 2021
Document-level Event Extraction via Heterogeneous Graph-based Interaction Model with a Tracker. [PDF] [code] [talk]
Runxin Xu, Tianyu Liu, Lei Li, Baobao Chang
ACL 2021
Double Graph Based Reasoning for Document-level Relation Extraction. [PDF] [code]
Shuang Zeng*, Runxin Xu*, Baobao Chang, Lei Li
EMNLP 2020
For a full list of publications, please visit my Google Scholar profile.
Honors & Awards
- National Scholarship. 2018, 2019, 2021.
- Huatai Securities Technology Scholarship (华泰证券科技奖学金). 2022.
- Outstanding Graduate of Shanghai Jiao Tong University. 2020.
- A-class Scholarship at Shanghai Jiao Tong University (ranked 1st in the major). 2018.
- B-class Scholarship at Shanghai Jiao Tong University. 2017, 2019.
- Cyrus Tang Scholarship (唐仲英德育奖学金). 2018, 2019.
- Arawana Scholarship (金龙鱼奖学金). 2017.
- Meritorious Winner in the Interdisciplinary Contest in Modeling (top 8% of 11,262 teams). 2018.
Contact Me
- Email: runxinxu AT gmail DOT com
- Links: GitHub, Google Scholar.