About Me

Welcome to my homepage! I'm Lang Lin, a first-year PhD student at UT Austin, supervised by Prof. Mingyuan Zhou and working in the field of generative models. Before joining UT, I earned my bachelor's degree at Nanjing University (NJU), majoring in computer science within the National Elite Program.

I was a research intern at OpenGVLab, affiliated with the Shanghai AI Lab, where I worked on the InternVideo series and was fortunate to be mentored by Dr. Yi Wang. I also had the privilege of working at UIUC under the supervision of Prof. Yu-Xiong Wang on multimodal LLMs. In addition, I was an active member of the NJU ACM/ICPC team.

My research interests lie broadly in generative models and large language models, with a particular focus on diffusion language models and multi-modal generation.

I'm actively looking for 2026 summer research internship opportunities in both the USA and China, with a particular focus on diffusion language models and LLMs. If you are interested, please feel free to reach out via email! You can download my CV here.

Publications

(* denotes equal contribution)

  • GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmentation
    Lang Lin*, Xueyang Yu*, Ziqi Pang*, Yu-Xiong Wang
    CVPR, 2025. [Project Page]
    We propose a simple yet effective MLLM for language-instructed video segmentation. It emphasizes global-local video understanding and achieves SOTA performance on multiple benchmarks.
  • InternVideo-Next: Towards General Video Foundation Models without Video-Text Supervision
    Chenting Wang, Yuhan Zhu, Yicheng Xu, Jiange Yang, Lang Lin, Ziang Yan, et al.
    CVPR, 2026. [Paper Link]
    We propose an Encoder-Predictor-Decoder framework that disentangles the predictor as a latent world model from reconstruction, using a conditional diffusion decoder to bridge pixel-level fidelity with semantic abstraction. Trained on unlabeled videos without any text supervision, it achieves SOTA results across video understanding benchmarks.
  • A Fragile Guardrail: Diffusion LLM’s Safety Blessing and Its Failure Mode
    Zeyuan He*, Yupeng Chen*, Lang Lin*, Yihan Wang, Shenxu Chang, Eric Sommerlade, et al.
    Under Review. [Paper Link]
    We investigate the mechanism behind the safety blessing of D-LLMs and further analyze the effects, as well as the scenarios in which context-nesting attacks cause this mechanism to fail.

Education

University of Texas at Austin
2025.09 - 2030.07 (Expected)

Ph.D. in Statistics


Nanjing University
2021.09 - 2025.06

B.S. in Computer Science, National Elite Program
GPA: 91.6 / 100.0 (4.58/5.00)
Ranking: 2/229


Awards

  • Dean’s Fellowship at UT Austin, 2025
  • National Elite Scholarship (Top 0.2% nationwide), 2023
  • International Collegiate Programming Contest (ICPC), Silver Medal, 2023
  • China Collegiate Programming Contest (CCPC), Silver Medal, 2022, 2023
  • Fundamental Discipline Special Scholarship at NJU (Ranked 1/147), 2022, 2024

Services

  • Teaching Assistant, STA 235 Data Science for Business Applications at UT Austin, Fall 2025
  • Teaching Assistant, Problem Solving at NJU, Fall 2023, Spring 2024, Fall 2024
  • Teaching Assistant, Introduction to Computer Systems at NJU, Fall 2023