Peiyuan Zhang (张培源)
<aside> Github
</aside>
<aside> Google Scholar
</aside>
<aside> Twitter
</aside>
<aside> 📧 Email
</aside>
<aside> 💬 WeChat
</aside>
I’m Peiyuan (Perry) Zhang, a PhD student in Computer Science at UC San Diego, advised by Prof. Hao Zhang. My research focuses on video generation, especially efficient attention and long-context models.
Previously, I worked on LLMs and VLMs at SUTD and NTU, and interned at ByteDance Seed. I enjoy building scalable systems and exploring new ideas at the intersection of ML, vision, and efficiency. I believe great ML scientists are, fundamentally, exceptional software engineers.
(*: equal contribution)
Faster Video Diffusion with Trainable Sparse Attention
Peiyuan Zhang*, Yongqi Chen*, Haofeng Huang*, Zhengzhong Liu, Ion Stoica, Eric Xing, Hao Zhang
arXiv preprint. [paper]
Fast Video Generation with Sliding Tile Attention
Peiyuan Zhang, Yongqi Chen, Runlong Su, Hangliang Ding, Ion Stoica, Zhengzhong Liu, Hao Zhang
ICML 2025. [paper]
Long Context Transfer from Language to Vision
Peiyuan Zhang*, Kaichen Zhang*, Bo Li*, Guangtao Zeng, Jingkang Yang, Yuanhan Zhang, Ziyue Wang, Haoran Tan, Chunyuan Li, Ziwei Liu
TMLR. [paper]
One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning
Guangtao Zeng*, Peiyuan Zhang*, Wei Lu
ACL 2023 Long Paper. [paper]
Better Few-Shot Relation Extraction with Label Prompt Dropout
Peiyuan Zhang, Wei Lu
EMNLP 2022 Long Paper. [paper]
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
A lightweight framework for accelerating large video diffusion models.
One-for-all LMMs evaluation package.
LM long context training made simple.
UC San Diego, 09/2024–Present
PhD Student, with Prof. Hao Zhang
ByteDance Seed, San Jose, 06/2025–09/2025
Research Intern, with Xiaonan Nie @ Seedance Team
Nanyang Technological University, 10/2023–08/2024
Research Assistant, with Prof. Ziwei Liu
Singapore University of Technology and Design, 10/2022–10/2023
Research Assistant, with Prof. Wei Lu
Agency for Science, Technology and Research (A*STAR), Singapore, 05/2020–09/2020
Research Intern
Acknowledgments: The template of this personal website is shamelessly borrowed from here.