Shortcut Models for Efficient Sampling in Flow-Based Generative Models
Abstract
Flow-based generative models typically require many ODE integration steps to produce high-quality samples. Shortcut models aim to accelerate sampling by replacing this step-by-step integration with a single update, or a small number of updates. Standard samplers follow the instantaneous velocity of the flow at each step, whereas shortcut methods predict a more direct transformation from noise to data, such as an average velocity over an interval or a displacement estimate. Because these approximations capture only part of the underlying dynamics, different shortcut designs can vary substantially in accuracy and stability.
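To make the contrast concrete, the minimal PyTorch sketch below compares a standard multi-step Euler sampler against a shortcut-style few-step update. Here velocity_model(x, t) and shortcut_model(x, t, d) are hypothetical placeholders (the latter conditioned on the step size d, in the spirit of [2] and [3]); this illustrates the idea rather than reproducing any paper's implementation.

import torch

def euler_sample(velocity_model, x0, num_steps=128):
    # Standard multi-step sampler: integrate dx/dt = v(x, t) from noise (t = 0)
    # to data (t = 1), following the instantaneous velocity at each step.
    x, dt = x0, 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        x = x + dt * velocity_model(x, t)
    return x

def shortcut_sample(shortcut_model, x0, num_steps=1):
    # Shortcut-style sampler: the model is conditioned on the step size d and
    # predicts an average velocity over [t, t + d], so one large step is valid.
    x, d = x0, 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((x.shape[0],), i * d, device=x.device)
        x = x + d * shortcut_model(x, t, d)
    return x

With num_steps=1, shortcut_sample produces a sample in a single network evaluation; increasing num_steps trades compute for accuracy, which is the few-step regime this project would study.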
This project will investigate how different shortcut formulations affect the quality of fast sampling in flow-based generative models. The work will analyse continuous-time flows and small pretrained generative models, comparing several choices of velocity or displacement representation for one-step and few-step generation, and will evaluate these methods in terms of approximation accuracy, stability, and their ability to match the behaviour of standard multi-step samplers.
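One simple way to quantify that last criterion, sketched below under the same hypothetical model interfaces (and reusing the two samplers from the sketch above), is to start a few-step shortcut sampler and a fine-grained multi-step reference from identical noise and measure how far apart the resulting samples land. In practice this per-sample comparison would be complemented by distributional metrics such as FID, as in [2] and [3].

import torch

@torch.no_grad()
def sampler_discrepancy(velocity_model, shortcut_model, num_samples=256, dim=2,
                        ref_steps=512, shortcut_steps=1, device="cpu"):
    # Draw shared noise so both samplers solve the same initial-value problem.
    x0 = torch.randn(num_samples, dim, device=device)
    # Fine-grained multi-step reference trajectory endpoint.
    x_ref = euler_sample(velocity_model, x0, num_steps=ref_steps)
    # Few-step shortcut samples from the same starting noise.
    x_fast = shortcut_sample(shortcut_model, x0, num_steps=shortcut_steps)
    err = (x_fast - x_ref).norm(dim=-1)  # per-sample L2 discrepancy
    return err.mean().item(), err.std().item()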
Pre-requisites:
Suitable for those who have taken a course in machine learning. Some familiarity with PyTorch would be beneficial.
References:
[1] Lipman, Yaron, et al. "Flow matching for generative modeling." International Conference on Learning Representations (ICLR), 2023. arXiv:2210.02747.
[2] Frans, Kevin, et al. "One Step Diffusion via Shortcut Models." International Conference on Learning Representations (ICLR), 2025. arXiv:2410.12557.
[3] Geng, Zhengyang, et al. "Mean Flows for One-step Generative Modeling." Advances in Neural Information Processing Systems (NeurIPS), 2025. arXiv:2505.13447.
[4] Shafir et al. "Terminal Velocity Matching." 2025. arXiv:2511.19797. https://lumalabs.ai/blog/engineering/tvm