
Few-step Distillation for Flow Matching Generative Models

Supervisor

Suitable for

MSc in Advanced Computer Science
Computer Science, Part C

Abstract

Recent years have seen rapid progress in image, video, and multimodal generation. Many of these systems are built on diffusion or flow-based generative frameworks, and Flow Matching [1] has emerged as a particularly promising approach thanks to its strong sample quality and simple, simulation-free training objective. A key practical limitation, however, is that Flow Matching models still require multi-step ODE integration at inference time, so fast sampling remains an active challenge.
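
To make the inference cost concrete, the following sketch shows the standard sampling loop for a trained Flow Matching model: Gaussian noise is drawn at t = 0 and the learned velocity field is integrated to t = 1 with a fixed-step Euler solver. This is a minimal illustrative sketch in PyTorch, not code from any of the referenced papers; the network velocity_net(x, t), the time convention, and the step count are assumptions made for the example.

    import torch

    @torch.no_grad()
    def sample(velocity_net, shape, num_steps=50, device="cpu"):
        # Start from Gaussian noise at t = 0 and integrate the learned ODE
        # dx/dt = v(x, t) forward to t = 1 with fixed-size Euler steps.
        x = torch.randn(shape, device=device)
        dt = 1.0 / num_steps
        for i in range(num_steps):
            t = torch.full((shape[0],), i * dt, device=device)
            x = x + dt * velocity_net(x, t)  # one network evaluation per step
        return x

Each step costs a full forward pass through the network, so with typical step counts in the tens or hundreds a single sample is expensive; few-step distillation aims to cut this to one or a handful of evaluations.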

Distillation offers a way to accelerate generation by transferring the behaviour of a pretrained multi-step model to a more efficient few-step or one-step generator, and several alternative formulations have been proposed for this purpose [2, 3, 4]. This project will investigate a distillation approach for Flow Matching models and evaluate its effectiveness against standard few-step baselines in terms of sample quality, training stability, and inference efficiency.
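
As a point of reference for what such a distillation objective can look like, here is one deliberately simple formulation: the student is trained to reproduce, in a single forward pass, the sample that the teacher's multi-step ODE solve produces from the same noise. This is a hedged sketch of a regression baseline, not the specific method of [2], [3], or [4]; the names student, teacher_sampler, and the MSE objective are assumptions made for illustration.

    import torch
    import torch.nn.functional as F

    def distill_step(student, teacher_sampler, opt, noise):
        # Teacher: expensive multi-step ODE solve from the given noise
        # (e.g. the Euler sampler sketched above, run with many steps).
        with torch.no_grad():
            target = teacher_sampler(noise)
        # Student: a single cheap forward pass from the same noise.
        pred = student(noise)
        loss = F.mse_loss(pred, target)  # simplest possible matching loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

Plain per-sample regression of this kind is a common baseline but often underperforms; distribution matching distillation [2], shortcut models [3], and mean flows [4] propose alternative objectives, and comparing such formulations is a natural axis for the project.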

Pre-requisites:

Suitable for those who have taken a course in machine learning. Some familiarity with PyTorch would be beneficial.

References:

[1] Lipman, Yaron, et al. "Flow Matching for Generative Modeling." International Conference on Learning Representations (ICLR), 2023. arXiv:2210.02747.

[2] Yin, Tianwei, et al. "One-step Diffusion with Distribution Matching Distillation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. arXiv:2311.18828.

[3] Frans, Kevin, et al. "One Step Diffusion via Shortcut Models." International Conference on Learning Representations (ICLR), 2025. arXiv:2410.12557.

[4] Geng, Zhengyang, et al. "Mean Flows for One-step Generative Modeling." Advances in Neural Information Processing Systems (NeurIPS), 2025. arXiv:2505.13447.