Flow Matching for Conditional Text Generation in a Few Sampling Steps

LMU Munich, University of Amsterdam, A*STAR
EACL 2024


Abstract

Diffusion models are a promising tool for high-quality text generation. However, current models suffer from several drawbacks, including slow sampling, sensitivity to the noise schedule, and a misalignment between the training and sampling stages. In this paper, we introduce FlowSeq, which sidesteps these drawbacks by leveraging flow matching for conditional text generation. FlowSeq can generate text in a few sampling steps by training with a novel anchor loss, removing the need for the expensive noise-schedule tuning prevalent in diffusion models. We extensively evaluate our method and show competitive performance on tasks such as question generation, open-domain dialogue, and paraphrasing.
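
For readers new to flow matching, the sketch below illustrates the standard conditional flow matching objective (a linear, rectified-flow probability path) and the few-step Euler sampling it enables, written in PyTorch. This is a generic sketch under simplifying assumptions, not the FlowSeq implementation: the model signature and the tensors x1 (data embeddings) and cond (condition embeddings) are hypothetical placeholders, and the paper's anchor loss is not reproduced here.

import torch

def flow_matching_loss(model, x1, cond):
    # Generic conditional flow matching loss with a linear path
    # x_t = (1 - t) * x0 + t * x1. NOT the paper's anchor loss.
    x0 = torch.randn_like(x1)                      # noise endpoint
    t = torch.rand(x1.size(0), device=x1.device)   # uniform time in [0, 1]
    t_ = t.view(-1, *([1] * (x1.dim() - 1)))       # broadcast over non-batch dims
    xt = (1 - t_) * x0 + t_ * x1                   # point on the linear path
    v_target = x1 - x0                             # constant target velocity
    v_pred = model(xt, t, cond)                    # hypothetical velocity network
    return torch.mean((v_pred - v_target) ** 2)

@torch.no_grad()
def sample(model, cond, shape, steps=4):
    # Few-step Euler integration of the learned ODE from noise (t=0) to data (t=1).
    x = torch.randn(shape, device=cond.device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), i * dt, device=cond.device)
        x = x + dt * model(x, t, cond)
    return x

Because the velocity field of a linear path is close to constant along each trajectory, a handful of Euler steps (e.g. steps=4) can already yield usable samples, which is what makes few-step generation attractive compared to the long denoising chains of diffusion models.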


Video Presentation


Poster

BibTeX

@inproceedings{HuEACL2024,
  title     = {Flow Matching for Conditional Text Generation in a Few Sampling Steps},
  author    = {Vincent Tao Hu and Di Wu and Yuki M. Asano and Pascal Mettes and Basura Fernando and Björn Ommer and Cees G. M. Snoek},
  booktitle = {EACL},
  year      = {2024}
}