Abstract
ViBT vs Conditional DiT
We introduce Vision Bridge Transformer (ViBT), a large-scale instantiation of Brownian Bridge Models designed for conditional generation. Unlike traditional diffusion models that transform noise into data, Bridge Models directly model the trajectory between inputs and outputs, creating an efficient data-to-data translation paradigm. By scaling these models to 20B and 1.3B parameters, we demonstrate their effectiveness for image and video translation tasks. To support this scale, we adopt a Transformer architecture and propose a variance-stabilized velocity-matching objective for robust training. Together, these advances highlight the power of scaling Bridge Models for instruction-based image editing and complex video translation.
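The data-to-data formulation can be sketched in a few lines. Below is a minimal illustration of a Brownian bridge pinned at the condition x0 and the target x1, together with a plain velocity-matching loss whose target is the displacement x1 − x0. This is the textbook bridge marginal and a simplified objective; the exact parameterization and the variance stabilization used by ViBT are not shown here, and the function names are hypothetical.

```python
import numpy as np

def brownian_bridge_sample(x0, x1, t, sigma=1.0, rng=None):
    """Sample x_t from a Brownian bridge pinned at x0 (t=0) and x1 (t=1).

    Standard bridge marginal (assumed for illustration):
        x_t = (1 - t) * x0 + t * x1 + sigma * sqrt(t * (1 - t)) * eps
    """
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal(x0.shape)
    mean = (1 - t) * x0 + t * x1
    std = sigma * np.sqrt(t * (1 - t))
    return mean + std * eps

def velocity_matching_loss(v_pred, x0, x1):
    """MSE against the bridge's linear-backbone velocity target.

    For the linear interpolant component of the bridge, the conditional
    velocity E[dx_t/dt | x0, x1] is the constant displacement x1 - x0.
    """
    return np.mean((v_pred - (x1 - x0)) ** 2)
```

At the endpoints the noise term vanishes, so the trajectory is exactly pinned to the input image at t = 0 and the output image at t = 1; training regresses the model's predicted velocity toward x1 − x0 at randomly sampled t.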
Translation Progress
Image Examples
Interactive sliders: drag t from 0 to 1 to view the bridge state xt between the source x0 and the target x1.
Video Examples
Faster Speed
Removing the conditional tokens lets our bridge approach run up to 4× faster than a conditional DiT.
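One way to see where a speedup of this order can come from: a conditional DiT typically concatenates condition tokens to the target sequence, whereas the bridge consumes the condition as its starting state. Since self-attention cost grows quadratically with sequence length, halving the token count quarters the attention FLOPs. A back-of-the-envelope sketch (the token count and width below are hypothetical, and constants are omitted):

```python
def attention_flops(seq_len, dim):
    # Rough QK^T + AV cost for one self-attention layer (constants omitted).
    return 2 * seq_len ** 2 * dim

n_tokens, dim = 1024, 2048                       # hypothetical values
cond_dit = attention_flops(2 * n_tokens, dim)    # condition + target tokens
bridge = attention_flops(n_tokens, dim)          # bridge: target tokens only
print(cond_dit / bridge)                         # → 4.0
```

The end-to-end speedup also depends on the non-attention layers, so the quadratic argument gives an upper bound rather than a guaranteed 4×.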
Applications
Image Stylization
Source Images
Stylized
Click the buttons to switch the style of the images.
Image Editing
Source Images
Edited
Click an edit to switch the images below.
Video Stylization
Source Videos
Stylized
Click a style to switch the videos below.
Video Frame Interpolation
Source Videos · 15 FPS
Interpolated · 60 FPS
BibTeX
@article{tan2025vision,
title={Vision Bridge Transformer at Scale},
author={Tan, Zhenxiong and Wang, Zeqing and Yang, Xingyi and Liu, Songhua and Wang, Xinchao},
journal={arXiv preprint arXiv:2511.23199},
year={2025}
}