Source: @Gradio and @junyanz89
This is a collaborative work with @GauravTParmar (lead author), Taesung Park and Srinivasa Narasimhan.
This conditional GAN approach can turn a text-to-image model (such as SD-Turbo) into a paired or unpaired image translation model that runs in a single step (0.11 seconds on an A100, 0.29 seconds on an A6000). Try our code and the @Gradio demo.
Paper: http://arxiv.org/abs/2403.12036
Code: http://github.com/GaParmar/img2img-turbo
Demo: http://huggingface.co/spaces/gparmar/img2img-turbo-sketch
This work shows that pre-trained one-step diffusion models can be easily adapted to the conditional GAN framework for downstream image editing and synthesis tasks.
The method can be applied to various image-to-image translation tasks, such as day-to-night conversion and adding/removing weather effects such as fog, snow, and rain.
Researchers @GauravTParmar and @junyanz89 show impressive results on unpaired scene translation tasks such as day-to-night conversion and weather effects.
📜 One-Step Image Translation with Text-to-Image Models
It introduces a way to adapt single-step diffusion models to new tasks and domains through adversarial learning.
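The adversarial learning mentioned here follows the standard GAN objective: a discriminator is trained to separate real target-domain images from the model's translations, while the generator is trained to fool it. A minimal toy sketch of these two losses (NumPy only, illustrative values, not the paper's actual training code):

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy over sigmoid outputs in (0, 1).
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

# Hypothetical discriminator scores for real target-domain images
# and for images translated by the one-step generator.
d_real = np.array([0.9, 0.8])
d_fake = np.array([0.2, 0.1])

# Discriminator loss: push real scores toward 1, fake scores toward 0.
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

# Generator (adversarial) loss: push the discriminator's fake scores toward 1.
g_loss = bce(d_fake, np.ones_like(d_fake))

print(d_loss, g_loss)
```

A well-trained discriminator drives `d_loss` down, while the generator's updates raise `d_fake` and thereby lower `g_loss`; in the paper this adversarial signal is what adapts the pre-trained one-step model to the new domain.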
A demo is available on Spaces. Check it out! 👀
Video:
