OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on
Paper: arXiv:2403.01779
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "spawn08/segmentation_model",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]

Our OOTDiffusion GitHub repository
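A portable way to pick the device for the snippet above is to probe the available backends at runtime; this is a sketch that checks CUDA first, then Apple MPS, then falls back to CPU:

```python
import torch

# Pick the best available backend: CUDA GPU, then Apple MPS, then CPU.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(device)
```

The resulting string can then be passed as `device_map` (or used with `pipe.to(device)`) instead of hard-coding "cuda".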
🤗 Try out OOTDiffusion
(Thanks to ZeroGPU for providing A100 GPUs)
[arXiv paper]
Yuhao Xu, Tao Gu, Weifeng Chen, Chengcai Chen
Xiao-i Research
Our model checkpoints trained on VITON-HD (half-body) and Dress Code (full-body) have been released.
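The released checkpoints can be fetched locally with `huggingface_hub`'s `snapshot_download`; the repo id below is an assumption for illustration, substitute the actual checkpoint repository on the Hub:

```python
from huggingface_hub import snapshot_download


def download_ootd_checkpoints(repo_id: str = "levihsu/OOTDiffusion",
                              local_dir: str = "checkpoints") -> str:
    """Download released checkpoints from the Hugging Face Hub.

    The default repo_id is an assumption; replace it with the actual
    checkpoint repository. Returns the local directory containing the files.
    """
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)
```

Calling `download_ootd_checkpoints()` mirrors cloning the repo, but caches files and skips those already present.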
@article{xu2024ootdiffusion,
title={OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on},
author={Xu, Yuhao and Gu, Tao and Chen, Weifeng and Chen, Chengcai},
journal={arXiv preprint arXiv:2403.01779},
year={2024}
}