Tackling the generative learning trilemma with denoising diffusion GANs Z Xiao, K Kreis, A Vahdat International Conference on Learning Representations, 2022 | 546 | 2022 |
Likelihood regret: An out-of-distribution detection score for variational auto-encoder Z Xiao, Q Yan, Y Amit Advances in Neural Information Processing Systems, 2020 | 228 | 2020 |
VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models Z Xiao, K Kreis, J Kautz, A Vahdat International Conference on Learning Representations, 2021 | 128 | 2021 |
UFOGen: You Forward Once large scale text-to-image generation via diffusion GANs Y Xu, Y Zhao, Z Xiao, T Hou Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024 | 57 | 2024 |
Generative Latent Flow Z Xiao, Q Yan, Y Amit arXiv preprint arXiv:1905.10485, 2019 | 54 | 2019 |
MobileDiffusion: Subsecond text-to-image generation on mobile devices Y Zhao, Y Xu, Z Xiao, T Hou arXiv preprint arXiv:2311.16567, 2023 | 38 | 2023 |
Do We Really Need to Learn Representations from In-domain Data for Outlier Detection? Z Xiao, Q Yan, Y Amit Uncertainty and Robustness in Deep Learning, ICML workshop, 2021 | 26 | 2021 |
ControlVAE: Tuning, analytical properties, and performance analysis H Shao, Z Xiao, S Yao, D Sun, A Zhang, S Liu, T Wang, J Li, T Abdelzaher IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (12), 9285 …, 2021 | 22 | 2021 |
DreamInpainter: Text-guided subject-driven image inpainting with diffusion models S Xie, Y Zhao, Z Xiao, KCK Chan, Y Li, Y Xu, K Zhang, T Hou arXiv preprint arXiv:2312.03771, 2023 | 13 | 2023 |
Adaptive Multi-stage Density Ratio Estimation for Learning Latent Space Energy-based Model Z Xiao, T Han Advances in Neural Information Processing Systems, 2022 | 12 | 2022 |
A method to model conditional distributions with normalizing flows Z Xiao, Q Yan, Y Amit arXiv preprint arXiv:1911.02052, 2019 | 10 | 2019 |
Imagen 3 J Baldridge, J Bauer, M Bhutani, N Brichtova, A Bunner, K Chan, Y Chen, ... arXiv preprint arXiv:2408.07009, 2024 | 9 | 2024 |
EM Distillation for One-step Diffusion Models S Xie, Z Xiao, DP Kingma, T Hou, YN Wu, KP Murphy, T Salimans, ... arXiv preprint arXiv:2405.16852, 2024 | 7 | 2024 |
Exponential tilting of generative models: Improving sample quality by training and sampling from latent energy Z Xiao, Q Yan, Y Amit ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit …, 2020 | 7 | 2020 |
Two Symmetrized Coordinate Descent Methods Can Be O(n^2) Times Slower Than the Randomized Version P Xiao, Z Xiao, R Sun SIAM Journal on Optimization 31 (4), 2726-2752, 2021 | 6* | 2021 |
HiFi Tuner: High-fidelity subject-driven fine-tuning for diffusion models Z Wang, W Wei, Y Zhao, Z Xiao, M Hasegawa-Johnson, H Shi, T Hou arXiv preprint arXiv:2312.00079, 2023 | 5 | 2023 |
EBMs Trained with Maximum Likelihood are Generator Models Trained with a Self-adversarial Loss Z Xiao, Q Yan, Y Amit Energy Based Models Workshop, ICLR, 2021 | 3 | 2021 |
Energy-based variational autoencoders A Vahdat, K Kreis, Z Xiao, J Kautz US Patent App. 17/357,728, 2022 | 2 | 2022 |
Denoising diffusion generative adversarial networks Z Xiao, K Kreis, A Vahdat US Patent App. 17/957,143, 2023 | 1 | 2023 |
Training energy-based variational autoencoders A Vahdat, K Kreis, Z Xiao, J Kautz US Patent App. 17/357,738, 2022 | | 2022 |