Tao Hu
Computer Vision & Learning Group
Akademiestr. 7, Munich
Ludwig Maximilian University of Munich
I am a Postdoctoral Research Fellow with Björn Ommer in the Ommer-Lab (Stable Diffusion Lab), focusing on the scalability and generalization ability of diffusion models within the nxtaim project. I completed my PhD at VISLab, University of Amsterdam.
I am recruiting Bachelor's, Master's, and PhD students for supervision in Munich and globally. If you are interested in collaborating or would like to discuss research, feel free to send me an email.
My research focuses on introducing inductive biases into neural networks to achieve data efficiency, for example through few-shot learning and generative models. I am convinced that generative modelling will be the future of discriminative modelling.
Publication | GitHub | CV (updated Nov. 2023) |
LinkedIn | Research Note | Chat with me | WeChat |
news
Dec 10, 2024 | [MASK] is All You Need is on arXiv. Two papers, including DepthFM, accepted at AAAI 2025. |
---|---|
Dec 08, 2024 | Distillation of Diffusion Features for Semantic Correspondence accepted at WACV 2025. Scaling Image Tokenizers with Grouped Spherical Quantization is on arXiv. |
Dec 06, 2024 | NeurIPS 2024 Excellent Reviewer. |
Jul 01, 2024 | Two papers (including ZigMa) accepted at ECCV! ZigMa: DiT-style Mamba-based diffusion models was also accepted as an oral at the ICML Workshop on Long Context Foundation Models (LCFM). |
Jun 03, 2024 | Gave talks at Adobe Research and A-Star to introduce our work on ZigMa. |