I am a PhD student at KAIST AI, advised by Professor Jinwoo Shin. Previously, I interned at Google DeepMind, working with Yinxiao Li and Feng Yang. I also worked closely with Kihyuk Sohn through the Google University Relations program.
My research interests lie in probabilistic machine learning, generative modeling, and representation learning, with a focus on their applications to visual understanding and generation. Recently, I have been particularly interested in generative modeling for visual synthesis — including image, video, and 3D generation — as a step toward world modeling. I am also exploring applications in robotics, such as vision-language-action models. My previous work has focused on the fundamentals of training diffusion and flow models, as well as their adaptation to tasks such as 3D generation, personalization, and preference-based fine-tuning.
I plan to graduate in 2026 and am seeking industry research positions. Please feel free to reach out if you’re interested in potential collaborations or opportunities to work together.
Email: kyungmnlee (at) kaist (dot) ac (dot) kr | Curriculum Vitae
📝 Publications
[Preprint] Decoupled MeanFlow: Turning Flow Models into Flow Maps for Accelerated Sampling, Kyungmin Lee, Sihyun Yu, Jinwoo Shin.
[Preprint] Dual-Stream Diffusion for World-Model Augmented Vision-Language-Action Model, John Won, Kyungmin Lee, Huiwon Jang, Dongyoung Kim, Jinwoo Shin.
[Preprint] Contrastive Representation Regularization for Vision-Language-Action Models, Taeyoung Kim, Jimin Lee, Myungkoo Koo, Dongyoung Kim, Kyungmin Lee, Changyeon Kim, Younggyo Seo, Jinwoo Shin.
[Preprint] HAMLET: Switch Your Vision-Language-Action Model into a History-Aware Policy, Myungkoo Koo, Daewon Choi, Taeyoung Kim, Kyungmin Lee, Changyeon Kim, Younggyo Seo, Jinwoo Shin. Project page
[Preprint] Enhancing Motion Dynamics of Image-to-Video Models via Adaptive Low-Pass Guidance, Junesuk Choi, Kyungmin Lee, Sihyun Yu, Yisol Choi, Jinwoo Shin, Kimin Lee. Project page
[IJCAI 2025] StarFT: Robust Fine-tuning of Zero-shot Models via Spuriosity Alignment, Younghyun Kim*, Jongheon Jeong*, Sangkyung Kwak, Kyungmin Lee, Juho Lee, Jinwoo Shin.
[CVPR 2025] Calibrated Multi-Preference Optimization for Aligning Diffusion Models, Kyungmin Lee, Xiaohang Li, Qifei Wang, Junfeng He, Junjie Ke, Ming-Hsuan Yang, Irfan Essa, Jinwoo Shin, Feng Yang, Yinxiao Li. Project page
[ICLR 2025] DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing, Junesuk Choi, Kyungmin Lee, Jongheon Jeong, Saining Xie, Jinwoo Shin, Kimin Lee.
[NeurIPS 2024] Direct Consistency Optimization for Robust Customization of Text-to-Image Diffusion Models, Kyungmin Lee, Sangkyung Kwak, Kihyuk Sohn, Jinwoo Shin. Project page
[ECCV 2024] Improving Diffusion Models for Authentic Virtual Try-on in the Wild, Yisol Choi, Sangkyung Kwak, Kyungmin Lee, Hyungwon Choi, Jinwoo Shin. Project page
[CVPR 2024] (Highlight) Discovering and Mitigating Visual Biases through Keyword Explanation, Younghyun Kim, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jaeho Lee, Jinwoo Shin.
[ICLR 2024] (Spotlight) DreamFlow: High-quality Text-to-3D Generation by Approximating Probability Flow, Kyungmin Lee, Kihyuk Sohn, Jinwoo Shin. Project page
[NeurIPS 2023 Workshop] Fine-tuning Protein Language Models by Ranking Protein Fitness, Minji Lee, Kyungmin Lee, Jinwoo Shin.
[NeurIPS 2023] Collaborative Score Distillation for Consistent Visual Editing, Subin Kim*, Kyungmin Lee*, Junesuk Choi, Jongheon Jeong, Kihyuk Sohn, Jinwoo Shin. Project page
[NeurIPS 2023] S-CLIP: Semi-supervised Vision-Language Pre-training using Few Specialist Captions, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jinwoo Shin.
[NeurIPS 2023] Slimmed Asymmetrical Contrastive Learning and Cross Distillation for Lightweight Model Training, Jian Meng, Li Yang, Kyungmin Lee, Jinwoo Shin, Deliang Fan, Jae-sun Seo.
[ICLR 2023] (Spotlight) STUNT: Few-shot Tabular Learning with Self-generated Tasks from Unlabeled Tables, Jaehyun Nam, Jihoon Tack, Kyungmin Lee, Hankook Lee, Jinwoo Shin.
[NeurIPS 2022] RényiCL: Contrastive Representation Learning with Skew Rényi Divergence, Kyungmin Lee, Jinwoo Shin.
[ECCV 2022] GCISG: Guided Causal Invariant Learning for Improved Syn-to-Real Generalization, Gilhyun Nam, Gyeongjae Choi, Kyungmin Lee.
[ICLR 2022] Representation Distillation by Prototypical Contrastive Predictive Coding, Kyungmin Lee.
[IJCAI 2022] Pseudo-spherical Knowledge Distillation, Kyungmin Lee, Hyeongkeun Lee.
[ICLR 2021 Workshop] Provable Defense by Denoised Smoothing with Learned Score Function, Kyungmin Lee.
💻 Work Experience
- 2024.07 - 2024.12, Google DeepMind, Student Researcher, Mountain View, CA, US.
- 2023.02 - 2024.03, Google Research, University Relation Program, Remote.
- 2019.04 - 2022.06, Agency for Defense Development, Researcher, Daejeon, Korea.
🤝 Academic Services
- Conference reviewer: NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, WACV, AISTATS
- Journal reviewer: TMLR, TPAMI
📖 Education
- 2022.09 - 2026.08, KAIST, Ph.D. in Artificial Intelligence (expected).
- 2015.03 - 2019.02, KAIST, B.S. in Mathematics, Electrical and Computer Engineering (double major).