I am a Ph.D. student at KAIST AI, advised by Professor Jinwoo Shin. Previously, I interned at Google DeepMind, working with Yinxiao Li and Feng Yang. I also worked closely with Kihyuk Sohn through the Google University Relations program.
My research interests span probabilistic machine learning, generative modeling, representation learning, and their applications. Recently, I have been interested in visual generative models and their adaptation, including 3D generative modeling, personalization, and preference learning.
Email: kyungmnlee (at) kaist (dot) ac (dot) kr | Curriculum Vitae
📝 Publications
CVPR 2025
Calibrated Multi-Preference Optimization for Aligning Diffusion Models, Kyungmin Lee, Xiaohang Li, Qifei Wang, Junfeng He, Ming-Hsuan Yang, Irfan Essa, Jinwoo Shin, Feng Yang and Yinxiao Li.
ICLR 2025
DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing, Junesuk Choi, Kyungmin Lee, Jongheon Jeong, Saining Xie, Jinwoo Shin and Kimin Lee.
NeurIPS 2024
Direct Consistency Optimization for Robust Customization of Text-to-Image Diffusion Models, Kyungmin Lee, Sangkyung Kwak, Kihyuk Sohn and Jinwoo Shin. Project page
ECCV 2024
Improving Diffusion Models for Authentic Virtual Try-on in the Wild, Yisol Choi, Sangkyung Kwak, Kyungmin Lee, Hyungwon Choi and Jinwoo Shin. Project page
CVPR 2024
(Highlight) Discovering and Mitigating Visual Biases through Keyword Explanation, Younghyun Kim, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jaeho Lee and Jinwoo Shin.
ICLR 2024
(Spotlight) DreamFlow: High-quality Text-to-3D Generation by Approximating Probability Flow, Kyungmin Lee, Kihyuk Sohn and Jinwoo Shin. Project page
NeurIPS 2023 Workshop
Fine-tuning Protein Language Models by Ranking Protein Fitness, Minji Lee, Kyungmin Lee and Jinwoo Shin.
NeurIPS 2023
Collaborative Score Distillation for Consistent Visual Editing, Subin Kim*, Kyungmin Lee*, Junesuk Choi, Jongheon Jeong, Kihyuk Sohn and Jinwoo Shin. Project page
NeurIPS 2023
S-CLIP: Semi-supervised Vision-Language Pre-training using Few Specialist Captions, Sangwoo Mo, Minkyu Kim, Kyungmin Lee and Jinwoo Shin.
NeurIPS 2023
Slimmed Asymmetrical Contrastive Learning and Cross Distillation for Lightweight Model Training, Jian Meng, Li Yang, Kyungmin Lee, Jinwoo Shin, Deliang Fan and Jae-sun Seo.
ICLR 2023
(Spotlight) STUNT: Few-shot Tabular Learning with Self-generated Tasks from Unlabeled Tables, Jaehyun Nam, Jihoon Tack, Kyungmin Lee, Hankook Lee and Jinwoo Shin.
NeurIPS 2022
RényiCL: Contrastive Representation Learning with Skew Rényi Divergence, Kyungmin Lee and Jinwoo Shin.
ECCV 2022
GCISG: Guided Causal Invariant Learning for Improved Syn-to-Real Generalization, Gilhyun Nam, Gyeongjae Choi and Kyungmin Lee.
ICLR 2022
Representation Distillation by Prototypical Contrastive Predictive Coding, Kyungmin Lee.
IJCAI 2022
Pseudo-spherical Knowledge Distillation, Kyungmin Lee and Hyeongkeun Lee.
ICLR 2021 Workshop
Provable Defense by Denoised Smoothing with Learned Score Function, Kyungmin Lee.
💻 Work Experience
- 2024.07 - 2024.12, Google DeepMind, Student Researcher, Mountain View, CA, US.
- 2023.02 - 2024.03, Google Research, University Relations Program, Remote.
- 2019.04 - 2022.06, Agency for Defense Development, Researcher, Daejeon, Korea.
🤝 Academic Services
- Conference reviewer: NeurIPS, ICML, ICLR, CVPR, ICCV, ECCV, WACV, AISTATS
- Journal reviewer: TMLR
📖 Education
- 2022.09 - 2026.08, KAIST, Ph.D. in Artificial Intelligence (expected).
- 2015.03 - 2019.02, KAIST, B.S. in Mathematics, Electrical and Computer Engineering (double major).