What we do

Our research focuses on multimodal intelligence and perception systems.

Multimodal AI

Research on integrating vision, language, and other modalities for intelligent perception systems.

Physical AI

Developing Vision-Language-Action (VLA) models for autonomous robots.

Computer Vision

Advanced research in semantic segmentation, object detection, and scene understanding.

Medical AI / AI for Science

Applying AI and deep learning to medical imaging, diagnosis, and healthcare applications. Delving into genomics and protein design with AI.

Top-Tier Publications

Publishing cutting-edge research at top-tier conferences including CVPR, ICCV, ECCV, and NeurIPS.

Collaborative Research

Working with international collaborators and industry partners on innovative AI solutions.

Latest News

February 23, 2026 admission

New students joined MIP Lab (Spring 2026)

We welcome a new MS student (Sangjin Lee) and new undergraduate students to MIP Lab.

February 21, 2026 publication

A paper accepted to CVPR 2026

A paper from MIP Lab has been accepted to CVPR 2026: "Delta velocity rectified flow for text-to-image editing" by Gaspard Beaudouin, Minghan Li, Jaeyeon Kim, Sung-Hoon Yoon*, and Mengyu Wang*.

December 30, 2025 grant

Selected as Beta Service Participant for Advanced GPU Utilization Support Program

MIP Lab has been selected as a beta service participant for the "Advanced GPU Utilization Support Program".

December 18, 2025 service

MIP Lab has launched 🚀

Multimodal Intelligence and Perception (MIP) Lab has launched at DGIST EECS.