
Minsung Kim

Hi! I am a Ph.D. student at the Machine Intelligence Lab at Seoul National University, advised by Prof. Kyomin Jung.

My research is driven by the goal of building faithful and trustworthy AI. I explore how language models acquire, store, and utilize knowledge, and I investigate methods to ensure their behavior aligns with our intentions. Recently, I have become particularly interested in making the behavior of agentic AI systems easier for humans to observe and control.

News

Mar 03, 2026 Started serving as 🇰🇷 Technical Research Personnel for mandatory military service.
Jan 22, 2026 Two papers accepted to ICLR 2026: Bilinear Relational Structure and Erase or Hide!
Sep 20, 2025 Our FaithUn paper is accepted to EMNLP 2025!
Mar 14, 2025 Two papers accepted to NAACL 2025: VLind-Bench (Findings) and Generating Diverse Hypotheses (Oral)!
Oct 01, 2024 Started research internship at Max Planck Institute for Security and Privacy (MPI-SP), Germany.

Education

M.S./Ph.D.
Electrical and Computer Engineering
2023 Mar - Present
B.S.
Electrical and Computer Engineering
2018 Mar - 2023 Feb

Experience

Research Intern
Max Planck Institute for Security and Privacy (MPI-SP)
Host: Meeyoung Cha
2024 Oct - 2024 Dec

Publications

  1. How Training Data Shapes the Use of Parametric and In-Context Knowledge in Language Models
    Minsung Kim, Dong-Kyum Kim, Jea Kwon, Nakyeong Yang, Kyomin Jung, and Meeyoung Cha
    arXiv preprint arXiv:2510.02370
  2. Rethinking Post-Unlearning Behavior of Large Vision-Language Models
    Minsung Kim, Nakyeong Yang, and Kyomin Jung
    arXiv preprint arXiv:2506.02541
  3. Erase or Hide? Suppressing Spurious Unlearning Neurons for Robust Unlearning
    Nakyeong Yang, Dong-Kyum Kim, Jea Kwon, Minsung Kim, Kyomin Jung, and Meeyoung Cha
    ICLR 2026
  4. Bilinear Relational Structure Fixes Reversal Curse and Enables Consistent Model Editing
    Dong-Kyum Kim, Minsung Kim, Jea Kwon, Nakyeong Yang, and Meeyoung Cha
    ICLR 2026
  5. FaithUn: Toward Faithful Forgetting in Language Models by Investigating the Interconnectedness of Knowledge
    Nakyeong Yang, Minsung Kim, Seunghyun Yoon, Joongbo Shin, and Kyomin Jung
    EMNLP 2025
  6. Generating Diverse Hypotheses for Inductive Reasoning
    Kang-il Lee, Hyukhun Koh, Dongryeol Lee, Seunghyun Yoon, Minsung Kim, and Kyomin Jung
    NAACL 2025 Oral
  7. VLind-Bench: Measuring Language Priors in Large Vision-Language Models
    Kang-il Lee, Minbeom Kim, Seunghyun Yoon, Minsung Kim, Dongryeol Lee, Hyukhun Koh, and Kyomin Jung
    NAACL 2025 Findings
  8. MVMR: A New Framework for Evaluating Faithfulness of Video Moment Retrieval against Multiple Distractors
    Nakyeong Yang, Minsung Kim, Seunghyun Yoon, Joongbo Shin, and Kyomin Jung
    CIKM 2024
  9. Fine-grained Gender Control in Machine Translation with Large Language Models
    Minwoo Lee, Hyukhun Koh, Minsung Kim, and Kyomin Jung
    NAACL 2024
  10. Target-Agnostic Gender-Aware Contrastive Learning for Mitigating Bias in Multilingual Machine Translation
    Minwoo Lee, Hyukhun Koh, Kang-il Lee, Dongdong Zhang, Minsung Kim, and Kyomin Jung
    EMNLP 2023