Publications

* denotes equal contribution

2026

  1. Can Large Language Models Keep Up? Benchmarking Online Adaptation to Continual Knowledge Streams
    Jiyeon Kim*, Hyunji Lee*, Dylan Zhou*, Sue Hyun Park, Seunghyun Yoon, Trung Bui, Franck Dernoncourt, Sungmin Cha, Minjoon Seo
    Preprint

2025

  1. Understanding and Enhancing Mamba-Transformer Hybrids for Memory Recall and Language Modeling
    Hyunji Lee, Wenhao Yu, Hongming Zhang, Kaixin Ma, Jiyeon Kim, Dong Yu, Minjoon Seo
    EMNLP BabyLM Workshop 2025
  2. Video Parallel Scaling: Aggregating Diverse Frame Subsets for VideoLLMs
    Hyungjin Chung, Hyelin Nam, Jiyeon Kim, Hyojun Go, Byeongjun Park, Junho Kim, Joonseok Lee, Seongsu Ha, Byung-Hoon Kim
    CVPR 2026 Findings
  3. Latent Reasoning via Sentence Embedding Prediction
    Hyeonbin Hwang*, Byeongguk Jeon*, Seungone Kim, Jiyeon Kim, Hoyeon Chang, Sohee Yang, Seungpil Won, Dohaeng Lee, Youbin Ahn, Minjoon Seo
    Oral, RAM 2: Reasoning, Attention & Memory @COLM 2025 Workshop
  4. Knowledge Entropy Decay during Language Model Pretraining Hinders New Knowledge Acquisition
    Jiyeon Kim*, Hyunji Lee*, Hyowon Cho, Joel Jang, Hyeonbin Hwang, Seungpil Won, Youbin Ahn, Dohaeng Lee, Minjoon Seo
    ICLR 2025 Oral
    Best Paper, Towards Knowledgeable Foundation Models @AAAI 2025 Workshop
  5. How Does Vision-Language Adaptation Impact the Safety of Vision Language Models?
    Seongyun Lee*, Geewook Kim*, Jiyeon Kim*, Hyunji Lee, Hoyeon Chang, Sue Hyun Park, Minjoon Seo
    ICLR 2025

2024

  1. ListT5: Listwise Reranking with Fusion-in-Decoder Improves Zero-shot Retrieval
    Soyoung Yoon, Eunbi Choi, Jiyeon Kim, Hyeongu Yun, Yireun Kim, Seung-won Hwang
    ACL 2024 Oral