Tae Soo Kim

PhD, School of Computing, KAIST

KIXLAB · HCI+NLP

I am a fourth-year PhD candidate in the School of Computing at KAIST, where I work with Juho Kim as a member of KIXLAB.

My work lies at the intersection of HCI and NLP, focusing on the interactive alignment of AI models. Specifically, I explore how to disentangle natural language into interactive components that empower users to iterate on their intents and audit a model's behavior.
CV · Google Scholar · Twitter/X · LinkedIn · GitHub · Email

Selected Publications


All Publications

* Equal contribution
© 2026 Tae Soo Kim

DiscoverLLM: From Executing Intents to Discovering Them

Preprint 2026
PDF · arXiv
DiscoverLLM teaser
Tae Soo Kim, Yoonjoo Lee, Jaesang Yu, John Joon Young Chung, Juho Kim

DiscoverLLM introduces a framework that trains LLMs to help users form and discover their intents through interaction. By using a novel user simulator that models cognitive states with a progressive hierarchy of intents, the framework trains models to handle ambiguous requests by surfacing relevant options that guide users toward concretizing what they want.

Evalet: Evaluating Large Language Models by Fragmenting Outputs into Functions

(Cond. Accept) CHI 2026
PDF · Website · Code · arXiv
Tae Soo Kim*, Heechan Lee*, Yoonjoo Lee, Joseph Seering, Juho Kim

Evalet addresses the opacity of LLM-based evaluations by proposing functional fragmentation: dissecting model outputs into key text fragments and interpreting the rhetorical function each serves relative to the evaluation criteria. The interactive system visualizes these fragment-level functions across multiple outputs, empowering practitioners to transparently inspect, rate, and compare evaluations at scale.

ClearFairy: Capturing Creative Workflows through Decision Structuring, In-Situ Questioning, and Rationale Inference

(Cond. Accept) CHI 2026
PDF · arXiv
ClearFairy teaser
Kihoon Son, DaEun Choi, Tae Soo Kim, Young-Ho Kim, Sangdoo Yun, Juho Kim

ClearFairy introduces the CLEAR framework and a think-aloud AI assistant to capture professionals' decision-making processes in creative workflows. By structuring reasoning into cognitive decision steps, the system detects weak explanations, asks lightweight clarifying questions, and infers missing rationales. This approach significantly increases the capture of strong design explanations without adding cognitive burden, while also providing structured data that enhances downstream generative AI agents.

CUPID: Evaluating Personalized and Contextualized Alignment of LLMs from Interactions

COLM 2025
PDF · Website · Code · arXiv
CUPID teaser
Tae Soo Kim, Yoonjoo Lee, Yoonah Park, Jiho Kim, Young-Ho Kim, Juho Kim

CUPID presents a benchmark of 756 human-curated session histories to evaluate an LLM's capability to infer a user's contextual preferences from multi-turn interactions. The work highlights the gap between static alignment and dynamic personalization, revealing that state-of-the-art models struggle to infer shifting preferences and discern relevant context from prior interactions.

One vs. Many: Comprehending Accurate Information from Multiple Erroneous and Inconsistent AI Generations

FAccT 2024
PDF · arXiv · ACM DL
One vs. Many teaser
Yoonjoo Lee, Kihoon Son, Tae Soo Kim, Jisu Kim, John Joon Young Chung, Eytan Adar, Juho Kim

This work investigates how users comprehend information when presented with multiple, potentially inconsistent, AI-generated outputs. Our experiment finds that while exposure to inconsistencies lowers the perceived capacity of the AI, it simultaneously increases users' comprehension of the generated information, offering design implications for promoting critical and transparent LLM usage.

EvalLM: Interactive Evaluation of Large Language Model Prompts on User-Defined Criteria

CHI 2024
PDF · Website · Video · ACM DL · arXiv · Code
EvalLM teaser
Tae Soo Kim, Yoonjoo Lee, Jamin Shin, Young-Ho Kim, Juho Kim

EvalLM is an interactive system designed to help developers iteratively refine LLM prompts by evaluating multiple outputs against user-defined criteria. Users describe their criteria in natural language, and the system's LLM-based evaluator provides an overview of where prompts excel or fail, enabling structured prompt engineering and reducing the need for manual inspection.

GenQuery: Supporting Expressive Visual Search with Generative Models

CHI 2024
PDF · Website · ACM DL · arXiv
GenQuery teaser
Kihoon Son, DaEun Choi, Tae Soo Kim, Young-Ho Kim, Juho Kim

GenQuery supports expressive visual search by integrating generative models to help designers articulate and refine abstract search intents. The system enables users to concretize text queries, generatively modify images to use as visual queries, and explore diverse search directions, successfully supporting both convergent and divergent exploration during the creative process.

Demystifying Tacit Knowledge in Graphic Design: Characteristics, Instances, Approaches, and Guidelines

CHI 2024 Honorable Mention Award
PDF · arXiv · ACM DL
Demystifying teaser
Kihoon Son, DaEun Choi, Tae Soo Kim, Juho Kim

This comprehensive study demystifies tacit knowledge in graphic design by collecting and analyzing instances from professional designers. The work identifies the core elements, actions, and purposes of tacit design knowledge, proposing approaches and guidelines to make implicit design expertise more explicit and teachable.

Cells, Generators, and Lenses: Design Framework for Object-Oriented Interaction with Large Language Models

UIST 2023
PDF · Website · Code · ACM DL
Cells, Generators, and Lenses teaser
Tae Soo Kim, Yoonjoo Lee, Minsuk Chang, Juho Kim

This paper introduces a design framework with three primitives for object-oriented interaction with LLMs: Cells (discrete input units), Generators (model instances), and Lenses (output spaces). By reifying these components into interactable objects, the framework enables users to compose, iterate, and experiment with generative configurations rather than treating LLMs as opaque black boxes.

Papeos: Augmenting Research Papers with Talk Videos

UIST 2023
PDF · arXiv · ACM DL · Demo
Papeos teaser
Tae Soo Kim, Matt Latzke, Jonathan Bragg, Amy X. Zhang, Joseph Chee Chang

Papeos augments research papers with synchronized talk videos to create a richer reading experience. The novel interface automatically aligns paper passages with video segments, allowing readers to fluidly switch between consuming dense academic text and concise, visual explanations to reduce mental load and improve comprehension.

DAPIE: Interactive Step-by-Step Explanatory Dialogues to Answer Children's Why and How Questions

CHI 2023
PDF · Website · Video · ACM DL
DAPIE teaser
Yoonjoo Lee, Tae Soo Kim, Sungdong Kim, Yohan Yun, Juho Kim

DAPIE is a conversational agent designed to answer children's "why" and "how" questions by transforming existing human expert-authored explanations into interactive, step-by-step dialogues. By converting static text into digestible conversational steps, the system scaffolds learning and encourages children to actively interact and self-assess their understanding.

Stylette: Styling the Web with Natural Language

CHI 2022 Honorable Mention Award
PDF · Website · Video · ACM DL
Stylette teaser
Tae Soo Kim, DaEun Choi, Yoonseo Choi, Juho Kim

Stylette is a browser extension that empowers novices to style web pages using natural language. By interpreting vague user requests with an LLM and surfacing suggestions from a dataset of 1.7 million web components, Stylette generates a palette of CSS properties and values that users can apply and experiment with to achieve their design goals.

Promptiverse: Scalable Generation of Scaffolding Prompts Through Human-AI Hybrid Knowledge Graph Annotation

CHI 2022
PDF · Website · Video · ACM DL
Promptiverse teaser
Yoonjoo Lee, John Joon Young Chung, Tae Soo Kim, Jean Y Song, Juho Kim

Promptiverse introduces a scalable approach for generating diverse, multi-turn scaffolding prompts for instructional videos. Using a human-AI hybrid annotation tool called Grannotate, the system constructs knowledge graphs from video transcripts to efficiently create tailored educational prompts that support varying learner needs.

Winder: Linking Speech and Visual Objects to Support Communication in Asynchronous Collaboration

CHI 2021
PDF · Website · Video · ACM DL
Winder teaser
Tae Soo Kim, Seungsu Kim, Yoonseo Choi, Juho Kim

Winder is a design tool plugin that supports asynchronous collaboration by linking speech and visual objects. By allowing users to record multimodal comments, such as voice paired with document clicks, the system generates bidirectional links between the transcript and the UI objects, reducing communication effort and facilitating easier navigation for receivers.

Supporting Collaborative Sequencing for Small Groups through Visual Awareness

CSCW 2021
PDF · Website · Video · ACM DL
CoSeq teaser
Tae Soo Kim, Nitesh Goyal, Jeongyeon Kim, Juho Kim, Sungsoo (Ray) Hong

This work explores collaborative sequencing (CoSeq) in small groups, such as planning travel itineraries, and introduces visual awareness techniques to support consensus building. Instantiated in a system called Twine, these techniques help group members easily identify agreements and disagreements, reducing the effort needed to communicate preferences and encouraging cooperative behavior.

Design for Collaborative Information-Seeking: Understanding User Challenges and Deploying Collaborative Dynamic Queries

CSCW 2019
PDF · ACM DL
ComeTogether teaser
Sungsoo (Ray) Hong, Minhyang (Mia) Suh, Tae Soo Kim, Irina Smoke, Sangwha Sien, Janet Ng, Mark Zachry, Juho Kim

This work investigates challenges in collaborative information-seeking (CIS) and social coordination, such as capturing mutual preferences and managing high communication costs. It introduces ComeTogether, a system that uses collaborative dynamic queries to help groups build a shared understanding and streamline collaborative decision-making.