Research on Human–Computer Interaction at Scale

The KAIST Interaction Lab (KIXLAB) is a human-computer interaction research group in the School of Computing at KAIST. Our mission is to improve the ways people learn, collaborate, discuss, make decisions, and take action online by designing new interactive systems that leverage and support interaction at scale.

Recent projects

Integrating reflection and practice into video learning to improve the learning experience and task performance in creative tasks

How can we leverage the content of slide-based lecture videos to improve learners' watching experience?

Building an online crowdsourcing platform to collect contextual, emotional, and intentional labels on dialog videos

Building data-driven personas in smart spaces by analyzing interactions at scale

Latest news

Hyeungshik & Hyungyu get their MS degrees; Hyungyu gets a thesis award!

Hyeungshik and Hyungyu officially completed their MS degrees. Hyeungshik joins NAVER as a software engineer, and Hyungyu continues as a Ph.D. student in KIXLAB.

Hyungyu wins an outstanding MS thesis award from the department!

Two CHI 2019 late-breaking work (LBW) papers accepted

Two late-breaking work papers have been accepted to CHI 2019: “Crowdsourcing Perspectives on Public Policy from Stakeholders” led by Hyunwoo Kim and “SolveDeep: A System for Supporting Subgoal Learning in Online Math Problem Solving” led by Hyoungwook Jin.

Hyeungshik and Hyungyu successfully defended their Master's theses

Both Hyeungshik and Hyungyu successfully defended their Master's theses. Congrats!

Hyeungshik’s thesis is titled “DynamicLecture: Enabling Direct Revision of Slide-based Lecture Videos”.

Hyungyu’s thesis is titled “Understanding the Effect of In-Video Prompting on Learners and Instructors”.

Voice video navigation paper accepted to CHI 2019

Our paper “How to Design Voice Based Navigation for How-To Videos”, led by Minsuk Chang with collaborators at Adobe and Stanford, has been accepted to CHI 2019.

Popup paper accepted to IUI 2019

Our paper “Popup: Reconstructing 3D Video Using Particle Filtering to Aggregate Crowd Responses”, led by Jean Song with collaborators at the University of Michigan, has been accepted to IUI 2019.

Minsuk's Research Internship at Autodesk Research

Minsuk Chang started his research internship at Autodesk Research, working with Ben Lafreniere and Tovi Grossman (University of Toronto).

Winter 2019 Undergraduate Research Internship

We are looking for a few undergraduate research interns to join KIXLAB this winter. You can find the details on Prof. Juho Kim’s website.

Research Internships at Adobe Research

Minsuk Chang and Hyungyu Shin spent the summer as research interns at Adobe’s Creative Intelligence Lab. Minsuk worked on voice interfaces for how-to video navigation in Seattle with Oliver Wang and Anh Truong, and Hyungyu worked on slide authoring tools in Cambridge with Valentina Shin.

Many Ideas paper accepted to CSCW 2018

Our paper “Personalized Motivation-supportive Messages for Increasing Participation in Crowd-civic Systems”, led by Paul Grau, has been accepted to CSCW 2018.

FourEyes paper accepted to ACM TIIS

Our paper “FourEyes: Leveraging Tool Diversity as a Means to Improve Aggregate Accuracy in Crowdsourcing”, led by Jean Song with collaborators at the University of Michigan, has been accepted to ACM Transactions on Interactive Intelligent Systems (TIIS).

More news