KAIST Fall 2020

CS492F: Human-AI Interaction

Humans and AI are interacting more closely than ever before, in all areas of our work, education, and life. As more intelligent machines enter our lives, their accuracy and performance are not the only factors that matter. As designers of such technology, we have to carefully consider the user experience of AI in order for AI to be of practical value. In this course, we will explore various dimensions of human-AI interaction, including ethics, explainability, design processes involving AI, visualization, human-AI collaboration, recommender systems, and a few notable application areas.

A side goal of this course is to encourage all of us to bridge the gap between the two fields of HCI and AI. As a step toward this vision, we want students with HCI and AI backgrounds to mingle, interact, discuss, and collaborate through this course. We expect most students taking this course to have background knowledge in either HCI or AI through at least intro-level coursework. If you're unsure whether you meet this criterion, please contact the course staff immediately. Having a background in both is great, although not required.

This is a highly interactive class: You’ll be expected to actively participate in activities, projects, assignments, and discussions. There will be no lectures or exams. Major course activities include:

  • Reading Response: You'll read and discuss important papers and articles in the field. Each week, there will be 1-2 reading assignments, for which you'll write a short response.
  • Assignments: You'll design, implement, and analyze a few human-AI interaction scenarios.
  • In-class Activities: Each class will feature activities that will help you experience and practice the core concepts introduced in the course.

Course Staff

Instructors: Prof. Jean Young Song & Prof. Juho Kim
    Office Hours: by appointment

TA: Hyungyu Shin
    Office Hours: by appointment

Staff Mailing List: human-ai@kixlab.org

Time & Location

When: 4:00-5:15pm Tue/Thu
Where: Zoom live sessions (Since active participation in in-class activities, discussions, and presentations is expected, attending the live sessions is required.)

Links

Course Website: https://kixlab.org/courses/human-ai/
Submission & Grading: KLMS
Discussion and Q&A: Campuswire
Reading Groups

Updates

Schedule

Week Date Instructor Topic Reading ("response" indicates that a reading response is required for the material) Due
1 9/1 Kim Introduction & Course Overview
1 9/3 Kim Introduction to Human-AI Interaction (1) Licklider, J. C. R. "Man-Computer Symbiosis." IRE Transactions on Human Factors in Electronics HFE-1 (1960): 4-11.
(2) Shyam Sankar. The Rise of Human Computer Cooperation. TED Talk Video, 2012 (12 mins).
2 9/8 Song Primer on AI response Lubars, Brian, and Chenhao Tan. "Ask not what AI can do, but what AI should do: Towards a framework of task delegability." In Advances in Neural Information Processing Systems, pp. 57-67. 2019. RR by all
2 9/10 Song Primer on AI response Xu, Anbang, Zhe Liu, Yufan Guo, Vibha Sinha, and Rama Akkiraju. "A new chatbot for customer service on social media." In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 3506-3510. 2017. RR by A
3 9/15 Kim Primer on HCI (1) response Amershi, Saleema, et al. "Guidelines for human-AI interaction." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019.
(2) Google PAIR. People + AI Guidebook. Published May 8, 2019.
RR by B
3 9/17 Kim Primer on HCI (1) response Heer, Jeffrey. "Agency plus automation: Designing artificial intelligence into interactive systems." Proceedings of the National Academy of Sciences 116.6 (2019): 1844-1850.
(2) Henriette Cramer and Juho Kim. "Confronting the tensions where UX meets AI." interactions 26.6 (2019): 69-71.
RR by A
(Assignment #1 announced)
4 9/22 Song Historical Perspectives on Human-AI Interaction (1) response Horvitz, Eric. "Principles of mixed-initiative user interfaces." In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pp. 159-166. 1999.
(2) Ben Shneiderman and Pattie Maes. "Direct Manipulation vs. Interface Agents". Interactions 1997.
RR by B
4 9/24 Kim Ethics and FAccT of AI (1) response Davidson, Thomas, Debasmita Bhattacharya, and Ingmar Weber. "Racial bias in hate speech and abusive language detection datasets." arXiv preprint arXiv:1905.12516 (2019).
(2) Kate Crawford and Trevor Paglen, “Excavating AI: The Politics of Training Sets for Machine Learning" (September 19, 2019)
RR by A
5 9/29 Song Ethics and FAccT of AI (1) response Timnit Gebru. "Computer vision in practice: who is benefiting and who is being harmed?" (video, 51 mins) Slides
(2) Bolukbasi, Tolga, et al. "Man is to computer programmer as woman is to homemaker? Debiasing word embeddings." Advances in Neural Information Processing Systems. 2016.
RR by B
Assignment #1 DUE
5 10/1 No class (Chuseok)
6 10/6 Song Metrics to Measure Human-AI Performance (1) response Gagan Bansal, Besmira Nushi, Ece Kamar, et al. "Beyond accuracy: The role of mental models in human-AI team performance." In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing. 2019.
(2) Matthew Kay, Shwetak N. Patel, and Julie A. Kientz. "How good is 85%? A survey tool to connect classifier evaluation to acceptability of accuracy." In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 2015.
RR by A
6 10/8 Song Interpretable and Explainable AI (1) response Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. ""Why should I trust you?" Explaining the predictions of any classifier." In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016.
(2) Zachary C. Lipton. "The mythos of model interpretability." 2018.
RR by B
7 10/13 Kim Interpretable and Explainable AI (1) response Daniel S. Weld and Gagan Bansal. "The challenge of crafting intelligible intelligence." Communications of the ACM. 2019.
(2) Alison Smith-Renner, Ron Fan, Melissa Birchfield, et al. "No Explainability without Accountability: An Empirical Study of Explanations and Feedback in Interactive ML." In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 2020.
RR by A
7 10/15 Both Project Pitch Feedback Meetings (Assignment #2 announced)
8 10/20 No class (Midterms week)
8 10/22 No class (Midterms week)
9 10/27 Kim AI Design Process (1) response Jennifer Wortman Vaughan. 2018. Making Better Use of the Crowd: How Crowdsourcing Can Advance Machine Learning Research. Journal of Machine Learning Research 18, 193: 1–46.
Instructor note: Sections 3 and 5 can be skimmed.
(2) Aniket Kittur, Jeffrey V. Nickerson, Michael Bernstein, et al. The Future of Crowd Work. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (CSCW '13), 1301–1318. 2013.
RR by B
9 10/29 Kim AI Design Process (1) response Sculley, David, et al. "Hidden technical debt in machine learning systems." Advances in neural information processing systems. 2015.
(2) Kocielnik, Rafal, Saleema Amershi, and Paul N. Bennett. "Will you accept an imperfect AI? Exploring designs for adjusting end-user expectations of AI systems." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019.
RR by A
10 11/3 Song InfoViz and Data Visualization (1) response Kay, Matthew, Tara Kola, Jessica R. Hullman, and Sean A. Munson. When (ish) is my bus? user-centered visualizations of uncertainty in everyday, mobile predictive systems. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2016.
(2) Amershi, Saleema, Max Chickering, Steven M. Drucker, Bongshin Lee, Patrice Simard, and Jina Suh. Modeltracker: Redesigning performance analysis tools for machine learning. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015.
RR by B
10 11/5 Song InfoViz and Data Visualization (1) response Cai, Carrie J., Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg et al. Human-centered tools for coping with imperfect algorithms during medical decision-making. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019.
(2) Cheng, Hao-Fei, Ruotong Wang, Zheng Zhang, Fiona O'Connell, Terrance Gray, F. Maxwell Harper, and Haiyi Zhu. Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019.
RR by A
Assignment #2 DUE
11 11/10 Both Project Pitches
11 11/12 Both Project Pitches
12 11/17 Kim Human-AI Collaboration (1) response Gagan Bansal, Besmira Nushi, Ece Kamar, Daniel S. Weld, Walter S. Lasecki, and Eric Horvitz. 2019. "Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff." Proceedings of the AAAI Conference on Artificial Intelligence 33, 01: 2429–2437.
(2) Hoffman, Guy, and Cynthia Breazeal. "Collaboration in human-robot teams." AIAA 1st Intelligent Systems Technical Conference. 2004.
RR by B
12 11/19 Kim Human-AI Collaboration (1) response Zhou, Sharon, Melissa Valentine, and Michael S. Bernstein. "In search of the dream team: temporally constrained multi-armed bandits for identifying effective team structures." Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 2018.
(2) Nguyen, An T., et al. "Believe it or not: Designing a human-AI partnership for mixed-initiative fact-checking." Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology. 2018.
RR by A
13 11/24 Both Project Feedback Meetings
13 11/26 Kim Recommender Systems (1) response Olteanu, Alexandra, Fernando Diaz, and Gabriella Kazai. "When Are Search Completion Suggestions Problematic?." Proceedings of the ACM on Human-Computer Interaction 4.CSCW2 (2020): 1-25.
(2) Gomez-Uribe, Carlos A., and Neil Hunt. "The Netflix recommender system: Algorithms, business value, and innovation." ACM Transactions on Management Information Systems (TMIS) 6.4 (2015): 1-19.
RR by B
14 12/1 Song Application Areas (1) response Amershi, Saleema, Andrew Begel, Christian Bird, Robert DeLine, Harald Gall, Ece Kamar, Nachiappan Nagappan, Besmira Nushi, and Thomas Zimmermann. "Software engineering for machine learning: A case study." In 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), pp. 291-300. IEEE, 2019.
(2) Xiao, Ziang, Michelle X. Zhou, and Wai-Tat Fu. "Who should be my teammates: Using a conversational agent to understand individuals and help teaming." In Proceedings of the 24th International Conference on Intelligent User Interfaces, pp. 437-447. 2019.
RR by A
14 12/3 Song Application Areas (1) response Hara, Kotaro, Jin Sun, Robert Moore, David Jacobs, and Jon Froehlich. "Tohme: detecting curb ramps in google street view using crowdsourcing, computer vision, and machine learning." In Proceedings of the 27th annual ACM symposium on User interface software and technology, pp. 189-204. 2014.
(2) Stangl, Abigale, Meredith Ringel Morris, and Danna Gurari. ""Person, Shoes, Tree. Is the Person Naked?" What People with Vision Impairments Want in Image Descriptions." In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-13. 2020.
RR by B
15 12/8 Both Final Presentations
15 12/10 No class (Undergraduate Admission Interviews Day)
16 12/15 No class (Finals week)
16 12/17 No class (Finals week)

Topics (tentative)

Major topics include: Ethics and FAccT in Machine Learning, Metrics to Measure HAI Performance, AI Design Process, Interpretable and Explainable AI, InfoViz and Data Visualization, Recommender Systems, and Human-AI Collaboration

Grading

  • Design project: 30%
  • Reading responses: 30%
  • Assignments: 30%
  • Class participation: 10%
Late policy: Your three lowest reading response grades will be dropped, and no late submissions are allowed for reading responses. For assignments and project milestones, you'll lose 10% for each late day, and submissions will be accepted until three days after the deadline.

Prerequisites

You need at least intro-level coursework in either HCI (e.g., CS374, CS473) or AI (e.g., CS470, CS376). If you're unsure whether you qualify, please contact the course staff.