Keynote Speakers


Antti Oulasvirta

Putting back the Human in the Loop

Human-in-the-loop (HITL) methods use human input to steer a learning or optimization algorithm. Why, despite their good intentions, are these methods less popular than expected? In this talk, I argue that significant improvements can be made by revisiting some foundational assumptions about humans embedded in algorithm and system designs. Some HITL methods were designed to improve algorithmic accuracy with no regard for human agency and cognition. I show examples of studies in which the algorithmic and interactive aspects of HITL methods were rethought from this perspective. To conclude, I discuss a roadmap for human-centric human-in-the-loop methods.

Antti Oulasvirta leads the User Interfaces research group at Aalto University and the Interactive AI research program at FCAI (Finnish Center for Artificial Intelligence). Prior to joining Aalto, he was a Senior Researcher at the Max Planck Institute for Informatics and the Cluster of Excellence on Multimodal Computing and Interaction at Saarland University. He was awarded an ERC Starting Grant (2015-2020) for research on the computational design of user interfaces.


How do we make explanations beneficial to different users?

Many people recognize the importance of explainable artificial intelligence. This is also the case for personalized online content, which influences decision-making at individual, business, and societal levels. However, we often lose sight of the purpose of these explanations and of whether understanding is an end in itself. This talk addresses why we may want to develop decision-support systems that can explain themselves and how we can assess whether we are successful in this endeavor, for example, what appropriate trust might mean in low-confidence domains. The talk will describe some state-of-the-art explanations in several domains that help link the mental models of systems and people. However, it is not enough to generate rich and complex explanations; more is required to support effective decision-making. This entails decisions about which information to show to people and how to present it, often depending on the target users and contextual factors.

Nava Tintarev is a Full Professor of Explainable Artificial Intelligence at Maastricht University and a guest professor at TU Delft. She currently participates in a Marie Curie Training Network on Natural Language for Explainable AI (October 2019-October 2024). She also represents Maastricht University as a Co-Investigator in the ROBUST consortium, which was selected for a national (NWO) grant with a total budget of 95M (25M from NWO) to carry out long-term (10-year) research into trustworthy artificial intelligence, and she is co-director of the TAIM lab on trustworthy media.