Start Making Sense
Cognitive and Affective Confidence Measures for Explanation Generation using Epistemic Planning
EPSRC | Human-Like Computing | Grant No. EP/R031045/1 | 2018-2020
Overview
The Start Making Sense project, funded under the EPSRC Human-Like Computing programme, addresses the need for dynamic trust maintenance in interactive and autonomous systems by combining experimental cognitive science research on cooperative joint action with the construction of practical AI planning tools for explanation generation. This challenge is addressed through three concrete objectives:
To study cooperative joint action in humans and identify the cognitive and affective factors essential to successful communication,
To enhance epistemic planning techniques with heuristics derived from these cognitive science studies, and
To deploy the resulting system to generate human-like explanations and to evaluate its effectiveness with human participants.
People
Ron Petrick, Principal Investigator, Heriot-Watt University
Robin Hill, Co-Investigator, University of Edinburgh
Sara Dalzel-Job, Researcher, University of Edinburgh
Bart Craenen, Researcher, Heriot-Watt University
Alan Lindsay, Researcher, Heriot-Watt University
We are indebted to our friend and colleague, Jon Oberlander, who helped shape this project but whose unexpected passing in 2017 meant that we had to proceed without his invaluable collaboration. Jon is sorely missed and this project is dedicated to his memory.
Publications
S. Dalzel-Job, R.L. Hill, and R. Petrick. (2022). Start Making Sense: Identifying Behavioural Indicators When Things Go Wrong During Interaction with Artificial Agents. International Conference on Principles and Practice of Multi-Agent Systems (PRIMA), 582-591, doi:10.1007/978-3-031-21203-1_36.
A. Lindsay and R. Petrick. (2021). Supporting Explanations Within an Instruction Giving Framework. ICAPS 2021 Workshop on Explainable AI Planning (XAIP). pdf
A. Lindsay, B. Craenen, and R. Petrick. (2021). Within Task Preference Elicitation in Net Benefit Planning. ICAPS 2021 Workshop on Knowledge Engineering for Planning and Scheduling (KEPS). pdf | video
S. Dalzel-Job, R.L. Hill, and R. Petrick. (2021). Start Making Sense: Predicting confidence in virtual human interactions using biometric signals. Proceedings of Measuring Behavior 2020-21: 12th International Conference on Methods and Techniques in Behavioral Research and 6th Seminar on Behavioral Methods, 1:72-75. pdf
A. Lindsay, B. Craenen, S. Dalzel-Job, R.L. Hill, and R. Petrick. (2020). Investigating Human Response, Behaviour, and Preference in Joint Task Interaction. UK Planning and Scheduling Special Interest Group (UK PlanSIG). pdf | video
A. Lindsay, B. Craenen, S. Dalzel-Job, R.L. Hill, and R. Petrick. (2020). Investigating Human Response, Behaviour, and Preference in Joint Task Interaction. ICAPS 2020 Workshop on Explainable AI Planning (XAIP), doi:10.48550/arXiv.2011.14016. pdf
A. Lindsay, B. Craenen, S. Dalzel-Job, R.L. Hill, and R. Petrick. (2020). Supporting an Online Investigation of User Interaction with an XAIP Agent. ICAPS 2020 Workshop on Knowledge Engineering for Planning and Scheduling (KEPS). pdf | video
S. Dalzel-Job. (2019). Start Making Sense: How to Design Likeable, Trustworthy and Helpful Virtual Humans. Poster at Beyond Conference. pdf
R. Petrick and R.L. Hill. (2019). Start Making Sense: Cognitive and Affective Confidence Measures for Explanation Generation using Epistemic Planning. AAAI 2019 Spring Symposium on Story-Enabled Intelligence. pdf
R. Petrick, S. Dalzel-Job, and R.L. Hill. (2019). Combining Cognitive and Affective Measures with Epistemic Planning for Explanation Generation. ICAPS 2019 Workshop on Explainable Planning (XAIP), 141-145. pdf