
Submitted by Rachel Gardner on Mon, 27/01/2025 - 13:23
Human-centred computing and human-machine interaction were the topics of our latest Research Showcase. Held on 23 January 2025, it comprised a series of short talks by early-career researchers here.
We heard from four speakers:
Nida Itrat Abbasi talked about how robots can be used to assess the mental wellbeing of children. Her doctoral research explores the intersection of human-robot interaction, behavioural analysis and mental health assessment, with a focus on children's wellbeing. As part of this, she designs child-robot interaction experiences aimed at assessing children's mental wellbeing. Using a combination of verbal responses to structured tasks and non-verbal cues – such as speech patterns and facial expressions – her study investigates how these interactions can reveal insights into the wellbeing of future generations.
Justas Brazauskas discussed the human-centred design of real-time digital twins. In his research, Justas is creating a digital twin of the William Gates Building, home to the Department of Computer Science and Technology. In his talk, he highlighted the ways in which tailored sensor deployments, real-time data integration and effective data visualisations can improve the functionality of digital twin buildings. By addressing the diverse needs of user segments and improving responsiveness to time-critical events, he demonstrated how these innovations empower building occupants, fostering data democratisation and greater engagement in dynamic environments.
Dr Fethiye Irmak Dogan discussed her work on how robots can use explainability to enhance human-robot interaction, ensuring their ethical and transparent integration into human lives. She highlighted two specific use cases: using robot explanations to resolve ambiguities in user instructions, and incorporating human explanations into a robot's decision-making process to generate socially appropriate behaviours. "Ambiguities are inevitable during human-robot interaction," she noted, demonstrating how explainability can help identify sources of uncertainty and address them effectively. She also detailed her team's recent efforts to develop robot behaviours that align with social norms and human preferences across a variety of environments. She showcased a system that integrates Large Language Models (LLMs) for common-sense reasoning with human explanations, using a generative deep neural network architecture to predict socially appropriate actions.
Dr Maliha Ashraf, in her talk on 'Seeing Beyond: Modelling visual constraints for smarter tech', outlined how understanding the limitations of human contrast vision can guide the design and optimisation of technology. By measuring and modelling contrast vision across multiple dimensions, she said, we gain insights into how the visual system processes (and sometimes fails to process) information. These findings drive advances in tone mapping, image quality and display design, aligning technology with human perception for more intuitive and effective human-machine interactions.
This is the fifth of our Research Showcases in which we highlight work taking place in a key application area. Previous showcases have been on:
- Climate & Sustainability Research, January 2024
- Security Research, January 2023
- Education Research, January 2022
- Healthcare Research, January 2021