Submitted by Rachel Gardner on Mon, 16/12/2024 - 00:00
Our next Research Showcase will highlight our work on human-centred computing and human-machine interaction. Taking place in the West Hub on Thursday 23 January 2025, it will feature a series of short talks by early-career researchers in the Department.
The talks will be followed by refreshments and an opportunity to meet and chat to the researchers. Please register if you would like to attend.
Confirmed speakers so far include:
- Nida Itrat Abbasi, who will talk about how robots can be used to assess the mental wellbeing of children. Her doctoral research explores the intersection of human-robot interaction, behavioural analysis and mental health assessment, with a focus on children's wellbeing. As part of this, she designs child-robot interaction experiences aimed at assessing children's mental wellbeing. Using a combination of verbal responses to structured tasks and non-verbal cues – such as speech patterns and facial expressions – her research investigates how these interactions can reveal insights into the wellbeing of the next generation.
- Justas Brazauskas, who will discuss the human-centred design of real-time digital twins. In his research, Justas is creating a digital twin of the William Gates Building, home to the Department of Computer Science and Technology. In his talk, he'll highlight the ways in which tailored sensor deployments, real-time data integration and effective data visualisations can improve the functionality of digital twin buildings. By addressing the diverse needs of user segments and improving responsiveness to time-critical events, he'll demonstrate how these innovations empower building occupants, fostering data democratisation and improved engagement in dynamic environments.
- Dr Fethiye Irmak Dogan, who will speak about her work with robots that understand natural language instructions and resolve ambiguities. "Ambiguities are inevitable during human-robot interaction," she'll explain. "For instance, when a user asks the robot to find and bring 'the porcelain mug', the mug might be located in the kitchen cabinet or on the dining room table, depending on whether it is clean or full (semantic ambiguities). Additionally, there can be multiple mugs in the same location, and the robot can disambiguate them by asking follow-up questions based on their distinguishing features, such as their color or spatial relations to other objects (visual ambiguities). In this talk, I will address our efforts to resolve semantic and visual ambiguities during human-robot conversation."
- Dr Maliha Ashraf, who, in her talk on 'Seeing Beyond: Modelling visual constraints for smarter tech', will discuss how understanding the limitations of human contrast vision can guide the design and optimisation of technology. By measuring and modelling contrast vision across multiple dimensions, she'll say, we gain insights into how the visual system processes (and sometimes fails to process) information. These findings drive advances in tone mapping, image quality and display design, aligning technology with human perception for more intuitive and effective human-machine interactions.
This is the fifth of our Research Showcases, in which we highlight work taking place in a key application area. Previous showcases have been on:
- Climate & Sustainability Research, January 2024
- Security Research, January 2023
- Education Research, January 2022
- Healthcare Research, January 2021
Our Research Showcase is open to anyone interested in human-centred computing and human-machine interaction research. We hope you can join us.