How does Sense of Control influence Human-Robot Interaction?

This talk was given on 1st June 2016 by Adeline Chanseau.

Abstract
One of the main purposes of research in Human-Robot Interaction (HRI) is to study how to enhance the interaction between a robot and its user. My PhD research focuses on how sense of control can influence HRI. It is hoped that this research will help to reduce robot anxiety and improve people's trust towards robots. I will present in this RAGS talk the concept of sense of control in HRI and some results from my first experiment.

Overview of Joint-Action workshop April 16

This talk was given on 25th May 2016 by Frank Foerster.

Abstract
In this presentation I will give an overview of the joint action workshop 'From Human-Human Joint Action to Human-Robot Joint Action and vice-versa' that I attended last month. Various researchers from a range of disciplines, such as robotics (mainly but not exclusively HRI), cognitive and developmental psychology, pragmatics, and philosophy of (joint) action, attempted to tease apart what joint action precisely is and, in the case of HRI, how to operationalise it.

My genuine impression gleaned from the workshop is that this relatively new area of research is anything but consolidated, both in terms of methodology and terminology. It is still unclear what precisely constitutes joint action and how lower-level phenomena such as entrainment and motor coordination relate to higher-level concepts originating from the philosophy of action or, in the case of robotics, from planning.

An outline of the different academic fields and researchers present at the workshop will be followed by a summary of selected talks.

Reinforcing experiences on a domestic robot

This talk was given on 18th May 2016 by Nathan Burke.

Abstract
This talk discusses the use of an Experience Metric space, specifically a modified version of the Interaction History Architecture, on a domestic robot. It introduces some difficulties in using such a system and methods of circumventing them. Examples from trials in a simulated robot house will be shown, with emphasis on the robot utilising such a system to discover the user's preferred robot behaviour.
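
The Interaction History Architecture itself is not detailed in the abstract; purely as an illustration of the general idea of selecting behaviour from a metric space of reinforced past experiences, here is a minimal Python sketch. The feature encoding, the Euclidean metric and the selection rule are invented assumptions, not the architecture's actual design.

```python
# Illustrative sketch only: a toy "experience metric space" in the spirit of
# choosing robot behaviours from reinforced past interaction histories. The
# feature encoding, the Euclidean metric and the selection rule are invented
# assumptions, not the actual Interaction History Architecture.
import numpy as np

class ExperienceSpace:
    def __init__(self):
        self.experiences = []            # (feature_vector, behaviour, reward)

    def add(self, features, behaviour, reward):
        self.experiences.append((np.asarray(features, float), behaviour, reward))

    def select_behaviour(self, features, k=3):
        """Among the k past experiences nearest to the current one, pick the
        behaviour that accumulated the most positive reinforcement."""
        features = np.asarray(features, float)
        nearest = sorted(self.experiences,
                         key=lambda e: np.linalg.norm(e[0] - features))[:k]
        scores = {}
        for _, behaviour, reward in nearest:
            scores[behaviour] = scores.get(behaviour, 0.0) + reward
        return max(scores, key=scores.get)

space = ExperienceSpace()
space.add([0.9, 0.1], "approach_user", reward=1.0)   # user responded positively
space.add([0.8, 0.2], "approach_user", reward=0.5)
space.add([0.1, 0.9], "stay_away", reward=1.0)       # user was busy
print(space.select_behaviour([0.85, 0.15]))          # -> "approach_user"
```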

Adaptive Robotic Upper Limb Rehabilitation

This talk was given on 27th April 2016 by Azeemsha Thacham Poyil.

Abstract
Adaptability to an individual's ability and effort level is an important aspect of rehabilitation robotics. Conventionally, this is the norm in physical therapy, where the therapist adapts the therapy goal to individual needs based on personal experience, skills and natural human-human interaction. Existing robotic therapies are designed without sufficient potential for personalisation, e.g. to respond to the patient's pain or state of fatigue. Stroke patients may easily get tired due to their reduced muscle capabilities and reduced cognitive or motor capabilities. We therefore aim to make rehabilitation training adaptive to the level of individual contributions to the interaction and to the extent of the patient's tiredness. We hypothesise that the intensity of rehabilitation training can be altered according to the user's fatigue, assessed with the help of electromyogram (EMG) signals. These, as well as kinematic data, are studied to understand the current physical state of, and effort exerted by, the patient during HRI sessions. Muscle fatigue can be detected from EMG signals using a range of signal processing algorithms, and the corresponding EMG features can potentially be added to our kinematic benchmarks to alter the adaptive training exercises. Such an adaptive solution can also be used in a wide range of human-machine interactions by tuning the interaction using accurate physiological and kinematic assessment. It is believed that a more active contribution to the therapy results in better recovery outcomes.
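
The abstract leaves the specific EMG features open; one standard fatigue indicator in the signal-processing literature is the downward drift of the EMG mean power frequency (MNF) during sustained contraction. The Python sketch below illustrates that kind of computation on synthetic data; the sampling rate, window length and signal model are illustrative assumptions, not the project's pipeline.

```python
# Sketch of one common EMG fatigue indicator: the decline of the mean power
# frequency (MNF) over successive signal windows. Window length, sampling
# rate and the synthetic signal are illustrative assumptions.
import numpy as np

def mean_power_frequency(window, fs):
    """Mean frequency of the EMG power spectrum for one window."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

def fatigue_trend(emg, fs, window_s=1.0):
    """MNF per window; a sustained downward slope suggests muscle fatigue."""
    n = int(window_s * fs)
    windows = [emg[i:i + n] for i in range(0, len(emg) - n + 1, n)]
    return np.array([mean_power_frequency(w, fs) for w in windows])

# Toy demonstration: synthetic EMG whose dominant frequency drifts downwards.
fs = 1000.0
t = np.arange(0, 10, 1.0 / fs)
drifting = np.sin(2 * np.pi * (80 - 3 * t) * t) + 0.1 * np.random.randn(len(t))
mnf = fatigue_trend(drifting, fs)
print("MNF per second (Hz):", np.round(mnf, 1))  # roughly decreasing
```

In the adaptive-training scenario described above, such a downward trend is the kind of feature that could be combined with the kinematic benchmarks to lower exercise intensity.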

An Analysis of Perceptual Cues in Robot Group Selection Tasks

This talk was given on 20th April 2016 by Alessandra Rossi.

Abstract
The aim of the proposed investigation is to provide users with the capability of creating robot teams “on-the-fly” using grouping strategies expressed through speech.

Our working hypothesis is that people are inclined to assemble objects into macro-entities, or groups, according to perceptual principles. We observed the actual linguistic utterances used by individuals in a test environment, showing that the type of robots and their mutual arrangement can affect both the choice of elements to form a team and the way such a choice is made. Moreover, we provide initial insight into the capabilities needed by a robot to reason about its membership in a team.

Interaction with socially interactive robot companions

This talk was given on 13th April 2016 by Kheng Lee Koay.

Abstract
The talk will discuss the role of embodied communication and interaction in human-robot interaction scenarios in an assistive context. Examples of research on robot companions will be presented, i.e. home companion robots meant to assist people in their own homes. The emphasis of the talk will be on the modes and modalities of interaction used to create engaging scenarios.

Sharing insights and results on involving professionals in interventions using robot KASPAR for children with autism spectrum disorder

This talk was given on 23rd March 2016 by an external speaker, Claire Huijnen from ZUYD University in the Netherlands.

Abstract
In the Netherlands, over the past two years, a number of Autism Spectrum Disorder (ASD) professionals and other experts have been involved in building up knowledge on possible interventions using KASPAR for children with autism. This work is done in the context of the project "Social Robots in Care", of which the University of Hertfordshire is also a consortium member. Results of two rounds of focus groups and a questionnaire will be presented during this talk. More concretely, we aim to give an overview of the therapy and educational objectives that professionals work on with children with autism, and their expectations of where the robot KASPAR might contribute in a meaningful manner to their day-to-day work.

Adaptive Smart Environments: detecting human behaviour from multimodal observations

This talk was given on 16th March 2016 by Rory Heffernan.

Abstract
It is desirable to enhance the social capabilities of a smart home environment so that it becomes more aware of the context of its human occupants' activities. Taking human behavioural and contextual information into account will potentially improve decision making by the various smart-house systems. Full-mesh Wireless Sensor Networks (WSNs) can be used for passive localisation and tracking of people or objects within a smart home: by monitoring changes in the propagation field of the monitored area through link quality measurements collected from all nodes of the network, it is feasible to infer target locations. We plan to apply techniques from Radio Tomographic Imaging (RTI) together with machine vision methods, adapted to the idiosyncrasies of RTI, to facilitate real-time multiple-target tracking in the University of Hertfordshire Robot House (UHRH). Using the Robot Operating System (ROS) framework, these data may then be fused with concurrent data acquired from other sensor systems (e.g. 3-D video tracking and ambient audio detection) to develop a high-level contextual data model for human behaviour in a smart environment. We present experimental results which could provide support for human activity recognition in smart environments.
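
As background for how link quality measurements can yield target locations: a common RTI formulation models each link's received-signal-strength change as a weighted sum of the attenuation in voxels near the straight line between the two nodes, and recovers the attenuation image by regularised least squares. The Python sketch below follows that generic model; the node layout, ellipse weighting and regularisation constant are invented for illustration and are not the UHRH setup.

```python
# Minimal RTI-style reconstruction sketch: link RSS changes are modelled as
# y = W x + noise, where x is the attenuation image over a voxel grid and
# W weights voxels lying near each node-to-node link. Grid size, link
# geometry and the regularisation weight are illustrative assumptions.
import numpy as np

def weight_matrix(nodes, grid, lam=0.2):
    """W[l, v]: contribution of voxel v to link l (simple ellipse model)."""
    links = [(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))]
    W = np.zeros((len(links), len(grid)))
    for l, (i, j) in enumerate(links):
        d = np.linalg.norm(nodes[i] - nodes[j])
        for v, p in enumerate(grid):
            # A voxel contributes if it lies within a narrow ellipse
            # (excess path length < lam) with the two nodes as foci.
            if np.linalg.norm(p - nodes[i]) + np.linalg.norm(p - nodes[j]) < d + lam:
                W[l, v] = 1.0 / np.sqrt(d)
    return W

def reconstruct(W, y, alpha=1.0):
    """Tikhonov-regularised least squares: x = (W'W + aI)^-1 W'y."""
    return np.linalg.solve(W.T @ W + alpha * np.eye(W.shape[1]), W.T @ y)

# Toy setup: 8 nodes around a square room, 10x10 voxel grid.
nodes = np.array([[x, y] for x in (0, 5, 10) for y in (0, 5, 10)
                  if (x, y) != (5, 5)], float)
grid = np.array([[i + 0.5, j + 0.5] for i in range(10) for j in range(10)], float)
W = weight_matrix(nodes, grid)
x_true = np.zeros(len(grid)); x_true[44] = 1.0       # a person near the centre
y = W @ x_true + 0.01 * np.random.randn(W.shape[0])  # noisy link measurements
x_hat = reconstruct(W, y)
print("Estimated target voxel:", grid[np.argmax(x_hat)])
```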

What Communication Modalities Do Users Prefer in Real Time HRI?

This talk was given on 2nd March 2016 by Ori Novanda.

Abstract
Robots are now increasingly being used in a number of application areas where people can interact with them more naturally, in ways similar to how they would interact with living creatures. Often this interaction is multi-modal, using more than one modality to maintain and encourage interaction.

Developing such robotic systems poses many challenges, as they require a substantial amount of computing power and robust integration algorithms. The performance of a multi-modal system also depends on each unimodal technology, and each modality is currently an active research field making its own progress.

This talk presents results from an experiment in which humans taught an autonomous KASPAR robot to mime a nursery rhyme via one of three interaction modalities. The experiment investigated the users' preferred interaction modalities.

Empowerment and the Three Laws of Robotics

This talk was given on 24th February 2016 by Christoph Salge.

Abstract
The greater ubiquity of robots creates a need for generic guidelines for robot behaviour. We focus less on how a robot can technically achieve a predefined goal, and more on what a robot should do in the first place. In particular, we are interested in what heuristics should motivate the robot's behaviour in interaction with human agents. We make a concrete, operational proposal for how the information-theoretic concept of empowerment can be used as a generic heuristic to quantify concepts such as self-preservation, protection of the human partner and lead-taking. We present a proof of principle showing that this allows one to specify the concepts behind the Three Laws of Robotics in a quantitative way. Notably, this route does not depend on linguistic specifications and can take varied situations and types of robotic embodiment into account.
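
For readers unfamiliar with the formalism: empowerment is the Shannon channel capacity between an agent's actions and its subsequent sensor states. The sketch below computes one-step empowerment in a toy one-dimensional world using the classic Blahut-Arimoto algorithm; the world and its dynamics are invented for illustration and are not the scenarios of the talk.

```python
# Sketch: one-step empowerment as the Shannon channel capacity between an
# agent's actions and its successor states, computed with the Blahut-Arimoto
# algorithm. The 1-D toy world and its dynamics are invented assumptions.
import numpy as np

def empowerment(p_s_given_a, iters=200):
    """Capacity (in bits) of the channel p(s'|a); rows index actions."""
    n_a = p_s_given_a.shape[0]
    p_a = np.full(n_a, 1.0 / n_a)              # initial action distribution
    for _ in range(iters):
        p_s = p_a @ p_s_given_a                # marginal over successor states
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(p_s_given_a > 0.0,
                                 np.log2(p_s_given_a / p_s), 0.0)
        d = np.sum(p_s_given_a * log_ratio, axis=1)  # D_KL(p(s'|a) || p(s'))
        p_a = p_a * np.exp2(d)                 # Blahut-Arimoto update
        p_a /= p_a.sum()
    return float(np.sum(p_a * d))

# Toy world: positions 0..4 on a line; actions (left, stay, right); walls clamp.
def transitions(pos, n=5):
    p = np.zeros((3, n))
    for a, move in enumerate((-1, 0, 1)):
        p[a, min(max(pos + move, 0), n - 1)] = 1.0
    return p

for pos in range(5):
    print(pos, round(empowerment(transitions(pos)), 2))
# At the walls (pos 0 and 4) two actions collapse onto the same successor
# state, so empowerment drops from log2(3) to 1 bit: fewer reachable options.
```

It is this kind of state-dependent quantity that the proposal uses as a behavioural heuristic, e.g. avoiding actions that sharply reduce the human partner's empowerment.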

Collaboration in Co-Creative Scenarios via Coupled Empowerment Maximization: A Case-Study in Video Games

This talk was given on 10th February 2016 by a guest speaker, Christian Guckelsburger from Goldsmiths University of London.

Abstract
Recently, embodied and situated agents have become increasingly popular in co-creative systems, where humans and artificial agents jointly work on creative tasks. Intrinsically motivated agents are particularly successful here because of their capacity to act flexibly and adapt in open-ended interactions without clearly specified goals. Unfortunately, existing implementations do not manage to establish and maintain collaboration as a core mechanic in such systems without constraining the flexibility of the agents by means of explicitly specified interaction rules. This talk introduces the information-theoretic principle of coupled empowerment maximization as a means to establish a frame for both collaborative and antagonistic behaviour within which agents can interact with maximum flexibility. We study this mechanism in a dungeon-crawler video game testbed, where it drives the behaviour of a non-player character (NPC) supporting the human player. We present our progress and future challenges, and argue that the principle could eventually allow for the emergence of truly creative behaviour.
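
The talk's game mechanics are not reproduced here, but the core action-selection idea behind coupled empowerment maximization can be sketched: the supporting NPC picks the action that maximises a weighted combination of its own anticipated empowerment and the player's. Everything below (the world model, the empowerment estimates and the weighting) is an invented placeholder, not the talk's implementation.

```python
# Sketch of coupled empowerment maximization as an action-selection rule: the
# supporting NPC picks the action maximising a weighted sum of its own
# anticipated empowerment and the player's. All quantities are placeholders.
def coupled_empowerment_action(actions, simulate, npc_emp, player_emp, w=1.0):
    """simulate(a) -> successor state; npc_emp/player_emp: state -> empowerment."""
    return max(actions,
               key=lambda a: npc_emp(simulate(a)) + w * player_emp(simulate(a)))

# Toy usage: guarding the doorway would not hurt the NPC's own empowerment,
# but it removes the player's options, so the coupled objective rejects it.
actions = ["guard_door", "patrol"]
simulate = lambda a: {"npc_at": "door" if a == "guard_door" else "hall",
                      "player_blocked": a == "guard_door"}
npc_emp = lambda s: 1.0                                  # unaffected either way
player_emp = lambda s: 0.0 if s["player_blocked"] else 1.5
print(coupled_empowerment_action(actions, simulate, npc_emp, player_emp))
# -> "patrol": supporting the player means preserving the player's empowerment.
```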

Measuring the Quality of Interaction

This talk was given on 3rd February 2016 by Frank Foerster.

Abstract
In this talk I will give an (incomplete) overview of recently proposed quantitative measurements which ought to serve as indicators of the quality of interaction between humans and robots, and which form the basis of my future research.

The measurements under consideration, frequently linked to the mirror neuron system, are motor interference, motor contagion and automatic imitation; they might be thought of as particular expressions of the more general construct of motor resonance.
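
These constructs are typically operationalised over recorded movement trajectories; motor interference, for example, is often indexed by the variance of hand position orthogonal to the instructed movement axis, compared between congruent and incongruent observation conditions. A rough Python sketch of such an index follows; the data layout and parameters are assumptions for illustration, not the measurements discussed in the talk.

```python
# Sketch of a common motor-interference index: variance of the hand's
# deviation orthogonal to the instructed movement axis, compared between
# congruent and incongruent observation conditions. The data layout and
# the choice of axis are illustrative assumptions.
import numpy as np

def orthogonal_variance(trajectories, axis=0):
    """trajectories: (n_trials, n_samples, 2) hand positions in the plane.
    Returns the mean per-trial variance along the off-axis coordinate."""
    ortho = trajectories[:, :, 1 - axis]
    return float(np.mean(np.var(ortho, axis=1)))

def interference_index(congruent, incongruent, axis=0):
    """Positive values: more off-axis wobble when observing incongruent
    movements, i.e. evidence of motor interference."""
    return (orthogonal_variance(incongruent, axis)
            - orthogonal_variance(congruent, axis))

# Toy data: 20 trials, 100 samples; incongruent trials get extra lateral noise.
rng = np.random.default_rng(0)
base = np.stack([np.linspace(0, 1, 100), np.zeros(100)], axis=1)
congruent = base + rng.normal(0, 0.01, (20, 100, 2))
incongruent = base + rng.normal(0, 0.01, (20, 100, 2))
incongruent[:, :, 1] += rng.normal(0, 0.03, (20, 100))
print(round(interference_index(congruent, incongruent), 5))  # > 0
```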

Most of these measurements have been taken within interactionally very restrictive setups, and I will therefore reflect critically on some of the underlying assumptions and on the difficulties expected when employing these measurements in more realistic interaction scenarios.

Cognitive and Developmental Systems 2016

This talk was given on 20th January 2016 by Caroline Lyons.

Abstract
“Cognitive and Developmental Systems” is the new title of the IEEE journal formerly called Transactions on Autonomous Mental Development – to which ASRG people contributed. The journal reaffirms the goal of developing human-like capabilities in artificial systems, and a particular target is enabling the autonomous realization of new representations.

In this talk I first look at how far the goal of a human-like model is desirable in robotics. Then I examine an area where the human analogy is essential: language and communication. However, in much research the representation of language is subject to the tyranny of the written word. Since large quantities of written corpora exist, notably transcriptions of children's talk, it is convenient to work with this orthographic data rather than with the audio stream. Yet there is only a partial match between written words and the sounds that are actually heard; written words can be a misleading representation.

“Representational Redescription” is a new challenge. There is scope for further interdisciplinary work with phonological neuroscience on the one hand and speech recognition engineering on the other.

Formal methods for robot behavior verification using linear temporal logic

This talk was given on 13th January 2016 by Joe Saunders.

Abstract
How can we have sophisticated interactions with robots in a safe and trustworthy manner? This is a fundamental research question that must be addressed before the traditional physical safety barrier between robot and human can be removed. There has been some work on safety at the lower mechanical/compliance levels to restrict robot movements near humans. However, the safety of high-level robot behaviors during interaction with humans has not been actively researched.

In this talk I will present work carried out by UH in collaboration with the University of Liverpool on the formal verification of high-level robot behaviors using a technique called 'model checking'. I will outline this verification method and introduce the concept of temporal logic for ensuring the correctness of behaviors. I will also present some results of applying this work to real robotic scenarios in the UH Robot House.
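
As a flavour of the kind of property involved: a requirement such as "whenever the human is close, the robot eventually stops" is written in linear temporal logic as G(human_close -> F robot_stopped). The toy Python sketch below merely evaluates two such patterns over a single finite trace; a real model checker, by contrast, exhaustively explores every behaviour of the model. The proposition names and the trace are invented, not the talk's robot-house properties.

```python
# Toy evaluator for two LTL-style patterns over a single *finite* trace, to
# give a flavour of the properties involved. A real model checker explores
# every behaviour of a model; the proposition names and the trace below are
# invented for illustration.
def globally(trace, prop):
    """G prop: prop holds in every state of the trace."""
    return all(prop(s) for s in trace)

def response(trace, trigger, reaction):
    """G (trigger -> F reaction): every trigger state is eventually
    followed by a reaction state (checked on the finite suffix)."""
    return all(any(reaction(s) for s in trace[i:])
               for i, state in enumerate(trace) if trigger(state))

# Invented robot-house trace: each state records two boolean propositions.
trace = [
    {"human_close": False, "robot_stopped": False},
    {"human_close": True,  "robot_stopped": False},
    {"human_close": True,  "robot_stopped": True},
    {"human_close": False, "robot_stopped": False},
]
# G(human_close -> F robot_stopped): the robot always eventually stops.
print(response(trace, lambda s: s["human_close"], lambda s: s["robot_stopped"]))
# G(robot_stopped -> human_close): the robot only stops when a human is close.
print(globally(trace, lambda s: (not s["robot_stopped"]) or s["human_close"]))
```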
