Articles

How does Sense of Control influence Human-Robot Interaction?

This talk was given on 1st June 2016 by Adeline Chanseau.

Abstract
One of the main purposes of research in Human-Robot Interaction (HRI) is to study how to enhance the interaction between a robot and its user. My PhD research focuses on how a sense of control can influence HRI. It is hoped that this research will help to reduce robot anxiety and improve people's trust in robots. In this RAGS talk I will present the concept of sense of control in HRI and some results from my first experiment.

Overview of the Joint-Action Workshop, April 2016

This talk was given on 25th May 2016 by Frank Foerster.

Abstract
In this presentation I will give an overview of the joint action workshop 'From Human-Human Joint Action to Human-Robot Joint Action and vice-versa', which I attended last month. Various researchers from a range of disciplines, such as robotics (mainly but not exclusively HRI), cognitive and developmental psychology, pragmatics, and philosophy of (joint) action, attempted to tease apart what joint action precisely is and, in the case of HRI, how to operationalise it.

My overall impression from the workshop is that this relatively new area of research is anything but consolidated, both in terms of methodology and terminology. It is still unclear what precisely constitutes joint action, and how lower-level phenomena such as entrainment and motor coordination relate to higher-level concepts originating from the philosophy of action or, in the case of robotics, from planning.

An outline of the different academic fields and researchers present at the workshop will be followed by a summary of selected talks.

Adaptive Robotic Upper Limb Rehabilitation

This talk was given on 27th April 2016 by Azeemsha Thacham Poyil.

Abstract
Adaptability to an individual's ability and effort level is an important aspect of rehabilitation robotics. In conventional physical therapy this is the norm: the therapist adapts the therapy goal to individual needs, based on personal experience, skills, and natural human-human interaction. Existing robotic therapies are designed without sufficient potential for personalisation, e.g. to respond to a patient's pain or state of fatigue. Stroke patients may tire easily owing to reduced muscle, cognitive, or motor capabilities. We therefore aim to adapt rehabilitation training to the level of an individual's contribution to the interaction and to the extent of their tiredness. We hypothesise that the intensity of rehabilitation training can be altered according to the user's fatigue, assessed with the help of electromyogram (EMG) signals. These, together with kinematic data, are studied to understand the patient's current physical state and the effort exerted during HRI sessions. Muscle fatigue can be detected from EMG signals using a range of signal processing algorithms, and the corresponding EMG features can potentially be added to our kinematic benchmarks to alter the adaptive training exercises. Such an adaptive solution could also be used in a wide range of human-machine interactions by tuning the interaction with accurate physiological and kinematic assessment. It is believed that more active contribution to the therapy results in better recovery outcomes.
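As a concrete illustration of the kind of signal processing involved, the sketch below estimates fatigue from the median frequency of the EMG power spectrum, which characteristically shifts downward as a muscle fatigues. The abstract does not say which algorithm the project uses; the function names, the 1 kHz sampling rate, and the 0.8 drop threshold are illustrative assumptions.

    import numpy as np
    from scipy.signal import welch

    def emg_median_frequency(emg_window, fs=1000.0):
        # Median frequency of the EMG power spectrum; a sustained
        # downward shift is a classic indicator of muscle fatigue.
        freqs, psd = welch(emg_window, fs=fs, nperseg=min(256, len(emg_window)))
        cumulative = np.cumsum(psd)
        # Frequency below which half of the total spectral power lies.
        idx = np.searchsorted(cumulative, cumulative[-1] / 2.0)
        return freqs[idx]

    def fatigue_detected(mdf_history, drop_ratio=0.8):
        # Flag fatigue once the latest median frequency falls below
        # drop_ratio times the session baseline (the first window).
        # The 0.8 threshold is an illustrative placeholder, not a
        # value from the talk.
        return mdf_history[-1] < drop_ratio * mdf_history[0]

A controller could compute this over a sliding window of each session's EMG stream and lower the exercise intensity whenever fatigue is flagged.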

Reinforcing experiences on a domestic robot

This talk was given on 18th May 2016 by Nathan Burke.

Abstract
This talk discusses the use of an experience metric space, specifically a modified version of the Interaction History Architecture, on a domestic robot. It introduces some difficulties in using such a system and methods of circumventing them. Examples from trials in a simulated robot house will be shown, with emphasis on the robot using such a system to discover the user's preferred robot behaviour.
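The abstract does not detail the architecture, so the following is only a minimal sketch of the general idea behind an experience metric space: past experiences are stored as feature vectors, each tied to the behaviour executed and a reinforcement value updated from user feedback, and the robot reuses the behaviour of the nearest positively reinforced experience. The class and method names are hypothetical, and the Euclidean distance is a stand-in for the information distance used by the actual Interaction History Architecture.

    import numpy as np

    class ExperienceMemory:
        # Each entry is [feature vector, behaviour, reinforcement value].
        def __init__(self):
            self.experiences = []

        def add(self, features, behaviour, reward):
            self.experiences.append([np.asarray(features, float), behaviour, reward])

        def select_behaviour(self, current_features, default):
            # Return the behaviour tied to the nearest positively
            # reinforced past experience, or a default if none match.
            current = np.asarray(current_features, float)
            best, best_dist = None, np.inf
            for feats, behaviour, value in self.experiences:
                dist = np.linalg.norm(current - feats)  # stand-in metric
                if value > 0 and dist < best_dist:
                    best, best_dist = behaviour, dist
            return best if best is not None else default

        def reinforce(self, index, reward, rate=0.5):
            # Blend user feedback into the stored reinforcement value.
            self.experiences[index][2] += rate * (reward - self.experiences[index][2])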

An Analysis of Perceptual Cues in Robot Group Selection Tasks

This talk was given on 20th April 2016 by Alessandra Rossi.

Abstract
The aim of the proposed investigation is to provide users with the capability of creating robot teams "on-the-fly" using grouping strategies expressed through speech.

Our working hypothesis is that people are inclined to assemble objects into macro-entities, or groups, according to perceptual principles. We observed the actual linguistic utterances used by individuals in a test environment, showing that the type of robots and their mutual arrangement can affect both the choice of elements to form a team and the way that choice is made. Moreover, we provide initial insight into the capabilities a robot needs in order to reason about its membership in a team.
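Although the talk does not describe a specific algorithm, the perceptual principles it refers to (e.g. the Gestalt principles of proximity and similarity) can be sketched in code: robots of the same type that stand close together fall into one candidate team. The Robot class, the distance threshold, and the single-linkage strategy are all illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Robot:
        name: str
        kind: str   # e.g. "humanoid", "wheeled"
        x: float
        y: float

    def group_robots(robots, threshold=1.0):
        # Incremental single-linkage grouping: a robot joins (and may
        # merge) any group containing a same-kind robot within
        # `threshold` distance, combining the proximity and
        # similarity principles. Units are hypothetical.
        groups = []
        for r in robots:
            merged = None
            for g in groups:
                if any(o.kind == r.kind and
                       (r.x - o.x) ** 2 + (r.y - o.y) ** 2 <= threshold ** 2
                       for o in g):
                    if merged is None:
                        g.append(r)
                        merged = g
                    else:
                        merged.extend(g)
                        g.clear()
            groups = [g for g in groups if g]
            if merged is None:
                groups.append([r])
        return groups

A spoken command such as "take the two robots on the left" could then be resolved against the groups this produces rather than against individual robots.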
