My blog tends to consist of summaries of projects I am currently involved in or papers I have recently published.

Prototyping voice user interfaces

The recent growth in popularity of Voice User Interfaces (VUIs), from smartphone assistants (e.g. Siri) through to smart speakers (e.g. Amazon Echo), has led to a resurgence of HCI research examining the design and use of novel technologies such as ‘natural language’ interfaces. A tried and tested method in research with natural language interfaces is the Wizard of Oz approach: presenting the user with what seems to be a fully functional system, when in reality parts of the ‘intelligence’ of the system are secretly human-powered.
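A Wizard of Oz dialogue turn can be sketched in a few lines. This is a hypothetical illustration, not any particular study's setup: the participant sees only the "system" reply, while the response is actually routed to a hidden human wizard (stood in for here by a callable).

```python
def wizard_of_oz_turn(user_utterance, wizard_respond):
    """Route one dialogue turn through the hidden wizard.

    `wizard_respond` stands in for the wizard's console; in a real
    study it would block while the wizard reads the utterance and
    types a reply on the participant's behalf.
    """
    # Log both sides of the turn so the interaction can be analysed later.
    transcript = {"user": user_utterance}
    transcript["system"] = wizard_respond(user_utterance)
    return transcript

# The participant only ever hears/sees the "system" reply, so the
# prototype appears to understand natural language on its own.
turn = wizard_of_oz_turn(
    "What's the weather like tomorrow?",
    wizard_respond=lambda u: "Tomorrow looks sunny, with a high of 21.",
)
print(turn["system"])
```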

Supporting discoverability in voice interfaces

Discoverability is the ability of users to find and execute features through a user interface, and it is often considered an aspect of learnability. It’s also a common problem many of us encounter with screen-less devices, such as smart speakers: users often won’t know what such a device can do without going through some trial and error, looking up information online, or receiving help from another person. In the world of graphical user interfaces, elements like menus, buttons, links, and dialogs are designed to help users quickly discover what a system can do. Voice interfaces, on the other hand, are largely invisible, except for, say, a status indicator, and thus we cannot rely on these methods to help users.
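One common workaround, sketched below as a hypothetical example (the capability list and function name are my own, not from any particular product), is to have the skill surface a small rotating sample of supported commands when asked, since there is no menu for the user to browse.

```python
import random

# Illustrative capability list for a screen-less voice assistant.
CAPABILITIES = [
    "set a timer",
    "play a radio station",
    "add items to your shopping list",
    "tell you the weather",
]

def suggest_features(capabilities, k=2, rng=random):
    """Return a spoken hint listing k randomly sampled features.

    Sampling (rather than reading the full list) keeps the spoken
    response short, which matters given the ephemerality of speech.
    """
    sample = rng.sample(capabilities, k)
    return "For example, I can " + " or ".join(sample) + "."

print(suggest_features(CAPABILITIES))
```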

Getting it done with a voice interface

I wandered into examining the use and design of voice interfaces a few years ago during my PhD (see this paper or the summary blog post). The rapid mass marketisation of the technology has led to research paper after research paper and Medium post after Medium post in quick succession, but there is seemingly no “golden” set of usability heuristics specific to voice interfaces. Rather, so far there are several re-examinations of similar rules that were crafted for GUIs (but the times, they are a-changin’). Voice interfaces are different for a plethora of well-rehearsed reasons, but much of this is to do with the invisibility and ephemerality of speech.

Cleaning a factory with robots

I am currently involved with the RoboClean project at the University of Nottingham, which is investigating the potential of human-robot collaboration, integrated with IoT smart sensors, for cleaning and allergen detection on factory floors. The outcomes of this project will include the design, implementation, and evaluation of an interactive system that cleans factory floors alongside human workers while performing online detection of allergens.

Predicting consumption in the home

The hyperbole of the “smart refrigerator” has been propelled into the mainstream rhetoric of the IoT-connected future, in part because many of the pieces of such a fanciful device seem to be coming together. The idea of a fridge that can measure the consumption of items and automatically re-order them for home delivery “seems inevitable” given that we already have partially automated re-ordering through Dash buttons, other IoT devices that track product waste, door locks that let in delivery couriers, and internet-connected refrigerators.