Glass User Interface Pattern

At school I am involved in yet another project. We're working in a team of six, and the task is to provide assistance for power plant repair and maintenance procedures. The customer wants to know whether it's possible to replace paper-based instruction manuals with Smart Glasses using Augmented Reality. Until now, such case studies have focused mostly on showing the instructions to the user as augmented objects. But once the procedure gets more complicated, there is a need for more sophisticated user interaction.

For a good Augmented Reality experience, the following need to be taken into consideration.

Since our interface focuses on the execution of mostly mechanical tasks, there are some limitations.

Because of these limitations, the proposed way for user interaction will use only speech recognition for input. The interaction differs from a human-to-human conversation in that the words to be used are clearly defined and have a definite effect. One could say that the speech input merely replaces the physical buttons you would touch on a smartphone.
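
To make the idea concrete, here is a minimal TypeScript sketch of such a fixed vocabulary. The command words, the step-handling functions and the way the recognizer delivers its transcript are all assumptions of mine, not part of any particular glasses SDK.

```typescript
// A minimal sketch of a fixed command vocabulary (all names are made up).

let currentStep = 0;

function showStep(step: number): void {
  currentStep = Math.max(0, step);
  console.log(`Showing instruction step ${currentStep}`);
}

function markStepCompleted(step: number): void {
  console.log(`Step ${step} completed`);
}

// Every allowed phrase maps to exactly one effect, like a button tap would.
const commands: Record<string, () => void> = {
  "next":   () => showStep(currentStep + 1),
  "back":   () => showStep(currentStep - 1),
  "repeat": () => showStep(currentStep),
  "done":   () => markStepCompleted(currentStep),
};

// Called with the transcript delivered by whatever recognizer the glasses provide.
function onSpeech(transcript: string): void {
  const word = transcript.trim().toLowerCase();
  const handler = commands[word];
  if (handler) {
    handler(); // a defined word has a definite effect
  }
  // Anything outside the vocabulary is simply ignored.
}

onSpeech("next"); // → Showing instruction step 1
```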

Separation of Concerns

The things on the screen can be divided into three categories.

Different Objects on Screen

Information is the reason why you are using the glasses. Everything that can also be found in the instruction manual is information. Function elements let you interact with the glasses and tell them what information you need. Lastly, the state elements are the way the glasses give you feedback.

Human Computer Interaction

In an Augmented Reality application the information lies in the objects being augmented onto the real world. But because not all of these objects can possibly be shown at the same time and not everybody wants to see the same things, there is a need for function and state elements on the otherwise transparent screen.

On touch or mouse interfaces you can recognize function elements as clickable buttons with text or icons in them, while state is usually shown as plain icons or text. In the context of a speech interface, icons have the problem that each user will say a different word for the same icon. Therefore, the elements have to look like this:

Function
  • Text

Information
  • Augmented Objects

State
  • Icons / Graphics
  • Questions?
  • Notifications!
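
One possible way to model this separation in code is to give each category its own type. The TypeScript sketch below is only an illustration; all the type and field names are made up.

```typescript
// Hypothetical model of the three element categories.

// Information: the augmented objects themselves, anchored in the real world.
interface InformationElement {
  kind: "information";
  label: string;                                // e.g. "valve V-17"
  anchor: { x: number; y: number; z: number };  // position in the scene
}

// Function: something the user can trigger, always labelled with the word to say.
interface FunctionElement {
  kind: "function";
  spokenCommand: string;  // the exact word the user has to say
  action: () => void;
}

// State: feedback from the glasses to the user; shown, never spoken back.
interface StateElement {
  kind: "state";
  icon?: string;     // icons/graphics are fine here, since nobody has to name them
  message?: string;  // questions and notifications as plain text
}

type ScreenElement = InformationElement | FunctionElement | StateElement;
```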

If it's not possible to demonstrate all the instructions (information) using Augmented Reality, it's not recommended to use Smart Glasses. The cost of creating such an application would not be justified.

Common Interaction Elements

Here are some well-known interaction elements from HTML, ported to the world of smart glasses and speech recognition.

Checkbox

Smart Glasses Checkbox
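
A checkbox driven by speech might simply show its label as text and toggle when that label is spoken. The following TypeScript sketch is one possible shape, not taken from any real glasses framework.

```typescript
// Hypothetical speech-driven checkbox: the visible label doubles as the word to say.
class SpeechCheckbox {
  checked = false;

  constructor(public label: string) {}

  // Saying the label toggles the box, mirroring a click on an HTML checkbox.
  handleSpeech(transcript: string): void {
    if (transcript.trim().toLowerCase() === this.label.toLowerCase()) {
      this.checked = !this.checked;
    }
  }

  render(): string {
    return `${this.checked ? "[x]" : "[ ]"} ${this.label}`;
  }
}

// Usage: a maintenance step that has to be confirmed out loud.
const stepDone = new SpeechCheckbox("bolt tightened");
stepDone.handleSpeech("bolt tightened");
console.log(stepDone.render()); // [x] bolt tightened
```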

Select

Smart Glasses Select
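
A select works the same way: the options are rendered as text, and only exactly those words are accepted as input. Again, just a sketch with made-up names.

```typescript
// Hypothetical speech-driven select: options are shown as text, and only those words are valid input.
class SpeechSelect {
  selected: string | null = null;

  constructor(public prompt: string, public options: string[]) {}

  handleSpeech(transcript: string): void {
    const word = transcript.trim().toLowerCase();
    const match = this.options.find(o => o.toLowerCase() === word);
    if (match) {
      this.selected = match; // only a listed option can be chosen
    }
  }

  render(): string {
    return `${this.prompt}\n` +
      this.options.map(o => (o === this.selected ? `> ${o}` : `  ${o}`)).join("\n");
  }
}

// Usage: choosing which pump to inspect.
const pump = new SpeechSelect("Which pump?", ["pump one", "pump two", "pump three"]);
pump.handleSpeech("pump two");
console.log(pump.render());
```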

Readability

Reading text written on top of the real world, which can have any color, can be difficult.

AR Text Readability

The choice depends on the case at hand.
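
Two common ways to deal with this are drawing the text with a contrasting outline or putting it on a semi-transparent backdrop. The canvas sketch below illustrates both variants; it is only an example, not the approach of any particular glasses platform.

```typescript
// Two simple ways to keep overlay text readable on top of an arbitrary background.
function drawReadableText(ctx: CanvasRenderingContext2D, text: string, x: number, y: number): void {
  ctx.font = "24px sans-serif";

  // Variant 1: white text with a dark outline survives both light and dark backgrounds.
  ctx.lineWidth = 4;
  ctx.strokeStyle = "black";
  ctx.strokeText(text, x, y);
  ctx.fillStyle = "white";
  ctx.fillText(text, x, y);

  // Variant 2: a semi-transparent box behind the text, at the cost of hiding more of the scene.
  const width = ctx.measureText(text).width;
  ctx.fillStyle = "rgba(0, 0, 0, 0.5)";
  ctx.fillRect(x - 4, y + 16, width + 8, 32);
  ctx.fillStyle = "white";
  ctx.fillText(text, x, y + 40);
}
```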

Icons by Diego Naive & Convoy from the Noun Project

December 15, 2014

