Design Engineering
Showcase 2021

Moception

Tags
Inclusive Design
Human-Computer Interaction
Accessibility

Project Details

Course
Global Innovation Design
Supervisor
Dr David Boyle
Theme
Inclusive interaction
Links
LinkedIn
Instagram

Moception is a single-handed, eyes-free text entry and editing method that combines speech and mid-air gesture input. Speech-to-text, initiated by intuitive gestures, is used to enter text content and to correct unwanted text with an ‘audio-patching’ technique. Mid-air gestures detected by a wrist-mounted wearable are used to navigate through the text and to perform discrete commands.
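As a rough illustration only, the Python sketch below shows one way such a hybrid controller could route input: gestures start and stop dictation, recognised speech is appended as new text, and a selected span can be overwritten by fresh speech in the spirit of audio patching. The gesture names, handler methods, and internal state are assumptions for the example and are not taken from the Moception implementation.

# A minimal, hypothetical sketch of a hybrid speech + gesture controller.
# The gesture names, handler methods, and internal state are assumptions
# for illustration and are not taken from the Moception implementation.

class HybridInputSketch:
    def __init__(self):
        self.text = ""          # current text buffer
        self.dictating = False  # whether speech input is active
        self.selection = None   # (start, end) span marked for correction

    def on_gesture(self, gesture: str) -> None:
        # Gestures switch modes and issue discrete commands.
        if gesture == "start_dictation":
            self.dictating = True
        elif gesture == "stop_dictation":
            self.dictating = False

    def on_speech(self, recognised: str) -> None:
        # Recognised speech either patches a selected span or appends text.
        if not self.dictating:
            return
        if self.selection is not None:
            start, end = self.selection
            self.text = self.text[:start] + recognised + self.text[end:]
            self.selection = None
        else:
            self.text += recognised


controller = HybridInputSketch()
controller.on_gesture("start_dictation")
controller.on_speech("the quick brown fox")
controller.selection = (4, 9)     # pretend "quick" was misrecognised
controller.on_speech("quiet")     # patch the selected span with new speech
print(controller.text)            # -> "the quiet brown fox"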

Find out more about Moception here

Text entry beyond vision

Text entry and editing have always been cumbersome for visually impaired users. Current eyes-free text entry solutions place high cognitive and motor demands on the user. Mainstream touchscreen-based solutions usually require both hands, making them difficult to use for visually impaired users who hold a cane in one hand.

Wrist-worn wearable:

Moception presents a wrist-worn wearable for gesture recognition and audio/haptic feedback.

Wrist-rotation:

Even with our eyes closed, we can still tell the positions of our body parts with confidence. Taking advantage of this proprioception, Moception maps the spatial layout of a line of text onto the spherical space under the palm, which improves visually impaired users’ awareness and perception of the text layout. By rotating the wrist, users can intuitively navigate through a line of text.
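To illustrate the idea (not the project's actual implementation), the Python sketch below maps a wrist-rotation angle to a caret position within the current line; the rotation range and function names are assumptions chosen for the example.

# A minimal sketch (not the project's code) of mapping a wrist-rotation
# angle reported by a wearable to a caret position within the current line.
# The usable rotation range of +/-60 degrees is an assumption for illustration.

def angle_to_caret(angle_deg: float, line_length: int,
                   min_angle: float = -60.0, max_angle: float = 60.0) -> int:
    """Map a wrist-rotation angle to a character index in the current line."""
    if line_length == 0:
        return 0
    # Clamp the sensor reading to the usable rotation range.
    clamped = max(min_angle, min(max_angle, angle_deg))
    # Normalise to [0, 1], then scale so equal rotation steps correspond
    # to equal steps through the line of text.
    t = (clamped - min_angle) / (max_angle - min_angle)
    return min(line_length - 1, int(t * line_length))


line = "the quick brown fox"
print(angle_to_caret(-60.0, len(line)))  # fully rotated one way -> index 0
print(angle_to_caret(0.0, len(line)))    # neutral wrist -> middle of the line
print(angle_to_caret(60.0, len(line)))   # fully rotated the other way -> last index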

Gestures for discrete commands:

Low-key, intuitive gestures such as swiping and drawing a line were defined by 10 visually impaired participants in a gesture elicitation study, following the participatory design paradigm proposed by Wobbrock et al. Each gesture was mapped to a basic command of the text entry system.
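To show what such a mapping might look like, the Python sketch below binds a few hypothetical gesture labels to placeholder commands; the real gesture set and command vocabulary are the ones elicited in the study, not the names used here.

# A minimal sketch with hypothetical gesture labels and commands; the real
# gesture set and command vocabulary are those elicited in the study.

from typing import Callable, Dict

COMMANDS: Dict[str, Callable[[], None]] = {
    "swipe_left": lambda: print("move to previous word"),
    "swipe_right": lambda: print("move to next word"),
    "draw_line": lambda: print("select current line"),
}

def dispatch(gesture: str) -> None:
    """Run the command bound to a recognised gesture, if any."""
    action = COMMANDS.get(gesture)
    if action is not None:
        action()
    else:
        print(f"unmapped gesture ignored: {gesture}")


dispatch("swipe_left")   # -> move to previous word
dispatch("pinch")        # -> unmapped gesture ignored: pinch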

The final working prototype was built in Unity with a Leap Motion Controller for gesture recognition and evaluated with 4 visually impaired participants. The average completion time of the “Enter-review-edit” task was reduced by 53.2% compared with the existing speech-based input method on the iPhone. Moception was also perceived as less effortful and more intuitive to use.