Real-time object recognition integrated with a high-precision haptic feedback interface

Published Sep 12, 2019. 3 minutes to read.
Tagged with Accessibility.

This study is a living document and a very early draft.

This study describes a prototype device that uses haptic feedback and sensory input to convey information about the wearer's surroundings. The primary intention behind the design of this device is to provide environmental information to individuals who are visually impaired, or both visually and hearing impaired. The device has potential for other applications as well - wherever an artificial sensor can perceive the environment better than human vision (smoke, darkness, etc.).

A physical implementation of the device would be a combination of haptic feedback glove(s) and other small sensory devices attached to the wearer.

What makes this device different from similar concepts is the combination of advanced microelectromechanical systems design and machine learning to provide very high precision haptic, environmental and contextual feedback in real time. The device also allows the wearer to dynamically adjust its parameters using micro-gestures, for high contextual adaptability.

Sensory input packet

The sensory input packet consists of several real-time sensors arranged around, in, and on the interface itself, as well as on the wearer. The goal of these sensors is to capture information about the wearer's surroundings.

LIDAR based environment measurement

The LIDAR module provides real-time input for the haptic feedback interface. Objects in the field of view of the LIDAR module are converted into a digital representation, which is in turn supplied to the haptic feedback system.
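A minimal sketch of how such a conversion could work, assuming a 16 × 16 actuator grid and a fixed usable depth range (both are illustrative values, not taken from the prototype): a LIDAR depth frame is downsampled into per-actuator intensities, with closer objects producing stronger actuation.

```python
import numpy as np

# Illustrative actuator grid resolution and usable depth range (assumptions,
# not values specified by the study).
GRID_ROWS, GRID_COLS = 16, 16
MIN_DEPTH_M, MAX_DEPTH_M = 0.2, 5.0

def depth_map_to_haptic_grid(depth_map: np.ndarray) -> np.ndarray:
    """Downsample a LIDAR depth map into per-actuator intensities in [0, 1].

    Closer objects produce stronger actuation; points at or beyond MAX_DEPTH_M
    produce none.
    """
    rows, cols = depth_map.shape
    row_bins = np.array_split(np.arange(rows), GRID_ROWS)
    col_bins = np.array_split(np.arange(cols), GRID_COLS)

    grid = np.zeros((GRID_ROWS, GRID_COLS))
    for i, rb in enumerate(row_bins):
        for j, cb in enumerate(col_bins):
            # Use the nearest point in each cell so thin obstacles are not averaged away.
            nearest = depth_map[np.ix_(rb, cb)].min()
            clipped = np.clip(nearest, MIN_DEPTH_M, MAX_DEPTH_M)
            grid[i, j] = 1.0 - (clipped - MIN_DEPTH_M) / (MAX_DEPTH_M - MIN_DEPTH_M)
    return grid

# Example: a synthetic 64 x 64 depth frame with an obstacle at about 1 m in the centre.
frame = np.full((64, 64), 4.5)
frame[24:40, 24:40] = 1.0
print(depth_map_to_haptic_grid(frame).round(2))
```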

Optical object recognition

In addition to the textures and multi-dimensional input supplied to the haptic image display, the device contains an optical image recognition module. It allows for near real-time object recognition, giving the wearer additional context about an object beyond the information presented through haptic feedback.
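The study does not name a specific recognition model, so the detector below is only a stub; the sketch shows the surrounding flow of turning confident detections into short descriptions that can be handed to the secondary output channels.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                        # e.g. "potted plant"
    confidence: float                 # model confidence, 0..1
    bbox: tuple[int, int, int, int]   # (x, y, width, height) in frame pixels

def describe_detections(detections: list[Detection], min_confidence: float = 0.6) -> list[str]:
    """Keep only confident detections and return their labels for the wearer."""
    return [d.label for d in detections if d.confidence >= min_confidence]

# Example with stubbed detector output; in the prototype these would come from
# the optical recognition model running on each camera frame.
frame_detections = [
    Detection("potted plant", 0.92, (120, 80, 60, 90)),
    Detection("chair", 0.41, (300, 150, 110, 140)),
]
print(describe_detections(frame_detections))  # -> ['potted plant']
```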

Using additional sensors to provide contextual information

Optical sensors embedded in the device are used not only for object recognition but also to provide context about the environment observed by the haptic feedback device.

For example, object recognition would determine that the wearer is pointing towards a plant and provide haptic imagery of it, while analysis of the optical sensor input would provide contextual information about the plant, such as its color and other properties.
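One way such contextual analysis could look, sketched with an illustrative reference-colour table (the table and the averaging approach are assumptions, not part of the study): the image region around a detected object is reduced to a named dominant colour.

```python
import numpy as np

# Rough reference colours used to name the dominant hue (an illustrative
# mapping, not part of the study).
REFERENCE_COLOURS = {
    "red": (200, 40, 40),
    "green": (50, 160, 60),
    "blue": (50, 80, 200),
    "yellow": (220, 200, 60),
    "white": (240, 240, 240),
    "black": (20, 20, 20),
}

def dominant_colour_name(region: np.ndarray) -> str:
    """Name the average colour of an RGB image region (H x W x 3, uint8)."""
    mean_rgb = region.reshape(-1, 3).mean(axis=0)
    distances = {name: np.linalg.norm(mean_rgb - np.array(ref))
                 for name, ref in REFERENCE_COLOURS.items()}
    return min(distances, key=distances.get)

# Example: a mostly green region, as might be cropped around a detected plant.
plant_crop = np.full((32, 32, 3), (60, 150, 70), dtype=np.uint8)
print(dominant_colour_name(plant_crop))  # -> "green"
```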

A thermal sensor is embedded in the device, alongside the other sensors, to provide the wearer with narrow-field temperature context. For example, a near-field temperature sensor could warn the wearer of very hot or very cold items in its field of view, such as something the wearer is about to touch.
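A minimal sketch of that warning logic, assuming illustrative hot and cold thresholds (the study does not specify actual values):

```python
from typing import Optional

# Illustrative near-field temperature thresholds; the actual values the device
# would use are not specified in the study.
HOT_THRESHOLD_C = 55.0
COLD_THRESHOLD_C = -5.0

def thermal_warning(temperature_c: float) -> Optional[str]:
    """Return a warning when the surface ahead is dangerously hot or cold."""
    if temperature_c >= HOT_THRESHOLD_C:
        return f"Warning: hot surface ahead ({temperature_c:.0f} C)"
    if temperature_c <= COLD_THRESHOLD_C:
        return f"Warning: cold surface ahead ({temperature_c:.0f} C)"
    return None

print(thermal_warning(72.0))  # -> "Warning: hot surface ahead (72 C)"
```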

Micro-gestures and device control

The device is controlled using micro-gestures, allowing the wearer to adjust sensitivity, depth, field of view, and other properties and functionality of the device in real time.
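A minimal sketch of how recognised micro-gestures might be mapped onto parameter adjustments; the gesture names, step sizes, and parameter ranges below are illustrative assumptions, not part of the prototype specification.

```python
from dataclasses import dataclass

@dataclass
class DeviceSettings:
    sensitivity: float = 0.5        # haptic intensity scaling, 0..1
    depth_m: float = 3.0            # maximum sensed depth in metres
    field_of_view_deg: float = 60.0

# Each recognised gesture adjusts one setting within a bounded range.
GESTURE_ACTIONS = {
    "pinch_in":   lambda s: setattr(s, "field_of_view_deg", max(20.0, s.field_of_view_deg - 10.0)),
    "pinch_out":  lambda s: setattr(s, "field_of_view_deg", min(120.0, s.field_of_view_deg + 10.0)),
    "swipe_up":   lambda s: setattr(s, "sensitivity", min(1.0, s.sensitivity + 0.1)),
    "swipe_down": lambda s: setattr(s, "sensitivity", max(0.0, s.sensitivity - 0.1)),
}

def apply_gesture(settings: DeviceSettings, gesture: str) -> DeviceSettings:
    """Apply a recognised micro-gesture to the current device settings."""
    action = GESTURE_ACTIONS.get(gesture)
    if action is not None:
        action(settings)
    return settings

settings = DeviceSettings()
apply_gesture(settings, "swipe_up")
apply_gesture(settings, "pinch_in")
print(settings)
```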

Haptic feedback using layered electroactive polymer arrays

The inner lining of the device is composed of layered electroactive polymer arrays arranged in a grid formation. Each layer contains electroactive polymer actuators capable of exerting a different force depending on the layer's position relative to the other layers.

Layers are arranged so that less powerful and physically smaller actuators are closer to the haptic surface, while more powerful and physically larger actuators sit further from it. Each layer, and each actuator in each grid, can be controlled independently, so that the effects of individual actuators combine to produce a haptic image.
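One way to think about combining two such layers, sketched with an illustrative coarse 4 × 4 high-force layer behind a fine 16 × 16 low-force layer (the resolutions, force shares, and the decomposition itself are assumptions, not the prototype's control scheme): the coarse layer reproduces the low-resolution bulk of the haptic image, and the fine layer adds the residual detail.

```python
import numpy as np

FINE_SHAPE = (16, 16)     # small actuators near the haptic surface
COARSE_SHAPE = (4, 4)     # larger, more powerful actuators behind them
COARSE_MAX_FORCE = 0.8    # fraction of the target force the coarse layer may supply
FINE_MAX_FORCE = 0.2      # fraction the fine layer may add on top

def split_layers(target: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a target haptic image (values 0..1) into coarse and fine layer commands."""
    # Coarse layer reproduces the low-resolution average of the image.
    coarse = target.reshape(COARSE_SHAPE[0], FINE_SHAPE[0] // COARSE_SHAPE[0],
                            COARSE_SHAPE[1], FINE_SHAPE[1] // COARSE_SHAPE[1]).mean(axis=(1, 3))
    coarse = np.clip(coarse, 0.0, COARSE_MAX_FORCE)
    # Fine layer adds the residual detail on top of the upsampled coarse field.
    upsampled = np.kron(coarse, np.ones((FINE_SHAPE[0] // COARSE_SHAPE[0],
                                         FINE_SHAPE[1] // COARSE_SHAPE[1])))
    fine = np.clip(target - upsampled, 0.0, FINE_MAX_FORCE)
    return coarse, fine

# Example: a small square feature in the centre of the haptic image.
target = np.zeros(FINE_SHAPE)
target[6:10, 6:10] = 1.0
coarse_cmd, fine_cmd = split_layers(target)
print(coarse_cmd.round(2))
print(fine_cmd.round(2))
```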

Secondary output of contextual information

All contextual information about the device's current view is output as audio (synthesized speech), via a secondary haptic output device (a refreshable braille display), or both.
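A minimal sketch of routing that contextual information to the two secondary output channels; the speech and braille back ends below are placeholders, since the study does not name specific hardware or libraries.

```python
def speak(text: str) -> None:
    print(f"[speech] {text}")       # stand-in for a text-to-speech engine

def to_braille_display(text: str) -> None:
    print(f"[braille] {text}")      # stand-in for a refreshable braille display driver

def output_context(text: str, use_speech: bool = True, use_braille: bool = True) -> None:
    """Send contextual information to whichever secondary output channels are enabled."""
    if use_speech:
        speak(text)
    if use_braille:
        to_braille_display(text)

output_context("Potted plant, green, about one metre ahead")
```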

© Matīss Treinis 2022, all rights, some wrongs and most of the lefts reserved.
Unless explicitly stated otherwise, this article is licensed under a Creative Commons Attribution 4.0 International License.
All software code samples available in this page as part of the article content (code snippets and similar) are licensed under the terms and conditions of Apache License, version 2.0.