
Haptic Feedback Gloves

Project Description

The goal of this project is to help a visually impaired person better navigate their environment. An Intel RealSense D435 depth-sensing camera, mounted on the front of the user with a chest harness, sends depth data to a Raspberry Pi 4. The Raspberry Pi processes this information in Python, dividing the field of view into six regions and determining which of them contain obstacles. This information drives motors worn on gloves on the user's hands: the left hand's motors fire for obstacles in the upper half of the field of view, and the right hand's motors fire for obstacles in the lower half.
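As a rough illustration of this flow, the sketch below shows how a capture-and-process loop might look using the pyrealsense2 library. The helper names depth_to_regions() and drive_motors() are illustrative placeholders for the processing and motor-control steps described in the sections below, not the project's actual code.

```python
import numpy as np
import pyrealsense2 as rs

# Illustrative main loop: capture a depth frame, reduce it to a 2x3 grid of
# obstacle flags, and drive the glove motors. depth_to_regions() and
# drive_motors() are hypothetical helpers standing in for the steps
# described in the Software and Hardware sections.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 424, 240, rs.format.z16, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        if not depth_frame:
            continue
        # Depth data arrives as a 240x424 array of 16-bit distances in mm.
        depth = np.asanyarray(depth_frame.get_data())
        regions = depth_to_regions(depth)  # 2x3 grid: True = obstacle
        drive_motors(regions)              # top row -> left hand, bottom row -> right hand
finally:
    pipeline.stop()
```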

Hardware

The system is run by a Raspberry Pi 4, which takes input from an Intel RealSense D435 depth-sensing camera. A portable, rechargeable battery powers the Pi, which in turn powers the rest of the hardware. Three motors per hand, connected to the Raspberry Pi, are glued to different fingers of a pair of compression gloves worn by the user. The camera is attached to a chest-mounted harness; the battery and Raspberry Pi are carried in a slightly modified fanny pack.
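Driving a motor from the Pi then amounts to switching the GPIO pin it is wired to. The sketch below shows one way this could look with the RPi.GPIO library; the pin numbers are hypothetical, since the actual wiring is not documented here.

```python
import RPi.GPIO as GPIO

# Hypothetical pin assignments (BCM numbering) for the six vibration motors.
LEFT_HAND_PINS = [17, 27, 22]   # left hand: upper half of the field of view
RIGHT_HAND_PINS = [5, 6, 13]    # right hand: lower half of the field of view

GPIO.setmode(GPIO.BCM)
for pin in LEFT_HAND_PINS + RIGHT_HAND_PINS:
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def drive_motors(regions):
    """Turn each motor on or off from a 2x3 boolean grid of obstacle flags.

    Row 0 (top of the field of view) maps to the left hand,
    row 1 (bottom) maps to the right hand.
    """
    for col, pin in enumerate(LEFT_HAND_PINS):
        GPIO.output(pin, GPIO.HIGH if regions[0][col] else GPIO.LOW)
    for col, pin in enumerate(RIGHT_HAND_PINS):
        GPIO.output(pin, GPIO.HIGH if regions[1][col] else GPIO.LOW)
```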

Software

The camera captures a depth image and presents it as a NumPy array: by default, a 1280×720 2-D array, where 1280 and 720 are the width and height of the depth image in pixels. These dimensions were far too large for our purposes; even the smallest depth image we could obtain directly from the camera was 424×240 pixels. To turn the data into something more meaningful, we trimmed about 30 pixels off each side of this array to eliminate aliasing at the edges of the depth image, then shrank the whole array down to a 2×3 grid, with each cell of the new array corresponding to one motor. This was done in two steps: first, a 6×13 intermediate array was created, where each cell was the average of a 30×30 block of the original array; second, the 2×3 array was generated, where each cell was the minimum of a 3×4 block of the 6×13 intermediate array. The value in each of the six segments was then compared to a maximum distance threshold of 2 meters. This let us pinpoint the highest-priority area so that no more than one motor would fire on each hand at a given time, so as not to overwhelm the user.
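A minimal NumPy sketch of this reduction is shown below. It assumes a 240×424 depth frame in millimeters with 0 marking invalid pixels, and the exact trim offsets are our own choice so that the remaining area divides evenly into 30×30 blocks; it also assumes "highest priority" means the closest region. The original code may differ in these details.

```python
import numpy as np

THRESHOLD_MM = 2000  # 2 m obstacle threshold

def depth_to_regions(depth):
    """Reduce a 240x424 depth frame (mm) to a 2x3 grid of motor commands."""
    # Zero readings mark invalid pixels; for this sketch, treat them as far away.
    depth = depth.astype(float)
    depth[depth == 0] = 10_000.0

    # Trim the noisy edges so the remaining area (180x390) divides evenly
    # into 30x30 blocks. The exact offsets here are illustrative.
    trimmed = depth[30:210, 17:407]

    # Step 1: average each 30x30 block -> 6x13 intermediate grid.
    coarse = trimmed.reshape(6, 30, 13, 30).mean(axis=(1, 3))

    # Step 2: take the minimum (closest reading) of each 3x4 block -> 2x3
    # grid, one cell per motor. The last column is dropped so the blocks
    # divide evenly.
    regions = coarse[:, :12].reshape(2, 3, 3, 4).min(axis=(2, 3))

    # A region contains an obstacle if its closest reading is within 2 m.
    active = regions < THRESHOLD_MM

    # Fire at most one motor per hand: keep only the closest active region
    # in each row (top row -> left hand, bottom row -> right hand).
    commands = np.zeros_like(active)
    for row in range(2):
        if active[row].any():
            commands[row, regions[row].argmin()] = True
    return commands
```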

Images and Demo

The above video shows a simulation of the output that the user would receive when using the system. The video shows a heat-map render of the depth data fed to the Raspberry Pi, with the input for the motors displayed in the array above it. For example, a readout of 1 in front of TL in the array signals that there is an obstacle in the top-left field of view of the depth sensor and that the motor associated with that portion of the field of view would be activated. The white bars on the top and bottom of the video signify each motor turning on in that sector for that duration.

The first part of the video shows a subject walking across the screen and back, which causes the system to turn on both the top and bottom motors in each section. The second part of the video shows a bucket being moved back and forth across the top half of the screen; this time, only the top motors are activated.

Takeaways

A key part of our project was striking a balance between providing the user with as much feedback as possible and providing it in a way that is easy to understand. We made several changes to our design where data was sacrificed for ease of use: first by reducing the display to six sections, and later by preventing more than one motor per hand from firing at once. This process highlighted the sheer amount of visual information a person takes in and processes, and it demonstrated that the capabilities of a project like ours are, relatively speaking, very limited. Even so, the hope is that the information our system provides will still be beneficial to its user.

Team Members

  • Noah Mullane
  • Matthew Rosenbloom
  • Mohammad Ali Raza

Supervisors

  • Jack G. Mottley
  • Daniel Phinney