A wearable translates sign language into English

Researchers are working on a high-tech wearable to bridge the gap between the deaf and those who don’t understand sign language.

Biomedical engineering researchers at Texas A&M University are working on a wrist-worn device that translates sign language. The wearable could bridge the communication gap between the deaf and those who don’t know sign language.

According to Roozbeh Jafari, an associate professor at the university, the device uses a combination of motion sensors and measurements of the electrical activity generated by muscles to interpret hand gestures. Still at the prototype stage, the device can recognise 40 American Sign Language words with nearly 90 per cent accuracy.

Unlike other similar initiatives, Jafari’s device does not use video-based recognition. It relies instead on wearable technology. “Wearables provide a very interesting opportunity in the sense of their tight coupling with the human body,” Jafari says. “Because they are attached to our body, they know quite a bit about us throughout the day, and they can provide us with valuable feedback at the right times. With this in mind, we wanted to develop a technology in the form factor of a watch.”

The device uses two sensors: the first is an inertial sensor that responds to motion. “This sensor plays a major role in discriminating different signs by capturing the user’s hand orientations and hand and arm movements during a gesture,” says Jafari.
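To give a sense of what the inertial side of such a device might compute, here is a minimal Python sketch that turns one window of accelerometer and gyroscope samples into a few orientation and movement features. The function name, array shapes, and feature choices are illustrative assumptions, not the team’s actual pipeline.

```python
import numpy as np

def imu_features(accel, gyro):
    """Summarise one gesture window of inertial data.

    accel, gyro: (N, 3) arrays of accelerometer (g) and gyroscope (deg/s)
    samples. Feature choices here are a simple illustration, not the
    researchers' published method.
    """
    # Approximate hand orientation from the gravity direction (pitch, roll).
    ax, ay, az = accel.mean(axis=0)
    pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    roll = np.arctan2(ay, az)
    # Movement intensity: how much the hand and arm rotate during the sign.
    motion_energy = np.sum(gyro**2, axis=1).mean()
    return np.array([pitch, roll, motion_energy])
```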

The second sensor differentiates between different muscle activities. It is a surface electromyographic (sEMG) sensor, which non-invasively measures the electrical potential generated by muscle activity, Jafari explains.
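The sEMG channel can be summarised in a similarly simple way. The sketch below computes two textbook EMG features, mean absolute value and root mean square, for each electrode channel; again, the names and array shapes are assumed for illustration.

```python
import numpy as np

def semg_features(emg):
    """Summarise one gesture window of surface-EMG data.

    emg: (N, C) array, one column per electrode channel. The feature set
    (mean absolute value and root mean square per channel) is a common
    textbook choice, assumed here for illustration only.
    """
    mav = np.mean(np.abs(emg), axis=0)       # overall activation level
    rms = np.sqrt(np.mean(emg**2, axis=0))   # signal power per channel
    return np.concatenate([mav, rms])
```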

“These two technologies are complementary to each other, and the fusion of these two systems will enhance the recognition accuracy for different signs, making it easier to recognize a large vocabulary of signs,” Jafari adds.
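One plausible reading of that “fusion” is feature-level fusion: concatenate the inertial and sEMG feature vectors and pass the result to a single classifier. The sketch below reuses the two helper functions above and stands in an off-the-shelf support-vector machine; the article does not describe the model or training setup the researchers actually use.

```python
import numpy as np
from sklearn.svm import SVC

def fuse(accel, gyro, emg):
    """Feature-level fusion: one vector describing both sensor modalities."""
    return np.concatenate([imu_features(accel, gyro), semg_features(emg)])

def train_sign_classifier(windows, labels):
    """Fit a classifier on labelled gesture recordings.

    windows: list of (accel, gyro, emg) arrays, one gesture each.
    labels: the sign performed in each window.
    An RBF support-vector machine is an assumed stand-in classifier.
    """
    X = np.stack([fuse(a, g, e) for a, g, e in windows])
    return SVC(kernel="rbf").fit(X, labels)
```

In this kind of setup, adding the sEMG features helps separate signs whose arm trajectories look alike to the inertial sensor but involve different finger and wrist muscle activations, which is consistent with Jafari’s point that the two sensing modalities are complementary.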