Voice interfaces like Alexa have become an integral part of everyday technology, but what about people who don’t communicate using speech? That question inspired software developer Abhishek Singh to create a mod that lets Amazon’s Alexa assistant understand simple sign language commands, according to The Verge.
To make it work, an Amazon Echo is paired with a laptop whose webcam captures Singh’s gestures and decodes them into text.
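The Verge piece doesn’t spell out every step of the pipeline, but for the Echo to act on the decoded text, the laptop presumably has to relay it in a form Alexa can hear. A minimal sketch of that last hop, assuming the browser’s standard Web Speech API is used to read the recognized command aloud (the function name and phrasing here are hypothetical):

```typescript
// Once a sign sequence has been decoded into text, the laptop can "speak" it
// through its speakers so the nearby Echo hears it like any voice command.
// This is one plausible wiring, not a description of Singh's actual code.
function speakToAlexa(command: string): void {
  const utterance = new SpeechSynthesisUtterance(`Alexa, ${command}`);
  utterance.rate = 1.0;                      // normal speaking rate
  window.speechSynthesis.speak(utterance);   // played aloud for the Echo to pick up
}

// Example: a decoded gesture sequence becomes an audible request.
speakToAlexa("what is the weather today");
```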
“If these devices are to become a central way in which we interact with our homes or perform tasks then some thought needs to be given towards those who cannot hear or speak,” Singh said in an interview with The Verge. “Seamless design needs to be inclusive in nature.”
The mod was built with Google’s TensorFlow.js, a library that lets developers build and run machine learning applications in JavaScript. Using that machine-vision tooling, Singh taught his program to recognize specific visual cues by feeding it training data. Because he couldn’t find suitable sign language datasets online, he recorded his own set of basic signs for the computer to learn from.
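In outline, that workflow amounts to training a small classifier on self-recorded examples and then mapping each new webcam frame (or sequence of frames) to the most likely sign. The sketch below shows the general shape of such a TensorFlow.js model; the sign labels, feature format, and layer sizes are illustrative assumptions, not Singh’s actual code:

```typescript
import * as tf from '@tensorflow/tfjs';

// Hypothetical labels for a small, self-recorded sign vocabulary.
const SIGNS = ['alexa', 'weather', 'lights', 'on', 'off'];

// A small dense classifier over fixed-length feature vectors
// (e.g., flattened hand keypoints or pooled image features per frame).
function buildModel(featureLength: number): tf.Sequential {
  const model = tf.sequential();
  model.add(tf.layers.dense({ inputShape: [featureLength], units: 64, activation: 'relu' }));
  model.add(tf.layers.dense({ units: SIGNS.length, activation: 'softmax' }));
  model.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy'] });
  return model;
}

// Train on the homemade dataset: `examples` is [numSamples, featureLength],
// `labels` is one-hot encoded [numSamples, SIGNS.length].
async function train(model: tf.Sequential, examples: tf.Tensor2D, labels: tf.Tensor2D): Promise<void> {
  await model.fit(examples, labels, { epochs: 30, shuffle: true });
}

// At inference time, map a new feature vector to the most likely sign.
function classify(model: tf.Sequential, features: tf.Tensor2D): string {
  const scores = model.predict(features) as tf.Tensor;
  const index = scores.argMax(-1).dataSync()[0];
  return SIGNS[index];
}
```

Because the vocabulary is just a list of labels and a matching set of training clips, extending it is mostly a matter of recording more examples, which fits Singh’s comment that adding signs would be relatively easy.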
The software is a proof of concept and can only recognize the signs Singh has trained it on. He said adding more signs and vocabulary would be relatively easy, and he plans to open-source the code for anyone to use.
“By releasing the code people will be able to download it and build on it further or just be inspired to explore this problem space,” Singh said.
Amazon has already released an update that lets owners of the screen-equipped Echo Show interact with the virtual assistant without any voice commands. Perhaps sign language could be Alexa’s next step.
“There’s no reason that the Amazon Show, or any of the camera-and-screen based voice assistants couldn’t build this functionality right in,” Singh said. “To me that’s probably the ultimate use-case of what this prototype shows.”