This Amazon Echo mod lets Alexa understand sign language

It seems like voice interfaces are going to be a big part of the future of computing, popping up in phones, smart speakers, and even household appliances. But how useful is this technology for people who don’t communicate using speech? Are we creating a system that locks out certain users?

These were the questions that inspired software developer Abhishek Singh to create a mod that lets Amazon’s Alexa assistant understand some simple sign language commands. In a video, Singh demonstrates how the system works. An Amazon Echo is connected to a laptop, with a webcam (and some back-end machine learning software) decoding Singh’s gestures into text and speech.
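As a rough illustration of the output half of that pipeline, the browser’s built-in Web Speech API can display a decoded sign as text and speak it aloud so the Echo picks it up like any other voice command. The sketch below is an assumption about how that relay could work, not Singh’s code; the `relayToEcho` function name and the `transcript` element are placeholders.

```js
// Sketch (assumed, not Singh's code): show the decoded sign as text and
// speak it aloud with the browser's Web Speech API, so a nearby Echo
// hears it as an ordinary voice command.
const transcript = document.getElementById('transcript'); // placeholder element

function relayToEcho(decodedText) {
  transcript.textContent = decodedText;                   // display the gesture as text
  const utterance = new SpeechSynthesisUtterance(decodedText);
  utterance.rate = 0.9;                                    // slightly slower, so the Echo's mics catch it
  window.speechSynthesis.speak(utterance);                 // laptop speakers address Alexa
}
```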

“Seamless design needs to be inclusive in nature.”

Speaking to The Verge, Singh says the project was a “thought experiment” inspired by the recent vogue for voice-based assistants. “If these devices are to become a central way in which we interact with our homes or perform tasks then some thought needs to be given towards those who cannot hear or speak,” says Singh. “Seamless design needs to be inclusive in nature.”

The mod itself was made with the help of Google’s TensorFlow software, specifically TensorFlow.js, which lets users write machine learning applications in JavaScript (making it easier to run them in web browsers). As with any machine vision software, Singh had to teach his program to understand visual signals by feeding it training data. He couldn’t find any sign language datasets online, so he created his own set of basic signals.
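Singh hasn’t yet published his code, but the shape of such a project is easy to sketch with TensorFlow.js. The example below is a minimal illustration under assumed design choices, not his implementation: it uses a pre-trained MobileNet model as a feature extractor with a KNN classifier learning a handful of signs from webcam frames, and the `webcam` element, label names, 0.9 confidence threshold, and the earlier `relayToEcho` helper are all placeholders.

```js
// Minimal sketch, not Singh's released code: learn a few signs by example
// from webcam frames, using a pre-trained MobileNet as a feature extractor
// and a KNN classifier on top (model choice and labels are assumptions).
import * as tf from '@tensorflow/tfjs';                       // registers the WebGL backend
import * as mobilenet from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

const video = document.getElementById('webcam');              // <video autoplay> showing the camera
const classifier = knnClassifier.create();
let net;

async function setup() {
  net = await mobilenet.load();                               // pre-trained image model
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
}

// Capture one training example for a sign (e.g. "weather") from the current frame.
function addExample(label) {
  const activation = net.infer(video, true);                  // MobileNet embedding of the frame
  classifier.addExample(activation, label);
  activation.dispose();
}

// Classify the current frame; if confident enough, hand the word to
// relayToEcho() (the speech-output sketch above) so Alexa hears it spoken aloud.
async function classify() {
  if (classifier.getNumClasses() === 0) return;
  const activation = net.infer(video, true);
  const { label, confidences } = await classifier.predictClass(activation);
  activation.dispose();
  if (confidences[label] > 0.9) relayToEcho(label);
}
```

Training in this style amounts to calling `addExample("weather")` a few dozen times while performing the sign on camera, then running `classify()` on a timer; that example-by-example approach would also make it straightforward to add new vocabulary later.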

The software is just a proof of concept at this point, and can’t read any signs beyond those demoed in the video. But adding more vocabulary is relatively easy, and Singh says he plans to open-source the code and write an explanatory blog post about his work. “By releasing the code people will be able to download it and build on it further or just be inspired to explore this problem space,” he tells The Verge.

Coincidentally, yesterday Amazon released its own update for Alexa that lets users with the screen-equipped Echo Show interact with the virtual assistant without using voice commands. That shows that Amazon is at least beginning to consider how to build accessibility into its voice assistant, and who knows, understanding sign language could be the next step. It’s certainly technically possible, as Singh’s demo shows.

“There’s no reason that the Amazon Show, or any of the camera-and-screen based voice assistants couldn’t build this functionality right in,” says Singh. “To me that’s probably the ultimate use-case of what this prototype shows.”
