An aspect of video calls that many of us take for granted is the way they can switch between feeds to highlight whoever’s speaking. Great — if speaking is how you communicate. Silent speech like sign language doesn’t trigger those algorithms, unfortunately, but this research from Google might change that.

It’s a real-time sign language detection engine that can tell when someone is signing (as opposed to just moving around) and when they’re done. Of course it’s trivial for humans to tell this sort of thing, but it’s harder for a video call system that’s used to just pushing pixels.

A new paper from Google researchers, presented (virtually, of course) at ECCV, shows how it can be done efficiently and with very little latency. It would defeat the point if the sign language detection worked but resulted in delayed or degraded video, so their goal was to make sure the model was both lightweight and reliable.


The system first runs the video through a model called PoseNet, which estimates the positions of the body and limbs in each frame. This simplified visual information (essentially a stick figure) is sent to a model trained on pose data from video of people using German Sign Language, and it compares the live image to what it thinks signing looks like.
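The two-stage pipeline described above — pose estimation feeding a lightweight classifier — can be sketched roughly as follows. This is a simplified stand-in, not the researchers' model: the keypoint layout, threshold values, and the motion-based heuristic standing in for the trained classifier are all assumptions for illustration.

```python
import numpy as np

def pose_motion_feature(prev_pose, cur_pose):
    """Mean per-landmark displacement between two consecutive frames.

    Each pose is an (N, 2) array of normalized (x, y) keypoint
    coordinates, the kind of "stick figure" a PoseNet-style estimator
    produces for the body and limbs.
    """
    return float(np.linalg.norm(cur_pose - prev_pose, axis=1).mean())

def is_signing(motion_history, threshold=0.02, window=5):
    """Decide whether recent frames look like active signing.

    The real system compares pose features against a model trained on
    sign language video; this placeholder simply flags sustained
    movement by smoothing the last few motion values and thresholding.
    """
    recent = motion_history[-window:]
    return sum(recent) / len(recent) > threshold
```

The key design point survives even in this sketch: because only a handful of keypoint coordinates per frame reach the classifier (rather than raw pixels), the per-frame work is tiny, which is how the system stays low-latency.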

Image showing automatic detection of a person signing.

Image Credits: Google

This simple process already produces 80 percent accuracy in predicting whether a person is signing or not, and with some additional optimizing gets up to 91.5 percent accuracy. Considering how the “active speaker” detection on most calls is only so-so at telling whether a person is talking or coughing, those numbers are pretty respectable.

In order to work without adding some new “a person is signing” signal to existing calls, the system pulls a clever little trick. It uses a virtual audio source to generate a 20 kHz tone, which is outside the range of human hearing but noticed by computer audio systems. This signal is generated whenever the person is signing, making the speech detection algorithms think that they are speaking out loud.
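Generating that inaudible tone is straightforward to sketch. The snippet below builds a 20 kHz sine buffer at a standard 44.1 kHz sample rate (whose 22.05 kHz Nyquist limit comfortably contains the tone); the sample rate, amplitude, and the omitted routing to a virtual audio device are my assumptions, not details from the paper.

```python
import numpy as np

SAMPLE_RATE = 44_100  # Hz; Nyquist limit of 22.05 kHz covers the tone
TONE_HZ = 20_000      # above the hearing range of most adults

def ultrasonic_tone(duration_s, amplitude=0.2):
    """Return a 20 kHz sine wave as float32 PCM samples.

    In a setup like the one described above, a buffer like this would
    be pushed to a virtual microphone whenever signing is detected, so
    the call platform's voice-activity detection treats the signer as
    an active speaker. The virtual-device plumbing is OS-specific and
    omitted here.
    """
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return (amplitude * np.sin(2 * np.pi * TONE_HZ * t)).astype(np.float32)
```

The elegance of the trick is that it needs no changes on the receiving end: every existing "active speaker" heuristic already listens for audio energy, and a 20 kHz tone supplies it without the other participants hearing anything.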

Right now it’s just a demo, which you can try here, but there doesn’t seem to be any reason why it couldn’t be built right into existing video call systems or even as an app that piggybacks on them. You can read the full paper here.



TechCrunch
