The world is getting better at combining machine learning and computer vision, but it's not just cars and drones that benefit. The same technology could dramatically improve the lives of people with visual impairments, enabling them to be more independent. One of the startups looking to do just that is Eyra, which is showing off a wearable called Horus that could help the blind "see."
I got to try out the prototype hardware at TechCrunch Disrupt, and while the design is close to being done, the current device is still rough around the edges. The starting point is a pair of AfterShokz bone-conduction headphones with a camera module attached to the right-hand side. The module, which kinda/sorta hangs off the side of your head, contains two full-HD mobile cameras roughly a centimeter apart.
The headset is connected via a microUSB cable to a black plastic box about the size of half a paperback book. Nestled inside is NVIDIA's Tegra K1 chip and a battery capacious enough to sustain the device for a full day of use. Unlike other visual aids we've seen, Horus is designed to do all of its processing locally, so you're not left at the mercy of your wireless connection when out and about.
Horus is currently designed to read pieces of text, identify objects or recognize the faces of the people you're speaking to. You select which mode you want with three menu buttons on the box (each one a different shape, for ease of use). If you want to have a book read out to you, select that option and then move the tome up towards the glasses. Audio cues will pan from left to right to guide you into the right position for the camera to trigger automatically.
After a short pause, as the system crunches the data, it'll begin reading out what you've just snapped in a synthetic voice. It's not perfect by any stretch, but it's certainly smart enough to let you read any book, magazine or newspaper as long as the font is legible. The system will also recognize objects and faces, although both features will need further training before they're reliable enough to launch. In our demo, we were able to get it to read out a magazine article, and it could differentiate between Diet Pepsi and its full-fat equivalent.
The startup was created by a pair of students from the University of Genoa who were looking to develop a computer vision system. While their research centered on enabling robots to navigate, they found the technology had other applications. In the subsequent two years, they've been working on producing a portable version of the gear, and they think they're getting close to completion. In the future, the device is also expected to add scene description, giving users a greater ability to "see."
Read more here: www.engadget.com/2016/12/05/computer-vision-may-help-the-blind-see-the-world/