
Photo courtesy of Microsoft Research Blog
Smartphones can make visual information far more accessible to people who are blind or low vision. For instance, the Seeing AI app lets you take a picture of your surroundings in scene mode and then reads aloud what it recognizes in the picture (for example, “a person sitting on a sofa”). AI recognizes objects in a scene easily enough if they are commonly found. For now, however, these apps can’t tell you which of the things they recognize is yours, and they don’t know about items that are particularly important to users who are blind or low vision. For example, has someone moved your keys again? Did your white cane get mixed up with someone else’s? Imagine being able to easily identify the things that matter to you, or to quickly locate your personal belongings.
Apps like Seeing AI use artificial intelligence techniques from computer vision to recognize items. While AI is making great strides toward improving computer vision for many applications, such as automated driving, there are still areas where it does not work so well, and personalized object recognition is one of them. Previous research has begun to make advances toward solving the problem by looking at how people who are blind or low vision take pictures, what algorithms could be used to personalize object recognition, and which kinds of data are best suited for enabling personalized object recognition.
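One common family of algorithms for this kind of personalization is few-shot, embedding-based matching: a handful of user-supplied photos of a personal item are averaged into a prototype vector, and new images are matched to the nearest prototype. The sketch below is purely illustrative and is not the method used by Seeing AI or by the research mentioned above; the `embed` function is a hypothetical stand-in for a pretrained image-embedding model, and here it simply passes through already-extracted feature vectors.

```python
import numpy as np

def embed(image):
    # Hypothetical stand-in for a pretrained image-embedding model;
    # here "image" is treated as an already-extracted feature vector.
    return np.asarray(image, dtype=float)

def enroll(example_images):
    # Average the embeddings of a few photos of one personal item
    # to form a single prototype (class centroid).
    vecs = np.stack([embed(img) for img in example_images])
    return vecs.mean(axis=0)

def recognize(image, prototypes, threshold=1.0):
    # Nearest-centroid matching: return the enrolled item whose
    # prototype is closest to the query embedding, if close enough.
    query = embed(image)
    best_name, best_dist = None, float("inf")
    for name, proto in prototypes.items():
        dist = np.linalg.norm(query - proto)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```

In practice the embeddings would come from a neural network, and the distance threshold would be tuned so that unfamiliar objects are rejected rather than mislabeled as one of the user's items.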
(Article continued at this link.)