Mobile App for Visually Impaired People


Helping visually impaired people navigate more easily.

As technology advances, can we make the world more accessible to everyone by addressing the needs of visually impaired people?

Can we improve visually impaired people’s lives by conveying the experience of the visual world to them?


Modern urban environments are complicated and confusing.


This is an example of the view at an airport.

If you do not have a visual impairment, this would be an ordinary view. The modern world can be busy, complicated, and confusing even for people with normal vision. Imagine how difficult it can be for visually impaired people to navigate through such an environment.


Help them navigate through their surroundings.

This is visually impaired people’s reality.
If you have a visual impairment, this might be how you see the world.
How can we make it easier for visually impaired people to get information, and help them be more independent?


The function of Revealed:
By integrating IBM Watson solutions, visual recognition technology can identify objects, send the data to your phone, and text-to-speech technology can read the results out loud to visually impaired users. This project aims to give users clearer access to their surroundings.


The technology behind this project.
Revealed works as follows. First, a user takes a photo. Revealed uses Node.js and integrates IBM Watson’s solutions; for this hackathon, we used text to speech, visual recognition, and geocoding. The service returns the data to the phone, and the user can hear the list of recognized objects along with exact geolocation information.



Expansion on this idea.
In the future, it is possible to integrate concepts like real-time alerts, geofencing, and a sensor network.
As a user experience designer, I see this app’s vision going a step further: providing not only the object’s name but also its relative distance from the user, and/or its relative position (to the user’s left or right), would make this holistic experience even more meaningful.
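The left/right idea could be computed from the user’s compass heading and the bearing to a detected object. This is a hypothetical helper for the proposed expansion; the function name and the angle thresholds are illustrative assumptions, not part of the project.

```javascript
// Hypothetical helper for the proposed expansion: given the user's
// compass heading and the bearing to an object (both in degrees,
// 0 = north), report where the object is relative to the user.
function relativeSide(userHeading, objectBearing) {
  // Normalize the angular difference to the range (-180, 180].
  let delta = (objectBearing - userHeading) % 360;
  if (delta > 180) delta -= 360;
  if (delta <= -180) delta += 360;

  // Thresholds chosen for illustration only.
  if (Math.abs(delta) <= 30) return 'ahead';
  if (Math.abs(delta) >= 150) return 'behind';
  return delta > 0 ? 'to your right' : 'to your left';
}

module.exports = { relativeSide };
```

A phone’s magnetometer supplies the heading, and the geocoded object position supplies the bearing, so the spoken result could become, for example, “a door, to your left.”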




Original Design: Screen 1
  1. A voice prompt says, “Please hold the camera and take a photo by tapping the screen.”
  2. Once the user taps the screen, a shutter sound confirms that the action was registered.




Original Design: Screen 2 – Result page
  1. Once the image is recognized, the app lists the detected objects.
  2. The list is read out loud so the user can hear it.