OSM for the dyslexic: first public release

Published on April 27, 2016

A GIS application that makes OpenStreetMap more accessible to dyslexic users by mitigating common reading errors

Development of the application has come to an end, and the app is now on the Play Store. Check it out

and see OSM-for-the-dyslexic on the OpenStreetMap Wiki

Using the application effectively

Main idea of the application

Pan, zoom, identify, interact

The behaviour of the application for each gesture is as follows.

Gesture      Action performed
Pan          Drag a finger on the map to pan
Pinch        Pinch on the map to zoom out by one level
Stretch      Stretch on the map to zoom in by one level
Long press   Long press on the map to identify that location
Long press   Long press on a functionality to activate it
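
For the curious, here is a minimal sketch of how such a gesture-to-action mapping looks on Android, using the platform GestureDetector and ScaleGestureDetector classes. The MapActions interface is a hypothetical stand-in for the app's actual map component.

```java
import android.content.Context;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.ScaleGestureDetector;

// Hypothetical stand-in for the app's map component.
interface MapActions {
    void pan(float dx, float dy);
    void zoomIn();
    void zoomOut();
    void identify(float x, float y);
}

public class MapGestureHandler {
    private final GestureDetector gestures;
    private final ScaleGestureDetector scaleGestures;

    public MapGestureHandler(Context context, final MapActions map) {
        gestures = new GestureDetector(context, new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onScroll(MotionEvent e1, MotionEvent e2, float dx, float dy) {
                map.pan(dx, dy);                  // dragging a finger pans the map
                return true;
            }

            @Override
            public void onLongPress(MotionEvent e) {
                map.identify(e.getX(), e.getY()); // long press identifies the location
            }
        });
        scaleGestures = new ScaleGestureDetector(context,
                new ScaleGestureDetector.SimpleOnScaleGestureListener() {
            @Override
            public void onScaleEnd(ScaleGestureDetector detector) {
                // Stretch zooms in by one level, pinch zooms out by one level.
                if (detector.getScaleFactor() > 1f) map.zoomIn();
                else map.zoomOut();
            }
        });
    }

    // Forward the view's touch events to both detectors.
    public boolean onTouchEvent(MotionEvent event) {
        scaleGestures.onTouchEvent(event);
        return gestures.onTouchEvent(event);
    }
}
```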

Specific functionalities

Location history

The location history functionality lets the user retrace the path followed to reach the location they are currently viewing. A long press on one of the entries guides the user back to that location, and a message in the notification area explains this behaviour.
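
As a sketch of how such a history can be kept, the snippet below uses a simple stack of visited locations. The Entry type and the method names are illustrative assumptions, not the app's actual API.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class LocationHistory {
    // One visited location: coordinates plus a readable label (assumed representation).
    public static final class Entry {
        public final double lat, lon;
        public final String label;

        public Entry(double lat, double lon, String label) {
            this.lat = lat;
            this.lon = lon;
            this.label = label;
        }
    }

    // Most recently visited location first.
    private final Deque<Entry> visited = new ArrayDeque<>();

    // Record every location the user identifies while moving around the map.
    public void record(double lat, double lon, String label) {
        visited.push(new Entry(lat, lon, label));
    }

    // Entries shown in the history list, newest first.
    public List<Entry> entries() {
        return new ArrayList<>(visited);
    }

    // A long press on an entry guides the user back to that location:
    // the caller re-centres the map here and posts a notification.
    public Entry onEntryLongPressed(int position) {
        return entries().get(position);
    }
}
```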

Overlay mode

Overlay mode lets the user see what surrounds the location they are viewing. Once they have taken in the context, they can restore the previous view simply by pressing the overlay button again.

This functionality can also be used to navigate quickly from one place to another.
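
One way to picture overlay mode is as saving and restoring the map viewport. The Viewport type below is an assumed representation used only for illustration.

```java
public class OverlayMode {
    // An assumed viewport representation: map centre plus zoom level.
    public static final class Viewport {
        public final double lat, lon;
        public final int zoom;

        public Viewport(double lat, double lon, int zoom) {
            this.lat = lat;
            this.lon = lon;
            this.zoom = zoom;
        }
    }

    private Viewport saved;

    // Entering overlay mode: remember the current view, then zoom out for context.
    public Viewport enter(Viewport current, int overviewZoom) {
        saved = current;
        return new Viewport(current.lat, current.lon, overviewZoom);
    }

    // Pressing the overlay button again restores the previous visualization.
    public Viewport exit() {
        Viewport previous = saved;
        saved = null;
        return previous;
    }
}
```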

Text2speech capability

The text2speech capability relies on Android's native ability to synthesize speech from text. The user has to install the desired language on their device before using this functionality.

We do not consider this a limitation, because the text2speech capability is free of charge and voices are available for many languages. The only thing the user has to do is download the desired voice from Google's servers.
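
For reference, here is a minimal sketch of the native Android TextToSpeech API this capability builds on. These are standard platform calls, not necessarily the app's exact code.

```java
import android.content.Context;
import android.speech.tts.TextToSpeech;

import java.util.Locale;

public class Speaker implements TextToSpeech.OnInitListener {
    private final TextToSpeech tts;
    private boolean ready;

    public Speaker(Context context) {
        // Binds to the text-to-speech engine installed on the device.
        tts = new TextToSpeech(context, this);
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            int result = tts.setLanguage(Locale.getDefault());
            // Only speak if the voice data for this language is installed.
            ready = result != TextToSpeech.LANG_MISSING_DATA
                    && result != TextToSpeech.LANG_NOT_SUPPORTED;
        }
    }

    public void speak(String text) {
        if (ready) {
            tts.speak(text, TextToSpeech.QUEUE_FLUSH, null);
        }
    }

    public void shutdown() {
        tts.shutdown();
    }
}
```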

Information about a location

Once the user has identified a location, a notification appears in the notification area. They can then learn more about the location by pressing the information button.

At most three features are presented, to avoid crowding the screen. The features follow the rule "from generic to particular", and information about the context is always returned (when present in the original data).
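
To illustrate the "from generic to particular" rule, a sketch like the following could order and cap the presented features. The tag-key ordering here is an assumption for illustration, not the app's actual list.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class FeaturePresenter {
    // Tag keys ordered from generic to particular (an assumed ordering).
    private static final String[] GENERIC_TO_PARTICULAR = {
            "place", "landuse", "amenity", "shop", "name"
    };

    // Present at most three values, most generic first, so the screen stays uncrowded.
    public static List<String> select(Map<String, String> tags) {
        List<String> presented = new ArrayList<>();
        for (String key : GENERIC_TO_PARTICULAR) {
            String value = tags.get(key);
            if (value != null && presented.size() < 3) {
                presented.add(value);
            }
        }
        return presented;
    }
}
```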

All alphanumeric information is spaced out and visually distinguished. Pressing the text2speech button makes the application read aloud all the information shown on the screen.

If text2speech is not configured, pressing the text2speech button opens the text-to-speech configuration page instead.
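
On Android this check can be done with the platform's standard TTS intents. The sketch below uses the documented ACTION_CHECK_TTS_DATA and ACTION_INSTALL_TTS_DATA actions, which may differ from the app's exact implementation.

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.tts.TextToSpeech;

public class TtsSetup {
    public static final int REQUEST_CHECK_TTS = 1;

    // Ask the platform whether voice data is installed.
    public static void checkVoiceData(Activity activity) {
        Intent check = new Intent(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
        activity.startActivityForResult(check, REQUEST_CHECK_TTS);
    }

    // Call from onActivityResult: if data is missing, open the install/configuration page.
    public static void onCheckResult(Activity activity, int requestCode, int resultCode) {
        if (requestCode == REQUEST_CHECK_TTS
                && resultCode != TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
            Intent install = new Intent(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
            activity.startActivity(install);
        }
    }
}
```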

The identification process in detail

1. The user pans to the desired location.
2. The user long-presses a location on the map.
3. The user sees how many features were hit.
4. The identification succeeds: two features are found, Milan and a supermarket.
5. The user presses the information button.
6. The user scrolls and reads the result; if text2speech is enabled, a voice reads the content aloud.

Tags: OSM maps dyslexic

