Ordinary devices using a graphical user interface (GUI) may receive user inputs from an input device such as a keyboard or a mouse. That is, a user may select one of the objects displayed on a screen by pressing keys on the keyboard or by clicking the mouse.
As devices have become smaller and more portable, devices that include touch screens rather than separate input devices have come into widespread use. As devices including touch screens have become popular, such devices have adopted user interfaces (UIs) that are operated by various touch gestures. For example, the touch gestures may include a swipe gesture for switching screens, a zoom-in or zoom-out gesture for enlarging or reducing a certain object, a drag-and-drop gesture for moving a certain object, or the like.
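Touch gestures such as those listed above are typically recognized by tracking pointer coordinates from touch-down to touch-up and classifying the resulting motion. The following is a minimal illustrative sketch, not the method of any particular device or toolkit; the function name and the distance threshold are assumptions chosen for illustration, and a real system would also consider timing, velocity, and multiple pointers.

```python
# Illustrative sketch only: classify a single-pointer touch sequence as a
# tap or a directional swipe. The threshold below is an arbitrary assumed
# value, not taken from any real UI framework.
from math import hypot

SWIPE_MIN_DISTANCE = 50.0  # pixels; assumed threshold for this sketch


def classify_gesture(points):
    """points: list of (x, y) samples from touch-down to touch-up."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    # Small total displacement is treated as a tap (selection).
    if hypot(dx, dy) < SWIPE_MIN_DISTANCE:
        return "tap"
    # Otherwise classify by the dominant axis of motion.
    if abs(dx) >= abs(dy):
        return "swipe-right" if dx > 0 else "swipe-left"
    return "swipe-down" if dy > 0 else "swipe-up"
```

Even this simplified sketch shows why such gestures can be demanding: a tap requires holding the pointer nearly still, while a swipe requires a sustained, directed movement, and gestures such as zoom additionally require two pointers at once.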
However, persons having upper limb disabilities may not be able to perform the touch gestures easily. For example, a person who cannot freely move his/her body due to, for example, a spinal cord injury, or a person whose fingers are not strong enough to perform the touch gestures, may not be able to perform the touch gestures accurately.
Also, persons having upper limb disabilities may be able to position a cursor at desired coordinates by using the backs of their hands, their feet, chins, tongues, or the like, depending on the type of disability, but may not be able to select those coordinates through, for example, clicking. Furthermore, persons having upper limb disabilities may not be able to touch multiple locations simultaneously, as multi-touch gestures require.
The above information is presented as background information only to assist with an understanding of the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present disclosure.