Users who carry handsets often multi-task, for example by walking along a street while composing a text message, such as a short message service (SMS) message, or while reading a received text message.
The challenge when users multi-task is that different tasks can place overlapping demands on the same sense. For example, when composing a text message while walking on a busy street, a user must simultaneously watch where they are walking (to avoid bumping into objects) and look at the display to verify that the text has been entered correctly. In this situation a novice user may also need to look at the keypad to see which button corresponds to which letter.
Users who interact with handsets while walking, even in an event-rich environment, may easily fail to notice nearby people, objects and noises when their attention is focused on the task they are performing with the handset.
Both humans and animals have well-developed biological movement-detection mechanisms, built on their experience of observing objects that move according to the laws of physics. These mechanisms usually rely on visual information, although some species rely more on acoustic information about their immediate environment.
The exact mechanism is not fully understood, but it most likely uses information about expected object sizes, the proportion of the field of vision an object covers, and the rate of change of that coverage (in scale and/or position). From these cues the observer can estimate where an object will be at a given future time and, if the observer and the object appear to be on a collision course, the observer can avoid the collision by changing their own movement. In some cases movement is detected only in peripheral vision, where an accurate estimate of collision risk is usually not possible; such peripherally detected movement nevertheless serves as a warning that prompts the observer to look toward the object and then perform the more accurate movement estimation described above.
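The "looming" cue described above, in which an approaching object's apparent size grows over time, admits a simple quantitative sketch: the time to collision is roughly the object's current angular size divided by its rate of growth. The function below is illustrative only (the name and the constant-approach-speed assumption are not from the original text):

```python
def time_to_collision(size_t0: float, size_t1: float, dt: float) -> float:
    """Estimate time-to-collision from the growth of an object's apparent
    (angular) size between two observations taken dt seconds apart.

    Assumes an approximately constant approach speed; a hypothetical
    illustration of the looming cue, not a robust implementation.
    """
    growth_rate = (size_t1 - size_t0) / dt
    if growth_rate <= 0:
        # Apparent size is shrinking or constant: object is not approaching.
        return float("inf")
    # tau = theta / (d theta / dt): seconds until projected collision.
    return size_t1 / growth_rate


# Example: apparent size doubles from 1.0 to 2.0 degrees over 1 second,
# suggesting roughly 2 seconds until collision.
print(time_to_collision(1.0, 2.0, 1.0))
```

Note that this estimate needs no knowledge of the object's true size or distance, which is consistent with the observation that the biological mechanism works from visual coverage and its rate of change alone.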
As can be appreciated, this natural collision avoidance mechanism can be impaired when the observer is instead focused on the display of a handset.
Further, switching visual attention among different tasks (for example, between the display, the keypad and where the user is walking) reduces the overall efficiency of each task and increases the likelihood of introducing errors into those tasks.
Handsets are increasingly equipped with one or more cameras, and in some handsets the angle of the camera lens can be adjusted so as to change or steer the field of view (FOV).