This section is intended to provide a background or context to the invention that is recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
So called head mounted displays are displays which a user puts on his/her head to view visual information provided by e.g. a mobile phone, a laptop, a tablet computer or some other device capable of producing images and/or video. Head mounted displays usually have two separate display elements, one for a left eye and one for a right eye of the user. However, some head mounted displays may use only one display so that a left part of the display is seen by the left eye and a right part of the display is seen by the right eye.
Head mounted displays may also comprise a movement detector which provides information on movements of the user's head. This movement information may be utilized by the information producing device to determine whether the displayed information should change. This may be the case when the user is watching a so called 360 degree panorama video or image. The panorama video contains information about the surroundings of the scene, but only a part of this scene is shown by the head mounted display at a time. When the user turns his/her head, the head mounted display should follow this movement so that a different part of the scene is shown.
Head mounted displays may also be used in so called virtual reality (VR) and/or augmented reality (AR) applications in which the user may feel present in an environment shown by the head mounted display. In other words, the user is virtually present in the environment.
In practical implementations there may be a delay between the actual movement of the head and a corresponding change in the displayed information. This delay may be called motion-to-photon latency. Several factors may affect the motion-to-photon latency. For example, the movement detector may have some delay before it sends information of a detected movement to the device, and the device may have some delay before the received information of the detected movement is processed and used to determine how the scene shown by the head mounted display should change as a consequence of the movement.
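The stage-by-stage contribution to the overall delay described above can be sketched as a simple sum of per-stage delays. This is an illustrative example only; the stage names and delay values below are hypothetical and not taken from any particular system:

```python
# Illustrative sketch: motion-to-photon latency modelled as the sum of delays
# in the sensing-to-display pipeline. All stage names and millisecond values
# below are hypothetical examples.

def motion_to_photon_latency_ms(stage_delays_ms):
    """Total motion-to-photon latency is the sum of the per-stage delays."""
    return sum(stage_delays_ms.values())

pipeline = {
    "sensor_detection": 2.0,     # movement detector notices the head movement
    "sensor_transmission": 1.0,  # movement information sent to the device
    "processing": 5.0,           # device determines how the scene should change
    "rendering": 8.0,            # new image is generated
    "display_scanout": 11.0,     # new image is shown on the display
}

print(motion_to_photon_latency_ms(pipeline))  # 27.0
```

Reducing any single stage delay reduces the total latency, which is why both sensor-side and display-side optimizations matter.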
The degree of the motion-to-photon latency may affect how the user experiences the scene and changes of the scene shown by the head mounted display. Furthermore, the user may even feel sick if the motion-to-photon latency is too high (i.e. there is a long delay from the movement to the change in the display). High motion-to-photon latency may induce motion sickness and/or nausea, whereas low motion-to-photon latency may improve the sense of presence.
In addition to motion-to-photon latency, there are also other aspects of head mounted displays which may affect the viewing experience, such as pixel persistence, frame jerkiness, frame jitter, dropped/duplicated frames, audio/video synchronization, application-to-motion latency and/or left versus right eye frame delay. Pixel persistence means the time during which the display shows the pixels of a frame. The pixel persistence value should lie within a specific range: too long a pixel persistence causes motion blur, while too short a pixel persistence may affect the brightness and contrast of the viewed video. Too long a pixel persistence may also be one of the causes of motion sickness and nausea. Frame jerkiness relates to the average content update speed, expressed in frames per second (fps). In some high-end virtual reality systems the frame rate can be as high as 120 fps and is typically at least 60 fps. Frame jitter is the variation of the content update interval, expressed as a standard deviation (in ms). Together with panning movement, which may be common in basic virtual reality usage, high frame jitter may dramatically decrease the perceived user experience. Dropped or duplicated frames are typically a sign of a severe processing or synchronization issue; bandwidth limitations can also cause frames to be dropped. Dropped or duplicated frames may occur together with frame jitter, and in a dynamic scene (e.g. with panning movement) they may likewise decrease the perceived user experience. Audio/video synchronization is also an issue which may affect the perceived audiovisual experience: in multi-channel audio, all the audio channels should be synchronized with each other, as well as with the presented visual content. Similarly, differences between the left and right display refresh times (left versus right eye frame delay) may have an annoying effect on the viewing experience.
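The frame jerkiness and frame jitter metrics mentioned above can both be derived from the timestamps at which frames reach the display. The following sketch shows one way to compute them; the timestamp values are hypothetical and include one deliberately late frame:

```python
# Illustrative sketch: average frame rate (related to frame jerkiness) and
# frame jitter (standard deviation of inter-frame intervals, in ms) computed
# from hypothetical display timestamps.
import statistics

def frame_metrics(frame_timestamps_ms):
    """Return (average fps, jitter in ms) from frame display timestamps."""
    intervals = [b - a for a, b in zip(frame_timestamps_ms, frame_timestamps_ms[1:])]
    fps = 1000.0 / statistics.mean(intervals)       # average content update speed
    jitter_ms = statistics.stdev(intervals)         # variation of the update interval
    return fps, jitter_ms

# Mostly 60 fps pacing (about 16.7 ms per frame) with one late frame at 83.3 ms,
# e.g. caused by a dropped frame being duplicated.
timestamps = [0.0, 16.7, 33.3, 50.0, 83.3, 100.0]
fps, jitter = frame_metrics(timestamps)
```

In this example the single late frame pulls the average rate down to about 50 fps and produces a jitter of several milliseconds, illustrating how a few timing glitches can dominate the jitter metric even when most frames are well paced.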
Application-to-motion latency may be measured so that when a user moves his/her head, e.g. tilts and/or turns the head, sensors attached to the head mounted display detect the movement and provide an indication of the movement to a controller of the head mounted display. A signal may be sent to a simulation system, which determines how the image shown by the head mounted display should be changed. A new image is then generated accordingly and sent to the head mounted display to be displayed. The application-to-photon latency may be determined as the time it takes from the generation of the new image to the actual display of the new image by the head mounted display. However, this measurement does not take into account the time it takes from the detection of the movement to the generation of the new image. Thus, the application-to-photon latency may indicate shorter latency values than the actual motion-to-photon latency which the user experiences when using the head mounted display.
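The distinction drawn above between the two measurements can be illustrated with hypothetical event timestamps along one movement-to-display cycle. The event names and values are invented for illustration:

```python
# Illustrative sketch: hypothetical timestamps (ms) for one head-movement cycle.
events = {
    "head_movement": 0.0,         # user actually moves his/her head
    "movement_detected": 3.0,     # sensors detect the movement
    "new_image_generated": 18.0,  # simulation system generates the new image
    "image_displayed": 30.0,      # head mounted display shows the new image
}

# Application-to-photon latency: generation of the new image to its display.
app_to_photon = events["image_displayed"] - events["new_image_generated"]

# Motion-to-photon latency: the full delay the user actually experiences.
motion_to_photon = events["image_displayed"] - events["head_movement"]

# The application-to-photon figure omits the detection and processing time
# before image generation, so it reports a shorter value than the latency
# the user perceives.
print(app_to_photon, motion_to_photon)  # 12.0 30.0
```

Here the application-to-photon measurement (12 ms) misses the 18 ms spent on detection and scene processing, so it understates the 30 ms motion-to-photon latency the user actually experiences.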