The development of higher-resolution displays is of central importance to the display industry. Leading mobile displays recently transitioned from pixel densities of less than 50 pixels per cm (ppcm) and now approach 150 ppcm. Similarly, the consumer electronics industry has begun to offer “4K ultra-high-definition (UHD)” displays, with a horizontal resolution approaching 4,000 pixels, as the successor to high-definition television (HDTV). Furthermore, 8K UHD standards already exist for enhanced digital cinema. Achieving such high-resolution displays currently hinges on advances that enable spatial light modulators with increased pixel counts.
Beyond these larger market trends, several emerging display technologies necessitate even greater resolutions than the 4K/8K UHD standards will provide. For example, wide-field-of-view head-mounted displays (HMDs), such as the Oculus Rift, incorporate high-pixel-density mobile displays. Such displays approach or exceed the resolution of the human eye when viewed at the distance of a phone or tablet computer. However, they appear pixelated when viewed through magnifying HMD optics, which dramatically expand the field of view. Similarly, glasses-free 3D displays, including parallax barrier and integral imaging designs, require an order of magnitude higher resolution than today's displays. At present, HMDs and glasses-free 3D displays remain niche technologies and are less likely to drive the development of higher-resolution displays than the existing applications, hindering their advancement and commercial adoption.
The following briefly reviews the state of the art in high-resolution display technologies.
Superresolution imaging algorithms have been used to recover a high-resolution image (or video) from low-resolution images (or videos) with varying perspectives. Superresolution imaging requires solving an ill-posed inverse problem: the high-resolution source is unknown. Methods differ based on the prior assumptions made regarding the imaging process. For example, in one approach, camera motion uncertainty is eliminated by using piezoelectric actuators to control sensor displacement.
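To make the inverse problem concrete, the following sketch (a minimal illustration with hypothetical function names, assuming numpy) shows the limiting case exploited by controlled sensor displacement: when the sub-pixel shifts are known exactly and the pixel aperture is idealized as a point, the otherwise ill-posed reconstruction reduces to interleaving the low-resolution samples.

```python
import numpy as np

def point_sample(hi, dy, dx, factor=2):
    """Low-resolution capture of `hi` displaced by (dy, dx) high-res
    pixels; point sampling models a pixel aperture approaching zero."""
    return hi[dy::factor, dx::factor]

def interleave(lows, shifts, factor=2):
    """With exactly known displacements (e.g., piezo-controlled), the
    high-resolution image is recovered by interleaving the samples."""
    h, w = lows[0].shape
    hi = np.zeros((h * factor, w * factor))
    for img, (dy, dx) in zip(lows, shifts):
        hi[dy::factor, dx::factor] = img
    return hi

rng = np.random.default_rng(0)
truth = rng.random((8, 8))
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
recon = interleave([point_sample(truth, dy, dx) for dy, dx in shifts], shifts)
```

With a nonzero aperture, each low-resolution pixel averages neighboring high-resolution values and exact interleaving no longer applies, which is why superresolution performance degrades as the aperture ratio grows.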
In one of the superresolution display systems that have been developed, a “wobulation” method is used to double the addressed resolution for front-projection displays incorporating a single high-speed digital micro-mirror device (DMD). A piezoelectrically-actuated mirror displaces the projected image by half a pixel, both horizontally and vertically. Since DMDs can be addressed faster than the critical flicker fusion threshold, two shifted images can be rapidly projected, so that the viewer perceives their additive superposition. As with a jittered camera, the superresolution factor increases as the pixel aperture ratio decreases. The performance is further limited by motion blur introduced during the optical scanning process. More recently, wobulation has been extended to flat panel displays, using an eccentric rotating mass (ERM) vibration motor applied to an LCD.
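The wobulated two-frame decomposition can be sketched as follows (an illustrative point-sampling model, not the actual DMD implementation): the two subframes sample the target on grids offset diagonally by half a low-resolution pixel, and their temporal superposition addresses a quincunx lattice with twice the sample density.

```python
import numpy as np

def wobulation_subframes(target):
    """Split a high-resolution target into two low-resolution subframes
    whose grids are offset diagonally by half a low-res pixel."""
    return target[0::2, 0::2], target[1::2, 1::2]

def perceived_superposition(a, b):
    """Additive temporal superposition as integrated by the viewer:
    each subframe lands on its own shifted grid, forming a quincunx
    lattice (a zero pixel aperture is assumed, so samples do not
    overlap)."""
    h, w = a.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = a
    out[1::2, 1::2] = b
    return out

rng = np.random.default_rng(0)
target = rng.random((8, 8))
a, b = wobulation_subframes(target)
out = perceived_superposition(a, b)
```

In the physical system the two subframes overlap optically rather than landing on disjoint grids, so the achievable sharpness also depends on the aperture ratio and on blur from the scanning mirror, as noted above.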
Similar superresolution display concepts have been developed for digital projectors. Rather than presenting a time-multiplexed sequence of shifted, low-resolution images, projector arrays can be used to display the displaced image set simultaneously. Such “superimposed projection” systems have been demonstrated by multiple research groups. As with all projector arrays, superimposed projection requires precise radiometric and geometric calibration, as well as temporal synchronization. These issues can be mitigated using a single-projector superresolution method in which multiple offset images are created by an array of lenses within the projector optics. Unlike superimposed projectors, these images must be identical, limiting image quality.
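Superimposed projection can be posed as a linear least-squares problem: the screen image is the sum of displaced, aperture-upsampled subframes, and the subframe contents minimizing the error to a high-resolution target are solved for jointly. The sketch below (an idealized model with wrapped shifts; nonnegativity of light is ignored for brevity) builds this forward model explicitly.

```python
import numpy as np

def forward_matrix(H, W, factor, shifts):
    """Linear model of superimposed projection: each low-resolution
    subframe is upsampled with a full factor x factor pixel aperture,
    displaced by its (dy, dx) offset (wrapped for simplicity), and all
    subframe contributions sum on the screen."""
    h, w = H // factor, W // factor
    A = np.zeros((H * W, len(shifts) * h * w))
    for k, (dy, dx) in enumerate(shifts):
        for i in range(h):
            for j in range(w):
                col = k * h * w + i * w + j
                for u in range(factor):
                    for v in range(factor):
                        y = (i * factor + u + dy) % H
                        x = (j * factor + v + dx) % W
                        A[y * W + x, col] = 1.0
    return A

# Four projectors displaced by one high-res pixel relative to each other.
H = W = 8
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
A = forward_matrix(H, W, 2, shifts)
target = np.random.default_rng(1).random((H, W))
x, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
residual = np.linalg.norm(A @ x - target.ravel())
```

Because the box-shaped aperture suppresses some spatial frequencies, the residual is generally nonzero; the jointly optimized subframes nevertheless reproduce the target at least as well as identical subframes, consistent with the limitation of the single-projector lens-array variant.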
Wobulation and other temporally-multiplexed methods introduce artifacts when used to superresolve videos due to unknown gaze motion. Eye movement alters the desired alignment between subsequent frames, as projected on the retina. If the gaze can be estimated, then superresolution can be achieved along the eye motion trajectory, as reportedly demonstrated.
All of the superresolution displays discussed thus far implement the same core concept: additive (temporal) superposition of shifted low-resolution images. As with image superresolution, such designs benefit from low pixel aperture ratio—diverging from industry trends to increase aperture ratios.
The so-called “optical pixel sharing (OPS)” approach is the first reported approach to exploit dual-modulation projectors for superresolution, depicting an edge-enhanced image using a two-frame decomposition: the first frame presents a high-resolution, sparse edge image, whereas the second frame presents a low-resolution non-edge image. OPS requires that an optical element (e.g., an array of lenses or a randomized refractive surface) be placed between the display layers; correspondingly, existing OPS implementations do not allow thin form factors. OPS reproduces imagery with decreased brightness and decreased peak signal-to-noise ratio (PSNR).
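The two-frame decomposition can be sketched as below. This is only an illustration in the spirit of OPS, not the published algorithm: the gradient-magnitude edge detector and the threshold are assumptions chosen for brevity.

```python
import numpy as np

def ops_style_decomposition(target, factor=2, thresh=0.15):
    """Two-frame decomposition in the spirit of optical pixel sharing:
    frame 1 is a sparse, high-resolution edge image; frame 2 is a
    low-resolution image of the remaining (non-edge) content."""
    gy, gx = np.gradient(target)
    edge_mask = np.hypot(gy, gx) > thresh
    edge_frame = np.where(edge_mask, target, 0.0)   # high-res, sparse
    non_edge = np.where(edge_mask, 0.0, target)
    h, w = target.shape
    low_frame = non_edge.reshape(h // factor, factor,
                                 w // factor, factor).mean((1, 3))
    return edge_frame, low_frame

# A vertical step edge: edge content is sparse, the rest is low-frequency.
target = np.zeros((8, 8))
target[:, 4:] = 1.0
edge_frame, low_frame = ops_style_decomposition(target)
```

The edge frame carries only the few pixels near discontinuities, so a sparse high-resolution modulator suffices for it, while the smooth remainder tolerates low-resolution reproduction.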
Dual-modulation displays are routinely applied to achieve high dynamic range (HDR) display. HDR projectors are implemented by modulating the output of a digital projector using large flat panel liquid crystal displays (LCDs). A high dynamic range and high resolution projector system has been reportedly developed, where a three-chip liquid crystal on silicon (LCoS) projector emits a low-resolution chrominance image, which is subsequently projected onto another higher-resolution LCoS chip to achieve luminance modulation.
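The luminance/chrominance factorization underlying such cascaded LCoS designs can be sketched as follows (an illustrative model with assumed function names, not the reported system): the image is split into a low-resolution normalized-color layer and a full-resolution luminance layer whose pixelwise product, a multiplicative cascade unlike the additive schemes above, approximates the target.

```python
import numpy as np

def split_hdr(hdr, factor=4):
    """Factor an HDR image into a low-resolution chrominance layer and
    a full-resolution luminance layer whose product approximates it."""
    luma = hdr.mean(axis=2)                           # high-res modulator
    chroma = hdr / np.maximum(luma, 1e-6)[..., None]  # normalized color
    h, w, _ = hdr.shape
    chroma_low = chroma.reshape(h // factor, factor,
                                w // factor, factor, 3).mean((1, 3))
    return chroma_low, luma

def modulate(chroma_low, luma, factor=4):
    """Optical cascade: upsampled chrominance times luminance."""
    up = np.repeat(np.repeat(chroma_low, factor, axis=0), factor, axis=1)
    return up * luma[..., None]

# Spatially varying luminance with constant chromaticity: the low-res
# chrominance layer then loses nothing, and reproduction is exact.
rng = np.random.default_rng(3)
L = rng.random((8, 8)) + 0.1
hdr = L[..., None] * np.array([0.6, 0.3, 0.1])
chroma_low, luma = split_hdr(hdr)
out = modulate(chroma_low, luma)
```

Because chromaticity varies far more slowly than luminance in natural imagery, reserving the higher-resolution modulator for luminance is an effective allocation of pixels.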
Displays with two or more spatial light modulators (SLMs) have also been incorporated in glasses-free 3D displays for multi-view imagery. It has reportedly been demonstrated that content-adaptive parallax barriers can be used with dual-layer LCDs to create brighter, higher-resolution 3D displays.