It happens very often that a digitized signal features several subsequent samples of the same underlying information (which by way of non-limiting example might be a 2D image, a 3D volumetric image, or even a plane of elements featuring more than three dimensions), creating a multi-dimensional signal (e.g., by way of non-limiting examples a 3D signal representing a sequence of subsequent 2D images, or a 4D signal representing a sequence of 3D/volumetric images, etc.) where for one of its dimensions T (e.g., by way of non-limiting example, the time dimension in time-based signals) we can assume some degree of signal stability over several subsequent samples. Non-limiting real-life examples would be subsequent slices in a Computed Tomography scan, subsequent volumetric images in an MRI scan, subsequent frames in motion pictures, etc.
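As a minimal illustrative sketch (not part of any claimed embodiment), the arrangement above can be modeled as a 3D signal built from a sequence of subsequent 2D images, with the first axis acting as the dimension T along which stability is assumed; all names and dimensions below are arbitrary assumptions for illustration:

```python
import random

# Hypothetical sizes: number of samples along T, frame rows, frame columns.
T, H, W = 4, 2, 3

# One underlying 2D image, repeated over T to model a perfectly stable
# signal along the dimension T.
base_frame = [[random.random() for _ in range(W)] for _ in range(H)]
signal_3d = [[row[:] for row in base_frame] for _ in range(T)]

# Indexing along the first axis yields the individual 2D samples.
assert len(signal_3d) == T
assert signal_3d[0] == signal_3d[T - 1]   # stable: samples are identical
```

The same layout extends naturally to a 4D signal (a sequence of volumetric images) by adding one more nested axis per sample.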
Due to the nature of real-life sensors and transmission channels, it is very likely that different samples of the same underlying information will feature different characteristics. For instance, a specific sample might feature slightly different values of the same underlying information than previous and/or subsequent samples, due to motion blur that was not present in other samples, to slightly different radiation intensity (or light conditions) at the time of sampling, to thermal noise in the sensor, to transmission errors in a channel, etc. The net result of such effects is a statistical variability of the signal elements along the dimension T (for which the stability hypothesis holds) that is higher than would be necessary or desirable.
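The effect described above can be sketched numerically: if each sample is the same underlying information plus independent per-sample noise, the element-wise variance measured along T is inflated relative to the zero variance of a truly stable sequence. This is a hypothetical illustration with arbitrary values and a Gaussian noise model chosen only as one plausible stand-in for sensor/channel effects:

```python
import random
from statistics import pvariance

random.seed(0)

T, N = 8, 5                      # samples along dimension T, elements each
true_values = [10.0] * N         # the underlying information (arbitrary)

# Stable sequence: every sample repeats the underlying values exactly.
clean = [true_values[:] for _ in range(T)]

# Noisy sequence: each sample is perturbed independently, modeling
# thermal noise, varying light conditions, transmission errors, etc.
noisy = [[v + random.gauss(0.0, 0.5) for v in true_values]
         for _ in range(T)]

# Per-element variance along T: zero for the stable sequence,
# strictly positive once per-sample noise is present.
var_clean = [pvariance([s[i] for s in clean]) for i in range(N)]
var_noisy = [pvariance([s[i] for s in noisy]) for i in range(N)]

assert all(v == 0.0 for v in var_clean)
assert all(v > 0.0 for v in var_noisy)
```

The point of the sketch is only that the variability along T is an artifact of the acquisition and transmission process, not of the underlying information itself.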
The variability of element settings that should otherwise be identical from one sample to the next generates large amounts of detailed information (e.g., unnecessary intensity/color variations, plane elements of the wrong color, etc.) that are hard to distinguish from “real” and necessary details in the signal, and that can complicate further signal processing (e.g., motion estimation, content identification, etc.).