It happens very often that a digitized signal features several subsequent samples of the same underlying information (which, by way of non-limiting examples, might be a 2D image, a 3D volumetric image, or even a plane of elements featuring more than three dimensions), creating a multi-dimensional signal (e.g., by way of non-limiting examples, a 3D signal representing a sequence of subsequent 2D images, a 4D signal representing a sequence of 3D/volumetric images, etc.) where, for one of its dimensions, such as the time dimension, there is some degree of signal stability over several subsequent samples. Non-limiting real-life examples would be subsequent slices in a Computed Tomography scan, subsequent volumetric images in an MRI scan, subsequent frames in a motion picture, etc.
Due to the nature of real-life sensors and of transmission channels, it is very likely that different samples of similar or even identical underlying information will feature different characteristics. For instance, a specific sample might feature slightly different values for the same underlying information than previous and/or subsequent samples due to motion blur that is not present in other samples, slightly different radiation intensities or light conditions at the time of sampling, thermal noise in the sensor, transmission errors in a channel, or other types of noise. The end result is a higher statistical variability of the signal elements than would be desirable. This generates large amounts of information (e.g., unnecessary intensity/color variations, plane elements of the wrong color, etc.) that are hard to distinguish from the “real” and necessary details in the signal, and that can complicate further signal processing (e.g., motion estimation, content identification, encoding/decoding, etc.). In addition, such variations discourage the use of quantization with relatively large quantization steps, since large steps would limit the possibility to adequately represent the subtle variations of relatively stable information from one element to the next.
Several existing signal processing methods separate stable/relevant information (“core signal”) from transient/stochastic information (“transient layer signal”) before encoding/decoding a signal. Stable information is usually detailed and can typically be at least in part predicted from neighboring samples. In contrast, transient information is typically unpredictable from the transient information of neighboring samples. Several conventional methods aim at filtering/decreasing/toning down the transient layer components of a signal, thus, among other things, improving the efficiency of data compression, since the information entropy of transient information tends to be higher than that of stable information. The problem with those methods is that in some situations a decoded signal with limited noise and/or limited transient components is perceived by users as a signal with limited fidelity to the original, since real signals do feature a certain amount of transient elements. In other words, a certain degree of transient/stochastic elements is desirable in the reconstructed signal.
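The separation described above can be illustrated with a minimal sketch. The function name, the moving-average estimator, and the window size below are illustrative assumptions, not the method of any specific prior art: any temporal low-pass or predictive filter could play the role of the core-signal estimator, with the per-sample residual serving as the transient layer.

```python
import numpy as np

def separate_core_transient(frames, window=3):
    """Split a stack of subsequent samples into a "core" component
    (stable across the time dimension, estimated here with a simple
    moving average over neighboring samples) and a "transient"
    component (the per-sample residual).

    frames: array of shape (T, H, W), i.e. T subsequent 2D samples.
    NOTE: the moving-average estimator is an illustrative assumption;
    it is not the specific filter used by any conventional method.
    """
    frames = np.asarray(frames, dtype=float)
    T = frames.shape[0]
    core = np.empty_like(frames)
    half = window // 2
    for t in range(T):
        # Average each sample with its temporal neighbors, clipping
        # the window at the sequence boundaries.
        lo, hi = max(0, t - half), min(T, t + half + 1)
        core[t] = frames[lo:hi].mean(axis=0)
    transient = frames - core  # unpredictable, noise-like residual
    return core, transient
```

By construction the two layers sum back to the original signal; the core layer varies less from one sample to the next and therefore tends to compress better, while the transient layer concentrates the higher-entropy, noise-like content discussed above.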
Another characteristic of conventional approaches is that transient layer elements, when not filtered out entirely, are encoded along with the core signal components, with the same signal encoding methods, and thus with more precision and detail than necessary.