Signal acquisition is a fundamental task in most conventional digital systems. Because most practical signals are time-varying, adaptive and/or robust techniques, together with careful tuning, are required to achieve near-optimal reconstruction performance. Moreover, since the signals' statistics themselves vary over time, most signals cannot, in the most general setting, be modeled as either a stationary or a Gaussian process.
This implies that optimal signal reconstruction (e.g., in the minimum mean square error (MMSE) sense) must resort to non-linear estimation (e.g., a conditional mean estimator), which requires a complete statistical characterization of the random process. However, this is both computationally complex and infeasible in practice, since the higher-order moments of the underlying random process cannot be described accurately. A conventional linear MMSE approach can still be implemented, but its performance degrades rapidly as the true distribution diverges from the Gaussian.
There are practical situations where, for signals whose statistics vary slowly (e.g., block time-variability), estimation can be carried out under the assumption that the observed process is Gaussian. Reducing the problem to estimating a multivariate Gaussian observation has the advantage of low complexity, since it can be solved with a linear MMSE estimator and limited knowledge of the channel statistics; in particular, only a second-order characterization of the signal is required. The performance of such an estimator, however, falls relatively far from the optimum achievable performance when the time-variability of the statistics is not low enough. In several real scenarios arising in communication systems, assuming low time-variability is not realistic, and as a result the performance of linear MMSE estimation degrades as previously mentioned.
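As a concrete sketch of the linear MMSE estimation described above, the following example builds the Wiener estimate from second-order statistics only. The AR(1)-style signal covariance, the noise level, and the block length are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a zero-mean correlated signal block of length N observed
# in additive white Gaussian noise, y = x + n.  Only second-order statistics
# (the signal covariance C_x and the noise variance) are assumed known.
N = 64
rho = 0.9  # assumed AR(1)-like correlation coefficient (illustrative)
C_x = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
sigma2 = 0.5
C_n = sigma2 * np.eye(N)

# Draw one signal realization and its noisy observation.
L = np.linalg.cholesky(C_x)
x = L @ rng.standard_normal(N)
y = x + np.sqrt(sigma2) * rng.standard_normal(N)

# Linear MMSE (Wiener) estimate: x_hat = C_x (C_x + C_n)^{-1} y.
W = C_x @ np.linalg.inv(C_x + C_n)
x_hat = W @ y

mse_raw = np.mean((y - x) ** 2)        # error of the raw noisy observation
mse_lmmse = np.mean((x_hat - x) ** 2)  # error after LMMSE filtering
print(mse_raw, mse_lmmse)
```

Note that only C_x and the noise variance enter the estimator: this is the "limited knowledge" advantage the text refers to, at the price of optimality only under the Gaussian assumption.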
Another complication arises in practical systems: even in the ideal case of block time-variability of the signals' statistics, for which linear MMSE estimation performs optimally over each block, the estimator still requires perfect knowledge or estimation of the second-order moments of the signal, which can be quite challenging to obtain if the time-variations of the signal's second-order statistics are not slow enough. Therefore, with imperfect knowledge of the signals' statistics, the performance of the MMSE estimator may degrade.
One conventional approach uses compressed sensing (CS) recovery algorithms, which exploit the underlying sparsity of signals; indeed, most signals of practical interest admit a sparse representation in a given basis. Furthermore, sparsity levels can be expected to be almost stationary and not significantly affected by statistical changes of the underlying random process. However, CS recovery algorithms may be sensitive to noise, which distorts the signals.
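A minimal sketch of such CS recovery, using orthogonal matching pursuit (OMP) as one standard representative algorithm: a K-sparse signal is measured through fewer random projections than its length and then reconstructed. The dimensions, the Gaussian sensing matrix, and the noiseless setting are illustrative assumptions, not details from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy CS setup: x_true has length n with only K nonzero entries, and is
# observed through m < n random linear measurements y = A x.
n, m, K = 128, 48, 5
x_true = np.zeros(n)
support_true = rng.choice(n, size=K, replace=False)
x_true[support_true] = rng.choice([-1.0, 1.0], size=K) * rng.uniform(1.0, 2.0, size=K)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
y = A @ x_true                                # noiseless compressive measurements

# OMP: greedily pick the column most correlated with the residual, then
# re-fit the measurements on the selected columns by least squares.
residual = y.copy()
support = []
coef = np.zeros(0)
for _ in range(K):
    j = int(np.argmax(np.abs(A.T @ residual)))
    support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

In the noiseless case the greedy selection recovers the true support with high probability; the noise sensitivity mentioned above shows up when y is perturbed, since wrong columns can then be selected early and the error propagates through the remaining iterations.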