Compressive Sensing and Sparse Models
Compressive sensing (CS) and sparse models provide theories and tools for signal acquisition and signal processing applications. A sparse model assumes that a signal, when transformed to an appropriate basis, has only a few significant coefficients, which capture most of the signal energy. Promoting sparsity in an appropriate domain is a computationally efficient way to capture the structure of most natural and man-made signals processed by modern signal processing systems.
Sparsity is useful when inverting an underdetermined linear system of the form

y = Ax,  (1)

where y is an M-dimensional measurement vector, A is an M×N mixing matrix, and x is an N-dimensional sparse signal vector. For the underdetermined system, the following optimization determines the sparsest solution:
x̂ = argmin_x ‖x‖_0 s.t. y ≈ Ax,  (2)

where the l0 norm ‖x‖_0 counts the number of non-zero coefficients of the signal vector x. This is a combinatorially complex problem. Under certain conditions on the mixing matrix A, a solution is possible in polynomial time using a convex relaxation of the l0 norm, or a number of available greedy procedures. These include orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), and iterative hard thresholding (IHT) procedures.
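As a concrete illustration of the greedy approach, the following is a minimal sketch of OMP in NumPy. The function and variable names (`omp`, `x_hat`) are illustrative, not from the source; the sketch assumes a random Gaussian mixing matrix A and exact, noise-free measurements.

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Orthogonal Matching Pursuit (sketch): greedily build a support of
    size at most k such that y ~= A @ x."""
    M, N = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(N)
    for _ in range(k):
        # Select the column of A most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the selected columns.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(N)
        x[support] = coef
        residual = y - A @ x
        if np.linalg.norm(residual) < tol:
            break
    return x

# Demo: recover a 3-sparse length-50 signal from 20 random measurements.
rng = np.random.default_rng(0)
N, M, k = 50, 20, 3
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = omp(A, y, k)
```

With M well above the information-theoretic minimum (here 20 measurements for 3 non-zeros), OMP typically recovers the true support exactly, after which the least-squares step makes the residual vanish.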
Joint and group sparsity models, and their variations, provide further structure to the signal of interest. Joint sparsity can be considered a special case of group sparsity, so only the latter is described herein. Under this model, the signal coefficients are partitioned into groups Gi that together cover the coefficient index set {1, . . . , N}. The group sparsity model assumes that only a few of these groups contain non-zero coefficients, while the remaining groups are all zero. Group sparsity can also be enforced using a convex optimization problem, or greedy procedures.
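The greedy route to group sparsity can be sketched as a group-wise hard-thresholding step: rank the groups by their l2 energy and keep only the s strongest. The function name `group_threshold` and the particular grouping are illustrative assumptions, not from the source.

```python
import numpy as np

def group_threshold(x, groups, s):
    """Group-sparse projection (sketch): keep the s groups of coefficients
    with the largest l2 energy; zero out all other groups."""
    energies = [np.linalg.norm(x[g]) for g in groups]
    keep = np.argsort(energies)[-s:]
    out = np.zeros_like(x)
    for i in keep:
        out[groups[i]] = x[groups[i]]
    return out

# Demo: three groups of two coefficients; only the last group is strong.
x = np.array([1.0, 1.0, 0.1, 0.1, 5.0, 5.0])
groups = [[0, 1], [2, 3], [4, 5]]
xs = group_threshold(x, groups, s=1)
```

Such a projection can replace the plain k-sparse thresholding step inside greedy procedures like IHT or CoSaMP to enforce the group model.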
In joint sparsity models, multiple sparse signals are measured concurrently. The assumption is that all the signals share the same sparsity pattern. In other words, the significant signal coefficients are located at the same positions for all signals. By considering the whole acquisition as a linear system, these models are a special case of group sparsity models, and a similar approach can determine the sparse output.
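The shared-support assumption can be sketched as follows: stack the jointly sparse signals as columns of a matrix X, so each row corresponds to one coefficient position (one group), and keep only the k rows with the largest l2 norms. The function name `joint_threshold` and the demo data are illustrative assumptions.

```python
import numpy as np

def joint_threshold(X, k):
    """Joint-sparse projection (sketch): one signal per column of X; keep
    the k rows with largest l2 norm so all signals share one support."""
    row_energy = np.linalg.norm(X, axis=1)
    keep = np.argsort(row_energy)[-k:]
    out = np.zeros_like(X)
    out[keep] = X[keep]
    return out

# Demo: three length-6 signals sharing the support {1, 4}, plus small noise.
X = np.array([[0.0, 0.0, 0.0],
              [2.0, 1.5, 2.2],
              [0.1, 0.0, 0.0],
              [0.0, 0.1, 0.0],
              [3.0, 2.5, 3.1],
              [0.0, 0.0, 0.1]])
Xk = joint_threshold(X, k=2)
```

This makes the reduction to group sparsity explicit: each row of X is a group, and joint sparsity is group sparsity on the vectorized system.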
Model-based CS enables more complex constraints and structure than typical sparsity or group-sparsity problems. It is possible to modify conventional methods, such as CoSaMP, to enforce model-based sparsity. All that is necessary is a model-based thresholding function, which replaces the conventional thresholding function, and truncates the signal according to the model.