To represent a signal without error, the signal must be sampled at a rate, known as the Nyquist rate, that is at least twice the highest frequency in the signal. However, many signals can be compressed after they are measured, so acquiring the signals at the full Nyquist rate and then discarding most of the data during compression wastes resources.
Instead, compressive sensing (CS) can be used to efficiently acquire and reconstruct signals that are sparse or compressible. CS exploits the structure of such signals to measure them at rates significantly lower than the Nyquist rate and still reconstruct them. CS typically uses randomized, linear, non-adaptive measurements, followed by non-linear reconstruction using convex optimization or greedy searches.
The conventional solution without CS minimizes the l2 norm, i.e., the amount of energy in the system. However, this leads to poor results for most practical applications because it does not take into account the sparsity of the measured signal. The desired CS solution should minimize the l0 norm, which counts the nonzero coefficients and thus measures sparsity. However, that is an NP-hard problem. Therefore, the l1 norm is usually minimized instead; the l1 norm also promotes sparsity, and its minimization can be shown to be equivalent to l0 minimization under certain conditions. Finding the candidate with the smallest l1 norm can be expressed as a linear program, for which efficient solvers exist.
Using CS, a signal x with K nonzero coefficients can be reconstructed from linear non-adaptive measurements obtained using

y = Ax,  (1)

where A is a measurement matrix. Exact signal reconstruction is guaranteed when the measurement matrix A has a restricted isometry property (RIP). The RIP characterizes matrices that behave similarly to orthonormal ones, at least when operating on sparse signals. A matrix A has the RIP of order 2K if there exists a constant δ_2K such that, for all 2K-sparse signals z,

(1 − δ_2K)∥z∥_2^2 ≤ ∥Az∥_2^2 ≤ (1 + δ_2K)∥z∥_2^2.  (2)
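Computing the RIP constant of a specific matrix is intractable in general, but its effect can be illustrated numerically. The following sketch, assuming NumPy is available and using arbitrary illustrative dimensions, draws random 2K-sparse vectors z and records the ratios ∥Az∥_2^2/∥z∥_2^2 for a Gaussian matrix scaled so that the ratios concentrate around 1; the largest observed deviation from 1 is a lower bound on δ_2K.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 60, 200, 4          # measurements, dimension, sparsity (illustrative)

# Gaussian matrix with i.i.d. N(0, 1/M) entries, so E[||Az||^2] = ||z||^2.
A = rng.standard_normal((M, N)) / np.sqrt(M)

# Sample random 2K-sparse vectors and record the norm ratios
# ||Az||^2 / ||z||^2; their spread lower-bounds the RIP constant.
ratios = []
for _ in range(500):
    z = np.zeros(N)
    support = rng.choice(N, size=2 * K, replace=False)
    z[support] = rng.standard_normal(2 * K)
    ratios.append(np.sum((A @ z) ** 2) / np.sum(z ** 2))

delta_est = max(1.0 - min(ratios), max(ratios) - 1.0)
```

With enough rows, the ratios cluster tightly around 1, which is the empirical counterpart of a small δ_2K in Equation (2).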
If δ_2K is small, then the matrix A approximately preserves l2 norm distances between K-sparse signals. In this case, a convex optimization reconstructs the signal as
x̂ = arg min_{x ∈ ℝ^N} ∥x∥_1 subject to y = Ax.  (3)
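The minimization in Equation (3) can be solved as a linear program by the standard splitting x = u − v with u, v ≥ 0, so that ∥x∥_1 is bounded by the sum of the entries of u and v. A minimal sketch, assuming SciPy's `linprog` solver is available and using illustrative dimensions:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Solve min ||x||_1 subject to Ax = y as a linear program.

    Write x = u - v with u, v >= 0, minimize sum(u) + sum(v) subject to
    A(u - v) = y.  At the optimum u and v have disjoint supports, so the
    objective equals ||x||_1.
    """
    M, N = A.shape
    c = np.ones(2 * N)                    # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])             # encodes A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    uv = res.x
    return uv[:N] - uv[N:]

# Tiny demo: a 1-sparse signal measured with 3 random measurements.
rng = np.random.default_rng(0)
x_true = np.zeros(6)
x_true[2] = 1.5
A = rng.standard_normal((3, 6))
y_meas = A @ x_true
x_hat = l1_min(A, y_meas)
```

The returned x_hat satisfies the measurement constraint exactly (up to solver tolerance) and has l1 norm no larger than that of any other feasible signal.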
An alternative method uses a greedy sparse reconstruction procedure. Similarly to optimization methods, the guarantees are based on the RIP of the matrix A. Surprisingly, random matrices with a sufficient number of rows can achieve small RIP constants with overwhelming probability. Thus, random matrices are commonly used for CS signal acquisition and reconstruction.
The randomness of the acquisition matrix also ensures a well-formed statistical distribution of the measurements. Specifically, if the matrix has independent and identically distributed (i.i.d.) random entries, then the measurements in the vector y also follow an asymptotically normal distribution.
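This asymptotic normality is a central-limit effect: each measurement is a sum of many independent terms. A small numerical sketch, assuming NumPy and arbitrary illustrative dimensions, with a ±1 Bernoulli matrix and a unit-energy signal, so each measurement is approximately N(0, 1):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 5000, 256
x = rng.standard_normal(N)
x /= np.linalg.norm(x)            # unit-energy signal, so Var(y_i) = 1

# i.i.d. +-1 entries: each y_i is a sum of N independent terms, so by
# the central limit theorem y is approximately N(0, ||x||_2^2).
A = rng.choice([-1.0, 1.0], size=(M, N))
y = A @ x

mean, std = y.mean(), y.std()
```

The sample mean is close to 0 and the sample standard deviation close to 1, matching the predicted asymptotic distribution.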
Measurements of signals can be quantized to a finite number of bits, e.g., only the most significant (sign) bit. However, reconstructing a signal from quantized measurements is difficult. One method in the art combines the principle of consistent reconstruction with l1 norm minimization on a sphere of unit energy to reconstruct the signal. Specifically, a signal is measured using

y = sign(Ax),  (4)

where sign(·) is applied element-wise and takes the values ±1. The reconstructed signal is consistent with the signs of the measurements.
Because the signs of the measurements eliminate any information about the magnitude of the signal, a constraint of unit energy, ∥x∥2=1, is imposed during the reconstruction, i.e., the reconstruction is performed on a unit sphere. Sparsity is enforced by minimizing the l1 norm on the sphere of unit energy.
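The loss of magnitude information can be seen directly: scaling the signal by any positive factor leaves the sign measurements of Equation (4) unchanged. A short illustration, assuming NumPy and arbitrary small dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 5))
x = rng.standard_normal(5)

y = np.sign(A @ x)                # 1-bit measurements, entries +-1

# Scaling the signal leaves the sign measurements unchanged, so the
# magnitude is unrecoverable and reconstruction fixes ||x||_2 = 1.
y_scaled = np.sign(A @ (7.0 * x))
```

Because y and y_scaled are identical, any reconstruction can at best recover the direction of x, which is why the unit-energy constraint is imposed.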
Consistency with the measurements is imposed by relaxing the strict constraints and introducing a one-sided quadratic penalty when a constraint is violated. This penalty can be expressed as a squared norm of the measurements that violate the constraints. Specifically, the negative part of a scalar x is denoted by (x)_−, i.e.,
(x)_− = −min(x, 0) = (|x| − x)/2 = { 0, if x ≥ 0; −x, otherwise }.  (5)
Then, the penalty is

c(x̂) = ∥(diag(y)Ax̂)_−∥_2^2,  (6)

where diag(y) is a matrix with the signs of the measurements on the diagonal. The negative operator (·)_− is applied element-wise to identify the constraint violations and the amplitudes of the violations.
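Equations (5) and (6) translate directly into code. The sketch below, assuming NumPy and illustrative dimensions, defines the element-wise negative part and the one-sided penalty, then checks that a sign-consistent estimate incurs no penalty while a maximally inconsistent one does:

```python
import numpy as np

def neg_part(v):
    """Element-wise negative part (v)_- = -min(v, 0), as in Equation (5)."""
    return -np.minimum(v, 0.0)

def penalty(A, y, x_hat):
    """One-sided quadratic penalty c(x_hat) = ||(diag(y) A x_hat)_-||_2^2.

    An entry of diag(y) @ A @ x_hat is negative exactly where the sign of
    a measurement of x_hat disagrees with y, so the penalty is zero iff
    the estimate is consistent with every sign measurement.
    """
    return np.sum(neg_part(np.diag(y) @ A @ x_hat) ** 2)

rng = np.random.default_rng(4)
A = rng.standard_normal((10, 6))
x = rng.standard_normal(6)
y = np.sign(A @ x)

c_consistent = penalty(A, y, x)   # x itself is consistent with y
c_flipped = penalty(A, -y, x)     # flipping every sign violates all
```

The consistent case yields a zero penalty, while the fully inconsistent case accumulates the squared amplitudes of all violations.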
An estimate of the signal that is consistent with the measurements produces no constraint violations and the penalty c({circumflex over (x)}) is zero. Using Equation (6), the reconstruction problem becomes
x̂ = arg min_{x: ∥x∥_2 = 1} ∥x∥_1 + (λ/2)∥(diag(y)Ax)_−∥_2^2.  (7)
Equation (7) is non-convex because of the unit-energy constraint ∥x∥_2 = 1, and convergence to a global optimum cannot be guaranteed.
Greedy search procedures attempt to greedily determine a sparse minimizer of the penalty function. The Matching Sign Pursuit (MSP) procedure performs an iterative greedy search similar to Compressive Sampling Matching Pursuit (CoSaMP) and Subspace Pursuit. Specifically, the MSP procedure updates a sparse estimate of the signal x by iteration, see the related Application. The MSP modifies CoSaMP significantly to enable reconstruction using only the signs of the measurements by enforcing a consistency constraint and an l2 unit energy constraint.
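The exact MSP update rule is described in the related Application. For flavor, the loose sketch below, which is not the MSP itself but a simplified greedy iteration in the same spirit (closer to a binary iterative hard thresholding step), alternates a gradient-like step that reduces sign inconsistencies with hard thresholding to K terms and projection onto the unit sphere; all dimensions and the step size are illustrative assumptions:

```python
import numpy as np

def greedy_1bit_sketch(A, y, K, iters=100, step=1.0):
    """Simplified greedy 1-bit reconstruction sketch (not the MSP itself).

    Each iteration: step toward sign consistency, hard-threshold to the
    K largest entries, then project onto the unit sphere.
    """
    M, N = A.shape
    x = np.zeros(N)
    for _ in range(iters):
        # Step toward consistency: A^T applied to the sign mismatches.
        x = x + (step / M) * (A.T @ (y - np.sign(A @ x)))
        # Enforce K-sparsity: keep only the K largest magnitudes.
        keep = np.argsort(np.abs(x))[-K:]
        pruned = np.zeros(N)
        pruned[keep] = x[keep]
        x = pruned
        # Enforce the unit-energy constraint.
        norm = np.linalg.norm(x)
        if norm > 0:
            x /= norm
    return x

rng = np.random.default_rng(5)
N, M, K = 64, 256, 3
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((M, N))
x_hat = greedy_1bit_sketch(A, np.sign(A @ x_true), K)
```

By construction the output is K-sparse and has unit energy, mirroring the sparsity and l2 unit energy constraints that the MSP enforces.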