It is well known that a signal must be sampled at a rate at least twice its highest frequency, the Nyquist rate, to be represented without error. However, most signals of interest can be compressed substantially after sampling. This shows that such signals have structure that the sampling process, often referred to as the acquisition process, does not exploit. Clearly, this wastes resources. Sparsity is one of the most common forms of signal structure. Compressive sensing (CS) can be used to efficiently acquire and then reconstruct such sparse signals.
CS leverages the sparsity structure of the signals to enable sampling at rates significantly lower than the Nyquist rate. CS uses randomized, linear, non-adaptive measurements, followed by reconstruction using nonlinear optimization. The success of CS has led to systems and methods that place significant emphasis on randomized incoherent measurements during acquisition, and on increased computation during signal reconstruction.
However, the emphasis in the development of conventional CS-based systems has not been on streaming signals. Instead, conventional reconstruction methods focus on finite-length signals, i.e., signals whose length is known in advance. Furthermore, the execution time and the processing required by such reconstruction increase significantly as the signal length increases.
When conventional CS hardware and acquisition techniques are used to acquire sparse streaming signals, such as audio and video signals, the signal is typically processed as discrete blocks having finite lengths. Each discrete block is individually compressively sampled and individually reconstructed using a known finite dimensional method.
Using this blocking approach with conventional methods can introduce significant artifacts at the boundaries between blocks. Furthermore, conventional methods cannot provide guarantees on the processing delay for the individual blocks, which is often a critical requirement for real-time systems that produce streams of video or audio data. Thus, significant buffering of the input or excessive allocation of processing resources is needed to satisfy delay requirements.
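The block-based processing described above can be sketched as follows. The block length B, the number of measurements M per block, the number of blocks, and the Gaussian measurement matrix are illustrative assumptions, not parameters of any particular system; the point is only that each block is measured independently of its neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)

B = 64        # block length in samples (illustrative)
M = 24        # compressive measurements per block (illustrative)
n_blocks = 8  # number of stream blocks processed

# A long "streaming" signal, processed as independent finite-length blocks.
stream = rng.standard_normal(B * n_blocks)

# One random measurement matrix, reused for every block.
Phi = rng.standard_normal((M, B)) / np.sqrt(M)

# Each block is compressively sampled independently of the other blocks;
# any structure straddling a block boundary is ignored, which is the
# source of the boundary artifacts discussed above.
blocks = stream.reshape(n_blocks, B)
measurements = np.array([Phi @ b for b in blocks])  # shape (n_blocks, M)
```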
FIG. 1A shows the general form of a conventional signal acquisition and reconstruction system. FIG. 1B shows an equivalent discretized version of the system in FIG. 1A.
A signal x(t) 101 is acquired using an acquisition system (Acq.) 110 at an average rate of M samples per time unit, represented by the measurements y_m 102. The signal is reconstructed (Recon.) 120 as an estimated signal x̂(t) 103. Most conventional acquisition systems use an analog-to-digital converter (ADC), which obtains linear measurements using a low-pass antialiasing filter followed by uniform time sampling and quantization. The reconstruction component (Recon.) 120 in conventional systems is a digital-to-analog converter (DAC), which performs linear band-limited interpolation of the samples y_m.
FIG. 1B shows a discrete equivalent of the system in FIG. 1A, using a discrete representation x_n 104 of the signal x(t) by N coefficients per time period. In conventional band-limited sampling and interpolation, x_n = x(nT), where T is the Nyquist period. In this case, the acquisition and reconstruction components are an identity, i.e., m = n and x̂_n 105 = y_n 102 = x_n 104, and the estimated signal x̂(t) is a band-limited interpolation of x̂_n.
The Nyquist theorem states that the sampling rate M must be equal to or greater than the input rate N. Otherwise, the system is not invertible and information can be lost. However, with additional information on the signal structure, it is possible to acquire a signal at a sampling rate M that is much smaller than the input rate, and still allow reconstruction. Sparsity is an example of such information exploited by Compressive Sensing.
With CS, a sparse or compressible finite-length signal x can be efficiently sampled and reconstructed using very few linear measurements. The signal x is measured according to
y = Φx,  (1)
where y denotes a measurement vector, and Φ is a measurement matrix. The signal x is K-sparse, i.e., the signal has only K non-zero coefficients.
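The measurement model of Equation (1) can be illustrated with a small numerical example. The dimensions N, M, and K, and the use of a Gaussian measurement matrix, are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

N, M, K = 64, 24, 3   # signal length, measurements, sparsity (illustrative)

# A K-sparse signal: only K non-zero coefficients, at random locations.
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Random Gaussian measurement matrix Phi, and measurements y = Phi x
# per Equation (1); note M < N, i.e., sampling below the input rate.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x
```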
Under some conditions on the measurement matrix, the signal can be reconstructed using a convex optimization
x̂ = argmin_x ∥x∥_1 s.t. y = Φx.  (2)
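The convex optimization of Equation (2), often called basis pursuit, can be posed as a linear program by splitting the ℓ1 norm into auxiliary variables t with −t ≤ x ≤ t elementwise. The sketch below assumes SciPy's general-purpose LP solver and illustrative problem dimensions; it is not the solver of any particular system.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, M, K = 64, 32, 3  # illustrative dimensions

# K-sparse signal and Gaussian measurements, as in Equation (1).
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# Basis pursuit as an LP over the stacked variable [x; t]:
#   minimize sum(t)  s.t.  x - t <= 0,  -x - t <= 0,  Phi x = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([Phi, np.zeros((M, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
x_hat = res.x[:N]
```

Because the true x is itself feasible for this LP, the solution's ℓ1 norm never exceeds ∥x∥_1, and the equality constraint enforces Φx̂ = y.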
The measurement matrix Φ has a restricted isometry property (RIP) of order 2K. The RIP is satisfied if there exists a constant δ_2K < 1 such that for all 2K-sparse signals x
(1 − δ_2K)∥x∥_2^2 ≤ ∥Φx∥_2^2 ≤ (1 + δ_2K)∥x∥_2^2.  (3)
The RIP characterizes matrices that behave as if they are nearly orthonormal when operating on sparse vectors. If the RIP constant δ_2K of the measurement matrix Φ is sufficiently small, then the convex optimization in Equation (2) provides exact reconstruction of the signal x. Furthermore, a small RIP constant guarantees robust recovery in the presence of measurement noise, and when sampling signals that are not sparse but can be well approximated by a sparse signal.
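The near-isometry of Equation (3) can be probed empirically for a random Gaussian matrix. Verifying the RIP exactly is combinatorial, so the sketch below only samples random sparse vectors and records the worst observed deviation of ∥Φx∥_2^2 from ∥x∥_2^2; the dimensions and trial count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, K, trials = 256, 128, 4, 200  # illustrative dimensions

# Gaussian matrix scaled so that E[||Phi x||^2] = ||x||^2.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Worst observed deviation |ratio - 1| over random K-sparse vectors;
# a lower bound on the true RIP constant, not an exact computation.
worst = 0.0
for _ in range(trials):
    x = np.zeros(N)
    x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    ratio = np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2
    worst = max(worst, abs(ratio - 1.0))
```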
The RIP is also sufficient to provide similar guarantees when using certain greedy reconstruction algorithms instead of convex optimization. Such greedy methods recover the support of the signal using a greedy search, and reconstruct the signal over that support only using linear reconstruction. The greedy search is usually performed within an iterative feedback loop using an unexplained residual to improve the estimate of the support for the signal x.
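One well-known greedy method of this kind is orthogonal matching pursuit (OMP). The minimal sketch below, with illustrative dimensions, follows the loop described above: select the column most correlated with the unexplained residual, reconstruct over the current support by least squares, and update the residual.

```python
import numpy as np

def omp(Phi, y, K):
    """Orthogonal matching pursuit: greedily grow the support, then
    reconstruct over that support only (a sketch, not optimized)."""
    M, N = Phi.shape
    support = []
    r = y.copy()  # unexplained residual
    for _ in range(K):
        # Greedy search: column most correlated with the residual.
        idx = int(np.argmax(np.abs(Phi.T @ r)))
        if idx not in support:
            support.append(idx)
        # Linear reconstruction over the current support only.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat, sorted(support)

rng = np.random.default_rng(4)
N, M, K = 64, 32, 3  # illustrative dimensions
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x
x_hat, support = omp(Phi, y, K)
```

Each least-squares step projects y onto the span of the selected columns, so the residual norm is non-increasing across iterations.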
Most hardware implementations of CS realize random projections that satisfy the RIP. However, they assume that the signal is processed in finite-length blocks. Each block is compressively sampled and reconstructed using one of the known finite dimensional methods, such as the aforementioned convex optimization or greedy algorithms, independently of the other blocks. However, a streaming signal and the corresponding acquired measurements are essentially continuous infinite dimensional vectors. Thus, reconstruction using a conventional finite dimensional method does not work.
A formulation for streaming signals poses significant difficulties compared to a fixed-length signal. A streaming signal and the corresponding measurements are essentially infinite dimensional vectors. Thus, the usual CS definitions of sparsity and dimensionality reduction are not valid and need to be reformulated as rates.
Another method in the prior art can be used to reconstruct infinite-dimensional signals. However, that method explicitly assumes the signal has a multiband structure in the frequency domain, attempts to recover that structure using a separate system component, and then uses the structure to control the reconstruction. That method provides no computational or input-output delay guarantees, and is not suitable for streaming signals that are sparse in the time or some other domain.
Therefore, it is desired to have a method that can reconstruct streaming signals that are sparse in an arbitrary domain, such as the time, wavelet, or frequency domain.