As is known in the art, mobile phones, “smart” phones, sensors and other lightweight, battery-operated devices increasingly affect the everyday lives of humans. Such devices have become more and more intelligent and are equipped with a wide range of features.
The majority of these devices, however, are resource constrained. For example, the devices are typically powered by batteries, or even by energy scavenging, and thus typically have strict power budgets. Reducing the power consumption of such devices is therefore beneficial for battery lifetime.
One problem, however, is that many applications require such devices to continuously measure different quantities such as voice, acceleration, light or voltage levels. After processing these quantities and extracting required features, the devices then typically communicate information to humans or to other devices. A concern in all of these applications is the large amount of power which is consumed to continuously sample, process and transmit information.
As is also known, a significant challenge is posed by sampling a signal while at the same time trying to satisfy sampling rate and reconstruction error requirements. This is particularly true in certain applications within the signal processing domain. It should be appreciated that in many applications, energy consumed during a sampling process can be a significant part of a system's energy consumption.
A variety of sampling techniques are known. Sampling techniques which follow the Nyquist sampling theorem utilize an appropriate uniform sampling setup for band-limited deterministic signals. One problem with such an approach, however, is that some samples may be redundant because the maximum bandwidth of the signal may not be a good measure of signal variations at different times. Redundant samples result in extra power consumption in the sampling procedure as well as in processes which follow the sampling. For example, if it is necessary to transmit samples to another location, having a relatively large number of additional samples results in higher transmission energy and/or in extra energy spent compressing the samples.
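The redundancy described above can be illustrated with a minimal sketch. The signal, rate, and threshold below are illustrative assumptions, not values from any particular system: a signal that is slowly varying except for one brief fast burst must be uniformly sampled at a rate set by the burst, so most samples in the slow regions barely differ from their predecessors.

```python
import math

# Hypothetical test signal: slowly varying, except for a brief 50 Hz burst.
def signal(t):
    slow = math.sin(2 * math.pi * 1.0 * t)                        # 1 Hz component
    burst = math.sin(2 * math.pi * 50.0 * t) if 0.4 <= t < 0.5 else 0.0
    return slow + 0.5 * burst

# Uniform (Nyquist-style) sampling must run at a rate chosen for the
# fastest feature (the 50 Hz burst), even though it is present only briefly.
fs = 200.0
samples = [signal(n / fs) for n in range(int(fs))]   # one second of samples

# Count samples that barely differ from their predecessor -- the
# "redundant" samples the passage refers to.
threshold = 0.05
redundant = sum(
    1 for a, b in zip(samples, samples[1:]) if abs(b - a) < threshold
)
print(len(samples), redundant)
```

With these illustrative numbers, the large majority of the uniform samples fall in the slow regions and are nearly indistinguishable from their neighbors, which is the extra sampling, transmission, and compression cost the passage describes.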
As a way to more efficiently sample a signal, adaptive nonuniform sampling schemes have been proposed. Such adaptive nonuniform sampling schemes result in reduced power consumption since the number of redundant samples is decreased or, in some cases, even eliminated.
Several non-uniform adaptive sampling schemes are known. For instance, a non-uniform sampling scheme based upon level-crossings with iterative decoding has been proposed, as has an approach based upon level crossings with a filtering technique which adapts the sampling rate and filter order by analyzing the input signal variations. Two adaptive sampling schemes for band-limited deterministic signals have also been proposed: one based upon linear time-varying low pass filters and another based upon time-warping of band-limited signals.
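A minimal sketch of the level-crossing idea mentioned above may be helpful. In this sketch (the level spacing, clock step, and test signals are illustrative assumptions, not parameters of any cited scheme), a sample is emitted only when the input crosses one of a set of uniformly spaced amplitude levels, so a fast-varying signal naturally produces more samples than a slow one:

```python
import math

def level_crossing_sample(signal, t_end, dt, delta):
    """Return (time, level) pairs where `signal` crosses a k*delta level.

    `dt` models a fine internal clock used to detect crossings; `delta`
    is the spacing between amplitude levels. Both are illustrative.
    """
    samples = []
    t = 0.0
    prev_level = math.floor(signal(0.0) / delta)
    while t <= t_end:
        level = math.floor(signal(t) / delta)
        if level != prev_level:              # a level boundary was crossed
            samples.append((t, level * delta))
            prev_level = level
        t += dt
    return samples

# Slowly varying signal: few level crossings, hence few samples.
slow = level_crossing_sample(lambda t: math.sin(2 * math.pi * t), 1.0, 1e-3, 0.25)
# Faster signal of the same amplitude: more crossings, more samples.
fast = level_crossing_sample(lambda t: math.sin(2 * math.pi * 5 * t), 1.0, 1e-3, 0.25)
print(len(slow), len(fast))
```

The sampling rate thus adapts automatically to signal activity, which is the power-saving property these schemes exploit; note, however, that the irregular crossing times themselves must be kept for reconstruction, a drawback discussed next.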
Such non-uniform sampling schemes give rise to at least two problems which make it difficult to apply these schemes in practical applications. First, non-uniform sampling schemes are designed for specific signal models (i.e., they are not generic). This is because it is difficult to determine the next sampling time step at each time (i.e. rate control). Second, it is necessary to keep (e.g. store) or transmit sampling times since they are required in the reconstruction process.
For a discrete stochastic signal, one sampling scheme samples uniformly at a fixed high sampling rate (e.g. using an analog to digital converter (ADC)). Source coding is then used to compress these samples approximately to their entropy rate before transmission. While this technique is theoretically optimal, in practice it has some inefficiencies in terms of the extra sampling and processing power required. Moreover, to achieve desired performance levels, long blocks of samples are needed in order to use source coding efficiently. This is particularly true if the statistical properties of the signal vary slowly in time. Such a block-based approach may lead to a large delay on the reconstruction side.
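The sample-then-compress pipeline above can be sketched as follows. The sampling rate, test signal, and quantizer width are illustrative assumptions; the sketch estimates how far an ideal entropy coder could compress one long block of uniformly taken samples, using the empirical zeroth-order entropy as a lower bound:

```python
import math
from collections import Counter

fs = 1000                                    # fixed high sampling rate (Hz)
# One-second block: a 2 Hz sinusoid quantized to integer levels -8..8.
block = [
    round(8 * math.sin(2 * math.pi * 2 * t / fs))
    for t in range(fs)
]

raw_bits = len(block) * 5                    # levels -8..8 need 5 bits each

# Empirical (zeroth-order) entropy of the quantized samples, bits/sample.
counts = Counter(block)
n = len(block)
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
coded_bits = math.ceil(entropy * n)          # bound for an ideal entropy coder

print(raw_bits, coded_bits)
```

The compression gain only materializes once the whole block has been collected and coded, which illustrates the reconstruction-side delay the passage notes for this block-based approach.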
It would, therefore, be desirable to provide an adaptive, nonuniform sampling technique which is generic (i.e. can be used in a wide variety of applications) and which does not need to store or transmit sampling times.