In the field of speech processing, one of the major challenges engineers face is maintaining speech intelligibility in environments containing noise and interference. This occurs in many practical scenarios, such as using a cellphone on a busy street or the classic example of trying to understand someone at a cocktail party. A common way to address this issue is to exploit the spatial diversity of both the sound sources and multiple recording devices in order to favor particular directions of arrival over others, a process referred to as beamforming.
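To make the idea concrete, the following is a minimal sketch (not taken from the text) of the simplest beamformer, delay-and-sum: recordings from multiple microphones are time-aligned towards the desired direction of arrival and averaged, so that the coherent speech adds up while spatially uncorrelated noise partially cancels. The two-microphone setup, the sample rate, and the known inter-microphone delay are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 16000          # assumed sample rate (Hz)
n = fs              # one second of audio
delay = 8           # assumed inter-microphone delay in samples

# Desired "speech" signal and two noisy microphone recordings;
# the source reaches the second microphone `delay` samples later.
s = rng.standard_normal(n)
mic1 = s + 0.5 * rng.standard_normal(n)
mic2 = np.roll(s, delay) + 0.5 * rng.standard_normal(n)

# Delay-and-sum: time-align the second channel, then average.
aligned = np.roll(mic2, -delay)
output = 0.5 * (mic1 + aligned)

def snr_db(x, ref):
    # SNR of x relative to the clean reference, in dB.
    noise = x - ref
    return 10 * np.log10(np.sum(ref**2) / np.sum(noise**2))

print(snr_db(mic1, s))    # single microphone
print(snr_db(output, s))  # beamformer output: roughly 3 dB higher
```

Averaging two channels with independent, equal-power noise halves the noise power, which is where the roughly 3 dB gain comes from; larger arrays give correspondingly larger gains.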
Whilst more traditional acoustic beamformers consist of physically connected arrays of microphones, improvements in both sensor and battery technology over the last few decades have made it practical to also use wireless sensor networks (WSNs) for the same purpose. Such systems consist of a large number of small, low-cost sound processing nodes, each capable of both recording incoming acoustic signals and transmitting this information throughout the network.
The use of such wireless sound processing nodes makes it possible to deploy networks of varying size without redesigning the hardware for each application. However, unlike dedicated systems, WSNs come with their own set of design considerations. Their major drawback is that, due to the decentralized nature of data collection, there is no single location at which the beamformer output can be calculated. This also hampers the ability of WSNs to estimate the covariance matrices needed to design statistically optimal beamformers.
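To illustrate why those covariance matrices matter, the sketch below shows one classical statistically optimal design, the minimum variance distortionless response (MVDR) beamformer, whose weights are computed from the noise-plus-interference covariance matrix. The array geometry, angles, and snapshot model are illustrative assumptions, not details from the text; the point is that forming the sample covariance requires every node's data, which is exactly what a WSN lacks a single collection point for.

```python
import numpy as np

rng = np.random.default_rng(1)

m = 4                               # assumed number of microphones / nodes
theta_s, theta_i = 0.0, np.pi / 3   # assumed source and interferer angles
spacing = 0.5                       # element spacing in wavelengths

def steering(theta):
    # Narrowband far-field steering vector for a uniform linear array.
    k = np.arange(m)
    return np.exp(-2j * np.pi * spacing * k * np.sin(theta))

d = steering(theta_s)

# Simulate noise-plus-interference snapshots at all microphones.
snapshots = 200
x = (steering(theta_i)[:, None] * (2 * rng.standard_normal(snapshots))
     + 0.1 * (rng.standard_normal((m, snapshots))
              + 1j * rng.standard_normal((m, snapshots))))

# Sample covariance: the quantity that is hard to estimate when the
# snapshots are scattered across a network with no fusion center.
R = x @ x.conj().T / snapshots

# MVDR: minimise output power subject to unit gain towards the source,
# w = R^{-1} d / (d^H R^{-1} d).
w = np.linalg.solve(R, d)
w /= d.conj() @ w

print(abs(w.conj() @ d))                  # ≈ 1: distortionless response
print(abs(w.conj() @ steering(theta_i)))  # ≪ 1: interferer suppressed
```

The unit-gain constraint keeps the desired speech undistorted while the covariance-dependent term steers a spatial null towards the interference.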
A simple approach to solving this issue is to add a central point, or fusion center, to which all data is transmitted for processing. Such a central point, however, suffers from a number of drawbacks. Firstly, if it fails, the performance of the entire network is compromised, so additional costs must be incurred to provide redundancy. Secondly, the specifications of the central point, such as its memory and processing power, scale with the size of the network and must therefore be over-specified to ensure that the network can operate as desired. Thirdly, for some network topologies such a centralized system can also introduce excessive transmission costs, accelerating the depletion of each node's battery.
An alternative to these centralized topologies is to exploit the computational power of the nodes themselves and to solve the same problem from within the network. Such distributed topologies have the added benefit of removing the single point of failure whilst providing computational scalability, as adding nodes to the network also increases the processing power available. The main challenge with distributed approaches again stems from the lack of a central point where all system data is available, which requires the design of alternative, and typically iterative, algorithms.
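A small sketch of what such an iterative in-network algorithm can look like (illustrative, not the method this text develops) is average consensus: each node repeatedly averages its local value with those of its neighbours, using only locally known connectivity, and every node converges to the network-wide average that a fusion center would otherwise have computed. The five-node ring topology and Metropolis weights below are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 5-node ring network; edges are the wireless links.
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

x = rng.standard_normal(n)   # each node's local measurement
target = x.mean()            # what a fusion center would compute

# Metropolis weights: each entry needs only the two endpoints' degrees,
# so every node can build its own row of W from local information.
deg = np.zeros(n)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))
np.fill_diagonal(W, 1 - W.sum(axis=1))

# Iterate: one multiplication by W is one round of neighbour exchanges.
for _ in range(200):
    x = W @ x

print(np.allclose(x, target))  # → True: all nodes hold the global average
```

No node ever stores network-wide data, and each node's memory use is fixed by its number of neighbours rather than by the network size, which previews the scalability concern raised below.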
Although a number of distributed beamforming algorithms already exist in the literature, they are not without their limitations. The most notable of these is that hardware requirements, such as memory use, often still scale with the size of the network, making it impractical to deploy these algorithms on the same hardware platform in ad hoc or variable-size networks. This constraint stems from the need of these “distributed” algorithms to access some form of global data, whether in compressed form or not. There is therefore a current need for a truly distributed, statistically optimal beamforming approach, in particular for use in wireless sensor networks.