The widespread use of mobile devices equipped with high-resolution cameras is increasingly pushing computer vision applications into mobile scenarios. A common paradigm is that of a user taking a picture of the surroundings with a mobile device to obtain informative feedback on its content. This is the case, e.g., in mobile shopping, where a user can shop simply by taking pictures of products, or in landmark recognition, which eases visits to places of interest. In the aforementioned scenarios, visual search typically needs to be performed over a large image database; therefore, applications communicate wirelessly with a remote server to send visual information and receive informative feedback. As a result, a constraint is set by the bandwidth of the communication channel, whose use ought to be carefully optimized to bound communication costs and network latency. For this reason, a compact but informative image representation typically takes the form of a set of compact local feature descriptors (e.g., scale-invariant feature transform (SIFT), speeded-up robust features (SURF)) that are extracted from the image and must subsequently be communicated and processed. These descriptors will alternatively be referred to as “compact descriptors” or “compact feature descriptors” herein. Improved architectures for implementing visual search systems are needed for various applications, such as object detection and navigation in automotive systems and object detection and recognition in surveillance systems.
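To illustrate why compact descriptors matter for bandwidth, the following sketch compresses a set of 128-dimensional float descriptors (the dimensionality of SIFT) into short binary codes before transmission. The median-threshold binarization used here is a deliberately simplified, hypothetical scheme, not the actual SIFT/SURF compaction pipeline; it serves only to show the order-of-magnitude reduction in bytes sent over the channel.

```python
import numpy as np

def binarize_descriptors(desc: np.ndarray) -> np.ndarray:
    """Compress float descriptors to binary codes by thresholding each
    dimension at its per-dimension median (a simplified, hypothetical
    scheme for illustration) and packing 8 bits per byte."""
    thresholds = np.median(desc, axis=0)
    bits = (desc > thresholds).astype(np.uint8)
    return np.packbits(bits, axis=1)

# 500 SIFT-like descriptors of 128 float32 dimensions each,
# standing in for features extracted from a query image.
rng = np.random.default_rng(0)
desc = rng.standard_normal((500, 128)).astype(np.float32)

codes = binarize_descriptors(desc)
print(desc.nbytes, "->", codes.nbytes)  # 256000 -> 8000 bytes, a 32x reduction
```

The 32x saving comes from replacing each 32-bit float dimension with a single bit; practical compact-descriptor schemes trade some matching accuracy for exactly this kind of reduction in channel usage.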