In the context of RAKE receivers, and in particular generalized Rake (G-RAKE) receivers, problems of the form

    Rw = h    (1)

arise. Here, R is an N×N matrix that can be an impairment covariance matrix with known values, h is an N×1 vector with known values, which are net channel coefficients, and w is an N×1 vector that needs to be determined and contains combining weights. Methods such as Gaussian elimination, LU decomposition, QR factorization, or other known methods can be employed to solve for the unknown vector w. These methods typically require O(N³) (i.e., "of the order of N³") operations to determine w, which may be too computationally complex for many applications. Moreover, these methods may encounter numerical problems if the matrix R is ill-conditioned.
An alternate method of solving equation (1) for w is to use an iterative method. Iterative algorithms generally exploit some property of the matrix R to recursively solve for the unknown vector w. An iterative linear system solver starts with some initial guess for the unknown vector w and then refines the estimate of w with each iteration. As an example, consider the following iterative formulation:

    w_{i+1} = w_i + g(r_i)
    r_{i+1} = f(h − Rw_{i+1})    (2)

Here, r_{i+1} is the residual error vector at iteration i+1, f(h − Rw_{i+1}) indicates an expression that is a function of the combining weights at iteration i+1 (e.g., f(h − Rw_{i+1}) = h − Rw_{i+1}), and g(r_i) indicates an expression that is a function of the ith residual error vector, e.g.,

    g(r_i) = ((p_{i+1})^T r_i) / ((p_{i+1})^T R p_{i+1}),

where p_{i+1} = r_i + β_{i+1} p_i and

    β_{i+1} = −((p_i)^T R r_i) / ((p_i)^T R p_i).

Here, w_0 represents the initial guess at the solution and r_0 is the residual error due to the initial guess. Many iterative algorithms fit into the framework of equations (2), but other formulations are possible.
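With the example choices above (f the identity on the residual, g the exact line-search step, and β_{i+1} chosen to make successive directions R-conjugate), the iteration amounts to a conjugate-direction method. The following is a minimal sketch, not an implementation from this document; the function name is ours, we read the scalar g(r_i) as the step size applied along the direction p_{i+1}, and we assume R is symmetric positive definite (as a covariance matrix typically is):

```python
import numpy as np

def iterative_solve(R, h, num_iter):
    """Sketch of iteration (2): f(h - Rw) = h - Rw, conjugate-direction step g.
    Assumes R is symmetric positive definite."""
    w = np.zeros_like(h)        # w_0: initial guess at the solution
    r = h - R @ w               # r_0: residual error of the initial guess
    p = r.copy()                # first search direction
    for _ in range(num_iter):
        Rp = R @ p
        alpha = (p @ r) / (p @ Rp)       # g(r_i): exact step along p_{i+1}
        w = w + alpha * p                # w_{i+1} = w_i + g(r_i) p_{i+1}
        r = h - R @ w                    # r_{i+1} = f(h - R w_{i+1})
        if np.linalg.norm(r) <= 1e-12 * np.linalg.norm(h):
            break                        # residual at roundoff level; stop
        beta = -(p @ (R @ r)) / (p @ Rp) # beta_{i+1} makes p's R-conjugate
        p = r + beta * p                 # p_{i+1} = r_i + beta_{i+1} p_i
    return w
```

For a well-conditioned R, a handful of iterations already yields a small residual; the roundoff guard is only a safety net for the case where the iteration count exceeds what the problem needs.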
Iterative linear systems solvers are not necessarily guaranteed to converge to the true solution to equation (1), but these algorithms generally determine a solution that is “close” to the right answer. The advantages of employing an iterative method include reduced complexity, numerical tractability, and/or less sensitivity to near-singularity of the matrix R. There is, however, one major disadvantage of using an iterative algorithm. The disadvantage is that it is often unclear how many iterations are required to achieve an answer that is “good enough”. This disadvantage will herein be called the stopping criteria problem.
There are at least two general methods of solving the stopping criteria problem.
Empirical Experimentation
In this method, a number of representative Rw = h problems for an intended application are solved using an iterative algorithm. The number of iterations required to determine a "good enough" solution is recorded for each problem. The number of iterations then employed in practice for such problems would be a function of the maximum number needed to solve any one of the representative problems.
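This calibration can be sketched as follows. The solver, threshold, and function names here are our assumptions for illustration: steepest descent stands in for whatever iterative algorithm is used in practice, and a squared-residual threshold stands in for the "good enough" test:

```python
import numpy as np

def iterations_needed(R, h, tol, max_iter=500):
    """Count iterations until ||h - Rw||^2 <= tol * ||h||^2.
    Steepest descent is a stand-in for the actual iterative solver."""
    w = np.zeros_like(h)
    for k in range(max_iter):
        r = h - R @ w
        if r @ r <= tol * (h @ h):
            return k
        w = w + ((r @ r) / (r @ (R @ r))) * r  # exact line search along r
    return max_iter

def calibrate(problems, tol=1e-6):
    """Empirical experimentation: record the iteration count for each
    representative (R, h) problem and keep the maximum."""
    return max(iterations_needed(R, h, tol) for R, h in problems)
```

The returned maximum would then be used as the fixed iteration count in deployment.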
This method has at least two problems. First, if the matrix R is ill-conditioned for some reason, the number of iterations determined from the representative Rw = h problems may, in practical use, result in very poor solutions. This in turn may affect other parts of the application that depend on a reliable solution to Rw = h; a poor solution for the combining weights could, for example, cause an extremely high block error rate. The second problem is that it may not be possible to determine a representative set of Rw = h problems. In such cases, it is likely that this method will set the iteration requirement higher than needed for some, or most, Rw = h problems, and computation power is wasted on extra iterations.
Adaptive Stopping Criteria
This approach can be used alone or in combination with the empirical experimentation approach. The idea here is to compute a metric of the fidelity of the solution and halt the iterative process when the metric crosses a threshold. One example of such a metric is ρ_i = ∥h − Rw_i∥₂², where ∥q∥₂ is the 2-norm of vector q. This metric is compared to ε∥h∥₂², where ε is some small predetermined quantity, and the iterative process is stopped when ρ_i < ε∥h∥₂².
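A minimal sketch of this test (the function name is ours), which an iterative loop would evaluate once per iteration:

```python
import numpy as np

def should_stop(R, h, w_i, eps):
    """Adaptive stopping criterion: halt when rho_i = ||h - R w_i||^2
    falls below eps * ||h||^2."""
    rho_i = np.sum((h - R @ w_i) ** 2)  # squared 2-norm of the residual
    return rho_i < eps * np.sum(h ** 2)
```

Note that if ε is chosen too small the test may never trigger, so in practice it is paired with a cap on the iteration count.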
This method also has at least two problems. First, no experimental evidence has been found showing that it prevents divergence if the matrix R is ill-conditioned. Second, the choice of the value of the quantity ε requires some experimentation in order to get reasonable performance. As with the empirical experimentation method, it may not be possible to determine a representative set of Rw = h problems from which values of ε can be chosen. Therefore, ε may be chosen so small that some, or even most, Rw = h problems will use more iterations than necessary. Again, computational power is wasted.