The FFT and IFFT are extremely important transforms in communications technology because, by way of example, they convert the description of a signal in the time domain into its description in the frequency domain, and vice versa.
In digital signal processing, an "N-point" discrete Fourier transform, called DFT below, is frequently calculated, which is defined as follows:

X(k) = Σ_{n=0}^{N−1} x(n) · W^{nk},  k = 0, 1, …, N−1,

where W = e^{−j2π/N}.
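The definition above can be sketched directly in a few lines; this is a minimal illustration of the formula, not an efficient implementation (the function name `dft` is chosen here for illustration):

```python
import cmath

def dft(x):
    """Direct N-point DFT: X(k) = sum over n of x(n) * W^(n*k)."""
    N = len(x)
    W = cmath.exp(-2j * cmath.pi / N)  # twiddle factor base W = e^(-j*2*pi/N)
    return [sum(x[n] * W ** (n * k) for n in range(N)) for k in range(N)]
```

For a constant input of length N, all the signal energy lands in the k = 0 bin, which is a quick sanity check of the sign and normalization conventions used here.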
The complexity of calculating the DFT directly is proportional to O(N²). By using the FFT, it is possible to reduce the complexity of the calculation to O(N log N). This is done by hierarchically splitting the calculation into transforms of shorter sequences.
There are two basic algorithms for calculating the FFT. One is called "Decimation in Frequency" (DIF) and the other is called "Decimation in Time" (DIT). The text below deals with the DIT algorithm by way of example.
To calculate the FFT, the "in-place" variant is preferably used, in which intermediate results of the butterfly calculation are written back to the same memory locations from which their operands were read, and are read from there again for further processing, as shown in FIG. 1. This provides for particularly economical use of the memory.
FIG. 2 shows the calculation operation of the "in-place" variant for N=8 in the form of a signal flowchart. As can be seen from FIG. 2, at the start of the calculation, the memory needs to contain the data in a particular arrangement, usually referred to as "bit-reversed" order. At the end of the calculation, the result can be read linearly. The calculation itself is performed in a plurality of stages, as shown in the signal flowchart in FIG. 2; in the cited example, three stages are necessary. In each case, two data items are read from the memory, the butterfly is then calculated, and the two results are written back to the same locations in the memory. In this context, however, the data items are not necessarily situated in adjacent memory locations. In addition, the access pattern differs from one stage to the next.
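The stage-by-stage in-place procedure described above can be sketched as follows. This is an illustrative sketch for the radix-2 DIT case, assuming N is a power of two; the helper names `bit_reversed` and `fft_in_place` are chosen here for illustration:

```python
import cmath

def bit_reversed(x):
    """Return a copy of x permuted into bit-reversed index order."""
    N = len(x)
    bits = N.bit_length() - 1
    return [x[int(format(i, f'0{bits}b')[::-1], 2)] for i in range(N)]

def fft_in_place(a):
    """In-place radix-2 DIT FFT. `a` must hold the data in bit-reversed
    order; on return it holds the spectrum in natural (linear) order."""
    N = len(a)
    m = 2
    while m <= N:                          # one pass per stage, log2(N) stages
        wm = cmath.exp(-2j * cmath.pi / m) # stage twiddle factor
        for start in range(0, N, m):       # groups of butterflies in this stage
            w = 1.0
            for j in range(m // 2):
                u = a[start + j]                   # read two data items,
                t = w * a[start + j + m // 2]      # m/2 locations apart,
                a[start + j] = u + t               # then write both results
                a[start + j + m // 2] = u - t      # back to the same locations
                w *= wm
        m *= 2
    return a
```

For N = 8 the outer loop runs three times, matching the three stages of the signal flowchart; the operand spacing m/2 grows from 1 to N/2 across the stages, which is why the data items are not necessarily adjacent in memory.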
If the FFT is implemented in integrated circuits, the complexity is primarily determined by the memory used. In this context, large memories are usually organized in pages, which means that access to a memory cell within such a page is very fast; by way of example, such a memory access operation can be carried out within one clock cycle. Changing from one page to another takes significantly longer, however, i.e. a plurality of clock cycles. The throughput of page-oriented memories can therefore be increased by processing one page of the memory as completely as possible and changing pages only when addresses on another page are actually required. In the case of the aforementioned "in-place" FFT, however, the data are, in principle, accessed in a highly irregular order. Small memories do not have this drawback, since access to their individual cells is possible without restriction.
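The irregular access pattern can be made concrete by listing, per stage, the memory-location pairs each butterfly touches. A small sketch, assuming the radix-2 in-place scheme described above (the function name `butterfly_addresses` is chosen here for illustration):

```python
def butterfly_addresses(N):
    """Per stage of an N-point radix-2 in-place FFT, list the pairs of
    memory locations each butterfly reads and writes."""
    stages = []
    m = 2
    while m <= N:
        # each butterfly touches locations that are m/2 apart
        pairs = [(s + j, s + j + m // 2)
                 for s in range(0, N, m) for j in range(m // 2)]
        stages.append(pairs)
        m *= 2
    return stages
```

For N = 8, the first stage pairs adjacent locations such as (0, 1), while the last stage pairs locations N/2 = 4 apart, such as (0, 4). With a large N, the operands of a late-stage butterfly can thus lie on different memory pages, forcing a page change for nearly every access.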
The speed of the FFT is therefore primarily limited by the interface to the memory.