A fixed point-to-point radio link is a two-way communication system connecting two fixed locations, each location comprising at least a transmitter unit and a receiver unit, i.e., a transceiver. The transceiver is often equipped with, or connectable to, at least one highly directive antenna.
Fixed point-to-point radio links are commonly deployed in networks used for cellular backhaul, and are therefore often subject to strict requirements on performance, e.g., in terms of allowed bit error rate and link availability. Such requirements are, e.g., defined by standardization organizations such as the European Telecommunications Standards Institute, ETSI, and the American National Standards Institute, ANSI.
Common requirements on radio link availability are on the order of 99.99% to 99.999%, meaning that the radio link cannot be down for more than 0.01% to 0.001% of the year. The requirement on bit error rate, BER, is often on the order of 10^-11. This complicates the design of transceivers for fixed point-to-point radio links and drives cost compared to, e.g., design of transceivers used for cellular access, which are often associated with less strict requirements on availability and bit error rates. It is always desirable to keep requirements as low as possible since strict requirements often drive complexity and cost.
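The availability figures above translate directly into a yearly downtime budget. A minimal sketch of this arithmetic (the function name and rounding are illustrative, not from the source):

```python
# Illustrative arithmetic: converting an availability requirement into the
# maximum allowed downtime per year.

HOURS_PER_YEAR = 365 * 24  # 8760 h, ignoring leap years


def max_downtime_minutes(availability: float) -> float:
    """Maximum yearly downtime, in minutes, for a given availability."""
    return (1.0 - availability) * HOURS_PER_YEAR * 60.0


print(round(max_downtime_minutes(0.9999), 1))   # 99.99%  -> ~52.6 min/year
print(round(max_downtime_minutes(0.99999), 1))  # 99.999% -> ~5.3 min/year
```

In other words, a 99.999% availability requirement leaves the link less than six minutes of outage per year, which explains why such requirements drive transceiver complexity and cost.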
The distance in frequency between frequency channels in a radio link network, i.e., how closely adjacent frequency channels are allocated in the network, is to a large extent determined by the carrier to interference, C/I, requirement of the communication system deployed in the network. The lower the C/I requirement, the higher the interference power that can be accommodated, and thus the more densely the frequency channels can be allocated in the network. A dense frequency channel allocation implies a higher network spectral efficiency in terms of bits/sec/Hz and is therefore highly desirable since frequency spectrum is often a scarce resource, especially in radio link networks.
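The relationship between channel spacing and network spectral efficiency can be sketched as the per-channel bit rate divided by the channel spacing. The numeric values below are hypothetical, chosen only to show that halving the spacing for the same bit rate doubles the figure:

```python
# Hedged sketch (example figures are hypothetical): network spectral
# efficiency in bit/s/Hz as per-channel bit rate divided by channel spacing.
# A relaxed C/I requirement allows tighter spacing and hence a higher figure.


def network_spectral_efficiency(bit_rate_bps: float,
                                channel_spacing_hz: float) -> float:
    """Spectral efficiency in bit/s/Hz over the allocated frequency grid."""
    return bit_rate_bps / channel_spacing_hz


# Example: a 400 Mbit/s carrier on a 56 MHz vs a 112 MHz channel raster.
print(round(network_spectral_efficiency(400e6, 56e6), 2))   # ~7.14 bit/s/Hz
print(round(network_spectral_efficiency(400e6, 112e6), 2))  # ~3.57 bit/s/Hz
```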
The radio propagation channel over the radio link between a transmitter and a far end receiver often includes fading phenomena. This fading alters the attenuation of a transmitted radio signal compared to normal free space path loss. Fading caused by, e.g., rain and multipath propagation usually results in an increased channel attenuation. This type of fading therefore decreases the input power in a receiver, and is usually referred to as down fading.
A common practice when planning a fixed point-to-point radio link network is to allow a certain receiver threshold degradation due to interference, e.g., interference from adjacent frequency channels. Depending on the acceptable threshold degradation, a radio planner must decide whether adjacent channels can be allocated relatively close in frequency or not, i.e., how high a bandwidth or transmission symbol rate the system can utilize and how much bandwidth must be allocated to guard bands. This analysis is almost always performed assuming a worst-case scenario, meaning that the link itself is faded down into its own noise floor. In this worst-case scenario even relatively weak interference can pose problems and cause, e.g., bit errors. This is a drawback since it generates very strict requirements on the level of interference that can be accommodated.
The performance degradation suffered from interference, e.g., adjacent channel interference, is related to a degradation of the noise floor of the communication system. However, during normal operation the radio link is not faded, and the adjacent channel interferers then have insignificant power relative to the main communication signal. That is, the signal to noise ratio, SNR, is good enough and the input power is high, so the noise degradation due to adjacent channels is negligible. Since normal radio link network planning is based on very unlikely fading scenarios (occurring on the order of 0.01% to 0.001% of the time), the capacity penalty due to fading together with requirements on C/I becomes unreasonably large, which is a drawback.
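The noise floor degradation described above can be quantified by treating the interference as noise-like: the effective noise floor rises from N to N + I, degrading the receiver threshold by 10·log10(1 + I/N) dB. A minimal sketch, with hypothetical I/N values, showing why interference near the noise floor only matters when the link is faded down toward that floor:

```python
import math

# Hedged illustration (I/N values are hypothetical): receiver threshold
# degradation when noise-like interference of power I is added to thermal
# noise of power N, i.e., 10*log10(1 + I/N) in dB.


def threshold_degradation_db(i_over_n_db: float) -> float:
    """Threshold degradation in dB for a given I/N ratio in dB."""
    return 10.0 * math.log10(1.0 + 10.0 ** (i_over_n_db / 10.0))


print(round(threshold_degradation_db(-10.0), 2))  # I 10 dB below N -> ~0.41 dB
print(round(threshold_degradation_db(0.0), 2))    # I equal to N    -> ~3.01 dB
```

During unfaded operation the input power sits far above the noise floor, so even interference equal to N leaves the SNR ample; only in the rare down-faded state does the few-dB threshold degradation become critical.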
ETSI and ANSI both specify a transmitter spectrum mask and a receiver adjacent channel C/I requirement for radio link usage. Usually the adjacent channel C/I requirement is the more stringent of the two, with the result that the spectrum mask cannot be fully utilized for transmission. This is a drawback since the transmission symbol rate must be decreased compared to the rate allowed by the transmitter spectrum mask.