DTMF signalling is used extensively in voice communications networks for the signalling of dialled numbers and for access to services during the progress of a call. The signalling tones are generated by a user terminal by the operation of keys on a keypad. Operation of each key causes the generation of a respective pair of audio frequency tones. The pairs of tones are decoded at the system switching centres or nodes to recover the corresponding digits that have been dialled by the subscriber, so that routing of a call may be determined or access to an appropriate service may be provided.
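The key-to-tone-pair mapping described above follows the standard DTMF assignment (ITU-T Q.23): each key produces one low-group (row) tone and one high-group (column) tone. The following sketch illustrates this; the function name is ours, not from the source.

```python
LOW_TONES = (697, 770, 852, 941)       # row (low-group) frequencies in Hz
HIGH_TONES = (1209, 1336, 1477, 1633)  # column (high-group) frequencies in Hz
KEYPAD = ("123A", "456B", "789C", "*0#D")  # rows of the 4x4 DTMF keypad

def tones_for_key(key: str) -> tuple[int, int]:
    """Return the (low, high) frequency pair generated when a key is pressed."""
    for row, keys in enumerate(KEYPAD):
        col = keys.find(key)
        if col != -1:
            return LOW_TONES[row], HIGH_TONES[col]
    raise ValueError(f"not a DTMF key: {key!r}")
```

For example, pressing '5' generates the pair 770 Hz and 1336 Hz; a decoder at a switching centre recovers the digit by identifying which one frequency from each group is present.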
Traditionally, voice traffic has been carried on Time Division Multiplex (TDM) networks in which traffic is allocated to 64 kb/s channels in TDM frames. Such networks are circuit based in nature. However, an increasing volume of long-haul voice traffic is now being transported over Asynchronous Transfer Mode (ATM) or Internet Protocol (IP) networks, which transport traffic in cells or packets rather than over dedicated circuits. In a typical arrangement, voice traffic from a TDM network is packaged into cells at the boundary of an ATM or IP network for transport across that network to a remote TDM network. At the remote boundary of the ATM or IP network, the TDM frame structure is reinstated.
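The 64 kb/s channel rate follows from standard telephony sampling: 8000 samples per second at 8 bits per sample. The arithmetic, and the resulting number of channel samples carried per cell or packet interval, can be sketched as below; the packet duration used in the example is illustrative only.

```python
SAMPLE_RATE_HZ = 8000  # standard telephony sampling rate
BITS_PER_SAMPLE = 8    # e.g. G.711 companded samples

def channel_bitrate() -> int:
    """Bit rate of one TDM voice channel: 8000 samples/s * 8 bits = 64 kb/s."""
    return SAMPLE_RATE_HZ * BITS_PER_SAMPLE

def samples_per_packet(packet_ms: float) -> int:
    """Number of samples of one channel packaged into a cell/packet
    covering the given duration in milliseconds."""
    return round(SAMPLE_RATE_HZ * packet_ms / 1000)
```

For instance, a 20 ms packetisation interval carries 160 samples of each channel.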
A particular problem with such an arrangement is that of transporting the DTMF signalling tones across the ATM or IP network. At the boundary of the packet network, the TDM traffic is packaged into cells or packets using speech compression algorithms which minimise the bandwidth that is required. It has been found that the use of compression algorithms on pure tones, such as DTMF signalling tones, causes distortion of those tones and thus renders their subsequent detection and decoding uncertain. A potential solution to this problem is to detect the DTMF tones at the input to the packet network and to decode and convert those tones into corresponding digital information. However, current DTMF tone detectors are relatively slow in operation and may thus allow short bursts of DTMF tones to ‘leak’ across the packet network as compressed speech. This causes significant degradation in the quality of the far-end reconstituted DTMF tones and, in extreme cases, can lead to double detection of tones and/or incorrect detection. Further, some DTMF tone detectors can respond incorrectly to other signals such as fax and modem tones, or even to speech components of a similar frequency.
The conventional DTMF detectors currently in use require one hundred and two samples for detection (taking 12.75 milliseconds) and a further one hundred and two samples (taking a further 12.75 milliseconds) for decoding using a Goertzel algorithm. This process thus requires a total time of almost twenty-six milliseconds. This is significantly in excess of twenty milliseconds, which is now the generally accepted absolute maximum detection time necessary to overcome the aforementioned problem of tone leakage as compressed speech. A maximum detection time of 20 milliseconds is specified by ITU-T Recommendation I.366.2.
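The Goertzel algorithm referred to above evaluates the signal energy at a single target frequency using a simple second-order recurrence, which is why it is commonly used for DTMF decoding. A minimal sketch follows, together with the detection-time arithmetic from the text (102 samples at the 8 kHz telephony rate); the function and constant names are ours.

```python
import math

SAMPLE_RATE_HZ = 8000  # standard telephony sampling rate

def goertzel_power(samples, target_hz, sample_rate=SAMPLE_RATE_HZ):
    """Squared magnitude of the DFT bin nearest target_hz,
    computed with the Goertzel recurrence."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)  # nearest DFT bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # squared magnitude of the selected bin
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Timing arithmetic from the text: one 102-sample block at 8 kHz
BLOCK_SAMPLES = 102
block_ms = 1000 * BLOCK_SAMPLES / SAMPLE_RATE_HZ  # 12.75 ms per block
```

Two such blocks, one for detection and one for decoding, give the total of roughly 25.5 milliseconds quoted above, which exceeds the 20 millisecond limit.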