a. Field of the Invention
This invention relates to a method of generating sample error coefficients, in particular for use in an audio signal assessment system.
Signals carried over telecommunications links can undergo considerable transformations, such as digitisation, encryption and modulation. They can also be distorted due to the effects of lossy compression and transmission errors.
The perceived quality of a speech signal carried over telecommunications links can be assessed in a subjective experiment. Such experiments aim to find the average user's perception of a system's speech quality by asking a panel of listeners a directed question and providing a limited response choice. For example, to determine listening quality, users are asked to rate “the quality of the speech” on a five-point scale from Bad to Excellent. The mean opinion score (MOS) for a particular condition is calculated by averaging the ratings of all listeners. However, subjective experiments are time consuming and expensive to run.
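As a minimal sketch of the averaging described above (the panel ratings below are hypothetical, not taken from any actual experiment):

```python
# Hypothetical listener ratings on the five-point scale
# (1 = Bad, 2 = Poor, 3 = Fair, 4 = Good, 5 = Excellent).
ratings = [4, 3, 5, 4, 4, 3, 4, 5]

# The mean opinion score (MOS) for this condition is the
# arithmetic mean of the ratings across the whole panel.
mos = sum(ratings) / len(ratings)
print(mos)  # 4.0
```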
Objective processes that aim to automatically predict the MOS value that a signal would produce in a subjective experiment are currently under development and are of application in equipment development, equipment testing, and evaluation of system performance.
Some objective processes require a known (reference) signal to be played through a distorting system (the communications network or other system under test) to derive a degraded signal, which is compared with an undistorted version of the reference signal. Such systems are known as “intrusive” quality assessment systems, because whilst the test is carried out the channel under test cannot, in general, carry live traffic.
The use of an automated system allows for more consistent assessment than human assessors could achieve, and also allows the use of compressed and simplified test sequences, which give spurious results when used with human assessors because such sequences do not convey intelligible content.
b. Related Art
A number of patents and applications relate to intrusive quality assessment, most particularly European Patent 0647375, granted on 14 Oct. 1998. In that invention, two initially identical copies of a test signal are used. The first copy is transmitted over the communications system under test. The resulting signal, which may have been degraded, is compared with the reference copy to identify audible errors in the degraded signal. These audible errors are assessed to determine their perceptual significance; that is, errors that are considered significant by human listeners are given greater weight than those that are not considered so significant. In particular, inaudible errors are perceptually irrelevant and need not be assessed.
One problem with known methods of intrusive quality assessment is that even a slight difference between the sampling rates of the reference signal and the degraded signal can make the resultant MOS artificially low (i.e. the MOS predicted by the automated system does not match that which would be given by a human listener).
This problem can occur for sampling errors as small as 0.01%. It arises because, if the reference signal is sampled at rate R and the degraded signal is sampled at rate R+e, the difference e in sampling rate means that the spectral content of the two signals is no longer aligned in frequency. This alignment error is proportional to frequency and is therefore worst at high frequencies.
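The proportionality to frequency can be sketched numerically. The rates and test frequencies below are illustrative; a component of true frequency f in the degraded signal, when its samples are interpreted at the nominal rate R, appears at f·R/(R+e):

```python
# Sketch of the frequency misalignment caused by a sampling-rate
# mismatch (illustrative values).
R = 8000.0          # nominal sampling rate of the reference (Hz)
e = R * 0.0001      # a 0.01% sampling-rate error (0.8 Hz here)

misalignment = 0.0
for f in (100.0, 1000.0, 4000.0):
    # A component at true frequency f in the degraded signal,
    # interpreted at the nominal rate R, appears shifted to
    # f * R / (R + e).
    apparent = f * R / (R + e)
    misalignment = f - apparent   # grows in proportion to f
    print(f, round(misalignment, 4))
```

Even at this small rate error, the misalignment at 4 kHz is some forty times that at 100 Hz, which is why the spectral comparison degrades most at high frequencies.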
Sampling error is most likely to occur if one or more stages of the end-to-end chain, including the test system itself, includes an analogue stage. In this situation, the effective sample rates of the reference and degraded signals may be determined by different clock sources, and consequently any difference between the clock rates will result in a sampling error. Another source of error is up- or down-sampling performed in software that uses approximate sample conversion factors.
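A brief sketch of the second source of error. The truncated factor below is hypothetical, not drawn from any particular resampler; it merely illustrates how rounding a conversion ratio leaves a residual rate error:

```python
# Illustration (hypothetical values): a software resampler that
# uses a rounded conversion factor instead of the exact ratio
# introduces a small effective sampling-rate error.
src = 44100.0                # source sampling rate (Hz)
dst = 48000.0                # destination sampling rate (Hz)
exact = dst / src            # exact conversion ratio
approx = 1.088               # a hypothetical truncated factor

# The relative rate error left in the resampled signal:
rel_error = abs(approx - exact) / exact
print(rel_error)  # about 4e-4, i.e. 0.04%
```

A residual error of this size exceeds the 0.01% level at which, as noted above, the predicted MOS can already become artificially low.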
One of the requirements of any solution is that it must work in the presence of time-warping algorithms. This condition is satisfied by the present invention because it is based on an analysis of the periodic parts of the test signal: the purpose of a time-warping algorithm is to increase or decrease the duration of a part of a signal without changing the pitch period, i.e. the periodicity.
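The robustness of periodicity to duration changes can be illustrated with a simple sketch. This is not the patented method itself; it merely shows, for a synthetic tone, that a pitch period estimated from the first autocorrelation peak is unchanged when the signal is lengthened by repeating whole cycles, which is the kind of change a time-warping algorithm makes:

```python
import math

def period_by_autocorrelation(x, min_lag, max_lag):
    """Return the lag in [min_lag, max_lag] maximising the
    autocorrelation of x -- a crude pitch-period estimate."""
    best_lag, best_score = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        score = sum(x[i] * x[i - lag] for i in range(lag, len(x)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

period = 40                              # samples per cycle
cycle = [math.sin(2 * math.pi * i / period) for i in range(period)]
short = cycle * 5                        # original duration
long = cycle * 8                         # longer duration, same period

# Both durations yield the same estimated period of 40 samples.
print(period_by_autocorrelation(short, 20, 60))
print(period_by_autocorrelation(long, 20, 60))
```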