The present disclosure relates to editing digital audio data.
Different visual representations of audio data are commonly used to display different features of the audio data. For example, an amplitude display shows a representation of audio intensity in the time domain (e.g., a graphical display with time on the x-axis and intensity on the y-axis). Similarly, a frequency spectrogram shows a representation of frequencies of the audio data in the time domain (e.g., a graphical display with time on the x-axis and frequency on the y-axis).
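As one illustration of the amplitude display described above, the following sketch (hypothetical helper name, pure Python) computes a per-frame RMS intensity curve from raw samples; the returned times would go on the x-axis and the intensities on the y-axis of such a graphical display.

```python
import math

def amplitude_envelope(samples, frame_size, sample_rate):
    """Per-frame RMS intensity: the data behind a time-vs-amplitude display.

    Illustrative helper (not from the disclosure). Returns (times, intensities),
    where times[i] is the start of frame i in seconds and intensities[i] is the
    root-mean-square amplitude of that frame.
    """
    times, intensities = [], []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        rms = math.sqrt(sum(s * s for s in frame) / frame_size)
        times.append(start / sample_rate)
        intensities.append(rms)
    return times, intensities

# A tone that fades out: the displayed intensity falls over time.
sr = 1000
tone = [math.sin(2 * math.pi * 50 * i / sr) * (1.0 - i / 1000) for i in range(1000)]
times, intensities = amplitude_envelope(tone, frame_size=100, sample_rate=sr)
```

A frequency spectrogram is built analogously, except that each frame is transformed to the frequency domain (e.g., with a short-time Fourier transform) before display.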
Audio data can be edited. For example, the audio data may include noise or other unwanted components. Removing these unwanted components improves audio quality (i.e., the removal of noise components provides a clearer audio signal). Alternatively, a user may apply different processing operations to portions of the audio data to generate particular audio effects.
Phase noise is the frequency domain representation of rapid, short-term, random fluctuations in the phase of a wave, caused by time domain instabilities (e.g., jitter). The phase of an oscillation or wave is the fraction of a complete cycle corresponding to an offset in the displacement from a specified reference point at time t=0. An oscillator, for example, can generate a series of waves that can simulate rhythms, patterns, and repetition in nature. Two oscillators that have the same frequency and different phases have a phase difference, and the oscillators are said to be out of phase with each other. It is common for acoustic (sound) waves to become superimposed in their transmission medium such that the phase difference determines whether the acoustic waves reinforce or weaken each other (e.g., causing phase noise). Additionally, complete cancellation is possible for waves with equal amplitudes. Thus, editing operations to adjust phase differences can be performed to reduce or eliminate phase noise or signal (e.g., wave) cancellation.
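The superposition behavior described above can be demonstrated numerically: two equal-amplitude waves of the same frequency cancel completely when they are 180 degrees out of phase, and reinforce (doubling in amplitude) when they are in phase. The sketch below uses illustrative values not taken from the disclosure.

```python
import math

# Two sine waves with equal amplitude and frequency but opposite phase
# (a phase difference of pi radians, i.e., 180 degrees).
sample_rate = 1000          # samples per second (illustrative value)
freq = 50.0                 # Hz
n = 100

wave_a = [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]
wave_b = [math.sin(2 * math.pi * freq * i / sample_rate + math.pi)
          for i in range(n)]

# Superimposing out-of-phase waves: the samples cancel completely.
mixed = [a + b for a, b in zip(wave_a, wave_b)]
peak = max(abs(s) for s in mixed)   # effectively zero

# Superimposing in-phase waves instead: the amplitude doubles (reinforcement).
in_phase = [a + a for a in wave_a]
```

An editing operation that adjusts the phase of one wave relative to the other can therefore move the result anywhere between full cancellation and full reinforcement.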
An audio channel is a means for delivering audio data (e.g., audio signals) from one point to another. Audio data (e.g., an audio waveform) submitted to the channel input results in a similar waveform at the channel output. Panning is the spread of a monaural (e.g., single channel) signal in a stereo or multi-channel sound field. An audio pan control can be used in a mix to create the impression that audio data is moving from one channel to the other. For example, when audio data is directed to different channels located in different places, panning the audio data from one channel to another can create the impression that the audio data is moving from a first place to a second, different place.
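A pan control of this kind can be sketched as a function that splits a mono signal between left and right channels according to a pan position. The function name and the constant-power (sine/cosine) pan law below are illustrative choices, not details from the disclosure; sweeping the pan position over time creates the impression of motion between the two channels.

```python
import math

def pan_mono(signal, pan):
    """Spread a mono signal into a stereo (left, right) pair.

    Illustrative sketch. `pan` runs from -1.0 (hard left) through 0.0
    (center) to +1.0 (hard right). A constant-power pan law is assumed,
    so perceived loudness stays roughly constant as the position changes.
    """
    angle = (pan + 1.0) * math.pi / 4.0     # map [-1, 1] onto [0, pi/2]
    left_gain = math.cos(angle)
    right_gain = math.sin(angle)
    left = [s * left_gain for s in signal]
    right = [s * right_gain for s in signal]
    return left, right

mono = [0.5, -0.5, 0.25]
hard_left = pan_mono(mono, -1.0)    # all energy in the left channel
center = pan_mono(mono, 0.0)        # equal energy in both channels
```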
Panning can also be used in an audio mixer to reduce (e.g., to 0%) or enhance (e.g., beyond 100%) the stereo field of a stereo signal. For instance, the left and right channels of a stereo source can be panned straight up (e.g., sent equally to both the left output and the right output of the mixer), creating a dual mono signal.
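The straight-up case can be checked with a small mixer sketch: when both channels of a stereo source are panned to center, each input feeds both outputs equally and the two outputs become identical (dual mono). A simple linear pan law is assumed here for clarity; the function and its parameters are illustrative, not from the disclosure.

```python
def mix_stereo(left_in, right_in, pan_left, pan_right):
    """Pan each channel of a stereo source into a stereo output bus.

    Illustrative sketch. Pan values run from -1.0 (hard left) to +1.0
    (hard right); a linear pan law splits each input between the outputs.
    """
    def gains(pan):
        # (gain to left output, gain to right output)
        return (1.0 - pan) / 2.0, (1.0 + pan) / 2.0

    ll, lr = gains(pan_left)
    rl, rr = gains(pan_right)
    out_left = [l * ll + r * rl for l, r in zip(left_in, right_in)]
    out_right = [l * lr + r * rr for l, r in zip(left_in, right_in)]
    return out_left, out_right

# Panning both channels straight up (pan = 0.0) sends each input equally
# to both outputs, collapsing the stereo field to dual mono.
L = [1.0, 0.0, 0.5]
R = [0.0, 1.0, -0.5]
out_l, out_r = mix_stereo(L, R, 0.0, 0.0)
# out_l == out_r == [0.5, 0.5, 0.0]
```

Conversely, pan values pushed past the extremes of this range would widen the stereo field beyond 100%.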