Digital audio content takes many forms, including, for example, music and movie files. In most instances, an audio signal is encoded; the encoding need not be for purposes of data-rate reduction, but could simply be for format conversion, to enable the storage or transmission of a resulting media file or stream, thereby allowing numerous deliveries or transmissions to occur simultaneously (if needed). The media file or stream can be received by different types of end user devices, where the encoded audio signal is decoded before being presented to the consumer through either built-in or detachable speakers. This has helped fuel consumers' appetite for obtaining digital media over the Internet. Creators and distributors of digital audio content (programs) have several industry standards at their disposal that can be used for encoding and decoding audio content. These include the Digital Audio Compression Standard (AC-3, E-AC-3), Revision B, Document A/52B, 14 Jun. 2005, published by the Advanced Television Systems Committee, Inc. (the “ATSC Standard”); European Telecommunications Standards Institute (ETSI) TS 101 154, Digital Video Broadcasting (DVB) based on the MPEG-2 Transport Stream; ISO/IEC 13818-7, Advanced Audio Coding (AAC) (the “MPEG-2 AAC Standard”); and ISO/IEC 14496-3 (“MPEG-4 Audio”), published by the International Organization for Standardization (ISO).
Audio content may be decoded and then processed (rendered) differently than it was originally mastered. For example, a mastering engineer could record an orchestra or a concert such that upon playback it would sound (to a listener) as if the listener were sitting in the audience of the concert, i.e., in front of the band or orchestra, with the applause being heard from behind. The mastering engineer could alternatively make a different rendering (of the same concert) so that, for example, upon playback the listener would hear the concert as if he were on stage (where he would hear the instruments “around him” and the applause “in front”). This is also referred to as creating a different perspective for the listener in the playback room, or rendering the audio content for a different “listening location” or a different playback room.
Audio content may also be rendered for different acoustic environments, e.g., playback through a headset, a smartphone speakerphone, or the built-in speakers of a tablet computer, a laptop computer, or a desktop computer. In particular, object-based audio playback techniques are now available, in which an individual digital audio object (a digital audio recording of, e.g., a single person talking, an explosion, applause, or background sounds) can be played back differently over any one or more speaker channels in a given acoustic environment.
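Rendering an individual audio object over a set of speaker channels can be illustrated with a minimal sketch. The function below is a hypothetical example: the function name, the two-channel setup, and the constant-power panning law are illustrative assumptions, not the rendering method of any particular standard or product.

```python
import math

def render_object_stereo(samples, pan):
    """Render a mono audio object onto two speaker channels.

    pan ranges from 0.0 (fully in the left channel) to 1.0 (fully in
    the right channel). Constant-power panning keeps the combined
    acoustic power roughly constant as the object moves between the
    two channels. Illustrative sketch only.
    """
    theta = pan * math.pi / 2.0
    left_gain = math.cos(theta)
    right_gain = math.sin(theta)
    left = [s * left_gain for s in samples]
    right = [s * right_gain for s in samples]
    return left, right
```

A real object-based renderer would generalize this to an arbitrary speaker layout (computing one gain per channel from the object's position), but the per-object, per-channel gain structure is the same idea.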
However, the tonal balance heard by a listener is affected when a previously mixed recording (of certain audio content) is rendered into a different acoustic environment, or rendered from a different listener perspective. To alleviate such tonal imbalance, mastering engineers apply equalization (EQ), or spectral shaping, to a digital audio signal in order to optimize the signal for a particular acoustic environment or for a particular listener perspective. For example, rendering a motion picture file for playback in a large movie theater may call for certain EQ to be applied (to the digital audio tracks of the motion picture file) to prevent the resulting sound from being too bright during playback. But rendering the same file for playback through a home theater system, e.g., as a DVD file or an Internet streaming movie file, calls for a different EQ because of the smaller room size (and other factors).
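The kind of spectral shaping described above can be sketched in a few lines. The example below is a hypothetical illustration, not a mastering-grade EQ: the first-order high-shelf structure and the parameter names are assumptions chosen to show how high-frequency content might be attenuated to make a rendering sound less bright.

```python
import math

def high_shelf(samples, sample_rate_hz, cutoff_hz, high_gain):
    """First-order high-shelf EQ sketch.

    Frequencies above roughly cutoff_hz are scaled by high_gain
    (e.g. 0.5 to tame brightness for a large theater), while low
    frequencies pass essentially unchanged. Illustrative only.
    """
    # One-pole low-pass coefficient for the chosen cutoff.
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate_hz)
    lp = 0.0
    out = []
    for x in samples:
        lp = (1.0 - a) * x + a * lp            # low-frequency part
        out.append(lp + high_gain * (x - lp))  # scale the remainder
    return out
```

With high_gain below 1 the shelf darkens the signal, and with high_gain above 1 it brightens it; a renderer could, in principle, select this gain (and the cutoff) per target acoustic environment.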