Content-based audio recognition is the process of identifying similarities between the audio content of different audio files. Performing content-based audio recognition usually involves comparing the audio content of a given audio file, called the query audio file, to the audio content of one or more other audio files, called the reference audio file(s). In many commercial applications, the number of reference audio files is very large, possibly on the order of millions.
The need for accurate, fast, and scalable content-based audio recognition is readily apparent in a wide range of practical situations. For example, the owner of a large musical catalogue may wish to determine whether a newly delivered song exists within that catalogue, even if the catalogue contains many millions of entries, and even if the newly delivered song has no associated metadata beyond the audio signal itself.
Many different content-based audio identification methods are well-known in the prior art. Generally speaking, such methods consist of four phases. In a reference fingerprint ingestion phase, one or more fingerprints, called reference fingerprints, are extracted from the audio content information in each of the reference audio files, and ingested into a database, called the reference database. In a query fingerprint extraction phase, one or more fingerprints, called query fingerprints, are extracted from the audio content information in the query audio file. In a fingerprint matching phase, the query fingerprints are compared to the reference fingerprints in the reference database to assess their similarity. Finally, in a decision-making phase, a set of decision-making rules is applied to assess whether the audio content of the query audio file is similar (or identical) to the audio content of one or more of the reference audio files.
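For concreteness, the four phases can be sketched in a few lines of Python. The fingerprint used here is a deliberately simple stand-in (a hash of each frame's coarse energy contour), and all function names and parameters are hypothetical illustrations, not taken from any particular prior-art method:

```python
import hashlib

def extract_fingerprints(samples, frame=1024):
    """Toy fingerprint: hash each frame's coarse energy contour."""
    prints = []
    for i in range(0, len(samples) - frame + 1, frame):
        window = samples[i:i + frame]
        step = frame // 8
        # Quantize the frame into 8 mean-absolute-energy values.
        contour = tuple(
            round(sum(abs(s) for s in window[j:j + step]) / step, 1)
            for j in range(0, frame, step)
        )
        code = hashlib.md5(repr(contour).encode()).hexdigest()[:8]
        prints.append((code, i))  # (code, time offset within the file)
    return prints

def ingest(references):
    """Phase 1: build an inverted index from fingerprint code to occurrences."""
    db = {}
    for ref_id, samples in references.items():
        for code, offset in extract_fingerprints(samples):
            db.setdefault(code, []).append((ref_id, offset))
    return db

def identify(db, query_samples, min_matches=2):
    """Phases 2-4: extract query fingerprints, match them, and decide."""
    votes = {}
    for code, _ in extract_fingerprints(query_samples):
        for ref_id, _ in db.get(code, []):
            votes[ref_id] = votes.get(ref_id, 0) + 1
    best = max(votes, key=votes.get, default=None)
    # Simple decision rule: require a minimum number of matching codes.
    return best if best is not None and votes[best] >= min_matches else None
```

A practical system would use robust spectral fingerprints and offset-consistency checks in the decision phase, but the division into the four phases is the same.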
A core problem with prior-art content-based audio identification methods is that they tend to perform very poorly on audio signals that have undergone audio obfuscations, such as changes in pitch or tempo, the application of filtering or compression, or the addition of background noise.
By their very nature, the fingerprints that content-based audio identification methods extract have a functional link to the sounds or events (such as rhythmic or melodic structure or timbre) in the corresponding audio files. In prior-art content-based audio identification methods, these fingerprints are typically extracted using pre-specified recipes. For example, a method for extracting fingerprints is disclosed in U.S. Pat. No. 8,586,847, by Ellis et al. In the disclosed method, a music sample is filtered into a plurality of frequency bands, and inter-onset intervals are detected within each of these bands. Codes are generated by associating frequency bands and inter-onset intervals. For a given sample, all generated codes, along with the time stamps indicating when the associated onset occurred within the music sample, are combined to form a fingerprint.
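The band/interval code generation described above can be sketched as follows. This sketch assumes per-band energy envelopes have already been produced by a filter bank, and uses an illustrative threshold-based onset detector; none of these choices are taken from the Ellis et al. disclosure itself:

```python
def detect_onsets(envelope, threshold=1.5):
    """Flag frames where band energy jumps by more than `threshold` times
    the previous frame's energy (an illustrative onset criterion)."""
    onsets = []
    for t in range(1, len(envelope)):
        if envelope[t] > threshold * max(envelope[t - 1], 1e-9):
            onsets.append(t)
    return onsets

def onset_codes(band_envelopes):
    """Associate each band with its inter-onset intervals, keeping the
    timestamp of the first onset of each pair; the collection of all
    (code, timestamp) pairs forms the fingerprint."""
    codes = []
    for band, envelope in enumerate(band_envelopes):
        onsets = detect_onsets(envelope)
        for first, second in zip(onsets, onsets[1:]):
            codes.append(((band, second - first), first))
    return codes
```

For example, `onset_codes([[1, 1, 5, 1, 1, 5, 1]])` detects onsets at frames 2 and 5 in band 0 and emits the single code `((0, 3), 2)`: band 0 paired with an inter-onset interval of 3 frames, timestamped at the first onset.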
In such prior-art content-based audio recognition methods, the application of audio obfuscations can influence extracted fingerprints in unknown ways. Therefore, the values of fingerprints extracted from unobfuscated audio can vary considerably from the values of fingerprints extracted from the same audio after one or more obfuscations have been applied. This can cause such audio recognition methods to perform poorly in the presence of audio obfuscations. The application of audio obfuscations is common to many practical situations, such as DJs mixing together different songs to create a continuous mix of music. Therefore, there is a clear practical need for content-based audio recognition that performs well in the presence of audio obfuscations.
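This fragility can be demonstrated with a toy experiment. Here fingerprints are computed as exact hashes of frame contents (a deliberately naive scheme used only for illustration, not any particular prior-art method), and a mild speed-up of a synthetic tone stands in for a tempo/pitch obfuscation:

```python
import hashlib
import math

def frame_hashes(samples, frame=512):
    """Hash each frame's rounded samples exactly (no robustness at all)."""
    return [
        hashlib.md5(repr([round(s, 2) for s in samples[i:i + frame]]).encode()).hexdigest()
        for i in range(0, len(samples) - frame + 1, frame)
    ]

# A synthetic tone, and the same tone sped up by 3% (a mild obfuscation).
original = [math.sin(0.02 * i) for i in range(4096)]
sped_up = [math.sin(0.02 * 1.03 * i) for i in range(4096)]

shared = set(frame_hashes(original)) & set(frame_hashes(sped_up))
# Even this slight shift leaves essentially no frame hashes in common.
```

Because the obfuscation perturbs every sample, no exactly-matching frames survive, which is why fingerprints intended to withstand obfuscations must be derived from more invariant properties of the signal.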
Another core problem with prior-art content-based audio identification methods is that they tend to perform very poorly at identifying query audio in which some, but not all, of the audio content is shared with one or more reference audio files. Many modern music producers make use of samples taken from other songs, called parent works, to make new songs, called derivative works. Therefore, there is also a clear practical need for content-based audio recognition that can identify a reference audio file that shares at least part of its audio content with a query audio file, to enable automatic detection of the similarities between derivative works and the parent works from which they were derived.