The modern communications era has brought about a tremendous expansion of wired and wireless networks. Computer, television, and telephony networks are undergoing unprecedented technological growth, fueled by consumer demand. Wireless and mobile networking technologies have met related demands while providing greater flexibility and immediacy of information transfer.
Current and future networking technologies continue to make information transfer easier and more convenient for users. Because electronic communication devices are now ubiquitous, people of all ages and education levels use them to communicate with other individuals or contacts, receive services, and share information, media, and other content. One area in which there is demand for easier information transfer is the provision of speech-based content via communication devices.
For instance, applications for voice user interfaces that create speech-based content are currently in use. The usefulness of these applications may be greatly enhanced if they are coupled with automatic sentiment detection (SD). At present, many applications performing sentiment detection operate solely at the text level, by analyzing the words of a transcript. However, speech typically carries sentiment information that supplements the words spoken. Carrying out signal processing independently or separately for the textual and speech-based sentiment tasks may unnecessarily increase processing load and latency, and may also reduce the battery life of a communication device.
As such, it may be beneficial to provide an efficient and reliable mechanism for combining textual and acoustic information to perform sentiment detection for generating content.
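As one illustrative sketch only, and not a description of any particular implementation, such a combination might take the form of a late fusion of a text-derived sentiment score with an acoustic-derived sentiment score in a single pass. All function names, lexicon entries, feature choices, and fusion weights below are hypothetical:

```python
"""Toy late-fusion sentiment detection: one pass over text and acoustic cues.

All names, lexicons, and weights are illustrative assumptions, not any
standard or patented method.
"""

# Tiny illustrative valence lexicons for the text modality.
POSITIVE = {"great", "good", "happy", "love"}
NEGATIVE = {"bad", "sad", "terrible", "hate"}


def text_score(transcript: str) -> float:
    """Lexicon-based text valence in [-1, 1]: (pos - neg) / word count."""
    words = transcript.lower().split()
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)


def fused_sentiment(
    transcript: str,
    acoustic_valence: float,
    w_text: float = 0.6,
    w_acoustic: float = 0.4,
) -> str:
    """Combine text valence with a precomputed acoustic valence in [-1, 1].

    acoustic_valence stands in for a score derived from prosodic features
    (e.g., energy or pitch variation); its extraction is out of scope here.
    """
    score = w_text * text_score(transcript) + w_acoustic * acoustic_valence
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Because both modalities are scored within one fused decision rather than in two separate pipelines, a single pass over the input suffices, which is the kind of consolidation the passage above suggests could reduce processing load and latency.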