The modern Internet is built on the back of search. The dynamic, decentralized nature of online content creation means that there is a virtually infinite and ever-growing amount of information available. However, this information has no value in the absence of at least a moderately effective mechanism for identifying material that is likely to be relevant in a particular context. Moreover, such a mechanism must provide a technological solution, since information management solutions from the pre-Internet world (e.g., relying on subject matter experts to categorize new content into a predefined ontology) are ill-equipped to address the dynamism and decentralization that are the hallmarks of online content creation. Thus far, the only mechanism that has proved viable has been search. Indeed, even social media platforms such as Facebook and Twitter can be considered search companies, as the task of identifying relevant content with which to populate a user's feed is fundamentally similar to the task of identifying relevant content with which to populate a traditional search results page.
Despite the foundational role of search, the technology has many flaws. For example, the very efficacy of search has made search algorithms effective gatekeepers for information, giving people or organizations who wish to manipulate opinion for financial or other gain an incentive to actively subvert those algorithms (e.g., through link farming or fake news). Similarly, because search technology is expected to identify relevant information from as broad a pool of candidates as possible, the items available as search results may include errors that decrease the accuracy, or the perceived accuracy, of the results ultimately provided to users. Accordingly, there exists a need for improvements that address one or more of these flaws in currently used search technology.