As the Internet becomes more widespread, the amount of accessible data increases as well. However, access to the data does not necessarily mean that an Internet user can actually find it. The sheer quantity of data available often overwhelms users, who resort to search engines to locate desired information. Search engines were initially a suitable means of finding data, but as the quantity grew, the mechanisms that located the data (e.g., search crawlers) began to examine only a portion of the information on a Web page to determine its relevancy, owing to the massive quantity of data they must review. Web page owners therefore inflated their page traffic by including even non-relevant terms in the metadata that a search crawler would encounter. The search crawlers in turn became smarter, and some can now review information in context to determine its true relevancy to a particular topic.
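The gaming of naive relevance scoring described above can be sketched as follows. This is a hypothetical illustration, not any real search engine's algorithm: a crawler that simply counts query-term occurrences in a page's metadata and body can be misled by a page that stuffs popular but non-relevant terms into its metadata.

```python
# Hypothetical sketch of naive term-counting relevance scoring.
# A page whose metadata is stuffed with query terms can outscore
# a page that is genuinely about the topic.

def naive_relevance(query_terms, meta_keywords, body_text):
    """Score a page by counting query terms in its metadata and body."""
    haystack = (meta_keywords + " " + body_text).lower().split()
    return sum(haystack.count(term.lower()) for term in query_terms)

honest_page = {
    "meta": "gardening tools soil",
    "body": "A guide to choosing gardening tools for clay soil",
}
# This page is about something else entirely, but repeats
# popular, non-relevant terms in its metadata.
stuffed_page = {
    "meta": "gardening gardening gardening soil soil tools",
    "body": "Buy cheap watches online",
}

query = ["gardening", "soil"]
print(naive_relevance(query, honest_page["meta"], honest_page["body"]))
print(naive_relevance(query, stuffed_page["meta"], stuffed_page["body"]))
# The stuffed page scores higher despite being irrelevant.
```

Context-aware crawlers counter this by weighing where and how terms appear, rather than trusting raw counts in metadata.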
Unfortunately, Web pages themselves have grown more complex over time and now challenge even the smartest search crawlers. The use of scripting and other automated means generally leaves the average search crawler misinterpreting, or missing entirely, the information on some Web pages. This is because a search crawler typically examines only textual data when indexing a Web page. Other data artifacts on the page are largely ignored and lost to the general user who wishes to find such information. This is unfortunate, because as the Internet evolves it will employ increasingly complex data formats to condense large quantities of data even further. Thus, while a Web page will be able to offer more information, that very complexity may reduce its availability and accessibility to users.
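Why text-only indexing misses script-driven content can be shown with a minimal sketch (this is an illustrative toy, not any particular crawler's code): a parser that keeps only visible text discards the bodies of `<script>` elements, so content a browser would render via JavaScript never reaches the index.

```python
# Toy text-only indexer: collects visible text and skips <script>
# bodies, mimicking how a simple crawler loses script-generated content.
from html.parser import HTMLParser

class TextOnlyIndexer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        # Index only text outside <script> elements.
        if not self.in_script:
            self.words.extend(data.split())

page = """
<html><body>
  <p>Visible introduction.</p>
  <script>document.write('Dynamically inserted details.');</script>
</body></html>
"""

indexer = TextOnlyIndexer()
indexer.feed(page)
print(indexer.words)  # the script-inserted text is absent from the index
```

A user searching for the dynamically inserted details would never find this page through such an index, which is exactly the loss the paragraph above describes.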