Managing the flow of work in a large, distributed computing system requires a high degree of coordination. A typical example is web crawling, which involves crawling a large number of uniform resource locators (URLs) on the Internet to extract information from the associated web pages. Absent a software process to coordinate crawling of the URLs, web crawling using parallel processes may not be performed as efficiently as possible. For example, a web crawling process can become delayed when the host computer associated with a URL to be crawled is unavailable, since the process may wait for the host to come online rather than crawl other URLs. Descriptions of tasks to be performed in parallel, such as by web crawling processes, are commonly stored in file systems or databases. Access to file systems and databases by parallel processes can become a bottleneck as the number of processes increases, however.
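The delay described above can be illustrated with a minimal sketch of a parallel crawler that isolates failures per URL rather than blocking the whole crawl. The `fetch` function, the example URLs, and the use of a `TimeoutError` to model an unavailable host are all assumptions made for illustration, not part of any particular crawling system.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(url):
    """Hypothetical stand-in for an HTTP fetch: raises TimeoutError
    when the URL's host is modeled as unavailable."""
    if "unavailable" in url:
        raise TimeoutError(f"host for {url} is down")
    return f"<html>content of {url}</html>"

def crawl_all(urls, max_workers=4):
    """Crawl URLs in parallel. An unavailable host delays only the
    worker assigned to it; other URLs continue to be crawled, and
    failed URLs are collected separately instead of stalling the run."""
    results, failed = {}, []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch, u): u for u in urls}
        for fut in as_completed(futures):
            url = futures[fut]
            try:
                results[url] = fut.result()
            except TimeoutError:
                failed.append(url)
    return results, failed

urls = ["http://a.example", "http://unavailable.example", "http://b.example"]
results, failed = crawl_all(urls)
```

In this sketch the unreachable host costs only one worker's attempt; with a single sequential process, the same failure would delay every URL queued behind it.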