In connection with the National Resident Matching Program (NRMP), which is an example of a state-of-the-art large-scale matching scheme, the hospital-residents problem (how to match hospitals to residents) is described as follows (emphasis added) at http://eprints.gla.ac.uk/15/1/hvsm.pdf:
“A generalisation of SM (Stable Marriage problem) occurs when the preference lists of those involved can be incomplete.
In this case, we say that person p is acceptable to person q if p appears on the preference list of q, and unacceptable otherwise. We use SMI to stand for this variant of SM where preference lists may be incomplete. A matching M in an instance I of SMI is a one-one correspondence between a subset of the men and a subset of the women, such that (m, w) ∈ M implies that each of m, w is acceptable to the other. The revised notion of stability may be defined as follows: M is stable if there is no (man, woman) pair (m, w), each of whom is either unmatched in M and finds the other acceptable, or prefers the other to his/her partner in M. (It follows from this definition that, from the point of view of finding stable matchings, it may be assumed, without loss of generality, that p is acceptable to q if and only if q is acceptable to p.) A stable matching in I need not be a complete matching. However, all stable matchings in I have the same size, and involve exactly the same men and exactly the same women [4]. It is a simple matter to extend the Gale/Shapley algorithm to cope with preference lists that may be incomplete (see [6, Section 1.4.2]).
“We shall refer to the classical many-one generalisation of the (one-one) problem SMI, which is relevant in a number of important applications, as the Hospitals/Residents problem (HR) [6, 22]. An instance I of HR involves a set of residents and a set of hospitals, each resident seeking a post at one hospital, and the i-th hospital . . . . Each resident strictly ranks a subset of the hospitals, and each hospital strictly ranks its applicants. A matching M in I is an assignment of each resident to at most one hospital so that, for each i, at most c_i residents are assigned to the i-th hospital. Matching M is stable if there is no (resident, hospital) pair (r, h) such that (i) r, h find each other acceptable, (ii) r is either unassigned or prefers h to his assigned hospital, and (iii) h either has an unfilled post or prefers r to at least one of the residents assigned to it. Again, the Gale/Shapley algorithm may be extended to find a stable matching for a given instance of HR.”
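The extended Gale/Shapley algorithm alluded to above can be sketched as follows. This is a minimal man-proposing sketch for SMI; the function name and the dictionary-based data layout are illustrative assumptions, not taken from the cited paper.

```python
# Hedged sketch: man-proposing deferred acceptance for SMI
# (stable marriage with possibly incomplete preference lists).

def stable_matching_smi(men_prefs, women_prefs):
    """men_prefs[m] / women_prefs[w]: acceptable partners, most preferred first.

    Returns a dict mapping each matched man to his partner; men and women
    who exhaust their lists remain unmatched, as the quoted text allows.
    """
    # Rank tables for O(1) preference comparisons on the women's side.
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}   # index into m's list
    partner = {}                                # woman -> current man
    free = list(men_prefs)
    while free:
        m = free.pop()
        while next_proposal[m] < len(men_prefs[m]):
            w = men_prefs[m][next_proposal[m]]
            next_proposal[m] += 1
            if m not in rank.get(w, {}):        # w finds m unacceptable
                continue
            if w not in partner:                # w is free: tentative match
                partner[w] = m
                break
            if rank[w][m] < rank[w][partner[w]]:
                free.append(partner[w])         # w trades up; old partner freed
                partner[w] = m
                break
        # A man who exhausts his list remains unmatched.
    return {m: w for w, m in partner.items()}
```

Because a woman only ever trades up, a man rejected by a woman can never later form a stable pair with her, which is why the resulting matching admits no blocking pair in the sense defined above.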
A conventional method for matching hospitals to residents is described on the NRMP website as follows:
“The process begins with an attempt to match an applicant to the program most preferred on that applicant's rank order list (ROL). If the applicant cannot be matched to that first choice program, an attempt is made to place the applicant into the second choice program, and so on, until the applicant obtains a tentative match or all the applicant's choices on the ROL have been exhausted.
A tentative match means a program on the applicant's ROL also ranked that applicant and either:
- the program has an unfilled position, in which case there is room in the program to make a tentative match between the applicant and program, or
- the program does not have an unfilled position, but the applicant is more preferred by the program than another applicant who already is tentatively matched to the program. In that case, the applicant who is less preferred by the program is removed to make room for a tentative match with the more preferred applicant.
Matches are “tentative” because an applicant who is matched to a program may be removed from that program to make room for an applicant more preferred by the program. When an applicant is removed from a tentative match, an attempt is made to re-match that applicant, starting from the top of the applicant's ROL. This process is carried out for all applicants until each applicant has either been tentatively matched to the most preferred choice possible or all choices submitted by the applicant have been exhausted.
When the Match is complete, all tentative matches become final.”
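The tentative-matching process quoted above is, in effect, applicant-proposing deferred acceptance with hospital capacities. A minimal sketch follows, with illustrative names throughout. One simplification is noted in the comments: a removed applicant continues from the next choice on the ROL rather than re-starting from the top, which yields the same final matching, because a program that previously rejected an applicant would reject that applicant again.

```python
# Hedged sketch of the tentative-matching process described above
# (resident-oriented Gale/Shapley for Hospitals/Residents).

def match_hr(applicant_rols, hospital_prefs, capacity):
    """applicant_rols[a]: a's ROL of programs, most preferred first.
    hospital_prefs[h]: h's ranking of applicants, most preferred first.
    capacity[h]: number of posts at h.
    Returns hospital -> list of tentatively matched applicants (final
    once every applicant is placed or has exhausted the ROL).
    """
    rank = {h: {a: i for i, a in enumerate(prefs)}
            for h, prefs in hospital_prefs.items()}
    next_choice = {a: 0 for a in applicant_rols}
    assigned = {h: [] for h in hospital_prefs}
    unmatched = list(applicant_rols)
    while unmatched:
        a = unmatched.pop()
        while next_choice[a] < len(applicant_rols[a]):
            h = applicant_rols[a][next_choice[a]]
            next_choice[a] += 1
            if a not in rank.get(h, {}):    # h did not rank a: no match possible
                continue
            if len(assigned[h]) < capacity[h]:
                assigned[h].append(a)       # unfilled post: tentative match
                break
            # Program is full: bump its least preferred tentative match
            # if this applicant is more preferred by the program.
            worst = max(assigned[h], key=lambda x: rank[h][x])
            if rank[h][a] < rank[h][worst]:
                assigned[h].remove(worst)
                assigned[h].append(a)
                unmatched.append(worst)     # removed applicant is re-matched
                break
        # An applicant whose ROL is exhausted remains unmatched.
    return assigned
```

A removed applicant re-enters the `unmatched` pool, mirroring the re-matching step in the quoted description; the loop terminates because each pass consumes at least one entry of some applicant's ROL.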
A greedy algorithm is a computerized functionality which employs the problem-solving heuristic of making, at each stage, a locally optimal choice rather than determining and making the globally optimal choice.
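As an illustration of this distinction, the following hypothetical coin-change routine (not part of the matching schemes above) takes the largest coin that fits at each stage; for the denomination set {1, 3, 4} and amount 6, the locally optimal choices yield three coins, whereas the globally optimal solution uses only two (3 + 3):

```python
# A greedy heuristic in miniature: at each stage take the locally optimal
# choice (largest usable coin) without considering whether the sequence of
# such choices is globally optimal (fewest coins overall).

def greedy_coin_change(denominations, amount):
    """Return the coins chosen greedily, largest denomination first."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins

# For denominations {1, 3, 4} and amount 6, the greedy first choice of 4
# forces [4, 1, 1] (three coins), while [3, 3] (two coins) is optimal.
```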
State of the art related technologies are described inter alia in:
1. Papadimitriou, Christos; Raghavan, Prabhakar; Tamaki, Hisao; Vempala, Santosh (1998). “Latent Semantic Indexing: A probabilistic analysis” (Postscript). Proceedings of ACM PODS. http://www.cs.berkeley.edu/~christos/ir.ps.
2. Hofmann, Thomas (1999). “Probabilistic Latent Semantic Indexing” (PDF). Proceedings of the Twenty-Second Annual International SIGIR Conference on Research and Development in Information Retrieval. http://www.cs.brown.edu/~th/papers/Hofmann-SIGIR99.pdf.
3. Blei, David M.; Ng, Andrew Y.; Jordan, Michael I.; Lafferty, John (January 2003). “Latent Dirichlet allocation”. Journal of Machine Learning Research 3: 993-1022. doi:10.1162/jmlr.2003.3.4-5.993. http://jmlr.csail.mit.edu/papers/v3/blei03a.html.
4. Blei, David M. (April 2012). “Introduction to Probabilistic Topic Models” (PDF). Comm. ACM 55 (4): 77-84. doi:10.1145/2133806.2133826. http://www.cs.princeton.edu/~blei/papers/Blei2011.pdf.
5. Arora, Sanjeev; Ge, Rong; Moitra, Ankur (April 2012). “Learning Topic Models—Going beyond SVD”. arXiv:1204.1956.
6. Girolami, Mark; Kaban, A. (2003). “On an Equivalence between PLSI and LDA”. Proceedings of SIGIR 2003. New York: Association for Computing Machinery. ISBN 1-58113-646-3.
7. Griffiths, Thomas L.; Steyvers, Mark (Apr. 6, 2004). “Finding scientific topics”. Proceedings of the National Academy of Sciences 101 (Suppl. 1): 5228-5235. doi:10.1073/pnas.0307752101. PMC 387300. PMID 14872004.
8. Minka, Thomas; Lafferty, John (2002). “Expectation-propagation for the generative aspect model”. Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence. San Francisco, Calif.: Morgan Kaufmann. ISBN 1-55860-897-4.
9. Blei, David M.; Lafferty, John D. (2006). “Correlated topic models”. Advances in Neural Information Processing Systems 18.
10. Blei, David M.; Jordan, Michael I.; Griffiths, Thomas L.; Tenenbaum, Joshua B. (2004). “Hierarchical Topic Models and the Nested Chinese Restaurant Process”. Advances in Neural Information Processing Systems 16: Proceedings of the 2003 Conference. MIT Press. ISBN 0-262-20152-6.
11. Quercia, Daniele; Askham, Harry; Crowcroft, Jon (2012). “TweetLDA: Supervised Topic Classification and Link Prediction in Twitter”. ACM WebSci.
12. Li, Fei-Fei; Perona, Pietro. “A Bayesian Hierarchical Model for Learning Natural Scene Categories”. Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) 2: 524-531.
13. Wang, Xiaogang; Grimson, Eric (2007). “Spatial Latent Dirichlet Allocation”. Proceedings of Neural Information Processing Systems Conference (NIPS).
Topic modeling (Wikipedia): In machine learning and natural language processing, a topic model is a type of statistical model for discovering the abstract “topics” that occur in a collection of documents. An early topic model was described by Papadimitriou, Raghavan, Tamaki and Vempala in 1998. [1] Another, called probabilistic latent semantic indexing (PLSI), was created by Thomas Hofmann in 1999. [2] Latent Dirichlet allocation (LDA), perhaps the most common topic model currently in use, is a generalization of PLSI developed by David Blei, Andrew Ng, and Michael Jordan in 2002, allowing documents to have a mixture of topics. [3] Other topic models are generally extensions of LDA, such as Pachinko allocation, which improves on LDA by modeling correlations between topics in addition to the word correlations which constitute topics. Although topic models were first described and implemented in the context of natural language processing, they have applications in other fields such as bioinformatics.
Topics in LDA (Wikipedia): In LDA, each document may be viewed as a mixture of various topics. This is similar to probabilistic latent semantic analysis (pLSA), except that in LDA the topic distribution is assumed to have a Dirichlet prior.
In practice, this results in more reasonable mixtures of topics in a document. It has been noted, however, that the pLSA model is equivalent to the LDA model under a uniform Dirichlet prior distribution. [12]
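The generative view of LDA described above can be sketched as follows. The topic count, topic-word distributions, and hyperparameter values below are illustrative assumptions, and the Dirichlet draw is implemented with normalized Gamma samples from the Python standard library rather than any particular LDA package:

```python
# Hedged sketch of LDA's generative process: each document draws a topic
# mixture from a Dirichlet prior; each word is drawn from the word
# distribution of a topic sampled from that mixture.
import random

def sample_dirichlet(alphas, rng):
    """Dirichlet sample via normalized Gamma draws (stdlib only)."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

def choose(weights, items, rng):
    """Sample one item according to the given probability weights."""
    return rng.choices(items, weights=weights, k=1)[0]

def generate_document(n_words, topic_word, alpha, rng):
    """topic_word[k] is topic k's distribution over the vocabulary
    (word identifiers 0..V-1); alpha is the symmetric Dirichlet prior."""
    n_topics = len(topic_word)
    vocab = list(range(len(topic_word[0])))
    theta = sample_dirichlet([alpha] * n_topics, rng)    # per-doc topic mix
    words = []
    for _ in range(n_words):
        k = choose(theta, list(range(n_topics)), rng)    # pick a topic
        words.append(choose(topic_word[k], vocab, rng))  # pick a word from it
    return words
```

A small alpha concentrates each document on few topics; in the limit of a uniform prior the model coincides with pLSA, consistent with the equivalence noted above.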
The disclosures of all publications and patent documents mentioned in the specification, and of the publications and patent documents cited therein directly or indirectly, are hereby incorporated by reference. Materiality of such publications and patent documents to patentability is not conceded.