Researchers use pattern recognition technology to identify templates that may be represented by data samples, such as letters of an alphabet, words of a specific language, faces from a group of people, or fingerprints from a database. Certain pattern recognition technology may create or train a recognition model to determine the template of a given data sample. Training robust recognition models, however, may require large numbers of data samples to achieve a high recognition accuracy rate in identifying patterns. Typically, a recognition model increases in accuracy as more high-quality data samples are provided during training. Researchers from different organizations, however, currently collect their own data samples and may be unable to share them with one another, which may limit the overall accuracy of their recognition models. Furthermore, because each researcher provides their own recognition algorithms and uses their own data samples, comparing recognition models against one another may be infeasible due to the lack of commonly shared data sets.
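The notion of a recognition model assigning a template to a data sample can be illustrated with a minimal sketch. The example below is not any particular researcher's algorithm; it is a hypothetical nearest-neighbor recognizer over made-up 2-D feature vectors, where "training" simply stores labeled samples and recognition returns the template label of the closest stored sample.

```python
# Minimal illustrative sketch (hypothetical data and labels): a
# nearest-neighbor "recognition model" that assigns a new sample to the
# template of its closest training sample.
import math

def train(samples):
    # "Training" here is simply storing the labeled samples.
    return list(samples)

def recognize(model, x):
    # Return the template label of the stored sample nearest to x.
    label, _ = min(((lbl, math.dist(vec, x)) for vec, lbl in model),
                   key=lambda pair: pair[1])
    return label

# Hypothetical feature vectors for two templates, letters "A" and "B".
training_data = [((0.0, 0.1), "A"), ((0.2, 0.0), "A"),
                 ((1.0, 0.9), "B"), ((0.9, 1.1), "B")]
model = train(training_data)
print(recognize(model, (0.1, 0.05)))  # falls in the "A" cluster
print(recognize(model, (1.0, 1.0)))   # falls in the "B" cluster
```

Even this toy recognizer exhibits the dependence noted above: the more labeled samples stored per template, the finer the decision boundary it can represent.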
Most recognition algorithms are computationally complex and intensive. When trained on a large data set, a single training experiment on a single machine may therefore take up to several weeks to complete. As a result, the overall process of training a recognition model may be very expensive and time consuming.