Modern computing often uses a large number of machine learned models, such as random forest models, regression models, neural networks, k-means clustering models, and the like. However, when a company or other group builds a large number of such machine learned models, it becomes difficult and costly to maintain and store the models in a meaningful fashion. Indeed, conventionally, many machine learned models are lost on company-wide systems due to an inability to effectively find and use models whose creators have left the company or moved to other projects.
Moreover, machine learned models come in many different types. Accordingly, it is difficult to index these different models in comparable ways. One technique is to index the hyperparameters of the models, but different types of models have different types of hyperparameters, which are not always comparable. For example, a neural network has a number of layers and a number of nodes, while a decision tree has a number of leaves. Embodiments of the present disclosure may solve these technical problems.
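The incomparability described above can be sketched concretely. In the following illustration, the hyperparameter names and values are hypothetical examples (not drawn from any particular library): a neural network and a decision tree expose disjoint hyperparameter sets, so there is no common key on which a single index could compare them.

```python
# Hypothetical hyperparameter sets for two different model types.
nn_hparams = {"num_layers": 4, "nodes_per_layer": 128, "learning_rate": 1e-3}
tree_hparams = {"num_leaves": 31, "max_depth": 6, "min_samples_leaf": 5}

# Keys shared by both model types -- the only candidates for a common index.
shared_keys = set(nn_hparams) & set(tree_hparams)
print(shared_keys)  # empty set: no hyperparameter is common to both models
```

Because the intersection is empty, an index built directly on hyperparameter names cannot place these two models on any shared axis, which is the technical problem the present disclosure addresses.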