To maximize the usability and benefit of the features of a computer-implemented entity, experiments are configured to determine the best features and feature characteristics to provide to users of the computer-implemented entity. These experiments involve a set of alternatives for each of one or more features of the computer-implemented entity. The alternatives are presented to users, and the users' behavior with respect to the alternatives is collected and analyzed to determine the alternative(s) that maximize the usability and productivity of the computer-implemented entity.
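The experiment flow described above can be sketched as a minimal A/B-style test: users are deterministically bucketed into alternatives, behavioral metrics are collected per alternative, and the best-performing alternative is selected. This is an illustrative sketch only; the function names (`assign_alternative`, `select_best`) and the engagement metric are assumptions, not part of any described system.

```python
import hashlib

def assign_alternative(user_id: str, alternatives: list, experiment: str = "exp1"):
    """Deterministically bucket a user into one alternative by hashing
    the experiment name and user identifier (illustrative scheme)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return alternatives[int(digest, 16) % len(alternatives)]

def select_best(metrics: dict) -> str:
    """Pick the alternative whose user group showed the highest mean
    engagement score (a stand-in for any behavioral metric)."""
    return max(metrics, key=lambda alt: sum(metrics[alt]) / len(metrics[alt]))

# Hypothetical results: engagement scores observed for each user group.
metrics = {
    "label_A": [0.10, 0.12, 0.11],
    "label_B": [0.15, 0.14, 0.16],
}
best = select_best(metrics)
```

Deterministic hashing ensures each user consistently sees the same alternative across sessions, which keeps the collected behavioral data attributable to a single alternative per user.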
Currently, the experiment process is performed manually. This manual process relies on human intuition when grouping users and selecting the alternative features for testing. Furthermore, because the alternative features are selected and analyzed manually, it becomes increasingly difficult to select and analyze experiments whose alternative sets test joint features (e.g., adjusting both the content distribution and the UI elements provided to a user). Additionally, it is difficult to adjust experiment elements during the experiment, as the selection of alternative features can only be adjusted manually, after the results of the user groups' interactions with their respective alternative sets have been reviewed.