For organizations with sizable IT assets, deploying new software or updating existing software to a newer version is an overwhelming task because the IT assets very often have different hardware/software configurations, and it may be simply impossible to predict how differently configured assets will react to the deployment. Some organizations have relied on a lengthy full test pass prior to deployment. The full test pass model, however, cannot scale to faster release cadences. Hence, some organizations have dealt with this issue by applying rules of thumb to complete the pre-deployment tests in a shorter period of time. For example, some IT professionals may manually select assets for pre-deployment pilot testing based on personal relationships with, or the availability of, the users. Other IT professionals may randomly select a certain percentage of the total assets, or a certain number of the assets, based on industry word of mouth that has no scientific basis.

These approaches, however, cause many problems. For example, in the software industry, one of the biggest challenges is the compatibility of new software bits with existing assets (e.g., operating systems, applications, versions, languages, add-ons, device makers/models, drivers, etc.). By selecting test sample assets manually or randomly, not all of the characteristic variations of the IT assets can be covered, and some characteristics of the assets may not be tested at all. There have been more elaborate approaches that determine a sample size based on a population size, but such approaches have proven unsuitable for the IT industry. Accordingly, there still remain significant areas for new and improved methods for more accurate and reliable IT asset sampling and selection for pre-deployment pilot tests.
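The population-based sample-size approach mentioned above is commonly embodied by Cochran's formula with a finite population correction; the following is a minimal sketch of that conventional calculation (the function name, parameter defaults, and example fleet size are illustrative assumptions, not part of the original text):

```python
import math

def sample_size(population: int, confidence_z: float = 1.96,
                margin_of_error: float = 0.05, proportion: float = 0.5) -> int:
    """Cochran's sample size with finite population correction.

    population      -- total number of IT assets (N)
    confidence_z    -- z-score for the confidence level (1.96 ~= 95%)
    margin_of_error -- acceptable error (e.g., 0.05 for +/-5%)
    proportion      -- expected proportion; 0.5 gives the largest sample
    """
    # Unadjusted sample size for an effectively infinite population
    n0 = (confidence_z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    # Correct for the finite population size
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# A fleet of 10,000 assets at 95% confidence and +/-5% error
print(sample_size(10_000))  # ~370 assets
```

Note that the resulting sample size plateaus (around 385 here) regardless of how large the fleet grows, and the formula is blind to configuration attributes entirely, which illustrates why such a population-only calculation cannot guarantee coverage of the combinatorial configuration diversity described above.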