Deep learning artificial intelligence (AI) models are trained using large amounts of data, e.g., terabytes or petabytes, which typically must be stored, moved, and managed for model generation, model verification, and later access as examples. This data movement and access takes time and consumes local and remote resources as well as communication bandwidth. Vector access and searching, and example access, can create processing, network, and data-access bottlenecks. These factors, among others, slow down AI processes and do not scale well with standard processing and data storage as data amounts, AI model sizes, and model depths increase, as is the present trend. Therefore, there is a need in the art for a solution which overcomes the drawbacks described above.