Conventional data storage architectures have leveraged technological advances in general purpose microprocessors to converge the storage control plane and data plane functions onto a single general purpose microprocessor. Such integrated architectures are problematic for several reasons. For example, general purpose microprocessor solutions may provide sub-optimal performance when executing certain data plane functions. Moreover, integrated control and data plane architectures require tight coupling between media (e.g., disk and FLASH) and compute elements. This tight coupling fixes the compute-to-media ratio and makes scaling difficult, a limitation that manifests itself as reduced system efficiency across widely varying workloads.

Furthermore, current storage architectures are difficult to scale out. Indeed, system scale out requires sharing of system information amongst system elements. As the amount of shared information can increase exponentially, the amount of available system resources (memory bandwidth, CPU cycles, I/O bandwidth, etc.) can be readily exceeded. Moreover, future workloads are difficult to predict, especially in hyperscale, distributed computing environments in which the demand for certain types of workloads (as well as the volume of data) can increase exponentially.
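As a minimal, hypothetical sketch of the scale-out problem described above (the function name and the all-to-all sharing assumption are illustrative, not from the source): if every system element must share state with every other element, the number of pairwise sharing relationships grows quadratically with element count, quickly consuming fixed resources such as memory bandwidth and I/O bandwidth.

```python
# Illustrative assumption: all-to-all state sharing among system elements.
def sharing_links(num_elements: int) -> int:
    """Number of pairwise state-sharing relationships among elements."""
    return num_elements * (num_elements - 1) // 2

# Sharing relationships grow much faster than element count:
for n in (4, 16, 64, 256):
    print(f"{n:>4} elements -> {sharing_links(n):>6} sharing links")
# 4 -> 6, 16 -> 120, 64 -> 2016, 256 -> 32640
```

Doubling the number of elements roughly quadruples the sharing overhead, which is why adding elements to such a system yields diminishing returns.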