The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.
Computing devices are often connected to peripherals to provide storage, networking, processing, or other computing capabilities. Interconnects or data buses of host devices that enable peripheral connection, however, are often limited in physical or logical access, such as by the number of physical port connections or the size of the logical address space, respectively. To address these limitations, some peripherals expand connectivity of the host device by replicating ports or logical address space through which additional peripherals can connect to the host device.
Access to resources of the host device through the replicated ports or logical address space, however, typically conflicts with access to internal resources of the peripheral providing the expanded connectivity. For example, a storage controller providing connectivity to a host device may include a large amount of internal cache memory that is not mapped to a memory space of the host device. Exposing this internal cache memory to downstream peripheral devices creates a blind spot in the host device's memory space for any downstream peripheral device attempting to communicate with the host device. In other words, other peripheral devices connected to the host device through the storage controller cannot directly access (e.g., see into) a range of the host device's memory space if that range is mapped to the internal cache memory of the storage controller.
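The blind spot described above can be illustrated with a toy address decoder; all names and address ranges below are hypothetical values chosen only for illustration, not any particular controller's memory map:

```python
# Toy model of the blind spot: a storage controller decodes addresses from
# downstream peripherals, and any address falling in the range claimed by
# its internal cache memory never reaches the host, even though the host
# has real memory behind that range. All ranges are illustrative only.

HOST_MEMORY = range(0x0000_0000, 0x4000_0000)       # host memory space
CONTROLLER_CACHE = range(0x1000_0000, 0x2000_0000)  # claimed by controller

def decode(addr: int) -> str:
    """Return which resource a downstream peripheral's access reaches."""
    if addr in CONTROLLER_CACHE:
        # The controller's cache shadows this range: a blind spot for
        # downstream peripherals trying to reach host memory here.
        return "controller-cache"
    if addr in HOST_MEMORY:
        return "host-memory"
    return "unmapped"

print(decode(0x0800_0000))  # host memory is reachable here
print(decode(0x1800_0000))  # blind spot: access lands in controller cache
```

The sketch shows only the decode decision; in practice the shadowed range is what forces the windowing described next.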
As such, attempts by other peripherals to access host device resources behind a blind spot are typically routed via a series of address translations that create windows into the host device's memory space behind the blind spot. While the use of these windows may permit access to an entire memory space of the host device, the windowing process introduces latency and processing overhead because transaction requests and associated data are cached at the intermediate peripheral (e.g., storage controller) while the address translations are set up to create a window. These latency issues are further compounded when concurrent transactions require address translation and windowing, as a subsequent transaction may be forced to wait for previous transactions to complete and release particular ranges of translated addresses (e.g., windows) before subsequent address translations can be initiated.
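The serialization cost of windowing can be sketched as follows; the single-window design and the fixed window size are assumptions made for illustration, not a description of any actual controller:

```python
# Sketch of windowed address translation: a transaction targeting a host
# address behind the blind spot must first claim a translation window, and
# a concurrent transaction targeting a different range must wait until the
# window is released. Window count and size are illustrative assumptions.

WINDOW_SIZE = 0x1000  # one hypothetical 4 KiB translation window

class WindowedBridge:
    def __init__(self):
        self.window_base = None  # host address currently windowed, or None

    def translate(self, host_addr: int) -> int:
        """Map a host address through the window, programming it if free."""
        base = host_addr & ~(WINDOW_SIZE - 1)
        if self.window_base is not None and self.window_base != base:
            # Window busy with another range: the new transaction stalls
            # until the previous one completes and releases the window.
            raise RuntimeError("window busy; transaction must wait")
        self.window_base = base   # program the translation (added latency)
        return host_addr - base   # offset within the window aperture

    def release(self):
        self.window_base = None

bridge = WindowedBridge()
offset = bridge.translate(0x1234_5678)  # first transaction claims the window
bridge.release()                        # only now can a different range map
```

The `RuntimeError` stands in for the stall: a real bridge would queue or cache the second transaction, which is the latency and overhead the passage above describes.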
In some cases, downstream peripherals are allowed to access host device resources by caching all data received from the downstream peripheral devices to the internal cache memory of the intermediate peripheral. In such cases, the aforementioned address translation and latency issues apply to all transactions, even those associated with addresses that are not behind a blind spot in the host device's memory space.