Virtual machines, particularly those that attempt to capture an entire machine's state, are increasingly being used as vehicles for deploying software, providing predictability and centralized control. The virtual environment provides isolation from the uncontrolled variability of target machines, particularly from potentially conflicting versions of prerequisite software. Skilled personnel assemble a self-contained software universe (potentially including the operating system) with all of the dependencies of an application, or suite of applications, correctly resolved. They then have confidence that this software will exhibit the same behavior on every machine, since a virtual machine monitor (VMM) will be interposed between it and the real machine.
Virtualization (system and application) technology has been gaining widespread commercial acceptance in recent years. System virtualization allows multiple operating system (OS) stacks to share common hardware resources such as memory and CPU. System virtualization is generally implemented as a mediation layer that operates between the OS and the hardware. Application level virtualization technologies allow multiple application stacks to share a common OS namespace, such as files and registry entries. Application level virtualization is generally implemented as a mediation layer that operates between the application processes and the OS. With system virtualization, an OS stack can be given the illusion that required hardware resources are available, whereas in reality they may be shared by multiple OS stacks. With application virtualization, an application can be given the illusion that its files and registry entries are exactly where it expects them to be on the host machine, whereas in reality multiple application install images may be sharing the same locations in the namespace.
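By way of illustration, application-level namespace mediation can be sketched as a simple path-redirection layer. The following is a minimal, hypothetical sketch (the class name, method names, and directory layout are invented for illustration and do not correspond to any particular product): each container redirects the paths an application expects into a private backing directory, so that multiple install images can occupy the same apparent locations in the namespace.

```python
import os

class AppVirtualizer:
    """Hypothetical mediation layer: redirects the file paths an
    application expects into a private per-container directory, so
    multiple install images can share the same apparent namespace."""

    def __init__(self, container_root):
        self.container_root = container_root

    def redirect(self, virtual_path):
        # The application believes the file lives at virtual_path;
        # in reality it is stored under the container's private root.
        rel = virtual_path.lstrip("/\\")
        return os.path.join(self.container_root, rel)

    def open(self, virtual_path, mode="r"):
        # Interpose on open(): resolve to the real location first.
        real = self.redirect(virtual_path)
        if "w" in mode or "a" in mode:
            os.makedirs(os.path.dirname(real), exist_ok=True)
        return open(real, mode)
```

Two containers built from different install images would each construct their own `AppVirtualizer` with a distinct `container_root`, and both could "see" the same virtual path without conflict.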
General reference with regard to a virtual execution environment (one known as a Progressive Deployment System (PDS)) may be made to VEE '05, Jun. 11-12, 2005, Chicago, Ill., USA, “PDS: A Virtual Execution Environment for Software Deployment”, Bowen Alpern, Joshua Aurbach, Vasanth Bala, Thomas Frauenhofer, Todd Mummert, Michael Pigott.
The two types of virtualization technology (i.e., system and application) operate at different levels of the stack, and their value propositions are complementary. System virtualization enables encapsulation of the state of a complete OS and applications software stack within a virtual system container, while application virtualization enables encapsulation of the state of an application stack only within a virtual application container. Both types of virtualization allow their respective containers to be deployed and managed as an appliance, i.e., as a pre-installed and pre-tested environment within a secure region that is isolated from other stacks that share the same environment. This has significant commercial value from an IT management standpoint, since appliances provide greater robustness and security assurances than conventional install-based methods of deployment.
During software execution some files are required more frequently than other files, and there can exist “phase” changes in which multiple files are required in a short period of time. Application start-up is one particularly important phase change.
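Such phase changes can be detected from a time-stamped access trace. The following is a simplified heuristic sketch (the function name and thresholds are illustrative assumptions, not taken from any product): it groups (time, filename) accesses into bursts separated by idle gaps, and reports bursts that touch several distinct files as phases.

```python
def find_phases(trace, window=1.0, min_files=3):
    """Group a time-stamped access trace of (time, filename) pairs into
    'phases': bursts in which several distinct files are touched, with
    no gap between consecutive accesses longer than `window` seconds."""
    phases = []
    current = []
    for t, f in sorted(trace):
        # A long idle gap ends the current burst.
        if current and t - current[-1][0] > window:
            if len({name for _, name in current}) >= min_files:
                phases.append([name for _, name in current])
            current = []
        current.append((t, f))
    # Flush the final burst, if it qualifies as a phase.
    if current and len({name for _, name in current}) >= min_files:
        phases.append([name for _, name in current])
    return phases
```

Application start-up would typically appear as the first and largest such burst in the trace.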
Rotating data storage devices (e.g., magnetic disk) spin at a constant rate. This implies that those tracks farthest from the center of a disk can be read more quickly than those closer to the center. In addition, access to some disk sectors is faster than to others. It is known to exploit these characteristics by moving files observed to be accessed frequently to those disk sectors observed to be accessed quickly, thereby reducing disk latency and increasing the effective data transfer rate from disk.
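A greatly simplified sketch of this placement idea follows. It assumes, purely for illustration, that each disk zone holds one file and that zones can be ranked by relative transfer rate; a real implementation would have to account for zone capacities, file sizes, and the cost of moving data.

```python
def place_files(access_counts, zone_speeds):
    """Greedy placement sketch: assign the most frequently accessed
    files to the fastest disk zones (e.g., outer tracks), hottest
    file first.  access_counts maps filename -> observed access count;
    zone_speeds maps zone id -> relative transfer rate."""
    hot_first = sorted(access_counts, key=access_counts.get, reverse=True)
    fast_first = sorted(zone_speeds, key=zone_speeds.get, reverse=True)
    # Pair the hottest file with the fastest zone, and so on down.
    return dict(zip(hot_first, fast_first))
```

The greedy pairing is the essence of the technique: latency and transfer time are spent where accesses are concentrated, so moving the hot files to the fast zones maximizes the benefit.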
If the files required for a phase change are widely separated on disk, the seek time to move from one file to the next can be a significant factor in the total time required to effect the phase change. This effect can manifest as an application appearing to take an inordinately long time to start. Existing products attempt to eliminate such effects by placing files that are observed to be accessed within a short time window next to each other on disk, in the order in which they were accessed. The effectiveness of this technique is limited by the fact that such products are unable to distinguish accesses that are accidentally contemporaneous from those that are necessarily so.
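One conceivable way to separate accidental from necessary contemporaneity is to require that a pair of files be observed together across many runs before co-locating them. The sketch below is a hypothetical refinement, not a description of any existing product: over several (time, filename) traces, it counts the file pairs accessed within a short window of each other and keeps only those pairs seen in at least a threshold fraction of the runs.

```python
from collections import Counter

def stable_pairs(runs, window, threshold):
    """From multiple observed runs (each a list of (time, filename)
    pairs), keep only the file pairs that are contemporaneous -- accessed
    within `window` seconds of each other -- in at least `threshold`
    fraction of the runs, filtering out accidental co-occurrences."""
    pair_counts = Counter()
    for run in runs:
        seen = set()
        run = sorted(run)
        for i, (ti, fi) in enumerate(run):
            for tj, fj in run[i + 1:]:
                if tj - ti > window:
                    break  # trace is time-sorted; later gaps only grow
                if fi != fj:
                    seen.add(tuple(sorted((fi, fj))))
        pair_counts.update(seen)  # count each pair once per run
    needed = len(runs) * threshold
    return {pair for pair, count in pair_counts.items() if count >= needed}
```

Pairs that survive this filter are candidates for adjacent placement on disk; pairs that co-occurred in only one run are treated as coincidence.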
Windows™ has an API that allows user processes to move files on disk. Diskeeper's I-FAAST™ technology exploits this API to rearrange files on disk so as to avoid seek latencies between files observed to be accessed contemporaneously. This technology is said to have been developed specifically to accelerate file access times in order to meet the heavy workloads of file-intensive applications: it monitors file usage and reorganizes the files that are used most, so that users of applications such as CAD/CAM, database, and graphics- and video-intensive applications are said to experience an increase in speed and responsiveness.
U.S. Pat. No. 7,062,567, Intelligent Network Streaming and Execution System for Conventionally Coded Applications, states in col. 31, lines 4-10, that “frequently accessed files can be reordered in the directory to allow faster lookup of the file information. This optimization is useful for directories with large number of files. When the client machine looks up a frequently used file in a directory, it finds this file early in the directory search. In an application run with many directory queries, the performance gain is significant.” However, this technique is not disclosed to pertain to where the files are actually stored on disk.
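The quoted technique can be illustrated with a toy model of a linearly searched directory. The sketch is illustrative only (the function names and cost model are invented): it reorders directory entries by observed access frequency, which shortens lookups of frequently used files while leaving the on-disk placement of the file data itself unchanged.

```python
def reorder_directory(entries, access_counts):
    """Place frequently accessed names first in the directory listing,
    so a linear directory search finds them early.  Note: this changes
    only lookup order, not where file contents are stored on disk."""
    return sorted(entries, key=lambda e: access_counts.get(e, 0),
                  reverse=True)

def lookup_cost(entries, name):
    # Toy cost model: number of entries scanned before `name` is found.
    return entries.index(name) + 1
```

In a directory with many entries, moving a hot file from the end of the listing to the front reduces its lookup cost from a full scan to a single comparison, which is the performance gain the patent describes.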