The present invention relates to a new and useful hardware independent disk imaging method that is particularly useful in Windows PC and other operating system environments.
Disk imaging was created a number of years ago as a way to speed up computer deployment. It replaced the older practice of inserting a CD-ROM, or other media, and loading a computer “operating system” (e.g. Windows, Linux, Unix, etc.) through a manual file copy, which is what the manufacturer's setup process in fact did.
Imaging took a completely set up computer, inserted a boot floppy or some other boot media, and captured the fully configured hard drive of data (consisting of the operating system, applications, hardware device drivers, user data and any configuration of the preceding). “Disk imaging” or “imaging” emerged circa Windows 95 or Windows 98. A system with all drivers and applications loaded would be booted to a “pre-OS” (typically Microsoft DOS or some other small, lightweight operating system on a boot floppy that carried a primitive networking client and an imaging agent), and a snapshot of that drive would be taken.
Upon being booted, the imaging agent followed the user's instructions to capture an image, that is, to read the hard drive that presumably held an operating system along with hardware device drivers, applications and whatever configurations were intended to be captured. It would then use its networking client to copy the complete content of the drive up to the server. However, it did not perform this capture file-by-file; rather, it looked at the data as it was laid out on the hard drive without bothering to read or understand what the data was, where files started and stopped, or what types of files they were, all of which are concerns when copying data file-by-file. The agent instead simply read the mass of data on the hard drive and stored the nature and arrangement of that data in a single file. Thus, a user's computer would be captured in a very large single file, and it could be done quite efficiently compared with the previous methodology of a file-by-file copy operation (e.g. a file backup). By analogy, if a file copy is like taking a newspaper and retyping it word-for-word, disk imaging is like making a photocopy of the newspaper: the exact text, its arrangement and its relations are reproduced without caring what the words say. The photocopy machine does not read the text; it does not care what the words mean or where they start and stop; all that matters is that it can recreate them. That is an apt analogy to disk imaging. It takes all the data, copies it (typically across the network) into a large single file, and that file can later be restored to the same or another computer by the same process run in the reverse direction.
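The block-level capture described above can be sketched as follows. This is a minimal, hypothetical illustration, not any vendor's actual agent: a real imaging agent reads the raw block device (e.g. \\.\PhysicalDrive0 on Windows), whereas here an ordinary file stands in for the disk so the sketch is runnable.

```python
CHUNK = 512  # classic disk sector size

def capture_image(device_path, image_path, chunk=CHUNK):
    """Copy raw bytes into one file, ignoring files and filesystem
    structure entirely -- the 'photocopy' rather than the 'retype'."""
    with open(device_path, "rb") as dev, open(image_path, "wb") as img:
        while True:
            block = dev.read(chunk)
            if not block:
                break
            img.write(block)  # byte-for-byte, arrangement preserved

def restore_image(image_path, device_path, chunk=CHUNK):
    """Restoration is the same process run in the reverse direction."""
    capture_image(image_path, device_path, chunk)
```

Because the agent never interprets the data, the same loop handles any operating system or filesystem on the source drive.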
In a scenario where a user wanted to protect against a crashed hard drive, or wanted to install a larger hard drive and move all of the user's data, a disk image or snapshot would be taken with any of a number of available imaging tools.
NOTE: There were a number of imaging tools when disk imaging was invented, but Ghost was one of the more popular ones; in fact, some refer to disk imaging as “ghosting” a hard drive. Altiris also had an imaging agent called Rapid Deploy which, in its early days, acted and behaved similarly to the Ghost tool. Later, Altiris integrated the Rapid Deploy tool into an integrated client/server platform that allowed administrators to control imaging and other aspects of PC deployment from a centralized server console.
From a process advantage standpoint, disk imaging, or hard drive imaging, took the act of deploying a computer, which typically required 2-3 hours to run the install, set up the applications and move the data, and made redeploying a cloned copy of a computer a matter of a half hour or less. The process was totally hands-off, because the imaging agent was dumping data down to a disk rather than asking the configuration questions typical of a traditional media-based operating system setup.
Altiris made the process better with its tool called Deployment Server. Deployment Server added a central server console for disk imaging and other deployment tasks, so that a disk image could be laid down from a centralized console. No longer did someone have to go to a system and boot it with a boot floppy. With Altiris, a system could be configured upon boot to look to a network resource, or a virtual boot floppy could be stored on the computer's hard drive, acting just like a floppy inserted into the drive, and the system could be booted to a pre-OS. The same types of pre-OS seen in the early days of imaging are usable in the Altiris Deployment Server world: DOS, Linux, and, newly arriving upon the scene, Windows PE, a thinned-down version of Windows XP. So instead of the 16-bit DOS world, it became possible to operate in 32 bits, with much faster, newer code that better exploits the hardware. And instead of one person loading a floppy at each machine, the Altiris console could display a number of machines; in a scenario such as a school computer lab with 25 machines, an Altiris administrator could go to the DS console, designate those 25 machines, and drag and drop an image job onto them, and the image would roll out to the different machines. The Altiris system stays in constant communication with, and control of, its managed machines, and has the ability to tell them to boot down from Windows into a prescribed pre-OS and accept a certain disk image.
So, imaging worked well. It was and remains an efficient means to deploy computers, but there is a problem. An image of a computer is just that: an exact and complete copy of that computer's data and data arrangement, without respect to the proprietary nature of the hardware from which it came. With a disk image, one can take an image off a template computer with the intent of deploying it to numerous business systems, and this is in fact how companies currently standardize their computer deployment. It is a staple procedure, but it has limitations: the data from the PC from which the image was taken includes all the hardware device drivers and other operating system files that are proprietary to the exact make, model and hardware configuration of that source computer. This gives rise to an issue in that, whenever the target hardware is sufficiently different from the template machine, the image often fails upon boot because the device drivers are not proper for the target system. An image taken from a desktop often would not work on other models . . . or sometimes even on a like-model system that had hardware differences introduced by the OEM or by corporate staff during or after deployment. What matters is that unaltered disk images will only work on PCs that are very similar in hardware build to the image source machine, and that even within a model group differences can occur such that images are sometimes not usable within a PC model group.
As a partial solution to this issue, Microsoft invented a technology called Sysprep. Run on a system just prior to capture for imaging, Sysprep attempted to make the image more generic by removing references in the Windows system to hardware proprietary to the image source system. This kept the image, upon boot, from trying to load specific drivers, but it DID NOT provide the correct drivers for the system targeted to receive the image. The method companies currently use is to catalog all the different models of all the different systems deployed in their environment; get those systems into a lab or test environment; obtain the device drivers from various Internet sources (which can be a significantly labor-intensive process, particularly extracting the device drivers in a format compatible with Windows Plug and Play loading); copy all the “driver packs” to the system about to be captured as an image; and finally, run Microsoft Sysprep on it. It should be noted that the step of loading the driver packs into the image can inflate the disk image to twice its nominal size or even more.
It is not uncommon to support 20 or more models, or to have 2 or 3 GB of drivers on the system in order to support those different models. Again, it must be emphasized that this often doubles the size of the image, which means that every time the image is transmitted to a system for deployment, most of the data may in fact be wasted: it consists of device drivers that do not apply to that system but had to be there just in case. So Altiris developed a pair of technologies called FIRM Injection and tokens. FIRM provides a way to copy files across the network into an image. Tokens allow variables in scripts; Altiris Deployment Server evaluates those variables against data in its management database associated with the specific PC system being deployed at any given moment, and substitutes values specific to that PC. So, instead of carrying a bloated image with all the drivers inside it, it is possible to load the device drivers onto the server, use Altiris and FIRM to lay down the image on a target system, and, before allowing it to boot, run a specialized script that copies administrator-provided driver packs down into the image. The tokens allow the strategic placement of a key variable (usually the PC model number) in a file copy path so that the copy selects the proper driver pack from the server (this assumes that the administrator shared PC driver packs on the server, arranged in folders that have the PC model (or other relevant token) as part of the path):
FIRM COPY \\DeploymentServer\eXpress\drivers\<modelnumber> to the local drive of the freshly imaged, but not yet booted, system. NOTE: The above command line syntax has been “paraphrased” from the actual Altiris-compatible syntax for the conceptual clarity of readers not familiar with Altiris' proprietary command line syntax.
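The token-substitution step can be sketched as follows. This is an illustrative model only: the %FIELD% token syntax, the field name, and the inventory record are hypothetical stand-ins, not Altiris' actual token syntax or database schema.

```python
import re

def expand_tokens(command, inventory):
    """Replace %FIELD% tokens in a script line with values drawn from
    the inventory record of the PC being deployed at that moment."""
    def substitute(match):
        return inventory[match.group(1).lower()]
    return re.sub(r"%([A-Z]+)%", substitute, command)

# Hypothetical inventory record for one managed PC (made-up data).
pc_record = {"modelnumber": "OptiPlex-GX280"}

# Paraphrased copy command with the model-number token in the path.
cmd = expand_tokens(
    r"FIRM COPY \\DeploymentServer\eXpress\drivers\%MODELNUMBER% C:\drivers",
    pc_record,
)
```

Because the substitution happens per-PC at deployment time, one generic script serves every model, and only the matching driver pack crosses the network.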
This worked well, but it is still necessary to:
a. Inventory the environment to obtain a list of PC models, procure the model samples and get them in a test lab. (Time est. 1 to 7 days depending on inventory data availability),
b. Find all the drivers—e.g. download them from the OEM site or use a driver backup utility—which often produces results that are not fully compatible with Windows Plug and Play. (Time estimated is a function of the number of models, but typically took weeks to many months for organizations that have thousands or tens of thousands of PCs with the accompanying hardware variance),
c. Build the scripts necessary to utilize FIRM and tokens in a way that could successfully employ them (usually 1-3 weeks for individuals to debug and deploy a working scripted deployment model), and
d. Continually monitor and maintain the process, running hardware inventories to identify newly emerging models and using the above steps to allow them to be supported by the corporate hardware independent imaging (HWI) process.
Applicants discovered limitations with the above process. Applicants found that deployed systems, even with the right model driver pack, did not always rebuild correctly, and the drivers did not always work. The reason applicants discovered is that within PC models (just as with cars) there are options that can be ordered differently. Just because two computers are the same model does not mean they have the same video card, network card, etc., and users could have put any number of hardware cards into the computer after deployment, or as part of deployment. Consider that it is now as easy as plugging a device into the USB port on the front of a PC to add a device not provided for by the model concept. Thus it is trivial to change the hardware nature of a PC, and in fact likely that corporate users or IT staff will meaningfully change the hardware build of many of their managed PC systems, such that model-based driver packs will not reliably work. Ultimately, the model of the system ceases to have meaning in terms of any guarantee that a given driver pack will work from one system to another based solely on the fact that they are the same model.
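The limitation above can be illustrated with a small sketch. The device identifiers and driver-pack contents here are invented for illustration (loosely patterned on Windows Plug and Play vendor/device IDs): two PCs of the same model can present different hardware, so a driver pack keyed only on model number cannot guarantee coverage.

```python
# Hypothetical driver pack keyed by model number (made-up IDs).
pack_for_model = {
    "OptiPlex-GX280": {"VEN_8086&DEV_1019", "VEN_1002&DEV_5B60"},
}

pc_a = {"model": "OptiPlex-GX280",
        "devices": {"VEN_8086&DEV_1019", "VEN_1002&DEV_5B60"}}
pc_b = {"model": "OptiPlex-GX280",                 # same model...
        "devices": {"VEN_8086&DEV_1019",
                    "VEN_10DE&DEV_0141",           # ...different video card
                    "VEN_0BDA&DEV_8187"}}          # plus a USB add-on

def uncovered_devices(pc):
    """Devices present on the PC that the model-keyed pack has no driver for."""
    return pc["devices"] - pack_for_model[pc["model"]]
```

For pc_a the pack covers everything; for pc_b, two devices go unmatched even though the model number is identical, which is precisely why model-based driver packs do not reliably work.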