Performance-based testing, or “PBT,” is an examination approach wherein candidates must interact with real or simulated systems. PBT places the test taker in one or more situations requiring him or her to apply knowledge, demonstrate skills, solve common problems and/or perform troubleshooting tasks, all of which the test crafter believes correlate with the specialized knowledge and skill needed to perform certain tasks and activities. PBT is used in many industries and professions to test competency. For example, firefighters, police officers, airline pilots, and flight deck crew are often tested using performance-based testing. If their test scores fall below key performance indicators, they are sent back to training or otherwise taken off the job.
PBT may take the form of interactive software simulation, wherein test items simulate the behavior of a particular software product and, in the context of the simulation, the test taker is asked to perform specified functions correctly within the simulation. Interactive software simulation is a particularly useful strategy for gauging proficiency with computer software programs. Unlike computer-based training (“CBT”), which moves a user linearly through a course of study, interactive software simulation places the user in a simulation of a computer application and asks the test taker to perform a function as if he or she were using the real software. Interactive software simulation may permit the application to be simulated without the need for the application to have special programming “hooks,” and without the need for the real application to be present on the testing workstation. Such programs may be self-contained, eliminating variation between different operating systems, product versions and languages.
In an information technology intensive era, companies are looking to streamline the hiring of computer-savvy individuals. This includes assessing their educational needs.
Evaluating potential and current employees can be a costly venture if it is found after a probationary period that the employee is ill-equipped to perform the job for which he or she was hired or trained. In a highly competitive and rapidly evolving field, it is often vital that employees come to the workplace with a grasp of the abilities their jobs demand. In situations where none of the prospective candidates possess all the necessary skills and abilities, testing can indicate which candidates will require the least amount of training. It can also show whether any candidates possess the skills to begin working and whether they have a strong enough grasp of materials to pick up the remaining skills through on-the-job training.
The complexity of computer performance testing can vary greatly, from testing a secretary in the use of a word processor to testing an information technology professional in complex computer system administration. Administrators of such widely varying tests must establish environments to meet the criteria of the job. In particular, the administrator must pay attention to validity and reliability issues each time a test is given.
Validity refers to proof that a test accurately measures the skill or set of skills it is intended to gauge. Methods of assessing test validity include content, construct and criterion validation. Content validity refers to proof, normally provided by subject-matter experts, that items in a test cover the most important and frequently-used knowledge, skill and abilities needed to accomplish the job being measured by the test. Construct validity refers to proof that the individual items in a test are accurate measurements of the subject being tested. Criterion validity refers to proof that the overall test accurately correlates with some other independent measure.
Reliability references the ability of the test to provide consistent, replicable information about a user's performance. Reliability is a prerequisite to validity. Reliability depends on the consistency of the simulation of the test tasks and the consistency of rating responses to the tasks. For testing agencies, accuracy, validity and reliability of their computer performance tests are major selling points.
A computer systems administrator may frequently be required to provide a complete network system, including workstations, servers, applications and documents, for a PBT. Building a computer network may entail connecting hubs, running wiring, and installing software, and skilled personnel are needed to make the network deliver the desired platform of applications, such as word processing, computer-aided design and the like.
Once the realm of mainframe computers, networks with multiple servers now handle everything from websites to application support, email, and accounting. As the need for greater separation and more services has grown, more servers have been deployed to cope with that need. However, the increase in equipment has created management headaches for administration staff, who must maintain every unit at the required reliability level.
Recently there has been introduced a method of employing “virtual machines,” something pioneered on mainframes by companies such as IBM. So-called “virtualization” is the process of presenting a logical grouping or subset of computing resources so that they can be accessed in ways that give benefits over the original configuration. The term “virtual machine” references software that forms a virtualized environment, that is, an environment which appears to a guest operating system as hardware, but is actually simulated and contained by the host system. One type of virtual machine is the VMware virtual machine by VMware, Inc.
Internet hosting companies have become the primary users of virtualization. Using the abstraction of a virtual server, a hosting company can support multiple web servers on a single computer, considerably reducing its maintenance and support costs. While operating on a shared machine, virtualization may have the effect of providing complete environments with all the security of a dedicated machine, yet sharing the backup, archiving, monitoring, and related services for the system administrator.
A group of machines that have similar architecture or design specifications may be considered to be members of the same “family.” Although a group of machines may be in the same family because of their similar architecture and design considerations, machines may vary widely within a family according to their clock speed and other performance parameters.
Each family of machines executes instructions that are unique to the family. The collective set of instructions that a particular machine or family of machines can execute is known as the machine's “instruction set.” As an example, the instruction set used by the Intel 80x86 processor family is incompatible with the instruction set used by the PowerPC processor family.
The uniqueness of a particular family among computer systems also typically results in incompatibility among other elements of hardware architecture of other computer systems. For example, a computer system manufactured with a processor from the Intel 80x86 processor family will have a hardware architecture that is different from the hardware architecture of a computer system manufactured with a processor from the PowerPC processor family. Because of the uniqueness of the machine's instruction set and a computer system's hardware architecture, application software programs are typically written to run on a particular computer system running a particular operating system.
To expand the number of operating systems and application programs that can run on a particular computer system, a field of technology has developed in which a given computer having one type of central processing unit (“CPU”) called a host, will include a software and/or hardware-based emulator that allows the host computer to emulate the instruction set of an unrelated type of CPU, called a guest. Thus, the host computer will execute an application that will cause one or more host instructions to be called in response to a given guest instruction. Therefore, the host computer can both run software designed for its own hardware architecture and software written for a computer having an unrelated hardware architecture.
Typically, an emulator is divided into modules that correspond roughly to the emulated computer's subsystems. Most often, an emulator will be composed of the following modules: a CPU emulator or CPU simulator (the two terms are often interchangeable); a memory subsystem module; and various I/O device emulators. Buses are often not emulated, either for reasons of performance or simplicity, and virtual peripherals communicate directly with the CPU or the memory subsystem.
The CPU simulator is often the most complicated part of an emulator. Many emulators are written using “pre-packaged” CPU simulators in order to concentrate on good and efficient emulation of a specific machine. The simplest form of a CPU simulator is an interpreter, which follows the execution flow of the emulated program code and, for every machine code instruction encountered, executes operations on the host processor that are semantically equivalent to the original instructions.
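The module decomposition and interpreter loop described above can be illustrated with a minimal sketch. The three-opcode instruction set, the class names, and the memory-mapped output device below are all hypothetical, invented solely for illustration; a real emulator's instruction set and dispatch logic would of course be far larger.

```python
class Memory:
    """Memory subsystem module: a flat, byte-addressable array."""
    def __init__(self, size):
        self.bytes = bytearray(size)

    def read(self, addr):
        return self.bytes[addr]

    def write(self, addr, value):
        self.bytes[addr] = value & 0xFF


class OutputDevice:
    """I/O device emulator. As noted above, no bus is modeled;
    the CPU simulator invokes the device directly."""
    def __init__(self):
        self.log = []

    def emit(self, value):
        self.log.append(value)


class CPUInterpreter:
    """Interpreter-style CPU simulator: for each guest machine-code
    instruction encountered, execute a semantically equivalent
    operation on the host."""
    LOAD, OUT, HALT = 0x01, 0x02, 0xFF  # hypothetical one-byte opcodes

    def __init__(self, memory, device):
        self.mem, self.dev = memory, device
        self.pc = 0   # program counter
        self.acc = 0  # accumulator register

    def step(self):
        opcode = self.mem.read(self.pc)
        if opcode == self.LOAD:      # LOAD addr: acc <- mem[addr]
            self.acc = self.mem.read(self.mem.read(self.pc + 1))
            self.pc += 2
        elif opcode == self.OUT:     # OUT: send acc to the device
            self.dev.emit(self.acc)
            self.pc += 1
        elif opcode == self.HALT:    # HALT: stop the execution flow
            return False
        return True

    def run(self):
        while self.step():
            pass


mem = Memory(16)
dev = OutputDevice()
# Guest program: LOAD 5; OUT; HALT -- with the value 42 stored at address 5.
mem.bytes[0:4] = bytes([0x01, 0x05, 0x02, 0xFF])
mem.bytes[5] = 42
CPUInterpreter(mem, dev).run()
print(dev.log)  # -> [42]
```

The `step` method is the fetch/decode/execute cycle: it fetches an opcode at the program counter, decodes it by dispatching on its value, and executes the equivalent host-level operations, exactly as the interpreter description above prescribes.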
When a guest computer system is emulated on a host computer system, the guest computer system is said to be a virtual machine, as the guest computer system exists only as a software representation of the operation of the hardware architecture in the host computer system. The terms “emulator” and “virtual machine” are sometimes used interchangeably to denote the ability to mimic or emulate the hardware architecture of an entire computer system. “Emulation” thus references a complete form of a virtual machine in which the complete hardware architecture is duplicated. Unlike “simulation,” which only attempts to reproduce a program's behavior, “emulation” attempts to model the state of the device being emulated. An emulator program that executes an application on the operating system software and hardware architecture of the host computer, such as a computer system having a PowerPC processor, mimics the operation of the entire guest computer system. The emulator program acts as the interchange between the hardware architecture of the host machine and the instructions transmitted by the software running within the emulated environment of the guest computer system. Emulations are used throughout the network industry to test new software rollouts prior to full implementation.
Administrators presently secure a server state by regular backups. In the event a failure of the system occurs, the administrator can bring the system back online with minor delays. An even more intensive task in the practice of backups is taking an “image” of an environment, such as a disk drive. An image is a computer file containing the complete contents and structure of a data storage medium or device. Images have the advantage that, in the event of a failure of the environment, the structure does not have to be rebuilt manually; because the image file already contains the structure, the time to restore or rebuild a drive or environment is decreased.
Emulated computer systems typically involve the use of a virtual hard drive image. To emulate the presence of a physical hard drive for the guest operating system, the emulation program creates a virtual hard drive image. The emulation program will present the virtual hard drive image to the guest operating system. The guest operating system will boot from the virtual hard drive image and will refer to the virtual hard drive image for all other functions necessitating reading from or writing to a hard drive. The virtual hard drive image often exists as a single file on the physical hard drive of the computer system. Thus, the entire contents of the virtual hard drive of the guest computer system are represented as a single file on the physical hard drive of the host computer system.
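The single-file virtual hard drive arrangement described above can be sketched as follows. This is a simplified model assuming a raw (flat) image layout and a 512-byte sector size; the class name and methods are hypothetical, and real emulation programs use more elaborate image formats with headers and sparse allocation.

```python
import os
import tempfile

SECTOR_SIZE = 512  # bytes per sector, a common disk convention

class VirtualHardDrive:
    """Presents a single host file as a sector-addressable disk
    for a guest operating system (raw image layout assumed)."""
    def __init__(self, path, num_sectors):
        self.path = path
        # Create the backing file at its full virtual size if absent.
        if not os.path.exists(path):
            with open(path, "wb") as f:
                f.truncate(num_sectors * SECTOR_SIZE)

    def read_sector(self, lba):
        """A guest read of logical block `lba` is translated into a
        seek-and-read at the corresponding offset in the host file."""
        with open(self.path, "rb") as f:
            f.seek(lba * SECTOR_SIZE)
            return f.read(SECTOR_SIZE)

    def write_sector(self, lba, data):
        """A guest write is likewise a seek-and-write on the host file."""
        assert len(data) == SECTOR_SIZE
        with open(self.path, "r+b") as f:
            f.seek(lba * SECTOR_SIZE)
            f.write(data)


path = os.path.join(tempfile.mkdtemp(), "guest.img")
vhd = VirtualHardDrive(path, num_sectors=1024)  # 512 KiB virtual disk
boot = b"BOOT".ljust(SECTOR_SIZE, b"\x00")
vhd.write_sector(0, boot)           # guest writes its boot sector
print(vhd.read_sector(0)[:4])       # -> b'BOOT'
```

The entire contents of the guest's virtual disk live in the one file `guest.img` on the host, so the offset arithmetic `lba * SECTOR_SIZE` is all that is needed to map guest disk geometry onto the host file system.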
In present computer performance testing, administrators must manually create not only the test itself, but also the environment. Typically such tests are set up on multiple computers in a network system. For every possible scenario, there must be a method to present the test in a uniform manner and archive the test such that it can be given at a later time without undue burden on the system administrators. In some cases, test providers may need several hundred examples of suitable tests to sample from. Typically they seek a straightforward and manageable means to provide such tests. Having system administrators configure and reconfigure test platforms is onerous, especially in light of the requirements for better return on company investments. All of this may be very costly.
Computer performance-based testing, as compared to multiple-choice format testing, may also be more costly in that it may require significantly more time in the evaluation of the appropriateness of a response. While scoring of the examination may be designed to provide somewhat granular and discrete answers, distinctly right or wrong answers are typically much less common than on a multiple-choice test, because performance-based tests permit multiple correct routes to respond to a proposed scenario.
There is a need for improved computer performance-based testing methods that do not require the set-up of numerous stand-alone computers or a pervasive need for system administrators to configure and reconfigure test platforms on a network system. Further, there is a need for computer performance-based testing methods which allow for extemporaneous administration of tests depending on the test taker designated to take the test. In addition, there is a need for new methods to improve the granularity of performance-based tests in order to more adequately assess the skills of the person being tested.