Field of the Invention
The present invention concerns a computer system with multiple security levels.
Background of the Invention
Personal computers are widespread, and are commonly and increasingly used on the Internet for banking, entertainment, social purposes, etc. The average user cannot be expected to have a high level of technical knowledge in general, or in the field of computer security in particular. Hence, criminal activities such as fraud and theft are facilitated by inadequately protected personal computers. Currently, much attention is given to the actions of malicious software, malware for short, such as viruses, spyware, etc., which may be used for taking control over remote computers or for keeping track of a user's actions in order to obtain passwords and the like. Some malware, such as software used to track which websites the user visits in order to send targeted spam, may not be directly criminal. However, it may be a nuisance, and in some cases may slow down or even halt a computer. Hence, any kind of malware is undesired.
Today, antivirus software from a variety of vendors provides the main defence against malware. Antivirus software typically scans software for snippets of known virus code, and usually also provides filters to detect web pages that try to trick a user into entering information such as a password or credit card number and then pass the information on (phishing). Antivirus software may also use a number of other techniques in order to discover, isolate and/or remove malware.
Many of the tools used for protecting personal computers are inadequate in that they are reactive: they search for malware after the PC has been infected, perform post mortem analysis, and so on.
It is well known from e.g. the military, governmental and financial sectors that security must be built into the system architecture from the start in order to obtain a truly robust and secure system, be it a computer system, an organizational system or any other system. The mathematical foundation for such secure systems was formulated in the 1970s, primarily by Bell and LaPadula for confidentiality and by Biba for integrity. A brief overview of these models is useful in order to explain the invention.
Brief Overview of Formal Security Models
Security is frequently defined as a combination of the security aspects confidentiality, integrity and availability. In this disclosure, the term ‘security’ is defined in a similar manner. However, it is noted that there may be several aspects of integrity, and that there may be no clear distinction between certain integrity and availability aspects. It should also be understood that all aspects of security herein are independent of each other, i.e. that a security aspect that can be expressed as a combination of other security aspects is not considered a separate security aspect.
Confidentiality means that information should not be disclosed to someone not entitled to know it. In the Bell-LaPadula (BLP) model, a confidentiality level is assigned to an information object such that a higher level implies more confidentiality. A ‘subject’, e.g. a person or process, is given a clearance at a certain confidentiality level. The information object may only be disclosed to a subject having a clearance at or above the confidentiality level of the information object. In other words, ‘writing down’ to a less confidential level is not permitted, whereas ‘writing up’ is allowed in the BLP model. Further, if two information objects with different confidentiality levels are combined, e.g. present in one document, the combination is assigned the higher of the two levels of confidentiality. While information may be written up, it cannot be written back to a lower level without violating the model. This also applies to information combined under the combination rule. Thus, in order to prevent information from migrating to the highest possible confidentiality level and having to treat a lot of public information as if it were confidential, writing up should still be kept to a minimum. The BLP model can be extended with categories or compartments implementing the ‘need to know’ principle. For example, a company may decide not to grant access to salaries to every employee with a clearance for CONFIDENTIAL, but only to those who in addition belong to a certain category, e.g. SALARIES.
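The BLP rules described above can be sketched as a simple access check. The following Python sketch is illustrative only; the level numbers, compartment names and function names are assumptions made for this example, not part of any standard interface.

```python
# Illustrative sketch of Bell-LaPadula access checks. A level is an integer
# (higher = more confidential) paired with a set of compartments.

def dominates(level_a, comps_a, level_b, comps_b):
    """True if (level_a, comps_a) dominates (level_b, comps_b):
    higher-or-equal level and a superset of compartments."""
    return level_a >= level_b and comps_b <= comps_a

def blp_can_read(subject, obj):
    # Information may only be disclosed to a subject whose clearance
    # dominates the object's level ('no read up').
    return dominates(subject[0], subject[1], obj[0], obj[1])

def blp_can_write(subject, obj):
    # Writing up is allowed; writing down is not.
    return dominates(obj[0], obj[1], subject[0], subject[1])

def combine(obj_a, obj_b):
    # A combination of two objects is assigned the higher level and the
    # union of their compartments.
    return (max(obj_a[0], obj_b[0]), obj_a[1] | obj_b[1])
```

With these definitions, a subject cleared at level 2 for the SALARIES compartment may read a level-1 object without compartments, but may neither read a level-3 object nor write down to level 1.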
Integrity essentially concerns the trustworthiness or reliability of information. Biba's strict integrity model, ‘the Biba model’ for short, is similar to the BLP model in that information is assigned a level of integrity and a subject is assigned a clearance. A high level of integrity is associated with reliable and trustworthy information and/or subjects. However, unreliable information should not be allowed to mix with reliable information at a higher integrity level, as the information at the higher level would then be no more reliable than the least reliable information written to it. Hence, the Biba model differs from the Bell-LaPadula model in that writing up is forbidden, writing down is allowed, and a combination of information from two levels of integrity is assigned the lower level. Like the Bell-LaPadula model, the Biba model can be extended with compartments, and although writing down is allowed, it should be kept to a minimum in order to prevent information from migrating to the lowest available integrity level.
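The strict Biba rules can be sketched in the same illustrative manner. Here a higher number denotes more trustworthy information; the function names are assumptions for the example only.

```python
# Illustrative sketch of Biba strict-integrity checks. Higher level means
# more trustworthy information or subjects.

def biba_can_read(subject_level, object_level):
    # A subject may only read information at or above its own integrity
    # level, so that unreliable input cannot taint its output.
    return object_level >= subject_level

def biba_can_write(subject_level, object_level):
    # Writing up is forbidden; writing down is allowed.
    return object_level <= subject_level

def biba_combine(level_a, level_b):
    # Combined information is only as trustworthy as its least reliable
    # part, so the combination takes the lower level.
    return min(level_a, level_b)
```

Note the duality with the BLP sketch of confidentiality: the read and write directions, and the combination rule, are mirrored.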
Some security models combining integrity and confidentiality assume that a subject with access to confidential information, i.e. with a ‘high security clearance’, should automatically have a ‘higher integrity level’ than someone with a lower ‘security clearance’. This is a confusion of terms. In this disclosure, integrity and confidentiality are regarded as completely independent of each other. This complies with current theory, and means that information may be more or less reliable regardless of its level of confidentiality, and that a computer process may be assigned clearance along a confidentiality axis regardless of its assigned clearance along an integrity axis. Hence, a trusted process with the highest available confidentiality level and the lowest possible integrity level will be able to see or read all information in a security system, but it will not be permitted to write any information to lower levels of confidentiality and/or higher levels of integrity. On the other hand, a process run at the lowest available confidentiality level and the highest available integrity level will be able to write information to every level of confidentiality and integrity, but it will not be allowed to receive any information from other levels.
Information Security and Networking
In order to protect confidential information from being disclosed to unauthorized subjects, the information may be encrypted by some cryptographic algorithm using a key. Obviously, there is rarely a real need for encrypting a cake recipe or other trivia to the same level as top secret military information. However, some systems, for example some so-called Virtual Private Networks, do encrypt all messages to the same level regardless of content. To keep the required system resources (and expenses) at a reasonable level, such systems typically encrypt the information to a level appropriate for some medium level of confidentiality. Hence, information assigned a higher level of confidentiality is not permitted to enter such systems without additional encryption. Still, system resources are wasted on encrypting public information, or on encrypting information that has already been encrypted by a more advanced and demanding algorithm. The skilled person will know that different levels of confidentiality can be assigned different encryption algorithms and/or keys of different length in order to encrypt information according to its level of confidentiality. The skilled person will also know that the task of keeping confidentiality levels apart may be more demanding than simply encrypting everything to some medium level of confidentiality.
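The idea of matching encryption effort to the level of confidentiality can be sketched as a simple policy table. The algorithm names and key lengths below are illustrative assumptions made for this example, not recommendations.

```python
# Illustrative policy mapping confidentiality levels to encryption effort.
# The algorithms and key lengths are example choices only.

POLICY = {
    0: None,           # public information: no encryption required
    1: ("AES", 128),   # low confidentiality
    2: ("AES", 192),   # medium confidentiality
    3: ("AES", 256),   # high confidentiality
}

def cipher_for(level):
    """Return the (algorithm, key_bits) pair appropriate for a
    confidentiality level, or None when no encryption is needed."""
    return POLICY[level]
```

Encrypting all traffic at one medium level, as in the Virtual Private Networks mentioned above, would correspond to applying the level-2 entry to everything: wasteful for level-0 information and inadequate for level-3 information.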
An important aspect of integrity is to ensure that information and the subject accessing it are authentic. Thus, authentication is needed to ensure that a user or process is the one he, she or it claims to be, for example the user or process initiating a banking transaction from a bank account. In the financial industry, a token or RSA-generator plus a personal password and/or other personal data may be required to identify a person properly before he or she is permitted access to a banking application. Similarly, a certificate or the like may authenticate a computer process.
One technique to prevent unauthorized alteration involves computing a cryptographic checksum called a hash. For example, a hash can be computed from a piece of software code and stored in a protected area. At runtime a new hash is computed and compared to the stored hash. If the two hashes differ, the code is not allowed to run. Hashing is also used to protect information from unauthorized alteration (tampering) in transit, e.g. to ensure that no one alters an account number and/or amount in a banking application. The HTTP Secure protocol (https) implements authenticity in this manner, and is widely used for banking applications and other transmissions over the Internet where integrity is important. It should be noted that while encryption may ensure some level of integrity in human-based systems, it does not ensure authenticity in a computer system. The reason is that a person may readily recognize a decrypted altered message as garble. Then, if a decrypted message is readable, it probably has not been altered, and the sender may be assumed to be authentic since he must have had the proper key to encrypt the message. A computer process receiving a similar decrypted altered message cannot be expected to recognize the resulting content as garble. Consequently, no conclusion regarding tampering or the sender's identity can be drawn. In short, a hash may preserve integrity while encryption does not, and encryption may preserve confidentiality while a hash does not.
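The install-time versus runtime hash comparison described above can be sketched with a standard hash function. The sketch assumes SHA-256 and hypothetical helper names; a real system would store the install-time hash in a protected area.

```python
# Sketch of hash-based authenticity checking using Python's hashlib.
import hashlib

def install_hash(code: bytes) -> str:
    # Computed once at installation and stored in a protected area.
    return hashlib.sha256(code).hexdigest()

def may_run(code: bytes, stored_hash: str) -> bool:
    # At runtime a fresh hash is computed; any alteration of the code
    # changes the hash, so a mismatch blocks execution.
    return hashlib.sha256(code).hexdigest() == stored_hash
```

A single flipped byte in the code yields a completely different hash, so tampered software is refused before it runs.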
Current computer systems with functionality and architectures supporting the Bell-LaPadula model include, but are not limited to, Solaris version 10 and later, all current Linux distributions as well as secure proprietary systems used for military and governmental applications. In transfer, confidential information may be encrypted with algorithms of various complexity and keys of various lengths according to the confidentiality level of the information in transfer. Current Linux and Solaris systems have some functions for integrity, for example a password system or a ‘smart card’ system for user authentication, the ability to check a hash before running an application (authenticity) and functions for other integrity aspects. Some of the functions related to integrity are implemented in hardware or kernel software, while other functions are implemented by third-party application software.
Functions for the third security aspect, availability, are typically implemented by third-party tools, e.g. application-layer backup or system-recovery tools, or vendor-specific disk-redundancy tools. We note that so-called flooding attacks are sometimes regarded as threats against availability. They may equally well be regarded as unauthorized writes, and as such may be regarded as an integrity threat. Regardless of the terms used, we note that rules similar to Bell-LaPadula's and Biba's can be employed along a number of axes, some of which may be termed an integrity aspect or an availability aspect, but still be treated according to either the BLP or the Biba rules described above.
At least some of the threat posed by malware may be attributed to a lack of system support for formal security models. If, for example, confidentiality or integrity is enforced such that an external process is unable to write into a restricted area, then a virus cannot contaminate application software. Further, if a hash must be computed at runtime and is required to be identical to an authenticated hash stored in a restricted area, then harmful code can automatically be prevented from running, in particular in restricted areas.
Thus, the effects of malware could be reduced or even eliminated if the formal security models were enforced.
However, a strict enforcement of security poses new problems. One example is an integrity control where a user is required to add each and every web page he or she visits to a list of ‘trusted’ pages. Considering the number of web pages visited by the average user, this quickly causes the user to automatically add web pages to the list. After some time, the user may even disable this ‘security’ function to get rid of the perceived nuisance. It is readily seen that this kind of integrity control has little or no effect, and that the user cannot be depended on to adequately assess integrity and/or confidentiality.
Another problem is cost. Today, even starting from a Solaris system, which implements many of the required functions and has a lot of verified code, developing and verifying even a relatively simple system for business use can easily cost several million dollars. Starting from a Linux system, obtaining the necessary certification for the code adds to the cost before a trusted system could be put into business use, let alone military or governmental applications.
An important reason for the high cost is the use of unordered compartments in the formal Bell-LaPadula and Biba models. Given a set of N unordered ‘security compartments’, i.e. security-related groups to which a user or process can belong, a superset of 2^N−1 non-empty elements must be considered in a mathematically ordered and controllable set. For example, if a user can belong to groups A, B, and/or C, the superset of 2^3 = 8 elements a user can belong to is [Ø, {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}]. Formally, a user must have the proper security clearance AND belong to {A}, {A, B}, {A, C} or {A, B, C} in order to access information in compartment A. The empty set Ø, where a user belongs to none of the groups A, B or C, is usually excluded from implementation for obvious reasons. The ordered superset is considered a subset of each of L security levels. Thus, a secure system must consider L·(2^N−1) ordered levels along an axis of security, e.g. the confidentiality axis. In current systems, the number of possible confidentiality levels can be e.g. L = 65536 or larger, and the number of available compartments may also be, for example, N = 65536 or larger. These may seem like large numbers, but a few tens of thousands of compartments in a system with several hundred thousand users may still be too few. In the present context, however, the number of available levels along each axis will typically be L = 3 or less, and the number of compartments may easily be reduced to a few or even 1, as explained below.
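The combinatorics above can be verified with a short sketch; the enumeration of non-empty compartment subsets and the count of ordered levels follow directly from the definitions.

```python
# Sketch of the combinatorics behind unordered compartments. With N
# compartments there are 2**N - 1 non-empty subsets, and with L levels a
# system must order L * (2**N - 1) distinct (level, subset) pairs.
from itertools import combinations

def nonempty_subsets(compartments):
    """Enumerate all non-empty subsets of a compartment set."""
    items = sorted(compartments)
    result = []
    for r in range(1, len(items) + 1):
        result.extend(frozenset(c) for c in combinations(items, r))
    return result

def ordered_level_count(L, N):
    """Number of ordered (level, compartment-subset) pairs to manage."""
    return L * (2 ** N - 1)
```

For the three groups A, B and C this yields the seven non-empty subsets listed above, while reducing to L = 3 levels and a single compartment per virtual machine collapses the problem to a handful of ordered levels.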
It should be understood that the various groups created in a typical operating system for personal computers may have different purposes, and do not necessarily have formal security significance. For example, a substantial number of the groups a user can belong to in a typical PC system will only contain public information with unknown reliability. Further, the rights assigned to processes in a PC system are many, various and can hardly be seen to constitute a formally complete set of rights associated with formal security. Thus, the number of formal security compartments is not large in a typical PC-environment in general. In a secure environment running machines at different levels, a virtual machine's ‘need-to-know’ is expected to be limited, and hence the number of security compartments is expected to be low, for example 1 per virtual machine.
State of the Art
As mentioned above, some operating systems, for example current Linux distributions and some UNIX-based systems, include security functions that employ techniques implementing the formal security models. One such technique is, as briefly mentioned above, to use a hashing algorithm to compute and store a hash of software during installation, calculate a new hash at runtime, and permit the software to run only if the runtime hash is identical to the stored hash. Another technique is to run applications in a “compartment” or “sandpit” isolated from other software running on the system. Running an entire operating environment on a virtual machine provided by a hypervisor system may be viewed as a variant of the sandpit technique. There are other techniques known to those skilled in the art, all of which may be used with the present invention. In this disclosure, the term “operating environment” includes any operating system and/or hypervisor system capable of running computer applications, including different operating systems and user interfaces.
Known systems for implementing security in a low-power system include the use of a processor and certificates and/or keys embedded in a plastic card the size of a credit card. Such cards may be inserted into a card reader connected to a computer. The card reader may be connected through a system bus or a peripheral bus, e.g. a Universal Serial Bus (USB). Such security cards have no internal power source, and electric power is supplied from a running system through the card reader. Further, the processing capability of such a card makes it unsuitable for running computing-intensive routines such as hashing, encryption or booting a kernel in an operating environment. Usually, the card reader also depends on a driver supplied by a running operating environment. Hence, such card-based systems are normally used for high-level security functions such as providing a certificate or key for verification, hashing and/or encryption in applications running within the operating environment.
From a security perspective, such card systems are still prone to various threats against confidentiality, integrity and availability. In particular, malware may infect the operating system and/or applications during startup (boot) or operation. Such malware might, at least in theory, steal the smartcard's keys or certificate, or mimic the driver to authorize something that would not be authorized by the smart card. This possibility renders the smart card unreliable from a formal integrity point of view.
An objective of the present invention is to provide a system capable of providing security related functions and data without requiring a running operating environment. In particular, the system may contain hashes of installed software, for comparison before software, possibly including kernel functions of an operating system, is allowed to run on the system. The system may also contain keys and other data, and be able to run security related routines without requiring external processing power or a running operating system.
Another objective of the present invention is to provide a computer system consistent with formal rules for confidentiality, integrity and availability, which system does not depend on a user's discretion and which hampers a user's activities as little as possible.