The present invention relates generally to computer security, and more particularly, to systems, methods and software that provide for remote password authentication using multiple servers.
Password-based authentication is the foundation of most personal authentication systems. Even when supplemented with carry-around tokens, smart-cards, and biometrics, the simple memorized password typically remains an essential core factor for establishing human identity in computer transactions. Low-grade passwords, the kind that people can easily memorize, are a particularly tough problem for many otherwise secure systems. To accommodate low-grade passwords, zero-knowledge password protocols were developed to eliminate network threats. However, a remaining problem is how to prevent attack on password verification data when that data becomes available to an enemy.
In a typical password system, a server maintains verification data for the password, which may be a copy of the password itself. In UNIX systems, for extra security, the verification data is an iterated one-way function of the password, which is typically stored in a file. This file is often further protected so that unprivileged users and programs do not have access to it. The classic UNIX password attack is where an attacker somehow has access to the file of password verification data, and runs a password-cracking tool against it to determine or “crack” as many passwords as possible. Password cracking is simply the application of the one-way function to a list of candidate passwords. When the output of the one-way function matches the verification data, the attacker knows he has guessed the correct password. Such attacks may be optimized to try the most common or likely passwords first, so as to be able to crack as many passwords as possible in a given period of time.
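The cracking procedure described above can be sketched in a few lines of Python. This is a minimal illustration only; the iterated SHA-256 construction, the salt, and the candidate list are hypothetical placeholders, not the actual UNIX crypt function.

```python
import hashlib

def make_verifier(password, salt, iterations=1000):
    """Iterated one-way function of the password (UNIX-style)."""
    h = salt + password.encode()
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

def crack(verifier, salt, candidates):
    """Try the most likely passwords first; a matching output reveals the password."""
    for guess in candidates:
        if make_verifier(guess, salt) == verifier:
            return guess
    return None

salt = b"s1"                              # hypothetical salt value
stored = make_verifier("sunshine", salt)  # the stolen verification data
assert crack(stored, salt, ["123456", "password", "sunshine"]) == "sunshine"
```

Note that the attacker needs no interaction with the server: possession of the verification data alone suffices for this off-line attack.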
A goal is to reduce the threat of server-based attack on a password as much as possible in a password-based client/server authentication system, with the simplest possible security model—that is, using as few assumptions as possible. The goal is to prevent disclosure of the password even in the face of total unconstrained attacks on a server. In the present model, it is presumed that the attacker has the ability to alter a server's software, access all data available to that server, and control the use of that server while it authenticates legitimate and unsuspecting users.
To achieve this goal, it is clear that one needs to use more than one machine. In any single machine system, one who can access the user's password verification data, or can access any password-encrypted data, has sufficient information to perform an unconstrained off-line brute-force attack by simulating the actions of the user. The problem is specifically how to split password verification data among two or more servers, so as to eliminate this risk. No single server should have the ability to mount an off-line brute force attack.
Public-key cryptography is now becoming recognized as a crucial tool for secure personal electronic transactions. Personal digital signatures are expected to have widespread applications. However, a big problem is in how to protect and manage the user's private key. Leaving it on the client machine, perhaps weakly encrypted with the password, is an undesirable security risk. It seems foolish to leave persistent password-crackable data on client machines where the security is all-too-often notoriously difficult to manage. Furthermore, in a model where a number of users can share a single machine, it becomes impractical to require local storage of user credentials.
One class of user authentication solutions for so-called “roaming” users keeps the user's private credential stored on a remote server, and uses password-based authentication to retrieve these credentials. Strong zero-knowledge password (ZKP) protocols address many of the threats in these solutions, by eliminating network-based brute-force attacks, while at the same time eliminating the dependency on other stored keys and certificates on the client machine. ZKP protocols are essential for the multi-server case as well.
Two models for multi-server secure roaming systems are discussed: a strong model and a weaker model. In both models the attacker is presumed to be able to gain access to copies of password-verification data, and has the ability to perform computations over a reasonably long period of time. (Reasonably long means unconstrained, but within the limits of computational feasibility. An arbitrarily long computation would be able to crack arbitrarily large public keys. As in most cryptographic methods, it is presumed that some practical upper-bound can be established for the time and power available to potential attackers, and that corresponding safe minimum reasonable sizes for keys may be determined.)
In the weaker model the attacker is presumed to have access to copies of the server's persistent data, but is unable to interfere with the operation of a running server.
In the strong model, a method must maintain security of the password even in the face of total active compromise of up to all but one of the authentication servers. The attacker is presumed to be able to modify server software and access all information available to a server while authenticating valid users. The strong model is simpler, and is thus preferred. It is also stronger than earlier models for similar systems that presumed the existence of prior secure channels.
In all of these models, each server may defend itself from unconstrained on-line attack by enforcing limits on the number of allowable invalid access attempts. These limits may be enforced in a number of usual ways. A preferred embodiment of the present invention also includes a feature that helps servers to better account for legitimate mistakes made in password entry.
The goal of a roaming system is to permit mobile users to securely access and use their private keys to perform public-key cryptographic operations. Mobility is referred to in a broad sense and encompasses using one's personal workstation and other people's workstations without having to store keys there, using public kiosk terminals, as well as using modern handheld wireless network devices. It is desired to provide users password-authenticated access to private keys from anywhere, while minimizing opportunities for an enemy to steal or crack the password and thereby obtain these keys.
Smartcards have promised to solve the private key storage problem for roaming users, but this solution requires deployment of cards and installation of card readers. The tendency for people to sacrifice security for convenience has proved to be a barrier to widespread use of solutions requiring extra hardware. This is one motivation for software-based roaming protocols.
As used herein, a roaming protocol refers to a secure password-based protocol for remote retrieval of a private key from one or more credentials servers. Using just an easily memorized password, and no other stored user credentials, the user authenticates to a credentials server and retrieves her private key for temporary use on any acceptable client machine. The client uses the key for one or more transactions, and then afterwards, erases the key and any local password-related data.
As used herein, a client machine operating on behalf of a specific user is referred to as Alice and the credentials servers are generally referred to as Bob, or individually as Bi, using gender-specific pronouns for the female client and her male servers.
The concept of “roaming” systems is closely related to credentials servers. The SPX LEAF system disclosed by J. Tardo and K. Alagappan in “SPX: Global Authentication Using Public Key Certificates”, Proc. 1991 IEEE Computer Society Symposium on Security and Privacy, 1991, pp. 232–244 presents a roaming protocol that uses a server-authenticated channel to transmit a password to a credentials server for verification, and performs subsequent retrieval and decryption of the user's private key. The credentials server protects itself by limiting guessing attacks that it can detect, and the protocol prevents unobtrusive guessing of the password off-line.
When a credentials (authentication) server can determine whether a password guess is correct, it can prevent or delay further exchanges after a preset failure threshold.
Password-only protocols are another important related field. The EKE protocols disclosed by S. Bellovin and M. Merritt in “Encrypted Key Exchange: Password-based protocols secure against dictionary attacks”, Proceedings of the IEEE Symposium on Research in Security and Privacy, May 1992 introduced the concept of a secure password-only protocol, by safely authenticating a password over an insecure network with no prior act of server authentication required. A series of other methods with similar goals were developed, including “secret public key” methods disclosed by L. Gong, T. M. A. Lomas, R. M. Needham, and J. H. Saltzer in “Protecting Poorly Chosen Secrets from Guessing Attacks”, IEEE Journal on Selected Areas in Communications, vol. 11, no. 5, June 1993, pp. 648–656 and by L. Gong in “Increasing Availability and Security of an Authentication Service”, IEEE Journal on Selected Areas in Communications, vol. 11, no. 5, June 1993, pp. 657–662, the SPEKE method discussed by D. Jablon in “Strong password-only authenticated key exchange”, ACM Computer Communications Review, vol. 26, no. 5, October 1996, pp. 5–26, http://www.IntegritySciences.com/links.html#Jab96, the OKE method described by S. Lucks in “Open Key Exchange: How to Defeat Dictionary Attacks Without Encrypting Public Keys”, The Security Protocol Workshop '97, Ecole Normale Superieure, April 7–9, 1997, and the SRP-3 method described by T. Wu in “The Secure Remote Password Protocol,” Proceedings of 1998 Network and Distributed System Security Symposium, Internet Society, January 1998, pp. 97–111, and others, with a growing body of theoretical work in the password-only model such as is disclosed by S. Halevi and H. Krawczyk in “Public-key cryptography and password protocols”, Proceedings of the Fifth ACM Conference on Computer and Communications Security, 1998, M. K. Boyarsky in “Public-Key Cryptography and Password Protocols: The Multi-User Case”, Proc. 
6th ACM Conference on Computer and Communications Security, Nov. 1–4, 1999, Singapore, V. Boyko, P. MacKenzie and S. Patel in “Provably Secure Password Authenticated Key Exchange Using Diffie-Hellman”, Advances in Cryptology—EUROCRYPT 2000, Lecture Notes in Computer Science, vol. 1807, Springer-Verlag, May 2000, and M. Bellare, D. Pointcheval and P. Rogaway in “Authenticated Key Exchange Secure Against Dictionary Attack”, Advances in Cryptology—EUROCRYPT 2000, Lecture Notes in Computer Science, vol. 1807, pp. 139–155, Springer-Verlag, May 2000. Most of these papers stress the point that passwords and related memorized secrets must be conservatively presumed to be either crackable by brute-force or, at best, to be of indeterminate entropy, and this warrants extra measures to protect users.
The SPEKE method developed by the assignee of the present invention is an authenticated form of the well-known Diffie-Hellman (DH) exponential key exchange, where the base of the exchange is determined by a password. In a commonly used form of SPEKE, the password provides for mutual authentication of the derived Diffie-Hellman session key. Variations of SPEKE have also been described in pending U.S. patent application Ser. No. 08/823,961, and in a paper by D. Jablon entitled “Extended Password Protocols Immune to Dictionary Attack”, Proceedings of the Sixth Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET-ICE '97) Enterprise Security Workshop, IEEE Computer Society, Jun. 18–20, 1997, pp. 248–255, and in a paper by R. Perlman and C. Kaufman, “Secure Password-Based Protocol for Downloading a Private Key”, Proceedings of 1999 Network and Distributed System Security Symposium, Internet Society, Feb. 3–5 1999.
A SPEKE-enabled system that uses split keys among multiple machines, herein referred to as SK1, was presented by D. Jablon on Jan. 21, 1999 at a talk entitled “Secret Public Keys and Passwords” at the 1999 RSA Data Security Conference, with slides published at <http://www.IntegritySciences.com/rsa99/index.html>. In this system, Alice's private key U is split into three shares, a password-derived share P, and two large shares X and Y. The shares are combined using the ⊕ function (bitwise exclusive-or) with U = P ⊕ X ⊕ Y. The SK1 enrollment and key retrieval processes are summarized below.
In SK1 enrollment, Alice selects the three shares {P, X, Y}, combines two of the shares to create a secret private key S (where S = P ⊕ X), memorizes P, and stores X on her machine. Then, using a form of the B-SPEKE protocol, of which several other forms are described in the June, 1997 Jablon paper, she constructs a verifier {g, V} corresponding to the password-derived values {P, S} as follows: g = hash(S) using a cryptographic hash function, and V = g^P using exponentiation in an appropriate group. She then sends {g, V, Y} to Bob, who stores them as her secrets.
In SK1 key retrieval, Alice (at some later time) obtains her private key U from Bob. Alice uses the B-SPEKE protocol to prove her knowledge of P and S to Bob, and to negotiate an authenticated session key K with Bob. After Bob verifies Alice, using his knowledge of {g,V}, he sends her Y symmetrically encrypted under K. Alice then decrypts to retrieve Y and re-constructs U. A brute force attack on the password-derived value P in this system requires the attacker to have access to both shares X and Y, which are stored on two different machines.
The SK1 key retrieval exchange proceeds as follows:
    A: S = P ⊕ X
    A: g = hash(S)
    A→B: A, QA = g^(2·RA)
    B→A: QB = g^(2·RB)
    A: K1 = QB^RA
    A: K2 = QB^P
    B: K1 = QA^RB
    B: K2 = V^(2·RB)
    A→B: hash(hash(K1, K2))
    B→A: encrypt(K, Y)
    A: U = P ⊕ X ⊕ Y
To enable a brute force attack on P in SK1, the attacker needs access to both X and Y.
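The ⊕ share arithmetic of SK1 can be illustrated as follows. This is a minimal sketch of the splitting and recombination only; the B-SPEKE authentication and the encrypted transfer of Y are omitted, and the password is a hypothetical placeholder.

```python
import hashlib
import secrets

def xor(a, b):
    """Bitwise exclusive-or of two equal-length byte strings."""
    return bytes(i ^ j for i, j in zip(a, b))

# Enrollment: the private key U is the combination of a password-derived
# share P, a large share X stored on Alice's machine, and a large share Y
# stored on Bob.
P = hashlib.sha256(b"correct horse").digest()   # hypothetical password share
X = secrets.token_bytes(32)
Y = secrets.token_bytes(32)
U = xor(xor(P, X), Y)

# Retrieval: once Bob releases Y (after B-SPEKE authentication, omitted
# here), Alice recombines all three shares to recover U.
assert xor(xor(P, X), Y) == U
```

Because any two shares give no information about the third, an attacker must obtain both X and Y before a guess at P can even be tested.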
The roaming model and password-only methods were combined in the Perlman and Kaufman paper to create protocols based on both EKE and SPEKE. These authors showed that simple forms of password-only methods were sufficient for secure roaming access to credentials. Other roaming protocols are discussed in the Gong, Lomas, Needham, and Saltzer papers, the Wu paper, the S. Halevi and H. Krawczyk paper “Public-key cryptography and password protocols”, Proceedings of the Fifth ACM Conference on Computer and Communications Security, 1998, and the P. MacKenzie and R. Swaminathan paper “Secure Network Authentication with Password Identification”, submission to IEEE P1363 working group, http://grouper.ieee.org/groups/1363/, Jul. 30, 1999, all of which are designed to stop off-line guessing attacks on network messages, to provide strong software-based protection when client-storage of keys is impractical.
Modified Diffie-Hellman is a method described by Eric Hughes in a paper entitled “An Encrypted Key Transmission Protocol”, presented at a session of CRYPTO '94, August 1994, and in a book entitled “Applied Cryptography Second Edition” by B. Schneier, published by John Wiley & Sons, 1996, at page 515. In ordinary Diffie-Hellman, both client and server know a fixed g, and jointly negotiate K = g^(x·y). However, in modified Diffie-Hellman, the client (Alice) sends g^x, receives g^(x·y), and retrieves K = g^y from a server. Her secret exponent x is used as a blinding factor. The key is retrieved by raising the server's value to the power of x inverted, as in K := (g^(x·y))^(1/x). Unlike the ordinary Diffie-Hellman method, a static value for K can be precomputed, even before the server receives the client's (Alice's) g^x.
A form of SPEKE that is based on Modified Diffie-Hellman, herein referred to as Modified SPEKE, is described in U.S. patent application Ser. No. 08/823,961. In Modified SPEKE, Alice derives a key K that is based on her knowledge of P, a value derived from her password, and a random value y known to Bob. Alice raises P to a secret random power x, and sends the resulting value (QA) to Bob. Bob raises QA to the power y, and returns the result (QB) to Alice. Alice computes K by raising QB to the power of the inverse of x, with the result that K = P^y.
    Alice: QA := P^x
    Alice→Bob: QA
    Bob: QB := QA^y
    Bob→Alice: QB
    Alice: K := QB^(1/x)
It is recommended that P be a cryptographic hash function of the password, such as SHA1 or MD5, that results in a generator of a suitable large finite group. Further details may be found in U.S. patent application Ser. No. 08/823,961 and in the other papers on SPEKE.
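As an illustration, the Modified SPEKE exchange can be sketched with toy parameters. The small prime, fixed exponents, and password are assumptions for demonstration only; a real implementation requires a large safe prime and fresh random secrets.

```python
import hashlib

# Toy parameters for illustration only; real use needs a large safe prime.
p = 1019                 # safe prime: p = 2*q + 1 with q = 509 prime
q = (p - 1) // 2

def password_to_base(password):
    """Hash the password and square it so P lies in the order-q subgroup."""
    h = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    return pow(h % p, 2, p)

P = password_to_base("correct horse")   # hypothetical password

x = 123                  # Alice's secret blinding exponent
y = 401                  # Bob's stored per-user secret

QA = pow(P, x, p)                  # Alice -> Bob
QB = pow(QA, y, p)                 # Bob -> Alice
K = pow(QB, pow(x, -1, q), p)      # Alice unblinds: K = P^y mod p

assert K == pow(P, y, p)
```

The unblinding step works because P has order dividing q, so exponentiating by x and then by the inverse of x modulo q cancels, leaving only Bob's contribution y in the result.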
To distribute the risk of stolen verification data, the data can be split among multiple machines. Care must be taken in the design of a network protocol that uses multiple machines to avoid exposing new avenues of attack on the password-derived data.
Another approach to a system that protects against dictionary attacks is a technique called cryptographic camouflage (herein referred to as CC), as discussed by D. Hoover and B. Kausik in “Software Smart Cards via Cryptographic Camouflage”, Proceedings of 1999 IEEE Symposium on Security and Privacy. The CC system was introduced to reduce the vulnerability of client-stored private keys, in contrast with the present goal to reduce the vulnerability of server-stored verification data.
At least one CC system has been modified to support roaming users, which provides a two-server roaming solution. Before the two-server roaming version is discussed, the basic CC system will be discussed, which includes a user, a client machine, and a CC authentication server (CCAS).
CC uses public key techniques in a way that apparently violates some of the principles of public key technology, but with a set of restrictions designed to achieve the goals discussed in the Hoover and Kausik paper. The terminology “CC private key” and “CC public key” are used to designate two components of the system.
Note that the use of “private” and “public” in the terms “CC private key” and “CC public key” has a special meaning that is not consistent with the ordinary use of the terms “private key” and “public key”. In particular, the CC model presumes that the CC public key is not publicly known.
The CC private key is stored on the client, but in a way that is hidden with a small personal identification number (PIN). It uses a special construction that makes decryption of the stored private key with any possible PIN equally plausible. (This issue is discussed later when considering human-chosen versus machine-generated PINs, an enemy's partial knowledge of PINs, and related threats.)
Only trusted parties must know the CC public key. If a CC-encrypted private key is stolen, possession of the CC public key gives the thief the ability to mount a dictionary attack on the PIN. To meet the security goals, the CC public key must be known only to the CCAS. In practice, the client may store the CC public key in a form encrypted under the CCAS public key, such that only the CCAS can read it. The client sends this encrypted package to the CCAS whenever it authenticates to the CCAS.
Similarly, any digital signatures that were signed with a user's CC private key must be visible only to the CCAS. Thus, messages are first signed with the CC private key, and then sealed with the CCAS public key.
Note that Hoover and Kausik state a more liberal restriction, that any messages encrypted under the CC private key containing verifiable plaintext must be known only by the CCAS. It is noted that in practice it is exceedingly difficult to create useful messages that are free from verifiable plaintext. The presumption that plaintext is not verifiable leaves open questions about the security of the system that depends on the ingenuity of the attacker to create a suitable verification method. Open questions such as these are not acceptable in the present model, hence a tighter statement of this restriction is used. Following is a summary of the CC system:
In CC enrollment, the client chooses PIN P, private key EC, and corresponding public key DC. There are also further restrictions that CC imposes on the construction of EC and DC, which need not concern us here. The client stores:
    CC(P, EC), the camouflaged private key;
    DS, the public key of the CCAS; and
    DS(DC), the client public key sealed for the CCAS.
In CC authentication, the user enters P, and the client retrieves EC from CC(P, EC). The client sends to the CCAS:
    DS(EC(message)), and
    DS(DC).
The CCAS unseals both EC(message) and DC, and verifies the client's signature to authenticate the message.
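The sign-then-seal flow can be sketched with textbook RSA and toy primes. This is purely illustrative: the primes, the exponent 17, and the numeric message are assumptions, CC imposes additional restrictions on key construction, and real systems use padded, full-sized RSA.

```python
def keygen(p, q, e=17):
    """Textbook RSA keypair (tiny primes, illustration only)."""
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return (n, e), (n, d)

def rsa_op(key, m):
    """Apply a textbook RSA operation with the given key to m."""
    n, exp = key
    return pow(m, exp, n)

# CC keypair: EC is the camouflaged private key, DC the secret "public" key.
DC, EC = keygen(61, 53)
# CCAS keypair: DS is public; SS is the CCAS private key.
DS, SS = keygen(89, 97)

message = 42
signed = rsa_op(EC, message)   # first sign with the CC private key
sealed = rsa_op(DS, signed)    # then seal for the CCAS only

# CCAS side: unseal with its private key, then verify with DC.
assert rsa_op(DC, rsa_op(SS, sealed)) == message
```

Sealing the signature under DS is what keeps the CC public key's verification capability, and hence the dictionary-attack capability, confined to the CCAS.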
One might presume that DC is accompanied by a suitable set of certificates that attest to the validity of the key for the purposes of the CCAS. Yet, given that DC must remain a secret shared by the client and the CCAS, this would seem to preclude ordinary methods for certifying DC using third-party certificate authorities. Together these restrictions on the CC private key and CC public key make it clear that they are unsuitable for use in any true public key system.
In CC, the PIN is effectively divided into two shares:
    CC(P, EC), which is known only to the client, and
    DC, which is known only to the server.
In general, all further messages signed under EC or sealed under DC must also be sealed under DS. In accordance with the present invention, this can be transformed into a two-server roaming solution.
The CC system handles password entry errors in a special way, with a feature that detects typographical errors in password entry on the client. The client compares a small amount of the hashed password against locally stored reference data on the client. However, this mechanism has a significant cost, in that it leaks a few bits of information about the password to an enemy that gets access to the stored client data. The consequence is that passwords in the CC system must be somewhat larger or more complex, and thus harder to memorize, for a given level of security against on-line guessing attacks. The CC error handling feature was designed with the premise that people can work a little harder to ease the computing burden on the system, in that early detection of errors on the client prevents unnecessary work on the server. An alternative philosophy is that computers should be used to save people from having to work. Systems should work hard to tolerate user mistakes, without penalizing the user, and still maintain the highest possible barrier against password guessing attack. A system for optimally handling innocent mistakes in password entry is included in a preferred embodiment of the present invention.
Roaming solutions have been designed and deployed by a number of companies that provide public key infrastructure (PKI), for the purpose of authenticating “roaming” users, principally where smart-cards are unavailable. In such systems users securely retrieve stored user profiles (typically containing the user's private key) from a remote credentials server, as discussed by J. Kerstetter, “Entrust, VeriSign clear path to PKI”, PC Week, Jun. 14, 1999, available at <http://www.integritysciences.com/PKI50.html>. Other work on credentials servers uses variants of EKE and SPEKE as described in the Perlman and Kaufman paper.
The cryptographic camouflage system described above presumes a single CC server, and presumes user key information is stored on the client. One can transform this into a two-server roaming solution, which is referred to as CC Roaming, by moving the client-stored data to a CC roaming server (CCRS) and giving the CCRS additional verification data for a second user password.
The user authenticates to the CCRS using a password, and an appropriate password-authentication protocol that results in the ability for the CCRS to securely return information to the client. Preferably, this would use a verifier-based or so-called “extended” zero-knowledge password protocol. However, alternative constructions are possible, including sending the clear-text password through a server-authenticated SSL (secure socket layer) or TLS channel to the CCRS for verification. SSL is discussed by A. Frier, P. Karlton, and P. Kocher in “The SSL 3.0 Protocol”, Netscape Communications Corp., Nov. 18, 1996, and TLS is discussed by T. Dierks and C. Allen in “The TLS Protocol Version 1.0”, IETF RFC 2246, http://www.ietf.org/rfc/rfc2246.txt, Internet Activities Board, January 1999. As discussed above, for full security, the CCAS PIN and the CCRS password must be chosen as completely independent values. If they are related values, an attack on the password may lead to disclosure of the PIN.
In essence, CC Roaming is a two-password system, with one password for the CCRS, and another (the PIN) for the CCAS. One obvious limitation of CC Roaming is that the first server gets the opportunity to mount a brute-force attack on the first password. Two other related issues are the efficiency of using client entropy, and the independence of the PIN and password.
When using CC Roaming with password-thru-SSL to authenticate to the CCRS, there are added risks both from server-spoofing attacks, as discussed by E. Felten, D. Balfanz, D. Dean and D. Wallach in “Web Spoofing: An Internet Con Game”, 20th National Information Systems Security Conference, Oct. 7–10, 1997, Baltimore, Md., and at http://www.cs.princeton.edu/sip/pub/spoofing.html, and from related password/PIN attacks described below. In all of these cases, it is possible to lose some of the benefits of the two-server approach if just one of them gets attacked. Using a zero-knowledge password-proof to authenticate to the CCRS can eliminate the server-spoofing attacks, and make related PIN/password attacks much more difficult.
The CC roaming system makes inefficient use of the entropy inherent in user passwords, which must be considered to be a valuable and scarce resource. Recall that a primary goal of password methods is to minimize the amount (number of significant bits) of information that a user must remember to achieve a given level of security. The CC Roaming system wastes these bits and introduces several points of vulnerability. First, Hoover and Kausik suggest using some bits of the PIN to pre-qualify the PIN before running the protocol. They do this by additionally storing a specially constructed small hash of the PIN in the client. An attacker can use the hashed PIN to reduce the space of candidate PINs. Considering the already reduced size of the PIN, this may be dangerous. If user-chosen PINs are allowed, this may reduce the space of valid PINs to a dangerous level.
A CC Roaming system may also be sensitive to the relationship between the two passwords, the AS PIN and the RS password, which artificially divides the entropy of the user's secrets. Hoover and Kausik make it clear that the camouflaged AS PIN must not be used for any other purposes. Yet, if the user accidentally incorporates the AS PIN in the RS password, or if there is some correspondence between these secrets that might be known to an attacker, the underlying CC system in CC Roaming becomes vulnerable to attack.
A related password/PIN attack is possible when the user chooses them as the same values, or as distinct values with an obvious or perhaps subtle-but-discernable relationship. There is also a risk for a user with truly independent PIN and password who mistakenly types one in place of the other.
If one considers broad-based attacks on multiple users, one can reduce the target space by first cracking the users with weak RS passwords, and then focusing the effort on the set of cracked RS users to crack their corresponding AS PINs. This leakage of information about the RS password can reduce the overall work for a potential attacker. In contrast, if the RS passwords and AS PINs are combined into a single secret, an ideal two-server system (which will be described) has a cracking work factor that depends on both the passwords and the PINs, requiring a much larger overall work factor.
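The difference in cracking work factor can be illustrated with hypothetical entropy figures; the 20-bit password and 14-bit PIN below are assumed purely for the arithmetic.

```python
# Hypothetical entropy figures: a 20-bit RS password and a 14-bit AS PIN.
pw_bits, pin_bits = 20, 14

# Staged attack on CC Roaming: crack the RS password first, then use
# each cracked password to attack the corresponding AS PIN.
staged = 2**pw_bits + 2**pin_bits

# Ideal combined system: one joint secret, so the work factors multiply
# rather than add.
combined = 2**(pw_bits + pin_bits)

assert combined > 16000 * staged
```

Even with these modest figures, the combined secret forces roughly four orders of magnitude more work than the staged attack, which is why partitioning the user's entropy into separately attackable pieces weakens the system.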
The artificial partitioning of the secret into two factors for the CC Roaming system thus introduces new vulnerabilities and weakens the model. In the description of the preferred embodiments, simpler alternatives to CC and CC Roaming are described that are more powerful, less complex, and at least equally protective of the user's memorized secrets.
All single-server roaming solutions have inherent limitations. In all of the above roaming methods, a single credentials server maintains data that could be used to verify information about a user password. If it is presumed that hashed-passwords can be broken by brute-force, the password-verifying server essentially represents a single point of failure for this sensitive user data. Systems that address this problem will now be discussed.
Multi-server roaming represents a further advance. While single server systems prevent guessing attacks from the client and the network, they do not stop guessing based on password-verification data that might be stolen from the server. Multi-server roaming systems can prevent such attacks to a remarkable degree. At the cost of using n related credentials servers to authenticate, multi-server systems extend the scope of protection to the credentials server database. In these systems, an enemy can take full control of up to n−1 servers, and monitor the operation of these servers during successful credential retrieval with valid users, and still not be able to verify a single guess for anyone's password, without being detected by the remaining uncompromised server.
VeriSign issued a press release entitled “VeriSign Introduces New Technology to Enable Network-Based Authentication, Digital Signatures and Data Privacy” in May 2000, and published a web page located at http://www.verisign.com/rsc/wp/roaming/index.html that described, at a high level, a multi-server roaming feature. A paper by W. Ford and B. Kaliski entitled “Server-Assisted Generation of a Strong Secret from a Password” was presented at the IEEE Ninth International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises in Gaithersburg Md., on Jun. 14, 2000. A revised version of this paper was published in the proceedings in September 2000 (W. Ford and B. Kaliski, “Server-Assisted Generation of a Strong Secret from a Password”, Proceedings of 9th International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, IEEE, September, 2000). These papers describe methods that use multiple servers to frustrate server-based password cracking attacks. It is noted that the methods detailed in the Jun. 14, 2000 Ford & Kaliski paper rely on a prior server-authenticated channel. This dependency on a prior secure channel for password security introduces an unnecessary and potentially risky security assumption. The removal of this dependency is one of the advantages of the present invention.
One of the methods disclosed in the Jun. 14, 2000 paper uses the Modified SPEKE protocol, as described above, which is used in a preferred embodiment of the present invention. In the various methods described in the Jun. 14, 2000 paper, two or more servers share the burden of keeping the password verification data secret. Each server stores a secret for the user. A client obtains the password from the user and interacts with each server in a two-step process. First the client obtains an amplified key based on the password from each server, and then creates a “master” key that is used to authenticate to each of the servers. (Note that the term “amplified key” (which comes from the Bellovin and Merritt paper) is used herein instead of the “hardened key” terminology of Ford and Kaliski.) Each amplified key represents one share of the user's master key. The client uses any standard technique, such as the use of a hash function, to combine the amplified key shares to create the master key.
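The combining step might be sketched as follows; the two fixed share values and the choice of SHA-256 are assumptions for illustration, since the papers allow any standard combining technique.

```python
import hashlib

# Hypothetical amplified key shares retrieved from two servers.
K1 = bytes.fromhex("11" * 32)   # amplified key from server B1
K2 = bytes.fromhex("22" * 32)   # amplified key from server B2

# Combine the shares with a hash function to form the master key.
# Neither share alone determines the result.
master = hashlib.sha256(K1 + K2).digest()
```

Because the hash output depends on every share, an attacker who compromises fewer than all servers holds no data with which to test a password guess against the master key.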
To obtain each amplified key, Ford and Kaliski suggest two basic methods. One is the Modified SPEKE method described above, and the other is a variant that uses blind RSA digital signatures in the style described by D. Chaum in “Security without Identification: Transaction Systems to Make Big Brother Obsolete”, Communications of the ACM, 28 (1985), 1030–1044. These methods are compared below.
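The Modified SPEKE amplification referenced above can be sketched as follows. This is a toy sketch under stated assumptions: the group parameters are far too small for real use, and the hashing of the password into the group and the variable names are illustrative choices, not the exact specification of the Ford and Kaliski paper.

```python
import hashlib, secrets

p = 10007              # toy safe prime, p = 2q + 1 (real use needs >= 2048 bits)
q = (p - 1) // 2       # prime order of the quadratic-residue subgroup

def password_to_generator(password: bytes) -> int:
    # Hash the password, then square to land in the order-q subgroup.
    h = int.from_bytes(hashlib.sha256(password).digest(), "big") % p
    return pow(h, 2, p)

# Server holds a per-user secret exponent y.
y = secrets.randbelow(q - 1) + 1

# Client blinds g with a random r; server exponentiates; client unblinds.
g = password_to_generator(b"correct horse")
a = None
r = secrets.randbelow(q - 1) + 1
a = pow(g, r, p)                   # client -> server (blinded password)
b = pow(a, y, p)                   # server -> client (adds secret y)
K = pow(b, pow(r, -1, q), p)       # amplified key: g^y mod p
```

The server sees only the blinded value a = g^r, so it learns nothing about the password; the client ends with K = g^y, which depends on both the password and the server's secret.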
The servers keep track of invalid access attempts, and, according to Ford and Kaliski, if there are “significantly more” steps of password amplification than there are successful authentications, the server takes action to limit or lock out further requests. The system, however, does not distinguish between mistakes made by a valid user and other events that might better reflect a true attack on the password. As such, their system for dealing with unsuccessful attempts does not optimally handle common human errors. Unsuccessful amplifications due to simple typographical errors or other corrected mistakes should not be counted in the same way as other unsuccessful attempts. The present invention provides a better way to deal with these cases.
The methods in the Jun. 14, 2000 paper further rely on previously secured server-authenticated channels to the servers. This reliance on prior authentication creates an unnecessarily complex security model, and thus adds risks of attack. These risks may be incompatible with the goal of eliminating single points of failure.
RSA blind public-key signatures are used in one of the password amplification methods described in the Jun. 14, 2000 paper. This technique accomplishes the same effect as the Modified SPEKE method—both amplify a password into a large key based on a random component known to another party. However, the RSA method introduces added complexity. Details of the RSA blinding method are not included in the Ford and Kaliski papers, but a method based on their reference to Chaum-blinding appears to work as follows.
Instead of a single value y, the server maintains for each user the values {n, e, y}. None of these values are derived from the password. The value n is equal to p·q, where p and q are prime numbers suitable for an RSA modulus, e is the RSA public key, and y is the corresponding private key, such that y=1/e mod (p−1)(q−1). Essentially, the amplified key is KB, which is an RSA signature on the password. The use of a random blinding factor x makes this a blind public-key signature scheme as described in the Chaum paper, where the blinding factor x prevents the server from knowing the data (P) that it is signing.
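Under those definitions, the Chaum-style blinding appears to proceed as sketched below. The primes are toy values for illustration only, and the hashing of P into the RSA domain and the variable names are assumptions; d here plays the role of the server's private key y in the text.

```python
import hashlib, math, secrets

# Toy RSA parameters (demo only; real use needs large primes).
p_, q_ = 1009, 1013
n = p_ * q_
e = 65537
d = pow(e, -1, (p_ - 1) * (q_ - 1))    # server's private key (y in the text)

# Client side: hash the password P into the RSA domain, then blind it.
m = int.from_bytes(hashlib.sha256(b"correct horse").digest(), "big") % n
while True:
    x = secrets.randbelow(n - 2) + 2   # random blinding factor
    if math.gcd(x, n) == 1:
        break
z = (pow(x, e, n) * m) % n             # blinded value sent to the server

# Server side: sign the blinded value without learning m (or P).
s = pow(z, d, n)                       # equals x * m^d mod n

# Client side: unblind to obtain the amplified key K_B = m^d mod n.
K_B = (s * pow(x, -1, n)) % n
```

Because x^e is removed by exponentiation with d and then divided out, the result K_B is an ordinary RSA signature on the hashed password, yet the server only ever saw the random-looking value z.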
In the Jun. 14, 2000 paper it is proposed that SSL be used to authenticate the channel to ensure the authenticity of KB for the client. The paper also notes an advantage of Modified SPEKE over RSA blinding, in that the RSA blinding client has no good (i.e. efficient) way to validate the values of n and e, other than by pre-authenticating the server.
Ford and Kaliski further present a “special case” protocol, where a “password hardening server” is used to amplify a password into a key K1 that is used to authenticate to a (conventional) credentials server. The value of K1 is completely determined by the password hardening server's response and the password. Any attacker who controls the first exchange must be unable to determine whether the K1 value computed by the user is equal to candidate K1′ values that he constructs based on his guesses for P′. So the attacker must never see K1. Similarly, if a message encrypted with K1 contained verifiable plaintext, the attacker could validate guesses from that as well. To counter this threat, the authors suggest that the credentials server communication channel be integrity protected. The Jun. 14, 2000 paper does not specify an explicit method, but it suggests the use of SSL. (Note that their September, 2000 paper does discuss how to avoid the need for a pre-secured channel.) If it is presumed that a common set of root certificates in the client can validate both servers, one or more single points of failure are introduced into the system. There is a single point of failure at each location where a private key resides for the root or any subordinate certificate authority. This is significant, as a primary goal of this model is to eliminate single points of failure.
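The verifiable-plaintext threat described above can be illustrated with a toy sketch. The derivation function, the plaintext tag, and the XOR “encryption” below are hypothetical stand-ins chosen for brevity, not the construction of the paper; they show only why any recognizable structure under K1 lets an attacker who controls the first exchange validate password guesses offline.

```python
import hashlib

def derive_k1(server_response: bytes, password: bytes) -> bytes:
    # Stand-in for the hardening step: K1 is completely determined by
    # the server's response and the password (assumption for illustration).
    return hashlib.sha256(server_response + password).digest()

server_response = b"attacker-controlled response"
k1 = derive_k1(server_response, b"letmein")

# A message "encrypted" under K1 whose plaintext carries a known tag.
tag = b"LOGIN-REQUEST-v1".ljust(32, b"\0")
ciphertext = bytes(a ^ b for a, b in zip(k1, tag))

# Offline dictionary attack: re-derive K1' for each guess P' and check
# whether the recovered plaintext carries the recognizable tag.
cracked = None
for guess in [b"password", b"123456", b"letmein"]:
    k1g = derive_k1(server_response, guess)
    plain = bytes(a ^ b for a, b in zip(k1g, ciphertext))
    if plain.startswith(b"LOGIN-REQUEST"):
        cracked = guess
        break
```

This is exactly why the authors require that the attacker never see K1, nor any K1-encrypted message containing verifiable plaintext.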
The aforementioned attack can be achieved by compromising any single system that has a key to create a valid-looking certificate chain for the two servers in question. Furthermore, as described above, an attack on the SSL/web browser model of security can trick the user into using “valid” SSL connections to malicious servers, or into not using SSL at all.
Dependency on prior server authentication is a serious limitation. In a consumer environment, such as a public kiosk, or a private desktop that can connect to a wide variety of servers in different domains, the user must locate and connect to the appropriate server. The dependency on users to securely connect to the appropriate server, to maintain security of the password, introduces problems. Ford and Kaliski discuss how the client can use SSL to create a secure server-authenticated channel. This implies that the client must (somehow) validate the identity of the server to maintain password security. It also implies, in typical implementations, that the client stores server-specific state information to authenticate the communication channel to the server. To establish a secure channel to the server, an SSL solution typically requires a root key for one or more certificate authorities to be pre-installed in the client. It also requires access to a chain of certificates that associates the root key with the public key of the named server. The client must include certificate validation software and policy enforcement to validate a certificate of the appropriate server (or servers) selected by the user. Ultimately, the user must ensure that the server name binding is correct. This may require significant action and attention by the user. A reliance on SSL, especially as used in the browser model, is unnecessarily risky. The user can be tricked into using “valid” SSL connections to malicious servers, or tricked into not using SSL at all. This process is subject to several types of failure that might be attributed to “human error”. The problem is believed to be an overall system design error based on unrealistic expectations of the human participant.
The dependency on a prior server-authenticated channel for password security is unnecessary. Preferred alternative methods of the present invention work in a simpler model that is better adapted to normal human behavior in an environment of multiple roaming users and multiple servers.
The preferred security model is similar to that in the Jun. 14, 2000 paper, in that both use multiple servers. However, in the preferred model of the present invention, it is not presumed that a pre-authenticated secure channel exists between the client and servers. The preferred model frees users from having to carefully validate the identity of the server in order to maintain security of the password. In this model the user is free to locate the server, casually, through any insecure mechanism, such as those commonly used on the Internet. These methods include manually typed (or mistyped) URLs, insecure DNS protocols, untrustworthy search engines, and “clicking” on items in collections gathered by an unknown source. All of these methods and more provide a robust multitude of ways to establish a connection to the right server, but none of them needs to guarantee against the chance of connecting to an imposter. The worst threat posed in this simple model is one of denial of service—which is a threat that is always present anyway in the more complex model. The added benefit of the preferred model is that the password is never exposed, even when the client is connected to an imposter, by whatever means.
Most of the prior art handles erroneous password entry very simply. In the Jun. 14, 2000 paper, it is suggested that each server can keep track of the number of “password hardening” steps, and reconcile this with the number of successful authentications. If there are “significantly more” hardenings than authentications, then the account would be locked out. This is typical for password-based systems. However, it is noted that unsuccessful logins may be quite common. Passwords are frequently mistyped, and users may often enter the wrong choice of multiple passwords before finally getting it right. If a long-term fixed limit is placed on such mistakes, valid but clumsy users might be locked out. On the other hand, if the system tolerates a three-to-one attempt-to-success ratio, an occasional guessing attack by an enemy over the long term might remain undetected.
To address this error handling problem, the concept of “forgiveness” is introduced in the present invention. A system should be forgiving, and not account for transient mistakes by a valid user in the same way as invalid access attempts by unknown users. This problem is addressed in preferred embodiments of the present invention.
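The distinction between forgiving and unforgiving accounting can be illustrated with a toy counter. The reset-on-success policy and the limit value below are hypothetical illustrations of the concept, not the specific mechanism of the preferred embodiments.

```python
class ForgivingLockout:
    """Toy sketch of 'forgiveness': amplification requests that are later
    resolved by a successful authentication are treated as corrected
    mistakes and forgiven; only unresolved requests count toward lockout."""

    def __init__(self, limit=10):
        self.limit = limit
        self.unresolved = 0   # amplifications not yet matched by a login

    def record_amplification(self):
        self.unresolved += 1
        return self.unresolved <= self.limit   # False => lock the account

    def record_success(self):
        # The user eventually got it right: forgive the pending attempts.
        self.unresolved = 0

acct = ForgivingLockout(limit=10)
acct.record_amplification()   # mistyped password
acct.record_amplification()   # second try, then a successful login
acct.record_success()
```

Under this accounting, a clumsy but valid user makes no long-term progress toward lockout, while an enemy's guesses, which are never followed by a successful authentication, accumulate until the limit is reached.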
It is therefore an objective of the present invention to reduce the threat of server-based attack as much as possible in a simple security model in a password-based client/server authentication system. It is also an objective of the present invention to provide for remote user authentication using multiple servers. It is also an objective of the present invention to authenticate from a client to a server using just a small password. It is also an objective of the present invention to provide for authentication that does not require stored keys or certificates on a client machine. It is also an objective of the present invention to use multiple servers for fault-tolerance. It is also an objective of the present invention to provide for remote password authentication that is secure even with total active compromise of any individual server. It is also an objective of the present invention to better handle common human errors in password entry.