1. Technical Field
Disclosed embodiments relate to the field of computer processing and communications. In particular, systems and methods are disclosed for authenticating an electronic communications partner.
2. Description of the Related Art
Communication among electronic devices is widespread and can take many forms. In some cases a client computer communicates with a server computer to enter into a transaction. The transactions may be sensitive in nature and may involve accessing a password protected account on the server. For example, a user may use an electronic device to connect to a server in order to access a bank account and conduct online banking transactions. In other cases, peer devices may communicate with each other to share files, chat, or conduct voice over IP (VoIP) telephone calls.
In electronic communication, a danger exists of a third party impersonating one of the communicating parties. If a third party is able to successfully impersonate one of the communicating parties, then the third party may be able to access private information, such as bank account passwords, credit card information, or any other private information that is electronically communicated.
FIG. 1 illustrates a system 100, in which a third party is able to access private information in electronic communication. System 100 includes sender 102, intended receiver 104, and impersonating receiver 106. Sender 102, intended receiver 104, and impersonating receiver 106 are computing devices that are electrically or optically connected to each other, for example, by a computer network.
Sender 102 may be a client computer attempting to log in to intended receiver 104, which may be, for example, a bank server that can perform bank transactions. As such, sender 102 sends a communication 108 to intended receiver 104. In the absence of impersonating receiver 106, intended receiver 104 would receive intended communication 110. Intended communication 110 is shown as a dotted line in FIG. 1 because it may never reach intended receiver 104, being intercepted instead by impersonating receiver 106.
Impersonating receiver 106 receives intercepted communication 112 from sender 102. Impersonating receiver 106 establishes a bidirectional communication link 114 with sender 102 by pretending to be intended receiver 104. Intended receiver 104 may not know that sender 102 attempted to communicate with it.
For example, if a user of sender 102 were logging into her bank account, she may direct her browser to the web address of her bank, which should enable her to access intended receiver 104. Impersonating receiver 106 may intercept that communication and respond with a webpage that looks similar to the webpage that intended receiver 104 would normally provide. The user at sender 102 may then provide her user name and password to impersonating receiver 106, mistakenly thinking that she is providing this information to intended receiver 104. Impersonating receiver 106 may then capture the user name and password, and would then have full access to the user's bank account.
One solution that has been proposed is for the user and intended receiver 104 to agree on an authenticating symbol at registration. This way, when the user accesses intended receiver 104, intended receiver 104 sends back the agreed-upon authenticating symbol. By contrast, if impersonating receiver 106 intercepted the communication and sent a webpage to sender 102, the webpage would not include the authenticating symbol, because impersonating receiver 106 would have no knowledge of it. If the webpage received at sender 102 does not include the authenticating symbol, the user knows that her communication partner cannot be trusted, and she can refrain from providing her sensitive information. In this way, the user can verify that she is communicating with intended receiver 104 and not impersonating receiver 106.
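The agreed-symbol flow described above can be sketched as follows. This is a minimal illustration and not part of any disclosed embodiment; all names (`SymbolServer`, `register`, `greet`, `client_login`, the symbol string) are hypothetical.

```python
# Minimal sketch of the agreed-symbol scheme described above.
# All names are hypothetical illustrations, not part of any
# disclosed embodiment.

class SymbolServer:
    """Plays the role of intended receiver 104."""

    def __init__(self):
        self._accounts = {}  # username -> (symbol, password)

    def register(self, username, symbol, password):
        # At registration, the user and the server agree on a symbol.
        self._accounts[username] = (symbol, password)

    def greet(self, username):
        # Before asking for the password, the server identifies
        # itself by echoing the agreed-upon symbol.
        symbol, _ = self._accounts[username]
        return symbol


def client_login(server, username, expected_symbol, password):
    # Sender 102: only release the password if the greeting matches.
    if server.greet(username) != expected_symbol:
        return "abort: partner cannot be trusted"
    return "password sent"


server = SymbolServer()
server.register("alice", symbol="blue-anchor", password="s3cret")
print(client_login(server, "alice", "blue-anchor", "s3cret"))  # password sent
```

An impersonating server that does not know the symbol "blue-anchor" would fail the greeting check, and the client would refrain from sending the password.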
There are at least two standard solutions from the cryptography literature. The first is based on first setting up a public-key infrastructure (PKI) and then using certificates issued by a Certification Authority (CA). For instance, when a client visits a website with a computer, the client is often assured that the website she is visiting is authentic (as opposed to being a counterfeit copy from an impostor) by the fact that the client's browser verified the website's certificate, issued by a trusted CA (e.g., Verisign).
Such techniques are considered very secure but are also well known to rate poorly in terms of usability: they are hard to deploy (not all networks can afford to set up a PKI), hard to maintain (if not periodically managed, the above verification will not work), and the verifications are often ignored by users, who visit the website even after being notified that the verification was not successful (e.g., because the website's certificate expired).
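The certificate check a browser performs can be illustrated, in highly simplified form, with a toy CA. In real deployments the CA applies an asymmetric digital signature (e.g., RSA or ECDSA over an X.509 certificate); the HMAC below merely stands in for such a signature so that the sketch stays self-contained, and every name and key in it is a hypothetical illustration.

```python
import hashlib
import hmac

# Stand-in for the CA's signing key. In a real PKI the CA holds a
# private key, and browsers ship with the corresponding public key.
CA_KEY = b"trusted-ca-key"


def ca_issue(hostname: str) -> dict:
    # The CA binds a hostname to a signature; relying parties can
    # later verify that binding.
    sig = hmac.new(CA_KEY, hostname.encode(), hashlib.sha256).hexdigest()
    return {"hostname": hostname, "signature": sig}


def browser_verifies(visited_host: str, cert: dict) -> bool:
    # Verification fails if the certificate was not issued by the
    # trusted CA or was issued for a different hostname.
    expected = hmac.new(CA_KEY, cert["hostname"].encode(),
                        hashlib.sha256).hexdigest()
    return (cert["hostname"] == visited_host
            and hmac.compare_digest(cert["signature"], expected))


cert = ca_issue("bank.example")
print(browser_verifies("bank.example", cert))   # True
print(browser_verifies("bank.example",
                       {"hostname": "bank.example",
                        "signature": "forged"}))  # False
```

An impostor who cannot produce a valid signature for the bank's hostname fails this check, which is what the browser warning described above reports to the user.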
Browser phishing filters detect whether a web site being visited has features similar to known “phishing” web sites, meaning web sites that are put up by an impostor rather than by the entity claimed in the web site. Such methods perform relatively well in terms of usability, as not much is required of a user to maintain such filters, but they are well known to rate poorly in terms of security, as skilled impostors understand how to overcome such filters. A well-known example is the eBay toolbar using the Account Guard method.
Recent techniques making significant progress toward solving the problem include Bank of America's SiteKey system and variants of it, which work as follows: the user provides the server with a shared secret, such as an image or passphrase, in addition to her regular password. The server shows this shared secret to the user, who is asked to recognize it before providing the server with her password. The biggest weakness of this scheme is that the server must display the shared secret in order to authenticate itself to the user. If the secret is observed or captured, the image can be replayed by an impostor, which would then be able to fool the user. Still, such schemes are today used by essentially anyone having online access to her bank account. Other shortcomings of these schemes are discussed in the paper “Phish and HIPs: Human Interactive Proofs to Detect Phishing Attacks,” by Dhamija et al.
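The replay weakness noted above can be made concrete with a short sketch. The flow and all names in it (`SiteKeyServer`, `show_shared_secret`, the secret string) are hypothetical simplifications of the scheme, not the actual SiteKey implementation: once an impostor has observed the shared secret, it can present exactly the same bytes the genuine server would.

```python
# Hypothetical sketch of a SiteKey-style flow and its replay weakness.

class SiteKeyServer:
    def __init__(self):
        self._secrets = {}  # username -> (shared_secret, password)

    def enroll(self, username, shared_secret, password):
        self._secrets[username] = (shared_secret, password)

    def show_shared_secret(self, username):
        # Step 1 of login: the server authenticates itself to the user
        # by displaying the shared secret before asking for the password.
        return self._secrets[username][0]


genuine = SiteKeyServer()
genuine.enroll("alice", shared_secret="photo-of-a-lighthouse", password="pw")

# An impostor who once observed the secret (e.g., over the user's
# shoulder, or by capturing one session) simply replays it:
captured = genuine.show_shared_secret("alice")


class ImpostorServer:
    def show_shared_secret(self, username):
        return captured  # the replayed secret is byte-for-byte identical


# From the user's point of view the two responses cannot be told apart,
# so the visual check no longer distinguishes the servers.
print(genuine.show_shared_secret("alice") ==
      ImpostorServer().show_shared_secret("alice"))  # True
```

This is why displaying the secret is the scheme's weak point: the very act of authenticating the server exposes the material an impostor needs to impersonate it.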
One drawback of the Bank of America solution is the possibility of impersonating receiver 106 learning the authenticating symbol. This could happen at sender 102, if someone sees the authenticating symbol on a display screen of sender 102, which is known as a “spying attack” or a “shoulder attack.” Alternatively, impersonating receiver 106 may monitor communication between sender 102 and intended receiver 104 over time to determine the authenticating symbol.