The recent massive growth in the availability and use of software applications, commonly referred to as “apps”, particularly for smartphones, tablets, and other portable, wireless devices, has provided an opportunity for criminals and others with malicious intent. Apparently trustworthy apps may be used to install malware onto user devices. In order to protect against such threats, operating system vendors such as Google™, Apple® and Microsoft® generally implement a certificate-based authentication mechanism in their operating systems. In order for its apps to work with a given operating system, an app developer must first obtain a code signing certificate from the operating system vendor. Such a certificate may be, for example, a private key of a public-private key pair according to the RSA public key cipher algorithm.
According to one such approach, the code forming the app, or possibly an abbreviated hash of that code, is encrypted by the app developer using its private key to generate a signature. The signature may also be taken over identity data such as the developer's name. The plain text code, the signature, and optionally the developer's public key and claimed identity, are then distributed to end user devices. The operating system on the device typically comprises an app installer which uses the signature to authenticate the code. If the developer's public key is not provided by the developer, the app installer first obtains the developer's public key from the operating system vendor (or its agent) using the developer's claimed identity. If the public key is provided by the developer, the authenticity of the public key may be confirmed by performing a check with the operating system vendor. The app installer generates a hash of the code if required. It decrypts the received signature using the developer's public key, and compares the result with the code or hash. If the two match, then the app installer assumes that the app is validly provided by the approved developer and continues with the installation process. If not, then installation is stopped and, if required, an alert is generated.
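The sign-then-verify flow described above can be sketched with textbook RSA. This is a toy illustration only (the key parameters below are the standard small textbook values, not real key material, and a real implementation would use a padded 2048-bit-plus key via a cryptographic library):

```python
import hashlib

# Toy RSA parameters for illustration only (p=61, q=53).
# Real keys are 2048+ bits and use proper signature padding.
n = 61 * 53          # modulus, 3233
e = 17               # public exponent:  public key is (e, n)
d = 2753             # private exponent: private key is (d, n)

def sign(code: bytes) -> int:
    # Hash the app code, then "encrypt" the hash with the private key.
    h = int.from_bytes(hashlib.sha256(code).digest(), "big") % n
    return pow(h, d, n)

def verify(code: bytes, signature: int) -> bool:
    # "Decrypt" the signature with the public key and compare
    # the result against a freshly computed hash of the code.
    h = int.from_bytes(hashlib.sha256(code).digest(), "big") % n
    return pow(signature, e, n) == h

app_code = b"print('hello, world')"
sig = sign(app_code)
print(verify(app_code, sig))          # True: installation proceeds
print(verify(b"tampered code", sig))  # False: installation is stopped
```

The verifier never needs the private key: possession of the public key (obtained from the developer or the operating system vendor) is sufficient to check that the signature matches the code.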
Additional authentication mechanisms may be employed to increase security. For example, Apple® applies a further signature to apps available on its app store. Apple only applies this signature after it has itself verified the identity of the app developer and confirmed the proper operation of the app. However, such an approach is not appropriate for all operating system vendors, particularly where a more flexible app distribution approach is desirable.
A further use of certificates is to minimise the load on client device antivirus (AV) scanners. Scanning a particular app using signature and heuristic scanning engines to detect malware can be relatively computationally intensive. An AV scanner may therefore maintain a database of trusted sources and their respective public keys (or links to those keys). When an app to be scanned is identified as being accompanied by a certificate, before commencing an intensive scan, the AV engine will first check if the certificate is associated with a trusted source. If so, then a detailed scan may not be considered necessary. Of course, the use of certificates to minimise AV scanning requirements may be applicable not just to apps, but to files in general. Such a “whitelisting” approach to AV scanning is extremely useful in order to reduce computational load and improve device performance.
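The whitelisting check described above can be sketched as follows. All names and keys here are hypothetical placeholders; an actual AV engine would extract and validate the certificate from the file's signature rather than receive a raw key:

```python
import hashlib
from typing import Optional

def key_fingerprint(public_key_bytes: bytes) -> str:
    # Identify a public key by the SHA-256 digest of its encoded form.
    return hashlib.sha256(public_key_bytes).hexdigest()

# Hypothetical database of trusted sources, keyed by fingerprint.
trusted_keys = {
    key_fingerprint(b"dev-A-public-key"),
    key_fingerprint(b"dev-B-public-key"),
}

def needs_full_scan(app_public_key: Optional[bytes]) -> bool:
    # Run the expensive signature/heuristic scan unless the app's
    # certificate maps to a known trusted source.
    if app_public_key is None:
        return True
    return key_fingerprint(app_public_key) not in trusted_keys

print(needs_full_scan(b"dev-A-public-key"))  # False: whitelisted, skip scan
print(needs_full_scan(b"unknown-key"))       # True: full scan required
```

The performance benefit comes from the fact that the fingerprint lookup is a constant-time set membership test, whereas the scan it replaces may examine every byte of the file against many signatures and heuristics.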
Important uses of certificates are in cyber security end point protection, automated forensics, and cyber defence products. Modern cyber defence relies heavily on application developer identification, which means that anyone who can steal a certificate is able to bypass far more than an antivirus scanner. For example, nuclear power plants and other critical systems very often identify every binary using some form of whitelist, and code signing certificates are one of the most important whitelists in use today. Efficient detection of malware in such critical systems is therefore highly dependent on the authenticity of the code signing certificates.
In the absence of an additional security mechanism such as that employed by Apple®, the conventional approaches (to both AV scanning and app installation) are only secure if app developers prevent their code signing certificates from falling into the hands of attackers. Furthermore, the conventional approaches cannot by themselves protect against rogue developers who validly obtain a code signing certificate.
Further, in order to gain speed, security vendors are required to trust thousands of code signing certificates. This means that it is trivial for an attacker to compromise one developer among thousands and steal that developer's certificate. One solution to this problem is to send metadata about signed files to an upstream server and then compare whether the metadata matches that registered by the developer. However, this solution does not scale to tens of millions of users, because local certificate databases are intended to avoid the need for such queries.
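The upstream metadata comparison could be sketched as below. The registry structure, developer names, and the choice of a file hash as the metadata are all illustrative assumptions, not a prescribed protocol:

```python
import hashlib

# Hypothetical server-side registry: for each developer, the metadata
# (here simply SHA-256 file hashes) of files that developer has
# actually signed and registered.
registered_metadata = {
    "dev-A": {hashlib.sha256(b"genuine app").hexdigest()},
}

def metadata_matches(developer: str, file_bytes: bytes) -> bool:
    # Upstream check: does a file seen signed under this developer's
    # certificate match anything the developer actually registered?
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in registered_metadata.get(developer, set())

print(metadata_matches("dev-A", b"genuine app"))       # True
print(metadata_matches("dev-A", b"stolen-cert file"))  # False
```

A file signed with a stolen certificate carries a valid signature but does not match the developer's registered metadata, which is what the check detects; the scaling problem noted above is that every such check requires a round trip that local certificate databases were designed to eliminate.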