Review of “Using Frankencerts for Automated Adversarial Testing of Certificate Validation in SSL/TLS” by Brubaker et al.

SSL/TLS is the most widely deployed security protocol and arguably the most important protocol on the Internet today. It is commonly used to protect HTTP traffic from network attacks such as man-in-the-middle attacks: HTTP is layered on top of SSL/TLS, which provides end-to-end confidentiality, data integrity and client/server authentication.

In “Using Frankencerts for Automated Adversarial Testing of Certificate Validation in SSL/TLS”, Brubaker et al. describe a novel approach for finding vulnerabilities in various open-source SSL/TLS implementations. They focus specifically on certificate validation during the handshake protocol, which is the only place where server authentication is performed. This process relies entirely on verifying the public-key certificate presented by the server to the client: correct validation of the certificate establishes the server’s identity and is what authenticates the server.
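To make the role of certificate validation in the handshake concrete, here is a minimal sketch, not taken from the paper, of a TLS client in Python whose security rests entirely on that validation step; the hostname “example.com” is just a placeholder.

```python
# Illustrative sketch: a TLS client using Python's standard ssl module.
# The handshake succeeds only if the server's certificate chain validates.
import socket
import ssl

context = ssl.create_default_context()  # enables certificate and hostname checks

with socket.create_connection(("example.com", 443)) as sock:
    # wrap_socket performs the TLS handshake; it raises
    # ssl.SSLCertVerificationError if certificate validation fails.
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.getpeercert()["subject"])
```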

The researchers point out that they had to tackle two challenges to perform large-scale testing of SSL/TLS implementations and find previously undiscovered vulnerabilities. The first challenge was generating test certificates at scale: certificates that were unique in their combination of features, extensions and constraints, and that were syntactically valid and parsable as well-formed certificates, yet did not necessarily conform to X.509 semantics. Generating such certificates manually would have been possible, but far too slow to be feasible.

Attempts to randomly generate valid certificates using techniques such as random fuzzing would have been futile, as they would be unlikely to yield enough valid certificates. This is because X.509 certificates are encoded as a complex data structure with intricate syntactic and semantic constraints.
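A small sketch illustrates the point (an assumption on my part, not an experiment from the paper, and it requires the pyca/cryptography package): random byte strings essentially never parse as well-formed DER-encoded X.509 certificates, so byte-level fuzzing alone yields almost no usable inputs.

```python
# Illustrative sketch: feed random byte blobs to an X.509 parser and count
# how many are even syntactically valid. Expect the count to stay at zero.
import os
from cryptography import x509

parsed = 0
for _ in range(10_000):
    blob = os.urandom(1024)  # random candidate "certificate"
    try:
        x509.load_der_x509_certificate(blob)
        parsed += 1
    except ValueError:
        pass  # not well-formed DER/X.509

print(f"{parsed} of 10000 random blobs parsed as certificates")
```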

The second challenge was interpreting the test results correctly. Simply recording whether a certificate was accepted or rejected by a given SSL/TLS implementation would say nothing about whether that implementation got the validation right. The certificate validation logic could be re-implemented from scratch as a reference to compare against, but this would be impractical, and the re-implementation would most likely contain bugs of its own.

The ability to quickly identify discrepancies between the results of different SSL/TLS implementations would help in understanding how each implementation handles certificates. Interpreting the test results therefore requires some form of oracle for certificate validation.

Their novel approach to finding vulnerabilities in SSL/TLS implementations tackled both of these challenges. First, they solved the problem of generating enough input data by building an innovative certificate generator, the “adversarial input generator”, which produced certificates they called “Frankencerts”.

In principle, this worked by breaking real certificates down into their constituent parts and then using an algorithm to randomly assemble different combinations of those parts into unique, well-formed certificates. They also injected synthetic parts and combined them with the real parts to create certificates that were syntactically valid but did not conform to X.509 semantics.
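The sketch below is only an illustration of that assembly idea, not the authors’ generator: certificate “parts” are modelled as plain Python values, the pools are hypothetical, and the output is a dictionary rather than a real DER-encoded certificate.

```python
# Illustrative sketch: frankencert-style assembly from pools of parts.
# Each field is drawn at random from values harvested from real certificates
# plus a few synthetic ones, so combinations may violate X.509 semantics.
import random

# Hypothetical pools, as if extracted from scanned certificates.
part_pools = {
    "version": [1, 2, 3],
    "validity": [("2010-01-01", "2012-01-01"), ("2030-01-01", "2035-01-01")],
    "basic_constraints": [{"ca": True, "path_len": 0}, {"ca": False}, None],
    "key_usage": [["digitalSignature"], ["keyCertSign"], ["codeSigning"]],
    "extensions": [["subjectAltName"], ["nameConstraints"], []],
}

def make_frankencert(rng: random.Random) -> dict:
    """Assemble one synthetic certificate from randomly chosen parts."""
    return {field: rng.choice(pool) for field, pool in part_pools.items()}

rng = random.Random(0)
for cert in (make_frankencert(rng) for _ in range(5)):
    print(cert)
```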

The parts were extracted from a total of 243,246 real certificates gathered from the Internet and combined with synthetic parts to create 8,127,600 Frankencerts.

The set of certificates produced included at least one permutation of every extension and value found in the X.509 standard, including rare values hardly ever seen in real certificates. It also contained certificates made up of abnormal combinations of extensions, extension values and key usage constraints that did not conform to X.509 semantics but were still parsable as well-formed certificates.

Using certificates that did not conform to X.509 allowed the researchers to observe how the various SSL/TLS implementations behave when processing certificates that do not adhere to strict X.509 semantics.

The second challenge was overcome by using the Frankencerts to perform differential testing of SSL/TLS implementations, with the collective results acting as the oracle for detecting flaws. The verdicts produced by the implementations were compared against each other, and any discrepancy indicated a flaw, because correctly implemented validators should all reach the same conclusion for a given certificate. For example, if an improper certificate caused a discrepancy, the implementations that accepted it could be identified as containing a vulnerability of some kind.
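A minimal sketch of this differential-testing oracle follows; the validators here are hypothetical stand-ins for the real libraries, and the function names are my own.

```python
# Illustrative sketch: feed the same certificate to every implementation and
# flag any disagreement for manual inspection.
from typing import Callable, Dict, Optional

# Each validator would, in practice, drive one library (OpenSSL, GnuTLS, NSS, ...)
# and return True if the certificate is accepted.
Validators = Dict[str, Callable[[bytes], bool]]

def find_discrepancy(cert: bytes, validators: Validators) -> Optional[dict]:
    verdicts = {name: check(cert) for name, check in validators.items()}
    # Unanimous verdicts tell us nothing; a split vote means at least one
    # implementation is validating this certificate incorrectly.
    if len(set(verdicts.values())) > 1:
        return verdicts
    return None

# Example usage with dummy validators that disagree on purpose.
dummy = {
    "impl_a": lambda cert: True,
    "impl_b": lambda cert: False,
}
print(find_discrepancy(b"frankencert-bytes", dummy))
```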

The researchers tested the OpenSSL, PolarSSL, GnuTLS, CyaSSL, MatrixSSL, NSS and OpenJDK SSL/TLS implementations. Because these are open source, they could inspect the source code to identify the root causes of the vulnerabilities found. They also extended their testing to browsers such as Firefox, Chrome, Internet Explorer, Safari and Opera.

Of the 8,127,600 certificates generated for testing, 62,022 Frankencerts triggered 208 distinct discrepancies between SSL/TLS implementations. These discrepancies were attributed to 15 distinct root causes, which can be grouped under 8 fundamental problems:

1. Incorrect checking of basic constraints, e.g. accepting untrusted version 1 and 2 certificates or ignoring path length constraints (see the sketch after this list).

2. Incorrect checking of name constraints, e.g. accepting certificates from CAs not authorised to issue certificates for the server’s hostname.

3. Incorrect checking of time, e.g. ignoring the notBefore timestamp field and accepting certificates that are not yet valid.

4. Incorrect checking of key usage, e.g. allowing keys to be used for actions other than their intended purpose, such as using a key generated for code signing to authenticate a server.

5. Other discrepancies in extension checks, e.g. arbitrary differences in how extensions are checked, or accepting unknown critical extensions.

6. No checking of certificate chains, e.g. incomplete code for verifying the chain of trust.

7. Security problems in error reporting, e.g. a high-risk warning can be hidden behind a low-risk warning.

8. Other checks, e.g. CRLs not verified by default, or short keys and weak cryptographic hash functions being accepted.
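To make two of these problems concrete, here is a minimal sketch, not any library’s actual code, of what correct checking of the validity period (cause 3) and of basic constraints with path lengths (cause 1) looks like; the certificate field names are illustrative.

```python
# Illustrative sketch of two certificate-chain checks.
from datetime import datetime, timezone

def check_validity_period(not_before: datetime, not_after: datetime) -> bool:
    """Reject certificates that are not yet valid or already expired."""
    now = datetime.now(timezone.utc)
    return not_before <= now <= not_after

def check_basic_constraints(chain: list) -> bool:
    """Every issuing certificate must be a CA, and path-length limits must hold."""
    # chain[0] is the leaf; chain[1:] are issuing certificates, leaf-to-root.
    for index, cert in enumerate(chain[1:]):
        if not cert.get("ca", False):
            return False  # a non-CA certificate may not issue other certificates
        path_len = cert.get("path_len")
        # path_len limits how many intermediate CAs may sit below this one.
        if path_len is not None and index > path_len:
            return False
    return True
```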

An adversary does not need to conform to normal operating boundaries, so unexpected or irregular scenarios have to be created to understand the resulting behaviour; without Frankencerts the researchers would not have been able to analyse this behaviour. That said, differential testing with Frankencerts is not a silver bullet for finding every vulnerability, since it suffers from false negatives: if all implementations make the same mistake, there is no discrepancy to observe. Many of the vulnerabilities found can be attributed to the fact that SSL/TLS is a large, complex protocol loosely defined across various RFCs, which makes it difficult to implement correctly.

Interestingly, the initial collection of certificates used for seeding the Frankencerts revealed some alarming statistics: only about 65% of the certificates were realistically valid, since 23.5% had already expired, 10% were version 1 certificates, and the remainder had non-existent version numbers or were not yet valid. More worrying still, several of the SSL/TLS implementations tested were certified to the FIPS 140-2 standard, which shows that just because an application is government-approved does not necessarily mean it is safe to use.
