The ISP Column
A column on things Internet


Measuring ECDSA in DNSSEC – A Final Report
August 2018


Geoff Huston

Back in 2014 I wrote on the use of the elliptic curve cryptographic algorithm in generating digital signatures for securing the DNS (DNSSEC). The conclusion at the time was hardly encouraging: “Will ECDSA ever be a useful tool for DNS and DNSSEC? As good as ECDSA is in presenting strong crypto in a smaller number of bits, it is still an alien algorithm for much of today’s Internet. So, sadly, I have absolutely no idea how to answer that question as to when it may become genuinely useful for use in DNSSEC.”
 
Four years later, let’s see if we can provide an updated answer to this question and hopefully put the matter to rest.

Public key cryptography relies on the construction of a particular class of whole number problems that are relatively simple to construct, but incredibly challenging to solve quickly. One of the better-known examples of this class of problems is that of discovering the prime number factors of certain very large composite numbers, and this particular problem is the foundation of the most common cryptographic algorithm in use in the Internet today.

RSA Cryptography

Eratosthenes of Cyrene was a Greek of some astounding learning, becoming the Chief Librarian at that fabled wonder of the ancient world, the library of Alexandria, in the third century BCE. Not only has he been credited as the first person to calculate the circumference of the earth, but for the purposes of this article his claim to posterity is based on his work in number theory, and in particular his invention of the Sieve of Eratosthenes as a means of identifying prime numbers. His enumeration method is complete and accurate, but for very large numbers the process is inordinately slow. And we haven’t really made it much faster in the intervening 2,200 years!
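To make the method concrete, here is a minimal Python sketch of the sieve (illustrative only; the limit of 30 is an arbitrary choice):

    # Sieve of Eratosthenes: enumerate the primes up to a limit by
    # repeatedly crossing off the multiples of each prime found.
    def sieve(limit):
        is_prime = [True] * (limit + 1)
        is_prime[0] = is_prime[1] = False
        for i in range(2, int(limit ** 0.5) + 1):
            if is_prime[i]:
                for multiple in range(i * i, limit + 1, i):
                    is_prime[multiple] = False
        return [i for i, prime in enumerate(is_prime) if prime]

    print(sieve(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]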

The related problem, that of computing the prime number factors of a very large integer formed by multiplying two prime numbers, can also take a long time because at its heart lurks the same basic enumeration algorithm that grows as the size of the number grows. It’s not an impossible problem, but even with the most powerful computers available to us today, it may take many years to perform this operation depending on the size of the number.
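The same enumeration idea lurks inside the brute-force factoring approach, trial division. A short Python sketch shows why it does not scale: the loop runs up to the square root of n, an astronomically large count for the 600-odd digit moduli used in real RSA keys.

    # Trial division: find prime factors by testing successive divisors.
    # The running time grows with the square root of n.
    def factor(n):
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(factor(3233))   # [53, 61] -- instant here, infeasible at scale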

For practical purposes we’ll ignore the disruptive quantum computing storm clouds that lurk on the horizon! That’s another topic for another day!

It is on this foundation that we have constructed much of the security infrastructure of today’s computers and the Internet itself. The widely used RSA (Rivest–Shamir–Adleman) algorithm uses this principle of the difficulty of prime number factorization as its cornerstone. The basic principle behind RSA is the observation that it is relatively efficient for a computer to find three very large positive integers e, d and n such that with modular exponentiation for all m:

(m^e)^d ≡ m (mod n)

Reversing this relationship is far harder: even when the values of e, n (and even m) are known, finding the value of d can be computationally far more expensive than generating these three numbers in the first place. In terms of order of magnitude, its computational expense is not dissimilar to the problem of prime number factorization. This leads to the ability for computer algorithms to generate a public key cipher that is relatively fast to generate and extremely difficult to break.

This asymmetry can be used to construct a complementary key pair, where a message encoded with one key can only be decoded with its complementary key.

Given a public key of the form (n, e), and a message m, the message can be encrypted to a ciphertext c using:

c ≡ m^e (mod n)

The complementary private key is (n, d) and it can be used to decode the message through computing:

c^d ≡ (m^e)^d ≡ m (mod n)

If the original message M has been padded to produce a value m that is less than n, then m (mod n) = m. The original message M is recovered by applying the reverse of the padding protocol to the value m.

At the heart of this relationship is the value of n, which is the product of two (large) prime numbers. e is often set to the value 2^16 + 1 = 65537.
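A toy walk-through in Python makes this relationship tangible. The primes here are absurdly small and chosen purely for illustration; real keys use primes that are hundreds of digits long:

    # Toy RSA, insecure by design: demonstrates (m^e)^d = m (mod n).
    from math import gcd

    p, q = 61, 53                  # two (very small!) primes
    n = p * q                      # the modulus, part of both keys
    phi = (p - 1) * (q - 1)        # Euler's totient of n
    e = 65537                      # the common public exponent 2^16 + 1
    assert gcd(e, phi) == 1        # e must be coprime to phi
    d = pow(e, -1, phi)            # private exponent (Python 3.8+)

    m = 42                         # a (padded) message, m < n
    c = pow(m, e, n)               # encrypt with the public key (n, e)
    assert pow(c, d, n) == m       # decrypt with the private key (n, d)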

“Computationally expensive” is a relative term that depends on when you make the call. Tasks that were computationally infeasible a couple of decades ago may well be commonplace using current computing capabilities. This means that as computing capabilities improve over time, the size of the numbers used in RSA encryption needs to increase. That holds right up to the point when quantum computing enters the fray.

“As of 2010, the largest factored RSA number was 768 bits long (232 decimal digits). Its factorization, by a state-of-the-art distributed implementation, took around fifteen hundred CPU years (two years of real time, on many hundreds of computers). No larger RSA key is publicly known to have been factored. In practice, RSA keys are typically 1024 to 4096 bits long. Some experts believe that 1024-bit keys may become breakable in the near future or may already be breakable by a sufficiently well-funded attacker (though this is disputed); few see any way that 4096-bit keys could be broken in the foreseeable future. Therefore, it is generally presumed that RSA is secure if n is sufficiently large. If n is 300 bits or shorter, it can be factored in a few hours on a personal computer, using software already freely available. Keys of 512 bits have been shown to be practically breakable in 1999 when RSA-155 was factored by using several hundred computers and are now factored in a few weeks using common hardware. Exploits using 512-bit code-signing certificates that may have been factored were reported in 2011. A theoretical hardware device named TWIRL and described by Shamir and Tromer in 2003 called into question the security of 1024 bit keys. It is currently recommended that n be at least 2048 bits long.”
 
https://en.wikipedia.org/wiki/RSA_(cryptosystem)

For secure systems that use RSA as their cryptographic algorithm, we’ve seen pressure to continuously increase the size of the keys. This is true across all applications of RSA-based cryptography, including the area of security in the DNS, DNSSEC.

Using RSA in DNSSEC

These days 2,048-bit RSA keys are considered to be a decent size for good security. But if a 2,048-bit key is good, wouldn’t a 3,072-bit key be even better? Why don't we all just use massive keys in RSA all of the time? One constraining factor is the cost to generate the keys. The larger the key size the higher the computational load to generate the keys and subsequently generate the encrypted payload. At some point increasing the key size increases the burden placed on the encryptor without substantially altering the security properties of the result. Also, in some areas of application there are very real size constraints placed on the key, and supporting arbitrarily long RSA keys is considered infeasible, irrespective of the cost of generating the keys in the first place. The DNS is one of those bounded areas where there are some practical constraints on key size in DNSSEC.
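As a rough illustration of that computational load, the following Python sketch times RSA key generation at increasing key sizes. It assumes a recent version of the third-party cryptography package is installed; absolute timings will of course vary with hardware:

    # Time RSA key pair generation at increasing key sizes.
    import time
    from cryptography.hazmat.primitives.asymmetric import rsa

    for bits in (1024, 2048, 3072, 4096):
        start = time.perf_counter()
        rsa.generate_private_key(public_exponent=65537, key_size=bits)
        elapsed = time.perf_counter() - start
        print(f"{bits}-bit key pair generated in {elapsed:.2f}s")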

The original DNS specification used a maximum payload of 512 octets for UDP messages (RFC 1035). DNS servers were required to truncate longer responses, and a DNS client receiving a truncated response was expected to requery using TCP.
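This truncate-and-retry behaviour can be observed directly. A minimal sketch, assuming the third-party dnspython package and using a public resolver address purely as an example:

    # Query over UDP; if the TC (truncated) bit is set in the response,
    # re-query over TCP, as RFC 1035 expects the client to do.
    import dns.flags
    import dns.message
    import dns.query

    query = dns.message.make_query('example.com', 'TXT')
    response = dns.query.udp(query, '8.8.8.8', timeout=5)
    if response.flags & dns.flags.TC:
        response = dns.query.tcp(query, '8.8.8.8', timeout=5)
    print(len(response.to_wire()), 'octets in the final response')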

A subsequent extension to the DNS protocol, EDNS(0) (RFC 6891), allowed a client to specify that it could handle a larger UDP response size than 512 octets, and in theory, at any rate, a client might assert that it could handle responses of up to 65,535 octets in size. But of course, theory and practice can diverge markedly, and while the IP and transport level protocols can be coerced into supporting arbitrarily large datagrams, the capabilities of the network are far more mundane. IPv4 fragments work reasonably well, but by no means universally. The story in IPv6 is far more dismal than that, and fragmented IPv6 datagrams appear to be discarded in many places within the IPv6 Internet. If you want to reliably receive large blocks of information across any network path, where “large” is any payload greater than 1,240 octets, over either IPv4 or IPv6, then UDP datagrams are definitely not the answer! If you are willing to tolerate some loss, then a payload size of 1,440 octets is the next boundary point, as larger payloads typically require IP level fragmentation, and at that point the packet loss rate starts to rise sharply.
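In Python with dnspython, advertising an EDNS(0) buffer size and observing the size of a DNSSEC-signed response looks something like the sketch below. The 1,232-octet figure used here is one common conservative choice, derived from the IPv6 minimum MTU less the IPv6 and UDP headers; the resolver address is again just an example:

    # Advertise an EDNS(0) UDP buffer size and fetch a signed response.
    import dns.flags
    import dns.message
    import dns.query

    query = dns.message.make_query('.', 'DNSKEY', want_dnssec=True)
    query.use_edns(edns=0, payload=1232)   # advertised UDP buffer size
    response = dns.query.udp(query, '8.8.8.8', timeout=5)
    print(len(response.to_wire()), 'octets;',
          'truncated' if response.flags & dns.flags.TC else 'complete')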

What can we do for the digital signatures used in DNSSEC? If we want to continue to use RSA as the cryptographic algorithm, then we need to deploy ever larger key sizes to ensure that the RSA keys cannot be readily cracked using current computational capabilities. Larger keys in DNSSEC imply larger DNS payloads. Larger DNS payloads inevitably pass across the 1,500-octet threshold, and IP fragmentation is invoked if we want to continue to use UDP as the transport protocol for DNS queries and responses. But IP packet fragmentation incurs a raised risk of packet loss along the network path. If we want DNSSEC to operate as efficiently as possible with larger RSA keys, then we need to avoid this potential packet loss issue, which implies that we need to look at the evolution of the DNS from datagram transactions to a model that uses a streamed transport protocol, such as TCP. However, this proposition is not without serious implications. We have grown used to the DNS operating in a lightweight datagram transaction model. There are some concerns that we might not be able to support the same performance profile for DNSSEC-signed responses, and operate within the same cost and service parameters, if we have to use TCP for all these DNS transactions.

We appear to be wedged between a rock and a hard place here. Today we need to use RSA with 2,048-bit keys in order to ensure an adequately robust cryptographic profile, but that has the consequence of increasing the size of certain DNSSEC responses, which, in turn, may have a much higher loss probability in the network when the increased response size causes the IP datagram to be fragmented.

Thankfully, that's not the only option we have. Another response is to try to reduce the size of the DNS response by changing the crypto algorithm, and it’s here that the Elliptic Curve Digital Signature Algorithm, ECDSA, has something to offer.

ECDSA Cryptography

ECDSA is a digital signature algorithm that is based on a form of cryptography termed Elliptic Curve Cryptography (ECC). This form of cryptography is based on the algebraic structure of elliptic curves over finite fields.

The security of ECC depends on the ability to compute an elliptic curve point multiplication and the inability to compute the multiplicand given the original and product points. This is phrased as a discrete logarithm problem: solving the equation b^k = g for an integer k when b and g are members of a finite group. Computing a solution for certain discrete logarithm problems is believed to be difficult, to the extent that no efficient general method for computing discrete logarithms on conventional computers is known (outside of potential approaches using quantum computing, of course). The size of the elliptic curve determines the difficulty of the problem.
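To make “point multiplication” less abstract, here is a toy Python sketch over a tiny curve (y^2 = x^3 + 2x + 2 over GF(17), a common textbook example, nothing like the 256-bit curves used in practice):

    # Toy elliptic curve arithmetic, illustrative only.
    p, a, b = 17, 2, 2        # curve y^2 = x^3 + 2x + 2 over GF(17)
    O = None                  # the point at infinity (group identity)

    def add(P, Q):            # elliptic curve point addition
        if P is None: return Q
        if Q is None: return P
        if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
            return O
        if P == Q:            # tangent slope for point doubling
            s = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
        else:                 # chord slope for distinct points
            s = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
        x = (s * s - P[0] - Q[0]) % p
        return (x, (s * (P[0] - x) - P[1]) % p)

    def mul(k, P):            # double-and-add scalar multiplication
        R = O
        while k:
            if k & 1:
                R = add(R, P)
            P = add(P, P)
            k >>= 1
        return R

    G = (5, 1)                # a generator point on this curve
    Q = mul(9, G)             # 9 is the "private key", Q the public point
    # Recovering 9 from (G, Q) is the discrete logarithm problem; brute
    # force works in this tiny group, but is infeasible for P-256.
    print(next(k for k in range(1, 20) if mul(k, G) == Q))   # 9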

The major attraction of ECDSA lies not in any claim of superior robustness of the algorithm as compared to RSA, but in the observation that Elliptic Curve Cryptography allows comparably difficult problems to be represented by considerably shorter key lengths. If the length of the keys being used is a problem, then maybe ECC is a possible solution.

“Current estimates are that ECDSA with curve P-256 has an approximate equivalent strength to RSA with 3072-bit keys. Using ECDSA with curve P-256 in DNSSEC has some advantages and disadvantages relative to using RSA with SHA-256 and with 3072-bit keys. ECDSA keys are much shorter than RSA keys; at this size, the difference is 256 versus 3072 bits. Similarly, ECDSA signatures are much shorter than RSA signatures. This is relevant because DNSSEC stores and transmits both keys and signatures.”
 
RFC 6605, “Elliptic Curve Digital Signature Algorithm (DSA) for DNSSEC”, P. Hoffman, W.C.A. Wijngaards, April 2012

We are probably justified in being concerned over ever-expanding key sizes in RSA, and the associated implications of the consequent forced use of UDP fragments for the DNS when packing those longer key values into DNSSEC-signed responses. As already noted, if UDP fragmentation in the DNS is unpalatable, then TCP for the DNS may not be much better, given that we have no clear idea of the scalability issues in replacing the stateless datagram transaction model of the DNS with one that maintains session state for each and every DNS query. The combination of these factors makes ECDSA, with its shorter key sizes, an attractive cryptographic algorithm for use in DNSSEC.

Using ECDSA in DNSSEC

If it’s so attractive, then why aren't we all using ECDSA already?

Well, there’s one very relevant question that should be answered before you all head off to use ECDSA to sign your DNS zones: is the ECDSA algorithm as widely supported by DNSSEC-validating resolvers as the RSA algorithms?

There are reasons why we should ask this question. Elliptic Curve Cryptography is not without its elements of controversy. As Wikipedia explains:
 
"In 2013, the New York Times stated that Dual Elliptic Curve Deterministic Random Bit Generation (or Dual_EC_DRBG) had been included as a NIST national standard due to the influence of NSA, which had included a deliberate weakness in the algorithm and the recommended elliptic curve. RSA Security in September 2013 issued an advisory recommending that its customers discontinue using any software based on Dual_EC_DRBG. In the wake of the exposure of Dual_EC_DRBG as "an NSA undercover operation", cryptography experts have also expressed concern over the security of the NIST recommended elliptic curves, suggesting a return to encryption based on non-elliptic-curve groups."
 
http://en.wikipedia.org/wiki/Elliptic_curve_cryptography

A similar perspective on Dual_EC_DRBG was the topic of an earlier 2007 essay by Bruce Schneier:

"If this story leaves you confused, join the club. I don't understand why the NSA was so insistent about including Dual_EC_DRBG in the standard. It makes no sense as a trap door: It's public, and rather obvious. It makes no sense from an engineering perspective: It's too slow for anyone to willingly use it. And it makes no sense from a backwards-compatibility perspective: Swapping one random-number generator for another is easy. My recommendation, if you're in need of a random-number generator, is not to use Dual_EC_DRBG under any circumstances. If you have to use something in SP 800-90, use CTR_DRBG or Hash_DRBG. In the meantime, both NIST and the NSA have some explaining to do."
 
https://www.schneier.com/essay-198.html

But let me hasten to note that Dual_EC_DRBG is not ECDSA P-256, and no such suspicions of exposure have been voiced for ECDSA P-256 or its related cousin ECDSA P-384 (so far!). However, there is still a lingering perception issue with certain variants of ECC random bit generation, even though that does not justify the belief that all Elliptic Curve Cryptography is similarly tainted with suspicion.

If we are able to discount such suspicions, and assert that ECDSA P-256 and ECDSA P-384 are robust in terms of cryptographic integrity, are there any other problems with the use of ECDSA?

ECDSA has a background of patents and IPR claims, particularly, but not entirely, associated with the entity Certicom, and for some time this IPR confusion was considered sufficient reason for many distributions of crypto libraries not to include ECDSA support (http://en.wikipedia.org/wiki/ECC_patents). OpenSSL, the most widely adopted open crypto library, added ECDSA from version 0.9.8 in 2005, but a number of software distributions took some further time to make the decision that it was appropriate to include ECDSA support (such as Red Hat Fedora, where the distribution’s inclusion of ECDSA support was apparently delayed until late 2013, probably due to these IPR concerns).

Taking all this into account, it’s not surprising that folk have been cautious with their approach to ECDSA, both to use it as a signing algorithm and to support its use in various crypto validation libraries.

Is that caution still lingering, or should we now be confident in using ECDSA on the Internet?

The ECDSA Measurement

The question posed here is: what proportion of the Internet’s end users use security-aware DNS resolvers that are capable of handling objects signed using the ECDSA algorithm, as compared to the level of support for the RSA algorithm?

At APNIC Labs, we’ve been continuously measuring the extent of deployment of DNSSEC for a couple of years now. The measurement is undertaken using an online advertising network to pass the user’s browser a very small set of tasks to perform in the background that are phrased as the retrieval of simple URLs of invisible web “blots”. The DNS names loaded up in each ad impression are unique, so that DNS caches do not mask out client DNS requests from the authoritative name servers, and the subsequent URL fetch (assuming that the DNS name resolution was successful) is also a uniquely named URL so it will be served from the associated named web server and not from some intermediate web proxy.

The DNSSEC test uses three URLs: