Default RSA key bitlength should be 3072 #2080
I would even suggest 4096 (I do it on almost all of my systems). Thanks.
I would also suggest 4096. But some months ago, they disagreed with making 4096 the default: #489. I prefer a compromise (RSA with a 3072-bit modulus) to no improvement at all by default.
I suggest 4096 too. And there is no compelling argument for 2048/3072 over 4096 bits (only a very small speed overhead). Worse, the argument that 90-day key regeneration justifies using 2048-bit keys is not a good one.
Another +1 for 4096, that's what I always use these days.
See #489 for a lot of discussion about this issue.
With LE's default settings (RSA 2048 bits, 90-day lifetime), just try to crack it. I suggest closing this issue.
The question is not "prove you can crack it" (otherwise stay at 1024 bits, nobody has cracked that so far) but "all recommendations / state of the art call for at least 3072 bits" (NSA, NIST, ANSSI, FIPS, Qualys, and more). And I repeat: even with 90-day key renewal, if no PFS is used, the practical validity of a key is several decades even though its technical validity is only 90 days. Moreover, 90-day key renewal breaks a lot of the rest of the TLS stack (TLSA, HPKP, cert pinning), so a sane sysadmin MUST disable it and reuse a key for at least 120 days, and in practice for a year.
However, among the Alexa top 1,000,000 websites, over 89% of SSL/TLS-enabled websites use RSA 2048 bits, including many online banking websites.
Yep, banks also still run SSLv3 and MD5 if you want: https://imirhil.fr/tls/#Banques%20en%20ligne. TLS configurations are VERY bad in the wild; that is not an argument for continuing to ship crap.
@devnsec-com So they are not secure. Even though IPv6 and DNSSEC are good technologies, they are not massively deployed yet. The same goes for RSA 4096 bits. We have to move forward instead of treading water.
The only reason to use RSA 3072 or 4096 (or even longer) is future-proofing, but with a certificate that lives only 90 days, do all users really need RSA longer than 2048 by default? If you are really paranoid, go for longer keys: there is a parameter and you can change it.
All users need 3072+ bit keys by default. People who really know what they are doing can reduce the size if they wish, but by default Let's Encrypt must provide a configuration compliant with the state of the art.
Remember, we are talking about an SSL/TLS certificate: you can have PFS support on your server, and the certificate can be revoked if necessary. We are not talking about an OpenPGP or SSH key that you might use for 20 years or even longer. And many years from now, when RSA 2048 is actually shown to be no longer secure, we can change LE's default to RSA 4096, and nearly all users will have moved to RSA 4096 within 90 days.
Since I've mentioned OpenPGP, I'd note that even OpenPGP defaults to RSA 2048.
You can. You can also have no PFS. And with a 2048-bit key you're not secure, even with 90-day renewal. If there is no PFS, key destruction is useless: your data is already out in the wild and can be decrypted within a few decades. And only the certificate can be revoked, not the key.
GPG has real drawbacks with 4096-bit keys (only a few smartcards support more than 2048 bits, mobile devices…). On TLS there are none. And with GPG, everybody currently generates 4096-bit keys anyway; all tutorials and recommendations ask for it.
You have just made my point: instead of arguing over defaulting to RSA 2048 or longer, we should be talking about enabling PFS by default. Because no matter how long the key is, it will be decrypted sooner or later; RSA 2048 will, and RSA 4096 will too.
Yep, but you have no way to ensure PFS with Let's Encrypt (and worse, no way to ensure PFS-only ciphers: you can negotiate a PFS cipher suite while the server still supports non-PFS or EXPORT suites…). And currently there are major drawbacks to offering only PFS cipher suites (browser compatibility, see https://tls.imirhil.fr/suite/EECDH+AES:EDH+AES+aRSA). In any case, this is not the job of a CA.
We are not talking about a CA here, but about client software, right?
No. TLS is very complicated, with many interacting layers. You have no way to guess whether your users have DNSSEC or TLSA, whether you must ensure compatibility with IE or with non-standard user agents, which version of OpenSSL is in use, etc.
No: when LE uses RSA 3072 or 4096 by default, someone will say they want RSA 8192 by default for all users. This is an endless arms race.
@devnsec-com No. NSA, NIST, ANSSI, FIPS, and Qualys don't recommend RSA 8192, but at least RSA 3072.
No: if somebody asks for 8192, you can reply that 4096 is currently safe and recommended everywhere.
I can also point out that Yubico, Cisco, and many other companies recommend RSA 2048 as we speak. The longer the key, the safer it is, but businesses are more pragmatic on this topic. Again, RSA 2048 is enough as a default at the current stage, as long as we support longer keys when needed.
A good resource for this conversation: keylength.com
@devnsec-com Yubico's and Cisco's positions are valid because they face hardware issues (HSMs, PKI appliances, and smartcards don't support 4096 bits very well). From Yubico: "This is not a constraint from Yubico, but rather a hardware limitation of the NXP A700x chip used within the YubiKeys." And just because all the other sheep are running toward the cliff doesn't mean we must follow them…
@devnsec-com I'll add that the Cisco documentation is about PKI and CA management, and in that field (CA certs, not end-user certs) the CA/Browser Forum recommends at least 2048 bits. There has also been internal debate about including 4096 bits, since 2014. But it seems hardware routers don't support root certificates longer than 2048 bits (Cisco).
@devnsec-com Another internal comment on the CA/Browser Forum.
Hello guys, it would be great to keep this thread going and hear back from LE staff.
+1 for RSA 3072-bit by default. As has been mentioned, RSA 2048-bit is only equivalent to 112 bits of symmetric-key protection. That falls below the widely held minimum of 128 bits used today. RSA 3072-bit is only 128-bit equivalent, so sufficient for right now (and the bare minimum recommended by the NSA as of January 2016). Note that all previous estimates of longevity for key sizes should be taken with healthy skepticism, because they generally assume a linear improvement in key-breaking, but new attacks can decrease security much more quickly.
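The equivalence figures quoted in this thread track estimates of the cost of the General Number Field Sieve. As a rough, illustrative sketch (the function name is mine; NIST's published table applies extra calibration, so expect these numbers to sit several bits above the official 80/112/128 values), the asymptotic GNFS running time can be converted to symmetric-equivalent bits like this:

```python
import math

def gnfs_strength_bits(modulus_bits: int) -> float:
    """Rough symmetric-equivalent strength of an RSA modulus, from the
    asymptotic running time of the General Number Field Sieve:
        L(n) = exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3))
    Uncalibrated estimate only -- expect values ~5-10 bits above the
    NIST table (80/112/128 bits for 1024/2048/3072-bit RSA)."""
    ln_n = modulus_bits * math.log(2)  # ln(n) for an n-bit modulus
    c = (64 / 9) ** (1 / 3)
    work = c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)  # ln of GNFS work
    return work / math.log(2)  # convert natural-log work to bits

for k in (1024, 2048, 3072, 4096):
    print(k, round(gnfs_strength_bits(k)))
```

The point of running it is the shape of the curve: each doubling of the modulus adds only a few tens of equivalent bits, which is why 2048 → 3072 buys far less than 1024 → 2048 did.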
Folks, 4096-bit crypto is not free. It is massively slow on low-power CPUs; even modern phones suffer when using it. Are you sure going from uncrackable crypto to even more uncrackable crypto is worth ruining everyone's performance and battery life?
A 4096-bit key length is a huge problem in IoT because these devices often have very limited CPU power. The private-key RSA rate tanks from 3.6 to 0.6 when going from 2048 to 4096 bits on one of our older embedded devices.
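For a stdlib-only illustration of that scaling (a synthetic benchmark, not a real RSA implementation): the private-key operation is dominated by a modular exponentiation whose cost grows superlinearly in the modulus size, which is why doubling the key length costs far more than 2× in CPU time.

```python
import secrets
import time

def modexp_time(bits: int, iters: int = 5) -> float:
    """Time `iters` modular exponentiations with a full-size exponent --
    the dominant cost of an RSA private-key operation (ignoring the CRT
    speed-up). The modulus is just a random odd number of the right
    size, not a real RSA key; only the cost scaling matters here."""
    n = secrets.randbits(bits) | (1 << (bits - 1)) | 1  # random odd, full bit length
    d = secrets.randbits(bits) | (1 << (bits - 1))      # full-size exponent
    m = secrets.randbits(bits - 1)                      # "message" below the modulus
    start = time.perf_counter()
    for _ in range(iters):
        pow(m, d, n)
    return time.perf_counter() - start

t2048 = modexp_time(2048)
t4096 = modexp_time(4096)
print(f"4096-bit modexp took ~{t4096 / t2048:.1f}x longer than 2048-bit")
```

On most machines the ratio lands well above the 2× one might naively expect, in the same ballpark as the 6× slowdown reported above.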
As mentioned before, IoT and other devices without hardware acceleration can use ECDSA (#6492).
Is there hardware acceleration of RSA in many devices, or is that still generally handled in software?
Hardware acceleration is available on some MCUs, but not the majority of them (it is more expensive). It is also often limited to 3072 bits, as that is the common recommendation (https://www.keylength.com/).
OPENSSL_TLS_SECURITY_LEVEL=3 requires at least 3072 bits.
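The same policy can also be selected per-application at runtime. For example, Python's `ssl` module (assuming a build linked against OpenSSL ≥ 1.1.0) accepts the `@SECLEVEL` modifier in its cipher string; at level 3, OpenSSL rejects peer certificates with RSA/DSA/DH keys shorter than 3072 bits and EC keys shorter than 256 bits:

```python
import ssl

# Security level 3 = 128-bit floor: during the handshake, OpenSSL will
# reject certificates with RSA keys < 3072 bits (or EC keys < 256 bits).
ctx = ssl.create_default_context()
ctx.set_ciphers("DEFAULT@SECLEVEL=3")
print(len(ctx.get_ciphers()), "cipher suites enabled at security level 3")
```

This is the runtime counterpart of the build-time/environment setting quoted above.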
Why is this issue still open? It's rather clear that the users requesting this misunderstand its relevance in TLS, especially in 2020.

TL;DR: For the majority of deployments, you should be ensuring PFS cipher suites only. Your RSA certificate gains no benefit beyond 2048-bit; you're just reducing the number of handshakes the server can support concurrently due to the additional overhead. The context of advice needs to be taken into consideration, not just parroted blindly.

Use DHE / ECDHE for key exchange only. With web servers you'll find that since 2015 browsers have progressively been removing support for DHE as well: Safari did so in response to Logjam in 2015, Chrome dropped DHE support in Chrome 53 (Aug 2016) because of how many servers still negotiated 1024-bit DH groups, and Firefox finally joined this year (Firefox 78, June 2020). So for the web you're basically only dealing with ECDHE, unless you're actually aware of clients that can't support it for some reason; web browsers have supported ECC for a long time.

What possible concern are you left with? Your certs are renewed/replaced within 3 months, so the only security concern with RSA certs is identity: that some attacker can impersonate you. But for that to happen, the attacker needs to succeed within that <90-day window. In 2020, the record so far is breaking 829-bit RSA (10 years after we learned 768-bit was breakable); the team behind the previous record of 795-bit RSA cited better algorithms and hardware for achieving their result in about half the compute time. Just look at the 900-2000 compute-years these achievements were equivalent to in parallel computing power (which for 768-bit was reduced to 2 years of wall-clock time). 512-bit RSA is roughly 50 bits of symmetric key strength; 1024-bit RSA is put at 80 bits, 2048-bit at 112 bits, and 3072-bit, as noted in this issue, at 128 bits.
Right, fantastic: so 768-bit RSA (somewhere between 50 and 80 bits of symmetric security) could in 2010 be broken over the span of 2 years and a tonne of money, and a decade later we've managed a few symmetric-equivalent bits more. For anyone to whom this sounds like gibberish: each additional bit of symmetric key strength doubles the work of brute-forcing all possibilities (on average you'd succeed about halfway through all that processing).

Now compare 512-bit RSA to 1024-bit: there's roughly a 30-bit difference in symmetric security. 2^30 is about 1 billion; that is, 50 bits (1,125,899,906,842,624 potential keys) multiplied by 1 billion (1,125,899,906,842,624,000,000,000), or put simply, 1 billion times more effort/time to break. With 1024-bit vs 2048-bit RSA, you've got a 32-bit difference, and 2^32 is a little over 4 billion, so we've got 4,503,599,627,370,496,000,000,000,000,000,000. Bumping up to 3072-bit is a more modest 16 bits of extra security; 2^16 = 65,536... yeah, that'll make it more "secure", not quite the same leap as our 4-billion multiplier, but we already have a significant level of difficulty at 2048-bit RSA.

Now let that soak in and think. 768-bit RSA in 2010 took 2 years of parallel processing equivalent to 2000 years on a single machine, and a decade later we can do a little better with all the advancements since. That's still a long way from 1024-bit RSA, unless someone with a tonne of money and time to waste wants to break your RSA cert. It's not like shared DH groups; the return is considerably smaller, unless you're in possession of data that's ridiculously valuable and can't be obtained through cheaper means such as physical coercion or blackmail.
Even if someone had such resources to waste on breaking your RSA certificate, it's not 1024 bits: the difficulty of 2048-bit is 4 billion times that. It's not going to happen without some major breakthrough, likely via a weakness where the key length isn't the concern (just look at all the other vulnerabilities in TLS that have worked around key strength). If you went beyond 3072-bit RSA while the session encryption used 128-bit keys, then your RSA cert doesn't matter: the 128-bit AES key can just be attacked instead, AFAIK. All that aside, as stated, RSA only has relevance for identity if you're using PFS-only cipher suites. Whatever your resources, there are better ways to compromise your server or users than paying the cost of breaking a 2048-bit RSA cert in under 90 days...
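The multipliers in the comments above are just powers of two of the gaps between symmetric-equivalent strengths; a quick check (the helper name is mine, strengths are the approximations quoted in this thread):

```python
# Approximate symmetric-equivalent strengths quoted in this thread.
sym_bits = {512: 50, 1024: 80, 2048: 112, 3072: 128}

def work_multiplier(rsa_from: int, rsa_to: int) -> int:
    """How many times harder brute force gets between two RSA sizes:
    each extra bit of symmetric-equivalent strength doubles the work."""
    return 2 ** (sym_bits[rsa_to] - sym_bits[rsa_from])

print(work_multiplier(512, 1024))   # 2^30: about a billion
print(work_multiplier(1024, 2048))  # 2^32: a little over 4 billion
print(work_multiplier(2048, 3072))  # 2^16: 65,536
```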
And this has relevance why? There's a level 5 there mandating a minimum RSA key length of 15360 bits; why stop at 3072 bits with that logic? As the linked document states, the default is …
@schoen Chrome dropped DHE support in 2016; they noted that 95% of DHE connections used 1024-bit DH params. Not that it should matter much now, since modern browsers don't support DHE cipher suites anymore. For other servers like mail it might still be useful (although ECDHE support has been pretty good for some time, with some Linux distros like RHEL not having support until 6.5, around 2014). DHE should use the official FFDHE groups from RFC 7919, which have a minimum of 2048 bits. None of the old clients with the 1024-bit limit should matter; in the niche situations where they do, the sysadmin would be in a paid position to address it, but there's a host of security issues to run into at that point, including potential chain-of-trust problems due to expired root certs on the system.
When you made this suggestion, Safari and Chrome no longer offered DHE, AFAIK. This would have the opposite outcome, reducing DHE, assuming users are even able to upgrade the browsers/clients on devices running outdated software. Some old OSes or software are locked to old TLS implementations (like macOS / iOS Secure Transport), which affects TLS version support and supported cipher suites (macOS lacked AES-GCM until a late-2015 release, IIRC).

Regarding anyone who thinks 2048-bit RSA is insufficient for identity protection over 90 days, ignoring other safeguards like OCSP and CT logs: are you aware of the chain of trust? Your certificate is verified against the Let's Encrypt root and intermediate certs; if someone wants to attack a certificate, it's more valuable to compromise those than one issued to a specific site. If the cert is used only for authenticity and can't be compromised to decrypt recorded traffic, then you can increase your website's key length all you like, but what are you gaining in protection from this theoretical attack where you're safer than an earlier cert in the chain of trust being compromised? (which use 2048-bit RSA, …)
It seems you don't understand X.509 (let alone TLS) either… 🤣 You also miscalculate the RSA-2048 strength: you have to account not only for the time the best current attack takes against a given key, but also for future attacks at the point where your key is no longer meaningful. In the non-ECDHE/DHE case (which, I repeat, is not so rare), that's not 90 days but 10-30 years. And it's not only time that must be taken into account, but also cost, technology evolution, and the discovery of cryptographic weaknesses/tricks — which is especially visible in the case you describe. This is why entities like ANSSI or NIST base their recommendations not only on current computing abilities and cryptographic knowledge, but also on the improvements expected to occur within the timeframe in which the key remains in use/meaningful.
So your advice to improve security here is to encourage a larger key size instead of encouraging PFS-only cipher suites? Why?
Please elaborate further. Of what case in the past decade are you aware where this has been wrong? There is no known case of 1024-bit RSA being factored/broken, let alone 2048-bit, which is 2^32 times harder in terms of its symmetric-equivalent key strength.
We could compare 1024-bit as breakable within 1 second; 2048-bit is then still the equivalent of 136 years (2^32 seconds). With that in mind, 1024-bit RSA being around 2^28 (~270 million) to 2^30 (~1 billion) times more difficult than current records (I don't have exact numbers, sorry, but it seems to be around that), and current academic efforts still being far off from that, I'm not sure how you think 1024-bit RSA is anywhere near being computed that fast, which it would need to be before 2048-bit RSA is even something to worry about being broken in this manner. The usual approach here is GNFS (the General Number Field Sieve), which requires increasingly infeasible resources the larger the key size. There's also this nice chart of RSA factoring history, and a related answer that, based on it, pinned 2048-bit RSA as factorable by 2048 and 1024-bit by 2015-2020, which academically has not been achieved yet. Considering the above, I'd love you to back up how 3072-bit RSA offers much more of an added security advantage. If 2048-bit RSA is attacked in some other way, or technology advances at such a pace, then I don't see how 3072-bit RSA would offer much additional protection.
I like how you claim it's "not so rare". How common is it these days that a handshake uses RSA key exchange instead of DHE or ECDHE? Do you even have one example?
I believe I was quite clear in stating a 90-day renewal period for the certificate when its purpose is only authentication, not key exchange.
Yes, cost. The resources I've linked you to cite 1024-bit RSA requiring terabytes of RAM to perform GNFS on, with the linear-reduction step being the most costly/difficult part to handle. That's the thing with exponential difficulty: something cheap/easy can rise in difficulty very fast.
I've already linked to a GitHub project that can perform 512-bit factoring on AWS. I don't know the exact number of symmetric-equivalent bits for 256-bit RSA, perhaps around 30 bits? (so roughly a 1,048,576× difference from 512-bit RSA). The 512-bit paper you link to mentions 432 CPUs (12 instances with 36 vCPUs + 60 GiB RAM each), a total of 2,770 CPU-hours, and 1.5-2 GB of memory or storage for the matrix (I'm not too familiar with this computation; if you are, feel free to chime in). My math is probably bad here, but let's try to compare the compute power of the two systems based on that information: 2,770 hours comes to 166,200 minutes.

In 1991, 330-bit RSA was factored within days; a later single Intel Core2Duo CPU could handle it in a little over an hour. 256-bit RSA would have taken less than that then; let's give it an optimistic 60 minutes and say that within ~30 years it's progressed down to ~1 minute, a grand reduction of 60 times. That's equivalent to, say, 2^6 (64)... 6 bits. As pointed out earlier, considering just time, not resources like increased RAM requirements, that's still way off from being a valid threat. Knock off 6 more bits in the next 30 years and you've managed 1-second factoring of 256-bit RSA. Congrats — there's a long way to go to reach 2048-bit RSA in a reasonable time frame. Even the 15 years that took 512-bit RSA from 6 months down to 4 hours is roughly a 1,000× (~2^10) speedup.
Where are these expected times to compute being sourced from? RSA-155 (aka 512-bit RSA) was factored over 6 months back in 1999; RSA-120 (397-bit RSA) and RSA-130 (430-bit RSA) are likewise likely only about 1-2 symmetric bits apart? So an increase of approximately 4× is to be expected...?
You've not provided anything substantial indicating that 2048-bit RSA is anywhere near weak. Even with the historical context in which notable improvements have been made, that doesn't look all that relevant against the exponential increase; you seem to be taking a somewhat linear view. I would hope you're aware of why 128-bit AES is considered secure (though it may one day be weakened by a quantum computer using Grover's algorithm) and that 256-bit AES is said to require exhausting the entire energy of our solar system to crack by brute force.

Furthermore, take into account what sensitive data you're trying to protect without considering best practice like using only PFS-capable cipher suites, and whether that data would actually be a problem to have decrypted many decades from now (quite a bit of sensitive data isn't as important decades later as it is in the present).

TL;DR: It comes down to this, pretty much. Breaking 2048-bit, even decades from now, is still going to be ridiculously expensive, but let's be optimistic and say 1 million dollars and 10 years (or 1, if you like).
Sooo basically... encourage PFS cipher suites (DHE / ECDHE), not minor improvements that band-aid bad security practices.
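As a sketch of what that advice means in practice (using Python's `ssl` module purely for illustration): a server context whose TLS 1.2 cipher string offers only (EC)DHE suites, so static-RSA key exchange can never be negotiated. TLS 1.3 suites are forward-secret by construction and are unaffected by this string.

```python
import ssl

# Offer only forward-secret (ECDHE/DHE) suites for TLS 1.2; "!aNULL"
# additionally bars unauthenticated suites.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:DHE+AESGCM:!aNULL")

tls12 = [c["name"] for c in ctx.get_ciphers() if c["protocol"] == "TLSv1.2"]
print(f"{len(tls12)} TLS 1.2 suites offered, all with (EC)DHE key exchange")
```

With such a configuration, the certificate's RSA key is used only for authentication, which is the scenario the comment above argues makes key length beyond 2048 bits moot.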
Because LE controls the default key size, not the httpd config. And LE has already taken action in the past (disabling HTTPS on HTTP-01, removal of TLS-SNI-01…) instead of enforcing httpd config.
Sure. Then simply look at the benchmarks of CADO-NFS. The time to crack is not the expected theoretical time.
Yeah, it seems to be a huge, impractical number… At the time of the first 512-bit break (1999), it was considered impractical because it required a supercomputer nobody had. But in fact, the corresponding AWS bill for such infrastructure in 2015 is… $100 for 4 hours, and it IS practical a few years later, far quicker than the billions-of-billions-of-billions gap expected in 1999.
What... no. I've been very clear in my posts that the difference between 256-bit and 512-bit is not 2^256 (your base 10, 10^77), but more like 2^20 or less (in the comparison I did of your two cited factoring examples, it was about 2^17). I'm not sure how you're claiming a "2 times gap" either? Same power and costs? The 256-bit RSA was factored in roughly 1 minute, with the 512-bit RSA being equivalent to 166,200 minutes (the math was cited in my earlier post; feel free to correct me if I made a mistake).
You're literally going to be fine with 2048-bit RSA; 3072-bit RSA is a "marginal" improvement in security. I've already explained this; you seem to have skipped or misunderstood it. The only way 2048-bit RSA becomes insecure in a much shorter span of time is if the rate of advancement is such that your 3072-bit RSA is threatened too. If you want to be safer, don't use RSA for key exchange. I have asked you to cite one website where RSA key exchange is required ((EC)DHE not supported/negotiated for whatever reason); why have you not been able to answer this? You claimed it's such a common occurrence that it justifies raising the default RSA key length — please back that statement up. You may find servers that offer RSA as a key exchange, but they'll likely have server-side cipher suite selection enabled, and RSA won't be negotiated by any client that can support PFS cipher suites.
The cost of the hardware is considerably higher than $100 — not ridiculous amounts, but yes, the availability of easily rented compute resources has made it more accessible. Note the hardware used: these weren't cheap/small EC2 instances but fairly big machines as rented compute goes. We have had 768-bit RSA factored for over a decade now, yet you don't see FaaS able to cheaply handle that; 2048-bit RSA has a long way to go.

Do you have a source for anyone claiming 512-bit required over a billion to compute? Pretty sure the hardware that factored it in a mere 6 months in 1999 was a long way from even billions in cost. Where is this price coming from? From more recent statements about higher bits of security? That's due to the exponential increase in difficulty, which requires not only more computation time but other resources like RAM. Those 512-bit RSA factoring AWS instances had 60 GB of RAM each (720 GB total)...

A quick look at some processing insights: RSA-250 (829-bit RSA), factored in 2020, was achieved within 3 months with tens of thousands of computers using CADO-NFS, which you like to bring up; it seems they achieved the factoring in … Here's the paper on the 2010 factoring of 768-bit RSA; look at the RAM requirements:
And let's contrast that with 512-bit RSA factoring (2009, not the AWS one in 2015):
That should give a rough sense of how RAM requirements scale as the key length increases. I'm sure there are other bottlenecks to be aware of in the processing, especially when done in a distributed fashion. Feel free to look into that further and identify how much RAM would be required to attack 2048-bit RSA.

Who are you actually trying to protect yourself from that is equipped to target you or your users' traffic specifically, record it, and perform an attack decades from now? Not some script kiddie waiting on a $100 AWS solution; they'd have moved on to better things by then, and compromising your network to record traffic would cost them much more. Otherwise it's some malware where no one specific was targeted: congratulations, your recorded data, segmented into 90-day periods of traffic per RSA key, is now in a sea of many others — thousands, millions perhaps. $100 of AWS, even if it were possible, would not be cheap to apply against all of that, especially when the data has unknown value; there would be a lot of junk to throw money away on. No, if someone wants your secrets, there are more affordable ways.

For some reason, you believe that the 4-billion-times-stronger security of 2048-bit over 1024-bit RSA is inadequate, but that 65 thousand times more security from 2048-bit to 3072-bit is adequate. Your argument is that the rationale for 2048-bit being computationally safe is invalid, yet somehow the same logic doesn't apply to 3072-bit and it's that much safer? Is a glass door safer because I add bigger and more secure locks?

TL;DR: Please go over the previous TL;DR, but I'll reiterate...
Who in their right mind is going to do that? That's only practical against a high-value target where no other means of getting the information are viable. Said target is somehow in possession of such valuable secrets but doesn't bother to pay a professional or consider best practices in securing them (they apparently can't set up a server with a copy/paste of Mozilla's cipher suite advice). This is not very convincing. This is not Let's Encrypt: you're assuming a user with valuable secrets uses Certbot along with RSA-only key exchange cipher suites supported by their server (no default config does this), and that 2048-bit RSA is so much weaker than it is. Whoever this user is, their security practices elsewhere are probably weak enough to compromise them directly; why wait a supposed 20-30 years to decrypt data when you can perform an active attack and get all the juicy data in the present? The correct solution is to set up cipher suites properly, and this project, Certbot, does exactly that if you're inexperienced in the area and want to rely on it taking care of that for you. But for some reason, that doesn't apply to this discussion?
https://cryptcheck.fr/https/bankofamerica.com (yep, you read that correctly). I count 68 non-PFS domains in my CryptCheck v2 analysis of 7,221 handshakes (~0.9%), and 2,091 of 349,961 handshakes (~0.6%) on my v1.
https://www.bankofamerica.com/

> Protocol: TLS 1.2

https://cryptcheck.fr/https/bankofamerica.com is supposedly identifying additional server IPs for the same domain that don't offer the ECDHE key exchange?

> Protocol: TLS 1.2

I have no clue what this website is about or what sensitive secrets it has to exploit.

> Protocol: TLS 1.2

I was redirected to this from your weird-looking domain name, which I can't imagine anyone would visit directly and consider legit/trustworthy. This, like many others following, is clearly a government website. They're often lagging behind and need to be accessible by a wide audience. RSA shouldn't be the default key exchange, sure, but again, what communications are happening here that you want to protect? The government itself doesn't seem too concerned, obviously.

https://pfisng.dsna-dti.aviation-civile.gouv.fr/jts/auth/authrequired

> Protocol: TLS 1.2

Again, no clue what this is other than a government service with a login. Is there any sensitive information beyond the login details? What are the concerns regarding access to this information 20-30 years from now? That I use the same username and password elsewhere and will still do so in 20-30 years? Wouldn't a phishing scam be just as effective if login details were the goal (given the value of the data on the accounts)? Could that be achieved in a shorter time or on a smaller budget than attacking the RSA certificate?

After over a minute spent trying to load/resolve the URL, I could not access this website.

https://protecpo.inrs.fr/ProtecPo/jsp/Accueil.jsp

> Protocol: TLS 1.2

This appears to be a website for some software that helps you make a purchasing decision. What sensitive information is at risk here?

> Protocol: TLS 1.2

A news site? Again, what sensitive information is of concern when interacting with this website?
Summary: Nothing in any of these results indicates a need to protect sensitive communications beyond the estimated 20-30 years it would take for 2048-bit RSA to be compromised. These websites would still require an effective MitM attack to capture traffic between client and server, which isn't likely a valid concern for any of them. None of these use Let's Encrypt-issued certificates; making a change to Certbot is only beneficial if you know those websites use Certbot specifically as an ACME client against those CAs (assuming they have third-party ACME support). Thank you for actually pointing out some websites where RSA is the negotiated key exchange (barring Bank of America, which seems to be incorrect and has only been validated as a cipher suite on the public-facing website, not on any actual sensitive exchanges).
OK, so your "not rare" is less than 1% of the 7,221 and 349,961 websites you've scanned with CryptCheck? Now consider the audience and the actual measurable value of using 3072-bit RSA to secure those communications: is it actually securing anything worthwhile, and would it be any more secure against cheaper alternative compromises? (That is, was 2048-bit weaker/cheaper than any other targeted attack, such that 3072-bit raises the lower bound of attack?) How many of the Alexa top 1 million sites negotiate RSA key exchange? How many of those use Let's Encrypt as the CA?
I calculated the symmetric-equivalent bits for 512-bit and 256-bit RSA (based on the equation from NIST), and we get 40 bits (256-bit RSA) and 57 bits (512-bit RSA). Interestingly, that matches the 17-bit difference cited earlier when comparing the two compute-wise with recent advancements (it appears the NIST document I referenced was last updated in 2020). It also reveals that the distance to 1024-bit RSA is only 23 bits of equivalent symmetric key strength: a factor of 8,388,608. And we can see what the current progress for 768-bit and 795-bit RSA, or the current record of 829-bit RSA, is: …
I've just realized I made a mistake earlier when citing the 829-bit RSA factoring and comparing it to the AWS 512-bit RSA factoring time. Both cite ~2,700 compute units of time, but I tripped up: for 829-bit RSA those are years, while for 512-bit RSA they are hours. That's a difference of about 8,760 (the number of hours in a year, ~2^13). 256-bit RSA at 1 minute of compute time vs 829-bit RSA at 2,700 years of compute time is a ratio of about 2^30. Not the greatest information, but enough to get a rough idea of the rate of progress and improvements.
1024-bit is still going to remain out of reach for some time, and 2048-bit even more so, based on the historical progress covered above. There's little evidence that 3072-bit RSA, with its additional 2^16 (65k) to 2^22 (4 million) factor of extra difficulty, will amount to much more of a security advantage than the 2^30 (1 billion) to 2^32 (4 billion) that 2048-bit has over 1024-bit RSA (not that it doesn't add notable difficulty, but the argument rests on the assumption that 2048-bit RSA won't hold up when it should).

Based on the current 829-bit RSA effort, there's roughly a 38-bit difference to the security level of 2048-bit RSA. If we take the constant computing power that factored 829-bit RSA in 3 months (4 three-month windows in a year), we get 68,719,476,736 years.

If you still think 3072-bit as a minimum is worthwhile despite my (possibly bad?) math, cost-benefit analysis, and common-sense reasoning... then I don't think there's anything else I can contribute to convince you otherwise. Just remember that you're advocating for a change to Certbot, not for Let's Encrypt enforcing a minimum at issuance. Certbot already helps users by configuring proper cipher suites, which avoids the whole problem anyway — so who's really going to benefit here, while everyone else (using Certbot) who doesn't have RSA as the key exchange adds more load and bandwidth on their servers (even if minor for most)? The issue should be closed.
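The corrected ratios above are easy to sanity-check with quick arithmetic (calendar constants approximated as 365 days/year):

```python
import math

# A core-year and a core-hour differ by the number of hours in a year.
hours_per_year = 24 * 365
print(hours_per_year)  # 8760, roughly 2^13

# ~1 minute for 256-bit RSA vs ~2,700 core-years for 829-bit RSA:
ratio = 2700 * 365 * 24 * 60   # 2,700 years expressed in minutes
print("~2^%.1f" % math.log2(ratio))

# 38-bit gap to 2048-bit RSA at a constant 829-bit-in-3-months capability:
years = 2 ** 38 // 4           # 4 three-month windows per year
print(years)                   # 68,719,476,736
```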
You already contributed that information 4 years ago in this thread, and it doesn't appear that you understand the context of the advice from that website or how it applies here. If you'd feel more comfortable with 3072-bit RSA or larger key lengths for your certs, you can already explicitly opt in; from a pragmatic-security standpoint there is no justified reason to adopt 3072-bit as a new default. 2048-bit RSA is sufficient. |
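For anyone who does want a larger key without waiting on a default change, Certbot exposes this as an explicit opt-in; a sketch using the `--rsa-key-size` flag as documented for Certbot around the time of this thread (the domain is a placeholder):

```shell
# Request a certificate with a 3072-bit RSA key instead of the default.
certbot certonly --rsa-key-size 3072 -d example.com
```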
You still don't understand. The ANSSI advice, for example, is:
and so it is applicable in France for virtually ANY PURPOSE AND ANY MEANS; it is not at all restricted to sensitive contexts such as military use. One of the recommendations is: "It is recommended to use moduli of at least 3072 bits, even for a use not to exceed 2030." And 3072 bits is no longer just advice but mandatory if the usage/impact is expected to extend past 2030. Even the CNIL, the French authority on privacy and good practice, uses the ANSSI paper in its standard recommendations for any website: https://www.cnil.fr/fr/securite-securiser-les-sites-web
(And it is also in opposition with Let's Encrypt, because it states that you have to …) The NIST advice is likewise not bound to the sensitivity of the conveyed information; it is general guidance applicable in nearly every case where you manage keys or certs. It's not the first time LE has shipped a default configuration in opposition to state-of-the-art advice from government agencies or others… |
Oh, and the BSI reviewed its advice this year. It states that: |
@aeris even if France were imposing 3072-bit RSA as mandatory, that is not law elsewhere, and do note that advice and recommendations do not equate to mandatory requirements either. I would be surprised if they also suggest using RSA key exchange in cipher suites as a best practice for security/privacy. If you want to handle security and privacy correctly, adopt PFS-only cipher suites; your RSA certificate is irrelevant from that point onwards, as it then only plays a role in authentication.

Anyway, I feel I am repeating myself here and I'm not interested in an endless back and forth. If you have to comply with certain security authorities/advisories, then do so by being explicit about it. I've already detailed rather well that 2048-bit is secure and shall remain so for the foreseeable future, and if it were not, 3072-bit RSA isn't any more likely to be secure either. As you're rather passionate about this topic, please try to grasp what I've relayed to you instead of reciting NIST / ANSSI / BSI / etc.; it's in their interest to advise a bit higher than needed, similar to how AES-128 was introduced as the lowest security level for AES yet remains considerably secure in its own right.

All you're gaining from 3072-bit is a false sense of "more secure": 2048-bit is more than enough, and if it were not, there's no reason to think 3072-bit would fare better, since its additional ~16 bits of security wouldn't keep up with such advances. Do the smart thing and stop using RSA as a key exchange... if a larger RSA key length is your only sense of being secure, instead of applying security practices elsewhere, you're fooling yourself. If you want to continue arguing for the sake of compliance rather than actual security benefit, I'll leave you to it. Personally, it's not a worthwhile change to default to, and for the majority of Certbot users it is likely unwanted, given the drawbacks and no pragmatic gain in actual security.
Probably because they know better, despite what you might think. Even with all the information presented to you earlier, you don't seem interested in or open to it at all. While it probably won't convince you any further, this is what successfully attacking 128-bit symmetric strength requires:
Cracking a 256-bit symmetric encryption key, by the way, would exceed all the energy available within our solar system:
Now, all those quotes focus on 128-bit keys like 3072-bit RSA, but the 110-112 bits of 2048-bit RSA are still quite considerable. However, it is much cheaper to just take the XKCD-advised route :) |
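The physics behind those quotes can be sketched with a back-of-the-envelope Landauer-limit estimate. Assumptions: an ideal irreversible computer at room temperature and a single bit erasure per key tried (real attacks need vastly more work per key, so these are extreme lower bounds):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # kelvin, room temperature (assumption)
joules_per_op = k_B * T * math.log(2)   # Landauer minimum to erase one bit

e128 = (2 ** 128) * joules_per_op
e256 = (2 ** 256) * joules_per_op
print(f"2^128 ops: {e128:.2e} J")   # on the order of 1e18 J
print(f"2^256 ops: {e256:.2e} J")   # on the order of 1e56 J -- far beyond
                                    # the Sun's total remaining output
```

Even under these absurdly generous assumptions, the 2^256 figure dwarfs any energy source in the solar system, which is the point the quotes above are making.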
Looks like the arguments have shifted from security to compliance. |
Or not. This is just state-of-the-art advice from well-known entities in the security ecosystem about what to use by default to avoid shooting yourself in the foot, as we have seen regularly in the TLS/X.509 world for at least a decade. |
We've made a lot of changes to Certbot since this issue was opened. If you still have this issue with an up-to-date version of Certbot, can you please add a comment letting us know? This helps us to better see what issues are still affecting our users. If there is no activity in the next 30 days, this issue will be automatically closed. |
The default hasn't changed, go away, please. |
According to the NSA and ANSSI, RSA with a 3072-bit modulus is the minimum required to protect information up to TOP SECRET.
We should not be skirting the red line of cryptography. Security by default would bring the needed protection to all the Internet users who visit web servers powered by Let's Encrypt.
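For operators who want to follow those recommendations today, a 3072-bit key can be generated explicitly with a standard OpenSSL invocation (the output file name is a placeholder):

```shell
# Generate a 3072-bit RSA private key.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3072 -out privkey.pem
```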