
OpenPGP: S2K uses small c/bytecount, inconsistent suite of S2K-KDF-SHA1 (160b) and AES-256 #139

Open
koto opened this issue Dec 16, 2014 · 23 comments

Comments

@koto commented Dec 16, 2014

From coruus@gmail.com on July 05, 2014 23:47:38

Is this report about the crypto library or the extension?

Crypto library. javascript/crypto/e2e/openpgp/encryptedcipher.js

What is the security bug? The library assumes that the hash algorithm used for the Iterated and Salted S2K specifier type is SHA1. (This is also a functionality bug; it makes it impossible to interoperate with implementations that require > 128-bit security for private keys.)

  1. The count field is specified as c=69. This results in a bytecount of 65536, or 3277 SHA1 compression function evaluations. At a rate of 2^30 SHA1 compressions per second (Fermi), this allows ~ 327660 password guesses per second. (The rate on ASIC is much, much higher -- SHA1 is small in silicon.) This is plainly inadequate for typical users' passphrases.

Fix: Set c=255 (bytecount 65011712, i.e. ~3M SHA1 compression-function evaluations). Use SHA2-512 instead (it has the largest hardware footprint of the RFC 4880 hash functions). A short sketch of the count-octet decoding appears below.

  2. AES-256 is always used to encrypt the payload. But using SHA-1 as the S2K KDF means that passwords with more than 160 bits of entropy are compressed to fewer than 160 bits of entropy, and probably to rather less security strength than that.

Fix: Use a hash function with at least 256 bits of output instead. SHA2-512, again, would be preferable.

How would someone exploit it? Brute force.

(Note that SHA2-256 is smaller in hardware than SHA2-512; they are distinct functions.)
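
For reference, here is a minimal sketch of the count-octet decoding and the iterated-and-salted S2K loop it drives, per RFC 4880 section 3.7.1.3. It uses Node's crypto module purely for illustration (not the library's own hashers), and the helper names are invented:

```js
const crypto = require('crypto');

// RFC 4880, section 3.7.1.3: decode the one-octet count c into the number of
// bytes to be hashed.
function decodeS2kCount(c) {
  return (16 + (c & 15)) << ((c >> 4) + 6);
}

// Iterated-and-salted S2K: feed (salt || passphrase) to the hash repeatedly
// until the decoded byte count has been hashed, then truncate the digest to
// the key length. (A full implementation also handles keys longer than one
// digest by re-running with zero octets prepended.)
function iteratedSaltedS2k(hashName, passphrase, salt, c, keyLen) {
  const block = Buffer.concat([salt, Buffer.from(passphrase, 'utf8')]);
  let remaining = Math.max(decodeS2kCount(c), block.length);
  const hash = crypto.createHash(hashName);
  while (remaining > 0) {
    const chunk = remaining >= block.length ? block : block.slice(0, remaining);
    hash.update(chunk);
    remaining -= chunk.length;
  }
  return hash.digest().slice(0, keyLen);
}

console.log(decodeS2kCount(96));   // 65536 bytes hashed
console.log(decodeS2kCount(255));  // 65011712 bytes hashed (the proposed fix)
```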

Original issue: http://code.google.com/p/end-to-end/issues/detail?id=102

@koto commented Dec 16, 2014

From koto@google.com on July 10, 2014 12:46:35

Just a note here. Our defaults are the same as GnuPG's for s2k algo and iteration count. We use S2K to:

  • encrypt a message to a passphrase - e2e.openpgp.packet.SymmetricKey.construct
  • encrypt the whole keyring - e2e.openpgp.KeyRing.prototype.encrypt_

@koto commented Dec 16, 2014

From coruus@gmail.com on July 11, 2014 10:11:03

  1. GnuPG's defaults haven't been updated in years. Please don't use the fact that GnuPG does something stupid as an excuse to do the same thing. (Question: does Google store its users' passwords with such weak protection?)

And you aren't reproducing GnuPG's behavior either. Use a tag 9 packet and CAST5 if you want to do that. See attached pgpdumps.

(Note: Various Linux distributions have corrected GnuPG's defaults differently...)

2a. Having had the chance to play around with large iteration counts in the extension, it's really slow for SHA2-512. E.g., SHA2-512 at c=255 takes about 25s (99.97% of it in updateChunk). That seems like a performance bug. (libgcrypt, the snail of crypto libraries, is an order of magnitude faster.)

2b. SHA2-256 with c=255 takes very roughly 4s in E2E. This seems fine; libgcrypt doesn't do very much better. The usual advantage of SHA2-512 is that it is often faster / as fast in software, but higher-area in hardware.
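
For anyone who wants to reproduce the comparison, here is a rough micro-benchmark sketch of the raw hashing cost at c=255 (65011712 bytes), assuming Closure's goog.crypt.Sha256 / goog.crypt.Sha512 hashers and their update()/digest() methods; it times bulk hashing only, not the actual S2K code path:

```js
goog.require('goog.crypt.Sha256');
goog.require('goog.crypt.Sha512');

// Time how long it takes to push the c=255 bytecount through a pure-JS hasher.
function timeHash(name, hasher) {
  var block = [];
  for (var i = 0; i < 8192; i++) block.push(0);  // contents don't matter here
  var remaining = 65011712;                       // bytecount for c=255
  var start = Date.now();
  while (remaining > 0) {
    var chunk = remaining >= block.length ? block : block.slice(0, remaining);
    hasher.update(chunk);
    remaining -= chunk.length;
  }
  hasher.digest();
  console.log(name + ': ' + (Date.now() - start) + ' ms');
}

timeHash('SHA-256', new goog.crypt.Sha256());
timeHash('SHA-512', new goog.crypt.Sha512());
```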

(By the way: could you un-restrict this? The extension's behavior is a security mistake, but not a "vulnerability".)

Attachment: pgpdump.txt

@koto commented Dec 16, 2014

From evn@google.com on July 17, 2014 17:10:09

We need to check how long c=255 takes on slow hardware, but if it works out, then I agree that's the right solution.

Status: Accepted
Labels: Component-Logic

@koto commented Dec 16, 2014

From evn@google.com on July 18, 2014 14:09:20

Labels: -Restrict-View-CoreTeam

@koto commented Dec 16, 2014

From evn@google.com on July 18, 2014 14:11:06

It's worth noting that we would also like to use the scrypt KDF for some of the computation, but we have been a bit too busy to try to get it standardized.

I haven't had time to do the perf tests, but I'll post the results here when I do. I'll try to run c=255 with SHA-256 on a CR-48.

thai/frank/marius: do you know why SHA-512 is so slow?

Cc: f...@google.com tha...@google.com mschil...@google.com

@koto commented Dec 16, 2014

From coruus@gmail.com on July 20, 2014 07:39:18

Actually, there's a simple explanation for SHA2_64b: it's defined on 64-bit unsigned integers, which (as I didn't realize until looking at the code) JS doesn't have. (Not really a JS person...)

So... the code in Closure has to be cross-browser; some improvements are possible, but it's already doing the important things (carry-save, etc.).

For E2E, does V8 support uint64s? And does anyone know the status of Intel's SIMD proposal for it? (The issue was closed without comment.)

@koto commented Dec 16, 2014

From f...@google.com on July 20, 2014 09:28:26

To answer the question: SHA-512 is slower than SHA-256 because it works on 64-bit quantities rather than 32-bit quantities, and JS doesn't support 64-bit integers. Most of the operations can be emulated using pairs of 32-bit integers, but it is still awkward.

Mark Heffernan (meheff@google.com) and I spent a bit of effort trying to optimize both SHA-256 and SHA-512 for JS. We were able to get much better improvements out of SHA-256.
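
For concreteness, here is a minimal sketch (not the Closure code) of what emulating a single 64-bit addition with a pair of 32-bit halves looks like; SHA-512 needs many of these per round, plus similarly emulated 64-bit rotates, which is where the overhead comes from:

```js
// Add two 64-bit values represented as [hi, lo] pairs of unsigned 32-bit ints.
// The low words are added in 16-bit chunks so intermediate sums stay exact,
// and the carry is propagated into the high word.
function add64(aHi, aLo, bHi, bLo) {
  var lo = (aLo & 0xffff) + (bLo & 0xffff);
  var mid = (aLo >>> 16) + (bLo >>> 16) + (lo >>> 16);
  var hi = (aHi + bHi + (mid >>> 16)) >>> 0;
  return [hi, ((mid << 16) | (lo & 0xffff)) >>> 0];
}

// SHA-256 gets to do the same work with a single 32-bit "+" instead.
```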

@koto commented Dec 16, 2014

From coruus@gmail.com on July 21, 2014 01:27:42

f...: Just so you're aware, there are some patches to the Closure library in Issue 113. I was seeing a speedup of about 20-25% for long messages for SHA1 and SHA2_32b on Chrome 35 in the perf tests. (Reduced GC activity by using a length-16 array for message scheduling; port of the sha1.js fast path to sha2.js.)

I'm planning to pull out the message-schedule precomputation code (unless you think there would be other users?) and submit the rest upstream (I guess, to you and Mark). (I strongly suspect that the changes won't reduce performance on any Closure target, but answering that obviously requires testing.) https://code.google.com/p/end-to-end/issues/detail?id=113
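
For context, the idea behind the "length-16 array for message scheduling" (sketched here from the standard technique, not copied from the patch): SHA-256's schedule only ever looks back 16 words, so a single reusable 16-element ring buffer replaces a freshly allocated 64-word array per block and avoids the GC churn:

```js
function rotr(x, n) { return ((x >>> n) | (x << (32 - n))) >>> 0; }
function sigma0(x) { return (rotr(x, 7) ^ rotr(x, 18) ^ (x >>> 3)) >>> 0; }
function sigma1(x) { return (rotr(x, 17) ^ rotr(x, 19) ^ (x >>> 10)) >>> 0; }

// w is a 16-element array reused across blocks; slots are overwritten in place.
// For t < 16 the words come straight from the message block; for t >= 16 the
// new word W[t] overwrites the slot holding W[t-16], which is no longer needed.
function scheduleWord(w, t) {
  if (t >= 16) {
    w[t & 15] = (sigma1(w[(t - 2) & 15]) + w[(t - 7) & 15] +
                 sigma0(w[(t - 15) & 15]) + w[t & 15]) >>> 0;
  }
  return w[t & 15];
}
```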

@koto commented Dec 16, 2014

From evn@google.com on July 21, 2014 17:10:57

Cc: meh...@google.com

@koto commented Dec 16, 2014

From evn@google.com on July 21, 2014 17:11:28

Owner: evn@google.com

@koto commented Dec 16, 2014

From evn@google.com on August 11, 2014 22:22:32

Still need to check the performance hit of c=255 on a CR-48. Sorry for the delay.

@koto commented Dec 16, 2014

From coruus@gmail.com on August 11, 2014 23:08:42

No problem; the performance improvements to Closure in issue 113 are still waiting to be merged. (So you may want to wait on that anyways.)

(The pull request to Closure is here: google/closure-library#322 ; meheff has reviewed.)

@koto commented Dec 16, 2014

From evn@google.com on August 11, 2014 23:22:38

Good point :-)

@koto commented Dec 16, 2014

From adhintz@google.com on August 12, 2014 15:48:39

With c=255, SHA-1, 40 bytes out, my desktop takes 2872ms. My CR-48 takes 34360ms.
With c=96, desktop 10ms, CR-48 53ms.
With c=128, desktop 20ms, CR-48 169ms.
With c=150, desktop 45ms, CR-48 421ms.
With c=196, desktop 250ms, CR-48 2749ms.

c=150 is about as high as we can go without impacting users with slower devices.
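
For reference, decoding those count octets with the RFC 4880 formula (see the decodeS2kCount sketch in the first comment) shows how steeply the work grows with c:

```js
[96, 128, 150, 196, 255].forEach(function(c) {
  console.log('c=' + c + ' -> ' + decodeS2kCount(c) + ' bytes hashed');
});
// c=96  -> 65536
// c=128 -> 262144
// c=150 -> 720896
// c=196 -> 5242880
// c=255 -> 65011712
```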

@koto commented Dec 16, 2014

From evn@google.com on August 12, 2014 15:57:31

Maybe we can also use another algorithm?

If we implement the password-bruteforcing thing, we can also just make this a bit lower for extension users.

@koto commented Dec 16, 2014

From coruus@gmail.com on August 13, 2014 04:28:44

Drew: Is that SHA-256 with the patches?

(If changing this is a priority, I can, for the moment, put the Closure pre-scheduling bits into E2E's sha2.js)

@koto commented Dec 16, 2014

From evn@google.com on August 13, 2014 09:45:58

Thanks coruus, we can wait for Closure.

The improvement from your changes is probably 20%, right? Maybe we can further improve things with typed arrays, or maybe just use WebCrypto.

@koto commented Dec 16, 2014

From coruus@gmail.com on August 13, 2014 13:55:56

With prescheduling, SHA-256 is about 7x faster than SHA-1 currently is, which makes E2E faster than GnuPG (built via Homebrew, at least).

See here: https://plus.google.com/+DavidLeonGil/posts/1GWdqjDAmUG?pid=6041494128350371042&oid=110685422985125996027 (Typed arrays already being used.)

@koto commented Dec 16, 2014

From evn@google.com on August 13, 2014 16:48:47

c=196 might be fine if it's 7x faster.. wdyt?

@koto commented Dec 16, 2014

From evn@google.com on August 13, 2014 16:50:27

Also, this is for the encrypted cipher, which is only run during key import, which is probably very rare.

We need to figure out how much lag/pain we can get away with. Maybe we can add an animation and make it a bit longer? The magic of design.

@koto commented Jun 10, 2015

evn@, ping.

@sirdarckcat commented

I think we can leave the lockable storage implementation as-is for now until we refactor things to use context2.

@sirdarckcat reopened this May 3, 2016
@sirdarckcat commented

(will close once #389 is merged)

@sirdarckcat changed the title from "S2K uses small c/bytecount, inconsistent suite of S2K-KDF-SHA1 (160b) and AES-256" to "OpenPGP: S2K uses small c/bytecount, inconsistent suite of S2K-KDF-SHA1 (160b) and AES-256" on May 3, 2016