I'm curious about how benchmarks progress over time. With the right data (and I think you have it), you can examine items like Moore's Law and Koomey's Law with respect to performance on specific tasks. One thing I'd like to be able to do is to plot MD5 hashing rates over time and use that to project necessary password complexity to make a robust password.
There are graphs like Hive's, which show crack times dropping from e.g. "34k years" to "3 years" in a single year (easier to reference by tweet, but here's the article and methodology), due to a change in methodology (one node was upgraded to eight) and a failure to account for expected hardware improvements.
Right now, I have just a few data points: an old hashcat benchmark from 2012 (23 GH/s, i.e. 23 billion hashes per second) on a Radeon HD 6990 (released 2011-03-08), Hive's 2021 benchmark of 37 GH/s on a GeForce RTX 2080 (released 2018-09-20), and Hive's 2022 benchmark of 69 GH/s on a GeForce RTX 3090 (released 2020-09-24).
In 2016, Koomey updated his projection to power efficiency doubling every 2.6 years. That appears to be the best estimate available, but it's ... not great. It projects that the 2012 test's 23 GH/s would become 293 GH/s 9.55 years later, when the 2022 test's card was released, which is quite a bit higher than the 69 GH/s actually observed. It also projects that the RTX 2080's 37 GH/s would grow to 63 GH/s by the RTX 3090's release two years later, which is close to the observed 69 GH/s.
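
For concreteness, here's a minimal sketch of that doubling-period arithmetic (the function name is just illustrative; the GH/s figures and intervals are the ones quoted above):

```python
# Project a hash rate forward assuming it doubles every `doubling_years`
# (Koomey's 2016 figure of one doubling every 2.6 years).
def project_rate(base_ghs: float, years_elapsed: float, doubling_years: float = 2.6) -> float:
    return base_ghs * 2 ** (years_elapsed / doubling_years)

print(project_rate(23, 9.55))  # ~293 GH/s projected vs. 69 GH/s observed (2011 card -> 2020 card)
print(project_rate(37, 2.0))   # ~63 GH/s projected vs. 69 GH/s observed (RTX 2080 -> RTX 3090)
```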
If you can provide single-node performance over time, excluding or else segregating special rigs like the Zotac card that wins the hashcat-1.1.x MD5 test, we could then calculate a best-fit curve on a log plot (rough sketch below). Assuming the resulting formula is plausible (that's unfortunately a big conditional!), we'd be able to do much better at projecting how long it takes to break a password when accounting for regular hardware upgrades.
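
As a sketch of what I mean by a best-fit log plot: fit log2 of the hash rate linearly against card release date, so the slope gives a doubling period. The three points below are just the ones from this issue (release dates converted to decimal years); with a fuller single-node dataset from your benchmarks, the same fit would apply unchanged.

```python
import numpy as np

# (release year as a decimal, MD5 rate in GH/s) -- the data points quoted above
points = [
    (2011.18, 23.0),  # Radeon HD 6990, 2012 hashcat benchmark
    (2018.72, 37.0),  # GeForce RTX 2080, Hive 2021 benchmark
    (2020.73, 69.0),  # GeForce RTX 3090, Hive 2022 benchmark
]

years = np.array([p[0] for p in points])
log2_rates = np.log2([p[1] for p in points])

# Least-squares line through log2(rate) vs. year; the slope is doublings per year.
slope, intercept = np.polyfit(years, log2_rates, 1)
print(f"doubling period: {1 / slope:.2f} years")

def projected_ghs(year: float) -> float:
    """Hash rate the fitted trend predicts for a card released in `year`."""
    return 2 ** (slope * year + intercept)

print(f"projected 2025 single-card rate: {projected_ghs(2025):.0f} GH/s")
```

With only these three points the fit isn't meaningful, which is exactly why more data points over time would help.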