
Adaptive keyspace to reach ideal chunk duration & optimize workload #729

Open · wants to merge 13 commits into master

Conversation

@0xVavaldi (Contributor) commented on Oct 19, 2021

Addresses: #551.
A more ideal implementation would add a modifier field to the assignment table to hold the adjustment value. Instead, this change modifies the benchmark based on the difference between the actual chunk duration and the target chunk duration.

If the chunk duration is within 10% of the target, no adjustment is made, so only larger deviations cause the benchmark to be modified.
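
Conceptually, the check and adjustment boil down to something like the following sketch (the numbers and variable names are illustrative only, not the exact code from this pull request):

// Illustrative sketch, not the exact code from this pull request.
$targetChunkTime   = 600;                               // configured chunk duration in seconds
$actualRuntime     = 750;                               // how long the last chunk really took
$benchmark         = 120000;                            // current (hypothetical) speed benchmark value
$differenceToChunk = $actualRuntime / $targetChunkTime; // 1.25 -> 25% over the target

// Inside the 10% band nothing happens; outside it, the benchmark is rescaled so that
// the next chunk should land closer to the target duration.
if ($differenceToChunk < 0.9 || $differenceToChunk > 1.1) {
    $benchmark = $benchmark / $differenceToChunk;       // here: 120000 / 1.25 = 96000
}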

@0xVavaldi (Contributor, Author) commented:

#545

@0xVavaldi (Contributor, Author) commented on Oct 20, 2021

Re: previous commits: while testing I discovered a bug in the logic that determines the new benchmark and pushed a fix; I also renamed one variable to match the naming scheme of the rest of the code.

@0xVavaldi changed the title from "Adaptive chunks" to "Adaptive keyspace to reach ideal chunk duration & optimize workload" on Oct 20, 2021
@s3inlc (Member) commented on Dec 21, 2021

Thanks for working on this improvement. There are a few things which should be tackled before merge.

  • There are multiple issues that could arise when chunks are shrunk automatically, which is why we would prefer to only allow automatic increases of the chunk size, not reductions. The adjustment can end up in a loop that shrinks chunks more and more while Hashcat keeps taking too long, because the task is simply not well suited for distribution (a chunk taking longer than expected does not necessarily mean its keyspace was too large).
  • To keep things at least a bit organized, the handling of this chunk adjustment could be put into a function (e.g. in the TaskUtils class); see the sketch after this list.
  • People may prefer to have a certain amount of control over this feature. There should be a global option (in the server config) to enable/disable any dynamic chunk resizing, and, when it is globally enabled, a per-task option to disable it selectively (if the user knows for sure that no dynamic adjustments are wanted).
  • The adjustment of the benchmark for a better-sized chunk is handled correctly for the speed benchmark, but the runtime benchmark type is not handled at all, which will cause issues for any task using the runtime benchmark. The resizing should be handled for both cases.
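
For points 2 and 3 combined, one possible shape is a small helper like the sketch below (the method, its signature, and the $resizeEnabled flag are hypothetical illustrations, not existing Hashtopolis code):

class TaskUtils {
    /**
     * Hypothetical sketch: returns the (possibly) adjusted benchmark. $resizeEnabled
     * would combine the global server-config switch with the per-task opt-out
     * suggested above.
     */
    public static function adjustedBenchmark(float $benchmark, float $actualRuntime,
                                             float $targetChunkTime, bool $resizeEnabled): float {
        if (!$resizeEnabled || $targetChunkTime <= 0) {
            return $benchmark;
        }
        $differenceToChunk = $actualRuntime / $targetChunkTime;
        if ($differenceToChunk >= 0.9 && $differenceToChunk <= 1.1) {
            return $benchmark; // within the 10% margin, keep the current benchmark
        }
        return $benchmark / $differenceToChunk;
    }
}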

In case there is anything unclear, please let me know. Also let me know if there is any assistance needed with the adjustments.

@0xVavaldi (Contributor, Author) commented on Dec 30, 2021

Re: point 1, I think the margin of error used is reasonable. Currently there is roughly a 10% margin in both directions before the benchmark is considered for adjustment, as also seen in the code:

// Not static chunks & difference in chunk time > 10%
if ($task->getStaticChunks() === 0 && ($differenceToChunk < 0.9 || $differenceToChunk > 1.1)) {
    // ... adjust the benchmark based on $differenceToChunk ...
}

These changes should never make the benchmark negative thanks to the extra if statement, and the multiplication ensures that even if a chunk completed in 1 second, the end result should be relatively close to the intended value. In practice I've learned that it takes a few chunks to get the adjustment right, because hashcat performs differently under different workloads, sometimes faster, usually slower.
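
As a worked example of that multiplication for the 1-second case (purely illustrative numbers, assuming the benchmark scales linearly with chunk runtime):

// Purely illustrative numbers, not output from this pull request.
$targetChunkTime   = 600;                              // target: 10 minutes
$actualRuntime     = 1;                                // extreme case: chunk finished in 1 second
$benchmark         = 5000;                             // current (hypothetical) benchmark value
$differenceToChunk = $actualRuntime / $targetChunkTime; // ~0.0017, far below the 0.9 threshold

// Dividing by the ratio grows the benchmark by a factor of ~600 in a single step, so the
// next chunk should already be much closer to the intended duration (subject to hashcat
// behaving differently under a larger workload, as noted above).
$benchmark = $benchmark / $differenceToChunk;          // 5000 / (1/600) = 3000000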

Re: point 2, agreed

Re: point 3, that could be a nice enhancement, but I don't believe it's required for this feature to be implemented: we already have a chunk time setting that every agent should (attempt to) adhere to by requesting a correctly sized chunk, and this change just enforces that chunk time better.

Re: point 4, I don't believe I had any issues while using the runtime benchmark, but perhaps I missed something. In which ways could it go wrong, and what needs to be taken into account to prevent that?

@s3inlc (Member) commented on Feb 23, 2022

Regarding point 1, I'm not sure I managed to make clear what I wanted to explain. Shrinking chunks is not a problem in general, but corner cases come up quite often. Depending on the task being executed, Hashcat may not be able to use its full potential because of the small keyspace portion it got for a chunk (e.g. if there are fairly large rule sets). Such small chunks are issued when hashcat reports a benchmark result (in the colon-separated speed format) whose time is far above the intended chunk duration (for example around 50'000'000 ms). In that case the chunks derived from this benchmark end up with a fairly small keyspace size.
Now suppose we have a task where such a non-ideal keyspace range causes hashcat to take much longer than the normal chunk time, for example because everything runs on one GPU or GPU utilization is low. If the chunk resizing then decides to shrink the chunk because the last one took too long, it only worsens the problem: with an even smaller chunk size, Hashcat will still take too long, and the chunk would eventually be shrunk down to a keyspace of 1 or so. The only solution for such tasks is to approach them differently (e.g. -S could solve issues with slow hashes) or to issue them as a single-chunk task.
I hope this explains why shrinking could cause more headaches than it actually solves. That's why I would still prefer to have only the increasing part.
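
If only increases are allowed, the same adjustment could simply be clamped, roughly like this (a sketch of the behaviour suggested above, not code from this pull request):

// Sketch of a "grow only" variant; variable names are illustrative.
$differenceToChunk = $actualRuntime / $targetChunkTime;

// Only act when the chunk finished clearly too fast; never shrink after a slow chunk,
// since a slow chunk may simply mean the task is badly suited for distribution.
if ($task->getStaticChunks() === 0 && $differenceToChunk < 0.9) {
    $benchmark = $benchmark / $differenceToChunk; // ratio < 1, so the benchmark only grows
}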

Point 3: At least a global option is a must, since some people certainly prefer not to have features active that make tasks harder to control and plan (and harder to follow step by step). As Hashtopolis is used in environments ranging from a single node to tens of nodes, it's better to let users control this in detail.

Point 4: It seems you fixed this issue in commit 330d958 by checking whether there are two benchmark parts, so this should be fine.
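
For reference, telling the two benchmark types apart could look roughly like the sketch below (assuming, as the colon-separated speed format mentioned above suggests, that the speed benchmark is stored as two colon-separated values while the runtime benchmark is a single number; the helper name and example value are hypothetical):

// Hypothetical helper: detect whether a stored benchmark uses the two-part speed format.
function hasSpeedBenchmarkFormat(string $benchmark): bool {
    return count(explode(":", $benchmark)) === 2;
}

$benchmark = "1000000:598"; // example value in the assumed speed format
if (hasSpeedBenchmarkFormat($benchmark)) {
    // speed benchmark: adjust as already implemented in this pull request
} else {
    // runtime benchmark: needs its own adjustment handling
}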
