possible race fix in Worker #2600

Open · wants to merge 2 commits into master
Conversation

@eaxeax commented Sep 22, 2021

The variable m_count is read and written concurrently from different threads; strictly speaking this is a data race, so in theory it needs to be fixed.

@SChernykh (Contributor)

Not needed. Each worker runs in a separate thread, so there is only one thread writing to each m_count instance. Reading in parallel with writing returns either the old or the new value, which is fine for hash counting (the next value will be read on the next iteration).
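For illustration, a minimal sketch of the single-writer pattern described above (the names and structure are assumed for the example, not taken from the actual xmrig code): the worker thread is the only writer of its m_count, and a monitoring thread reads it concurrently without synchronization.

```cpp
#include <atomic>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>

struct Worker {
    uint64_t m_count = 0;                 // plain counter: single writer, unsynchronized reader
    std::atomic<bool> m_stop { false };   // kept atomic only so the sketch terminates reliably

    void run() {
        while (!m_stop.load(std::memory_order_relaxed)) {
            // ... hash one nonce ...
            ++m_count;                    // the worker thread is the only writer
        }
    }
};

int main() {
    Worker w;
    std::thread worker(&Worker::run, &w);

    // The monitoring thread (here: main) reads m_count without synchronization.
    // Formally this is a data race; in practice it observes either the old or
    // the new value, which is the behaviour described in the comment above.
    for (int i = 0; i < 3; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::printf("hashes so far: %llu\n",
                    static_cast<unsigned long long>(w.m_count));
    }

    w.m_stop.store(true, std::memory_order_relaxed);
    worker.join();
    return 0;
}
```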

@eaxeax closed this Sep 22, 2021
@eaxeax (Author) commented Sep 23, 2021

It is still UB. Maybe it would be better to use memory_order_relaxed for the store and load? I think that would be correct for CpuWorker, but it adds overhead for GPU mining. What do you think?
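A rough sketch of what the suggested change could look like (the member and function names here are assumed for illustration, not the actual patch): m_count becomes a std::atomic<uint64_t> accessed with relaxed ordering, which removes the formal data race without adding any ordering constraints.

```cpp
#include <atomic>
#include <cstdint>

struct Worker {
    std::atomic<uint64_t> m_count { 0 };

    // Writer side (the worker's own thread). With a single writer, a relaxed
    // load followed by a relaxed store is enough; no read-modify-write needed.
    void addHashes(uint64_t n)
    {
        m_count.store(m_count.load(std::memory_order_relaxed) + n,
                      std::memory_order_relaxed);
    }

    // Reader side (hashrate/monitoring thread). A relaxed load always returns
    // a complete, non-torn value: either the old count or the new one.
    uint64_t count() const
    {
        return m_count.load(std::memory_order_relaxed);
    }
};
```

On x86-64 and AArch64 a relaxed load or store of a 64-bit value compiles to an ordinary mov/ldr/str, so the runtime cost is small, although marking the counter atomic can still prevent the compiler from caching it in a register inside the hashing loop, which is presumably the overhead concern for the hot path.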

@eaxeax reopened this Sep 23, 2021
@SChernykh (Contributor)

There's a good rule in programming: don't fix what isn't broken. The GPU worker also writes and reads m_count from different threads, by the way. I already explained in my previous message what "UB" really means in this case: it works perfectly fine on both x86 and ARM. The only place that might need fixing is CpuWorker&lt;N&gt;::hashrateData, because it reads m_count twice and has a 0.0000...0001% chance of reading two different values. But even if that happens, it will be corrected on the very next iteration, and the value is only used to display the hashrate. An atomic is not needed here; this is the main mining loop, and it's better not to add unnecessary overhead.
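To make the double-read point concrete, here is a sketch of the issue and the usual local-snapshot workaround (the signatures below are hypothetical, not the real CpuWorker&lt;N&gt;::hashrateData):

```cpp
#include <cstdint>

// Hypothetical shape of a hashrate-reporting helper, for illustration only.
struct CpuWorkerSketch {
    uint64_t m_count = 0;   // incremented concurrently by the worker thread

    // Double read: if the worker increments m_count between the two loads,
    // the two reported values can disagree with each other.
    void hashrateDataTwoReads(uint64_t &hashCount, uint64_t &rawHashes) const
    {
        hashCount = m_count;
        rawHashes = m_count;   // may differ from hashCount
    }

    // Single snapshot: read m_count once into a local and derive both outputs
    // from it, so the reported pair is at least internally consistent.
    void hashrateDataOneRead(uint64_t &hashCount, uint64_t &rawHashes) const
    {
        const uint64_t count = m_count;
        hashCount = count;
        rawHashes = count;
    }
};
```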
