
Cuda scheduling #133

Open · mratsim opened this issue May 16, 2020 · 1 comment

mratsim (Owner) commented May 16, 2020

For numerical computing, it would be interesting to schedule and keep track of Cuda kernels on Nvidia GPUs with an interface similar to the CPU parallel API.

The focus is on task parallelism and dataflow parallelism (task graphs). Data parallelism (parallelFor) should be handled in the GPU kernel.
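As a rough illustration of that split (kernel names and sizes here are hypothetical, not part of any existing API), independent tasks would each be submitted to their own stream so the GPU may overlap them, while the data-parallel loop lives inside each kernel:

```cuda
#include <cuda_runtime.h>

// Two independent "tasks"; the parallelFor-style loop is inside each kernel.
__global__ void taskA(float* x, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) x[i] *= 2.0f;
}

__global__ void taskB(float* y, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] += 1.0f;
}

int main() {
  const int n = 1 << 20;
  float *x, *y;
  cudaMalloc(&x, n * sizeof(float));
  cudaMalloc(&y, n * sizeof(float));

  // Task parallelism: unrelated kernels go on separate streams
  // so the hardware is free to run them concurrently.
  cudaStream_t s1, s2;
  cudaStreamCreate(&s1);
  cudaStreamCreate(&s2);

  taskA<<<(n + 255) / 256, 256, 0, s1>>>(x, n);
  taskB<<<(n + 255) / 256, 256, 0, s2>>>(y, n);

  cudaStreamSynchronize(s1);
  cudaStreamSynchronize(s2);

  cudaStreamDestroy(s1);
  cudaStreamDestroy(s2);
  cudaFree(x);
  cudaFree(y);
  return 0;
}
```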

From this presentation, https://developer.download.nvidia.com/CUDA/training/StreamsAndConcurrencyWebinar.pdf, we can use CudaEvent for synchronizing concurrent kernels:

[slides from the webinar illustrating cross-stream synchronization via cudaEventRecord / cudaStreamWaitEvent]

(note: there seems to be a typo in the slide's code, it should be

cudaStreamWaitEvent ( stream, event );       // wait for event in stream1

)

At first glance, an event seems to fire once all work submitted to its stream before the event was recorded has completed.
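A minimal sketch of the pattern from those slides, with hypothetical producer/consumer kernels standing in for real work (only the event/stream calls are from the CUDA runtime API):

```cuda
#include <cuda_runtime.h>

__global__ void producer(float* buf, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) buf[i] = (float)i;         // fill the buffer
}

__global__ void consumer(float* buf, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) buf[i] += 1.0f;            // depends on producer's output
}

int main() {
  const int n = 1 << 20;
  float* buf;
  cudaMalloc(&buf, n * sizeof(float));

  cudaStream_t stream1, stream2;
  cudaStreamCreate(&stream1);
  cudaStreamCreate(&stream2);

  // cudaEventDisableTiming: a pure synchronization event, cheaper to record.
  cudaEvent_t done;
  cudaEventCreateWithFlags(&done, cudaEventDisableTiming);

  producer<<<(n + 255) / 256, 256, 0, stream1>>>(buf, n);
  cudaEventRecord(done, stream1);       // fires once prior work in stream1 completes
  cudaStreamWaitEvent(stream2, done, 0); // stream2 stalls here until the event fires
  consumer<<<(n + 255) / 256, 256, 0, stream2>>>(buf, n);

  cudaStreamSynchronize(stream2);

  cudaEventDestroy(done);
  cudaStreamDestroy(stream1);
  cudaStreamDestroy(stream2);
  cudaFree(buf);
  return 0;
}
```

The wait is enqueued on the stream, not on the host: the CPU returns immediately and only stream2's subsequent work is held back, which is the property that makes events usable as edges in a task graph.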

mratsim (Owner, Author) commented May 16, 2020

An interesting concurrent queue for scheduling tasks on GPU, the broker queue:
https://arbook.icg.tugraz.at/schmalstieg/Schmalstieg_353.pdf
