How does shared memory speed up inference? #7126

Open
NikeNano opened this issue Apr 17, 2024 · 6 comments
Labels
question Further information is requested

Comments

@NikeNano

Description
The docs state that:

Using shared memory instead of sending the tensor data over the GRPC or REST interface can provide significant performance improvement for some use cases.

But what is the reason for the increased performance? In my case I still need to move the data to the CPU for postprocessing and eventually send an event over Kinesis. Is shared memory, in NVIDIA Triton terms, different from CUDA shared memory? CUDA shared memory is very limited, while for Triton there seems to be no upper limit, completely separate from the actual hardware and the amount of CUDA shared memory it has. I tried to read the docs for further information but could not find it. I have seen perf_analyzer produce 5x the throughput using CUDA shared memory but fail to reproduce this.

Triton Information
24.01, the official images.

To Reproduce

Can not share models :(

Expected behavior

@Tabrizian
Member

Is shared memory, in NVIDIA Triton terms, different from CUDA shared memory?

Yes. "CUDA shared memory" is Triton terminology for transferring CUDA tensors between the client and the server without having to pass them over the network.

But what is the reason for the increased performance?

The reason for the performance improvement is that you don't have to transfer the tensor over the network. The benefit is more significant with larger tensors.
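For example, with the Python gRPC client the end-to-end flow looks roughly like this (a minimal sketch based on the CUDA shared-memory utilities in tritonclient; the URL, model name, tensor names, and sizes below are placeholders, not taken from this issue):

```python
# Sketch of the CUDA shared-memory flow with the Python gRPC client
# (adapted from the Triton client examples; names and shapes are placeholders).
import numpy as np
import tritonclient.grpc as grpcclient
import tritonclient.utils.cuda_shared_memory as cudashm
from tritonclient.utils import triton_to_np_dtype

client = grpcclient.InferenceServerClient(url="localhost:8001")  # placeholder URL

input_data = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
input_byte_size = input_data.size * input_data.itemsize
output_byte_size = 4 * 1000  # placeholder output size in bytes

# Allocate CUDA regions on GPU 0 and register them with the server (done once).
shm_ip_handle = cudashm.create_shared_memory_region("input_data", input_byte_size, 0)
shm_op_handle = cudashm.create_shared_memory_region("output_data", output_byte_size, 0)
client.register_cuda_shared_memory(
    "input_data", cudashm.get_raw_handle(shm_ip_handle), 0, input_byte_size)
client.register_cuda_shared_memory(
    "output_data", cudashm.get_raw_handle(shm_op_handle), 0, output_byte_size)

# Copy the input into the CUDA region; only a small handle travels over gRPC.
cudashm.set_shared_memory_region(shm_ip_handle, [input_data])

inputs = [grpcclient.InferInput("INPUT__0", list(input_data.shape), "FP32")]
inputs[0].set_shared_memory("input_data", input_byte_size)
outputs = [grpcclient.InferRequestedOutput("OUTPUT__0")]
outputs[0].set_shared_memory("output_data", output_byte_size)

result = client.infer(model_name="my_model", inputs=inputs, outputs=outputs)

# Read the output back from the CUDA region when CPU post-processing is needed.
out_meta = result.get_output("OUTPUT__0")
output_np = cudashm.get_contents_as_numpy(
    shm_op_handle, triton_to_np_dtype(out_meta.datatype), out_meta.shape)
```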

@NikeNano
Author

NikeNano commented Apr 19, 2024

Thank you for the answer @Tabrizian!

perf_analyzer -m defect-classifier -u triton:8500 -i gRPC --concurrency-range=1

Concurrency: 1, throughput: 184.18 infer/sec, latency 5425 usec

vs

perf_analyzer -m defect-classifier -u triton:8500 -i gRPC --shared-memory=cuda --concurrency-range=1

Concurrency: 1, throughput: 555.442 infer/sec, latency 1797 usec

It seems to make a huge difference in our use case. However, we never seem to be able to get the same throughput from our own Python client. Are there any best practices, in terms of client implementation in Python or C++, for achieving results similar to perf_analyzer? Does perf_analyzer throughput also include transferring data back to the CPU when CUDA shared memory is used?

@Tabrizian
Member

Does perf_analyzer timing also include transferring data back to the CPU when CUDA shared memory is used?

@matthewkotila / @tgerdesnv do you know whether Perf Analyzer includes the time to copy data to CUDA shared memory?

@NikeNano For Python clients, did you also use CUDA shared memory?

@matthewkotila
Contributor

@NikeNano: Does perf_analyzer throughput also include transferring data back to the CPU when CUDA shared memory is used?

I'm not sure if I understand. The calculation for throughput simply counts how many inferences (request-response sets) were completed during a period of time, and divides by the period of time. Everything that has to happen in order for the inference to complete (including CUDA shared memory, CPU transfers, etc) is inherently included in that measurement.
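As a rough sanity check using the numbers posted above (assuming a closed-loop run at concurrency 1, where throughput ≈ 1 / end-to-end latency):

```python
# Closed-loop sanity check: at concurrency 1, throughput should be roughly 1 / latency,
# so the reported latency already accounts for everything done per request.
latency_no_shm_s = 5425e-6    # reported latency without shared memory (seconds)
latency_cuda_shm_s = 1797e-6  # reported latency with --shared-memory=cuda (seconds)

print(1.0 / latency_no_shm_s)    # ~184 infer/sec, matching the reported 184.18
print(1.0 / latency_cuda_shm_s)  # ~556 infer/sec, close to the reported 555.442
```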

@NikeNano
Author

NikeNano commented Apr 23, 2024

@NikeNano For Python clients, did you also use CUDA shared memory?

Yes. We are trying to reimplement it in C++ as well, but our feeling now is that we are somehow bottlenecked and very far from the perf_analyzer performance.

@NikeNano
Author

@NikeNano: Does perf_analyzer throughput also include transferring data back to the CPU when CUDA shared memory is used?

I'm not sure if I understand. The calculation for throughput simply counts how many inferences (request-response sets) were completed during a period of time, and divides by the period of time. Everything that has to happen in order for the inference to complete (including CUDA shared memory, CPU transfers, etc) is inherently included in that measurement.

Questions for clarification when using perf_analyzer with CUDA shared memory (based upon the Python example):

  • Do we allocate CUDA memory once, with a single call to cudashm.create_shared_memory_region, or reallocate for each request?
  • Do we move data from the CPU into the GPU region with cudashm.set_shared_memory_region(shm_ip0_handle, [data]) for each request?
  • Do we move the data back from GPU to CPU with cudashm.get_contents_as_numpy for each request?

Based upon your previous answer, @matthewkotila, I understand that the answer is yes.
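For concreteness, this is roughly the per-request structure we are assuming (a sketch following the Triton Python CUDA shared-memory example: regions are created, registered, and bound to the inputs/outputs once up front, and each request only copies data in and out; the handles, output name, and helper are placeholders, not our actual client):

```python
# Sketch of the assumed per-request loop: region allocation and registration
# happen once; per-request work is the host<->device copies plus the infer call.
import tritonclient.utils.cuda_shared_memory as cudashm
from tritonclient.utils import triton_to_np_dtype

def run_requests(client, inputs, outputs, shm_ip_handle, shm_op_handle, batches):
    """Assumes `inputs`/`outputs` already call set_shared_memory on the
    registered regions backing `shm_ip_handle` / `shm_op_handle`."""
    results = []
    for data in batches:
        # Per request: copy the input batch from host memory into the CUDA region.
        cudashm.set_shared_memory_region(shm_ip_handle, [data])

        response = client.infer(
            model_name="defect-classifier", inputs=inputs, outputs=outputs)

        # Per request: copy the output back to host memory for post-processing.
        out_meta = response.get_output("OUTPUT__0")  # placeholder output name
        results.append(cudashm.get_contents_as_numpy(
            shm_op_handle, triton_to_np_dtype(out_meta.datatype), out_meta.shape))
    return results
```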

Thanks for the help.
