
AMD support on linux debian + Ollama #164

Open
Potentiated opened this issue Dec 12, 2023 · 2 comments

@Potentiated
Contributor

I have been working on a way around Ollama only supporting NVIDIA drivers, so that people can run Morpheus on more machines.

However, AMD support on Linux is, to say the least, catastrophic in general: there is no support for the hardware I am currently trying to run on, a Ryzen 7 5800X.

If anyone has an AMD card they want to test, I can guide you through the process to see if we can get it working; you will just need to confirm that there is ROCm HIP SDK support for that specific card (see the sketch below for a quick way to check).
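As a minimal sketch of that check, assuming the ROCm userspace packages are already installed (rocminfo ships with them and is typically found under /opt/rocm/bin):

```bash
# Identify the AMD GPU on the PCI bus
lspci | grep -i vga

# rocminfo reports each agent's gfx target (e.g. gfx1100 for the
# RX 7900 XT/XTX); compare that value against AMD's ROCm/HIP SDK
# supported-GPU list for your ROCm version.
/opt/rocm/bin/rocminfo | grep -i gfx
```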

@Potentiated
Contributor Author

From what I have surmised from testing, the ROCm drivers only support a couple of AMD GPUs (the 7900 series and one other). Their documentation page is currently down, I think because they are about to release ROCm 6.0.

This is because the ROCm HIP SDK is only supported on those GPUs; from what I can find, there is no support for any earlier models.

That said, the fix would involve testing the ROCm HIP SDK with one of those supported GPUs, unpacking it so Ollama can recognise it, and then having Debian switch over to the ROCm drivers for that GPU instead of whatever driver it is reading at the time; you would need to overwrite it.
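For cards that are close relatives of a supported gfx target, one generic community workaround (an assumption on my part that it applies here; it is the standard ROCm override, not something specific to this issue) is to spoof the gfx version so the HIP runtime treats the card as a supported one:

```bash
# Tell ROCm's HSA runtime to treat the GPU as a different gfx target;
# 10.3.0 (gfx1030) is the value commonly used for RDNA2-family cards.
# This only helps when the card's real ISA is compatible with the
# spoofed target -- otherwise expect crashes or garbage output.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
ollama serve
```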

Then I believe you would need to install Ollama and MOR node 0.0.4 (as 0.0.5 has Ollama already configured) so that Ollama can read the driver correctly, then serve Ollama and load the model.
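The Ollama side of those steps is standard; a sketch (the install script URL is Ollama's documented one, and llama2 is just an example model name):

```bash
# Install Ollama via its official install script
curl -fsSL https://ollama.com/install.sh | sh

# Start the server; its startup logs should say whether it picked up
# a GPU backend or fell back to CPU
ollama serve &

# Pull a model and run a prompt to test inference end to end
ollama pull llama2
ollama run llama2 "hello"
```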

Everything after the ROCm HIP installation part is hypothesis, as I do not have a supported AMD GPU to test with.

@DavidAJohnston
Contributor

Thanks for the notes here. Given that the Ollama code is changing a lot lately, it might be worth revisiting this as Compute phase 2 gets closer.
