[Knowledge] ROCm (AMD GPU) Support on Linux Guide #868
Comments
I forgot to mention, but make sure that you run that command before starting the server every time, or it will crash and be unresponsive.
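The command this comment refers to was stripped out of the copy above; based on the rest of the thread, it is most likely the GFX override export (the value `10.3.0` is the RDNA 2 assumption used elsewhere in this guide, not something this comment confirms):

```shell
# Assumed command: spoof the reported GFX target so ROCm's prebuilt
# kernels (built for gfx1030) load on nearby RDNA 2 chips.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
```

Exports only persist for the current shell, which is why it has to be re-run in every new terminal before starting the server.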
Great work!
I can confirm this setup works with my 5500 XT on Linux; however, this card uses an older GFX version (10.1.x), which this build of PyTorch apparently doesn't like.
Hmm, I'm a little confused about what you mean by this. It works if you run all the commands (e.g. the export and pip install for your 5000 series card, meaning you install torch 1.13.1), but normally it wouldn't?
What I mean is that modifying the environment variable makes it work. Furthermore, when activating the voice changer from the terminal and starting it for the first time from the web browser (not just simply stopping and restarting it), I get this warning:
However, my package manager (pacman) says it appears to be installed system-wide. I don't know how to remove the warning, as the instructions target Ubuntu and Ubuntu-based distributions. Arch does have this package, and installing it doesn't remove the warning.
I've been in touch with TheTrustedComputer, and I might have been misleading due to my lack of prior research. I did some more digging, though, and here's the deal: the issue with AMD's compute stack (which might just be a packaging issue) is that they tend to build binaries that only target some cards, and a given GPU can only be used if both PyTorch and the local ROCm libraries were built for it. As an example, whatever Arch ships for ROCBLAS (one of the ROCm libraries) doesn't have that many targets, meaning that for most cards you have to override to the closest target the library was built for. I suppose this is the case for more packages.

On my setup, with a gfx1035 card, I can't run PyTorch, as neither my local installation of the ROCm libraries nor PyTorch was built for it. They were built for gfx1030, though, and my card is close enough to it. This is why the override is required: most installations don't ship "fat" binaries that support all targets, unlike CUDA, which has a different mechanism that lets it support all of its targets more easily. Finding the right override can be a pain, though; I'm not sure how to document it well.

Edit: It's weird that the "Unable to find code object" errors only show up in some cases, while in others it just segfaults.

EDIT: It seems like the torch ROCm package is self-sufficient, and the host libraries don't have much to do with it.
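Finding the right override, as described above, can be sketched roughly like this (the ROCBLAS library path is a typical location for distro ROCm packages and may differ on your system):

```shell
# What target does the GPU actually report?
rocminfo | grep -o 'gfx[0-9a-f]*' | sort -u

# Which targets was the local ROCBLAS built for?
# (path is typical for Arch/Ubuntu ROCm packages -- adjust if needed)
ls /opt/rocm/lib/rocblas/library/ | grep -o 'gfx[0-9a-f]*' | sort -u

# Then override to the closest built target, e.g. gfx1035 -> gfx1030:
export HSA_OVERRIDE_GFX_VERSION=10.3.0
```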
Is this process similar to windows? I would love to get ROCm working on my 7800xt. Would a translation layer be necessary? |
You need to wait for a PyTorch ROCm backend to land for Windows. Most of the ROCm libraries are already ported, but there are still some left (MIOpen, I think) before PyTorch can work there.
Where can I find MMVCServerSIO.py?
You might be able to use WSL to get this to work but I have not tried it myself. Let me know how it goes if you give it a shot - I might be able to help if you run into any issues.
It is located within the "server" folder.
Is this folder in the Windows version, too?
For Polaris users (RX 580, RX 590, etc.) on Arch Linux, the HIP binaries provided by the official repositories don't work. Here's a small guide for users of older cards:
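The guide's steps didn't survive the copy here; for Polaris (gfx803), the commonly needed part is a pair of environment overrides. These particular variables and values are my assumption for Polaris cards, not something confirmed by this comment, so verify them against your own setup:

```shell
# Polaris (RX 470/480/570/580/590) is gfx803, which upstream ROCm builds
# dropped; these overrides are commonly needed to get it recognized.
export ROC_ENABLE_PRE_VEGA=1
export HSA_OVERRIDE_GFX_VERSION=8.0.3
```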
```
>>> import torch
>>> torch.cuda.is_available()
```

If you see `True` there, you're good to go.
Note: onnxruntime may give errors like:
Thank you, this worked, although I had to install some Python modules manually, since the current requirements.txt file will download the NVIDIA packages. Also, on Arch Linux I compiled Python 3.10, since 3.11.5 is incompatible with onnx.
I can't record from the mic successfully because of this issue:
I have an input sample rate of 48k and the models are 40k; am I missing some modules?
It was a Firefox issue.
Description
Hello,
I managed to get my GPU to display within the web interface by executing the following commands after setting up my Conda environment. Your mileage may vary, as this was only tested on an RDNA 2/Navi 2 AMD GPU, specifically my 6700 XT. I would love to know if this works for anyone else, so please let me know so that I may open a pull request.
This was tested on Arch Linux but might work on other distributions. If not, you can always try with distrobox.
Before running any commands, make sure that you have cd'd into the cloned repository and have the environment activated, i.e. via:

```
conda activate vcclient-dev
```
If you are running a 7000 series GPU, the last pip install command will look like this instead:
and if you are on the older Navi (5000 series) cards, it will be this:
Make sure to run only the one pip install command that matches your particular GPU. Running it will uninstall the previous version of torch and some other modules and replace them with the ROCm ones.
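The exact pip install lines were lost in the copy above. As a sketch, the PyTorch ROCm wheels are installed from a dedicated index per ROCm release; the specific versions below are assumptions based on what was current around the time of this thread, so check the PyTorch site for the right index for your card:

```shell
# RDNA 2 (6000 series) - stable ROCm wheels:
pip install torch torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2

# RDNA 3 (7000 series) - needed nightly wheels at the time:
# pip install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.6

# RDNA 1 (5000 series) - older release (torch 1.13.1, per the comments above):
# pip install torch==1.13.1 torchaudio==0.13.1 --index-url https://download.pytorch.org/whl/rocm5.2
```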
Then finally, run as normal:
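The start command was stripped from the copy above; going by the project's README, the usual invocation is the following (the port and flags are the README defaults, not something specific to this guide):

```shell
python3 MMVCServerSIO.py -p 18888 --https true
```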
And if you want to pipe your audio through a virtual audio device with PipeWire, you can create a "virtual audio cable" with the following command:
This will create "Audiolink Speaker" and "Audiolink Mic" devices. You will pipe the voice-changer audio through the speaker and set the microphone as the input device in your application, e.g. Discord.
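The command itself didn't survive the copy. One way to get an equivalent virtual cable on PipeWire (through its pipewire-pulse compatibility layer) is a null sink, whose monitor source acts as the virtual microphone; the device names here are assumptions mirroring the audiolink ones mentioned above:

```shell
# Creates a "speaker" sink; its monitor source serves as the mic side.
pactl load-module module-null-sink \
  sink_name=audiolink \
  sink_properties=device.description=Audiolink_Speaker
```

Unload it again with `pactl unload-module module-null-sink` when you're done.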
Credits
- stable-diffusion-webui for the ROCm pip install commands
- audiolink