Reduced and now seemingly removed support for Mali? #2274
❓ General Questions

I've noticed that your documentation is not thorough with regard to the Mali GPU, and now even the blog post has been removed.

This is, in my opinion, such a bad idea :(

Perhaps the single best argument for moving from llama.cpp to MLC was its support for this very cheap computer (the Orange Pi).

Comments
Thank you for the input. We'd love to keep supporting Mali. The new link can be found here: https://blog.mlc.ai/2024/04/20/GPU-Accelerated-LLM-on-Orange-Pi. I think this happened because we updated the post for Llama 3 and changed its date, so the blog post URL changed. If you find more places for improvement, we are all ears.
I also added a redirect so the old link still works. Thanks @federicoparra for spotting it.
Could you build pip packages for the Orange Pi 5?
Also, since I have your attention :) @tqchen, could you answer this: #2244? Namely, is there a way with MLC compilation to run an iterative optimization for one's specific hardware, akin to what AutoTVM does?
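For reference, here is a minimal sketch of what AutoTVM-style iterative tuning looks like in upstream TVM. This is the classic `autotvm` API, not an MLC-specific flow; the toy dense layer, the Mali OpenCL target, and the RPC tracker key `orangepi` are all illustrative assumptions:

```python
import tvm
from tvm import relay, autotvm

# Toy Relay module (a single dense layer) just to produce tunable tasks;
# a real workload would come from an imported model.
data = relay.var("data", shape=(1, 64), dtype="float32")
weight = relay.var("weight", shape=(128, 64), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([data, weight], relay.nn.dense(data, weight)))

# Hypothetical target for a Mali GPU on an aarch64 board such as the Orange Pi 5.
target = tvm.target.Target("opencl -device=mali", host="llvm -mtriple=aarch64-linux-gnu")

# Extract tunable tasks and tune each one iteratively, measuring candidate
# schedules on the device through an RPC tracker ("orangepi" is a made-up key).
tasks = autotvm.task.extract_from_program(mod["main"], target=target, params={})
for task in tasks:
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(
        n_trial=64,  # small budget for illustration; real runs use far more trials
        measure_option=autotvm.measure_option(
            builder=autotvm.LocalBuilder(),
            runner=autotvm.RPCRunner("orangepi", host="127.0.0.1", port=9190, number=5, repeat=1),
        ),
        callbacks=[autotvm.callback.log_to_file("mali_tuning.log")],
    )

# Compile using the best schedules found during tuning.
with autotvm.apply_history_best("mali_tuning.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params={})
```

Whether MLC's compilation pipeline exposes an equivalent tuning loop is exactly the question being asked here.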
On pip packages: MLC's pip packages are built nightly via https://github.com/mlc-ai/package/, using GitHub Actions. I think the main barrier for the Orange Pi (an ARM64 build) is getting the GitHub Actions runners (which are x86) to cross-build for that architecture. We do already have a Mac ARM64 runner, so if Docker works in that environment, that may be another route.
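For what it's worth, one common workaround for x86 runners is to run the build under QEMU emulation inside an aarch64 container. A rough sketch follows; the container image, Python version, and build command are illustrative assumptions, not MLC's actual CI setup:

```python
#!/usr/bin/env python3
"""Sketch: cross-build an aarch64 wheel on an x86 host by running the build
inside an emulated ARM64 container (QEMU binfmt + Docker). Slow but simple."""
import os
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Register QEMU binfmt handlers so the x86 kernel can exec aarch64 binaries.
run(["docker", "run", "--rm", "--privileged",
     "multiarch/qemu-user-static", "--reset", "-p", "yes"])

# Build the wheel inside a manylinux aarch64 container with the repo mounted.
run(["docker", "run", "--rm", "--platform", "linux/arm64",
     "-v", f"{os.getcwd()}:/workspace", "-w", "/workspace",
     "quay.io/pypa/manylinux2014_aarch64",
     "bash", "-c", "/opt/python/cp311-cp311/bin/pip wheel . -w dist/"])
```

Emulated builds can be an order of magnitude slower than native ones, which is why a native ARM64 runner (such as the Mac ARM64 runner mentioned above) may be the more practical route.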