I'm trying to see what can run on an 8GB Raspberry Pi 5, and it occurs to me that your approach might scale down really well. Any tips for replicating what you did with something like TinyLlama, or for trying an 8-bit quantization of LLaVA-Phi? I'd love to try training some sort of student model as an experiment from the more successful models you've trained.
For what it's worth, 4-bit quantizations of LLaVA 1.6 work quite well even in the limited context of a Raspberry Pi. I'll try quantizing MoE-LLaVA soon. Let me know if this is interesting.
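For anyone curious what 4-bit vs. 8-bit quantization actually trades away, here is a minimal NumPy sketch of absmax block quantization, a simplified version of what llama.cpp-style `Qn_0` formats do (real formats add per-block packing and fancier scale handling; the function names here are just illustrative):

```python
import numpy as np

def quantize_absmax(weights, bits=4, block=32):
    """Absmax block quantization: scale each block so its largest
    magnitude maps to the signed integer range, then round.
    A simplified sketch, not the actual llama.cpp implementation."""
    qmax = 2 ** (bits - 1) - 1              # 7 for 4-bit, 127 for 8-bit
    w = weights.reshape(-1, block)
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0               # avoid divide-by-zero on all-zero blocks
    q = np.clip(np.round(w / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    """Reconstruct approximate float weights from integers and scales."""
    return (q.astype(np.float32) * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)

for bits in (8, 4):
    q, s = quantize_absmax(w, bits=bits)
    err = np.abs(dequantize(q, s) - w).mean()
    print(f"{bits}-bit mean abs reconstruction error: {err:.4f}")
```

Running this shows the 8-bit reconstruction error is an order of magnitude smaller than the 4-bit one, which is why 8-bit is the safer first experiment on an 8GB Pi, while 4-bit roughly halves memory again at a modest accuracy cost.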