Thanks for your work! I find that the inference script at UltraChat/UltraLM/inference_cli.py at main · thunlp/UltraChat · GitHub is still a vanilla one. Do you plan to provide deployment scripts for low-resource devices such as a MacBook?

Hi, thanks for your question.
Yes, supporting local deployment on low-resource devices is definitely a must-do item for future work. For now, though, we are focusing on training and releasing larger and better models. Later, we will explore techniques for low-resource LLM deployment.
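For anyone looking for a stopgap before official low-resource support lands, below is a minimal sketch of what a reduced-memory inference script could look like on an Apple-silicon MacBook. It is not an official UltraChat deployment path: it assumes a merged UltraLM checkpoint is already available locally at `./ultralm-13b` (a hypothetical path; the released UltraLM weights are delta weights that must first be merged with LLaMA), loads the weights in fp16 with Hugging Face `transformers`, and uses PyTorch's MPS backend when available.

```python
# Sketch of a reduced-memory inference script for Apple-silicon MacBooks.
# Assumes a merged UltraLM checkpoint at ./ultralm-13b (hypothetical path);
# this is an illustration, not an official UltraChat deployment script.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./ultralm-13b"  # hypothetical local path to merged weights

# Prefer Apple's Metal (MPS) backend when present, otherwise fall back to CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,   # roughly halves memory vs. fp32
    low_cpu_mem_usage=True,      # avoid materializing a second full copy while loading
).to(device)
model.eval()

# Illustrative prompt only; the actual conversation template should follow
# the format used by the repo's inference_cli.py.
prompt = "User: Hello, who are you?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that a 13B model in fp16 still needs roughly 26 GB just for the weights, so on machines with less unified memory the more common route is to convert the merged checkpoint to a 4-bit GGML/GGUF file and run it with llama.cpp instead.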