[Request] Can a llama model be kept alive during a conversation? #2260
-
🥰 Description of requirements
I deployed Ollama with Docker; the machine is an i-3600k + RTX 4090.

🧐 Proposed solution
Can the model be kept alive after a conversation starts? Or is there another approach?

📝 Additional information
No response
Replies: 5 comments
-
You might have to ask the Ollama community about this.
-
-
See the Ollama deployment documentation; I previously submitted a PR for the corresponding environment variable, which can extend how long the model stays alive.
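For context, Ollama exposes an `OLLAMA_KEEP_ALIVE` environment variable that controls how long a model stays loaded in memory after a request (the default is 5 minutes). A minimal sketch of setting it for a Docker deployment, assuming the standard `ollama/ollama` image and NVIDIA GPU support:

```shell
# Keep loaded models in memory for 24 hours after the last request.
# A value of -1 keeps the model loaded indefinitely.
docker run -d \
  --gpus=all \
  -e OLLAMA_KEEP_ALIVE=24h \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```

The same behavior can also be overridden per request via the `keep_alive` field accepted by Ollama's `/api/generate` and `/api/chat` endpoints.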