
[Serving] Example to chat from command line #74

Open
carsonwang opened this issue Jan 22, 2024 · 0 comments

Comments

@carsonwang
Contributor

Currently, the examples just send requests to the serving server via HTTP requests, the OpenAI SDK, etc. We can only demo chat from the web UI. It would be useful to support chat from the command line as well.
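A minimal sketch of what such a command-line chat could look like, assuming the serving server exposes an OpenAI-compatible `/v1/chat/completions` endpoint (the URL, port, and model name below are placeholders, not confirmed by this repo):

```python
import json
import urllib.request

# Hypothetical endpoint; adjust host/port/model to match your deployment.
SERVER_URL = "http://localhost:8000/v1/chat/completions"
MODEL_NAME = "my-model"


def build_payload(history, user_input, model=MODEL_NAME):
    """Append the new user turn and build an OpenAI-style request body."""
    messages = history + [{"role": "user", "content": user_input}]
    return {"model": model, "messages": messages}


def chat_once(history, user_input):
    """Send one turn to the server and record both turns in the history."""
    payload = build_payload(history, user_input)
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    reply = body["choices"][0]["message"]["content"]
    history.append({"role": "user", "content": user_input})
    history.append({"role": "assistant", "content": reply})
    return reply


def main():
    history = []
    print("Type a message (Ctrl-D to quit).")
    while True:
        try:
            user_input = input("you> ")
        except EOFError:
            break
        print("assistant>", chat_once(history, user_input))


if __name__ == "__main__":
    main()
```

Keeping the full `history` in each request is what makes the session multi-turn; a streaming variant could instead read server-sent events chunk by chunk.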

zhangjian94cn pushed a commit to zhangjian94cn/llm-on-ray that referenced this issue Feb 4, 2024
* fix bugs and update codes for enabling workflow on borealis

* update

* update

* update

* update

* update

* add file

* Update README.finetune.gpu.md

* update

* update

* add gpu workflow yml

* up

* up

* up

* up

* up

* just disable for debugging

* update

* update

* update

* [common] add device option for TorchConfig (intel#126)

* add device option for TorchConfig

* update

* update

* update

* fix bugs and update codes for enabling workflow on borealis

* update

* update

* update

* up

* update

* update

* update

* update

* update

* fix comments

* update

* update

* update

* update

* update

* update

* update

* update
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

No branches or pull requests

1 participant