
[Usage]: prompt_logprobs from endpoint #4747

Open · Labels: usage (How to use vllm) · 0 comments
basma-b commented May 10, 2024


I want to get the logprobs from a vLLM endpoint for the prompt + answer, in order to evaluate the LLM on a selective task. How can I do that?

curl --location URL/v1/chat/completions \
--header "Content-Type: application/json" \
--data '{
    "model": "model_name",
    "echo": true,
    "messages": [
        {"role": "user", "content": "hello"}
    ],
    "logprobs": true,
    "top_logprobs": 1
}'

I am using this request, but I only get the logprobs of the answer tokens, not the prompt. Can anyone help, please?
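For reference, here is what I am trying to reproduce over HTTP. With vLLM's offline Python API, prompt logprobs can be requested through SamplingParams(prompt_logprobs=...) — a minimal sketch (the model name is just a placeholder):

```python
from vllm import LLM, SamplingParams

# Placeholder model; any model served by vLLM works the same way.
llm = LLM(model="facebook/opt-125m")

# prompt_logprobs requests logprobs for the prompt tokens,
# logprobs requests them for the generated tokens.
params = SamplingParams(max_tokens=16, logprobs=1, prompt_logprobs=1)

outputs = llm.generate(["hello"], params)
for out in outputs:
    print(out.prompt_logprobs)      # per-prompt-token logprobs (first entry is None)
    print(out.outputs[0].logprobs)  # per-generated-token logprobs
```

Over HTTP, I believe the /v1/completions route accepts "echo": true together with an integer "logprobs", which should return the prompt logprobs as well, but I have not confirmed whether /v1/chat/completions honors "echo".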

