Models: REQUEST_VALIDATION_ERROR when adding custom models #33
Comments
@maxjacu Thank you for your feedback. We'll try to reproduce it and fix it ASAP if needed.
@maxjacu I'm the litellm maintainer - thanks for using us. Can we hop on a call to better understand your problem?
@jameszyao
# when properties field is null
docker (master) ✗ curl 'http://localhost:8080/api/v1/models' \
-H 'Authorization: Bearer {key}' \
-H 'Connection: keep-alive' \
-H 'Content-Type: application/json' \
--data-raw '{"name":"llama","model_schema_id":"custom_host/openai-function-call","credentials":{"CUSTOM_HOST_ENDPOINT_URL":"http://custom_url:8080","CUSTOM_HOST_MODEL_ID":"cpp","CUSTOM_HOST_API_KEY":"no-key"},"properties":null}' \
--compressed
{"status":"error","error":{"code":"REQUEST_VALIDATION_ERROR","message":"Model properties are required for openai-function-call"}}
# when properties field is not empty
docker (master) ✗ curl 'http://localhost:8080/api/v1/models' \
-H 'Authorization: Bearer {key}' \
-H 'Connection: keep-alive' \
-H 'Content-Type: application/json' \
--data-raw '{"name":"llama","model_schema_id":"custom_host/openai-function-call","credentials":{"CUSTOM_HOST_ENDPOINT_URL":"http://custom_url:8080","CUSTOM_HOST_MODEL_ID":"cpp","CUSTOM_HOST_API_KEY":"no-key"},"properties":{"function_call":false,"streaming":false}}' \
--compressed
{"status":"error","error":{"code":"REQUEST_VALIDATION_ERROR","message":"Properties are not allowed for this model schema."}}
# when properties field is empty
docker (master) ✗ curl 'http://localhost:8080/api/v1/models' \
-H 'Authorization: Bearer {key}' \
-H 'Connection: keep-alive' \
-H 'Content-Type: application/json' \
--data-raw '{"name":"llama","model_schema_id":"custom_host/openai-function-call","credentials":{"CUSTOM_HOST_ENDPOINT_URL":"http://custom_url:8080","CUSTOM_HOST_MODEL_ID":"cpp","CUSTOM_HOST_API_KEY":"no-key"},"properties":{}}' \
--compressed
{"status":"error","error":{"code":"REQUEST_VALIDATION_ERROR","message":"Properties are not allowed for this model schema."}}
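The three requests above show a contradiction: a null `properties` field is rejected as missing, while an empty or populated `properties` object is rejected as not allowed, so no value can pass. A minimal sketch of the validation behavior as observed from the client side (a hypothetical reconstruction for illustration, not the actual TaskingAI server code):

```python
def validate_properties(properties):
    """Mirror the error messages the API appears to return for the
    custom_host/openai-function-call schema, based on the curl output above."""
    if properties is None:
        # Case 1: properties omitted/null -> "required" error
        return "Model properties are required for openai-function-call"
    # Cases 2 and 3: both {} and a populated dict -> "not allowed" error
    return "Properties are not allowed for this model schema."

# Every possible shape of "properties" is rejected:
for props in (None, {}, {"function_call": False, "streaming": False}):
    print(repr(props), "->", validate_properties(props))
```

Since the two branches cover every possible value, the schema is unsatisfiable as deployed, which matches the reports in this thread.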
+1
+1 I ran into the same problem.
We have resolved this issue in the latest version (to be released soon). Additionally, we have separated local model providers like LM Studio and Ollama into individual providers, which enables local model integration with TaskingAI. Custom Host will remain a valuable option for those wishing to use any provider not explicitly listed by us.
It seems like this model_schema is wrong.
Describe the bug
When attempting to add a custom model, a properties validation error is raised. This happens with any combination of properties (toggled off, on, etc.):
{
"status": "error",
"error": {
"code": "REQUEST_VALIDATION_ERROR",
"message": "Properties are not allowed for this model schema."
}
}
To Reproduce
Steps to reproduce the behavior:
2. Custom Model
Expected behavior
Creates a custom-hosted model; in my case, a litellm endpoint.
Additional context
I would like to access models behind a litellm proxy, e.g. a llama2_70b text completion model, but can't get it to work on my machine or my colleagues' machines.