Elasticsearch Version
8.13.0
Installed Plugins
No response
Java Version
bundled
OS Version
any
Problem Description
The trained model stats incorrectly calculate the `required_native_memory_bytes` field.

Since #98139, calculating a model's required memory has taken the number of allocations into account, as each allocation uses extra memory. The memory requirement of each deployment therefore differs depending on its number of allocations. The bug is that the total number of allocations across all deployments is used to calculate the required memory, rather than the individual deployment's number of allocations.

`required_native_memory_bytes` should be calculated per deployment using that deployment's own allocation count. The bug only affects the Stats API output; it does not affect deploying a model.
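The mix-up can be illustrated with a small sketch (Python, with a deliberately simplified memory formula; the function names and constants here are illustrative, not the real Elasticsearch calculation):

```python
# Hypothetical simplified formula: base model memory plus a fixed
# per-allocation overhead. The real calculation has more components.

def required_memory_buggy(base_bytes, per_allocation_bytes, total_allocations):
    # Bug: uses the total allocation count across ALL deployments.
    return base_bytes + per_allocation_bytes * total_allocations

def required_memory_fixed(base_bytes, per_allocation_bytes, deployment_allocations):
    # Fix: uses only this deployment's own allocation count.
    return base_bytes + per_allocation_bytes * deployment_allocations

# Two deployments: model A with 4 allocations, model B with 1.
deployments = {"model_a": 4, "model_b": 1}
base, per_alloc = 100, 10  # arbitrary illustrative byte counts

buggy_b = required_memory_buggy(base, per_alloc, sum(deployments.values()))
fixed_b = required_memory_fixed(base, per_alloc, deployments["model_b"])

# Scale model A up to 8 allocations: model B's buggy figure changes,
# even though model B itself was not touched.
deployments["model_a"] = 8
buggy_b_after = required_memory_buggy(base, per_alloc, sum(deployments.values()))
fixed_b_after = required_memory_fixed(base, per_alloc, deployments["model_b"])

print(buggy_b, buggy_b_after)  # 150 190  <- model B's stat moved
print(fixed_b, fixed_b_after)  # 110 110  <- stays constant, as it should
```

This is exactly the symptom in the reproduction below: changing one deployment's allocation count shifts the reported `required_native_memory_bytes` of an unrelated deployment.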
Steps to Reproduce
Deploy any two NLP models in machine learning. Change the number of allocations for the first model and observe that the required memory reported for the second model changes.
Logs (if relevant)
No response