This repository has been archived by the owner on Nov 25, 2020. It is now read-only.

RAM memory allocation and multiple models Issue #18

Open
makman7 opened this issue Jun 7, 2019 · 0 comments

Comments


makman7 commented Jun 7, 2019

1 - GraphPipe is using around 7 GB of RAM after the first request (the model size is 500 MB).
2 - I didn't see any option for serving multiple models from a single server. For multiple models, running a separate server on a different port for each one leads to a lot of RAM allocation (see the sketch below).
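Below is a minimal sketch of the per-model-server workaround described in point 2, assuming the graphpipe Python client's `remote.execute` helper and one graphpipe-tf server already running per model. The endpoints, model names, and input shape are hypothetical. Because each server is a separate process, each one holds its own copy of the runtime and model weights, which is why memory use grows with the number of models.

```python
# Minimal sketch (not the project's own code): querying one graphpipe-tf
# server per model. Endpoints, model names, and the input shape are
# hypothetical; remote.execute is assumed from the graphpipe Python client.
import numpy as np
from graphpipe import remote  # pip install graphpipe

# One server process per model, each on its own port, so each process
# keeps its own copy of the runtime and model weights in RAM.
MODEL_ENDPOINTS = {
    "model_a": "http://127.0.0.1:9000",
    "model_b": "http://127.0.0.1:9001",
}

def predict(model_name, batch):
    """Send the batch to the server that hosts the requested model."""
    return remote.execute(MODEL_ENDPOINTS[model_name], batch)

if __name__ == "__main__":
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # hypothetical input
    print(predict("model_a", x))
```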
