
Improve insertion object batching logic #1631

Open
gjedlicska opened this issue Jun 15, 2023 · 0 comments

Assignees: gjedlicska
Labels: enhancement (New feature or request), question

@gjedlicska (Contributor)
Currently, the insertion object chunking logic splits the inserted objects into arrays of 500 and closures into arrays of 1000. It doesn't take the size of the objects in the array into account.
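
A minimal sketch of what fixed-count chunking like this typically looks like (the `InsertionObject` type and lodash usage here are assumptions for illustration, not the actual server code):

```ts
import { chunk } from 'lodash'

// Hypothetical shape, for illustration only.
type InsertionObject = { id: string; data: string }

// Fixed-count chunking: every batch holds the same number of rows,
// no matter how large each serialized object is.
const objectBatches = (objects: InsertionObject[]) => chunk(objects, 500)
const closureBatches = (closures: InsertionObject[]) => chunk(closures, 1000)
```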

This can result in a very large (~50 MB) stringified array when the values are inserted into the DB.
When insertion objects this big arrive in a short period of time during a big-ish (but not unreasonably big) send operation, the server uses excessive memory, and garbage collection of knex's internal insertion objects causes CPU bottlenecks to the point where the whole send operation can time out because the server cannot respond within the request timeout (60 seconds).

The batching logic should cap batches at ~10 MB as an initial target.
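
A sketch of size-aware batching under that ~10 MB target. The type, constants, and size measurement are assumptions for illustration; the real implementation would measure whatever payload is actually handed to knex:

```ts
// Hypothetical shape, for illustration only.
type InsertionObject = { id: string; data: string }

const MAX_BATCH_BYTES = 10 * 1024 * 1024 // ~10 MB initial target
const MAX_BATCH_COUNT = 500 // keep a row-count ceiling as well

function batchBySize(objects: InsertionObject[]): InsertionObject[][] {
  const batches: InsertionObject[][] = []
  let current: InsertionObject[] = []
  let currentBytes = 0

  for (const obj of objects) {
    const objBytes = Buffer.byteLength(JSON.stringify(obj), 'utf8')
    // Flush the current batch when adding this object would exceed either limit.
    if (
      current.length > 0 &&
      (currentBytes + objBytes > MAX_BATCH_BYTES ||
        current.length >= MAX_BATCH_COUNT)
    ) {
      batches.push(current)
      current = []
      currentBytes = 0
    }
    current.push(obj)
    currentBytes += objBytes
  }
  if (current.length > 0) batches.push(current)
  return batches
}
```

With this approach, a single oversized object still gets its own batch rather than being dropped, and many small objects keep the existing count-based behaviour.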

@gjedlicska gjedlicska added the enhancement (New feature or request) and question labels Jun 15, 2023
@gjedlicska gjedlicska self-assigned this Jun 15, 2023