Describe the enhancement: Recreating this as the customer requested the request to be public.
Describe a specific use case for the enhancement or feature:
Link to Github issues (if available):
**Use Case**: We are trying to ingest very large log lines and are hitting memory issues because Filebeat allocates a buffer large enough for the entire line before performing any further processing.
Their use case is as follows:
As for our use case and the impact of this issue, let me explain. We collect many log types on all of our hosts, and specifically on our Postgres DB hosts we collect postgres logs. On some instances, users may have more verbose logging enabled, which includes logging INSERT statements for auditing or debugging purposes. Some of these INSERT statements can be extremely large, reaching millions of characters; in the example I mentioned earlier, I observed an INSERT statement that was approx. 42 million characters long, equivalent to approx. 42 MB in size. With even 5 of these log entries generated simultaneously, I assume that would require at least roughly 200 MB of memory on top of Filebeat's current footprint. This also doesn't include the overhead of the periodic scan of all files matching the glob patterns to determine which log files are to be consumed. Some might say we could simply increase the memory limit for our filebeat cgroup, but at this point we're not considering that option: we need a more reliable mechanism to limit memory usage in order to guarantee Filebeat won't exceed its standard 200 MB limit, which has almost always been more than enough.
Product Metadata: Product - Beats, Component - Filebeat, Subcomponent -
**Products/Versions**: Beats 8.12
**Workarounds (if any)**:
**Feature Request**: We are requesting that the line buffer be sized dynamically (grown as needed, up to a limit) rather than allocated up front, so that memory usage is bounded and the performance of the deployment is improved.