
Log pushed using s3 output plugin throws error "2020-06-16 13:23:35 +0000 [warn]: #0 [out_s3] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=44.62332443147898 slow_flush_log_threshold=20.0 plugin_id="out_s3"" #334

Open
shilpasshetty opened this issue Jun 16, 2020 · 4 comments
Labels
help wanted Need help from users

Comments


shilpasshetty commented Jun 16, 2020

Hi team,
Below is my config for td-agent:

# Include config files in the ./config.d directory
@include config.d/*.conf

<match *>
  @type s3
  @id out_s3
  @log_level debug
  aws_key_id "xx"
  aws_sec_key "xx"
  s3_bucket "xx"
  s3_endpoint "xx"
  s3_region xx
  s3_object_key_format %Y-%m-%d-%H-%M-%S-%{index}-%{hostname}.%{file_extension}
  store_as "gzip"

  <inject>
    time_key time
    tag_key tag
    localtime false
    time_format "%Y-%m-%dT%H:%M:%SZ"
    time_type string
  </inject>

  <format>
    @type json
  </format>

  <buffer time>
    @type file
    path /var/log/fluentd-buffers/s3.buffer
    timekey 60
    flush_at_shutdown true
    timekey_wait 10
    timekey_use_utc true
    chunk_limit_size 10m
  </buffer>
</match>

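If the uploads themselves are legitimately slow, one option (my own suggestion, not something proposed in this thread) is to raise the warning threshold and shrink the chunks so each flush has less to upload. `slow_flush_log_threshold` is a standard fluentd buffered-output parameter; the 60 s threshold and 4m chunk size below are arbitrary placeholder values:

```
<match *>
  @type s3
  # ... credentials and s3_* settings as in the config above ...
  # Hypothetical tuning: only warn when a flush exceeds 60 s
  slow_flush_log_threshold 60.0
  <buffer time>
    @type file
    path /var/log/fluentd-buffers/s3.buffer
    timekey 60
    timekey_wait 10
    timekey_use_utc true
    # Smaller chunks mean shorter individual uploads (assumed trade-off:
    # more requests to S3 in exchange for faster per-flush times)
    chunk_limit_size 4m
  </buffer>
</match>
```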
I am using the in_tail plugin to parse logs with s3 as the output plugin, and it is consuming 100% CPU. When I checked the log I saw the warning below. Could anyone please let me know what I am missing here?
2020-06-16 13:24:11 +0000 [warn]: #0 [out_s3] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=35.5713529381901 slow_flush_log_threshold=20.0 plugin_id="out_s3"
Plugin version: 'fluent-plugin-s3' version '1.3.2'
I tried the options below as well; it didn't help.

<buffer time>
  @type file
  path /var/log/fluentd-buffers/s3.buffer
  timekey 60
  flush_interval 30s
  flush_thread_interval 5
  flush_thread_burst_interval 15
  flush_thread_count 10
  timekey_wait 10
  timekey_use_utc true
  chunk_limit_size 6m
  buffer_chunk_limit 256m
</buffer>

@repeatedly
Member

2020-06-16 13:24:11 +0000 [warn]: #0 [out_s3] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=35.5713529381901 slow_flush_log_threshold=20.0 plugin_id="out_s3"

This means uploading data to S3 took 35 seconds.
I'm not sure why, but if you see this log frequently, check your network or related infrastructure.
Basically, 35 or 44 seconds is very slow.
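A rough back-of-the-envelope check (my own numbers, not measured in this thread) shows how the 10 MiB `chunk_limit_size` lines up with the observed flush times: at an effective uplink of about 2 Mbit/s, one chunk takes roughly 42 seconds to push, right in the range of the `elapsed_time` values in the warnings.

```python
# Back-of-the-envelope: how long does one buffer chunk take to upload?
# Assumptions (hypothetical numbers, not measured in this issue):
#   chunk_limit_size 10m -> 10 MiB per chunk, as in the config above
#   effective uplink of ~2 Mbit/s to the S3 endpoint

CHUNK_BYTES = 10 * 1024 * 1024   # chunk_limit_size 10m
UPLINK_BITS_PER_S = 2_000_000    # assumed effective bandwidth

def flush_seconds(chunk_bytes: int, uplink_bps: int) -> float:
    """Seconds needed to push one chunk at the given uplink speed."""
    return chunk_bytes * 8 / uplink_bps

elapsed = flush_seconds(CHUNK_BYTES, UPLINK_BITS_PER_S)
print(round(elapsed, 1))     # 41.9 -- same ballpark as the warnings
print(elapsed > 20.0)        # True: exceeds slow_flush_log_threshold=20.0
```

Under these assumed numbers, the warning is simply arithmetic: the chunk cannot finish uploading inside the 20 s threshold, so either the chunks must shrink or the threshold must rise.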

@shilpasshetty
Author

Yeah, but when I tried multiple worker instances for tail, the CPU issue was solved and the problem went away, so I am just wondering...

@smiley-ci

We see a similar issue: logs are put to S3 with a delay. Can you tell us how you used the worker concept? in_tail supports only one worker; <worker 0-2> is not supported.
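For context on the worker question: fluentd's multi-process workers are enabled via the <system> directive, and plugins that are not multi-worker-ready (in_tail among them) have to be pinned to a single worker with a <worker N> directive. A minimal sketch, with placeholder paths, tags, and worker count of my own:

```
<system>
  workers 4
</system>

# in_tail is not multi-worker-ready, so pin the source to worker 0
<worker 0>
  <source>
    @type tail
    path /var/log/app/app.log          # placeholder path
    pos_file /var/log/td-agent/app.pos # placeholder pos_file
    tag app.logs
    <parse>
      @type none
    </parse>
  </source>
</worker>
```

Outputs like out_s3 can run in all workers; only the non-multi-worker source needs the <worker 0> pin.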


github-actions bot commented Jul 6, 2021

This issue has been automatically marked as stale because it has been open 90 days with no activity. Remove the stale label or comment, or this issue will be closed in 30 days.

@github-actions github-actions bot added the stale label Jul 6, 2021
@kenhys kenhys added help wanted Need help from users and removed stale labels Jul 9, 2021