
Fluency - emit() failed due to buffer full. Flushing buffer. Please try again. #67

Open · fly7632785 opened this issue Jun 14, 2022 · 0 comments

@fly7632785

Hi, I ran into this problem. How should I deal with it?

[screenshot of the error: "emit() failed due to buffer full. Flushing buffer. Please try again."]

It keeps hitting these logs all the time:

[more screenshots of the repeated error logs]

Then I checked the source. Could it be appending a payload whose size exceeds maxBufferSize? How could a single append be larger than maxBufferSize (256 MB)?

Looking forward to your reply.

Thank you.

Here is my config:

```xml
<appender name="FLUENCY_SYNC" class="ch.qos.logback.more.appenders.FluencyLogbackAppender">
    <!-- Tag for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->
    <!-- Microservice name -->
    <tag>${applicationName}</tag>
    <!-- [Optional] Label for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->

    <!-- Host name/address and port number where Fluentd is running -->
    <remoteHost>${fluentdAddr}</remoteHost>
    <port>24224</port>

    <!-- [Optional] Multiple names/addresses and port numbers where Fluentd is running
   <remoteServers>
      <remoteServer>
        <host>primary</host>
        <port>24224</port>
      </remoteServer>
      <remoteServer>
        <host>secondary</host>
        <port>24224</port>
      </remoteServer>
    </remoteServers>
     -->

    <!-- [Optional] Additional fields(Pairs of key: value) -->
    <!-- Environment -->
    <additionalField>
        <key>env</key>
        <value>${profile}</value>
    </additionalField>

    <!-- [Optional] Configurations to customize Fluency's behavior: https://github.com/komamitsu/fluency#usage  -->
    <ackResponseMode>true</ackResponseMode>
    <!-- <fileBackupDir>/tmp</fileBackupDir> -->
    <bufferChunkInitialSize>2097152</bufferChunkInitialSize>
    <bufferChunkRetentionSize>16777216</bufferChunkRetentionSize>
    <maxBufferSize>268435456</maxBufferSize>
    <bufferChunkRetentionTimeMillis>1000</bufferChunkRetentionTimeMillis>
    <connectionTimeoutMilli>5000</connectionTimeoutMilli>
    <readTimeoutMilli>5000</readTimeoutMilli>
    <waitUntilBufferFlushed>30</waitUntilBufferFlushed>
    <waitUntilFlusherTerminated>40</waitUntilFlusherTerminated>
    <flushAttemptIntervalMillis>200</flushAttemptIntervalMillis>
    <senderMaxRetryCount>12</senderMaxRetryCount>
    <!-- [Optional] Enable/Disable use of EventTime to get sub second resolution of log event date-time -->
    <useEventTime>true</useEventTime>
    <sslEnabled>false</sslEnabled>
    <!-- [Optional] Enable/Disable the use of the JVM heap for buffering -->
    <jvmHeapBufferMode>false</jvmHeapBufferMode>
    <!-- [Optional] If true, Map Marker is expanded instead of nesting in the marker name -->
    <flattenMapMarker>false</flattenMapMarker>
    <!--  [Optional] default "marker" -->
    <markerPrefix></markerPrefix>

    <!-- [Optional] Message encoder if you want to customize message -->
    <encoder>
        <pattern><![CDATA[%-5level %logger{50}#%line %message]]></pattern>
    </encoder>

    <!-- [Optional] Message field key name. Default: "message" -->
    <messageFieldKeyName>msg</messageFieldKeyName>

</appender>

 <appender name="FLUENCY" class="ch.qos.logback.classic.AsyncAppender">
    <!-- Max queue size of logs waiting to be sent (when the queue reaches its max size, new logs are dropped). -->
    <queueSize>999</queueSize>
    <!-- Never block when the queue becomes full. -->
    <neverBlock>true</neverBlock>
    <!-- Maximum queue flush time allowed during appender stop.
         If the worker takes longer than this, it exits, discarding any remaining items in the queue.
     -->
    <maxFlushTime>1000</maxFlushTime>
    <appender-ref ref="FLUENCY_SYNC"/>
</appender>

```

I am sure no single log entry is anywhere near that big (more than 256 MB). I think the unsent data may be accumulating into one large backlog, so it keeps failing. Is that possible?
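
A quick way to check my theory, as a minimal sketch only: this assumes Fluency 2.x's `FluencyBuilderForFluentd` API, a made-up unreachable host name, and that failed sends leave the data sitting in the buffer.

```java
import java.util.HashMap;
import java.util.Map;

import org.komamitsu.fluency.BufferFullException;
import org.komamitsu.fluency.Fluency;
import org.komamitsu.fluency.fluentd.FluencyBuilderForFluentd;

public class BufferFullRepro {
    public static void main(String[] args) throws Exception {
        FluencyBuilderForFluentd builder = new FluencyBuilderForFluentd();
        // Same limits as the appender config above.
        builder.setBufferChunkInitialSize(2097152);    // 2 MB
        builder.setBufferChunkRetentionSize(16777216); // 16 MB
        builder.setMaxBufferSize(268435456L);          // 256 MB cap on buffered data

        // Hypothetical host where nothing listens, so chunks can never be flushed.
        Fluency fluency = builder.build("unreachable-host.invalid", 24224);

        Map<String, Object> event = new HashMap<>();
        event.put("message", "x".repeat(1024 * 1024)); // ~1 MB per event (Java 11+)

        try {
            for (int i = 0; i < 300; i++) {
                // emit() only appends to the in-memory buffer; sending happens
                // asynchronously. With the sender down, unsent data piles up
                // until the 256 MB cap is reached.
                fluency.emit("demo.tag", event);
            }
        } catch (BufferFullException e) {
            // This should be the condition behind
            // "emit() failed due to buffer full. Flushing buffer. Please try again."
            System.err.println("Buffer full after accumulating unsent events: " + e);
        } finally {
            fluency.close();
        }
    }
}
```

If that is what is happening in production, no single log entry has to be 256 MB; the backlog just has to reach 256 MB in total before Fluentd catches up.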
