Logback appender attaches to local Grpc Context, leading to cancellation failures. #537
After discussing the problem with @jsuereth, we came to the following action items:
FYI - I think this is actually a bug. Users of gRPC are unable to use the log appender without the risk of losing logs due to cancellation issues. I was unable to configure sync writing via logback.xml, so at a minimum that option is missing.
Executing the action items had the following results:
The planned follow-up actions include:
Any update on this issue? We are continuously seeing it happen on our service.
Hello! We are also experiencing this issue; it took us quite a while to understand what was going on. Is there any plan to fix it? Thanks!
We verified that setting writeSynchronicity=SYNC stops the flood of CANCELLED exceptions, but we are worried about the performance of synchronous logging.
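For reference, synchronous writing can also be enabled programmatically on the client. This is a minimal sketch, assuming google-cloud-logging is on the classpath and default credentials are available; it is not taken from this thread:

```java
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Synchronicity;

public class SyncLoggingSketch {
  public static void main(String[] args) throws Exception {
    try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {
      // SYNC makes write() block until the RPC completes, so the write no
      // longer races against the caller's gRPC deadline -- at the cost of
      // per-entry latency on the logging thread.
      logging.setWriteSynchronicity(Synchronicity.SYNC);
      // ... subsequent logging.write(...) calls are now synchronous ...
    }
  }
}
```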
@eypher are you running on Google Cloud? In that case you can leverage the platform's built-in logging support. We are working to resolve this problem, but changes in core components (like gax and gRPC) take time.
@jsuereth right now we can only substitute another context in java-logging-logback/src/main/java/com/google/cloud/logging/logback/LoggingAppender.java (lines 253 to 256 in 0509ffd) by enclosing the write() call within:

```java
io.grpc.Context loggingContext = io.grpc.Context.current().fork();
io.grpc.Context prevContext = loggingContext.attach();
try {
  getLogging().write(Collections.singleton(logEntry), defaultWriteOptions);
} finally {
  loggingContext.detach(prevContext);
}
```

However, we do not have a set of tests to validate all possible behaviors in the case of asynchronous log writing, which batches multiple write requests together.
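The fork/attach/detach pattern above can be factored into a small reusable helper. This is a minimal sketch assuming only the io.grpc Context API; the class and method names (`ContextForker`, `runInForkedContext`) are hypothetical, not part of the library:

```java
import io.grpc.Context;
import java.util.concurrent.Callable;

public final class ContextForker {
  private ContextForker() {}

  // Runs the given task in a forked copy of the current gRPC Context.
  // A forked context does not inherit the parent's cancellation or
  // deadline, so RPCs issued by the task (e.g. the appender's write())
  // survive even when the caller's RPC context is cancelled.
  public static <T> T runInForkedContext(Callable<T> task) throws Exception {
    Context forked = Context.current().fork();
    Context prev = forked.attach();
    try {
      return task.call();
    } finally {
      forked.detach(prev);
    }
  }
}
```

The try/finally guarantees the previous context is restored even if the task throws, which matters because an unbalanced attach leaks the forked context onto the calling thread.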
Confirmed with the reporter that the issue does not reproduce with the latest versions of the logging library.
The fix comes from gax, after they implemented an internal thread pool for managing RPC communication channels.
Currently, the logback appender attaches its gRPC calls to the thread-local gRPC context. This means that, as a user of gRPC, if I have a tight deadline on my production RPCs, the gRPC context is likely to be cancelled while my telemetry-plane logging RPCs are still in flight.
This can be reproduced with the following code:
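The original reproduction snippet is not shown in this thread. The following is an illustrative sketch of the failure mode, assuming grpc-api and an SLF4J logger backed by the Cloud Logging logback appender; the class name `CancelledLogRepro` is hypothetical:

```java
import io.grpc.Context;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CancelledLogRepro {
  private static final Logger logger = LoggerFactory.getLogger(CancelledLogRepro.class);

  public static void main(String[] args) throws Exception {
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    // Simulate a production RPC with a tight deadline. The appender's async
    // write RPC is attached to this same context, so when the deadline fires
    // the logging call is cancelled along with it.
    Context.CancellableContext rpcContext =
        Context.current().withDeadlineAfter(1, TimeUnit.MILLISECONDS, scheduler);
    try {
      Thread.sleep(5); // let the deadline expire first
      rpcContext.run(() -> logger.error("this entry may never reach Cloud Logging"));
    } finally {
      rpcContext.cancel(null);
      scheduler.shutdown();
    }
  }
}
```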
Which throws something like the following:
This should be fixable by wrapping every call to the logging API in a new gRPC Context (similar to what gRPC does for its own control-plane calls), with the corresponding detach after sending the RPC:

```java
Context ctx = Context.current().fork();
Context prevContext = ctx.attach();
// ... send the logging RPC ...
ctx.detach(prevContext);
```
I'm not sure where you'd like to fix it, but happy to submit a PR with a fix.