JAVA-3051: Memory leak #1743
base: 4.x
Conversation
We explored several alternatives for unit testing this memory leak fix, but decided to give up on testing it, as we could not find a reliable way to test the garbage collector's behavior. The unit test we wrote and considered: lines 118 to 142 in cb61eb8.
This test:
We checked that
We considered that, according to this post, workarounds like generating a huge amount of garbage to trigger garbage collection may not be worth it either. Therefore, we concluded that no test may be the best choice for now, and that the checks we perform above may be sufficient.
This change will allow entries to be dropped from LoadBalancingPolicyWrapper#distances, but I'm not entirely convinced there aren't other places where strong references to the Node object will remain. In DefaultLoadBalancingPolicy there are responseTimes and upTimes maps which use Node as the key with a ConcurrentHashMap, and I don't see where entries are ever removed, so this will likely continue to hold these references (although with upTimes it doesn't look like items are ever added).
ControlConnection maintains two weak hash maps, lastDistanceEvents and lastStateEvents, where both value types (DistanceEvent and NodeStateEvent) hold a hard reference to the Node. Per the WeakHashMap docs, it looks like this will prevent the entries from being cleaned up:
"The value objects in a WeakHashMap are held by ordinary strong references. Thus care should be taken to ensure that value objects do not strongly refer to their own keys, either directly or indirectly, since that will prevent the keys from being discarded."
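To make the quoted caveat concrete, here is a minimal, self-contained sketch. The Node, LeakyEvent, and SafeEvent classes are illustrative stand-ins for the driver's types, not its actual code:

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.WeakHashMap;

class Node {
  final String id;
  Node(String id) { this.id = id; }
}

// A value that strongly references its own key, as DistanceEvent and
// NodeStateEvent do today: this pins the key, so the entry is never collected.
class LeakyEvent {
  final Node node;
  LeakyEvent(Node node) { this.node = node; }
}

// The pattern the PR moves toward: the value holds the node weakly, so the
// WeakHashMap entry can be discarded once the Node is otherwise unreachable.
class SafeEvent {
  final WeakReference<Node> node;
  SafeEvent(Node node) { this.node = new WeakReference<>(node); }
}

public class WeakHashMapLeak {
  public static void main(String[] args) {
    Map<Node, SafeEvent> lastStateEvents = new WeakHashMap<>();
    Node node = new Node("node1");
    lastStateEvents.put(node, new SafeEvent(node));
    // While `node` is strongly reachable here, the entry is still present:
    System.out.println(lastStateEvents.get(node).node.get().id);
  }
}
```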
Likely there are more places too.
I think it would be good to set up at least a one-off test which reproduces the leak from the original ticket and confirms that this change (and possibly the others mentioned above) successfully prevents the leak, before marking this one as completed.
Looks good! A couple of minor pieces of feedback; most importantly, I think we want to avoid logging at info level when ignoring events which have lost their node reference, as this might end up creating a lot of unnecessary log churn.
return true;
AtomicLongArray array = responseTimes.getIfPresent(node);
if (array == null) return true;
else if (array.length() == 2) {
nit: consider collapsing the null check into this conditional so there is only one irregular-state return value (return true).
policy.onUp(event.node);
DefaultNode node = event.node.get();
if (node == null) {
  LOG.info("[{}] Node for this event was removed, ignoring: {}", logPrefix, event);
Info-level log might be a bit high for this notice as there isn't really action the user should take when this happens. Consider dropping to debug/trace
if (event.newState == NodeState.UP) {
  policy.onUp(event.node);
DefaultNode node = event.node.get();
if (node == null) {
nit: do we need to re-perform the null check for every policy? Is there a good reason not to pull this out of the loop?
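A small sketch of the suggested restructuring, with a String and a stand-in Policy interface in place of the driver's DefaultNode and LoadBalancingPolicy (names here are illustrative):

```java
import java.lang.ref.WeakReference;
import java.util.List;

public class EventDispatch {
  // Illustrative stand-in for the driver's LoadBalancingPolicy interface.
  interface Policy {
    void onUp(String node);
  }

  // Resolve the weak reference once, before iterating the policies, instead
  // of re-checking it inside the loop: every policy then sees the same
  // non-null node, and a cleared reference is handled exactly once.
  static int dispatch(WeakReference<String> ref, List<Policy> policies) {
    String node = ref.get();
    if (node == null) {
      return 0; // the event's node was collected; nothing to notify
    }
    for (Policy policy : policies) {
      policy.onUp(node);
    }
    return policies.size();
  }

  public static void main(String[] args) {
    List<Policy> policies = List.of(n -> {}, n -> {});
    System.out.println(dispatch(new WeakReference<>("node1"), policies));
    System.out.println(dispatch(new WeakReference<>(null), policies));
  }
}
```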
@@ -53,10 +54,10 @@ public static NodeStateEvent removed(DefaultNode node) {
   */
  public final NodeState newState;

- public final DefaultNode node;
+ public final WeakReference<DefaultNode> node;
Nit: Since we're changing the type here, I'm wondering if it might be cleaner to provide a @Nullable getter for DefaultNode rather than exposing the weak reference directly. Same comment for DistanceEvent.node.
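A hypothetical sketch of that encapsulation (String stands in for DefaultNode, and the class name is invented for illustration):

```java
import java.lang.ref.WeakReference;

// Keep the weak reference private and expose a nullable getter instead of
// the raw WeakReference field.
public class NodeStateEventSketch {
  private final WeakReference<String> node;

  NodeStateEventSketch(String node) {
    this.node = new WeakReference<>(node);
  }

  /** Returns the node, or null if it has already been garbage-collected. */
  public String getNode() { // would carry @Nullable in the driver
    return node.get();
  }

  public static void main(String[] args) {
    NodeStateEventSketch event = new NodeStateEventSketch("node1");
    System.out.println(event.getNode());
  }
}
```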
context.getNodeStateListener().onRemove(event.node);
DefaultNode node = event.node.get();
if (node == null) {
  LOG.info(
Nit: I think info-level is too high here
@@ -119,15 +120,22 @@ public NodeMetricUpdater newNodeUpdater(Node node) {
}

protected void processNodeStateEvent(NodeStateEvent event) {
  DefaultNode node = event.node.get();
  if (node == null) {
    LOG.info(
Nit: I think info-level is too high here
@@ -121,16 +122,22 @@ public NodeMetricUpdater newNodeUpdater(Node node) {
}

protected void processNodeStateEvent(NodeStateEvent event) {
  DefaultNode node = event.node.get();
  if (node == null) {
    LOG.info(
Nit: I think info-level is too high here
LGTM
Jira link for my own reference: https://datastax-oss.atlassian.net/browse/JAVA-3051
It would be helpful to have logs from the repro that the user submitted, specifically the logs from com.datastax.oss.driver.internal.core.metadata.NodeStateManager.SingleThreaded#setState that look like "[{}] Transitioning {} {}=>{} (because {})".
This seems like a bug in AWS Keyspaces, since each node includes itself in system.peers, which is unexpected to the driver according to the user's report:
"[s0] Control node has an entry for itself in system.peers: this entry will be ignored. This is likely due to a misconfiguration; please verify your rpc_address configuration in cassandra.yaml on all nodes in your cluster."
I left a few comments but otherwise these seem like generally positive changes. I agree that it's difficult to write unit tests for memory leaks like these, especially without any scaffolding around heapdump capture or parsing. I'm a bit concerned that there may be paths where a node event may not have any strong reference and is then garbage-collected and ignored by handlers, rather than surviving long enough to serve its purpose.
@@ -82,6 +82,7 @@ public void markReady() {
      consumer.accept(event);
    }
  } finally {
    recordedEvents.clear();
What's the reasoning for this change?
ReplayingEventFilter works like a buffer: it accumulates a list (queue) of events while its state is STARTED, and consumes them all at once when its state becomes READY. However, the list of events was never cleared afterwards, so it leaked strong references to the nodes.
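The buffer behavior described above can be sketched as follows. This is a minimal illustration of the pattern and the fix, not the driver's actual ReplayingEventFilter:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Events recorded before readiness are replayed on markReady(); clearing the
// list afterwards drops the strong references to the events (and their nodes).
public class ReplayBuffer<T> {
  private final List<T> recordedEvents = new ArrayList<>();
  private boolean ready = false;

  public void record(T event) {
    if (!ready) {
      recordedEvents.add(event);
    }
  }

  public void markReady(Consumer<T> consumer) {
    ready = true;
    try {
      for (T event : recordedEvents) {
        consumer.accept(event);
      }
    } finally {
      recordedEvents.clear(); // the fix: stop pinning replayed events
    }
  }

  public static void main(String[] args) {
    ReplayBuffer<String> buffer = new ReplayBuffer<>();
    buffer.record("node1-up");
    buffer.record("node2-down");
    List<String> replayed = new ArrayList<>();
    buffer.markReady(replayed::add);
    System.out.println(replayed.size());
  }
}
```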
clearMetrics();
cancelMetricsExpirationTimeout();
What's the reasoning for this change?
This is the timeout's lambda. Even after the timeout fires and the lambda runs, the Timer object is not collected, and it still holds a reference to this until it is cancelled.
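The same capture-and-cancel pattern can be illustrated with a JDK scheduler (the driver itself uses a Netty Timer; class and method names below are invented for the sketch):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class MetricsExpiration {
  private final ScheduledExecutorService timer =
      Executors.newSingleThreadScheduledExecutor();
  private ScheduledFuture<?> expirationTimeout;

  void scheduleMetricsExpirationTimeout() {
    // The method reference captures `this`; while the scheduled task is
    // live, the scheduler holds a strong reference back to this object.
    expirationTimeout = timer.schedule(this::clearMetrics, 1, TimeUnit.HOURS);
  }

  void clearMetrics() { /* drop per-node metrics */ }

  // The fix: cancel the timeout on shutdown so the timer no longer pins
  // this object (and, transitively, anything it references).
  void close() {
    if (expirationTimeout != null) {
      expirationTimeout.cancel(false);
    }
    timer.shutdown();
  }

  public static void main(String[] args) {
    MetricsExpiration updater = new MetricsExpiration();
    updater.scheduleMetricsExpirationTimeout();
    updater.close();
    System.out.println(updater.expirationTimeout.isCancelled());
  }
}
```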
long threshold = now - RESPONSE_COUNT_RESET_INTERVAL_NANOS;
long leastRecent = array.get(0);
return leastRecent - threshold < 0;
} else return true;
Style nit: invert the condition and use an early return if the response rate is insufficient, so you don't need else return true.
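A self-contained sketch of the inverted shape, which also folds the null check into the same early return (a plain HashMap stands in for the Guava cache, and the names are approximations of the driver's):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLongArray;

public class ResponseRateCheck {
  static final long RESET_INTERVAL_NANOS = 1_000_000_000L;
  // Plain map stands in for the driver's weak-keyed Guava cache.
  static final Map<String, AtomicLongArray> responseTimes = new HashMap<>();

  // One irregular-state early return covers both "no sample recorded" and
  // "not enough samples"; the regular path follows without an else branch.
  static boolean isResponseRateInsufficient(String node, long now) {
    AtomicLongArray array = responseTimes.get(node);
    if (array == null || array.length() != 2) {
      return true;
    }
    long threshold = now - RESET_INTERVAL_NANOS;
    long leastRecent = array.get(0);
    return leastRecent - threshold < 0;
  }

  public static void main(String[] args) {
    long now = 10_000_000_000L;
    responseTimes.put("fresh", new AtomicLongArray(new long[] {now - 1, now}));
    responseTimes.put("stale",
        new AtomicLongArray(new long[] {now - 5_000_000_000L, now}));
    System.out.println(isResponseRateInsufficient("missing", now));
    System.out.println(isResponseRateInsufficient("fresh", now));
    System.out.println(isResponseRateInsufficient("stale", now));
  }
}
```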
protected final Map<Node, Long> upTimes = new ConcurrentHashMap<>();
private final boolean avoidSlowReplicas;

public DefaultLoadBalancingPolicy(@NonNull DriverContext context, @NonNull String profileName) {
  super(context, profileName);
  this.avoidSlowReplicas =
      profile.getBoolean(DefaultDriverOption.LOAD_BALANCING_POLICY_SLOW_AVOIDANCE, true);
  CacheLoader<Node, AtomicLongArray> cacheLoader =
Style nit: use a separate class for the cache value here, rather than using AtomicLongArray as a generic container. It seems like it could be something like NodeResponseRateSample, with methods like boolean hasSufficientResponses(). I see this was present in the previous implementation, so it's not a required change for this PR, just something I noticed.
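One way that suggested wrapper might look. The class name comes from the review comment; the method bodies and constant are assumptions for the sketch, not the driver's implementation:

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Hides the two-slot AtomicLongArray behind an intention-revealing API.
public class NodeResponseRateSample {
  private static final long RESET_INTERVAL_NANOS = 1_000_000_000L;
  // Slot 0 holds the older timestamp, slot 1 the newer one.
  private final AtomicLongArray timestamps = new AtomicLongArray(2);

  void record(long nanoTime) {
    timestamps.set(0, timestamps.get(1));
    timestamps.set(1, nanoTime);
  }

  boolean hasSufficientResponses(long now) {
    long leastRecent = timestamps.get(0);
    return leastRecent != 0 && leastRecent - (now - RESET_INTERVAL_NANOS) >= 0;
  }

  public static void main(String[] args) {
    NodeResponseRateSample sample = new NodeResponseRateSample();
    long now = 5_000_000_000L;
    System.out.println(sample.hasSufficientResponses(now)); // no data yet
    sample.record(now - 100);
    sample.record(now);
    System.out.println(sample.hasSufficientResponses(now)); // two recent samples
  }
}
```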
Very much agreed that the underlying issue here appears to be an issue with AWS Keyspaces @aratno; that's being addressed in a different ticket. The scope of this change is around preventing the (potentially indefinite) caching of Node instances within an LBP.
    return array;
  }
};
this.responseTimes = CacheBuilder.newBuilder().weakKeys().build(cacheLoader);
I think we should add a RemovalListener here. If a GC happens and response times for a Node are purged, then we'll end up treating that as "insufficient responses" in isResponseRateInsufficient, which can lead us to mark a node as unhealthy. I recognize that this is a bit of a pathological example, but this behavior depends on GC timing and would be a pain to track down, so adding logging could make someone's life easier down the line.
Thank you for your review. Could you please explain more about this? If GC collects a node, that means the node is gone. If the node is gone, why do we care about whether it's treated as healthy or not?
Anyway, for RemovalListener, do you mean something like this?
this.responseTimes = CacheBuilder.newBuilder().weakKeys().removalListener(
(RemovalListener<Node, AtomicLongArray>) notification ->
LOG.trace("[{}] Evicting response times for {}: {}",
logPrefix, notification.getKey(), notification.getCause()))
.build(cacheLoader);
@aratno Hi Abe, thank you for your review. Is there any update?