Clean up remaining streams in TcpConnPool dtor #34065
Conversation
Signed-off-by: Roelof DuToit <roelof.dutoit@broadcom.com>
@@ -26,6 +26,7 @@ class TcpConnPool : public Router::GenericConnPool, public Envoy::Tcp::Connectio
   Upstream::ResourcePriority priority, Upstream::LoadBalancerContext* ctx) {
     conn_pool_data_ = thread_local_cluster.tcpConnPool(priority, ctx);
   }
+  ~TcpConnPool() override { cancelAnyPendingStream(); }
As said in the issue, can we ENVOY_BUG if there are streams remaining?
/wait
I could not see a way to do that, given that upstream_handle_ does not provide information about the streams.
Sorry, this was unclear. I meant: ENVOY_BUG if there was an actual reference that was getting cleaned up, i.e. if upstream_handle_ is non-null.
AFAIK the pool should be getting onPoolFailure before being torn down. If you have a repeatable case where that's not happening, without using in-house code, I'd be interested in understanding the lifetime issue. Otherwise, if we're going to avoid the use-after-free, we should indicate it's a bug that we're not being shut down as expected.
@@ -26,6 +26,10 @@ class TcpConnPool : public Router::GenericConnPool, public Envoy::Tcp::Connectio
   Upstream::ResourcePriority priority, Upstream::LoadBalancerContext* ctx) {
     conn_pool_data_ = thread_local_cluster.tcpConnPool(priority, ctx);
   }
+  ~TcpConnPool() override {
+    cancelAnyPendingStream();
+    ENVOY_BUG(upstream_handle_ == nullptr, "upstream_handle not null");
Isn't it a no-op to call the ENVOY_BUG after the cancel? I think it needs to be called before the cancel to be useful.
/wait
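The ordering point above can be illustrated with a minimal, hypothetical sketch. Only the `upstream_handle_` name and the `ENVOY_BUG` condition come from this PR; `FakeTcpConnPool`, `envoyBug`, and `bug_fired_` are stand-ins for illustration, not Envoy's actual code. Because cancelling releases the handle, a check placed after the cancel always passes and can never report the bug:

```cpp
#include <cassert>
#include <cstdio>

// Minimal stand-in for the pool; illustrative only, not Envoy's real class.
struct FakeTcpConnPool {
  void* upstream_handle_ = nullptr;
  bool bug_fired_ = false;

  // Cancelling releases the handle, so it is null afterwards.
  void cancelAnyPendingStream() { upstream_handle_ = nullptr; }

  // Stand-in for ENVOY_BUG(cond, msg): records when cond is false.
  void envoyBug(bool condition, const char* msg) {
    if (!condition) {
      bug_fired_ = true;
      std::fprintf(stderr, "ENVOY_BUG: %s\n", msg);
    }
  }

  // Order from the earlier revision: the handle is already cleared when
  // the check runs, so the bug can never fire.
  void destroyCheckAfterCancel() {
    cancelAnyPendingStream();
    envoyBug(upstream_handle_ == nullptr, "upstream_handle not null");
  }

  // Suggested order: check first (catching a still-live handle), then cancel.
  void destroyCheckBeforeCancel() {
    envoyBug(upstream_handle_ == nullptr, "upstream_handle not null");
    cancelAnyPendingStream();
  }
};
```

With a live handle, only the check-before-cancel variant ever reports the bug.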
🤦
Argh, I also just noticed that cancelAnyPendingStream is virtual.
Can you move the code to a non-virtual cancelAnyPendingStreamImpl and call that here, to avoid any issues with virtual dispatch from the destructor?
Hopefully the last pass!
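A hypothetical sketch of the refactor suggested above (the `Pool` class and the `cancelled` flag are invented for illustration; only the `cancelAnyPendingStream`/`cancelAnyPendingStreamImpl` names mirror the review comment). During a base-class destructor the derived part of the object is already destroyed, so a virtual call there would bind to the base's version anyway; routing the destructor through a non-virtual impl makes that explicit and avoids surprises:

```cpp
#include <cassert>

// Illustrative sketch of the "non-virtual impl" pattern, not Envoy code.
struct Pool {
  explicit Pool(bool* cancelled) : cancelled_(cancelled) {}

  // Overridable entry point for normal callers.
  virtual void cancelAnyPendingStream() { cancelAnyPendingStreamImpl(); }

  virtual ~Pool() {
    // Non-virtual call: no dispatch to a partially-destroyed subclass.
    cancelAnyPendingStreamImpl();
  }

protected:
  // Same behavior whether called normally or from the destructor.
  void cancelAnyPendingStreamImpl() { *cancelled_ = true; }

private:
  bool* cancelled_; // flag written by the impl, so the dtor's work is observable
};
```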
Signed-off-by: Roelof DuToit <roelof.dutoit@broadcom.com>
awesome!
Commit Message: Fix for heap-use-after-free issue in Envoy::Tcp::ConnPoolImpl::onPoolReady (see #34055)
Risk Level: Low