
Clean up remaining streams in TcpConnPool dtor #34065

Merged
4 commits merged into envoyproxy:main on May 15, 2024

Conversation

roelfdutoit
Contributor

@roelfdutoit roelfdutoit commented May 9, 2024

Commit Message: Fix for heap-use-after-free issue in Envoy::Tcp::ConnPoolImpl::onPoolReady (see #34055)
Risk Level: Low

Signed-off-by: Roelof DuToit <roelof.dutoit@broadcom.com>
@@ -26,6 +26,7 @@ class TcpConnPool : public Router::GenericConnPool, public Envoy::Tcp::Connectio
Upstream::ResourcePriority priority, Upstream::LoadBalancerContext* ctx) {
conn_pool_data_ = thread_local_cluster.tcpConnPool(priority, ctx);
}
~TcpConnPool() override { cancelAnyPendingStream(); }
Contributor
As said in the issue, can we ENVOY_BUG if there are streams remaining?
/wait

Contributor Author

I could not see a way to do that given that upstream_handle_ does not provide information about the streams.

Contributor
Sorry, this was unclear. I meant if there was an actual reference that was getting cleaned up, i.e. if upstream_handle_ is non-null.

AFAIK the pool should be getting onPoolFailure before getting torn down. If you have a repeatable case where that's not happening without using in-house code, I'd be interested in understanding the lifetime issue. Otherwise, if we're going to avoid the UAF, we should indicate that it's a bug that we're not being shut down as expected.

Signed-off-by: Roelof DuToit <roelof.dutoit@broadcom.com>
@@ -26,6 +26,10 @@ class TcpConnPool : public Router::GenericConnPool, public Envoy::Tcp::Connectio
Upstream::ResourcePriority priority, Upstream::LoadBalancerContext* ctx) {
conn_pool_data_ = thread_local_cluster.tcpConnPool(priority, ctx);
}
~TcpConnPool() override {
cancelAnyPendingStream();
ENVOY_BUG(upstream_handle_ == nullptr, "upstream_handle not null");
Contributor
Isn't it a no-op to call the ENVOY_BUG after the cancel? I think it needs to be called before to be useful.
/wait
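The ordering point above can be sketched with a minimal, hypothetical stand-in for the pool (plain struct and a boolean flag instead of the real ENVOY_BUG macro): because the cancel helper nulls the handle, a check performed after it always passes and can never fire.

```cpp
#include <cassert>

// Hypothetical sketch, not the Envoy implementation: illustrates why the
// "handle must already be null" check has to run before the cancel.
struct Pool {
  int* upstream_handle_ = nullptr;
  bool bug_fired = false; // stands in for ENVOY_BUG firing

  void cancelAnyPendingStream() {
    if (upstream_handle_ != nullptr) {
      // ... cancel the pending stream ...
      upstream_handle_ = nullptr;
    }
  }

  void destroyWrongOrder() {
    cancelAnyPendingStream();
    // Handle was just nulled above, so this check is a no-op.
    bug_fired = (upstream_handle_ != nullptr);
  }

  void destroyRightOrder() {
    // Fires iff a stream was still pending at destruction time.
    bug_fired = (upstream_handle_ != nullptr);
    cancelAnyPendingStream();
  }
};
```

With a pending handle, `destroyWrongOrder` never reports the bug while `destroyRightOrder` does, which is the reviewer's point.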

Contributor Author

🤦

Contributor
Argh, also just noticed cancelAnyPendingStream is virtual.
Can you move the code to a non-virtual cancelAnyPendingStreamImpl and call that here, to avoid any issues with calling virtual code from the destructor?
Hopefully the last pass!
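The pattern the reviewer asks for can be sketched as follows (hypothetical class names; during destruction the dynamic type has already reverted to the class whose destructor is running, so the destructor delegates to a non-virtual helper instead of relying on virtual dispatch):

```cpp
#include <string>

// Minimal sketch of the "non-virtual Impl called from the destructor"
// pattern; the class and trace string are illustrative, not Envoy code.
class TcpConnPoolSketch {
public:
  explicit TcpConnPoolSketch(std::string& trace) : trace_(trace) {}

  // The destructor calls the non-virtual helper directly: safe, no
  // dependence on virtual dispatch mid-destruction.
  virtual ~TcpConnPoolSketch() { cancelAnyPendingStreamImpl(); }

  // Virtual entry point for ordinary (non-destructor) callers.
  virtual void cancelAnyPendingStream() { cancelAnyPendingStreamImpl(); }

private:
  // Non-virtual body shared by both paths; idempotent so a stream is
  // never cancelled twice.
  void cancelAnyPendingStreamImpl() {
    if (!cancelled_) {
      cancelled_ = true;
      trace_ += "cancelled;";
    }
  }

  std::string& trace_;
  bool cancelled_{false};
};
```

The helper is idempotent, so an explicit cancel followed by destruction performs the cleanup exactly once.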

Signed-off-by: Roelof DuToit <roelof.dutoit@broadcom.com>
Signed-off-by: Roelof DuToit <roelof.dutoit@broadcom.com>
Contributor

@alyssawilk alyssawilk left a comment
awesome!

@alyssawilk alyssawilk enabled auto-merge (squash) May 15, 2024 18:06
@alyssawilk alyssawilk merged commit ece9170 into envoyproxy:main May 15, 2024
52 of 53 checks passed
@roelfdutoit roelfdutoit deleted the tcpconnpool_fix branch May 21, 2024 17:14