AWS LB only creates in one availability zone #74527
Comments
Same issue.
Any news on this issue?
I don't think this is Kubernetes related. I was able to resolve this problem by creating public-facing subnets in each AZ that has instances in a private subnet behind a NAT gateway. So, instead of having this layout:
You should change it to this:
Here is the relevant AWS documentation:
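The fix described above can be sketched as a small Python model (illustrative only; the field names and subnet IDs are assumptions, not the actual cloud-provider code): for an internet-facing load balancer, the Kubernetes AWS cloud provider effectively picks at most one public subnet per AZ, so an AZ that has only private subnets contributes nothing to the load balancer.

```python
# Sketch of the per-AZ subnet choice for an internet-facing load balancer.
# Not the real cloud-provider code; data shapes and IDs are illustrative.

def pick_elb_subnets(subnets):
    """Keep at most one public subnet per AZ.

    A subnet counts as 'public' here when its route table has a route
    to an internet gateway; private subnets are skipped entirely.
    """
    chosen = {}
    for s in subnets:
        if not s["public"]:
            continue  # private subnets never back an internet-facing LB
        chosen.setdefault(s["az"], s["id"])  # first public subnet per AZ wins
    return sorted(chosen.values())

# The recommended layout: one private and one public subnet in every AZ,
# so every AZ ends up represented on the load balancer.
subnets = [
    {"id": "subnet-a-priv", "az": "us-east-1a", "public": False},
    {"id": "subnet-a-pub",  "az": "us-east-1a", "public": True},
    {"id": "subnet-b-priv", "az": "us-east-1b", "public": False},
    {"id": "subnet-b-pub",  "az": "us-east-1b", "public": True},
]
print(pick_elb_subnets(subnets))  # ['subnet-a-pub', 'subnet-b-pub']
```

With a public subnet in every AZ, each AZ yields one subnet for the load balancer; with the original layout, AZs lacking a public subnet are silently dropped, which matches the single-AZ behavior reported in this issue.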
/sig aws
/sig cloud-provider
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Any update?
Our solution was to create two subnets in every AZ, one private and one public with a NAT gateway; then it worked as expected.
My issue is related to #28586.
I see the same behavior in EKS as well.
But when I look at the AZs added to the load balancer, only the one with access to the outside is added (the other two AZs have subnets without a connection to a gateway, i.e. internal subnets).
When I added them manually, it worked fine.
To make things clear:
I have 3 subnets, each in a different AZ:
subnet 1
subnet 2
subnet 3
Subnets 1 and 2 can reach the outside world (via an internet gateway) using a NAT that is connected to subnet 3.
All of them have the same tag with the value 'shared'.
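That layout can be modeled with a short Python sketch (assumed route data and IDs, not the actual cloud-provider code) to show why only one AZ gets attached: a subnet whose default route points at an internet gateway is treated as public, while one routing through a NAT is private, so only subnet 3's AZ qualifies regardless of the shared cluster tag.

```python
# Sketch of why only one AZ is attached in the layout above.
# Route targets and subnet IDs are illustrative assumptions.

def is_public(subnet):
    """A subnet is public when its default route targets an internet gateway."""
    return subnet["default_route"].startswith("igw-")

subnets = [
    {"id": "subnet-1", "az": "eu-west-1a", "default_route": "nat-0abc"},  # via NAT in subnet 3
    {"id": "subnet-2", "az": "eu-west-1b", "default_route": "nat-0abc"},  # via NAT in subnet 3
    {"id": "subnet-3", "az": "eu-west-1c", "default_route": "igw-0def"},  # internet gateway
]

attached = [s["id"] for s in subnets if is_public(s)]
print(attached)  # ['subnet-3'] -- the LB lands in a single AZ
```

The cluster tag only tells Kubernetes which subnets belong to the cluster; it does not make a NAT-routed subnet eligible for an internet-facing load balancer, which is why adding the other subnets manually "works" but automatic discovery picks just one AZ.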