
AWS LB only creates in one availability zone #74527

Closed
1hanymhajna opened this issue Feb 25, 2019 · 9 comments
Labels
area/provider/aws, kind/feature, kind/support, lifecycle/rotten, sig/cloud-provider

Comments

@1hanymhajna

My issue is related to #28586; I see the same behavior in EKS as well.

  • All the subnets carry the same cluster-name tag with the value shared.
  • The Service has the annotation service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true".

But when I look at the AZs attached to the load balancer, only the one with access to the outside is added (the other two AZs only have subnets with no connection to an internet gateway, i.e. internal subnets).

When I add them manually, it works fine.

To make things clear, I have 3 subnets, each in a different AZ:
subnet 1
subnet 2
subnet 3

Subnets 1 and 2 can reach the internet (via the internet gateway) through a NAT that is connected to subnet 3.

All of them carry the same tag with the value 'shared'.
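
For context, a minimal sketch of a Service carrying the annotation mentioned above, assuming an illustrative service named my-service selecting pods labeled app: my-app on port 80; only the cross-zone annotation comes from this report, everything else is a placeholder:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service        # illustrative name, not from the issue
      annotations:
        # the only annotation taken from the issue report
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    spec:
      type: LoadBalancer      # provisions an AWS load balancer via the cloud provider
      selector:
        app: my-app           # illustrative selector
      ports:
        - port: 80
          targetPort: 8080    # illustrative ports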

1hanymhajna added the kind/support label on Feb 25, 2019
k8s-ci-robot added the needs-sig label on Feb 25, 2019
@spender0

same issue

@jralph

jralph commented May 23, 2019

Any news on this issue?

@gree-gorey

gree-gorey commented May 24, 2019

I don't think that's Kubernetes related.

I was able to resolve this problem by creating public-facing subnets in each AZ that has instances in a private subnet with a NAT gateway.

So, instead of having this layout:

AZ-a: instance-1 --- subnet-1 (private) -x- LB
AZ-b: instance-2 ---- subnet-2 (public) --- LB
AZ-c: instance-3 ---- subnet-3 (public) --- LB

You should change it to this:

AZ-a: instance-1 --- subnet-1 (private) --- subnet-4 (public) --- LB
AZ-b: instance-2 ---- subnet-2 (public) ------------------------- LB
AZ-c: instance-3 ---- subnet-3 (public) ------------------------- LB

Here is the relevant AWS documentation:
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
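
To complement the layout above: the in-tree AWS cloud provider picks load-balancer subnets by their EC2 tags, so each AZ you want in the LB needs a correctly tagged public subnet. A rough sketch of the tags involved, written in YAML form purely for readability (these are EC2 resource tags, not a Kubernetes manifest; the cluster name my-cluster and the subnet labels are placeholders):

    # public subnet per AZ (e.g. subnet-4 in AZ-a), eligible for internet-facing LBs
    kubernetes.io/cluster/my-cluster: shared   # cluster ownership tag, as in the report above
    kubernetes.io/role/elb: "1"                # marks the subnet for external load balancers

    # private subnet per AZ (e.g. subnet-1 in AZ-a), used only for internal LBs
    kubernetes.io/cluster/my-cluster: shared
    kubernetes.io/role/internal-elb: "1"

In my understanding, when the role tags are absent the provider falls back to checking each subnet's route table for an internet-gateway route, which would explain why only the IGW-connected subnet was selected here.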

@randomvariable
Member

/sig aws

k8s-ci-robot added the sig/aws label and removed the needs-sig label on Jun 28, 2019
k8s-ci-robot added the area/provider/aws and needs-sig labels and removed the sig/aws label on Aug 6, 2019
@nikhita
Member

nikhita commented Aug 6, 2019

/sig cloud-provider

k8s-ci-robot added the sig/cloud-provider label and removed the needs-sig label on Aug 6, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Nov 4, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Dec 4, 2019
justinsb added the kind/feature label on Dec 13, 2019
@debu99

debu99 commented Jan 7, 2020

any update?

@1hanymhajna
Author

1hanymhajna commented Jan 19, 2020

Our solution was to create two subnets in every AZ, one private and one public with a NAT gateway; after that it worked as expected.
Closing the issue.
