
AWS LB only creates in one availability zone #28586

Closed
jensskott opened this issue Jul 7, 2016 · 22 comments

jensskott commented Jul 7, 2016

When I create a Service with a LoadBalancer in AWS, it is created with Cross-Zone Load Balancing: Disabled and is only made available in one of the two zones in us-west-1, for example.
Not sure if I need to extend the YAML file to enable cross-zone load balancing.
(screenshots from 2016-07-08 showing the ELB with Cross-Zone Load Balancing disabled and only one availability zone attached)
@justinsb


gleeb commented Nov 9, 2016

I'm experiencing the same problem... I added the second AZ myself, but that's really not a solution, or something I can live with for very long (K8s 1.4.3).

@justinsb
Member

How are you installing, @gleeb and @jensskott?

@rhenretta

I'm having the same issue. Should we be avoiding the LoadBalancer type?

@justinsb
Member

How are you installing, @rhenretta?

@rhenretta

Using the Tectonic installer, v1.6.2. Currently set up with 1 master and 2 workers across 4 availability zones. When it creates an ELB, the ELB only has access to 1 master and 1 worker, in the same AZ. I can manually add AZs to the ELB, but that defeats the purpose.

@justinsb
Member

Indeed it does defeat the purpose! The missing subnets sound like a Tectonic installer bug, I'm afraid. I believe https://github.com/coreos/tectonic-installer is their issue tracker.

We don't turn on cross-zone load balancing for "legacy reasons", i.e. I forgot to do it initially, and now I don't want to change behaviour silently. Instead we have this annotation you can set on your Service with type: LoadBalancer:

https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/aws.go#L109-L111

So you would add an annotation with:

service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"

If you're using YAML, don't forget to quote "true"; otherwise YAML parses it as a boolean, and annotation values have to be strings.
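For an existing Service, the annotation can also be applied with kubectl; a minimal sketch, where my-service is a placeholder name (depending on your version, you may still need to recreate the Service for the load balancer to pick the change up):

kubectl annotate service my-service \
  service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled="true"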


rhenretta commented May 19, 2017

I added the annotation and recreated the service, but I have the same issue:

apiVersion: v1
kind: Service
metadata:
  name: taggenerator-staging
  namespace: taggenerator
  annotations: 
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  type: LoadBalancer
  selector:
    app: TagGenerator-Staging
  ports:
    - name: web-http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: service-http
      protocol: TCP
      port: 8080
      targetPort: 8080

@rhenretta

(screenshot of the resulting ELB)

@justinsb
Member

The annotation only controls Cross-Zone Load Balancing (as in the original screenshot). Is that enabled now?

For the attached subnets, that issue needs to be raised with Tectonic, at least until we determine it is a Kubernetes issue. We'd ask the same if you were using kops (although admittedly, in that case we'll sometimes just address it in this repo, because of the high degree of overlap between the people who write kops and those who maintain Kubernetes on AWS).


rhenretta commented May 19, 2017

Since I have nodes running in us-east-1d and us-east-1e, the load balancer should work with both of these availability zones, and all 3 nodes should show as in service. That isn't the case: the ELB is created with only the us-east-1e AZ, so any nodes outside that AZ are out of service.

So, with the annotation, yes, cross-zone load balancing is enabled, but without all the availability zones enabled on the load balancer, the net result is still that only a single AZ is accessible.

@justinsb
Member

Yes, the remaining issue is the installer issue I believe. That's at least the place to start.

I'll validate it works in kops to double-check.

@rhenretta

Yeah, let me know. This load balancer isn't being created by the installer, but by creating a Service after the fact via kubectl. I would think the installer wouldn't come into play here.

@justinsb
Member

Added an ELB to my kops cluster; k8s 1.6.1:

> aws elb describe-load-balancers
...
 "AvailabilityZones": [
                "us-east-1b", 
                "us-east-1c", 
                "us-east-1d"
            ], 
...

The installer makes a huge difference for Kubernetes.

@k8s-github-robot

@jensskott There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
(2) specifying the label manually: /sig <label>

Note: method (1) will trigger a notification to the team. You can find the team list here.

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 31, 2017
@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 1, 2017

1ambda commented Dec 20, 2017

Any update? I am using 1.8.5 but still get an LB with only 1 AZ enabled.

  annotations:
    "service.beta.kubernetes.io/aws-load-balancer-internal": "0.0.0.0/0"
    "service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled": "true"

I was able to add subnets for the other AZs manually, using the Edit Availability Zones menu:

(screenshot of the ELB after manually adding the other subnets)
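The same manual step can also be scripted with the AWS CLI for a classic ELB; a sketch, where the load balancer name and subnet IDs are placeholders:

aws elb attach-load-balancer-to-subnets \
  --load-balancer-name <elb-name> \
  --subnets subnet-aaaa1111 subnet-bbbb2222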


1ambda commented Dec 21, 2017

For AWS users:

  1. My k8s cluster uses existing public subnets (in multiple AZs) instead of kops-generated subnets.
  2. Those public subnets therefore didn't have the tags shown below.
  3. I added the tags and then created the internal-LoadBalancer-type k8s Service again. Now it works.

Please refer to this comment as well.

# example terraform resource for public subnets. 

resource "aws_subnet" "shared_public" {
  count = "${length(var.public_subnets)}"

  vpc_id                  = "${var.vpc_id}"
  cidr_block              = "${var.public_subnets[count.index]}"
  availability_zone       = "${element(var.azs, count.index)}"
  map_public_ip_on_launch = "${var.map_public_ip_on_launch}"

  tags {
    Name = "${format("%s-shared-public-%s", var.name, element(var.azs, count.index))}"

    // https://github.com/kubernetes/kubernetes/blob/release-1.8/pkg/cloudprovider/providers/aws/tags.go#L34
    KubernetesCluster = "your.cluster.zepl.io"
    // https://github.com/kubernetes/kubernetes/blob/release-1.8/pkg/cloudprovider/providers/aws/tags.go#L51
    "kubernetes.io/cluster/your.cluster.zepl.io" = "shared"
  }
}


iamsaso commented Mar 2, 2018

You need to tag your subnets with the name of your cluster, e.g. kubernetes.io/cluster/your.cluster.zepl.io, with the value shared. This will allow Kubernetes to select the zones you want to use.
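For example, with the AWS CLI (a sketch; the subnet ID is a placeholder, and the cluster name has to match your own):

aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/your.cluster.zepl.io,Value=shared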

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 14, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 14, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close


angegar commented Nov 20, 2018

Hi all, I tagged the subnets as @Sasso said, but in my case the ELB, which is in a utility (public) subnet, still does not see the nodes in the private subnets. If I add the AZs manually to the ELB it works, but I would like to do it with kops so it is automated. How did you solve the issue?

@1hanymhajna

Hi,
I see the same behavior in EKS as well:

  • All the subnets have the same tag, equal to the cluster name, with the value shared
  • I added service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true" as an annotation

But when I look at the AZs added to the load balancer, it only adds the one that has access to the outside (the other two AZs have subnets without a connection to an internet gateway, i.e. internal subnets).

When I add them manually, it works fine.

To make things clear:
I have 3 subnets, each in a different AZ:
subnet 1
subnet 2
subnet 3

Subnets 1 and 2 can reach the outside world (via the internet gateway) using a NAT connected to subnet 3.

All of them have the same tag with the value 'shared'.
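To double-check which subnets actually carry the cluster tag, an AWS CLI query like this can help (a sketch; the cluster name is the placeholder used earlier in this thread):

aws ec2 describe-subnets \
  --filters "Name=tag:kubernetes.io/cluster/your.cluster.zepl.io,Values=shared" \
  --query "Subnets[].[SubnetId,AvailabilityZone]" \
  --output table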
