AWS LB only creates in one availability zone #28586
Comments
Experiencing the same problem... I added the second AZ myself, but that's really not the solution, or something I can live with for very long (K8s 1.4.3) |
How are you installing, @gleeb and @jensskott? |
I'm having the same issue, should we be avoiding the loadbalancer type? |
How are you installing, @rhenretta? |
Using the Tectonic installer, v1.6.2. Currently set at 1 master and 2 workers across 4 availability zones. When it creates an ELB it only has access to 1 master and 1 worker in the same AZ. I can manually add AZs to the ELB, but that defeats the purpose. |
Indeed it does defeat the purpose! The missing subnets sound like a Tectonic installer bug, I'm afraid. https://github.com/coreos/tectonic-installer is their issue tracker, I believe. We don't turn on cross-zone load balancing for "legacy reasons", i.e. I forgot to do it initially, and now I don't want to change behaviour silently. Instead we have an annotation you can set on your service with Type: LoadBalancer. So you would add an annotation with: service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
If you're using yaml, don't forget to quote "true", because yaml. |
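(For context, a minimal Service manifest carrying that annotation might look like the sketch below; the name, selector, and ports are placeholders, not taken from this thread.)
apiVersion: v1
kind: Service
metadata:
  name: my-app                 # placeholder name
  annotations:
    # value is quoted so YAML keeps it as a string
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app                # placeholder selector
  ports:
  - port: 80
    targetPort: 8080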
I added the annotation and recreated the service, but I have the same issue. |
The annotation only controls Cross-Zone Load Balancing (as in the original screenshot). Is that enabled now? For the attached subnets, that issue needs to be raised with Tectonic, at least until we determine it is a Kubernetes issue. We'd ask the same if you were using kops (although admittedly in that case we'll sometimes just address it in this repo, because of the high degree of overlap between the people who write kops and maintain Kubernetes on AWS). |
Since I have nodes running in us-east-1d and us-east-1e, the load balancer should work with both of these availability zones, and all 3 nodes should show as in service. That isn't the case: the ELB is created with only the us-east-1e AZ, so any nodes outside that AZ are out of service. So, with the annotation, yes, cross-zone load balancing is enabled, but without all the availability zones enabled on the load balancer the net result is still just a single AZ being accessible. |
Yes, the remaining issue is the installer issue I believe. That's at least the place to start. I'll validate it works in kops to double-check. |
Yeah, let me know. This load balancer isn't being created by the installer, but by creating a service after the fact via kubectl. I would think the installer wouldn't come into play here. |
Added an ELB to my kops cluster; k8s 1.6.1:
The installer makes a huge difference for kubernetes. |
@jensskott There are no sig labels on this issue. Please add a sig label. |
Any update? I am using 1.8.5 but still get an LB with only 1 AZ enabled. Annotations:
"service.beta.kubernetes.io/aws-load-balancer-internal": "0.0.0.0/0"
"service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled": "true"
I was able to add subnets for the other AZs manually. |
For AWS users: please refer to this comment as well.
# example terraform resource for public subnets
resource "aws_subnet" "shared_public" {
count = "${length(var.public_subnets)}"
vpc_id = "${var.vpc_id}"
cidr_block = "${var.public_subnets[count.index]}"
availability_zone = "${element(var.azs, count.index)}"
map_public_ip_on_launch = "${var.map_public_ip_on_launch}"
tags {
Name = "${format("%s-shared-public-%s", var.name, element(var.azs, count.index))}"
// https://github.com/kubernetes/kubernetes/blob/release-1.8/pkg/cloudprovider/providers/aws/tags.go#L34
KubernetesCluster = "your.cluster.zepl.io"
// https://github.com/kubernetes/kubernetes/blob/release-1.8/pkg/cloudprovider/providers/aws/tags.go#L51
"kubernetes.io/cluster/your.cluster.zepl.io" = "shared"
}
} |
You need to tag your subnets with the name of your cluster, i.e. with the kubernetes.io/cluster/<cluster-name> tag shown in the Terraform example above. |
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now, please do so. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Hi all, I tagged the subnets as @Sasso said, but in my case the ELB, which is in a utility (public) subnet, still doesn't see the nodes in the private subnets. If I add the AZ manually to the ELB it works, but I would like kops to do it automatically. How did you solve the issue? |
Hi,
When I look at the AZs added to the load balancer, it only adds the one that has access to the outside (the other two AZs have subnets with no connection to a gateway, i.e. internal subnets). When I added them manually it worked fine. To make things clear: subnets 1 and 2 can reach the outside world (via the internet gateway) using a NAT that is connected to subnet 3. All of them have the same tags, with the 'shared' value. |
When I create a service with a LoadBalancer in AWS, it is created with Cross-Zone Load Balancing: Disabled and is only made available in one of the two zones in us-west-1, for example.
Not sure if I need to extend the yaml file to enable cross-zone load balancing.
@justinsb