Explain why no subnets are found for creating an ELB on AWS #29298
Comments
cc @pwittrock Not sure if this is a support request or an actual issue?
@sdouche are you perhaps running out of free IP addresses in your subnet? Each ELB will also need a separate network interface in (each) target subnet, and I believe the rule is either 5 or 8 free addresses in the subnet for ELB creation to be allowed. @cgag ran into this recently during some operational work here at CoreOS and told me about it.
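A quick way to check that free-address count, assuming the aws CLI is configured and the subnets carry the KubernetesCluster tag with a placeholder cluster name, is something like:

```sh
# Show how many free IP addresses each cluster-tagged subnet still has;
# ELB creation needs a handful of free addresses in each target subnet.
aws ec2 describe-subnets \
  --filters "Name=tag:KubernetesCluster,Values=my-cluster" \
  --query "Subnets[].{ID:SubnetId,AZ:AvailabilityZone,FreeIPs:AvailableIpAddressCount}" \
  --output table
```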
Hi @colhom,
@sdouche could I see your diff that allows you to deploy to an existing subnet? I've been curious to see how folks are doing this - we use route tables and vpc peering heavily, so in our case we have no need to deploy to the same subnet.
Or are you just modifying the stack-template.json?
Just modified the stack-template.json and removed the creation of network items (more details here: coreos/coreos-kubernetes#340)
@sdouche Are those subnets private? A public ELB can't be created in private subnets. K8s will get all subnets tagged with the correct KubernetesCluster value, then ignore private subnets when creating a public ELB. You can try to tag a public subnet with the correct KubernetesCluster value, then wait for k8s to retry creating the ELB in that subnet.
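As an illustration, a minimal sketch of tagging a public subnet so the cloud provider can discover it (subnet ID and cluster name are placeholders):

```sh
# Tag a public subnet so the Kubernetes AWS cloud provider will consider it
# when creating an internet-facing ELB.
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 \
  --tags Key=KubernetesCluster,Value=my-cluster
```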
@qqshfox good point, it's a private subnet. Why are private subnets ignored? How do I create a private cluster?
You can create an internal ELB by using some magic k8s metadata tag.
“some magic k8s metadata tag”? What are they?
@sdouche You have to tag your subnet with the "KubernetesCluster" tag. I see you used kube-aws before; you can look at that for inspiration on how to properly create your subnets. Also note that making a load balancer in a private subnet doesn't make much sense if you want to expose a service to the world (can't route in).
Hi @pieterlange. OK, so if I want a private cluster, how do I expose services and pods without an ELB? Do I need to route the two overlay networks? How do I do that? I suppose with Flannel's aws backend.
I do not understand what you're trying to accomplish, so it's a little bit difficult to help. Issues like these (this is starting to look like a support request) are better solved through Slack chat or Stack Overflow, as there's no actionable material for the developers here. I suggest closing the ticket and trying over there.
You're right, sorry. Back to the initial request: I think it would be better to write "could not find any public subnets for creating the ELB" (for a public ELB of course, which is the default option). What do you think?
@justinsb WDYT?
How do I create a private ELB with private subnets?
Some information in #17620 about private ELBs.
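For reference, the annotation usually meant by the "magic metadata tag" above is service.beta.kubernetes.io/aws-load-balancer-internal; a minimal Service sketch using it (the service name and selector are illustrative, and older in-tree cloud provider versions expect the value 0.0.0.0/0 rather than true):

```sh
# Create a Service backed by an internal (private) ELB.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-nginx
  ports:
  - port: 80
    targetPort: 80
EOF
```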
Has anyone gotten this to work recently? I can get it to create the internal/private ELB, but none of the node machines are added to the ELB. If I manually add them everything works fine, so it is set up properly except for adding the ASG for the nodes or adding the nodes themselves. @justinsb Is there possibly some annotation I need to use to allow it to find the nodes it needs to add to the private ELB? I'm creating the cluster with kubeadm to join the nodes and the AWS cloud provider integration. The subnets, VPCs and autoscaling groups are all tagged with "KubernetesCluster" and a name. That does propagate to the ELB, but none of the node instances are picked up. I don't see anything specific in the code to add the node ASG to the ELB based on an annotation...
I have the same problem. I've got Kubernetes running in a private subnet. To explain it a bit further (this is AWS specific): our infrastructure team has created specific requirements regarding security. We need to have three layers (subnets) in one VPC. Diagram:
For this to work I had to manually create an ELB in layer 1 (public subnet) and point it to the master nodes in layer 2 (private subnet 1). I also installed the dashboard and this works fine together with the kubectl command line tool (both are exposed to the internet). However, when I deploy an app (e.g. nginx) I get the following error: "Error creating load balancer (will retry): Failed to create load balancer for service default/my-nginx: could not find any suitable subnets for creating the ELB". The Kubernetes dashboard says the service-controller is the source of this. And when I run:
it outputs:
Is there a way to tell the controller which subnet it should use to create the load balancer for the service?
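For anyone reproducing this, the service-controller events for the default/my-nginx Service (the namespace and name come from the error message above) can be inspected with something like:

```sh
# Show the Service and the events the service controller emitted for it.
kubectl describe service my-nginx --namespace default
kubectl get events --namespace default --field-selector involvedObject.name=my-nginx
```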
But no worries, Kubernetes is built upon 15 years of experience of running production workloads at Google. Amazon will fix their ELBs sometime soon.
@cyberroadie How did you solve your problem? I am in the same situation and have no idea how to resolve it.
Manually creating the routes via the AWS web interface.
@whereisaaron what is an "ownership value" in this case?
@2rs2ts the ownership value for the kubernetes.io/cluster/<cluster-name> tag is either owned or shared. You can read the code @2rs2ts to understand the process of finding a subnet.
This process is repeated for each AZ your cluster occupies.
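To see what that per-AZ selection has to choose from, one option (assuming the newer kubernetes.io/cluster/<cluster-name> tag scheme and a placeholder cluster name) is:

```sh
# List the subnets carrying the cluster tag, with their AZs, so you can see
# which candidates the cloud provider's subnet discovery will consider.
aws ec2 describe-subnets \
  --filters "Name=tag-key,Values=kubernetes.io/cluster/my-cluster" \
  --query "Subnets[].{ID:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock}" \
  --output table
```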
I have a bit of a problem with this design... it's not clear how to use the subnet tags in a setup like ours, where many short-lived clusters share the same subnets. A couple of problems:
IMHO @plombardi89 that is kind of a misuse of an autoscaler 😄 since 'claimed' nodes are pets, not cattle. However, if you want to go this way, then I can suggest you have the autoscaler create t2.nano instances (somewhere), with a cloud-init script that uses a CloudFormation template to create a tiny subnet with the one-node cluster and any subnet tags. When the t2.nano gets the scale-down or shutdown request, delete the CloudFormation stack to clean up the cluster and its tiny subnet.
@whereisaaron I agree it's a bit of a misuse, but it's not a pets vs. cattle distinction IMO. We use the autoscaler to always ensure there are single-node instances of Kubernetes available to be claimed. A claim request detaches the instance from the autoscaler, and for hours it can be used by a developer or for automated testing. At the end of that period the instance is terminated and never heard from again. Using the autoscaler this way is nice because there is no code needed to manage the pool capacity. Most clusters, once claimed, are used for a handful of minutes before being discarded. The only things shared by the claimed instances are the VPC and subnets. It feels like there should be another way to tell Kubernetes "Hey, these subnets are perfectly valid to deploy ELBs into" that doesn't rely on tags... maybe a configuration flag or using a Dynamo table to track this information.
I think tags are the correct mechanism @plombardi89; you'll have to propose a patch for the AWS cloud provider if you want a different one supported.
This is because for internal ELB auto subnet discovery, both tags are used:

- kubernetes.io/role/internal-elb = 1
- kubernetes.io/cluster/<cluster-name> = shared

As per the code, kubernetes.io/cluster/<cluster-name> is checked first and then kubernetes.io/role/internal-elb. If kubernetes.io/cluster/<cluster-name> is not present, the internal ELB is created on public subnets. kubernetes/kubernetes#29298 (comment)
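A minimal sketch of applying those two tags to a private subnet with the aws CLI (subnet ID and cluster name are placeholders):

```sh
# Tag a private subnet so the cloud provider will select it for internal ELBs.
aws ec2 create-tags \
  --resources subnet-0abcdef1234567890 \
  --tags Key=kubernetes.io/role/internal-elb,Value=1 \
         Key=kubernetes.io/cluster/my-cluster,Value=shared
```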
Relevant documentation in AWS: "Cluster VPC Considerations" (https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html). If using EKS, tagging of the VPC and subnets referenced in your EKS cluster appears to be automatic. However, it may be necessary to tag additional subnets.
You are right. I tagged my public subnet with |
@whereisaaron
I managed to restrict internal load balancers to only the intra_subnet subnets with the help of tags:
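One way to confirm such a restriction took effect, assuming the standard internal-elb role tag is used, is to list which subnets carry it:

```sh
# Only the intra subnets should show up here if the tagging is correct.
aws ec2 describe-subnets \
  --filters "Name=tag:kubernetes.io/role/internal-elb,Values=1" \
  --query "Subnets[].[SubnetId,AvailabilityZone]" \
  --output table
```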
I would only like to add that in Pivotal Container Service (PKS) you have to tag the ELB subnet this way:
Hi,
I created a Kubernetes cluster from coreos-aws (with existing VPC and subnets). I can't create an ELB on it.
The file:
The command:
I manually added the missing KubernetesCluster tag on the subnet, without result. Can you add a clear message about what is missing?