This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

Creating cluster with an existing subnet #52

Closed
sdouche opened this issue Nov 10, 2016 · 28 comments

@sdouche

sdouche commented Nov 10, 2016

Hello,
We can specify an existing VPC, but not an existing subnet. At work, I'm not allowed to create network resources, so I have forked stack-template.json, but it would be cool to have this option built in.

Thanks.

@sdouche sdouche changed the title Creating cluster with existing subnet Creating cluster with an existing subnet Nov 10, 2016
@pieterlange
Contributor

pieterlange commented Nov 10, 2016

Provisioning in existing subnets creates a number of weird edge cases that I do not feel like supporting.

This request keeps showing up though, so out of interest, can you clarify why it's not sufficient to use existing route tables and attach those to the new subnets? A little more background on what you're trying to do would also help us see why you need this feature.

At work, I can't create network stuff.

Do you mean in AWS, or at your office?

@sdouche
Author

sdouche commented Nov 10, 2016

Hi @pieterlange. Thanks for your response. I don't understand this:

Provisioning in existing subnets creates a number of weird edge cases

What are you talking about? Do you have some examples? As far as I can see, I use existing subnets without issues.

why it's not sufficient to use existing route tables and attach those to the new subnets?

I can't create subnets into the AWS account of my company.

@mumoshu
Contributor

mumoshu commented Nov 10, 2016

@sdouche Hi, would you mind letting me summarize?

  • We just recommend not mixing resources created by kube-aws with ones from elsewhere in the same subnets.
  • If you meant that your colleague(s) created subnets dedicated to kube-aws resources (including EC2, ELB, etc.), it would work.
  • AFAIK, the weird edge cases @pieterlange mentioned happen only when you mix kube-aws resources with others in the same subnet(s).
  • We tend not to recommend deploying to existing subnets only because of that; i.e. if the existing subnets are dedicated to kube-aws, it's OK, IMHO. We should take extra care that users don't abuse a possible deploy-to-existing-subnets feature, though.

@cknowles
Contributor

cknowles commented Nov 11, 2016

I wanted to share my experience with shared subnets. Having encountered a slightly different use case, but with the same end result of wanting to share network resources, I did a pull request here for shared subnets. I ended up replacing that with shared route tables.

The main reason I ditched shared subnets is that k8s and kube-aws rely on some aspects of the subnets, and to a lesser extent the route tables, which meant that spinning up a new cluster was a little fragile. It's definitely related to what @mumoshu says above - dedicated subnets are OK, but mixing them tends to cause issues. I think it's still achievable with subnets; I chose the path of slightly lesser resistance. There would have to be detailed instructions on exactly what the subnet and route table requirements are across k8s and kube-aws.

@pieterlange
Contributor

One of the side effects of not having the subnets managed by your deployment tool: kubernetes/kubernetes#29298

Opinion: organizations that cling to an old-world division of labor between network teams and IT systems teams can do the necessary work for product integration themselves (and deal with the edge cases themselves).

@sdouche
Author

sdouche commented Nov 12, 2016

Hi everyone. I never talked about shared networks (I use dedicated subnets for k8s), only about using existing subnets (created for me). At work, we have a complex network topology (8 AWS accounts, 2 datacenters, etc.), which is why the AWS network objects are managed only by the sysadmin teams.

EDIT: I'm OK if you don't want to change the CloudFormation template, but at the very least, please remove the subnet verification in cluster/cluster.go:

if err := c.ValidateExistingVPC(*existingVPC.CidrBlock, subnetCIDRS); err != nil {
    return fmt.Errorf("error validating existing VPC: %v", err)
}
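For context, the check being discussed is essentially a CIDR-containment validation. Below is a simplified, self-contained sketch of what such a check might look like - the function and variable names are illustrative, not the actual kube-aws code, and it handles IPv4 only:

```go
package main

import (
	"fmt"
	"net"
)

// validateSubnetsInVPC is an illustrative sketch (not the actual kube-aws
// implementation): it verifies that every subnet CIDR falls entirely
// inside the VPC CIDR, which is the kind of verification at issue here.
func validateSubnetsInVPC(vpcCIDR string, subnetCIDRs []string) error {
	_, vpcNet, err := net.ParseCIDR(vpcCIDR)
	if err != nil {
		return fmt.Errorf("invalid VPC CIDR %q: %v", vpcCIDR, err)
	}
	for _, s := range subnetCIDRs {
		ip, subnetNet, err := net.ParseCIDR(s)
		if err != nil {
			return fmt.Errorf("invalid subnet CIDR %q: %v", s, err)
		}
		// Both the subnet's network address and its highest address
		// must be inside the VPC network for full containment.
		if !vpcNet.Contains(ip) || !vpcNet.Contains(lastIP(subnetNet)) {
			return fmt.Errorf("subnet %s is not contained in VPC %s", s, vpcCIDR)
		}
	}
	return nil
}

// lastIP returns the highest IPv4 address in the network.
func lastIP(n *net.IPNet) net.IP {
	ip := n.IP.To4()
	last := make(net.IP, len(ip))
	for i := range ip {
		last[i] = ip[i] | ^n.Mask[i]
	}
	return last
}

func main() {
	// err is nil here: both subnets fit inside the VPC
	err := validateSubnetsInVPC("10.0.0.0/16", []string{"10.0.1.0/24", "10.0.2.0/24"})
	fmt.Println(err)
}
```

Removing such a check (as requested above) would let kube-aws proceed even when it cannot see the subnets it is expected to use.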

@sdouche
Author

sdouche commented Nov 12, 2016

@pieterlange that's the wrong example here - you created a deployment tool for internet-facing clusters only. I don't want to expose our Jenkins cluster to the world.

@mumoshu mumoshu added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Nov 14, 2016
@pieterlange
Contributor

pieterlange commented Nov 14, 2016

Okay, this is a fair use case, but I don't think we should break the experience for other users based on this very specific example.

I feel that if we're going to support this, we shouldn't needlessly complicate cluster.yaml further with toggles (I'm guilty of doing this myself) and should instead put it under existing parameters.

So instead of defining:

subnets:
- availabilityZone: us-east-1a
  instanceCIDR: "10.50.1.0/24"
- availabilityZone: us-east-1b
  instanceCIDR: "10.50.2.0/24"
- availabilityZone: us-east-1c
  instanceCIDR: "10.50.3.0/24"

You'd define the following:

subnets:
- availabilityZone: us-east-1a
  subnetId: "subnet-abcdef1"
- availabilityZone: us-east-1b
  subnetId: "subnet-abcdef2"
- availabilityZone: us-east-1c
  subnetId: "subnet-abcdef3"

Just an example/proposal. What do you think @sdouche @mumoshu @c-knowles ?

Maybe even keep the instanceCIDR in and only create the subnet if subnetId is not defined?
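Under that last variant, a mixed configuration might look something like this (a sketch only - the subnetId key is the one proposed above, not an option that existed at the time):

```yaml
subnets:
# existing subnet, reused as-is because subnetId is given
- availabilityZone: us-east-1a
  subnetId: "subnet-abcdef1"
# no subnetId, so kube-aws would create this subnet itself
- availabilityZone: us-east-1b
  instanceCIDR: "10.50.2.0/24"
```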

@sdouche
Author

sdouche commented Nov 14, 2016

Hi @pieterlange. Seems good to me. I'm open to beta testing this change :).

Thanks.

@aholbreich

I need to be able to deploy k8s to existing subnets because those subnets are preconfigured with a specific design in mind regarding routing and IP ranges...

@mumoshu
Contributor

mumoshu commented Nov 17, 2016

Hi @aholbreich.
FYI, you can modify stack-template.json by hand before bringing up the CloudFormation stack to reflect your requirements. This can be done in today's kube-aws!

@cknowles
Contributor

Also, if you are interested in checking out some code changes to see something right now, I have a branch of the older repo with some changes in it.

@sdouche
Author

sdouche commented Nov 17, 2016

Not true @mumoshu, kube-aws does a check (see my comment above).

@mumoshu
Contributor

mumoshu commented Nov 17, 2016

@sdouche Excuse me if I'm missing the point, but I meant using e.g. the awscli to finally create the stack from the stack template initially generated via kube-aws up --export and then modified by hand!
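That workflow can be sketched roughly as follows (the stack name, file path, and capability flag are illustrative placeholders, not values prescribed by kube-aws):

```shell
# Generate the CloudFormation template without creating the stack
kube-aws up --export

# Edit the exported stack-template.json by hand to reference your
# existing subnets, then create the stack yourself with the AWS CLI
aws cloudformation create-stack \
  --stack-name my-kube-cluster \
  --template-body file://stack-template.json \
  --capabilities CAPABILITY_IAM
```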

@sdouche
Author

sdouche commented Nov 17, 2016

@mumoshu oh, I see.

@aholbreich

aholbreich commented Nov 17, 2016

@mumoshu maybe I do not understand it well, but isn't the whole point of a CLI tool like this to be able to create and manage the whole cluster from the CLI, without touching any intermediate artifacts?

Otherwise it is better to use something more declarative and define the whole setup step by step, like Ansible with kubeadm.

@Camsteack

@aholbreich Not necessarily - it can be used to generate CloudFormation templates that you customize for your needs and store in a version control system. We need to deploy into existing private subnets, and so far we have been using the CLI tool to generate the template and track changes in git.
But it's true that it would be nice to be able to specify existing subnets.

@pieterlange
Contributor

@Camsteack you can use your own route table and set mapPublicIPs in order to deploy to private subnets.
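A sketch of that approach in cluster.yaml, assuming the routeTableId and mapPublicIPs top-level options available in kube-aws at the time (the IDs and CIDR are placeholders):

```yaml
# Reuse an existing route table and keep instances off public IPs,
# so the kube-aws-created subnets behave as private ones
routeTableId: rtb-0123abcd
mapPublicIPs: false
subnets:
- availabilityZone: us-east-1a
  instanceCIDR: "10.50.1.0/24"
```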

@Camsteack

@pieterlange this is what we are doing 😃
It's just a bit painful to manually change the template to remove the subnet creation and specify your own.

@mumoshu mumoshu removed the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Nov 30, 2016
@mumoshu
Contributor

mumoshu commented Feb 16, 2017

This is supported since v0.9.4-rc.1. Please read the updated comments in cluster.yaml to select the appropriate set of settings for your needs 😃

@mumoshu mumoshu closed this as completed Feb 16, 2017
@mumoshu mumoshu added this to the v0.9.4-rc.1 milestone Feb 16, 2017
@sonnysideup

kube-aws version: v0.9.7

Is it still true that managed subnets are preferred over existing ones? Here's my use case.

I'm using Terraform to build out a brand-new VPC and all of its associated objects (route tables, NAT gateways, subnets, etc...). I specifically created an "application" tier of subnets and now I want kube-aws to use those when standing up a k8s cluster.

I'm really trying to assess the risks involved here.

@redbaron
Contributor

We have run kube-aws in existing subnets without problems for a long time - it's pretty safe to use.

@sonnysideup

@redbaron One of the things we're trying to understand is how to get this working with existing subnets.

In the case where our subnets are managed by kube-aws, you can differentiate between public/private subnets using the following option:

subnets:
  - name: ManagedPublicSubnet1
    private: true # does some magic
    availabilityZone: us-west-1a
    instanceCIDR: "10.0.0.0/24"

When we want to leverage an existing public subnet (configured to use a route table that is connected to a NAT gateway and confirmed working), how does the K8s cluster know it is public?

@redbaron
Contributor

redbaron commented Jul 18, 2017

You are missing ID:

subnets:
  - name: ExistingSubnet1
    private: true # does some magic
    availabilityZone: us-west-1a
    instanceCIDR: "10.0.0.0/24"
    id: subnet-aabbcc # <<< HERE

@redbaron
Contributor

Keep private: true even for a public subnet - the if/else logic in kube-aws is slightly twisted. Basically, if id: is specified AND private: true is present, the subnet is used as-is; kube-aws doesn't try to do anything with it.

That is how it works for us right now. Maybe private: true is not needed for existing subnets anymore, but there were some quirks, so that is how it is in our configs.

@sonnysideup

I'm sorry, I was unclear. We're able to get existing private subnets working, but we're failing to configure existing PUBLIC subnets correctly. 😢

The documentation in cluster.yaml related to using existing public subnets states:

An internet gateway(igw) and a route table contains the route to the igw must have been properly configured by YOU. kube-aws tries to reuse the subnet specified by id or idFromStackOutput but kube-aws never modify the subnet.

Even with that setup, every time I launch, say, a public-facing LoadBalancer service in K8s, it stays in a pending state with the following error message:

Error creating load balancer (will retry): Failed to create load balancer for service default/svc-name: could not find any suitable subnets for creating the ELB.

Another team member found that adding an AWS tag to the subnet allows the cluster to launch LBs in the public subnets:

AWS tag key = kubernetes.io/cluster/my-cluster-name, value = true.

Is that expected? I fear I'm missing something here.

@redbaron
Contributor

Yes, the tags need to be present on the subnet; existing subnets are not tagged by kube-aws.

The following tags need to be present:

VPC:
kubernetes.io/cluster/$CLUSTER_NAME=shared

Subnets where internal ELBs will be created:
kubernetes.io/cluster/$CLUSTER_NAME=shared
kubernetes.io/role/internal-elb=true

Subnets where external ELBs will be created:
kubernetes.io/cluster/$CLUSTER_NAME=shared
kubernetes.io/role/elb=true
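A sketch of applying those tags with the AWS CLI (the VPC ID, subnet ID, and cluster name below are placeholders):

```shell
CLUSTER_NAME=my-cluster

# Tag the VPC for the cluster
aws ec2 create-tags --resources vpc-0123abcd \
  --tags "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=shared"

# Tag a subnet intended for external (internet-facing) ELBs
aws ec2 create-tags --resources subnet-aabbcc \
  --tags "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=shared" \
         "Key=kubernetes.io/role/elb,Value=true"
```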

@sonnysideup

Thank you so much for your help. I was able to get this working, and found another issue that gives more context. NOTE: the cluster tag above is valid for K8s v1.6; v1.5 uses KubernetesCluster.

kubernetes/kubernetes#29298

davidmccormick pushed a commit to HotelsDotCom/kube-aws that referenced this issue Jul 18, 2018
…ues-with-nds-rollouts to hcom-flavour

* commit 'f6c440e0654c605a24248ad6b58a04d097e49c56':
  Make sure that canal and flannel land on all nodes (including tainted ones).  Correct addition syntax in add-node-cidrs.