Conversation
@c-knowles give me a few days to digest this PR. I haven't figured out where I stand on kube-aws deploying to existing subnets.
Sure. To use the option, just put the IDs of the subnets in as per the cluster template changes. Are there other places enforcing that the specified options are valid as a set? For instance, specifying the subnet IDs without specifying the VPC ID probably isn't a combination we'd want to support.

The previous workarounds I've seen mean that kube-aws was still creating the subnets, but the stack template then had to be edited to accommodate the existing route tables. I had considered #212 and whether to just add that network setup as an option using a typical layout, i.e. build it directly into kube-aws. However, one major advantage of doing it this way is that we can share the VPC between different systems, which in turn cuts the cost of running several NAT Gateways. So we'd at least need an option to turn off network setup altogether.

What I'm wondering now is whether it would be best to split the network setup out into a separate stack inside kube-aws and give the user the choice of which one they want, and whether to create one at all. In the first instance we could support the NAT Gateway scenario and the all-public default that already exists. The network stack could be created first and its IDs exported into the cluster.yaml file. I think this would simplify the K8s stack setup, because it would always take a list of IDs for the VPC no matter which option the user chooses.
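For reference, reusing an existing VPC and its subnets via the cluster template changes might look roughly like this (field names follow the cluster.yaml shape discussed in this PR; the IDs are placeholders, not real resources):

```yaml
# Illustrative cluster.yaml fragment: point kube-aws at an existing VPC
# and existing subnets instead of having it create them.
vpcId: vpc-0a1b2c3d
subnets:
  - availabilityZone: us-west-1a
    id: subnet-0123abcd   # existing subnet, so no instanceCIDR here
  - availabilityZone: us-west-1b
    id: subnet-4567efgh
```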
Only check whether subnets have a zone or an ID; it seems the simplest way to reuse all the existing validation.
On second thought, the availability zone is required to define the scaling group, so make sure we always have it.
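A rough sketch of that validation rule in Go (the struct and field names here are illustrative, not the actual kube-aws types):

```go
package main

import (
	"errors"
	"fmt"
)

// Subnet mirrors the shape of a subnet entry in cluster.yaml.
// Field names are illustrative only.
type Subnet struct {
	ID               string // set when reusing an existing subnet
	AvailabilityZone string // always required: the scaling group needs an AZ
	InstanceCIDR     string // set when kube-aws should create the subnet
}

// validateSubnet enforces the rule discussed above: an availability zone
// is always required, and each subnet must either reference an existing
// ID or provide a CIDR for kube-aws to create one.
func validateSubnet(s Subnet) error {
	if s.AvailabilityZone == "" {
		return errors.New("availabilityZone is required for every subnet")
	}
	if s.ID == "" && s.InstanceCIDR == "" {
		return errors.New("either id or instanceCIDR must be set")
	}
	return nil
}

func main() {
	existing := Subnet{ID: "subnet-0123abcd", AvailabilityZone: "us-west-1a"}
	fmt.Println(validateSubnet(existing)) // <nil>

	missingAZ := Subnet{ID: "subnet-0123abcd"}
	fmt.Println(validateSubnet(missingAZ))
}
```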
Having a little trouble checking how well this works on latest master; it seems to have an issue with controller startup. I thought it was my new config at first, but I reverted to the old setup (kube-aws creating the VPC etc.) and the controller logs are saying this over and over:
@colhom which branch is best to go with if I want to ensure stability? I will try another cluster shortly using the latest published kube-aws, just to check if it's a more general problem with the OS updates or similar. EDIT: Latest master seems to work if
Interestingly, the latest published build of kube-aws (0.8.1) does exactly as above on the controller but eventually seems to recover:
If our workers are in private subnets, we may wish to place the controller in a public subnet to access the dashboard without any tunnelling.
@colhom based on some initial usage of this, perhaps allowing the usage of existing route tables would be a better way to go, so we have less chance of conflicting private IPs. We could even support the new cross-stack references as well. Any thoughts on that?
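For context, CloudFormation cross-stack references work by exporting a value from one stack's `Outputs` and importing it elsewhere with `Fn::ImportValue`. A sketch of how a separate network stack could hand an existing route table to the cluster stack (resource and export names are made up for illustration):

```yaml
# Network stack: export the route table so other stacks can attach to it.
Outputs:
  PrivateRouteTable:
    Value: !Ref PrivateRouteTable
    Export:
      Name: network-PrivateRouteTable
```

```yaml
# Cluster stack: import the exported route table instead of creating one.
Resources:
  SubnetRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref Subnet0
      RouteTableId: !ImportValue network-PrivateRouteTable
```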
Closing this for now. As described in #716, I believe using existing route tables rather than subnets is a better solution.
We want to manage our VPC separately, so this supports deployment to existing VPC subnets.
Inspired by @eugenetaranov's existing work in #212, since it avoids the manual edit of the generated stack template.