This repository has been archived by the owner on Sep 4, 2021. It is now read-only.

UX regarding DNS pointing to kube api node #257

Closed
xied75 opened this issue Jan 22, 2016 · 9 comments
Comments

@xied75

xied75 commented Jan 22, 2016

Dear All,

Currently, after `kube-aws up` finishes, I set an A record in AWS Route 53 for the domain name so that I can run kubectl. The problem is that Route 53 seems to need some time to settle on the new IP value: the name either fails to resolve or points at a previous value (I'm doing rapid test loops and not always cleaning up the Route 53 record sets).

That means I can't immediately start firing off my batch of kubectl commands, since they would just error out while DNS can't resolve the name, or would use a stale value (which is more my fault).

So I wonder: should the kubeconfig just use the new IP address rather than going through DNS?

My temporary fix is to add a line to /etc/hosts, bypassing DNS altogether. Comments?
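For reference, the workaround is a one-line hosts entry mapping the externalDNSName straight to the controller EIP. The IP and hostname below are placeholders; editing the real /etc/hosts needs root, so this sketch works against a scratch copy:

```shell
# Map externalDNSName directly to the controller EIP (values hypothetical).
# In practice: append the same line to /etc/hosts as root.
cp /etc/hosts /tmp/hosts.scratch
echo '52.10.20.30 kubernetes.example.com' >> /tmp/hosts.scratch
grep 'kubernetes.example.com' /tmp/hosts.scratch
```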

@mgoodness

I add both the IP address and the FQDN to my kubeconfig, commenting out the FQDN until dig reports that the record has updated. Then I go back into the kubeconfig and swap the lines.
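The "wait until dig reports the record" step can itself be automated. A minimal sketch, polling the resolver until the name maps to the expected controller IP (the function name and defaults are hypothetical; it uses `getent` so it doesn't depend on dig being installed):

```shell
# Poll until <name> resolves to <expected-ip>; give up after [max-tries].
wait_for_dns() {
  # usage: wait_for_dns <name> <expected-ip> [max-tries] [sleep-secs]
  name=$1; ip=$2; tries=${3:-60}; pause=${4:-5}
  i=0
  until getent ahosts "$name" | grep -qF "$ip"; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      echo "gave up waiting for $name -> $ip" >&2
      return 1
    fi
    sleep "$pause"
  done
}
```

Then a deploy script can do something like `wait_for_dns kubernetes.example.com 52.10.20.30 && kubectl get nodes` instead of hand-swapping kubeconfig lines.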

@xied75
Author

xied75 commented Jan 22, 2016

@mgoodness That doesn't sound much like automation?

@mgoodness

Regardless, that's my workaround. With everything kube-aws does, moving a couple #s around when necessary doesn't strike me as particularly onerous.

@aaronlevy
Contributor

The kubeconfig can use the IP, but you will likely need to add `insecure-skip-tls-verify: true` (which we don't want to assume by default).

The issue is that we don't have a public IP for the master node at launch time (when we generate the TLS assets). We can, however, sign the certs with an expected DNS name.

During testing I usually just go the /etc/hosts route and don't bother setting up an actual DNS record.
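To make the first option concrete, here is a sketch of a kubeconfig that talks to the raw EIP. Because the API server cert is signed only for the DNS name, TLS verification has to be skipped; the IP, cluster name, and user are all placeholders:

```shell
# Minimal kubeconfig pointing at the controller EIP instead of the DNS name.
# insecure-skip-tls-verify is required because the cert's SAN list only
# contains the expected DNS name, not this IP.
cat > /tmp/kubeconfig-by-ip <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: kube-aws
  cluster:
    server: https://52.10.20.30
    insecure-skip-tls-verify: true
contexts:
- name: kube-aws
  context:
    cluster: kube-aws
    user: admin
current-context: kube-aws
users:
- name: admin
  user: {}
EOF
# kubectl --kubeconfig=/tmp/kubeconfig-by-ip get nodes
```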

@aaronlevy aaronlevy self-assigned this Jan 22, 2016
@xied75
Author

xied75 commented Jan 25, 2016

@aaronlevy Thanks for the explanation. We use kube-aws as part of our tooling for production deployments (an on-demand, multi-cluster, multi-tenancy setup), so anything manual needs to go. The workaround of adding a line to /etc/hosts does work, and it's what I'm doing: I run kube-aws inside a container, and since each container run deploys one cluster it's fine to mess with that container's hosts file. It wouldn't be wise, though, if multiple runs targeted the same hosts file.

Given that the kube API IP is an AWS EIP, and users might already own an EIP or want to designate a fixed value for business reasons, maybe we could allow configuring it in the YAML so that the IP can be included in the cert signing. Or separate the EIP creation from the rest?

All of my discussion was more about the flow; that's why I used the word UX. :)

@aaronlevy
Contributor

Would it be fair to describe this as a feature request for providing custom TLS assets? We've talked about this: we currently make a lot of assumptions and can't cover every use case. The workflow would essentially let you provide custom certificates (with any additional IPs, for example), which would then be used in the deployment process.

Initially this would likely point to docs similar to these: https://coreos.com/kubernetes/docs/latest/openssl.html#kubernetes-api-server-keypair

Then you would probably just need to place those assets in a known location during deployment.
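Following the linked CoreOS doc, generating an API server keypair whose SAN list includes a fixed EIP looks roughly like the sketch below. The hostname and IP are placeholders; only the key and CSR are produced here (signing against the cluster CA would follow the same doc):

```shell
# OpenSSL config with the extra IP SAN (all names/IPs hypothetical).
cat > /tmp/apiserver-openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes.example.com
IP.1 = 52.10.20.30
EOF

# Key + CSR carrying the requested SANs, incl. the pre-allocated EIP.
openssl genrsa -out /tmp/apiserver-key.pem 2048
openssl req -new -key /tmp/apiserver-key.pem -out /tmp/apiserver.csr \
  -subj "/CN=kube-apiserver" -config /tmp/apiserver-openssl.cnf
```

The resulting cert would verify for both the DNS name and the raw IP, so a kubeconfig using the EIP no longer needs to skip TLS verification.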

@colhom
Contributor

colhom commented Mar 22, 2016

This is part of the list for #340

@colhom
Contributor

colhom commented Mar 29, 2016

The Route 53 integration should be optional. These are the modes we should support:

  • No Route 53 integration. It's up to you to make the controller EIP routable via externalDNSName (how it works now).
  • Create a new host record in an existing hosted zone --> validate that externalDNSName is a valid subdomain of the hosted zone.
  • Create a new hosted zone and host record.
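For the second mode, the record creation amounts to one Route 53 API call. A sketch of the equivalent AWS CLI invocation, with a hypothetical hosted zone ID, name, and EIP (the call itself is commented out since it needs credentials):

```shell
# Change batch creating/updating the A record for externalDNSName.
cat > /tmp/change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "kubernetes.example.com",
      "Type": "A",
      "TTL": 60,
      "ResourceRecords": [{"Value": "52.10.20.30"}]
    }
  }]
}
EOF
# aws route53 change-resource-record-sets \
#   --hosted-zone-id Z123EXAMPLE \
#   --change-batch file:///tmp/change-batch.json
```

A short TTL keeps stale answers from lingering during the rapid create/destroy test loops described above.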

cgag added a commit to cgag/coreos-kubernetes that referenced this issue Apr 12, 2016
Currently it's on the user to create a record, via Route 53 or otherwise,
to make the controller IP accessible via externalDNSName. This commit
adds an option to automatically create a Route 53 record in a given
hosted zone.

Related to: coreos#340, coreos#257
@aaronlevy
Contributor

I'm going to close this as it seems this work is captured by #340 -- please re-open if necessary
