
kubeadm init failed with swap on error #122

Closed
praveenkanna opened this issue Dec 6, 2017 · 12 comments

praveenkanna commented Dec 6, 2017

Firstly, thank you very much for your awesome work, guys. I recently started working on Kubernetes, came across your repository, and kucean has been really helpful in creating my own Kubernetes cluster.

I picked the latest tag, v0.1.6, and started installing the cluster. Then I hit the error below:

fatal: [kube-master]: FAILED! => {
    "changed": true, 
    "cmd": "kubeadm init  --pod-network-cidr 10.244.0.0/16 > /var/log/kubeadm.init.log", 
    "delta": "0:00:00.825112", 
    "end": "2017-12-06 11:35:59.254000", 
    "failed": true, 
    "invocation": {
        "module_args": {
            "_raw_params": "kubeadm init  --pod-network-cidr 10.244.0.0/16 > /var/log/kubeadm.init.log", 
            "_uses_shell": true, 
            "chdir": null, 
            "creates": "/etc/.kubeadm-complete", 
            "executable": null, 
            "removes": null, 
            "warn": true
        }
    }, 
    "rc": 2, 
    "start": "2017-12-06 11:35:58.428888", 
    "stderr": "[preflight] WARNING: Connection to \"https://172.29.123.19:6443\" uses proxy \"http://proxy.esl.cisco.com:80\". If that is not intended, adjust your proxy settings\n[preflight] Some fatal errors occurred:\n\trunning with swap on is not supported. Please disable swap\n[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`", 
    "stderr_lines": [
        "[preflight] WARNING: Connection to \"https://172.29.123.19:6443\" uses proxy \"http://proxy.esl.cisco.com:80\". If that is not intended, adjust your proxy settings", 
        "[preflight] Some fatal errors occurred:", 
        "\trunning with swap on is not supported. Please disable swap", 
        "[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`"
    ], 
    "stdout": "", 
    "stdout_lines": []
}

PLAY RECAP ***********************************************************************************************************************************************************
kube-master                : ok=20   changed=3    unreachable=0    failed=1   
kube-node-1                : ok=15   changed=3    unreachable=0    failed=0   

Can anyone help me get rid of this error and get my Kubernetes cluster up?

Thanks,
Praveen.

praveenkanna changed the title from "install-go role not present and kubeadm init failed with swap on error" to "kubeadm init failed with swap on error" Dec 6, 2017
dougbtv (Member) commented Dec 6, 2017 via email

dougbtv (Member) commented Dec 7, 2017

Shoot -- I might've responded to the wrong issue, but I still appreciate you submitting this one @praveenkanna -- @leifmadsen is going to respond with an actually helpful answer! (Ahh, I was replying to the original title. Glad Leif can help regardless.)

leifmadsen (Contributor) commented

@praveenkanna are you creating your own virtual machines rather than using virthost-setup.yml? If so, the issue would be that the virtual machines you're creating have swap enabled, which is no longer allowed as of Kubernetes 1.8, so you'd need to create the VMs without a swap partition, or use swapoff to disable swap prior to kubeadm init.
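
For reference, a minimal sketch of disabling swap on an already-provisioned CentOS 7 node (run as root on each node; the fstab edit keeps swap off across reboots):

# Turn off all active swap devices immediately
swapoff -a

# Comment out swap entries in /etc/fstab so swap stays disabled after a reboot
sed -i '/\sswap\s/s/^/#/' /etc/fstab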

If you're using the virthost-setup.yml, then that sounds like a bug in the tag, which would actually be kind of shocking to me :)

I'm going to give it a shot in a few minutes, and see if I can reproduce the issue. If so, I'll come back here and let you know. If not, then I'll show you what I ran to get it to operate correctly.

leifmadsen (Contributor) commented

@praveenkanna we just tagged a 0.1.7 release as well, which actually fixes up a bug with the vms.local.generated file (which you would get if you were using the virthost-setup.yml).

I just tried 0.1.6, and the way I'd run it, 0.1.6 is actually broken :) (I had already noticed that and fixed a bug to resolve it.)

When doing the deployment, this is how I would do it:

ansible-playbook -i inventory/virthost virthost-setup.yml
ansible-playbook -i inventory/vms.local.generated kube-install.yml

Then in my inventory/virthost/ directory, I have these two files (with content):

virthost.local

vmhost ansible_host=virthost.management.61will.space ansible_ssh_user=root

[virthost]
vmhost

group_vars/virthost.yml

---
bridge_networking: false
images_directory: /home/images/kubelab
spare_disk_location: /home/images/kubelab
ssh_proxy_enabled: true
ssh_proxy_user: root
ssh_proxy_host: virthost
vm_ssh_key_path: /home/lmadsen/.ssh/id_vm_rsa

If you do it that way, the generated vms.local.generated inventory will let you more easily run against a remote virtual host via an SSH proxy / tunnels.

praveenkanna (Author) commented

@leifmadsen @dougbtv Thanks for your responses, guys. I'm not setting up the VMs using virthost-setup.yml. I have two already-provisioned VMs with CentOS 7 installed; I reference them in vms.local.generated and run the playbooks (ansible-playbook -i inventory/vms.local.generated kube-install.yml). And yes, they do have a swap partition.

I even ran swapoff and tried running the playbook, but I'm still seeing the same issue. I also tried adding Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false" to the kubeadm.conf file, but no luck.

Anyway, I will try the latest tag and will post here if I get any errors.

praveenkanna (Author) commented Dec 8, 2017

@leifmadsen @dougbtv Seems like it is working now with the new tag, but I'm getting this error. Can you please take a look and help me figure out what it is?

fatal: [kube-master]: FAILED! => {
    "changed": true, 
    "cmd": "kubeadm init  --pod-network-cidr 10.244.0.0/16 > /var/log/kubeadm.init.log", 
    "delta": "0:00:30.138666", 
    "end": "2017-12-07 23:04:10.190628", 
    "failed": true, 
    "invocation": {
        "module_args": {
            "_raw_params": "kubeadm init  --pod-network-cidr 10.244.0.0/16 > /var/log/kubeadm.init.log", 
            "_uses_shell": true, 
            "chdir": null, 
            "creates": "/etc/.kubeadm-complete", 
            "executable": null, 
            "removes": null, 
            "warn": true
        }
    }, 
    "rc": 1, 
    "start": "2017-12-07 23:03:40.051962", 
    "stderr": "unable to get URL \"https://dl.k8s.io/release/stable-1.8.txt\": Get https://dl.k8s.io/release/stable-1.8.txt: dial tcp 23.236.58.218:443: i/o timeout", 
    "stderr_lines": [
        "unable to get URL \"https://dl.k8s.io/release/stable-1.8.txt\": Get https://dl.k8s.io/release/stable-1.8.txt: dial tcp 23.236.58.218:443: i/o timeout"
    ], 
    "stdout": "", 
    "stdout_lines": []
}

Then I realized I'm behind a proxy; after passing the proxy settings, I'm seeing this error:

fatal: [kube-master]: FAILED! => {
    "changed": true, 
    "cmd": "kubeadm init  --pod-network-cidr 10.244.0.0/16 > /var/log/kubeadm.init.log", 
    "delta": "0:32:45.673375", 
    "end": "2017-12-07 23:38:38.552774", 
    "failed": true, 
    "invocation": {
        "module_args": {
            "_raw_params": "kubeadm init  --pod-network-cidr 10.244.0.0/16 > /var/log/kubeadm.init.log", 
            "_uses_shell": true, 
            "chdir": null, 
            "creates": "/etc/.kubeadm-complete", 
            "executable": null, 
            "removes": null, 
            "warn": true
        }
    }, 
    "rc": 1, 
    "start": "2017-12-07 23:05:52.879399", 
    "stderr": "[preflight] WARNING: Connection to \"https://172.29.123.20:6443\" uses proxy \"https://proxy.xxx.xxxx.com:80\". If that is not intended, adjust your proxy settings\ncouldn't initialize a Kubernetes cluster", 
    "stderr_lines": [
        "[preflight] WARNING: Connection to \"https://172.29.123.20:6443\" uses proxy \"https://proxy.xxx.xxxx.com:80\". If that is not intended, adjust your proxy settings", 
        "couldn't initialize a Kubernetes cluster"
    ], 
    "stdout": "", 
    "stdout_lines": []
}

leifmadsen (Contributor) commented

@praveenkanna, unfortunately I've not tested kubeadm behind a proxy, so I'm not sure what the expected configuration is, or whether that's a setup that will behave properly at all. At this point I don't think the issue is with the playbooks or the setup of kucean, but rather with local network issues in your environment.

dougbtv (Member) commented Dec 12, 2017

I'm reminded by Leif's response (thanks Leif) that something you might want to try is setting the skip_preflight_checks variable to true (which adds the --skip-preflight-checks flag to the kubeadm init command). E.g.:

ansible-playbook -i inventory/your.inventory -e 'skip_preflight_checks=true' kube-install.yml

Since it's a warning, it might just be a validation by kubeadm which can be skipped (but I haven't used it with a proxy setup either).
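
For the swap case specifically, here's a sketch combining that with the kubelet flag @praveenkanna mentioned, since the kubelet's --fail-swap-on check and kubeadm's preflight check are separate (the drop-in filename here is just an example):

# Give the kubelet the flag via a systemd drop-in (filename is illustrative;
# kubeadm's own 10-kubeadm.conf in the same directory reads $KUBELET_EXTRA_ARGS)
mkdir -p /etc/systemd/system/kubelet.service.d
cat > /etc/systemd/system/kubelet.service.d/90-fail-swap-on.conf <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
EOF
systemctl daemon-reload
systemctl restart kubelet

# kubeadm's own preflight check still needs to be skipped separately
kubeadm init --pod-network-cidr 10.244.0.0/16 --skip-preflight-checks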

dougbtv (Member) commented Dec 12, 2017

Additionally, I was able to dig up a little information about the proxy settings for kubeadm; apparently they come from environment variables.

The author of the proxy functionality for kubeadm posted information in this issue detailing what environment variables are used, and how to use them.

@praveenkanna -- if you figure out how to set the proxy environment variables, we'd certainly love any contributions around that. A pull request would be especially great, but even if you just document what you used, we can add that functionality to our backlog.
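
If it helps, a sketch of the environment-variable approach with placeholder values -- kubeadm, like other Go tooling, reads the standard proxy variables; the no_proxy list matters so traffic to the API server and cluster networks bypasses the proxy:

# Placeholder proxy and addresses -- substitute your own
export http_proxy=http://proxy.example.com:80
export https_proxy=http://proxy.example.com:80
# Keep cluster-internal traffic off the proxy: localhost, the API server IP,
# and the pod/service networks (CIDR matching in no_proxy may depend on the build)
export no_proxy=127.0.0.1,localhost,172.29.123.20,10.244.0.0/16,10.96.0.0/12

kubeadm init --pod-network-cidr 10.244.0.0/16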

leifmadsen removed their assignment Dec 12, 2017
leifmadsen (Contributor) commented

@dougbtv is there any documentation around the preflight check variables? I wonder if we need to open a documentation issue so we can start showing what you can pass into kucean, and the scenarios where you might want to do that.

Of course you can send someone to look through group_vars/all.yml and the roles' defaults files, but it might be nice to centrally document those variables. That comes with the burden of making sure any new variables that get added are documented as well, of course.

We can discuss on IRC and then propose an issue if we decide there is something to do.

dougbtv (Member) commented Dec 12, 2017

+1, we should have some kind of "options overview" for commonly used options. Right now there are just a few scattered examples of when to use variables, and no central place that collects them.

leifmadsen (Contributor) commented

I'm going to close this issue out, as I believe it's either been addressed in a later release, or was a local issue.
