Waiting for kube-system pods to start #1430
Comments
Thanks for the cdk-field-agent attachment.
@Ariestattoo I believe this is an issue we've seen before when installing to localhost/LXD with a ZFS storage backend. You might be able to work around it by switching LXD to the dir storage backend. I will follow up on two points here:
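For reference, a rough sketch of how one might move a localhost deployment onto a dir-backed pool. The pool name `conjure-dir` is my own placeholder, and conjure-up may also prompt you to pick a pool during setup; adjust to your environment:

```shell
# Create a dir-backed storage pool (no zfsutils/btrfs tooling needed):
lxc storage create conjure-dir dir

# Verify the DRIVER column shows "dir" for the new pool:
lxc storage list

# Point the default profile's root disk at the new pool so newly
# created containers (e.g. those conjure-up spins up) land on it:
lxc profile device set default root pool conjure-dir
```

Existing containers stay on their original pool; only new ones pick up the changed profile.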
@Cynerva Thanks for the great insight!
Thanks. Taking a quick glance at the dir-backend archive, it's hitting the same error:
I'm guessing either LXD didn't actually stop using ZFS, or I misdiagnosed the issue and it's not ZFS-related after all. Either way, I don't have anything helpful to offer right now, and I have a lot of work to juggle, so it'll be a few days before I can come back to this. Thanks again for the detailed report, and sorry for the trouble.
I appreciate the time constraints; I'm doing this on my lunch break myself. A couple of questions, if I might:
Afraid not. The fatal error is coming from the kubelet service on the kubernetes-worker units, but you'll need the rest of the cluster (easyrsa, etcd, kubernetes-master, flannel) for kubernetes-worker to get far enough to start kubelet.
Either there's a bug in kubelet (one of the Kubernetes core services), or kubelet is missing a dependency that it needs. I'm guessing the latter, which would make it a bug in the kubernetes-worker charm.
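To dig into the kubelet side yourself, something like the following can pull the failing unit's logs. The systemd unit name `snap.kubelet.daemon` is an assumption for snap-delivered CDK binaries; check the actual name on the worker if it differs:

```shell
# See which kubernetes-worker unit is unhealthy:
juju status kubernetes-worker

# Tail recent kubelet logs on the first worker; unit name
# snap.kubelet.daemon is an assumption -- confirm with e.g.
#   systemctl list-units | grep -i kube
juju ssh kubernetes-worker/0 -- sudo journalctl -u snap.kubelet.daemon -n 100 --no-pager
```

The fatal line near the end of that log is usually what distinguishes a kubelet bug from a missing dependency on the host.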
@Ariestattoo local deployment doesn't work right now (#1426). As a result, I'd suggest setting up your cluster on Ubuntu 16.04 manually.
It does work; you're using btrfs in your linked bug. The problem here seems to be that ZFS is still being used.
So I used lxc to create these pools and then selected the relevant choice when using conjure-up. Is that incorrect? My default pool is a ZFS pool backed by a block device. Where do you see my configuration error? I have several existing controllers and models already created and in use in the ZFS pool. Are you suggesting I run lxd init again instead of creating and selecting the pool manually with lxc?
@battlemidget I've tried both btrfs and ZFS; this information is in the ticket.
@sumlin Yeah, what I'm saying is: don't use those for now (at least until we can figure out why they're giving us trouble) and stick with the dir backend.
@battlemidget oh, thank you, I will.
FYI @battlemidget @sumlin @Ariestattoo I was running into the same issue, and after switching to the dir storage backend the deployment succeeded.
Thanks for the feedback. The Kubernetes folks know there is something going wrong when using a storage backend other than dir and are working to track down the root cause. @Cynerva btrfs tends to be the default if you don't have the ZFS utilities package installed. I think we should talk to the LXD folks as well to see whether btrfs is the right default in these cases.
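For anyone unsure which backend their existing LXD install is actually using, a quick check might look like:

```shell
# List pools and their drivers (dir, btrfs, zfs, ...):
lxc storage list

# Show which pool the default profile's root disk points at:
lxc profile show default
```

If the root disk's pool still reports a zfs or btrfs driver, containers are not actually on dir, regardless of what was selected during setup.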
This resolved it for me. Feel free to close.
Report
I realize this is a non-specific error, but I'm not sure where to investigate further at this point. Any suggestions or insight are much appreciated.
Thank you for trying conjure-up! Before reporting a bug please make sure you've gone through this checklist:
sudo snap refresh conjure-up --edge
? yes
Please provide the output of the following commands:
Please attach tarball of ~/.cache/conjure-up:
conjure-up.tar.gz
Sosreport
200 MB zip file (30 MB as .xz)
What Spell was Selected?
kubernetes-canonical
What provider (aws, maas, localhost, etc)?
localhost
MAAS Users
Which version of MAAS?
Commands ran
Please outline what commands were run to install and execute conjure-up:
conjure-up
Additional Information
cdk-field-agent