
Failed to run command when scale-up[seautil route add ] #2258

Open
4meepo opened this issue Jun 29, 2023 · 1 comment
Labels
kind/bug Something isn't working

Comments


4meepo commented Jun 29, 2023

What happened?

After creating a cluster with a single master, I tried to add a new node to the cluster. The following error occurred:
failed to run command[export PATH="/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin"; seautil route add --host 10.103.97.2 --gateway 192.168.1.201]

Relevant log output?

root@master-01:/home/vagrant# sealer scale-up  --nodes 192.168.1.201 -p 'vagrant'
2023-06-29 16:12:45 [INFO] [scale-up.go:135] start to scale up cluster

2023-06-29 16:12:46 [INFO] [installer.go:456] The cri is docker, cluster runtime type is kubernetes


2023-06-29 16:12:47 [INFO] [hook.go:138] start to run hook(node_disk-init) on host([192.168.1.201])

2023-06-29 16:12:47 [INFO] [hook.go:194] start to run hook on host 192.168.1.201

+ export EtcdDevice=
+ EtcdDevice=
+ bash scripts/install-lvm.sh
+ vgcreate --version
  LVM version:     2.03.07(2) (2019-11-30)
  Library version: 1.02.167 (2019-11-30)
  Driver version:  4.41.0
  Configuration:   ./configure --build=x86_64-linux-gnu --prefix=/usr --includedir=${prefix}/include --mandir=${prefix}/share/man --infodir=${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --disable-silent-rules --libdir=${prefix}/lib/x86_64-linux-gnu --libexecdir=${prefix}/lib/x86_64-linux-gnu --runstatedir=/run --disable-maintainer-mode --disable-dependency-tracking --exec-prefix= --bindir=/bin --libdir=/lib/x86_64-linux-gnu --sbindir=/sbin --with-usrlibdir=/usr/lib/x86_64-linux-gnu --with-optimisation=-O2 --with-cache=internal --with-device-uid=0 --with-device-gid=6 --with-device-mode=0660 --with-default-pid-dir=/run --with-default-run-dir=/run/lvm --with-default-locking-dir=/run/lock/lvm --with-thin=internal --with-thin-check=/usr/sbin/thin_check --with-thin-dump=/usr/sbin/thin_dump --with-thin-repair=/usr/sbin/thin_repair --enable-applib --enable-blkid_wiping --enable-cmdlib --enable-dmeventd --enable-dbus-service --enable-lvmlockd-dlm --enable-lvmlockd-sanlock --enable-lvmpolld --enable-notify-dbus --enable-pkgconfig --enable-readline --enable-udev_rules --enable-udev_sync
+ exit 0
+ bash scripts/disk_init_v2.sh
+ storageDev=
+ etcdDev=
+ container_runtime_size=
+ kubelet_size=
+ file_system=
+ container_runtime=
+ '[' '' == '' ']'
+ container_runtime=docker
++ echo
++ grep '&'
+ containAnd=
+ NEW_IFS=,
+ '[' '' '!=' '' ']'
+ '[' -z '' ']'
+ file_system=ext4
+ utils_info 'set file system to default value - ext4'
+ echo -e '\033[1;32mset file system to default value - ext4\033[0m'
set file system to default value - ext4
+ utils_shouldMkFs
+ '[' '' '!=' '' ']'
+ return 1
+ utils_shouldMkFs
+ '[' '' '!=' '' ']'
+ return 1
+ utils_info 'device is empty! exit...'
+ echo -e '\033[1;32mdevice is empty! exit...\033[0m'
device is empty! exit...
+ exit 0
2023-06-29 16:12:47 [INFO] [hook.go:138] start to run hook(pre_init_host) on host([192.168.1.201])

2023-06-29 16:12:47 [INFO] [hook.go:194] start to run hook on host 192.168.1.201

+ bash scripts/pre_init_host.sh
+ '[' '' '!=' true ']'
+ set_logrotate
+ cat
+ '[' '!' -f /etc/cron.hourly/logrotate ']'
+ chmod +x /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/conntrack /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/etcdctl /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/helm /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/kubeadm /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/kubectl /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/kubelet /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/mc /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/nerdctl /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/seautil /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/trident /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/velero
+ cp -f /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/conntrack /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/etcdctl /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/helm /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/kubeadm /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/kubectl /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/kubelet /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/mc /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/nerdctl /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/seautil /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/trident /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../bin/velero /usr/bin/
+ rm -f /etc/sysctl.d/ack-d-enable-ipv6.conf
+ configure_sysctl 'net.ipv6.conf.all.disable_ipv6 = 0' /etc/sysctl.d/ack-d-enable-ipv6.conf
+ local 'config=net.ipv6.conf.all.disable_ipv6 = 0'
+ local dest=/etc/sysctl.d/ack-d-enable-ipv6.conf
+ echo 'net.ipv6.conf.all.disable_ipv6 = 0'
+ grep 'net.ipv6.conf.all.disable_ipv6 = 0' /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 0
+ sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-sysctl.conf ...
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
* Applying /etc/sysctl.d/ack-d-enable-ipv6.conf ...
net.ipv6.conf.all.disable_ipv6 = 0
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
* Applying /usr/lib/sysctl.d/protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.conf ...
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
+ configure_sysctl 'net.ipv6.conf.all.forwarding = 1' /etc/sysctl.d/ack-d-enable-ipv6.conf
+ local 'config=net.ipv6.conf.all.forwarding = 1'
+ local dest=/etc/sysctl.d/ack-d-enable-ipv6.conf
+ echo 'net.ipv6.conf.all.forwarding = 1'
+ grep 'net.ipv6.conf.all.forwarding = 1' /etc/sysctl.conf
net.ipv6.conf.all.forwarding = 1
+ sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-sysctl.conf ...
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
* Applying /etc/sysctl.d/ack-d-enable-ipv6.conf ...
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
* Applying /usr/lib/sysctl.d/protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.conf ...
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
+ KUBELET_EXTRA_ARGS=KUBELET_EXTRA_ARGS=--node-labels=ack-d.alibabacloud.com/managed-node=true
+ '[' 192.168.1.201 = '' ']'
+ KUBELET_EXTRA_ARGS='KUBELET_EXTRA_ARGS=--node-labels=ack-d.alibabacloud.com/managed-node=true --node-ip=192.168.1.201'
+ family_of_ip_need_get=6
+ '[' '' == 6 ']'
+ chmod +x ./bin/trident
+ cp -f ./bin/trident /usr/bin/trident
++ trident get-default-route-ip --ip-family 6
Trident version 1.14.3, Build: d1412075 go1.17.2, Build time: 2023-03-31-11-37-07, EnableLicense: false, Log file: /var/lib/sealer/data/default-kubernetes-cluster/rootfs/trident_log_2023-06-29T16-12-56+08-00
failed to choose an ipv6 ip from default route
+ anotherIP=
+ '[' 1 -eq 0 ']'
+ '[' '' = true ']'
++ arch
+ ARCH=x86_64
+ '[' x86_64 = x86_64 ']'
+ grep -i nvidia
+ lspci
+ echo KUBELET_EXTRA_ARGS=--node-labels=ack-d.alibabacloud.com/managed-node=true --node-ip=192.168.1.201
scripts/pre_init_host.sh: line 106: /etc/sysconfig/kubelet: No such file or directory
+ echo KUBELET_EXTRA_ARGS=--node-labels=ack-d.alibabacloud.com/managed-node=true --node-ip=192.168.1.201
++ trident get-accept-ra-ifname
+ acceptRaIfname=
+ '[' 0 == 0 ']'
+ '[' '' '!=' '' ']'
+ DOCKER_VERSION=19.03.15
+ storage=/var/lib/docker
+ mkdir -p /var/lib/docker
+ utils_command_exists docker
+ command -v docker
+ disable_selinux
+ '[' -s /etc/selinux/config ']'
+ systemctl daemon-reload
+ systemctl restart docker.service
+ check_docker_valid
+ docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 1
 Server Version: 19.03.15
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: ea765aba0d05254012b0b9e595e995c09186427f
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-146-generic
 Operating System: Ubuntu 20.04.6 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.585GiB
 Name: node-01
 ID: 3ZBF:EJUC:4NRA:AREL:H7J5:XIVS:ZOPK:RRDY:7ITK:4QDY:C2NO:WIQB
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  0.0.0.0/0
  ::/0
  127.0.0.0/8
 Live Restore Enabled: true
 Product License: Community Engine

WARNING: No swap limit support
++ docker info --format '{{json .ServerVersion}}'
++ tr -d '"'
+ dockerVersion=19.03.15
+ '[' 19.03.15 '!=' 19.03.15 ']'
+ mkdir -p /etc/sealerio/cri/
+ echo /var/run/dockershim.sock
2023-06-29 16:12:58 [INFO] [local.go:121] will install local private registry configuration on [192.168.1.201]


[copying files to 192.168.1.201] 100% [==================================================] (1/1, 3 it/s)
2023-06-29 16:13:01 [INFO] [init.go:190] join command is: kubeadm join  apiserver.cluster.local:6443 --token l5adkh.lgx9894h4yjdzwl5 --discovery-token-ca-cert-hash sha256:a8d47217a4ae352c0823a14660111b0f64219900abc6d82c88ce9d56a350d052


+ disable_firewalld
++ utils_get_distribution
++ lsb_dist=
++ '[' -r /etc/os-release ']'
+++ . /etc/os-release
++++ NAME=Ubuntu
++++ VERSION='20.04.6 LTS (Focal Fossa)'
++++ ID=ubuntu
++++ ID_LIKE=debian
++++ PRETTY_NAME='Ubuntu 20.04.6 LTS'
++++ VERSION_ID=20.04
++++ HOME_URL=https://www.ubuntu.com/
++++ SUPPORT_URL=https://help.ubuntu.com/
++++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
++++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
++++ VERSION_CODENAME=focal
++++ UBUNTU_CODENAME=focal
+++ echo ubuntu
++ lsb_dist=ubuntu
++ echo ubuntu
+ lsb_dist=ubuntu
++ echo ubuntu
++ tr '[:upper:]' '[:lower:]'
+ lsb_dist=ubuntu
+ case "$lsb_dist" in
+ command -v ufw
+ ufw disable
Firewall stopped and disabled on system startup
+ copy_bins
+ RPM_DIR=/var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../rpm/
+ for rpm in socat kubernetes-cni
+ rpm -qa
init-kube.sh: line 39: rpm: command not found
+ grep socat
+ rpm -ivh --force --nodeps /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../rpm//socat-1.7.3.2-2.el7.x86_64.rpm
init-kube.sh: line 40: rpm: command not found
+ for rpm in socat kubernetes-cni
+ grep kubernetes-cni
+ rpm -qa
init-kube.sh: line 39: rpm: command not found
+ rpm -ivh --force --nodeps /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../rpm//kubernetes-cni.x86_64.rpm
init-kube.sh: line 40: rpm: command not found
+ chmod -R 755 ../bin/conntrack ../bin/etcdctl ../bin/helm ../bin/kubeadm ../bin/kubectl ../bin/kubelet ../bin/mc ../bin/nerdctl ../bin/seautil ../bin/trident ../bin/velero
+ chmod 644 ../bin
+ cp ../bin/conntrack ../bin/etcdctl ../bin/helm ../bin/kubeadm ../bin/kubectl ../bin/kubelet ../bin/mc ../bin/nerdctl ../bin/seautil ../bin/trident ../bin/velero /usr/bin
+ cp ../scripts/kubelet-pre-start.sh /usr/bin
+ chmod +x /usr/bin/kubelet-pre-start.sh
+ copy_kubelet_service
+ mkdir -p /etc/systemd/system
+ cp ../etc/kubelet.service /etc/systemd/system/
+ '[' -d /etc/systemd/system/kubelet.service.d ']'
+ cp ../etc/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/
+ '[' -d /var/lib/kubelet ']'
+ /usr/bin/kubelet-pre-start.sh
/usr/bin/kubelet-pre-start.sh: line 13: getenforce: command not found
/usr/bin/kubelet-pre-start.sh: line 14: setenforce: command not found
# set by ack-distro
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
fs.may_detach_mounts = 1
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-sysctl.conf ...
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
* Applying /etc/sysctl.d/ack-d-enable-ipv6.conf ...
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
* Applying /usr/lib/sysctl.d/protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.conf ...
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.rp_filter = 0
+ systemctl enable kubelet
+ bash /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/install-s3fs.sh
+ s3fs --version
/var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/install-s3fs.sh: line 11: s3fs: command not found
+ utils_os_env
++ cat /etc/issue
++ grep -i ubuntu
++ wc -l
+ ubu=1
++ grep -i debian
++ cat /etc/issue
++ wc -l
+ debian=0
++ cat /etc/centos-release
++ grep CentOS
++ wc -l
cat: /etc/centos-release: No such file or directory
+ cet=0
++ cat /etc/redhat-release
++ grep 'Red Hat'
cat: /etc/redhat-release: No such file or directory
++ wc -l
+ redhat=0
++ cat /etc/redhat-release
++ grep Alibaba
++ wc -l
cat: /etc/redhat-release: No such file or directory
+ alios=0
++ cat /etc/kylin-release
++ wc -l
cat: /etc/kylin-release: No such file or directory
++ grep -E Kylin
+ kylin=0
++ cat /etc/anolis-release
++ grep -E Anolis
++ wc -l
cat: /etc/anolis-release: No such file or directory
+ anolis=0
+ '[' 1 == 1 ']'
+ export OS=Ubuntu
+ OS=Ubuntu
+ case "$OS" in
+ echo -e 'Not support get OS version of Ubuntu'
Not support get OS version of Ubuntu
+ [[ Ubuntu == \C\e\n\t\O\S ]]
+ [[ Ubuntu == \A\n\o\l\i\s ]]
+ [[ Ubuntu == \A\l\i\O\S ]]
+ [[ Ubuntu == \R\e\d\H\a\t ]]
+ '[' '' == '' ']'
+ echo 'install s3fs now only support Redhat-like OS, skip install it'
install s3fs now only support Redhat-like OS, skip install it
+ exit 0
+ bash /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/nvidia-docker.sh
+ RPM_DIR=/var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts/../rpm/nvidia
+ public::nvidia::check_has_gpu /var/lib/sealer/data/default-kubernetes-cluster/rootfs/scripts
+ utils_arch_env
++ uname -m
+ ARCH=x86_64
+ case $ARCH in
+ ARCH=amd64
+ public::nvidia::check
+ '[' amd64 '!=' amd64 ']'
+ which nvidia-smi
+ return 1
+ return 1
+ exit 0
2023-06-29 16:13:10 [INFO] [join_nodes.go:54] start to join 192.168.1.201 as worker

Usage:
  sealer scale-up [flags]

Examples:

scale-up cluster:
  sealer scale-up --masters 192.168.0.1 --nodes 192.168.0.2 -p 'Sealer123'
  sealer scale-up --masters 192.168.0.1-192.168.0.3 --nodes 192.168.0.4-192.168.0.6 -p 'Sealer123'


Flags:
  -e, --env strings        set custom environment variables
  -h, --help               help for scale-up
      --ignore-cache       whether ignore cache when distribute sealer image, default is false.
  -m, --masters string     set Count or IPList to masters
  -n, --nodes string       set Count or IPList to nodes
  -p, --passwd string      set cloud provider or baremetal server password
      --pk string          set baremetal server private key (default "/root/.ssh/id_rsa")
      --pk-passwd string   set baremetal server private key password
      --port uint16        set the sshd service port number for the server (default port: 22) (default 22)
  -u, --user string        set baremetal server username (default "root")

Global Flags:
      --color string               set the log color mode, the possible values can be [never always] (default "always")
      --config string              config file of sealer tool (default is $HOME/.sealer.json)
  -d, --debug                      turn on debug mode
      --hide-path                  hide the log path
      --hide-time                  hide the log time
      --log-to-file                write log message to disk (default true)
  -q, --quiet                      silence the usage when fail
      --remote-logger-url string   remote logger url, if not empty, will send log to this url
      --task-name string           task name which will embedded in the remote logger header, only valid when --remote-logger-url is set

2023-06-29 16:13:11 [ERROR] [root.go:75] sealer-v0.10.0: failed to check multi network: [ssh][192.168.1.201]failed to run command[export PATH="/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin"; seautil route add --host 10.103.97.2 --gateway 192.168.1.201]:

What did you expect to happen?

Add a new node to my cluster successfully.

How to reproduce it (as minimally and precisely as possible)?

ClusterFile

apiVersion: sealer.io/v2
kind: Cluster
metadata:
  name: default-kubernetes-cluster
spec:
  image:  ack-agility-registry.cn-shanghai.cr.aliyuncs.com/ecp_builder/ackdistro:v1-22-15-ack-10
  env:
    - SkipPreflight=true
    - Network=calico
    - EnableLocalDNSCache=false
  ssh:
    user: root
    passwd: vagrant
  hosts:
    - ips: [ 192.168.1.200 ]
      roles: [ master ]

I ran `sealer run -f clusterfile` successfully. Then I wanted to add a new node.
When I ran `sealer scale-up --nodes 192.168.1.201 -p 'vagrant'`, it failed with the logs above.

Anything else we need to know?

No response

What version of Sealer are you using?

{"gitVersion":"v0.10.0","gitCommit":"d83ead0","buildDate":"2023-05-16 02:39:03","goVersion":"go1.17.13","compiler":"gc","platform":"linux/amd64"}

What is your OS environment?

Ubuntu 20.04

What is the Kernel version?

Linux master-01 5.4.0-146-generic #163-Ubuntu SMP Fri Mar 17 18:26:02 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Other environment you want to tell us?

  • Install tools: Vagrant
  • Others:
    My Vagrantfile content:
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|

  config.vm.define "master-01" do |master|
    master.vm.box = "generic/ubuntu2004"
    master.vm.hostname = "master-01"
    master.vm.network "private_network", ip: "192.168.1.200"
    master.vm.provider "virtualbox" do |vb|
      vb.cpus = 4
      vb.memory = "8000"
      vb.customize ['modifyvm', :id, '--macaddress1', '080027000051']
      vb.customize ['modifyvm', :id, '--natnet1', '10.0.51.0/24']
    end
    master.vm.provision "shell", inline: <<-SHELL
      sudo sed -i s@/archive.ubuntu.com/@/mirrors.ustc.edu.cn/@g /etc/apt/sources.list
      sudo apt-get update
      sudo apt-get install ntp -y
      sudo sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config
      sudo systemctl restart sshd
      wget http://ack-a-aecp.oss-cn-hangzhou.aliyuncs.com/ack-distro/sealer/sealer-0.9.4-beta2-linux-amd64.tar.gz -O sealer-latest-linux-amd64.tar.gz && sudo tar -xvf sealer-latest-linux-amd64.tar.gz -C /usr/bin
    SHELL
  end

  config.vm.define "worker-01" do |worker|
    worker.vm.box = "generic/ubuntu2004"
    worker.vm.hostname = "worker-01"
    worker.vm.network "private_network", ip: "192.168.1.201"
    worker.vm.provider "virtualbox" do |vb|
      vb.cpus = 4
      vb.memory = "8000"
      vb.customize ['modifyvm', :id, '--macaddress1', '080027000052']
      vb.customize ['modifyvm', :id, '--natnet1', '10.0.52.0/24']
    end
    worker.vm.provision "shell", inline: <<-SHELL
      sudo sed -i s@/archive.ubuntu.com/@/mirrors.ustc.edu.cn/@g /etc/apt/sources.list
      sudo apt-get update
      sudo apt-get install ntp -y
      sudo sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config
      sudo systemctl restart sshd
    SHELL
  end
end
@4meepo 4meepo added the kind/bug Something isn't working label Jun 29, 2023
@4meepo 4meepo changed the title Failed to run command[seautil route add ] Failed to run command when scale-up[seautil route add ] Jun 29, 2023
@kakaZhou719 (Member) commented

@4meepo, on the 192.168.1.201 node, rerun `seautil route add --host 10.103.97.2 --gateway 192.168.1.201` and see what happens.
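To go with the suggestion above, here is a minimal debugging sketch for the worker node. It assumes (not confirmed from sealer's source) that `seautil route add` manipulates the kernel routing table the way iproute2 does; the `ip route add` equivalent in the comments is an assumption, not sealer's documented behavior.

```shell
# Hypothetical debugging steps on 192.168.1.201.
# Step 1 is read-only and safe; it prints nothing if no route to the VIP exists.
if command -v ip >/dev/null 2>&1; then
  ip route show 10.103.97.2
fi

# Step 2: re-run the exact command sealer executed, to capture its error output:
#   seautil route add --host 10.103.97.2 --gateway 192.168.1.201

# Step 3: assumed iproute2 equivalent of step 2 (requires root; run only if
# seautil keeps failing, to see whether the kernel itself rejects the route):
#   ip route add 10.103.97.2 via 192.168.1.201
```

If step 3 fails with something like "Nexthop has invalid gateway", the node cannot reach 192.168.1.201 on a directly connected interface, which would point at the Vagrant private-network setup rather than sealer.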
