"ensure CRDs are installed first" - no matches for kind "InitConfiguration" / "ClusterConfiguration" / "KubeProxyConfiguration" #662

Open
abctaylor opened this issue Nov 6, 2023 · 0 comments

Describe the Bug

When trying to apply the config with kubectl apply -f /etc/kubernetes/config.yaml (the file is generated by this module), the error shown under Additional Context is printed, reporting no matches for kind "InitConfiguration", "ClusterConfiguration", and "KubeProxyConfiguration".

Expected Behavior

The config should apply without errors.

Steps to Reproduce

Install the module
Set up a running cluster
Try to apply the generated YAML (see the command sketch below)
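
A minimal reproduction sketch, assuming the module has already written the config to /etc/kubernetes/config.yaml on the control-plane node:

# apply the file rendered by this module
kubectl apply -f /etc/kubernetes/config.yaml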

Environment

  • kubeadm.k8s.io/v1beta3
  • RHEL 8
  • Kubernetes 1.28

Additional Context

The error:

[root@kube1-lon ~]# k apply -f /etc/kubernetes/config.yaml
resource mapping not found for name: "" namespace: "" from "/etc/kubernetes/config.yaml": no matches for kind "InitConfiguration" in version "kubeadm.k8s.io/v1beta3"
ensure CRDs are installed first
resource mapping not found for name: "" namespace: "" from "/etc/kubernetes/config.yaml": no matches for kind "ClusterConfiguration" in version "kubeadm.k8s.io/v1beta3"
ensure CRDs are installed first
resource mapping not found for name: "" namespace: "" from "/etc/kubernetes/config.yaml": no matches for kind "KubeProxyConfiguration" in version "kubeproxy.config.k8s.io/v1alpha1"
ensure CRDs are installed first
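
For reference, these kinds are kubeadm and kube-proxy configuration types read from the file itself rather than resources served by the API server, which appears to be why kubectl reports no resource mapping for them. A sketch of how kubeadm consumes the same file (assuming kubeadm 1.28 on this node):

# consumed directly by kubeadm at cluster bootstrap time, not via the API server
kubeadm init --config /etc/kubernetes/config.yaml
# newer kubeadm releases can also check the file without applying anything
kubeadm config validate --config /etc/kubernetes/config.yaml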

The YAML itself:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 9****q.9************y
  ttl: 24h
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.2.42.10
  bindPort: 6443
nodeRegistration:
  name: kube1-lon
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  kubeletExtraArgs:
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:
  extraArgs:
  extraVolumes:
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
controlPlaneEndpoint: "10.2.42.10:6443"
controllerManager:
  extraArgs:
  extraVolumes:
scheduler:
etcd:
    external:
        caFile: /etc/kubernetes/pki/etcd/ca.crt
        certFile: /etc/kubernetes/pki/etcd/client.crt
        endpoints:
          - https://10.2.42.10:2379
          - https://10.2.42.11:2379
        keyFile: /etc/kubernetes/pki/etcd/client.key
imageRepository:  registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: v1.28.2
networking:
  dnsDomain: k8s.****.*****.net
  podSubnet: 10.3.128.0/18
  serviceSubnet: 10.3.192.0/20
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 5
clusterCIDR: 10.3.128.0/18
configSyncPeriod: 15m0s
conntrack:
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "iptables"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
@abctaylor abctaylor added the bug label Nov 6, 2023