
Some IPv4 connectivity tests still run even with ipv4.enabled=false #2529

Open
2 of 3 tasks
george-zubrienko opened this issue Apr 30, 2024 · 1 comment
Labels: kind/bug (Something isn't working) · kind/community-report (This was reported by a user in the Cilium community, e.g. via Slack) · needs/triage (This issue requires triaging to establish the root cause)


Is there an existing issue for this?

  • I have searched the existing issues

What happened?

Running, for example:

cilium connectivity test --test all-entities-deny

Cilium is installed via the Helm chart (through Terraform) in chaining mode with the AWS VPC CNI, on an IPv6 dual-stack network where IPv6 is the primary protocol (private subnets are IPv6-only; public subnets are dual-stack for backwards compatibility).
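For reference, the settings most relevant to this report, excerpted from the full values dump below (key names exactly as they appear there):

```yaml
# Excerpt of the relevant values: IPv4 is disabled, IPv6 is enabled,
# and Cilium chains onto the AWS VPC CNI in native routing mode.
ipv4:
  enabled: false
ipv6:
  enabled: true
cni:
  chainingMode: aws-cni
  exclusive: false
routingMode: native
```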

All values used:

MTU: 0
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          k8s-app: cilium
      topologyKey: kubernetes.io/hostname
agent: true
agentNotReadyTaintKey: node.cilium.io/agent-not-ready
aksbyocni:
  enabled: false
alibabacloud:
  enabled: false
annotateK8sNode: false
annotations: {}
apiRateLimit: null
authentication:
  enabled: false
  gcInterval: 5m0s
  mutual:
    connectTimeout: 5s
    port: 4250
    spire:
      adminSocketPath: /run/spire/sockets/admin.sock
      agentSocketPath: /run/spire/sockets/agent/agent.sock
      annotations: {}
      connectionTimeout: 30s
      enabled: false
      install:
        agent:
          affinity: {}
          annotations: {}
          image:
            digest: sha256:99405637647968245ff9fe215f8bd2bd0ea9807be9725f8bf19fe1b21471e52b
            override: null
            pullPolicy: IfNotPresent
            repository: ghcr.io/spiffe/spire-agent
            tag: 1.8.5
            useDigest: true
          labels: {}
          nodeSelector: {}
          podSecurityContext: {}
          securityContext: {}
          serviceAccount:
            create: true
            name: spire-agent
          skipKubeletVerification: true
          tolerations:
          - effect: NoSchedule
            key: node.kubernetes.io/not-ready
          - effect: NoSchedule
            key: node-role.kubernetes.io/master
          - effect: NoSchedule
            key: node-role.kubernetes.io/control-plane
          - effect: NoSchedule
            key: node.cloudprovider.kubernetes.io/uninitialized
            value: "true"
          - key: CriticalAddonsOnly
            operator: Exists
        enabled: true
        existingNamespace: false
        initImage:
          digest: sha256:223ae047b1065bd069aac01ae3ac8088b3ca4a527827e283b85112f29385fb1b
          override: null
          pullPolicy: IfNotPresent
          repository: docker.io/library/busybox
          tag: 1.36.1
          useDigest: true
        namespace: cilium-spire
        server:
          affinity: {}
          annotations: {}
          ca:
            keyType: rsa-4096
            subject:
              commonName: Cilium SPIRE CA
              country: US
              organization: SPIRE
          dataStorage:
            accessMode: ReadWriteOnce
            enabled: true
            size: 1Gi
            storageClass: null
          image:
            digest: sha256:28269265882048dcf0fed32fe47663cd98613727210b8d1a55618826f9bf5428
            override: null
            pullPolicy: IfNotPresent
            repository: ghcr.io/spiffe/spire-server
            tag: 1.8.5
            useDigest: true
          initContainers: []
          labels: {}
          nodeSelector: {}
          podSecurityContext: {}
          securityContext: {}
          service:
            annotations: {}
            labels: {}
            type: ClusterIP
          serviceAccount:
            create: true
            name: spire-server
          tolerations: []
      serverAddress: null
      trustDomain: spiffe.cilium
  queueSize: 1024
  rotatedIdentitiesQueueSize: 1024
autoDirectNodeRoutes: false
azure:
  enabled: false
bandwidthManager:
  bbr: false
  enabled: false
bgp:
  announce:
    loadbalancerIP: false
    podCIDR: false
  enabled: false
bgpControlPlane:
  enabled: false
  secretsNamespace:
    create: false
    name: kube-system
bpf:
  authMapMax: null
  autoMount:
    enabled: true
  ctAnyMax: null
  ctTcpMax: null
  hostLegacyRouting: null
  lbExternalClusterIP: false
  lbMapMax: 65536
  mapDynamicSizeRatio: null
  masquerade: null
  monitorAggregation: medium
  monitorFlags: all
  monitorInterval: 5s
  natMax: null
  neighMax: null
  nodeMapMax: null
  policyMapMax: 16384
  preallocateMaps: false
  root: /sys/fs/bpf
  tproxy: null
  vlanBypass: null
bpfClockProbe: false
certgen:
  affinity: {}
  annotations:
    cronJob: {}
    job: {}
  extraVolumeMounts: []
  extraVolumes: []
  image:
    digest: sha256:5586de5019abc104637a9818a626956cd9b1e827327b958186ec412ae3d5dea6
    override: null
    pullPolicy: IfNotPresent
    repository: quay.io/cilium/certgen
    tag: v0.1.11
    useDigest: true
  podLabels: {}
  tolerations: []
  ttlSecondsAfterFinished: 1800
cgroup:
  autoMount:
    enabled: true
    resources: {}
  hostRoot: /run/cilium/cgroupv2
cleanBpfState: false
cleanState: false
cluster:
  id: 0
  name: default
clustermesh:
  annotations: {}
  apiserver:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              k8s-app: clustermesh-apiserver
          topologyKey: kubernetes.io/hostname
    etcd:
      init:
        extraArgs: []
        extraEnv: []
        resources: {}
      lifecycle: {}
      resources: {}
      securityContext: {}
    extraArgs: []
    extraEnv: []
    extraVolumeMounts: []
    extraVolumes: []
    image:
      digest: sha256:3fadf85d2aa0ecec09152e7e2d57648bda7e35bdc161b25ab54066dd4c3b299c
      override: null
      pullPolicy: IfNotPresent
      repository: quay.io/cilium/clustermesh-apiserver
      tag: v1.15.4
      useDigest: true
    kvstoremesh:
      enabled: false
      extraArgs: []
      extraEnv: []
      extraVolumeMounts: []
      lifecycle: {}
      resources: {}
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
          - ALL
    lifecycle: {}
    metrics:
      enabled: true
      etcd:
        enabled: true
        mode: basic
        port: 9963
      kvstoremesh:
        enabled: true
        port: 9964
      port: 9962
      serviceMonitor:
        annotations: {}
        enabled: false
        etcd:
          interval: 10s
          metricRelabelings: null
          relabelings: null
        interval: 10s
        kvstoremesh:
          interval: 10s
          metricRelabelings: null
          relabelings: null
        labels: {}
        metricRelabelings: null
        relabelings: null
    nodeSelector:
      kubernetes.io/os: linux
    podAnnotations: {}
    podDisruptionBudget:
      enabled: false
      maxUnavailable: 1
      minAvailable: null
    podLabels: {}
    podSecurityContext: {}
    priorityClassName: ""
    replicas: 1
    resources: {}
    securityContext: {}
    service:
      annotations: {}
      externalTrafficPolicy: null
      internalTrafficPolicy: null
      nodePort: 32379
      type: NodePort
    terminationGracePeriodSeconds: 30
    tls:
      admin:
        cert: ""
        key: ""
      authMode: legacy
      auto:
        certManagerIssuerRef: {}
        certValidityDuration: 1095
        enabled: true
        method: helm
      client:
        cert: ""
        key: ""
      remote:
        cert: ""
        key: ""
      server:
        cert: ""
        extraDnsNames: []
        extraIpAddresses: []
        key: ""
    tolerations: []
    topologySpreadConstraints: []
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 1
      type: RollingUpdate
  config:
    clusters: []
    domain: mesh.cilium.io
    enabled: false
  maxConnectedClusters: 255
  useAPIServer: false
cni:
  binPath: /opt/cni/bin
  chainingMode: aws-cni
  chainingTarget: null
  confFileMountPath: /tmp/cni-configuration
  confPath: /etc/cni/net.d
  configMapKey: cni-config
  customConf: false
  exclusive: false
  hostConfDirMountPath: /host/etc/cni/net.d
  install: true
  logFile: /var/run/cilium/cilium-cni.log
  resources:
    requests:
      cpu: 100m
      memory: 10Mi
  uninstall: false
conntrackGCInterval: ""
conntrackGCMaxInterval: ""
containerRuntime:
  integration: none
crdWaitTimeout: ""
customCalls:
  enabled: false
daemon:
  allowedConfigOverrides: null
  blockedConfigOverrides: null
  configSources: null
  runPath: /var/run/cilium
dashboards:
  annotations: {}
  enabled: false
  label: grafana_dashboard
  labelValue: "1"
  namespace: null
debug:
  enabled: false
  verbose: null
disableEndpointCRD: false
dnsPolicy: ""
dnsProxy:
  dnsRejectResponseCode: refused
  enableDnsCompression: true
  endpointMaxIpPerHostname: 50
  idleConnectionGracePeriod: 0s
  maxDeferredConnectionDeletes: 10000
  minTtl: 0
  preCache: ""
  proxyPort: 0
  proxyResponseMaxDelay: 100ms
egressGateway:
  enabled: false
  installRoutes: false
  reconciliationTriggerInterval: 1s
enableCiliumEndpointSlice: false
enableCriticalPriorityClass: true
enableIPv4BIGTCP: false
enableIPv4Masquerade: false
enableIPv6BIGTCP: false
enableIPv6Masquerade: false
enableK8sTerminatingEndpoint: true
enableMasqueradeRouteSource: false
enableRuntimeDeviceDetection: false
enableXTSocketFallback: true
encryption:
  enabled: false
  interface: ""
  ipsec:
    interface: ""
    keyFile: ""
    keyRotationDuration: 5m
    keyWatcher: true
    mountPath: ""
    secretName: ""
  keyFile: keys
  mountPath: /etc/ipsec
  nodeEncryption: false
  secretName: cilium-ipsec-keys
  strictMode:
    allowRemoteNodeIdentities: false
    cidr: ""
    enabled: false
  type: ipsec
  wireguard:
    persistentKeepalive: 0s
    userspaceFallback: false
endpointHealthChecking:
  enabled: true
endpointRoutes:
  enabled: true
endpointStatus:
  enabled: false
  status: ""
eni:
  awsEnablePrefixDelegation: false
  awsReleaseExcessIPs: false
  ec2APIEndpoint: ""
  enabled: false
  eniTags: {}
  gcInterval: ""
  gcTags: {}
  iamRole: ""
  instanceTagsFilter: []
  subnetIDsFilter: []
  subnetTagsFilter: []
  updateEC2AdapterLimitViaAPI: true
envoy:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cilium.io/no-schedule
            operator: NotIn
            values:
            - "true"
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            k8s-app: cilium
        topologyKey: kubernetes.io/hostname
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            k8s-app: cilium-envoy
        topologyKey: kubernetes.io/hostname
  annotations: {}
  connectTimeoutSeconds: 2
  dnsPolicy: null
  enabled: false
  extraArgs: []
  extraContainers: []
  extraEnv: []
  extraHostPathMounts: []
  extraVolumeMounts: []
  extraVolumes: []
  healthPort: 9878
  idleTimeoutDurationSeconds: 60
  image:
    digest: sha256:d52f476c29a97c8b250fdbfbb8472191a268916f6a8503671d0da61e323b02cc
    override: null
    pullPolicy: IfNotPresent
    repository: quay.io/cilium/cilium-envoy
    tag: v1.27.4-21905253931655328edaacf3cd16aeda73bbea2f
    useDigest: true
  livenessProbe:
    failureThreshold: 10
    periodSeconds: 30
  log:
    format: '[%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v'
    path: ""
  maxConnectionDurationSeconds: 0
  maxRequestsPerConnection: 0
  nodeSelector:
    kubernetes.io/os: linux
  podAnnotations: {}
  podLabels: {}
  podSecurityContext: {}
  priorityClassName: null
  prometheus:
    enabled: true
    port: "9964"
    serviceMonitor:
      annotations: {}
      enabled: false
      interval: 10s
      labels: {}
      metricRelabelings: null
      relabelings:
      - replacement: ${1}
        sourceLabels:
        - __meta_kubernetes_pod_node_name
        targetLabel: node
  readinessProbe:
    failureThreshold: 3
    periodSeconds: 30
  resources: {}
  rollOutPods: false
  securityContext:
    capabilities:
      envoy:
      - NET_ADMIN
      - SYS_ADMIN
    privileged: false
    seLinuxOptions:
      level: s0
      type: spc_t
  startupProbe:
    failureThreshold: 105
    periodSeconds: 2
  terminationGracePeriodSeconds: 1
  tolerations:
  - operator: Exists
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 2
    type: RollingUpdate
envoyConfig:
  enabled: false
  secretsNamespace:
    create: true
    name: cilium-secrets
etcd:
  annotations: {}
  clusterDomain: cluster.local
  enabled: false
  endpoints:
  - https://CHANGE-ME:2379
  extraArgs: []
  extraVolumeMounts: []
  extraVolumes: []
  image:
    digest: sha256:04b8327f7f992693c2cb483b999041ed8f92efc8e14f2a5f3ab95574a65ea2dc
    override: null
    pullPolicy: IfNotPresent
    repository: quay.io/cilium/cilium-etcd-operator
    tag: v2.0.7
    useDigest: true
  k8sService: false
  nodeSelector:
    kubernetes.io/os: linux
  podAnnotations: {}
  podDisruptionBudget:
    enabled: false
    maxUnavailable: 1
    minAvailable: null
  podLabels: {}
  podSecurityContext: {}
  priorityClassName: ""
  resources: {}
  securityContext: {}
  ssl: false
  tolerations:
  - operator: Exists
  topologySpreadConstraints: []
  updateStrategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
externalIPs:
  enabled: false
externalWorkloads:
  enabled: false
extraArgs:
- --api-rate-limit=endpoint-create=rate-limit:16/s,rate-burst:1024,max-parallel-requests:8192,auto-adjust:true
- --api-rate-limit=endpoint-delete=rate-limit:1024/s,rate-burst:4096,max-parallel-requests:8192,auto-adjust:true
- --api-rate-limit=endpoint-get=rate-limit:32/s,rate-burst:1024,max-parallel-requests:8192,auto-adjust:true
- --api-rate-limit=endpoint-patch=rate-limit:16/s,rate-burst:1024,max-parallel-requests:4096,auto-adjust:true
- --api-rate-limit=endpoint-list=rate-limit:16/s,rate-burst:1024,max-parallel-requests:4096,auto-adjust:true
extraConfig: {}
extraContainers: []
extraEnv: []
extraHostPathMounts: []
extraVolumeMounts: []
extraVolumes: []
gatewayAPI:
  enabled: false
  secretsNamespace:
    create: true
    name: cilium-secrets
    sync: true
gke:
  enabled: false
healthChecking: true
healthPort: 9879
highScaleIPcache:
  enabled: false
hostFirewall:
  enabled: false
hostPort:
  enabled: false
hubble:
  annotations: {}
  enabled: true
  export:
    dynamic:
      config:
        configMapName: cilium-flowlog-config
        content:
        - excludeFilters: []
          fieldMask: []
          filePath: /var/run/cilium/hubble/events.log
          includeFilters: []
          name: all
        createConfigMap: true
      enabled: false
    fileMaxBackups: 5
    fileMaxSizeMb: 10
    static:
      allowList: []
      denyList: []
      enabled: false
      fieldMask: []
      filePath: /var/run/cilium/hubble/events.log
  listenAddress: :4244
  metrics:
    dashboards:
      annotations: {}
      enabled: false
      label: grafana_dashboard
      labelValue: "1"
      namespace: null
    enableOpenMetrics: false
    enabled: null
    port: 9965
    serviceAnnotations: {}
    serviceMonitor:
      annotations: {}
      enabled: false
      interval: 10s
      jobLabel: ""
      labels: {}
      metricRelabelings: null
      relabelings:
      - replacement: ${1}
        sourceLabels:
        - __meta_kubernetes_pod_node_name
        targetLabel: node
  peerService:
    clusterDomain: cluster.local
    targetPort: 4244
  preferIpv6: false
  redact:
    enabled: false
    http:
      headers:
        allow: []
        deny: []
      urlQuery: false
      userInfo: true
    kafka:
      apiKey: false
  relay:
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              k8s-app: cilium
          topologyKey: kubernetes.io/hostname
    annotations: {}
    dialTimeout: null
    enabled: false
    extraEnv: []
    extraVolumeMounts: []
    extraVolumes: []
    gops:
      enabled: true
      port: 9893
    image:
      digest: sha256:03ad857feaf52f1b4774c29614f42a50b370680eb7d0bfbc1ae065df84b1070a
      override: null
      pullPolicy: IfNotPresent
      repository: quay.io/cilium/hubble-relay
      tag: v1.15.4
      useDigest: true
    listenHost: ""
    listenPort: "4245"
    nodeSelector:
      kubernetes.io/os: linux
    podAnnotations: {}
    podDisruptionBudget:
      enabled: false
      maxUnavailable: 1
      minAvailable: null
    podLabels: {}
    podSecurityContext:
      fsGroup: 65532
    pprof:
      address: localhost
      enabled: false
      port: 6062
    priorityClassName: ""
    prometheus:
      enabled: false
      port: 9966
      serviceMonitor:
        annotations: {}
        enabled: false
        interval: 10s
        labels: {}
        metricRelabelings: null
        relabelings: null
    replicas: 1
    resources: {}
    retryTimeout: null
    rollOutPods: false
    securityContext:
      capabilities:
        drop:
        - ALL
      runAsGroup: 65532
      runAsNonRoot: true
      runAsUser: 65532
    service:
      nodePort: 31234
      type: ClusterIP
    sortBufferDrainTimeout: null
    sortBufferLenMax: null
    terminationGracePeriodSeconds: 1
    tls:
      client:
        cert: ""
        key: ""
      server:
        cert: ""
        enabled: false
        extraDnsNames: []
        extraIpAddresses: []
        key: ""
        mtls: false
        relayName: ui.hubble-relay.cilium.io
    tolerations: []
    topologySpreadConstraints: []
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 1
      type: RollingUpdate
  skipUnknownCGroupIDs: null
  socketPath: /var/run/cilium/hubble.sock
  tls:
    auto:
      certManagerIssuerRef: {}
      certValidityDuration: 1095
      enabled: true
      method: helm
      schedule: 0 0 1 */4 *
    enabled: true
    server:
      cert: ""
      extraDnsNames: []
      extraIpAddresses: []
      key: ""
  ui:
    affinity: {}
    annotations: {}
    backend:
      extraEnv: []
      extraVolumeMounts: []
      extraVolumes: []
      image:
        digest: sha256:1e7657d997c5a48253bb8dc91ecee75b63018d16ff5e5797e5af367336bc8803
        override: null
        pullPolicy: IfNotPresent
        repository: quay.io/cilium/hubble-ui-backend
        tag: v0.13.0
        useDigest: true
      livenessProbe:
        enabled: false
      readinessProbe:
        enabled: false
      resources: {}
      securityContext: {}
    baseUrl: /
    enabled: false
    frontend:
      extraEnv: []
      extraVolumeMounts: []
      extraVolumes: []
      image:
        digest: sha256:7d663dc16538dd6e29061abd1047013a645e6e69c115e008bee9ea9fef9a6666
        override: null
        pullPolicy: IfNotPresent
        repository: quay.io/cilium/hubble-ui
        tag: v0.13.0
        useDigest: true
      resources: {}
      securityContext: {}
      server:
        ipv6:
          enabled: true
    ingress:
      annotations: {}
      className: ""
      enabled: false
      hosts:
      - chart-example.local
      labels: {}
      tls: []
    nodeSelector:
      kubernetes.io/os: linux
    podAnnotations: {}
    podDisruptionBudget:
      enabled: false
      maxUnavailable: 1
      minAvailable: null
    podLabels: {}
    priorityClassName: ""
    replicas: 1
    rollOutPods: false
    securityContext:
      fsGroup: 1001
      runAsGroup: 1001
      runAsUser: 1001
    service:
      annotations: {}
      nodePort: 31235
      type: ClusterIP
    standalone:
      enabled: false
      tls:
        certsVolume: {}
    tls:
      client:
        cert: ""
        key: ""
    tolerations: []
    topologySpreadConstraints: []
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 1
      type: RollingUpdate
identityAllocationMode: crd
identityChangeGracePeriod: ""
image:
  digest: sha256:b760a4831f5aab71c711f7537a107b751d0d0ce90dd32d8b358df3c5da385426
  override: null
  pullPolicy: IfNotPresent
  repository: quay.io/cilium/cilium
  tag: v1.15.4
  useDigest: true
imagePullSecrets: null
ingressController:
  default: false
  defaultSecretName: null
  defaultSecretNamespace: null
  enableProxyProtocol: false
  enabled: false
  enforceHttps: true
  ingressLBAnnotationPrefixes:
  - service.beta.kubernetes.io
  - service.kubernetes.io
  - cloud.google.com
  loadbalancerMode: dedicated
  secretsNamespace:
    create: true
    name: cilium-secrets
    sync: true
  service:
    allocateLoadBalancerNodePorts: null
    annotations: {}
    insecureNodePort: null
    labels: {}
    loadBalancerClass: null
    loadBalancerIP: null
    name: cilium-ingress
    secureNodePort: null
    type: LoadBalancer
initResources: {}
installNoConntrackIptablesRules: false
ipMasqAgent:
  enabled: false
ipam:
  ciliumNodeUpdateRate: 15s
  mode: cluster-pool
  operator:
    autoCreateCiliumPodIPPools: {}
    clusterPoolIPv4MaskSize: 24
    clusterPoolIPv4PodCIDRList:
    - 10.0.0.0/8
    clusterPoolIPv6MaskSize: 120
    clusterPoolIPv6PodCIDRList:
    - fd00::/104
    externalAPILimitBurstSize: null
    externalAPILimitQPS: null
ipv4:
  enabled: false
ipv4NativeRoutingCIDR: ""
ipv6:
  enabled: true
ipv6NativeRoutingCIDR: ""
k8s: {}
k8sClientRateLimit:
  burst: null
  qps: null
k8sNetworkPolicy:
  enabled: true
k8sServiceHost: ""
k8sServicePort: ""
keepDeprecatedLabels: false
keepDeprecatedProbes: false
kubeConfigPath: ""
kubeProxyReplacementHealthzBindAddr: ""
l2NeighDiscovery:
  enabled: true
  refreshPeriod: 30s
l2announcements:
  enabled: false
l2podAnnouncements:
  enabled: false
  interface: eth0
l7Proxy: true
livenessProbe:
  failureThreshold: 10
  periodSeconds: 30
loadBalancer:
  acceleration: best-effort
  l7:
    algorithm: round_robin
    backend: disabled
    ports: []
localRedirectPolicy: false
logSystemLoad: false
maglev: {}
monitor:
  enabled: false
name: cilium
nat46x64Gateway:
  enabled: false
nodePort:
  autoProtectPortRange: true
  bindProtection: true
  enableHealthCheck: true
  enableHealthCheckLoadBalancerIP: false
  enabled: false
nodeSelector:
  kubernetes.io/os: linux
nodeinit:
  affinity: {}
  annotations: {}
  bootstrapFile: /tmp/cilium-bootstrap.d/cilium-bootstrap-time
  enabled: false
  extraEnv: []
  extraVolumeMounts: []
  extraVolumes: []
  image:
    digest: sha256:e1d442546e868db1a3289166c14011e0dbd32115b338b963e56f830972bc22a2
    override: null
    pullPolicy: IfNotPresent
    repository: quay.io/cilium/startup-script
    tag: 62093c5c233ea914bfa26a10ba41f8780d9b737f
    useDigest: true
  nodeSelector:
    kubernetes.io/os: linux
  podAnnotations: {}
  podLabels: {}
  prestop:
    postScript: ""
    preScript: ""
  priorityClassName: ""
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
  securityContext:
    capabilities:
      add:
      - SYS_MODULE
      - NET_ADMIN
      - SYS_ADMIN
      - SYS_CHROOT
      - SYS_PTRACE
    privileged: false
    seLinuxOptions:
      level: s0
      type: spc_t
  startup:
    postScript: ""
    preScript: ""
  tolerations:
  - operator: Exists
  updateStrategy:
    type: RollingUpdate
operator:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            io.cilium/app: operator
        topologyKey: kubernetes.io/hostname
  annotations: {}
  dashboards:
    annotations: {}
    enabled: false
    label: grafana_dashboard
    labelValue: "1"
    namespace: null
  dnsPolicy: ""
  enabled: true
  endpointGCInterval: 5m0s
  extraArgs: []
  extraEnv: []
  extraHostPathMounts: []
  extraVolumeMounts: []
  extraVolumes: []
  identityGCInterval: 15m0s
  identityHeartbeatTimeout: 30m0s
  image:
    alibabacloudDigest: sha256:7c0e5346483a517e18a8951f4d4399337fb47020f2d9225e2ceaa8c5d9a45a5f
    awsDigest: sha256:8675486ce8938333390c37302af162ebd12aaebc08eeeaf383bfb73128143fa9
    azureDigest: sha256:4c1a31502931681fa18a41ead2a3904b97d47172a92b7a7b205026bd1e715207
    genericDigest: sha256:404890a83cca3f28829eb7e54c1564bb6904708cdb7be04ebe69c2b60f164e9a
    override: null
    pullPolicy: IfNotPresent
    repository: quay.io/cilium/operator
    suffix: ""
    tag: v1.15.4
    useDigest: true
  nodeGCInterval: 5m0s
  nodeSelector:
    kubernetes.io/os: linux
  podAnnotations: {}
  podDisruptionBudget:
    enabled: false
    maxUnavailable: 1
    minAvailable: null
  podLabels: {}
  podSecurityContext: {}
  pprof:
    address: localhost
    enabled: false
    port: 6061
  priorityClassName: ""
  prometheus:
    enabled: true
    port: 9963
    serviceMonitor:
      annotations: {}
      enabled: false
      interval: 10s
      jobLabel: ""
      labels: {}
      metricRelabelings: null
      relabelings: null
  removeNodeTaints: true
  replicas: 2
  resources: {}
  rollOutPods: false
  securityContext: {}
  setNodeNetworkStatus: true
  setNodeTaints: null
  skipCNPStatusStartupClean: false
  skipCRDCreation: false
  tolerations:
  - operator: Exists
  topologySpreadConstraints: []
  unmanagedPodWatcher:
    intervalSeconds: 15
    restart: true
  updateStrategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 50%
    type: RollingUpdate
pmtuDiscovery:
  enabled: false
podAnnotations: {}
podLabels: {}
podSecurityContext: {}
policyCIDRMatchMode: null
policyEnforcementMode: default
pprof:
  address: localhost
  enabled: false
  port: 6060
preflight:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            k8s-app: cilium
        topologyKey: kubernetes.io/hostname
  annotations: {}
  enabled: false
  extraEnv: []
  extraVolumeMounts: []
  extraVolumes: []
  image:
    digest: sha256:b760a4831f5aab71c711f7537a107b751d0d0ce90dd32d8b358df3c5da385426
    override: null
    pullPolicy: IfNotPresent
    repository: quay.io/cilium/cilium
    tag: v1.15.4
    useDigest: true
  nodeSelector:
    kubernetes.io/os: linux
  podAnnotations: {}
  podDisruptionBudget:
    enabled: false
    maxUnavailable: 1
    minAvailable: null
  podLabels: {}
  podSecurityContext: {}
  priorityClassName: ""
  resources: {}
  securityContext: {}
  terminationGracePeriodSeconds: 1
  tofqdnsPreCache: ""
  tolerations:
  - effect: NoSchedule
    key: node.kubernetes.io/not-ready
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
  - effect: NoSchedule
    key: node.cloudprovider.kubernetes.io/uninitialized
    value: "true"
  - key: CriticalAddonsOnly
    operator: Exists
  updateStrategy:
    type: RollingUpdate
  validateCNPs: true
priorityClassName: ""
prometheus:
  controllerGroupMetrics:
  - write-cni-file
  - sync-host-ips
  - sync-lb-maps-with-k8s-services
  enabled: false
  metrics: null
  port: 9962
  serviceMonitor:
    annotations: {}
    enabled: false
    interval: 10s
    jobLabel: ""
    labels: {}
    metricRelabelings: null
    relabelings:
    - replacement: ${1}
      sourceLabels:
      - __meta_kubernetes_pod_node_name
      targetLabel: node
    trustCRDsExist: false
proxy:
  prometheus:
    enabled: true
    port: null
  sidecarImageRegex: cilium/istio_proxy
rbac:
  create: true
readinessProbe:
  failureThreshold: 3
  periodSeconds: 30
remoteNodeIdentity: true
resourceQuotas:
  cilium:
    hard:
      pods: 10k
  enabled: false
  operator:
    hard:
      pods: "15"
resources: {}
rollOutCiliumPods: false
routingMode: native
sctp:
  enabled: false
securityContext:
  capabilities:
    applySysctlOverwrites:
    - SYS_ADMIN
    - SYS_CHROOT
    - SYS_PTRACE
    ciliumAgent:
    - CHOWN
    - KILL
    - NET_ADMIN
    - NET_RAW
    - IPC_LOCK
    - SYS_MODULE
    - SYS_ADMIN
    - SYS_RESOURCE
    - DAC_OVERRIDE
    - FOWNER
    - SETGID
    - SETUID
    cleanCiliumState:
    - NET_ADMIN
    - SYS_MODULE
    - SYS_ADMIN
    - SYS_RESOURCE
    mountCgroup:
    - SYS_ADMIN
    - SYS_CHROOT
    - SYS_PTRACE
  privileged: false
  seLinuxOptions:
    level: s0
    type: spc_t
serviceAccounts:
  cilium:
    annotations: {}
    automount: true
    create: true
    name: cilium
  clustermeshApiserver:
    annotations: {}
    automount: true
    create: true
    name: clustermesh-apiserver
  clustermeshcertgen:
    annotations: {}
    automount: true
    create: true
    name: clustermesh-apiserver-generate-certs
  envoy:
    annotations: {}
    automount: true
    create: true
    name: cilium-envoy
  etcd:
    annotations: {}
    automount: true
    create: true
    name: cilium-etcd-operator
  hubblecertgen:
    annotations: {}
    automount: true
    create: true
    name: hubble-generate-certs
  nodeinit:
    annotations: {}
    automount: true
    create: true
    enabled: false
    name: cilium-nodeinit
  operator:
    annotations: {}
    automount: true
    create: true
    name: cilium-operator
  preflight:
    annotations: {}
    automount: true
    create: true
    name: cilium-pre-flight
  relay:
    annotations: {}
    automount: false
    create: true
    name: hubble-relay
  ui:
    annotations: {}
    automount: true
    create: true
    name: hubble-ui
serviceNoBackendResponse: reject
sleepAfterInit: false
socketLB:
  enabled: false
startupProbe:
  failureThreshold: 105
  periodSeconds: 2
svcSourceRangeCheck: true
synchronizeK8sNodes: true
terminationGracePeriodSeconds: 1
tls:
  ca:
    cert: ""
    certValidityDuration: 1095
    key: ""
  caBundle:
    enabled: false
    key: ca.crt
    name: cilium-root-ca.crt
    useSecret: false
  secretsBackend: local
tolerations:
- operator: Exists
tunnelPort: 0
tunnelProtocol: ""
updateStrategy:
  rollingUpdate:
    maxUnavailable: 2
  type: RollingUpdate
vtep:
  cidr: ""
  enabled: false
  endpoint: ""
  mac: ""
  mask: ""
waitForKubeProxy: false
wellKnownIdentities:
  enabled: false

Output of the test:

.......
  ℹ️  📜 Applying CiliumNetworkPolicy 'all-entities-deny' to namespace 'cilium-test'..
  [-] Scenario [all-entities-deny/pod-to-pod]
  [.] Action [all-entities-deny/pod-to-pod/curl-ipv6-0: cilium-test/client-69748f45d8-rz8rh (2a05:d014:554:b286:231::2) -> cilium-test/echo-other-node-5d67f9786b-tl4w4 (2a05:d014:554:b285:efbc::3:8080)]
  [.] Action [all-entities-deny/pod-to-pod/curl-ipv6-1: cilium-test/client-69748f45d8-rz8rh (2a05:d014:554:b286:231::2) -> cilium-test/echo-same-node-6698bd45b-782dq (2a05:d014:554:b286:231::3:8080)]
  [.] Action [all-entities-deny/pod-to-pod/curl-ipv6-2: cilium-test/client2-ccd7b8bdf-nj5n6 (2a05:d014:554:b286:231::5) -> cilium-test/echo-other-node-5d67f9786b-tl4w4 (2a05:d014:554:b285:efbc::3:8080)]
  [.] Action [all-entities-deny/pod-to-pod/curl-ipv6-3: cilium-test/client2-ccd7b8bdf-nj5n6 (2a05:d014:554:b286:231::5) -> cilium-test/echo-same-node-6698bd45b-782dq (2a05:d014:554:b286:231::3:8080)]
  [.] Action [all-entities-deny/pod-to-pod/curl-ipv6-4: cilium-test/client3-868f7b8f6b-6w7jn (2a05:d014:554:b285:efbc::2) -> cilium-test/echo-other-node-5d67f9786b-tl4w4 (2a05:d014:554:b285:efbc::3:8080)]
  [.] Action [all-entities-deny/pod-to-pod/curl-ipv6-5: cilium-test/client3-868f7b8f6b-6w7jn (2a05:d014:554:b285:efbc::2) -> cilium-test/echo-same-node-6698bd45b-782dq (2a05:d014:554:b286:231::3:8080)]
  [-] Scenario [all-entities-deny/pod-to-cidr]
  [.] Action [all-entities-deny/pod-to-cidr/external-1111-0: cilium-test/client2-ccd7b8bdf-nj5n6 (2a05:d014:554:b286:231::5) -> external-1111 (1.1.1.1:443)]
  ❌ command "curl -w %{local_ip}:%{local_port} -> %{remote_ip}:%{remote_port} = %{response_code} --silent --fail --show-error --output /dev/null --connect-timeout 2 --max-time 10 https://1.1.1.1:443" succeeded while it should have failed: 169.254.172.21:54176 -> 1.1.1.1:443 = 302
  ℹ️  curl output:
  169.254.172.21:54176 -> 1.1.1.1:443 = 302

  [.] Action [all-entities-deny/pod-to-cidr/external-1111-1: cilium-test/client3-868f7b8f6b-6w7jn (2a05:d014:554:b285:efbc::2) -> external-1111 (1.1.1.1:443)]
  ❌ command "curl -w %{local_ip}:%{local_port} -> %{remote_ip}:%{remote_port} = %{response_code} --silent --fail --show-error --output /dev/null --connect-timeout 2 --max-time 10 https://1.1.1.1:443" succeeded while it should have failed: 169.254.172.4:37540 -> 1.1.1.1:443 = 302
  ℹ️  curl output:
  169.254.172.4:37540 -> 1.1.1.1:443 = 302

  [.] Action [all-entities-deny/pod-to-cidr/external-1111-2: cilium-test/client-69748f45d8-rz8rh (2a05:d014:554:b286:231::2) -> external-1111 (1.1.1.1:443)]
  ❌ command "curl -w %{local_ip}:%{local_port} -> %{remote_ip}:%{remote_port} = %{response_code} --silent --fail --show-error --output /dev/null --connect-timeout 2 --max-time 10 https://1.1.1.1:443" succeeded while it should have failed: 169.254.172.19:58442 -> 1.1.1.1:443 = 302
  ℹ️  curl output:
  169.254.172.19:58442 -> 1.1.1.1:443 = 302

  [.] Action [all-entities-deny/pod-to-cidr/external-1001-0: cilium-test/client-69748f45d8-rz8rh (2a05:d014:554:b286:231::2) -> external-1001 (1.0.0.1:443)]
  ❌ command "curl -w %{local_ip}:%{local_port} -> %{remote_ip}:%{remote_port} = %{response_code} --silent --fail --show-error --output /dev/null --connect-timeout 2 --max-time 10 https://1.0.0.1:443" succeeded while it should have failed: 169.254.172.19:36606 -> 1.0.0.1:443 = 302
  ℹ️  curl output:
  169.254.172.19:36606 -> 1.0.0.1:443 = 302

  [.] Action [all-entities-deny/pod-to-cidr/external-1001-1: cilium-test/client2-ccd7b8bdf-nj5n6 (2a05:d014:554:b286:231::5) -> external-1001 (1.0.0.1:443)]
  ❌ command "curl -w %{local_ip}:%{local_port} -> %{remote_ip}:%{remote_port} = %{response_code} --silent --fail --show-error --output /dev/null --connect-timeout 2 --max-time 10 https://1.0.0.1:443" succeeded while it should have failed: 169.254.172.21:37098 -> 1.0.0.1:443 = 302
  ℹ️  curl output:
  169.254.172.21:37098 -> 1.0.0.1:443 = 302

  [.] Action [all-entities-deny/pod-to-cidr/external-1001-2: cilium-test/client3-868f7b8f6b-6w7jn (2a05:d014:554:b285:efbc::2) -> external-1001 (1.0.0.1:443)]
  ❌ command "curl -w %{local_ip}:%{local_port} -> %{remote_ip}:%{remote_port} = %{response_code} --silent --fail --show-error --output /dev/null --connect-timeout 2 --max-time 10 https://1.0.0.1:443" succeeded while it should have failed: 169.254.172.4:59888 -> 1.0.0.1:443 = 302
  ℹ️  curl output:
  169.254.172.4:59888 -> 1.0.0.1:443 = 302

  ℹ️  📜 Deleting CiliumNetworkPolicy 'all-entities-deny' from namespace 'cilium-test'..

You can see that the IPv6 tests pass, while the IPv4 pod-to-cidr tests still run and fail, as expected with IPv4 disabled. However, other IPv4 tests are correctly skipped when the chart is installed with ipv4.enabled=false, so these pod-to-cidr IPv4 checks should be skipped as well.

Cilium Version

1.15.4

Kernel Version

6.1.82-99.168.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Mar 25 17:11:31 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Kubernetes Version

v1.29.3-eks-adc7111

Regression

No response

Sysdump

No response

Relevant log output

No response

Anything else?

No response

Cilium Users Document

  • Are you a user of Cilium? Please add yourself to the Users doc

Code of Conduct

  • I agree to follow this project's Code of Conduct
@george-zubrienko george-zubrienko added kind/bug Something isn't working kind/community-report This was reported by a user in the Cilium community, eg via Slack. needs/triage This issue requires triaging to establish the root cause. labels Apr 30, 2024
@youngnick

Thanks for opening this issue @george-zubrienko. I agree that the connectivity tests should disable IPv4 tests completely when IPv4 is disabled - I don't think our tests cover this very well atm. Thanks for finding this.

That said, this issue is actually with the Cilium CLI, so I'll move it there.

@youngnick youngnick transferred this issue from cilium/cilium May 6, 2024