
configuring interface prefix for egress-masquerade-interfaces does not work as documented #32184

Open
3 tasks done
soer3n opened this issue Apr 25, 2024 · 2 comments
Labels
info-completed: The GH issue has received a reply from the author
kind/bug: This is a bug in the Cilium logic.
kind/community-report: This was reported by a user in the Cilium community, eg via Slack.
needs/triage: This issue requires triaging to establish severity and next steps.
sig/datapath: Impacts bpf/ or low-level forwarding details, including map management and monitor messages.

Comments

@soer3n

soer3n commented Apr 25, 2024

Is there an existing issue for this?

  • I have searched the existing issues

What happened?

Hey.

The documentation notes that an interface prefix can be used when configuring egress-masquerade-interfaces, but this doesn't work for us.

When using a prefix such as eth+ and enabling enable-masquerade-to-route-source, the SNAT rules are completely missing on the nodes.

We were able to fix this by setting egress-masquerade-interfaces explicitly to the public and private network interfaces, for example eth0 eth1, which adds the required SNAT rules.

We migrated from flannel to Cilium in native routing mode and are therefore running Cilium without kube-proxy replacement for now. The nodes have a public and a private network interface, and cluster traffic runs over the private interface. With a prefix such as eth+ we could not reach servers in the same network that are not part of the cluster, because the masquerading rules were missing.
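
To illustrate, a trimmed-down sketch of the relevant ConfigMap keys (the interface names are only examples for a node with one public and one private NIC; the full ConfigMap is posted later in this issue):

  # prefix form, as documented, but no SNAT rules show up for us
  egress-masquerade-interfaces: eth+
  enable-ipv4-masquerade: "true"
  enable-masquerade-to-route-source: "true"

  # explicit interface list, SNAT rules are installed as expected
  egress-masquerade-interfaces: eth0 eth1
  enable-ipv4-masquerade: "true"
  enable-masquerade-to-route-source: "true"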

Cilium Version

v1.15.1

Kernel Version

Linux 5.15.0-102-generic

Kubernetes Version

Client Version: v1.28.2
Server Version: v1.29.3

Regression

No response

Sysdump

No response

Relevant log output

No response

Anything else?

No response

Cilium Users Document

  • Are you a user of Cilium? Please add yourself to the Users doc

Code of Conduct

  • I agree to follow this project's Code of Conduct
@soer3n soer3n added kind/bug This is a bug in the Cilium logic. kind/community-report This was reported by a user in the Cilium community, eg via Slack. needs/triage This issue requires triaging to establish severity and next steps. labels Apr 25, 2024
@youngnick
Contributor

Thanks for logging this issue @soer3n. It does seem like that should work, per the documentation. Any chance you could drop in some more information about how you did the install and where you set the flag? Ideally, https://docs.cilium.io/en/stable/operations/troubleshooting/#automatic-log-state-collection will have everything, but otherwise the details of how you set the flags would be most useful.

@youngnick youngnick added the need-more-info More information is required to further debug or fix the issue. label Apr 29, 2024
@soer3n
Author

soer3n commented Apr 30, 2024

Yes, sure. Here is the ConfigMap that works as expected:

apiVersion: v1
data:
  agent-not-ready-taint-key: node.cilium.io/agent-not-ready
  arping-refresh-period: 30s
  auto-direct-node-routes: "true"
  bpf-lb-acceleration: disabled
  bpf-lb-external-clusterip: "false"
  bpf-lb-map-max: "65536"
  bpf-lb-sock: "false"
  bpf-map-dynamic-size-ratio: "0.0025"
  bpf-policy-map-max: "16384"
  bpf-root: /sys/fs/bpf
  cgroup-root: /run/cilium/cgroupv2
  cilium-endpoint-gc-interval: 5m0s
  cluster-id: "0"
  cluster-name: default
  cluster-pool-ipv4-cidr: 10.244.192.0/18
  cluster-pool-ipv4-mask-size: "24"
  cni-exclusive: "false"
  cni-log-file: /var/run/cilium/cilium-cni.log
  cni-uninstall: "false"
  custom-cni-conf: "true"
  debug: "true"
  debug-verbose: ""
  devices: ens16 ens17
  dnsproxy-enable-transparent-mode: "true"
  egress-gateway-reconciliation-trigger-interval: 1s
  egress-masquerade-interfaces: ens16 ens17
  enable-auto-protect-node-port-range: "true"
  enable-bgp-control-plane: "false"
  enable-bpf-clock-probe: "false"
  enable-endpoint-health-checking: "true"
  enable-external-ips: "false"
  enable-health-check-loadbalancer-ip: "false"
  enable-health-check-nodeport: "true"
  enable-health-checking: "true"
  enable-host-port: "false"
  enable-hubble: "false"
  enable-ipv4: "true"
  enable-ipv4-big-tcp: "false"
  enable-ipv4-masquerade: "true"
  enable-ipv6: "false"
  enable-ipv6-big-tcp: "false"
  enable-ipv6-masquerade: "false"
  enable-k8s-networkpolicy: "true"
  enable-k8s-terminating-endpoint: "true"
  enable-l2-neigh-discovery: "true"
  enable-l7-proxy: "true"
  enable-local-redirect-policy: "false"
  enable-masquerade-to-route-source: "true"
  enable-metrics: "true"
  enable-node-port: "false"
  enable-policy: default
  enable-remote-node-identity: "true"
  enable-sctp: "false"
  enable-svc-source-range-check: "true"
  enable-vtep: "false"
  enable-well-known-identities: "false"
  enable-wireguard: "false"
  enable-xt-socket-fallback: "true"
  encrypt-node: "false"
  external-envoy-proxy: "false"
  identity-allocation-mode: crd
  identity-gc-interval: 15m0s
  identity-heartbeat-timeout: 30m0s
  install-no-conntrack-iptables-rules: "false"
  ipam: cluster-pool
  ipam-cilium-node-update-rate: 15s
  ipv4-native-routing-cidr: 10.244.0.0/16
  k8s-client-burst: "20"
  k8s-client-qps: "10"
  kube-proxy-replacement: "false"
  kube-proxy-replacement-healthz-bind-address: ""
  max-connected-clusters: "255"
  mesh-auth-enabled: "true"
  mesh-auth-gc-interval: 5m0s
  mesh-auth-queue-size: "1024"
  mesh-auth-rotated-identities-queue-size: "1024"
  monitor-aggregation: medium
  monitor-aggregation-flags: all
  monitor-aggregation-interval: 5s
  node-port-bind-protection: "true"
  nodes-gc-interval: 5m0s
  operator-api-serve-addr: 127.0.0.1:9234
  operator-prometheus-serve-addr: :9963
  policy-cidr-match-mode: ""
  preallocate-bpf-maps: "false"
  procfs: /host/proc
  proxy-connect-timeout: "2"
  proxy-max-connection-duration-seconds: "0"
  proxy-max-requests-per-connection: "0"
  proxy-prometheus-port: "9964"
  remove-cilium-node-taints: "true"
  routing-mode: native
  service-no-backend-response: reject
  set-cilium-is-up-condition: "true"
  set-cilium-node-taints: "false"
  sidecar-istio-proxy-image: cilium/istio_proxy
  skip-cnp-status-startup-clean: "false"
  synchronize-k8s-nodes: "true"
  tofqdns-dns-reject-response-code: refused
  tofqdns-enable-dns-compression: "true"
  tofqdns-endpoint-max-ip-per-hostname: "50"
  tofqdns-idle-connection-grace-period: 0s
  tofqdns-max-deferred-connection-deletes: "10000"
  tofqdns-proxy-response-max-delay: 100ms
  unmanaged-pod-watcher-interval: "0"
  vtep-cidr: ""
  vtep-endpoint: ""
  vtep-mac: ""
  vtep-mask: ""
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system

When changing egress-masquerade-interfaces to ens+ and restarting the Cilium agent pods, no SNAT rules for the public and private interfaces are present on the nodes. The same happens when the explicitly configured devices are removed.
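
One way to check on a node whether any masquerading rules were installed in the nat table (a generic check; the exact Cilium chain names can vary between versions):

  iptables-save -t nat | grep -E 'MASQUERADE|SNAT'

With ens16 ens17 configured, the per-interface SNAT rules show up in that output; with ens+ they do not.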

In the following log output I replaced the public IPs with placeholders: PUBLIC_GATEWAY_IP for the configured gateway route in the public IP range, PUBLIC_IP for the node's public IP with a /32 CIDR, and PUBLIC_IP_CIDR for the CIDR that contains the node's assigned public IP.

Debug log of a node with egress-masquerade-interfaces set to ens16 ens17, where the SNAT rules are added:

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
level=debug msg="Skipped reading configuration file" error="Config File \"cilium\" Not Found in \"[/root]\"" subsys=config
level=info msg="Memory available for map entries (0.003% of 8323018752B): 20807546B" subsys=config
level=debug msg="Total memory for default map entries: 149422080" subsys=config
level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 131072" subsys=config
level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 65536" subsys=config
level=info msg="option bpf-nat-global-max set by dynamic sizing to 131072" subsys=config
level=info msg="option bpf-neigh-global-max set by dynamic sizing to 131072" subsys=config
level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 65536" subsys=config
level=info msg="  --agent-health-port='9879'" subsys=daemon
level=info msg="  --agent-labels=''" subsys=daemon
level=info msg="  --agent-liveness-update-interval='1s'" subsys=daemon
level=info msg="  --agent-not-ready-taint-key='node.cilium.io/agent-not-ready'" subsys=daemon
level=info msg="  --allocator-list-timeout='3m0s'" subsys=daemon
level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
level=info msg="  --allow-localhost='auto'" subsys=daemon
level=info msg="  --annotate-k8s-node='false'" subsys=daemon
level=info msg="  --api-rate-limit=''" subsys=daemon
level=info msg="  --arping-refresh-period='30s'" subsys=daemon
level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
level=info msg="  --auto-direct-node-routes='true'" subsys=daemon
level=info msg="  --bgp-announce-lb-ip='false'" subsys=daemon
level=info msg="  --bgp-announce-pod-cidr='false'" subsys=daemon
level=info msg="  --bgp-config-path='/var/lib/cilium/bgp/config.yaml'" subsys=daemon
level=info msg="  --bpf-auth-map-max='524288'" subsys=daemon
level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp='2h13m20s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-tcp='2h13m20s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-tcp-grace='1m0s'" subsys=daemon
level=info msg="  --bpf-filter-priority='1'" subsys=daemon
level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
level=info msg="  --bpf-lb-acceleration='disabled'" subsys=daemon
level=info msg="  --bpf-lb-affinity-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-algorithm='random'" subsys=daemon
level=info msg="  --bpf-lb-dsr-dispatch='opt'" subsys=daemon
level=info msg="  --bpf-lb-dsr-l4-xlate='frontend'" subsys=daemon
level=info msg="  --bpf-lb-external-clusterip='false'" subsys=daemon
level=info msg="  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
level=info msg="  --bpf-lb-maglev-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-maglev-table-size='16381'" subsys=daemon
level=info msg="  --bpf-lb-map-max='65536'" subsys=daemon
level=info msg="  --bpf-lb-mode='snat'" subsys=daemon
level=info msg="  --bpf-lb-rev-nat-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-rss-ipv4-src-cidr=''" subsys=daemon
level=info msg="  --bpf-lb-rss-ipv6-src-cidr=''" subsys=daemon
level=info msg="  --bpf-lb-service-backend-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-service-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-sock='false'" subsys=daemon
level=info msg="  --bpf-lb-sock-hostns-only='false'" subsys=daemon
level=info msg="  --bpf-lb-source-range-map-max='0'" subsys=daemon
level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
level=info msg="  --bpf-map-event-buffers=''" subsys=daemon
level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
level=info msg="  --bpf-policy-map-full-reconciliation-interval='15m0s'" subsys=daemon
level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
level=info msg="  --bpf-root='/sys/fs/bpf'" subsys=daemon
level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
level=info msg="  --bypass-ip-availability-upon-restore='false'" subsys=daemon
level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
level=info msg="  --cflags=''" subsys=daemon
level=info msg="  --cgroup-root='/run/cilium/cgroupv2'" subsys=daemon
level=info msg="  --cilium-endpoint-gc-interval='5m0s'" subsys=daemon
level=info msg="  --cluster-health-port='4240'" subsys=daemon
level=info msg="  --cluster-id='0'" subsys=daemon
level=info msg="  --cluster-name='default'" subsys=daemon
level=info msg="  --cluster-pool-ipv4-cidr='10.244.192.0/18'" subsys=daemon
level=info msg="  --cluster-pool-ipv4-mask-size='24'" subsys=daemon
level=info msg="  --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
level=info msg="  --clustermesh-ip-identities-sync-timeout='1m0s'" subsys=daemon
level=info msg="  --cmdref=''" subsys=daemon
level=info msg="  --cni-chaining-mode='none'" subsys=daemon
level=info msg="  --cni-chaining-target=''" subsys=daemon
level=info msg="  --cni-exclusive='true'" subsys=daemon
level=info msg="  --cni-external-routing='false'" subsys=daemon
level=info msg="  --cni-log-file='/var/run/cilium/cilium-cni.log'" subsys=daemon
level=info msg="  --cni-uninstall='false'" subsys=daemon
level=info msg="  --config=''" subsys=daemon
level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=daemon
level=info msg="  --config-sources='config-map:kube-system/cilium-config,cilium-node-config:kube-system/cilium-default'" subsys=daemon
level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
level=info msg="  --conntrack-gc-max-interval='0s'" subsys=daemon
level=info msg="  --controller-group-metrics=''" subsys=daemon
level=info msg="  --crd-wait-timeout='5m0s'" subsys=daemon
level=info msg="  --custom-cni-conf='false'" subsys=daemon
level=info msg="  --datapath-mode='veth'" subsys=daemon
level=info msg="  --debug='true'" subsys=daemon
level=info msg="  --debug-verbose=''" subsys=daemon
level=info msg="  --derive-masquerade-ip-addr-from-device=''" subsys=daemon
level=info msg="  --devices='ens16,ens17'" subsys=daemon
level=info msg="  --direct-routing-device=''" subsys=daemon
level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
level=info msg="  --dns-max-ips-per-restored-rule='1000'" subsys=daemon
level=info msg="  --dns-policy-unload-on-shutdown='false'" subsys=daemon
level=info msg="  --dnsproxy-concurrency-limit='0'" subsys=daemon
level=info msg="  --dnsproxy-concurrency-processing-grace-period='0s'" subsys=daemon
level=info msg="  --dnsproxy-enable-transparent-mode='true'" subsys=daemon
level=info msg="  --dnsproxy-lock-count='131'" subsys=daemon
level=info msg="  --dnsproxy-lock-timeout='500ms'" subsys=daemon
level=info msg="  --egress-gateway-policy-map-max='16384'" subsys=daemon
level=info msg="  --egress-gateway-reconciliation-trigger-interval='1s'" subsys=daemon
level=info msg="  --egress-masquerade-interfaces='ens16,ens17'" subsys=daemon
level=info msg="  --egress-multi-home-ip-rule-compat='false'" subsys=daemon
level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
level=info msg="  --enable-bandwidth-manager='false'" subsys=daemon
level=info msg="  --enable-bbr='false'" subsys=daemon
level=info msg="  --enable-bgp-control-plane='false'" subsys=daemon
level=info msg="  --enable-bpf-clock-probe='false'" subsys=daemon
level=info msg="  --enable-bpf-masquerade='false'" subsys=daemon
level=info msg="  --enable-bpf-tproxy='false'" subsys=daemon
level=info msg="  --enable-cilium-api-server-access='*'" subsys=daemon
level=info msg="  --enable-cilium-endpoint-slice='false'" subsys=daemon
level=info msg="  --enable-cilium-health-api-server-access='*'" subsys=daemon
level=info msg="  --enable-custom-calls='false'" subsys=daemon
level=info msg="  --enable-encryption-strict-mode='false'" subsys=daemon
level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
level=info msg="  --enable-endpoint-routes='false'" subsys=daemon
level=info msg="  --enable-envoy-config='false'" subsys=daemon
level=info msg="  --enable-external-ips='false'" subsys=daemon
level=info msg="  --enable-health-check-loadbalancer-ip='false'" subsys=daemon
level=info msg="  --enable-health-check-nodeport='true'" subsys=daemon
level=info msg="  --enable-health-checking='true'" subsys=daemon
level=info msg="  --enable-high-scale-ipcache='false'" subsys=daemon
level=info msg="  --enable-host-firewall='false'" subsys=daemon
level=info msg="  --enable-host-legacy-routing='false'" subsys=daemon
level=info msg="  --enable-host-port='false'" subsys=daemon
level=info msg="  --enable-hubble='false'" subsys=daemon
level=info msg="  --enable-hubble-recorder-api='true'" subsys=daemon
level=info msg="  --enable-icmp-rules='true'" subsys=daemon
level=info msg="  --enable-identity-mark='true'" subsys=daemon
level=info msg="  --enable-ip-masq-agent='false'" subsys=daemon
level=info msg="  --enable-ipsec='false'" subsys=daemon
level=info msg="  --enable-ipsec-key-watcher='true'" subsys=daemon
level=info msg="  --enable-ipv4='true'" subsys=daemon
level=info msg="  --enable-ipv4-big-tcp='false'" subsys=daemon
level=info msg="  --enable-ipv4-egress-gateway='false'" subsys=daemon
level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
level=info msg="  --enable-ipv4-masquerade='true'" subsys=daemon
level=info msg="  --enable-ipv6='false'" subsys=daemon
level=info msg="  --enable-ipv6-big-tcp='false'" subsys=daemon
level=info msg="  --enable-ipv6-masquerade='false'" subsys=daemon
level=info msg="  --enable-ipv6-ndp='false'" subsys=daemon
level=info msg="  --enable-k8s='true'" subsys=daemon
level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
level=info msg="  --enable-k8s-networkpolicy='true'" subsys=daemon
level=info msg="  --enable-k8s-terminating-endpoint='true'" subsys=daemon
level=info msg="  --enable-l2-announcements='false'" subsys=daemon
level=info msg="  --enable-l2-neigh-discovery='true'" subsys=daemon
level=info msg="  --enable-l2-pod-announcements='false'" subsys=daemon
level=info msg="  --enable-l7-proxy='true'" subsys=daemon
level=info msg="  --enable-local-node-route='true'" subsys=daemon
level=info msg="  --enable-local-redirect-policy='false'" subsys=daemon
level=info msg="  --enable-masquerade-to-route-source='true'" subsys=daemon
level=info msg="  --enable-metrics='true'" subsys=daemon
level=info msg="  --enable-mke='false'" subsys=daemon
level=info msg="  --enable-monitor='true'" subsys=daemon
level=info msg="  --enable-nat46x64-gateway='false'" subsys=daemon
level=info msg="  --enable-node-port='false'" subsys=daemon
level=info msg="  --enable-pmtu-discovery='false'" subsys=daemon
level=info msg="  --enable-policy='default'" subsys=daemon
level=info msg="  --enable-recorder='false'" subsys=daemon
level=info msg="  --enable-remote-node-identity='true'" subsys=daemon
level=info msg="  --enable-runtime-device-detection='false'" subsys=daemon
level=info msg="  --enable-sctp='false'" subsys=daemon
level=info msg="  --enable-service-topology='false'" subsys=daemon
level=info msg="  --enable-session-affinity='false'" subsys=daemon
level=info msg="  --enable-srv6='false'" subsys=daemon
level=info msg="  --enable-stale-cilium-endpoint-cleanup='true'" subsys=daemon
level=info msg="  --enable-svc-source-range-check='true'" subsys=daemon
level=info msg="  --enable-tracing='false'" subsys=daemon
level=info msg="  --enable-unreachable-routes='false'" subsys=daemon
level=info msg="  --enable-vtep='false'" subsys=daemon
level=info msg="  --enable-well-known-identities='false'" subsys=daemon
level=info msg="  --enable-wireguard='false'" subsys=daemon
level=info msg="  --enable-wireguard-userspace-fallback='false'" subsys=daemon
level=info msg="  --enable-xdp-prefilter='false'" subsys=daemon
level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
level=info msg="  --encrypt-interface=''" subsys=daemon
level=info msg="  --encrypt-node='false'" subsys=daemon
level=info msg="  --encryption-strict-mode-allow-remote-node-identities='false'" subsys=daemon
level=info msg="  --encryption-strict-mode-cidr=''" subsys=daemon
level=info msg="  --endpoint-bpf-prog-watchdog-interval='30s'" subsys=daemon
level=info msg="  --endpoint-gc-interval='5m0s'" subsys=daemon
level=info msg="  --endpoint-queue-size='25'" subsys=daemon
level=info msg="  --endpoint-status=''" subsys=daemon
level=info msg="  --envoy-config-timeout='2m0s'" subsys=daemon
level=info msg="  --envoy-log=''" subsys=daemon
level=info msg="  --exclude-local-address=''" subsys=daemon
level=info msg="  --external-envoy-proxy='false'" subsys=daemon
level=info msg="  --fixed-identity-mapping=''" subsys=daemon
level=info msg="  --fqdn-regex-compile-lru-size='1024'" subsys=daemon
level=info msg="  --gops-port='9890'" subsys=daemon
level=info msg="  --http-403-msg=''" subsys=daemon
level=info msg="  --http-idle-timeout='0'" subsys=daemon
level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
level=info msg="  --http-normalize-path='true'" subsys=daemon
level=info msg="  --http-request-timeout='3600'" subsys=daemon
level=info msg="  --http-retry-count='3'" subsys=daemon
level=info msg="  --http-retry-timeout='0'" subsys=daemon
level=info msg="  --hubble-disable-tls='false'" subsys=daemon
level=info msg="  --hubble-event-buffer-capacity='4095'" subsys=daemon
level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
level=info msg="  --hubble-export-allowlist=''" subsys=daemon
level=info msg="  --hubble-export-denylist=''" subsys=daemon
level=info msg="  --hubble-export-fieldmask=''" subsys=daemon
level=info msg="  --hubble-export-file-compress='false'" subsys=daemon
level=info msg="  --hubble-export-file-max-backups='5'" subsys=daemon
level=info msg="  --hubble-export-file-max-size-mb='10'" subsys=daemon
level=info msg="  --hubble-export-file-path=''" subsys=daemon
level=info msg="  --hubble-flowlogs-config-path=''" subsys=daemon
level=info msg="  --hubble-listen-address=''" subsys=daemon
level=info msg="  --hubble-metrics=''" subsys=daemon
level=info msg="  --hubble-metrics-server=''" subsys=daemon
level=info msg="  --hubble-monitor-events=''" subsys=daemon
level=info msg="  --hubble-prefer-ipv6='false'" subsys=daemon
level=info msg="  --hubble-recorder-sink-queue-size='1024'" subsys=daemon
level=info msg="  --hubble-recorder-storage-path='/var/run/cilium/pcaps'" subsys=daemon
level=info msg="  --hubble-redact-enabled='false'" subsys=daemon
level=info msg="  --hubble-redact-http-headers-allow=''" subsys=daemon
level=info msg="  --hubble-redact-http-headers-deny=''" subsys=daemon
level=info msg="  --hubble-redact-http-urlquery='false'" subsys=daemon
level=info msg="  --hubble-redact-http-userinfo='true'" subsys=daemon
level=info msg="  --hubble-redact-kafka-apikey='false'" subsys=daemon
level=info msg="  --hubble-skip-unknown-cgroup-ids='true'" subsys=daemon
level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
level=info msg="  --hubble-tls-cert-file=''" subsys=daemon
level=info msg="  --hubble-tls-client-ca-files=''" subsys=daemon
level=info msg="  --hubble-tls-key-file=''" subsys=daemon
level=info msg="  --identity-allocation-mode='crd'" subsys=daemon
level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
level=info msg="  --identity-gc-interval='15m0s'" subsys=daemon
level=info msg="  --identity-heartbeat-timeout='30m0s'" subsys=daemon
level=info msg="  --identity-restore-grace-period='10m0s'" subsys=daemon
level=info msg="  --install-egress-gateway-routes='false'" subsys=daemon
level=info msg="  --install-iptables-rules='true'" subsys=daemon
level=info msg="  --install-no-conntrack-iptables-rules='false'" subsys=daemon
level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
level=info msg="  --ipam='cluster-pool'" subsys=daemon
level=info msg="  --ipam-cilium-node-update-rate='15s'" subsys=daemon
level=info msg="  --ipam-default-ip-pool='default'" subsys=daemon
level=info msg="  --ipam-multi-pool-pre-allocation=''" subsys=daemon
level=info msg="  --ipsec-key-file=''" subsys=daemon
level=info msg="  --ipsec-key-rotation-duration='5m0s'" subsys=daemon
level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
level=info msg="  --iptables-random-fully='false'" subsys=daemon
level=info msg="  --ipv4-native-routing-cidr='10.244.0.0/16'" subsys=daemon
level=info msg="  --ipv4-node='auto'" subsys=daemon
level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
level=info msg="  --ipv4-range='auto'" subsys=daemon
level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
level=info msg="  --ipv4-service-range='auto'" subsys=daemon
level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
level=info msg="  --ipv6-mcast-device=''" subsys=daemon
level=info msg="  --ipv6-native-routing-cidr=''" subsys=daemon
level=info msg="  --ipv6-node='auto'" subsys=daemon
level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
level=info msg="  --ipv6-range='auto'" subsys=daemon
level=info msg="  --ipv6-service-range='auto'" subsys=daemon
level=info msg="  --join-cluster='false'" subsys=daemon
level=info msg="  --k8s-api-server=''" subsys=daemon
level=info msg="  --k8s-client-burst='20'" subsys=daemon
level=info msg="  --k8s-client-qps='10'" subsys=daemon
level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
level=info msg="  --k8s-namespace='kube-system'" subsys=daemon
level=info msg="  --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
level=info msg="  --k8s-service-proxy-name=''" subsys=daemon
level=info msg="  --k8s-sync-timeout='3m0s'" subsys=daemon
level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
level=info msg="  --keep-config='false'" subsys=daemon
level=info msg="  --kube-proxy-replacement='false'" subsys=daemon
level=info msg="  --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
level=info msg="  --kvstore=''" subsys=daemon
level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
level=info msg="  --kvstore-max-consecutive-quorum-errors='2'" subsys=daemon
level=info msg="  --kvstore-opt=''" subsys=daemon
level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
level=info msg="  --l2-announcements-lease-duration='15s'" subsys=daemon
level=info msg="  --l2-announcements-renew-deadline='5s'" subsys=daemon
level=info msg="  --l2-announcements-retry-period='2s'" subsys=daemon
level=info msg="  --l2-pod-announcements-interface=''" subsys=daemon
level=info msg="  --label-prefix-file=''" subsys=daemon
level=info msg="  --labels=''" subsys=daemon
level=info msg="  --legacy-turn-off-k8s-event-handover='false'" subsys=daemon
level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
level=info msg="  --local-max-addr-scope='252'" subsys=daemon
level=info msg="  --local-router-ipv4=''" subsys=daemon
level=info msg="  --local-router-ipv6=''" subsys=daemon
level=info msg="  --log-driver=''" subsys=daemon
level=info msg="  --log-opt=''" subsys=daemon
level=info msg="  --log-system-load='false'" subsys=daemon
level=info msg="  --max-connected-clusters='255'" subsys=daemon
level=info msg="  --max-controller-interval='0'" subsys=daemon
level=info msg="  --max-internal-timer-delay='0s'" subsys=daemon
level=info msg="  --mesh-auth-enabled='true'" subsys=daemon
level=info msg="  --mesh-auth-gc-interval='5m0s'" subsys=daemon
level=info msg="  --mesh-auth-mutual-connect-timeout='5s'" subsys=daemon
level=info msg="  --mesh-auth-mutual-listener-port='0'" subsys=daemon
level=info msg="  --mesh-auth-queue-size='1024'" subsys=daemon
level=info msg="  --mesh-auth-rotated-identities-queue-size='1024'" subsys=daemon
level=info msg="  --mesh-auth-signal-backoff-duration='1s'" subsys=daemon
level=info msg="  --mesh-auth-spiffe-trust-domain='spiffe.cilium'" subsys=daemon
level=info msg="  --mesh-auth-spire-admin-socket=''" subsys=daemon
level=info msg="  --metrics=''" subsys=daemon
level=info msg="  --mke-cgroup-mount=''" subsys=daemon
level=info msg="  --monitor-aggregation='medium'" subsys=daemon
level=info msg="  --monitor-aggregation-flags='all'" subsys=daemon
level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
level=info msg="  --monitor-queue-size='0'" subsys=daemon
level=info msg="  --mtu='0'" subsys=daemon
level=info msg="  --node-encryption-opt-out-labels='node-role.kubernetes.io/control-plane'" subsys=daemon
level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
level=info msg="  --node-port-algorithm='random'" subsys=daemon
level=info msg="  --node-port-bind-protection='true'" subsys=daemon
level=info msg="  --node-port-mode='snat'" subsys=daemon
level=info msg="  --node-port-range='30000,32767'" subsys=daemon
level=info msg="  --nodeport-addresses=''" subsys=daemon
level=info msg="  --nodes-gc-interval='5m0s'" subsys=daemon
level=info msg="  --operator-api-serve-addr='127.0.0.1:9234'" subsys=daemon
level=info msg="  --operator-prometheus-serve-addr=':9963'" subsys=daemon
level=info msg="  --policy-audit-mode='false'" subsys=daemon
level=info msg="  --policy-cidr-match-mode=''" subsys=daemon
level=info msg="  --policy-queue-size='100'" subsys=daemon
level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
level=info msg="  --pprof='false'" subsys=daemon
level=info msg="  --pprof-address='localhost'" subsys=daemon
level=info msg="  --pprof-port='6060'" subsys=daemon
level=info msg="  --preallocate-bpf-maps='false'" subsys=daemon
level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
level=info msg="  --procfs='/host/proc'" subsys=daemon
level=info msg="  --prometheus-serve-addr=':9962'" subsys=daemon
level=info msg="  --proxy-connect-timeout='2'" subsys=daemon
level=info msg="  --proxy-gid='1337'" subsys=daemon
level=info msg="  --proxy-idle-timeout-seconds='60'" subsys=daemon
level=info msg="  --proxy-max-connection-duration-seconds='0'" subsys=daemon
level=info msg="  --proxy-max-requests-per-connection='0'" subsys=daemon
level=info msg="  --proxy-prometheus-port='9964'" subsys=daemon
level=info msg="  --read-cni-conf=''" subsys=daemon
level=info msg="  --remove-cilium-node-taints='true'" subsys=daemon
level=info msg="  --restore='true'" subsys=daemon
level=info msg="  --route-metric='0'" subsys=daemon
level=info msg="  --routing-mode='native'" subsys=daemon
level=info msg="  --service-no-backend-response='reject'" subsys=daemon
level=info msg="  --set-cilium-is-up-condition='true'" subsys=daemon
level=info msg="  --set-cilium-node-taints='false'" subsys=daemon
level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
level=info msg="  --skip-cnp-status-startup-clean='false'" subsys=daemon
level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
level=info msg="  --srv6-encap-mode='reduced'" subsys=daemon
level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
level=info msg="  --synchronize-k8s-nodes='true'" subsys=daemon
level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
level=info msg="  --tofqdns-idle-connection-grace-period='0s'" subsys=daemon
level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
level=info msg="  --trace-payloadlen='128'" subsys=daemon
level=info msg="  --trace-sock='true'" subsys=daemon
level=info msg="  --tunnel-port='0'" subsys=daemon
level=info msg="  --tunnel-protocol='vxlan'" subsys=daemon
level=info msg="  --unmanaged-pod-watcher-interval='0'" subsys=daemon
level=info msg="  --use-cilium-internal-ip-for-ipsec='false'" subsys=daemon
level=info msg="  --version='false'" subsys=daemon
level=info msg="  --vlan-bpf-bypass=''" subsys=daemon
level=info msg="  --vtep-cidr=''" subsys=daemon
level=info msg="  --vtep-endpoint=''" subsys=daemon
level=info msg="  --vtep-mac=''" subsys=daemon
level=info msg="  --vtep-mask=''" subsys=daemon
level=info msg="  --wireguard-persistent-keepalive='0s'" subsys=daemon
level=info msg="  --write-cni-conf-when-ready='/host/etc/cni/net.d/05-cilium.conflist'" subsys=daemon
level=info msg="     _ _ _" subsys=daemon
level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
level=info msg="|  _| | | | | |     |" subsys=daemon
level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
level=info msg="Cilium 1.15.1 a368c8f0 2024-02-14T22:16:57+00:00 go version go1.21.6 linux/amd64" subsys=daemon
level=info msg="clang (10.0.0) and kernel (5.15.0) versions: OK!" subsys=linux-datapath
level=info msg="Kernel config file not found: if the agent fails to start, check the system requirements at https://docs.cilium.io/en/stable/operations/system_requirements" subsys=probes
level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
level=info msg="Mounted cgroupv2 filesystem at /run/cilium/cgroupv2" subsys=cgroups
level=info msg="Parsing base label prefixes from default label list" subsys=labels-filter
level=info msg="Parsing additional label prefixes from user inputs: []" subsys=labels-filter
level=info msg="Final label prefixes to be used for identity evaluation:" subsys=labels-filter
level=info msg=" - reserved:.*" subsys=labels-filter
level=info msg=" - :io\\.kubernetes\\.pod\\.namespace" subsys=labels-filter
level=info msg=" - :io\\.cilium\\.k8s\\.namespace\\.labels" subsys=labels-filter
level=info msg=" - :app\\.kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:io\\.kubernetes" subsys=labels-filter
level=info msg=" - !:kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:statefulset\\.kubernetes\\.io/pod-name" subsys=labels-filter
level=info msg=" - !:apps\\.kubernetes\\.io/pod-index" subsys=labels-filter
level=info msg=" - !:batch\\.kubernetes\\.io/job-completion-index" subsys=labels-filter
level=info msg=" - !:.*beta\\.kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:k8s\\.io" subsys=labels-filter
level=info msg=" - !:pod-template-generation" subsys=labels-filter
level=info msg=" - !:pod-template-hash" subsys=labels-filter
level=info msg=" - !:controller-revision-hash" subsys=labels-filter
level=info msg=" - !:annotation.*" subsys=labels-filter
level=info msg=" - !:etcd_node" subsys=labels-filter
level=debug msg=Invoking function="pprof.glob..func1 (pkg/pprof/cell.go:50)" subsys=hive
level=info msg=Invoked duration=1.662265ms function="pprof.glob..func1 (pkg/pprof/cell.go:50)" subsys=hive
level=debug msg=Invoking function="gops.registerGopsHooks (pkg/gops/cell.go:38)" subsys=hive
level=info msg=Invoked duration="137.19µs" function="gops.registerGopsHooks (pkg/gops/cell.go:38)" subsys=hive
level=debug msg=Invoking function="metrics.glob..func1 (pkg/metrics/cell.go:13)" subsys=hive
level=info msg=Invoked duration=2.040214ms function="metrics.glob..func1 (pkg/metrics/cell.go:13)" subsys=hive
level=debug msg=Invoking function="metricsmap.RegisterCollector (pkg/maps/metricsmap/metricsmap.go:281)" subsys=hive
level=info msg=Invoked duration="32.705µs" function="metricsmap.RegisterCollector (pkg/maps/metricsmap/metricsmap.go:281)" subsys=hive
level=debug msg=Invoking function="cmd.configureAPIServer (cmd/cells.go:215)" subsys=hive
level=debug msg="signalmap.newMap: &{0xc0014e0270 <nil> 4}" subsys=signal-map
level=debug msg="getting identity cache for identity allocator manager" subsys=identity-cache
level=info msg="Spire Delegate API Client is disabled as no socket path is configured" subsys=spire-delegate
level=info msg="Mutual authentication handler is disabled as no port is configured" subsys=auth
level=debug msg="newSignalManager: &{0xc0008eb4e8 [<nil> <nil> <nil>] <nil> 0xc0011af980 {{{0 0}}} {{} {} 0}}" subsys=signal
level=debug msg="Adding BGP reconciler: Preflight (priority 10)" subsys=bgp-control-plane
level=debug msg="Adding BGP reconciler: RoutePolicy (priority 70)" subsys=bgp-control-plane
level=debug msg="Adding BGP reconciler: LBService (priority 40)" subsys=bgp-control-plane
level=debug msg="Adding BGP reconciler: Neighbor (priority 60)" subsys=bgp-control-plane
level=debug msg="Adding BGP reconciler: ExportPodCIDR (priority 30)" subsys=bgp-control-plane
level=info msg=Invoked duration=185.556915ms function="cmd.configureAPIServer (cmd/cells.go:215)" subsys=hive
level=debug msg=Invoking function="cmd.unlockAfterAPIServer (cmd/deletion_queue.go:113)" subsys=hive
level=info msg=Invoked duration="35.06µs" function="cmd.unlockAfterAPIServer (cmd/deletion_queue.go:113)" subsys=hive
level=debug msg=Invoking function="controller.Init (pkg/controller/cell.go:67)" subsys=hive
level=info msg=Invoked duration="77.869µs" function="controller.Init (pkg/controller/cell.go:67)" subsys=hive
level=debug msg=Invoking function="endpointcleanup.registerCleanup (pkg/endpointcleanup/cleanup.go:66)" subsys=hive
level=info msg=Invoked duration="311.345µs" function="endpointcleanup.registerCleanup (pkg/endpointcleanup/cleanup.go:66)" subsys=hive
level=debug msg=Invoking function="cmd.glob..func3 (cmd/daemon_main.go:1612)" subsys=hive
level=info msg=Invoked duration="34.273µs" function="cmd.glob..func3 (cmd/daemon_main.go:1612)" subsys=hive
level=debug msg=Invoking function="cmd.registerEndpointBPFProgWatchdog (cmd/watchdogs.go:57)" subsys=hive
level=info msg=Invoked duration="150.723µs" function="cmd.registerEndpointBPFProgWatchdog (cmd/watchdogs.go:57)" subsys=hive
level=debug msg=Invoking function="envoy.registerEnvoyVersionCheck (pkg/envoy/cell.go:132)" subsys=hive
level=info msg=Invoked duration="67.311µs" function="envoy.registerEnvoyVersionCheck (pkg/envoy/cell.go:132)" subsys=hive
level=debug msg=Invoking function="bgpv1.glob..func1 (pkg/bgpv1/cell.go:71)" subsys=hive
level=info msg=Invoked duration="13.128µs" function="bgpv1.glob..func1 (pkg/bgpv1/cell.go:71)" subsys=hive
level=debug msg=Invoking function="cmd.registerDeviceReloader (cmd/device-reloader.go:48)" subsys=hive
level=info msg=Invoked duration="153.391µs" function="cmd.registerDeviceReloader (cmd/device-reloader.go:48)" subsys=hive
level=debug msg=Invoking function="utime.initUtimeSync (pkg/datapath/linux/utime/cell.go:31)" subsys=hive
level=info msg=Invoked duration="21.473µs" function="utime.initUtimeSync (pkg/datapath/linux/utime/cell.go:31)" subsys=hive
level=debug msg=Invoking function="agentliveness.newAgentLivenessUpdater (pkg/datapath/agentliveness/agent_liveness.go:43)" subsys=hive
level=info msg=Invoked duration="90.523µs" function="agentliveness.newAgentLivenessUpdater (pkg/datapath/agentliveness/agent_liveness.go:43)" subsys=hive
level=debug msg=Invoking function="statedb.RegisterTable[...] (pkg/statedb/db.go:121)" subsys=hive
level=info msg=Invoked duration="34.255µs" function="statedb.RegisterTable[...] (pkg/statedb/db.go:121)" subsys=hive
level=debug msg=Invoking function="l2responder.NewL2ResponderReconciler (pkg/datapath/l2responder/l2responder.go:72)" subsys=hive
level=info msg=Invoked duration="83.159µs" function="l2responder.NewL2ResponderReconciler (pkg/datapath/l2responder/l2responder.go:72)" subsys=hive
level=debug msg=Invoking function="garp.newGARPProcessor (pkg/datapath/garp/processor.go:27)" subsys=hive
level=info msg=Invoked duration="139.166µs" function="garp.newGARPProcessor (pkg/datapath/garp/processor.go:27)" subsys=hive
level=debug msg=Invoking function="bigtcp.glob..func1 (pkg/datapath/linux/bigtcp/bigtcp.go:58)" subsys=hive
level=info msg=Invoked duration="22.629µs" function="bigtcp.glob..func1 (pkg/datapath/linux/bigtcp/bigtcp.go:58)" subsys=hive
level=debug msg=Invoking function="linux.glob..func1 (pkg/datapath/linux/devices_controller.go:62)" subsys=hive
level=info msg=Invoked duration="14.266µs" function="linux.glob..func1 (pkg/datapath/linux/devices_controller.go:62)" subsys=hive
level=debug msg=Invoking function="ipcache.glob..func3 (pkg/datapath/ipcache/cell.go:25)" subsys=hive
level=debug msg="enabling events buffer" file-path= name=cilium_ipcache size=1024 subsys=bpf ttl=0s
level=info msg=Invoked duration="104.263µs" function="ipcache.glob..func3 (pkg/datapath/ipcache/cell.go:25)" subsys=hive
level=info msg=Starting subsys=hive
level=debug msg="Executing start hook" function="gops.registerGopsHooks.func1 (pkg/gops/cell.go:43)" subsys=hive
level=info msg="Started gops server" address="127.0.0.1:9890" subsys=gops
level=info msg="Start hook executed" duration="483.496µs" function="gops.registerGopsHooks.func1 (pkg/gops/cell.go:43)" subsys=hive
level=debug msg="Executing start hook" function="metrics.NewRegistry.func1 (pkg/metrics/registry.go:86)" subsys=hive
level=info msg="Start hook executed" duration="2.924µs" function="metrics.NewRegistry.func1 (pkg/metrics/registry.go:86)" subsys=hive
level=debug msg="Executing start hook" function="client.(*compositeClientset).onStart" subsys=hive
level=info msg="Establishing connection to apiserver" host="https://10.244.64.1:443" subsys=k8s-client
level=info msg="Serving prometheus metrics on :9962" subsys=metrics
level=info msg="Connected to apiserver" subsys=k8s-client
level=debug msg="Starting new controller" name=k8s-heartbeat subsys=controller uuid=3a4329d6-ed2e-4128-afc5-48cbdf28a01d
level=debug msg="Controller func execution time: 3.617µs" name=k8s-heartbeat subsys=controller uuid=3a4329d6-ed2e-4128-afc5-48cbdf28a01d
level=debug msg="Skipping Leases support fallback discovery" subsys=k8s
level=info msg="Start hook executed" duration=44.09638ms function="client.(*compositeClientset).onStart" subsys=hive
level=debug msg="Executing start hook" function="authmap.newAuthMap.func1 (pkg/maps/authmap/cell.go:27)" subsys=hive
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_auth_map subsys=ebpf
level=info msg="Start hook executed" duration="230.368µs" function="authmap.newAuthMap.func1 (pkg/maps/authmap/cell.go:27)" subsys=hive
level=debug msg="Executing start hook" function="configmap.newMap.func1 (pkg/maps/configmap/cell.go:23)" subsys=hive
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_runtime_config subsys=bpf
level=info msg="Start hook executed" duration="106.366µs" function="configmap.newMap.func1 (pkg/maps/configmap/cell.go:23)" subsys=hive
level=debug msg="Executing start hook" function="signalmap.newMap.func1 (pkg/maps/signalmap/cell.go:44)" subsys=hive
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_signals subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_signals subsys=bpf
level=info msg="Start hook executed" duration="228.048µs" function="signalmap.newMap.func1 (pkg/maps/signalmap/cell.go:44)" subsys=hive
level=debug msg="Executing start hook" function="nodemap.newNodeMap.func1 (pkg/maps/nodemap/cell.go:23)" subsys=hive
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_node_map subsys=ebpf
level=info msg="Start hook executed" duration="96.404µs" function="nodemap.newNodeMap.func1 (pkg/maps/nodemap/cell.go:23)" subsys=hive
level=debug msg="Executing start hook" function="eventsmap.newEventsMap.func1 (pkg/maps/eventsmap/cell.go:35)" subsys=hive
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_events subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_events subsys=bpf
level=info msg="Start hook executed" duration="150.905µs" function="eventsmap.newEventsMap.func1 (pkg/maps/eventsmap/cell.go:35)" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*v1.Node].Start" subsys=hive
level=info msg="Start hook executed" duration="51.808µs" function="*resource.resource[*v1.Node].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=info msg="Start hook executed" duration="4.142µs" function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=debug msg="Executing start hook" function="node.NewLocalNodeStore.func1 (pkg/node/local_node_store.go:95)" subsys=hive
level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.193.0.0/16
level=info msg="Start hook executed" duration=12.933715ms function="node.NewLocalNodeStore.func1 (pkg/node/local_node_store.go:95)" subsys=hive
level=debug msg="Executing start hook" function="*statedb.DB.Start" subsys=hive
level=info msg="Start hook executed" duration="21.603µs" function="*statedb.DB.Start" subsys=hive
level=debug msg="Executing start hook" function="hive.New.func1.2 (pkg/hive/hive.go:105)" subsys=hive
level=info msg="Start hook executed" duration="36.904µs" function="hive.New.func1.2 (pkg/hive/hive.go:105)" subsys=hive
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="6.614µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="*linux.devicesController.Start" subsys=hive
level=info msg="Devices changed" devices="[ens16 ens17]" subsys=devices-controller
level=info msg="Start hook executed" duration=3.796855ms function="*linux.devicesController.Start" subsys=hive
level=debug msg="Executing start hook" function="tables.(*nodeAddressController).register.func1 (pkg/datapath/tables/node_address.go:210)" subsys=hive
level=info msg="Node addresses updated" device=ens16 node-addresses="$PUBLIC_IP (ens16)" subsys=node-address
level=info msg="Node addresses updated" device=ens17 node-addresses="10.244.0.2 (ens17)" subsys=node-address
level=info msg="Node addresses updated" device=cilium_host node-addresses="10.244.192.198 (cilium_host), fe80::5898:1ff:fe62:70a4 (cilium_host)" subsys=node-address
level=info msg="Start hook executed" duration="790.579µs" function="tables.(*nodeAddressController).register.func1 (pkg/datapath/tables/node_address.go:210)" subsys=hive
level=debug msg="Executing start hook" function="*bandwidth.manager.Start" subsys=hive
level=debug msg="Starting one-shot job" func="tables.(*nodeAddressController).run" name=node-address-update subsys=jobs
level=info msg="Start hook executed" duration="342.038µs" function="*bandwidth.manager.Start" subsys=hive
level=debug msg="Executing start hook" function="modules.(*Manager).Start" subsys=hive
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 8.278µs ago, Message: }" subsys=hive
level=info msg="Start hook executed" duration=2.080973ms function="modules.(*Manager).Start" subsys=hive
level=debug msg="Executing start hook" function="*iptables.Manager.Start" subsys=hive
level=info msg="Start hook executed" duration=44.076315ms function="*iptables.Manager.Start" subsys=hive
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="15.996µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="endpointmanager.newDefaultEndpointManager.func1 (pkg/endpointmanager/cell.go:216)" subsys=hive
level=debug msg="Starting new controller" name=endpoint-gc subsys=controller uuid=fefef9f0-efc2-4a07-b491-280fc70a5a83
level=info msg="Start hook executed" duration="182.656µs" function="endpointmanager.newDefaultEndpointManager.func1 (pkg/endpointmanager/cell.go:216)" subsys=hive
level=debug msg="Controller func execution time: 37.085µs" name=endpoint-gc subsys=controller uuid=fefef9f0-efc2-4a07-b491-280fc70a5a83
level=debug msg="Executing start hook" function="cmd.newPolicyTrifecta.func1 (cmd/policy.go:130)" subsys=hive
level=debug msg="creating new EventQueue" name=repository-change-queue numBufferedEvents=100 subsys=eventqueue
level=debug msg="creating new EventQueue" name=repository-reaction-queue numBufferedEvents=100 subsys=eventqueue
level=info msg="Start hook executed" duration="82.797µs" function="cmd.newPolicyTrifecta.func1 (cmd/policy.go:130)" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2.CiliumEgressGatewayPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="7.036µs" function="*resource.resource[*cilium.io/v2.CiliumEgressGatewayPolicy].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 4.984µs ago, Message: }" subsys=hive
level=info msg="Start hook executed" duration="2.564µs" function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*types.CiliumEndpoint].Start" subsys=hive
level=info msg="Start hook executed" duration="16.847µs" function="*resource.resource[*types.CiliumEndpoint].Start" subsys=hive
level=debug msg="Executing start hook" function="datapath.newDatapath.func1 (pkg/datapath/cells.go:170)" subsys=hive
level=info msg="Restored 6 node IDs from the BPF map" subsys=linux-datapath
level=info msg="Start hook executed" duration="450.466µs" function="datapath.newDatapath.func1 (pkg/datapath/cells.go:170)" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*v1.Service].Start" subsys=hive
level=info msg="Start hook executed" duration="4.245µs" function="*resource.resource[*v1.Service].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*k8s.Endpoints].Start" subsys=hive
level=info msg="Start hook executed" duration="8.843µs" function="*resource.resource[*k8s.Endpoints].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*v1.Pod].Start" subsys=hive
level=info msg="Start hook executed" duration="2.229µs" function="*resource.resource[*v1.Pod].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*v1.Namespace].Start" subsys=hive
level=info msg="Start hook executed" duration="2.254µs" function="*resource.resource[*v1.Namespace].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*v1.NetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="32.379µs" function="*resource.resource[*v1.NetworkPolicy].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2.CiliumNetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="3.424µs" function="*resource.resource[*cilium.io/v2.CiliumNetworkPolicy].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2.CiliumClusterwideNetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="4.016µs" function="*resource.resource[*cilium.io/v2.CiliumClusterwideNetworkPolicy].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2alpha1.CiliumCIDRGroup].Start" subsys=hive
level=info msg="Start hook executed" duration="2.195µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumCIDRGroup].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2alpha1.CiliumEndpointSlice].Start" subsys=hive
level=info msg="Start hook executed" duration="2.506µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumEndpointSlice].Start" subsys=hive
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="3.929µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="*manager.manager.Start" subsys=hive
level=debug msg="Performing regular background work" subsys=nodemanager syncInterval=1m0s
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 3.67µs ago, Message: }" subsys=hive
level=info msg="Start hook executed" duration="307.142µs" function="*manager.manager.Start" subsys=hive
level=debug msg="Executing start hook" function="*cni.cniConfigManager.Start" subsys=hive
level=debug msg="Starting new controller" name=write-cni-file subsys=controller uuid=54f9c08b-9c36-4181-b682-cc0b44dbdc98
level=info msg="Start hook executed" duration="148.888µs" function="*cni.cniConfigManager.Start" subsys=hive
level=debug msg="Executing start hook" function="k8s.newServiceCache.func1 (pkg/k8s/service_cache.go:144)" subsys=hive
level=info msg="Start hook executed" duration=991ns function="k8s.newServiceCache.func1 (pkg/k8s/service_cache.go:144)" subsys=hive
level=debug msg="Executing start hook" function="agent.newMonitorAgent.func1 (pkg/monitor/agent/cell.go:61)" subsys=hive
level=debug msg="Group not found" error="group: unknown group cilium" file-path=/var/run/cilium/monitor1_2.sock group=cilium subsys=api
level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
level=info msg="Start hook executed" duration="308.503µs" function="agent.newMonitorAgent.func1 (pkg/monitor/agent/cell.go:61)" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2alpha1.CiliumL2AnnouncementPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="12.33µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumL2AnnouncementPolicy].Start" subsys=hive
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="27.764µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="6.302µs" function="*job.group.Start" subsys=hive
level=debug msg="Executing start hook" function="envoy.newEnvoyAccessLogServer.func1 (pkg/envoy/cell.go:107)" subsys=hive
level=info msg="Start hook executed" duration="72.611µs" function="envoy.newEnvoyAccessLogServer.func1 (pkg/envoy/cell.go:107)" subsys=hive
level=debug msg="Executing start hook" function="envoy.newArtifactCopier.func1 (pkg/envoy/cell.go:178)" subsys=hive
level=debug msg="Envoy: No artifacts to copy to envoy - source path doesn't exist" source-path=/envoy-artifacts subsys=envoy-manager
level=info msg="Start hook executed" duration="35.266µs" function="envoy.newArtifactCopier.func1 (pkg/envoy/cell.go:178)" subsys=hive
level=debug msg="Executing start hook" function="envoy.newEnvoyXDSServer.func1 (pkg/envoy/cell.go:65)" subsys=hive
level=info msg="Start hook executed" duration="245.051µs" function="envoy.newEnvoyXDSServer.func1 (pkg/envoy/cell.go:65)" subsys=hive
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="11.587µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="signal.provideSignalManager.func1 (pkg/signal/cell.go:25)" subsys=hive
level=info msg="Start hook executed" duration="255.999µs" function="signal.provideSignalManager.func1 (pkg/signal/cell.go:25)" subsys=hive
level=debug msg="Executing start hook" function="auth.registerAuthManager.func1 (pkg/auth/cell.go:112)" subsys=hive
level=debug msg="Starting cache restore" subsys=auth
level=info msg="Generating CNI configuration file with mode none" subsys=cni-config
level=debug msg="Existing CNI configuration file /host/etc/cni/net.d/05-cilium.conflist unchanged" subsys=cni-config
level=debug msg="Controller func execution time: 357.901µs" name=write-cni-file subsys=controller uuid=54f9c08b-9c36-4181-b682-cc0b44dbdc98
level=debug msg="Controller run succeeded; waiting for next controller update or stop" name=write-cni-file subsys=controller uuid=54f9c08b-9c36-4181-b682-cc0b44dbdc98
level=info msg="Envoy: Starting access log server listening on /var/run/cilium/envoy/sockets/access_log.sock" subsys=envoy-manager
level=debug msg="Starting one-shot job" func="l2announcer.(*L2Announcer).leaseGC" name="l2-announcer lease-gc" subsys=l2-announcer
level=info msg="Datapath signal listener running" subsys=signal
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 3.519µs ago, Message: }" subsys=hive
level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/envoy/sockets/xds.sock" subsys=envoy-manager
level=debug msg="Restored entries" cached_entries=0 subsys=auth
level=info msg="Start hook executed" duration=6.818543ms function="auth.registerAuthManager.func1 (pkg/auth/cell.go:112)" subsys=hive
level=debug msg="Executing start hook" function="auth.registerGCJobs.func1 (pkg/auth/cell.go:162)" subsys=hive
level=debug msg="Nodes synced" subsys=auth
level=info msg="Start hook executed" duration="16.867µs" function="auth.registerGCJobs.func1 (pkg/auth/cell.go:162)" subsys=hive
level=debug msg="Executing start hook" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="24.174µs" function="*job.group.Start" subsys=hive
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="2.774µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="bigtcp.newBIGTCP.func1 (pkg/datapath/linux/bigtcp/bigtcp.go:240)" subsys=hive
level=debug msg="Observer job started" func="auth.(*AuthManager).handleAuthRequest" name="auth request-authentication" subsys=auth
level=info msg="Start hook executed" duration="198.364µs" function="bigtcp.newBIGTCP.func1 (pkg/datapath/linux/bigtcp/bigtcp.go:240)" subsys=hive
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="3.382µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="*ipsec.keyCustodian.Start" subsys=hive
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 5.139µs ago, Message: }" subsys=hive
level=debug msg="Observer job started" func="auth.(*authMapGarbageCollector).handleIdentityChange" name="auth gc-identity-events" subsys=auth
level=info msg="Start hook executed" duration="204.647µs" function="*ipsec.keyCustodian.Start" subsys=hive
level=debug msg="Executing start hook" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="1.581µs" function="*job.group.Start" subsys=hive
level=debug msg="Executing start hook" function="mtu.newForCell.func1 (pkg/mtu/cell.go:40)" subsys=hive
level=debug msg="Starting timer job" func="auth.(*authMapGarbageCollector).cleanup" name="auth gc-cleanup" subsys=auth
level=info msg="Inheriting MTU from external network interface" device=ens17 ipAddr=10.244.0.2 mtu=1500 subsys=mtu
level=info msg="Start hook executed" duration="508.904µs" function="mtu.newForCell.func1 (pkg/mtu/cell.go:40)" subsys=hive
level=debug msg="Executing start hook" function="cmd.newDaemonPromise.func1 (cmd/daemon_main.go:1685)" subsys=hive
level=debug msg="enabling events buffer" file-path= name=cilium_lb4_services_v2 size=128 subsys=bpf ttl=0s
level=debug msg="enabling events buffer" file-path= name=cilium_lb4_backends_v2 size=128 subsys=bpf ttl=0s
level=debug msg="enabling events buffer" file-path= name=cilium_lb4_backends_v3 size=128 subsys=bpf ttl=0s
level=debug msg="enabling events buffer" file-path= name=cilium_lb4_reverse_nat size=128 subsys=bpf ttl=0s
level=debug msg="enabling events buffer" file-path= name=cilium_lb_affinity_match size=128 subsys=bpf ttl=0s
level=debug msg="enabling events buffer" file-path= name=cilium_lb4_source_range size=128 subsys=bpf ttl=0s
level=debug msg="creating new EventQueue" name=config-modify-queue numBufferedEvents=10 subsys=eventqueue
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ipcache subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ipcache subsys=bpf
level=debug msg="Withholding numeric identities for later restoration" identity="[16777217]" subsys=identity-cache
level=debug msg="Starting new controller" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="enabling events buffer" file-path= name=cilium_lxc size=128 subsys=bpf ttl=0s
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_lxc subsys=bpf
level=debug msg="Controller func execution time: 43.982µs" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="Controller run failed" consecutiveErrors=1 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="Controller func execution time: 19.32µs" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="Controller run failed" consecutiveErrors=2 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=info msg="Removed map pin at /sys/fs/bpf/tc/globals/cilium_ipcache, recreating and re-pinning map cilium_ipcache" file-path=/sys/fs/bpf/tc/globals/cilium_ipcache name=cilium_ipcache subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ipcache subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_metrics subsys=ebpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_lb4_services_v2 subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_lb4_backends_v3 subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_lb4_reverse_nat subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_call_policy subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_call_policy subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ct4_global subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ct4_global subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ct_any4_global subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ct_any4_global subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ipv4_frag_datagrams subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ipv4_frag_datagrams subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_lb4_source_range subsys=bpf
level=debug msg="Restoring service" serviceID=2 serviceIP="10.244.64.10:53" subsys=service
level=debug msg="Restoring service" l3n4Addr="{AddrCluster:10.244.64.10 L4Addr:{Protocol:NONE Port:53} Scope:0}" subsys=service
level=debug msg="Restoring service" serviceID=1 serviceIP="10.244.64.1:443" subsys=service
level=debug msg="Restoring service" l3n4Addr="{AddrCluster:10.244.64.1 L4Addr:{Protocol:NONE Port:443} Scope:0}" subsys=service
level=debug msg="Restoring service" serviceID=3 serviceIP="10.244.64.10:9153" subsys=service
level=debug msg="Restoring service" l3n4Addr="{AddrCluster:10.244.64.10 L4Addr:{Protocol:NONE Port:9153} Scope:0}" subsys=service
level=info msg="Restored services from maps" failedServices=0 restoredServices=3 subsys=service
level=debug msg="Restoring backend" backendID=12 backendPreferred=false backendState=0 l3n4Addr="10.244.193.28:53" subsys=service
level=debug msg="Restoring backend" backendID=13 backendPreferred=false backendState=0 l3n4Addr="10.244.193.28:9153" subsys=service
level=debug msg="Restoring backend" backendID=11 backendPreferred=false backendState=0 l3n4Addr="10.244.193.150:9153" subsys=service
level=debug msg="Restoring backend" backendID=10 backendPreferred=false backendState=0 l3n4Addr="10.244.193.150:53" subsys=service
level=debug msg="Restoring backend" backendID=1 backendPreferred=false backendState=0 l3n4Addr="10.244.0.1:6443" subsys=service
level=info msg="Restored backends from maps" failedBackends=0 restoredBackends=5 skippedBackends=0 subsys=service
level=info msg="Reading old endpoints..." subsys=daemon
level=debug msg="Found endpoint C header file" endpointID=1709 file-path=/var/run/cilium/state/1709/ep_config.h subsys=endpoint
level=debug msg="Endpoint restoring" ciliumEndpointName=/ code=OK containerID= containerInterface= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=1709 endpointState=restoring identity=4 ipv4=10.244.192.247 ipv6= k8sPodName=/ policyRevision=0 subsys=endpoint type=0
level=debug msg="Found endpoint C header file" endpointID=607 file-path=/var/run/cilium/state/607/ep_config.h subsys=endpoint
level=debug msg="Endpoint restoring" ciliumEndpointName=/ code=OK containerID= containerInterface= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=607 endpointState=restoring identity=1 ipv4= ipv6= k8sPodName=/ policyRevision=0 subsys=endpoint type=0
level=debug msg="Starting new controller" name=dns-garbage-collector-job subsys=controller uuid=94597cc6-fe08-4335-a4c3-c8a57c7e9939
level=debug msg="Running 'iptables -t mangle -n -L CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Controller func execution time: 18.043µs" name=dns-garbage-collector-job subsys=controller uuid=94597cc6-fe08-4335-a4c3-c8a57c7e9939
level=debug msg="Controller func execution time: 10.095µs" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="Controller run failed" consecutiveErrors=3 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="DNS Proxy bound to addresses" addresses=2 port=46841 subsys=fqdn/dnsproxy
level=info msg="Reusing previous DNS proxy port: 46841" subsys=daemon
level=debug msg="Restored rules for endpoint 1709: map[]" subsys=fqdn/dnsproxy
level=debug msg="Restored rules for endpoint 607: map[]" subsys=fqdn/dnsproxy
level=debug msg="Trying to start the tcp4 DNS proxy on 127.0.0.1:46841" subsys=fqdn/dnsproxy
level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumendpoints.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumnetworkpolicies.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliuml2announcementpolicies.cilium.io" subsys=k8s
level=debug msg="Trying to start the udp4 DNS proxy on 127.0.0.1:46841" subsys=fqdn/dnsproxy
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumpodippools.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumloadbalancerippools.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumclusterwidenetworkpolicies.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumnodes.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumidentities.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumcidrgroups.cilium.io" subsys=k8s
level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
level=info msg="Creating or updating CiliumNode resource" node=node-pool0-0 subsys=nodediscovery
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumclusterwidenetworkpolicies.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumidentities.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumcidrgroups.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumloadbalancerippools.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumpodippools.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumnetworkpolicies.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliuml2announcementpolicies.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumnodes.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumendpoints.cilium.io" subsys=k8s
level=info msg="Retrieved node information from cilium node" nodeName=node-pool0-0 subsys=daemon
level=info msg="Received own node information from API server" ipAddr.ipv4=10.244.0.2 ipAddr.ipv6="<nil>" k8sNodeIP=10.244.0.2 labels="map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:917f869b-77a5-40f7-aeb5-26ec52465361 beta.kubernetes.io/os:linux io.cilium.migration/cilium-default:true kubernetes.io/arch:amd64 kubernetes.io/hostname:node-pool0-0 kubernetes.io/os:linux node.kubernetes.io/instance-type:917f869b-77a5-40f7-aeb5-26ec52465361]" nodeName=node-pool0-0 subsys=daemon v4Prefix=10.244.192.0/24 v6Prefix="<nil>"
level=info msg="Restored router IPs from node information" ipv4=10.244.192.198 ipv6="<nil>" subsys=daemon
level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
level=info msg="Direct routing device detected" direct-routing-device=ens17 subsys=linux-datapath
level=info msg="Enabling k8s event listener" subsys=k8s-watcher
level=debug msg="waiting for cache to synchronize" kubernetesResource="core/v1::Service" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="core/v1::Pods" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource=EndpointSliceOrEndpoint subsys=k8s
level=info msg="Using discoveryv1.EndpointSlice" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="networking.k8s.io/v1::NetworkPolicy" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="core/v1::Namespace" subsys=k8s
level=debug msg="Skip pod event using host networking" hostIP=10.244.0.2 k8sNamespace=kube-system k8sPodName=cilium-d69wc podIP=10.244.0.2 podIPs="[{10.244.0.2}]" subsys=k8s-watcher
level=debug msg="Skip pod event using host networking" hostIP=10.244.0.2 k8sNamespace=kube-system k8sPodName=csi-node-th77h podIP=10.244.0.2 podIPs="[{10.244.0.2}]" subsys=k8s-watcher
level=debug msg="Skip pod event using host networking" hostIP=10.244.0.2 k8sNamespace=kube-system k8sPodName=kube-proxy-899xk podIP=10.244.0.2 podIPs="[{10.244.0.2}]" subsys=k8s-watcher
level=debug msg="Processing 1 endpoints for EndpointSlice kubernetes" subsys=k8s
level=debug msg="EndpointSlice kubernetes has 1 backends" subsys=k8s
level=debug msg="Processing 2 endpoints for EndpointSlice kube-dns-5ctlp" subsys=k8s
level=debug msg="EndpointSlice kube-dns-5ctlp has 2 backends" subsys=k8s
level=debug msg="Controller func execution time: 6.286µs" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="Controller run failed" consecutiveErrors=4 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="Kubernetes service definition changed" action=service-updated endpoints="10.244.0.1:6443/TCP" k8sNamespace=default k8sSvcName=kubernetes old-endpoints= old-service=nil service="frontends:[10.244.64.1]/ports=[https]/selector=map[]" subsys=k8s-watcher
level=debug msg="Upserting service" backends="[10.244.0.1:6443]" l7LBFrontendPorts="[]" l7LBProxyPort=0 loadBalancerSourceRanges="[]" serviceIP="{10.244.64.1 {TCP 443} 0}" serviceName=kubernetes serviceNamespace=default sessionAffinity=false sessionAffinityTimeout=0 subsys=service svcExtTrafficPolicy=Cluster svcHealthCheckNodePort=0 svcIntTrafficPolicy=Cluster svcType=ClusterIP
level=debug msg="Acquired service ID" backends="[10.244.0.1:6443]" l7LBFrontendPorts="[]" l7LBProxyPort=0 loadBalancerSourceRanges="[]" serviceID=1 serviceIP="{10.244.64.1 {TCP 443} 0}" serviceName=kubernetes serviceNamespace=default sessionAffinity=false sessionAffinityTimeout=0 subsys=service svcExtTrafficPolicy=Cluster svcHealthCheckNodePort=0 svcIntTrafficPolicy=Cluster svcType=ClusterIP
level=debug msg="Upserted service entry" backendSlot=1 subsys=map-lb svcKey="10.244.64.1:47873" svcVal="1 0 (256) [0x0 0x0]"
level=debug msg="Upserted service entry" backendSlot=0 subsys=map-lb svcKey="10.244.64.1:47873" svcVal="0 1 (256) [0x0 0x0]"
level=debug msg="Kubernetes service definition changed" action=service-updated endpoints="10.244.193.150:53/TCP,10.244.193.150:53/UDP,10.244.193.150:9153/TCP,10.244.193.28:53/TCP,10.244.193.28:53/UDP,10.244.193.28:9153/TCP" k8sNamespace=kube-system k8sSvcName=kube-dns old-endpoints= old-service=nil service="frontends:[10.244.64.10]/ports=[dns dns-tcp metrics]/selector=map[k8s-app:kube-dns]" subsys=k8s-watcher
level=debug msg="Upserting service" backends="[10.244.193.150:53 10.244.193.28:53]" l7LBFrontendPorts="[]" l7LBProxyPort=0 loadBalancerSourceRanges="[]" serviceIP="{10.244.64.10 {UDP 53} 0}" serviceName=kube-dns serviceNamespace=kube-system sessionAffinity=false sessionAffinityTimeout=0 subsys=service svcExtTrafficPolicy=Cluster svcHealthCheckNodePort=0 svcIntTrafficPolicy=Cluster svcType=ClusterIP
level=debug msg="Acquired service ID" backends="[10.244.193.150:53 10.244.193.28:53]" l7LBFrontendPorts="[]" l7LBProxyPort=0 loadBalancerSourceRanges="[]" serviceID=2 serviceIP="{10.244.64.10 {UDP 53} 0}" serviceName=kube-dns serviceNamespace=kube-system sessionAffinity=false sessionAffinityTimeout=0 subsys=service svcExtTrafficPolicy=Cluster svcHealthCheckNodePort=0 svcIntTrafficPolicy=Cluster svcType=ClusterIP
level=debug msg="Upserted service entry" backendSlot=1 subsys=map-lb svcKey="10.244.64.10:13568" svcVal="10 0 (512) [0x0 0x0]"
level=debug msg="Upserted service entry" backendSlot=2 subsys=map-lb svcKey="10.244.64.10:13568" svcVal="12 0 (512) [0x0 0x0]"
level=debug msg="Upserted service entry" backendSlot=0 subsys=map-lb svcKey="10.244.64.10:13568" svcVal="0 2 (512) [0x0 0x0]"
level=debug msg="Upserting service" backends="[10.244.193.150:9153 10.244.193.28:9153]" l7LBFrontendPorts="[]" l7LBProxyPort=0 loadBalancerSourceRanges="[]" serviceIP="{10.244.64.10 {TCP 9153} 0}" serviceName=kube-dns serviceNamespace=kube-system sessionAffinity=false sessionAffinityTimeout=0 subsys=service svcExtTrafficPolicy=Cluster svcHealthCheckNodePort=0 svcIntTrafficPolicy=Cluster svcType=ClusterIP
level=debug msg="Acquired service ID" backends="[10.244.193.150:9153 10.244.193.28:9153]" l7LBFrontendPorts="[]" l7LBProxyPort=0 loadBalancerSourceRanges="[]" serviceID=3 serviceIP="{10.244.64.10 {TCP 9153} 0}" serviceName=kube-dns serviceNamespace=kube-system sessionAffinity=false sessionAffinityTimeout=0 subsys=service svcExtTrafficPolicy=Cluster svcHealthCheckNodePort=0 svcIntTrafficPolicy=Cluster svcType=ClusterIP
level=debug msg="Upserted service entry" backendSlot=1 subsys=map-lb svcKey="10.244.64.10:49443" svcVal="11 0 (768) [0x0 0x0]"
level=debug msg="Upserted service entry" backendSlot=2 subsys=map-lb svcKey="10.244.64.10:49443" svcVal="13 0 (768) [0x0 0x0]"
level=debug msg="Upserted service entry" backendSlot=0 subsys=map-lb svcKey="10.244.64.10:49443" svcVal="0 2 (768) [0x0 0x0]"
level=debug msg="cache synced" kubernetesResource="core/v1::Namespace" subsys=k8s
level=debug msg="cache synced" kubernetesResource="core/v1::Service" subsys=k8s
level=debug msg="cache synced" kubernetesResource="networking.k8s.io/v1::NetworkPolicy" subsys=k8s
level=debug msg="cache synced" kubernetesResource="core/v1::Pods" subsys=k8s
level=debug msg="cache synced" kubernetesResource=EndpointSliceOrEndpoint subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="cilium/v2::CiliumNetworkPolicy" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="cilium/v2::CiliumClusterwideNetworkPolicy" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="cilium/v2alpha1::CiliumCIDRGroup" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="cilium/v2::CiliumEndpoint" subsys=k8s
level=info msg="Removing stale endpoint interfaces" subsys=daemon
level=debug msg="waiting for cache to synchronize" kubernetesResource="cilium/v2::CiliumNode" subsys=k8s
level=info msg="Skipping kvstore configuration" subsys=daemon
level=info msg="Waiting until local node addressing before starting watchers depending on it" subsys=k8s-watcher
level=info msg="Restored router address from node_config" file=/var/run/cilium/state/globals/node_config.h ipv4=10.244.192.198 ipv6="<nil>" subsys=node
level=info msg="Initializing node addressing" subsys=daemon
level=info msg="Initializing cluster-pool IPAM" subsys=ipam v4Prefix=10.244.192.0/24 v6Prefix="<nil>"
level=info msg="Restoring endpoints..." subsys=daemon
level=debug msg="Removing old health endpoint state directory" endpointID=1709 file-path=/var/run/cilium/state/1709 subsys=daemon
level=info msg="Node updated" clusterName=default nodeName=node-pool0-1 subsys=nodemanager
level=debug msg="Restoring endpoint" ciliumEndpointName=/ endpointID=607 subsys=daemon
level=debug msg="Restoring endpoint from previous cilium instance" ciliumEndpointName=/ code=OK containerID= containerInterface= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=607 endpointState=restoring identity=1 ipv4= ipv6= k8sPodName=/ policyRevision=0 subsys=endpoint type=0
level=info msg="Endpoints restored" failed=0 restored=1 subsys=daemon
level=debug msg="Removed outdated endpoint 1709 from endpoint map" subsys=daemon
level=debug msg="Allocated specific IP" ip=10.244.192.198 owner=router pool=default subsys=ipam
level=debug msg="Received node update event from custom-resource" node="{\"Name\":\"node-pool0-1\",\"Cluster\":\"default\",\"IPAddresses\":[{\"Type\":\"InternalIP\",\"IP\":\"10.244.0.3\"},{\"Type\":\"ExternalIP\",\"IP\":\"$NODE_PUBLIC_IP\"},{\"Type\":\"CiliumInternalIP\",\"IP\":\"10.244.193.34\"}],\"IPv4AllocCIDR\":{\"IP\":\"10.244.193.0\",\"Mask\":\"////AA==\"},\"IPv4SecondaryAllocCIDRs\":null,\"IPv6AllocCIDR\":null,\"IPv6SecondaryAllocCIDRs\":null,\"IPv4HealthIP\":\"10.244.193.24\",\"IPv6HealthIP\":\"\",\"IPv4IngressIP\":\"\",\"IPv6IngressIP\":\"\",\"ClusterID\":0,\"Source\":\"custom-resource\",\"EncryptionKey\":0,\"Labels\":{\"beta.kubernetes.io/arch\":\"amd64\",\"beta.kubernetes.io/instance-type\":\"4796a7c9-5475-4799-82b7-ab399f6cddad\",\"beta.kubernetes.io/os\":\"linux\",\"io.cilium.migration/cilium-default\":\"true\",\"kubernetes.io/arch\":\"amd64\",\"kubernetes.io/hostname\":\"node-pool0-1\",\"kubernetes.io/os\":\"linux\",\"node.kubernetes.io/instance-type\":\"4796a7c9-5475-4799-82b7-ab399f6cddad\"},\"Annotations\":null,\"NodeIdentity\":0,\"WireguardPubKey\":\"\"}" subsys=nodemanager
level=info msg="Addressing information:" subsys=daemon
level=info msg="  Cluster-Name: default" subsys=daemon
level=info msg="  Cluster-ID: 0" subsys=daemon
level=info msg="  Local node-name: node-pool0-0" subsys=daemon
level=info msg="  Node-IPv6: <nil>" subsys=daemon
level=info msg="  External-Node IPv4: 10.244.0.2" subsys=daemon
level=info msg="  Internal-Node IPv4: 10.244.192.198" subsys=daemon
level=info msg="  IPv4 allocation prefix: 10.244.192.0/24" subsys=daemon
level=info msg="  IPv4 native routing prefix: 10.244.0.0/16" subsys=daemon
level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
level=info msg="  Local IPv4 addresses:" subsys=daemon
level=info msg="  - 10.244.0.2" subsys=daemon
level=info msg="  - 10.244.192.198" subsys=daemon
level=info msg="  - $PUBLIC_IP" subsys=daemon
level=debug msg="Allocated random IP" ip=10.244.192.217 owner=health pool=default subsys=ipam
level=debug msg="IPv4 health endpoint address: 10.244.192.217" subsys=daemon
level=debug msg="Running 'ipset create cilium_node_set_v4 iphash family inet -exist' command" subsys=iptables
level=debug msg="Upserting IP into ipcache layer" identity="{18005 custom-resource [] false false}" ipAddr=10.244.193.150 k8sNamespace=kube-system k8sPodName=coredns-f9955cc79-47469 key=0 namedPorts="map[dns:{53 17} dns-tcp:{53 6} metrics:{9153 6}]" subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{18005 custom-resource [] false false}" ipAddr="{10.244.193.150 ffffffff}" modification=Upsert subsys=datapath-ipcache
level=debug msg="Upserting IP into ipcache layer" identity="{18005 custom-resource [] false false}" ipAddr=10.244.193.28 k8sNamespace=kube-system k8sPodName=coredns-f9955cc79-nnkzr key=0 namedPorts="map[dns:{53 17} dns-tcp:{53 6} metrics:{9153 6}]" subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{18005 custom-resource [] false false}" ipAddr="{10.244.193.28 ffffffff}" modification=Upsert subsys=datapath-ipcache
level=info msg="Node updated" clusterName=default nodeName=node-pool0-0 subsys=nodemanager
level=debug msg="Received node update event from local" node="{\"Name\":\"node-pool0-0\",\"Cluster\":\"default\",\"IPAddresses\":[{\"Type\":\"InternalIP\",\"IP\":\"10.244.0.2\"},{\"Type\":\"ExternalIP\",\"IP\":\"$PUBLIC_IP\"},{\"Type\":\"CiliumInternalIP\",\"IP\":\"10.244.192.198\"}],\"IPv4AllocCIDR\":{\"IP\":\"10.244.192.0\",\"Mask\":\"////AA==\"},\"IPv4SecondaryAllocCIDRs\":null,\"IPv6AllocCIDR\":null,\"IPv6SecondaryAllocCIDRs\":null,\"IPv4HealthIP\":\"10.244.192.217\",\"IPv6HealthIP\":\"\",\"IPv4IngressIP\":\"\",\"IPv6IngressIP\":\"\",\"ClusterID\":0,\"Source\":\"local\",\"EncryptionKey\":0,\"Labels\":{\"beta.kubernetes.io/arch\":\"amd64\",\"beta.kubernetes.io/instance-type\":\"917f869b-77a5-40f7-aeb5-26ec52465361\",\"beta.kubernetes.io/os\":\"linux\",\"io.cilium.migration/cilium-default\":\"true\",\"kubernetes.io/arch\":\"amd64\",\"kubernetes.io/hostname\":\"node-pool0-0\",\"kubernetes.io/os\":\"linux\",\"node.kubernetes.io/instance-type\":\"917f869b-77a5-40f7-aeb5-26ec52465361\"},\"Annotations\":{},\"NodeIdentity\":1,\"WireguardPubKey\":\"\"}" subsys=nodemanager
level=debug msg="Running 'ipset create cilium_node_set_v4 iphash family inet -exist' command" subsys=iptables
level=info msg="Adding local node to cluster" node=node-pool0-0 subsys=nodediscovery
level=debug msg="Running 'ipset add cilium_node_set_v4 10.244.0.3 -exist' command" subsys=iptables
level=debug msg="Running 'ipset add cilium_node_set_v4 10.244.0.2 -exist' command" subsys=iptables
level=debug msg="Controller func execution time: 11.675µs" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="Controller run failed" consecutiveErrors=5 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="Node discovered - mark to keep" cluster=default name=node-pool0-1 node_ids="[28561]" subsys=auth
level=debug msg="Controller func execution time: 2.556µs" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="Controller run failed" consecutiveErrors=6 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="Node discovered - mark to keep" cluster=default name=node-pool0-0 node_ids="[0]" subsys=auth
level=debug msg="Controller func execution time: 21.297µs" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="Controller run failed" consecutiveErrors=7 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="Controller func execution time: 43.987µs" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=debug msg="Controller run failed" consecutiveErrors=8 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=5a08faa4-70a5-44fb-969f-3f2985d681ac
level=info msg="Creating or updating CiliumNode resource" node=node-pool0-0 subsys=nodediscovery
level=info msg="Waiting until all pre-existing resources have been received" subsys=k8s-watcher
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="core/v1::Service" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="networking.k8s.io/v1::NetworkPolicy" subsys=k8s
level=debug msg="resource \"networking.k8s.io/v1::NetworkPolicy\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="resource \"core/v1::Service\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource=EndpointSliceOrEndpoint subsys=k8s
level=debug msg="resource \"EndpointSliceOrEndpoint\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="core/v1::Pods" subsys=k8s
level=debug msg="resource \"core/v1::Pods\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="core/v1::Namespace" subsys=k8s
level=debug msg="resource \"core/v1::Namespace\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="Annotate k8s node is disabled." subsys=daemon
level=info msg="Initializing identity allocator" subsys=identity-cache
level=info msg="Allocating identities between range" cluster-id=0 max=65535 min=256 subsys=identity-cache
level=debug msg="Identity allocation backed by CRD" subsys=identity-cache
level=debug msg="Starting new controller" name=template-dir-watcher subsys=controller uuid=dc7a93d6-face-4d9d-98b8-a64153ad191e
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.forwarding sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.accept_local sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.send_redirects sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.forwarding sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.accept_local sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.send_redirects sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.core.bpf_jit_enable sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.fib_multipath_use_neigh sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.timer_migration sysParamValue=0
level=debug msg="writing configuration" file-path=netdev_config.h subsys=datapath-loader
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_connect', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupInet4Connect" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_sendmsg', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupUDP4Sendmsg" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_recvmsg', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupUDP4Recvmsg" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_getpeername', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CgroupInet4GetPeername" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_post_bind', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupInet4PostBind" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_pre_bind', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupInet4Bind" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_connect', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupInet6Connect" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_sendmsg', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupUDP6Sendmsg" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_recvmsg', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupUDP6Recvmsg" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_getpeername', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CgroupInet6GetPeername" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_post_bind', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupInet6PostBind" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_pre_bind', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupInet6Bind" subsys=socketlb
level=debug msg="Launching compiler" args="[-I/var/run/cilium/state/globals -I/var/run/cilium/state -I/var/lib/cilium/bpf -I/var/lib/cilium/bpf/include -g -O2 --target=bpf -std=gnu89 -nostdinc -D__NR_CPUS__=4 -Wall -Wextra -Werror -Wshadow -Wno-address-of-packed-member -Wno-unknown-warning-option -Wno-gnu-variable-sized-type-not-at-end -Wdeclaration-after-statement -Wimplicit-int-conversion -Wenum-conversion -mcpu=v3 -c /var/lib/cilium/bpf/bpf_alignchecker.c -o -]" subsys=datapath-loader target=clang
level=debug msg="UpdateIdentities: Adding a new identity" identity=18005 labels="[k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns]" subsys=policy
level=debug msg="Regenerating all endpoints" subsys=policy
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=debug msg="cache synced" kubernetesResource="cilium/v2::CiliumNode" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="cilium/v2::CiliumNode" subsys=k8s
level=debug msg="resource \"cilium/v2::CiliumNode\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 6.525µs ago, Message: }" subsys=hive
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 4.889µs ago, Message: }" subsys=hive
level=debug msg="Initial list of identities received" subsys=allocator
level=debug msg="Identity discovered - mark to keep" identity=18005 labels="k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=coredns,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=kube-dns" subsys=auth
level=debug msg="Identities synced" subsys=auth
level=debug msg="cache synced" kubernetesResource="cilium/v2::CiliumNetworkPolicy" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="cilium/v2::CiliumNetworkPolicy" subsys=k8s
level=debug msg="cache synced" kubernetesResource="cilium/v2::CiliumEndpoint" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="cilium/v2::CiliumEndpoint" subsys=k8s
level=debug msg="cache synced" kubernetesResource="cilium/v2::CiliumClusterwideNetworkPolicy" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="cilium/v2::CiliumClusterwideNetworkPolicy" subsys=k8s
level=debug msg="resource \"cilium/v2::CiliumClusterwideNetworkPolicy\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="cache synced" kubernetesResource="cilium/v2alpha1::CiliumCIDRGroup" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="cilium/v2alpha1::CiliumCIDRGroup" subsys=k8s
level=debug msg="resource \"cilium/v2::CiliumEndpoint\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="resource \"cilium/v2alpha1::CiliumCIDRGroup\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="resource \"cilium/v2::CiliumNetworkPolicy\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="Compilation had peak RSS of 117220 bytes" compiler-pid=45 output=/var/run/cilium/state/bpf_alignchecker.o subsys=datapath-loader
level=debug msg="Updating direct route" addedCIDRs="[10.244.193.0/24]" newIP=10.244.0.3 oldIP="<nil>" removedCIDRs="[]" subsys=linux-datapath
level=debug msg="Running 'iptables -t nat -S' command" subsys=iptables
level=debug msg="Running 'ip6tables -t nat -S' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S' command" subsys=iptables
level=debug msg="Running 'ip6tables -t mangle -S' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'ip6tables -t nat -S OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S OLD_CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S OLD_CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S OLD_CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'ip6tables -t mangle -S OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -E CILIUM_INPUT OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -E CILIUM_OUTPUT OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -E CILIUM_OUTPUT_raw OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -E CILIUM_POST_nat OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'ip6tables -t nat -S CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -E CILIUM_OUTPUT_nat OLD_CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -E CILIUM_PRE_nat OLD_CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -E CILIUM_POST_mangle OLD_CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -E CILIUM_PRE_mangle OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 13.883µs ago, Message: }" subsys=hive
level=debug msg="Running 'ip6tables -t mangle -S CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -E CILIUM_PRE_raw OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -E CILIUM_FORWARD OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -N CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -N CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -N CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -N CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -N CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -N CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -N CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -N CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -N CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -N CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -A CILIUM_PRE_raw -m mark --mark 0x00000200/0x00000f00 -m comment --comment cilium: NOTRACK for proxy traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -A CILIUM_INPUT -m mark --mark 0x00000200/0x00000f00 -m comment --comment cilium: ACCEPT for proxy traffic -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0x00000a00/0xfffffeff -m comment --comment cilium: NOTRACK for proxy return traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0x00000a00/0xfffffeff -m comment --comment cilium: NOTRACK for proxy return traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0x00000800/0x00000e00 -m comment --comment cilium: NOTRACK for L7 proxy upstream traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0x00000800/0x00000e00 -m comment --comment cilium: NOTRACK for L7 proxy upstream traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -A CILIUM_OUTPUT -m mark --mark 0x00000a00/0xfffffeff -m comment --comment cilium: ACCEPT for proxy return traffic -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -A CILIUM_OUTPUT -m mark --mark 0x00000800/0x00000e00 -m comment --comment cilium: ACCEPT for l7 proxy upstream traffic -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -A CILIUM_PRE_mangle -m socket --transparent -m comment --comment cilium: any->pod redirect proxied traffic to host proxy -j MARK --set-mark 0x00000200' command" subsys=iptables
level=debug msg="Running 'iptables -A CILIUM_FORWARD -o cilium_host -m comment --comment cilium: any->cluster on cilium_host forward accept -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -A CILIUM_FORWARD -i cilium_host -m comment --comment cilium: cluster->any on cilium_host forward accept (nodeport) -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -A CILIUM_FORWARD -i lxc+ -m comment --comment cilium: cluster->any on lxc+ forward accept -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -A CILIUM_FORWARD -i cilium_net -m comment --comment cilium: cluster->any on cilium_net forward accept (nodeport) -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -A CILIUM_OUTPUT -m mark ! --mark 0x00000e00/0x00000f00 -m mark ! --mark 0x00000d00/0x00000f00 -m mark ! --mark 0x00000a00/0x00000e00 -m mark ! --mark 0x00000800/0x00000e00 -m mark ! --mark 0x00000f00/0x00000f00 -m comment --comment cilium: host->any mark as from host -j MARK --set-xmark 0x00000c00/0x00000f00' command" subsys=iptables
level=debug msg="Running 'ipset create cilium_node_set_v4 iphash family inet -exist' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -o ens16,ens17 -m set --match-set cilium_node_set_v4 dst -m comment --comment exclude traffic to cluster nodes from masquerade -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -s 10.244.192.0/24 -d 8.8.4.4/32 -o ens16 -m comment --comment cilium snat non-cluster via source route -j SNAT --to-source $PUBLIC_IP' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -s 10.244.192.0/24 -d 8.8.8.8/32 -o ens16 -m comment --comment cilium snat non-cluster via source route -j SNAT --to-source $PUBLIC_IP' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -s 10.244.192.0/24 -d 10.244.0.0/19 -o ens17 -m comment --comment cilium snat non-cluster via source route -j SNAT --to-source 10.244.0.2' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -s 10.244.192.0/24 -d  PUBLIC_IP_CIDR -o ens16 -m comment --comment cilium snat non-cluster via source route -j SNAT --to-source $PUBLIC_IP' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -s 10.244.192.0/24 -d $PUBLIC_GATEWAY_IP -o ens16 -m comment --comment cilium snat non-cluster via source route -j SNAT --to-source $PUBLIC_IP' command" subsys=iptablessubsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -s 10.244.192.0/24 ! -d 10.244.0.0/16 -o ens16 -m comment --comment cilium snat non-cluster via source route -j SNAT --to-source $PUBLIC_IP' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -m mark --mark 0x00000a00/0x00000e00 -m comment --comment exclude proxy return traffic from masquerade -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -s 127.0.0.1 -o cilium_host -m comment --comment cilium host->cluster from 127.0.0.1 masquerade -j SNAT --to-source 10.244.192.198' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -m mark --mark 0x00000f00/0x00000f00 -o cilium_host -m conntrack --ctstate DNAT -m comment --comment hairpin traffic that originated from a local pod -j SNAT --to-source 10.244.192.198' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -I INPUT -m comment --comment cilium-feeder: CILIUM_INPUT -j CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -I OUTPUT -m comment --comment cilium-feeder: CILIUM_OUTPUT -j CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -I OUTPUT -m comment --comment cilium-feeder: CILIUM_OUTPUT_raw -j CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -I POSTROUTING -m comment --comment cilium-feeder: CILIUM_POST_nat -j CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -I OUTPUT -m comment --comment cilium-feeder: CILIUM_OUTPUT_nat -j CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -I PREROUTING -m comment --comment cilium-feeder: CILIUM_PRE_nat -j CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -I POSTROUTING -m comment --comment cilium-feeder: CILIUM_POST_mangle -j CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -I PREROUTING -m comment --comment cilium-feeder: CILIUM_PRE_mangle -j CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -I PREROUTING -m comment --comment cilium-feeder: CILIUM_PRE_raw -j CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -I FORWARD -m comment --comment cilium-feeder: CILIUM_FORWARD -j CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -A CILIUM_PRE_mangle -p tcp -m mark --mark 0xf9b60200 -m comment --comment cilium: TPROXY to host cilium-dns-egress proxy -j TPROXY --on-port 46841 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -A CILIUM_PRE_mangle -p udp -m mark --mark 0xf9b60200 -m comment --comment cilium: TPROXY to host cilium-dns-egress proxy -j TPROXY --on-port 46841 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D PREROUTING -m comment --comment cilium-feeder: CILIUM_PRE_nat -j OLD_CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OUTPUT -m comment --comment cilium-feeder: CILIUM_OUTPUT_nat -j OLD_CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D POSTROUTING -m comment --comment cilium-feeder: CILIUM_POST_nat -j OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -o ens+ -m set --match-set cilium_node_set_v4 dst -m comment --comment exclude traffic to cluster nodes from masquerade -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -m mark --mark 0xa00/0xe00 -m comment --comment exclude proxy return traffic from masquerade -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -s 127.0.0.1/32 -o cilium_host -m comment --comment cilium host->cluster from 127.0.0.1 masquerade -j SNAT --to-source 10.244.192.198' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -o cilium_host -m mark --mark 0xf00/0xf00 -m conntrack --ctstate DNAT -m comment --comment hairpin traffic that originated from a local pod -j SNAT --to-source 10.244.192.198' command" subsys=iptables
level=debug msg="Running 'ip6tables -t nat -S' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -D PREROUTING -m comment --comment cilium-feeder: CILIUM_PRE_mangle -j OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -D POSTROUTING -m comment --comment cilium-feeder: CILIUM_POST_mangle -j OLD_CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -D OLD_CILIUM_PRE_mangle -m socket --transparent -m comment --comment cilium: any->pod redirect proxied traffic to host proxy -j MARK --set-xmark 0x200/0xffffffff' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -D OLD_CILIUM_PRE_mangle -p tcp -m mark --mark 0xf9b60200 -m comment --comment cilium: TPROXY to host cilium-dns-egress proxy -j TPROXY --on-port 46841 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -D OLD_CILIUM_PRE_mangle -p udp -m mark --mark 0xf9b60200 -m comment --comment cilium: TPROXY to host cilium-dns-egress proxy -j TPROXY --on-port 46841 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff' command" subsys=iptables
level=debug msg="Running 'ip6tables -t mangle -S' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D PREROUTING -m comment --comment cilium-feeder: CILIUM_PRE_raw -j OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D OUTPUT -m comment --comment cilium-feeder: CILIUM_OUTPUT_raw -j OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D OLD_CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0xa00/0xfffffeff -m comment --comment cilium: NOTRACK for proxy return traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D OLD_CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0xa00/0xfffffeff -m comment --comment cilium: NOTRACK for proxy return traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D OLD_CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0x800/0xe00 -m comment --comment cilium: NOTRACK for L7 proxy upstream traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D OLD_CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0x800/0xe00 -m comment --comment cilium: NOTRACK for L7 proxy upstream traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D OLD_CILIUM_PRE_raw -m mark --mark 0x200/0xf00 -m comment --comment cilium: NOTRACK for proxy traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D INPUT -m comment --comment cilium-feeder: CILIUM_INPUT -j OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D FORWARD -m comment --comment cilium-feeder: CILIUM_FORWARD -j OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OUTPUT -m comment --comment cilium-feeder: CILIUM_OUTPUT -j OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_FORWARD -o cilium_host -m comment --comment cilium: any->cluster on cilium_host forward accept -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_FORWARD -i cilium_host -m comment --comment cilium: cluster->any on cilium_host forward accept (nodeport) -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_FORWARD -i lxc+ -m comment --comment cilium: cluster->any on lxc+ forward accept -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_FORWARD -i cilium_net -m comment --comment cilium: cluster->any on cilium_net forward accept (nodeport) -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_INPUT -m mark --mark 0x200/0xf00 -m comment --comment cilium: ACCEPT for proxy traffic -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_OUTPUT -m mark --mark 0xa00/0xfffffeff -m comment --comment cilium: ACCEPT for proxy return traffic -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_OUTPUT -m mark --mark 0x800/0xe00 -m comment --comment cilium: ACCEPT for l7 proxy upstream traffic -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_OUTPUT -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0xd00/0xf00 -m mark ! --mark 0xa00/0xe00 -m mark ! --mark 0x800/0xe00 -m mark ! --mark 0xf00/0xf00 -m comment --comment cilium: host->any mark as from host -j MARK --set-xmark 0xc00/0xf00' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -F OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -X OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -F OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -X OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -F OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -X OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -F OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -X OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'ip6tables -t nat -S OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S OLD_CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -F OLD_CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -X OLD_CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S OLD_CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -F OLD_CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -X OLD_CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S OLD_CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -F OLD_CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -X OLD_CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -F OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -X OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'ip6tables -t mangle -S OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -F OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -X OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -F OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -X OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'ipset create cilium_node_set_v4 iphash family inet -exist' command" subsys=iptables
level=info msg="Iptables rules installed" subsys=iptables
level=info msg="Adding new proxy port rules for cilium-dns-egress:46841" id=cilium-dns-egress subsys=proxy
level=debug msg="Running 'iptables -t mangle -S' command" subsys=iptables
level=info msg="Iptables proxy rules installed" subsys=iptables
level=debug msg="AckProxyPort: acked proxy port 46841 ({true dns false 46841 1 true 46841 true})" id=cilium-dns-egress subsys=proxy
level=debug msg="Starting new controller" name=sync-host-ips subsys=controller uuid=c7ebf6e7-5d96-4146-828b-45e577883b99
level=debug msg="Controller func execution time: 634.292µs" name=sync-host-ips subsys=controller uuid=c7ebf6e7-5d96-4146-828b-45e577883b99
level=debug msg="Resolving identity" identityLabels="reserved:health" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=health identityLabels="reserved:health" isNew=false subsys=identity-cache
level=debug msg="Resolving identity" identityLabels="reserved:remote-node" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=remote-node identityLabels="reserved:remote-node" isNew=false subsys=identity-cache
level=info msg="Initializing daemon" subsys=daemon
level=info msg="Validating configured node address ranges" subsys=daemon
level=debug msg="Resolving identity" identityLabels="cidr:10.244.0.1/32,reserved:kube-apiserver,reserved:world" subsys=identity-cache
level=info msg="Starting connection tracking garbage collector" subsys=daemon
level=debug msg="Reallocated restored local identity: 16777217" subsys=identity-cache
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ct4_global subsys=bpf
level=debug msg="Resolving identity" identityLabels="reserved:remote-node" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=remote-node identityLabels="reserved:remote-node" isNew=false subsys=identity-cache
level=debug msg="Resolving identity" identityLabels="reserved:world" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=world identityLabels="reserved:world" isNew=false subsys=identity-cache
level=debug msg="Merged labels for reserved:host identity" labels="reserved:host" subsys=ipcache
level=debug msg="Merged labels for reserved:host identity" labels="reserved:host" subsys=ipcache
level=debug msg="Merged labels for reserved:host identity" labels="reserved:host" subsys=ipcache
level=debug msg="Resolving identity" identityLabels="reserved:health" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=health identityLabels="reserved:health" isNew=false subsys=identity-cache
level=debug msg="Resolving identity" identityLabels="reserved:remote-node" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=remote-node identityLabels="reserved:remote-node" isNew=false subsys=identity-cache
level=debug msg="UpdateIdentities: Skipping add of an existing identical identity" identity=health subsys=policy
level=debug msg="UpdateIdentities: Skipping add of an existing identical identity" identity=remote-node subsys=policy
level=debug msg="UpdateIdentities: Adding a new identity" identity=16777217 labels="[cidr:0.0.0.0/0 cidr:0.0.0.0/1 cidr:0.0.0.0/2 cidr:0.0.0.0/3 cidr:0.0.0.0/4 cidr:10.0.0.0/7 cidr:10.0.0.0/8 cidr:10.128.0.0/9 cidr:10.192.0.0/10 cidr:10.224.0.0/11 cidr:10.240.0.0/12 cidr:10.240.0.0/13 cidr:10.244.0.0/14 cidr:10.244.0.0/15 cidr:10.244.0.0/16 cidr:10.244.0.0/17 cidr:10.244.0.0/18 cidr:10.244.0.0/19 cidr:10.244.0.0/20 cidr:10.244.0.0/21 cidr:10.244.0.0/22 cidr:10.244.0.0/23 cidr:10.244.0.0/24 cidr:10.244.0.0/25 cidr:10.244.0.0/26 cidr:10.244.0.0/27 cidr:10.244.0.0/28 cidr:10.244.0.0/29 cidr:10.244.0.0/30 cidr:10.244.0.0/31 cidr:10.244.0.1/32 cidr:8.0.0.0/5 cidr:8.0.0.0/6 reserved:kube-apiserver reserved:world]" subsys=policy
level=debug msg="UpdateIdentities: Skipping add of an existing identical identity" identity=world subsys=policy
....
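For reference, this is roughly how the SNAT rules were compared between nodes. The snippet is only a sketch: it assumes the agent runs in the kube-system namespace, the container is named cilium-agent, and <cilium-pod-on-that-node> is the agent pod on the node being checked.

```sh
# List the rules Cilium programs into its POSTROUTING NAT chain on one node.
# With the interfaces listed explicitly, the expected MASQUERADE rules show up here;
# with the prefix form (ens+/eth+) they were missing in our case, as described above.
kubectl -n kube-system exec <cilium-pod-on-that-node> -c cilium-agent -- \
  iptables -t nat -S CILIUM_POST_nat
```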

Debug log of a node with egress-masquerade-interfaces set to ens+, where no SNAT rules are added for the public and private interfaces:

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
level=debug msg="Skipped reading configuration file" error="Config File \"cilium\" Not Found in \"[/root]\"" subsys=config
level=info msg="Memory available for map entries (0.003% of 8323018752B): 20807546B" subsys=config
level=debug msg="Total memory for default map entries: 149422080" subsys=config
level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 131072" subsys=config
level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 65536" subsys=config
level=info msg="option bpf-nat-global-max set by dynamic sizing to 131072" subsys=config
level=info msg="option bpf-neigh-global-max set by dynamic sizing to 131072" subsys=config
level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 65536" subsys=config
level=info msg="  --agent-health-port='9879'" subsys=daemon
level=info msg="  --agent-labels=''" subsys=daemon
level=info msg="  --agent-liveness-update-interval='1s'" subsys=daemon
level=info msg="  --agent-not-ready-taint-key='node.cilium.io/agent-not-ready'" subsys=daemon
level=info msg="  --allocator-list-timeout='3m0s'" subsys=daemon
level=info msg="  --allow-icmp-frag-needed='true'" subsys=daemon
level=info msg="  --allow-localhost='auto'" subsys=daemon
level=info msg="  --annotate-k8s-node='false'" subsys=daemon
level=info msg="  --api-rate-limit=''" subsys=daemon
level=info msg="  --arping-refresh-period='30s'" subsys=daemon
level=info msg="  --auto-create-cilium-node-resource='true'" subsys=daemon
level=info msg="  --auto-direct-node-routes='true'" subsys=daemon
level=info msg="  --bgp-announce-lb-ip='false'" subsys=daemon
level=info msg="  --bgp-announce-pod-cidr='false'" subsys=daemon
level=info msg="  --bgp-config-path='/var/lib/cilium/bgp/config.yaml'" subsys=daemon
level=info msg="  --bpf-auth-map-max='524288'" subsys=daemon
level=info msg="  --bpf-ct-global-any-max='262144'" subsys=daemon
level=info msg="  --bpf-ct-global-tcp-max='524288'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp='2h13m20s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-tcp='2h13m20s'" subsys=daemon
level=info msg="  --bpf-ct-timeout-service-tcp-grace='1m0s'" subsys=daemon
level=info msg="  --bpf-filter-priority='1'" subsys=daemon
level=info msg="  --bpf-fragments-map-max='8192'" subsys=daemon
level=info msg="  --bpf-lb-acceleration='disabled'" subsys=daemon
level=info msg="  --bpf-lb-affinity-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-algorithm='random'" subsys=daemon
level=info msg="  --bpf-lb-dsr-dispatch='opt'" subsys=daemon
level=info msg="  --bpf-lb-dsr-l4-xlate='frontend'" subsys=daemon
level=info msg="  --bpf-lb-external-clusterip='false'" subsys=daemon
level=info msg="  --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
level=info msg="  --bpf-lb-maglev-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-maglev-table-size='16381'" subsys=daemon
level=info msg="  --bpf-lb-map-max='65536'" subsys=daemon
level=info msg="  --bpf-lb-mode='snat'" subsys=daemon
level=info msg="  --bpf-lb-rev-nat-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-rss-ipv4-src-cidr=''" subsys=daemon
level=info msg="  --bpf-lb-rss-ipv6-src-cidr=''" subsys=daemon
level=info msg="  --bpf-lb-service-backend-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-service-map-max='0'" subsys=daemon
level=info msg="  --bpf-lb-sock='false'" subsys=daemon
level=info msg="  --bpf-lb-sock-hostns-only='false'" subsys=daemon
level=info msg="  --bpf-lb-source-range-map-max='0'" subsys=daemon
level=info msg="  --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
level=info msg="  --bpf-map-event-buffers=''" subsys=daemon
level=info msg="  --bpf-nat-global-max='524288'" subsys=daemon
level=info msg="  --bpf-neigh-global-max='524288'" subsys=daemon
level=info msg="  --bpf-policy-map-full-reconciliation-interval='15m0s'" subsys=daemon
level=info msg="  --bpf-policy-map-max='16384'" subsys=daemon
level=info msg="  --bpf-root='/sys/fs/bpf'" subsys=daemon
level=info msg="  --bpf-sock-rev-map-max='262144'" subsys=daemon
level=info msg="  --bypass-ip-availability-upon-restore='false'" subsys=daemon
level=info msg="  --certificates-directory='/var/run/cilium/certs'" subsys=daemon
level=info msg="  --cflags=''" subsys=daemon
level=info msg="  --cgroup-root='/run/cilium/cgroupv2'" subsys=daemon
level=info msg="  --cilium-endpoint-gc-interval='5m0s'" subsys=daemon
level=info msg="  --cluster-health-port='4240'" subsys=daemon
level=info msg="  --cluster-id='0'" subsys=daemon
level=info msg="  --cluster-name='default'" subsys=daemon
level=info msg="  --cluster-pool-ipv4-cidr='10.244.192.0/18'" subsys=daemon
level=info msg="  --cluster-pool-ipv4-mask-size='24'" subsys=daemon
level=info msg="  --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
level=info msg="  --clustermesh-ip-identities-sync-timeout='1m0s'" subsys=daemon
level=info msg="  --cmdref=''" subsys=daemon
level=info msg="  --cni-chaining-mode='none'" subsys=daemon
level=info msg="  --cni-chaining-target=''" subsys=daemon
level=info msg="  --cni-exclusive='true'" subsys=daemon
level=info msg="  --cni-external-routing='false'" subsys=daemon
level=info msg="  --cni-log-file='/var/run/cilium/cilium-cni.log'" subsys=daemon
level=info msg="  --cni-uninstall='false'" subsys=daemon
level=info msg="  --config=''" subsys=daemon
level=info msg="  --config-dir='/tmp/cilium/config-map'" subsys=daemon
level=info msg="  --config-sources='config-map:kube-system/cilium-config,cilium-node-config:kube-system/cilium-default'" subsys=daemon
level=info msg="  --conntrack-gc-interval='0s'" subsys=daemon
level=info msg="  --conntrack-gc-max-interval='0s'" subsys=daemon
level=info msg="  --controller-group-metrics=''" subsys=daemon
level=info msg="  --crd-wait-timeout='5m0s'" subsys=daemon
level=info msg="  --custom-cni-conf='false'" subsys=daemon
level=info msg="  --datapath-mode='veth'" subsys=daemon
level=info msg="  --debug='true'" subsys=daemon
level=info msg="  --debug-verbose=''" subsys=daemon
level=info msg="  --derive-masquerade-ip-addr-from-device=''" subsys=daemon
level=info msg="  --devices='ens16,ens17'" subsys=daemon
level=info msg="  --direct-routing-device=''" subsys=daemon
level=info msg="  --disable-endpoint-crd='false'" subsys=daemon
level=info msg="  --disable-envoy-version-check='false'" subsys=daemon
level=info msg="  --disable-iptables-feeder-rules=''" subsys=daemon
level=info msg="  --dns-max-ips-per-restored-rule='1000'" subsys=daemon
level=info msg="  --dns-policy-unload-on-shutdown='false'" subsys=daemon
level=info msg="  --dnsproxy-concurrency-limit='0'" subsys=daemon
level=info msg="  --dnsproxy-concurrency-processing-grace-period='0s'" subsys=daemon
level=info msg="  --dnsproxy-enable-transparent-mode='true'" subsys=daemon
level=info msg="  --dnsproxy-lock-count='131'" subsys=daemon
level=info msg="  --dnsproxy-lock-timeout='500ms'" subsys=daemon
level=info msg="  --egress-gateway-policy-map-max='16384'" subsys=daemon
level=info msg="  --egress-gateway-reconciliation-trigger-interval='1s'" subsys=daemon
level=info msg="  --egress-masquerade-interfaces='ens+'" subsys=daemon
level=info msg="  --egress-multi-home-ip-rule-compat='false'" subsys=daemon
level=info msg="  --enable-auto-protect-node-port-range='true'" subsys=daemon
level=info msg="  --enable-bandwidth-manager='false'" subsys=daemon
level=info msg="  --enable-bbr='false'" subsys=daemon
level=info msg="  --enable-bgp-control-plane='false'" subsys=daemon
level=info msg="  --enable-bpf-clock-probe='false'" subsys=daemon
level=info msg="  --enable-bpf-masquerade='false'" subsys=daemon
level=info msg="  --enable-bpf-tproxy='false'" subsys=daemon
level=info msg="  --enable-cilium-api-server-access='*'" subsys=daemon
level=info msg="  --enable-cilium-endpoint-slice='false'" subsys=daemon
level=info msg="  --enable-cilium-health-api-server-access='*'" subsys=daemon
level=info msg="  --enable-custom-calls='false'" subsys=daemon
level=info msg="  --enable-encryption-strict-mode='false'" subsys=daemon
level=info msg="  --enable-endpoint-health-checking='true'" subsys=daemon
level=info msg="  --enable-endpoint-routes='false'" subsys=daemon
level=info msg="  --enable-envoy-config='false'" subsys=daemon
level=info msg="  --enable-external-ips='false'" subsys=daemon
level=info msg="  --enable-health-check-loadbalancer-ip='false'" subsys=daemon
level=info msg="  --enable-health-check-nodeport='true'" subsys=daemon
level=info msg="  --enable-health-checking='true'" subsys=daemon
level=info msg="  --enable-high-scale-ipcache='false'" subsys=daemon
level=info msg="  --enable-host-firewall='false'" subsys=daemon
level=info msg="  --enable-host-legacy-routing='false'" subsys=daemon
level=info msg="  --enable-host-port='false'" subsys=daemon
level=info msg="  --enable-hubble='false'" subsys=daemon
level=info msg="  --enable-hubble-recorder-api='true'" subsys=daemon
level=info msg="  --enable-icmp-rules='true'" subsys=daemon
level=info msg="  --enable-identity-mark='true'" subsys=daemon
level=info msg="  --enable-ip-masq-agent='false'" subsys=daemon
level=info msg="  --enable-ipsec='false'" subsys=daemon
level=info msg="  --enable-ipsec-key-watcher='true'" subsys=daemon
level=info msg="  --enable-ipv4='true'" subsys=daemon
level=info msg="  --enable-ipv4-big-tcp='false'" subsys=daemon
level=info msg="  --enable-ipv4-egress-gateway='false'" subsys=daemon
level=info msg="  --enable-ipv4-fragment-tracking='true'" subsys=daemon
level=info msg="  --enable-ipv4-masquerade='true'" subsys=daemon
level=info msg="  --enable-ipv6='false'" subsys=daemon
level=info msg="  --enable-ipv6-big-tcp='false'" subsys=daemon
level=info msg="  --enable-ipv6-masquerade='false'" subsys=daemon
level=info msg="  --enable-ipv6-ndp='false'" subsys=daemon
level=info msg="  --enable-k8s='true'" subsys=daemon
level=info msg="  --enable-k8s-api-discovery='false'" subsys=daemon
level=info msg="  --enable-k8s-endpoint-slice='true'" subsys=daemon
level=info msg="  --enable-k8s-networkpolicy='true'" subsys=daemon
level=info msg="  --enable-k8s-terminating-endpoint='true'" subsys=daemon
level=info msg="  --enable-l2-announcements='false'" subsys=daemon
level=info msg="  --enable-l2-neigh-discovery='true'" subsys=daemon
level=info msg="  --enable-l2-pod-announcements='false'" subsys=daemon
level=info msg="  --enable-l7-proxy='true'" subsys=daemon
level=info msg="  --enable-local-node-route='true'" subsys=daemon
level=info msg="  --enable-local-redirect-policy='false'" subsys=daemon
level=info msg="  --enable-masquerade-to-route-source='true'" subsys=daemon
level=info msg="  --enable-metrics='true'" subsys=daemon
level=info msg="  --enable-mke='false'" subsys=daemon
level=info msg="  --enable-monitor='true'" subsys=daemon
level=info msg="  --enable-nat46x64-gateway='false'" subsys=daemon
level=info msg="  --enable-node-port='false'" subsys=daemon
level=info msg="  --enable-pmtu-discovery='false'" subsys=daemon
level=info msg="  --enable-policy='default'" subsys=daemon
level=info msg="  --enable-recorder='false'" subsys=daemon
level=info msg="  --enable-remote-node-identity='true'" subsys=daemon
level=info msg="  --enable-runtime-device-detection='false'" subsys=daemon
level=info msg="  --enable-sctp='false'" subsys=daemon
level=info msg="  --enable-service-topology='false'" subsys=daemon
level=info msg="  --enable-session-affinity='false'" subsys=daemon
level=info msg="  --enable-srv6='false'" subsys=daemon
level=info msg="  --enable-stale-cilium-endpoint-cleanup='true'" subsys=daemon
level=info msg="  --enable-svc-source-range-check='true'" subsys=daemon
level=info msg="  --enable-tracing='false'" subsys=daemon
level=info msg="  --enable-unreachable-routes='false'" subsys=daemon
level=info msg="  --enable-vtep='false'" subsys=daemon
level=info msg="  --enable-well-known-identities='false'" subsys=daemon
level=info msg="  --enable-wireguard='false'" subsys=daemon
level=info msg="  --enable-wireguard-userspace-fallback='false'" subsys=daemon
level=info msg="  --enable-xdp-prefilter='false'" subsys=daemon
level=info msg="  --enable-xt-socket-fallback='true'" subsys=daemon
level=info msg="  --encrypt-interface=''" subsys=daemon
level=info msg="  --encrypt-node='false'" subsys=daemon
level=info msg="  --encryption-strict-mode-allow-remote-node-identities='false'" subsys=daemon
level=info msg="  --encryption-strict-mode-cidr=''" subsys=daemon
level=info msg="  --endpoint-bpf-prog-watchdog-interval='30s'" subsys=daemon
level=info msg="  --endpoint-gc-interval='5m0s'" subsys=daemon
level=info msg="  --endpoint-queue-size='25'" subsys=daemon
level=info msg="  --endpoint-status=''" subsys=daemon
level=info msg="  --envoy-config-timeout='2m0s'" subsys=daemon
level=info msg="  --envoy-log=''" subsys=daemon
level=info msg="  --exclude-local-address=''" subsys=daemon
level=info msg="  --external-envoy-proxy='false'" subsys=daemon
level=info msg="  --fixed-identity-mapping=''" subsys=daemon
level=info msg="  --fqdn-regex-compile-lru-size='1024'" subsys=daemon
level=info msg="  --gops-port='9890'" subsys=daemon
level=info msg="  --http-403-msg=''" subsys=daemon
level=info msg="  --http-idle-timeout='0'" subsys=daemon
level=info msg="  --http-max-grpc-timeout='0'" subsys=daemon
level=info msg="  --http-normalize-path='true'" subsys=daemon
level=info msg="  --http-request-timeout='3600'" subsys=daemon
level=info msg="  --http-retry-count='3'" subsys=daemon
level=info msg="  --http-retry-timeout='0'" subsys=daemon
level=info msg="  --hubble-disable-tls='false'" subsys=daemon
level=info msg="  --hubble-event-buffer-capacity='4095'" subsys=daemon
level=info msg="  --hubble-event-queue-size='0'" subsys=daemon
level=info msg="  --hubble-export-allowlist=''" subsys=daemon
level=info msg="  --hubble-export-denylist=''" subsys=daemon
level=info msg="  --hubble-export-fieldmask=''" subsys=daemon
level=info msg="  --hubble-export-file-compress='false'" subsys=daemon
level=info msg="  --hubble-export-file-max-backups='5'" subsys=daemon
level=info msg="  --hubble-export-file-max-size-mb='10'" subsys=daemon
level=info msg="  --hubble-export-file-path=''" subsys=daemon
level=info msg="  --hubble-flowlogs-config-path=''" subsys=daemon
level=info msg="  --hubble-listen-address=''" subsys=daemon
level=info msg="  --hubble-metrics=''" subsys=daemon
level=info msg="  --hubble-metrics-server=''" subsys=daemon
level=info msg="  --hubble-monitor-events=''" subsys=daemon
level=info msg="  --hubble-prefer-ipv6='false'" subsys=daemon
level=info msg="  --hubble-recorder-sink-queue-size='1024'" subsys=daemon
level=info msg="  --hubble-recorder-storage-path='/var/run/cilium/pcaps'" subsys=daemon
level=info msg="  --hubble-redact-enabled='false'" subsys=daemon
level=info msg="  --hubble-redact-http-headers-allow=''" subsys=daemon
level=info msg="  --hubble-redact-http-headers-deny=''" subsys=daemon
level=info msg="  --hubble-redact-http-urlquery='false'" subsys=daemon
level=info msg="  --hubble-redact-http-userinfo='true'" subsys=daemon
level=info msg="  --hubble-redact-kafka-apikey='false'" subsys=daemon
level=info msg="  --hubble-skip-unknown-cgroup-ids='true'" subsys=daemon
level=info msg="  --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
level=info msg="  --hubble-tls-cert-file=''" subsys=daemon
level=info msg="  --hubble-tls-client-ca-files=''" subsys=daemon
level=info msg="  --hubble-tls-key-file=''" subsys=daemon
level=info msg="  --identity-allocation-mode='crd'" subsys=daemon
level=info msg="  --identity-change-grace-period='5s'" subsys=daemon
level=info msg="  --identity-gc-interval='15m0s'" subsys=daemon
level=info msg="  --identity-heartbeat-timeout='30m0s'" subsys=daemon
level=info msg="  --identity-restore-grace-period='10m0s'" subsys=daemon
level=info msg="  --install-egress-gateway-routes='false'" subsys=daemon
level=info msg="  --install-iptables-rules='true'" subsys=daemon
level=info msg="  --install-no-conntrack-iptables-rules='false'" subsys=daemon
level=info msg="  --ip-allocation-timeout='2m0s'" subsys=daemon
level=info msg="  --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
level=info msg="  --ipam='cluster-pool'" subsys=daemon
level=info msg="  --ipam-cilium-node-update-rate='15s'" subsys=daemon
level=info msg="  --ipam-default-ip-pool='default'" subsys=daemon
level=info msg="  --ipam-multi-pool-pre-allocation=''" subsys=daemon
level=info msg="  --ipsec-key-file=''" subsys=daemon
level=info msg="  --ipsec-key-rotation-duration='5m0s'" subsys=daemon
level=info msg="  --iptables-lock-timeout='5s'" subsys=daemon
level=info msg="  --iptables-random-fully='false'" subsys=daemon
level=info msg="  --ipv4-native-routing-cidr='10.244.0.0/16'" subsys=daemon
level=info msg="  --ipv4-node='auto'" subsys=daemon
level=info msg="  --ipv4-pod-subnets=''" subsys=daemon
level=info msg="  --ipv4-range='auto'" subsys=daemon
level=info msg="  --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
level=info msg="  --ipv4-service-range='auto'" subsys=daemon
level=info msg="  --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
level=info msg="  --ipv6-mcast-device=''" subsys=daemon
level=info msg="  --ipv6-native-routing-cidr=''" subsys=daemon
level=info msg="  --ipv6-node='auto'" subsys=daemon
level=info msg="  --ipv6-pod-subnets=''" subsys=daemon
level=info msg="  --ipv6-range='auto'" subsys=daemon
level=info msg="  --ipv6-service-range='auto'" subsys=daemon
level=info msg="  --join-cluster='false'" subsys=daemon
level=info msg="  --k8s-api-server=''" subsys=daemon
level=info msg="  --k8s-client-burst='20'" subsys=daemon
level=info msg="  --k8s-client-qps='10'" subsys=daemon
level=info msg="  --k8s-heartbeat-timeout='30s'" subsys=daemon
level=info msg="  --k8s-kubeconfig-path=''" subsys=daemon
level=info msg="  --k8s-namespace='kube-system'" subsys=daemon
level=info msg="  --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
level=info msg="  --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
level=info msg="  --k8s-service-cache-size='128'" subsys=daemon
level=info msg="  --k8s-service-proxy-name=''" subsys=daemon
level=info msg="  --k8s-sync-timeout='3m0s'" subsys=daemon
level=info msg="  --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
level=info msg="  --keep-config='false'" subsys=daemon
level=info msg="  --kube-proxy-replacement='false'" subsys=daemon
level=info msg="  --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
level=info msg="  --kvstore=''" subsys=daemon
level=info msg="  --kvstore-connectivity-timeout='2m0s'" subsys=daemon
level=info msg="  --kvstore-lease-ttl='15m0s'" subsys=daemon
level=info msg="  --kvstore-max-consecutive-quorum-errors='2'" subsys=daemon
level=info msg="  --kvstore-opt=''" subsys=daemon
level=info msg="  --kvstore-periodic-sync='5m0s'" subsys=daemon
level=info msg="  --l2-announcements-lease-duration='15s'" subsys=daemon
level=info msg="  --l2-announcements-renew-deadline='5s'" subsys=daemon
level=info msg="  --l2-announcements-retry-period='2s'" subsys=daemon
level=info msg="  --l2-pod-announcements-interface=''" subsys=daemon
level=info msg="  --label-prefix-file=''" subsys=daemon
level=info msg="  --labels=''" subsys=daemon
level=info msg="  --legacy-turn-off-k8s-event-handover='false'" subsys=daemon
level=info msg="  --lib-dir='/var/lib/cilium'" subsys=daemon
level=info msg="  --local-max-addr-scope='252'" subsys=daemon
level=info msg="  --local-router-ipv4=''" subsys=daemon
level=info msg="  --local-router-ipv6=''" subsys=daemon
level=info msg="  --log-driver=''" subsys=daemon
level=info msg="  --log-opt=''" subsys=daemon
level=info msg="  --log-system-load='false'" subsys=daemon
level=info msg="  --max-connected-clusters='255'" subsys=daemon
level=info msg="  --max-controller-interval='0'" subsys=daemon
level=info msg="  --max-internal-timer-delay='0s'" subsys=daemon
level=info msg="  --mesh-auth-enabled='true'" subsys=daemon
level=info msg="  --mesh-auth-gc-interval='5m0s'" subsys=daemon
level=info msg="  --mesh-auth-mutual-connect-timeout='5s'" subsys=daemon
level=info msg="  --mesh-auth-mutual-listener-port='0'" subsys=daemon
level=info msg="  --mesh-auth-queue-size='1024'" subsys=daemon
level=info msg="  --mesh-auth-rotated-identities-queue-size='1024'" subsys=daemon
level=info msg="  --mesh-auth-signal-backoff-duration='1s'" subsys=daemon
level=info msg="  --mesh-auth-spiffe-trust-domain='spiffe.cilium'" subsys=daemon
level=info msg="  --mesh-auth-spire-admin-socket=''" subsys=daemon
level=info msg="  --metrics=''" subsys=daemon
level=info msg="  --mke-cgroup-mount=''" subsys=daemon
level=info msg="  --monitor-aggregation='medium'" subsys=daemon
level=info msg="  --monitor-aggregation-flags='all'" subsys=daemon
level=info msg="  --monitor-aggregation-interval='5s'" subsys=daemon
level=info msg="  --monitor-queue-size='0'" subsys=daemon
level=info msg="  --mtu='0'" subsys=daemon
level=info msg="  --node-encryption-opt-out-labels='node-role.kubernetes.io/control-plane'" subsys=daemon
level=info msg="  --node-port-acceleration='disabled'" subsys=daemon
level=info msg="  --node-port-algorithm='random'" subsys=daemon
level=info msg="  --node-port-bind-protection='true'" subsys=daemon
level=info msg="  --node-port-mode='snat'" subsys=daemon
level=info msg="  --node-port-range='30000,32767'" subsys=daemon
level=info msg="  --nodeport-addresses=''" subsys=daemon
level=info msg="  --nodes-gc-interval='5m0s'" subsys=daemon
level=info msg="  --operator-api-serve-addr='127.0.0.1:9234'" subsys=daemon
level=info msg="  --operator-prometheus-serve-addr=':9963'" subsys=daemon
level=info msg="  --policy-audit-mode='false'" subsys=daemon
level=info msg="  --policy-cidr-match-mode=''" subsys=daemon
level=info msg="  --policy-queue-size='100'" subsys=daemon
level=info msg="  --policy-trigger-interval='1s'" subsys=daemon
level=info msg="  --pprof='false'" subsys=daemon
level=info msg="  --pprof-address='localhost'" subsys=daemon
level=info msg="  --pprof-port='6060'" subsys=daemon
level=info msg="  --preallocate-bpf-maps='false'" subsys=daemon
level=info msg="  --prepend-iptables-chains='true'" subsys=daemon
level=info msg="  --procfs='/host/proc'" subsys=daemon
level=info msg="  --prometheus-serve-addr=':9962'" subsys=daemon
level=info msg="  --proxy-connect-timeout='2'" subsys=daemon
level=info msg="  --proxy-gid='1337'" subsys=daemon
level=info msg="  --proxy-idle-timeout-seconds='60'" subsys=daemon
level=info msg="  --proxy-max-connection-duration-seconds='0'" subsys=daemon
level=info msg="  --proxy-max-requests-per-connection='0'" subsys=daemon
level=info msg="  --proxy-prometheus-port='9964'" subsys=daemon
level=info msg="  --read-cni-conf=''" subsys=daemon
level=info msg="  --remove-cilium-node-taints='true'" subsys=daemon
level=info msg="  --restore='true'" subsys=daemon
level=info msg="  --route-metric='0'" subsys=daemon
level=info msg="  --routing-mode='native'" subsys=daemon
level=info msg="  --service-no-backend-response='reject'" subsys=daemon
level=info msg="  --set-cilium-is-up-condition='true'" subsys=daemon
level=info msg="  --set-cilium-node-taints='false'" subsys=daemon
level=info msg="  --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
level=info msg="  --skip-cnp-status-startup-clean='false'" subsys=daemon
level=info msg="  --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
level=info msg="  --srv6-encap-mode='reduced'" subsys=daemon
level=info msg="  --state-dir='/var/run/cilium'" subsys=daemon
level=info msg="  --synchronize-k8s-nodes='true'" subsys=daemon
level=info msg="  --tofqdns-dns-reject-response-code='refused'" subsys=daemon
level=info msg="  --tofqdns-enable-dns-compression='true'" subsys=daemon
level=info msg="  --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
level=info msg="  --tofqdns-idle-connection-grace-period='0s'" subsys=daemon
level=info msg="  --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
level=info msg="  --tofqdns-min-ttl='0'" subsys=daemon
level=info msg="  --tofqdns-pre-cache=''" subsys=daemon
level=info msg="  --tofqdns-proxy-port='0'" subsys=daemon
level=info msg="  --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
level=info msg="  --trace-payloadlen='128'" subsys=daemon
level=info msg="  --trace-sock='true'" subsys=daemon
level=info msg="  --tunnel-port='0'" subsys=daemon
level=info msg="  --tunnel-protocol='vxlan'" subsys=daemon
level=info msg="  --unmanaged-pod-watcher-interval='0'" subsys=daemon
level=info msg="  --use-cilium-internal-ip-for-ipsec='false'" subsys=daemon
level=info msg="  --version='false'" subsys=daemon
level=info msg="  --vlan-bpf-bypass=''" subsys=daemon
level=info msg="  --vtep-cidr=''" subsys=daemon
level=info msg="  --vtep-endpoint=''" subsys=daemon
level=info msg="  --vtep-mac=''" subsys=daemon
level=info msg="  --vtep-mask=''" subsys=daemon
level=info msg="  --wireguard-persistent-keepalive='0s'" subsys=daemon
level=info msg="  --write-cni-conf-when-ready='/host/etc/cni/net.d/05-cilium.conflist'" subsys=daemon
level=info msg="     _ _ _" subsys=daemon
level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
level=info msg="|  _| | | | | |     |" subsys=daemon
level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
level=info msg="Cilium 1.15.1 a368c8f0 2024-02-14T22:16:57+00:00 go version go1.21.6 linux/amd64" subsys=daemon
level=info msg="clang (10.0.0) and kernel (5.15.0) versions: OK!" subsys=linux-datapath
level=info msg="Kernel config file not found: if the agent fails to start, check the system requirements at https://docs.cilium.io/en/stable/operations/system_requirements" subsys=probes
level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
level=info msg="Mounted cgroupv2 filesystem at /run/cilium/cgroupv2" subsys=cgroups
level=info msg="Parsing base label prefixes from default label list" subsys=labels-filter
level=info msg="Parsing additional label prefixes from user inputs: []" subsys=labels-filter
level=info msg="Final label prefixes to be used for identity evaluation:" subsys=labels-filter
level=info msg=" - reserved:.*" subsys=labels-filter
level=info msg=" - :io\\.kubernetes\\.pod\\.namespace" subsys=labels-filter
level=info msg=" - :io\\.cilium\\.k8s\\.namespace\\.labels" subsys=labels-filter
level=info msg=" - :app\\.kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:io\\.kubernetes" subsys=labels-filter
level=info msg=" - !:kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:statefulset\\.kubernetes\\.io/pod-name" subsys=labels-filter
level=info msg=" - !:apps\\.kubernetes\\.io/pod-index" subsys=labels-filter
level=info msg=" - !:batch\\.kubernetes\\.io/job-completion-index" subsys=labels-filter
level=info msg=" - !:.*beta\\.kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:k8s\\.io" subsys=labels-filter
level=info msg=" - !:pod-template-generation" subsys=labels-filter
level=info msg=" - !:pod-template-hash" subsys=labels-filter
level=info msg=" - !:controller-revision-hash" subsys=labels-filter
level=info msg=" - !:annotation.*" subsys=labels-filter
level=info msg=" - !:etcd_node" subsys=labels-filter
level=debug msg=Invoking function="pprof.glob..func1 (pkg/pprof/cell.go:50)" subsys=hive
level=info msg=Invoked duration=1.407179ms function="pprof.glob..func1 (pkg/pprof/cell.go:50)" subsys=hive
level=debug msg=Invoking function="gops.registerGopsHooks (pkg/gops/cell.go:38)" subsys=hive
level=info msg=Invoked duration="100.764µs" function="gops.registerGopsHooks (pkg/gops/cell.go:38)" subsys=hive
level=debug msg=Invoking function="metrics.glob..func1 (pkg/metrics/cell.go:13)" subsys=hive
level=info msg=Invoked duration=1.19554ms function="metrics.glob..func1 (pkg/metrics/cell.go:13)" subsys=hive
level=debug msg=Invoking function="metricsmap.RegisterCollector (pkg/maps/metricsmap/metricsmap.go:281)" subsys=hive
level=info msg=Invoked duration="33.214µs" function="metricsmap.RegisterCollector (pkg/maps/metricsmap/metricsmap.go:281)" subsys=hive
level=debug msg=Invoking function="cmd.configureAPIServer (cmd/cells.go:215)" subsys=hive
level=debug msg="signalmap.newMap: &{0xc000824c30 <nil> 4}" subsys=signal-map
level=debug msg="getting identity cache for identity allocator manager" subsys=identity-cache
level=info msg="Spire Delegate API Client is disabled as no socket path is configured" subsys=spire-delegate
level=info msg="Mutual authentication handler is disabled as no port is configured" subsys=auth
level=debug msg="newSignalManager: &{0xc00119f758 [<nil> <nil> <nil>] <nil> 0xc0017479e0 {{{0 0}}} {{} {} 0}}" subsys=signal
level=debug msg="Adding BGP reconciler: Preflight (priority 10)" subsys=bgp-control-plane
level=debug msg="Adding BGP reconciler: LBService (priority 40)" subsys=bgp-control-plane
level=debug msg="Adding BGP reconciler: Neighbor (priority 60)" subsys=bgp-control-plane
level=debug msg="Adding BGP reconciler: RoutePolicy (priority 70)" subsys=bgp-control-plane
level=debug msg="Adding BGP reconciler: ExportPodCIDR (priority 30)" subsys=bgp-control-plane
level=info msg=Invoked duration=144.234221ms function="cmd.configureAPIServer (cmd/cells.go:215)" subsys=hive
level=debug msg=Invoking function="cmd.unlockAfterAPIServer (cmd/deletion_queue.go:113)" subsys=hive
level=info msg=Invoked duration="44.371µs" function="cmd.unlockAfterAPIServer (cmd/deletion_queue.go:113)" subsys=hive
level=debug msg=Invoking function="controller.Init (pkg/controller/cell.go:67)" subsys=hive
level=info msg=Invoked duration="90.303µs" function="controller.Init (pkg/controller/cell.go:67)" subsys=hive
level=debug msg=Invoking function="endpointcleanup.registerCleanup (pkg/endpointcleanup/cleanup.go:66)" subsys=hive
level=info msg=Invoked duration="300.972µs" function="endpointcleanup.registerCleanup (pkg/endpointcleanup/cleanup.go:66)" subsys=hive
level=debug msg=Invoking function="cmd.glob..func3 (cmd/daemon_main.go:1612)" subsys=hive
level=info msg=Invoked duration="44.96µs" function="cmd.glob..func3 (cmd/daemon_main.go:1612)" subsys=hive
level=debug msg=Invoking function="cmd.registerEndpointBPFProgWatchdog (cmd/watchdogs.go:57)" subsys=hive
level=info msg=Invoked duration="137.595µs" function="cmd.registerEndpointBPFProgWatchdog (cmd/watchdogs.go:57)" subsys=hive
level=debug msg=Invoking function="envoy.registerEnvoyVersionCheck (pkg/envoy/cell.go:132)" subsys=hive
level=info msg=Invoked duration="63.716µs" function="envoy.registerEnvoyVersionCheck (pkg/envoy/cell.go:132)" subsys=hive
level=debug msg=Invoking function="bgpv1.glob..func1 (pkg/bgpv1/cell.go:71)" subsys=hive
level=info msg=Invoked duration="14.463µs" function="bgpv1.glob..func1 (pkg/bgpv1/cell.go:71)" subsys=hive
level=debug msg=Invoking function="cmd.registerDeviceReloader (cmd/device-reloader.go:48)" subsys=hive
level=info msg=Invoked duration="165.738µs" function="cmd.registerDeviceReloader (cmd/device-reloader.go:48)" subsys=hive
level=debug msg=Invoking function="utime.initUtimeSync (pkg/datapath/linux/utime/cell.go:31)" subsys=hive
level=info msg=Invoked duration="28.32µs" function="utime.initUtimeSync (pkg/datapath/linux/utime/cell.go:31)" subsys=hive
level=debug msg=Invoking function="agentliveness.newAgentLivenessUpdater (pkg/datapath/agentliveness/agent_liveness.go:43)" subsys=hive
level=info msg=Invoked duration="121.715µs" function="agentliveness.newAgentLivenessUpdater (pkg/datapath/agentliveness/agent_liveness.go:43)" subsys=hive
level=debug msg=Invoking function="statedb.RegisterTable[...] (pkg/statedb/db.go:121)" subsys=hive
level=info msg=Invoked duration="44.202µs" function="statedb.RegisterTable[...] (pkg/statedb/db.go:121)" subsys=hive
level=debug msg=Invoking function="l2responder.NewL2ResponderReconciler (pkg/datapath/l2responder/l2responder.go:72)" subsys=hive
level=info msg=Invoked duration="117.647µs" function="l2responder.NewL2ResponderReconciler (pkg/datapath/l2responder/l2responder.go:72)" subsys=hive
level=debug msg=Invoking function="garp.newGARPProcessor (pkg/datapath/garp/processor.go:27)" subsys=hive
level=info msg=Invoked duration="114.235µs" function="garp.newGARPProcessor (pkg/datapath/garp/processor.go:27)" subsys=hive
level=debug msg=Invoking function="bigtcp.glob..func1 (pkg/datapath/linux/bigtcp/bigtcp.go:58)" subsys=hive
level=info msg=Invoked duration="14.241µs" function="bigtcp.glob..func1 (pkg/datapath/linux/bigtcp/bigtcp.go:58)" subsys=hive
level=debug msg=Invoking function="linux.glob..func1 (pkg/datapath/linux/devices_controller.go:62)" subsys=hive
level=info msg=Invoked duration="14.887µs" function="linux.glob..func1 (pkg/datapath/linux/devices_controller.go:62)" subsys=hive
level=debug msg=Invoking function="ipcache.glob..func3 (pkg/datapath/ipcache/cell.go:25)" subsys=hive
level=debug msg="enabling events buffer" file-path= name=cilium_ipcache size=1024 subsys=bpf ttl=0s
level=info msg=Invoked duration="94.247µs" function="ipcache.glob..func3 (pkg/datapath/ipcache/cell.go:25)" subsys=hive
level=info msg=Starting subsys=hive
level=debug msg="Executing start hook" function="gops.registerGopsHooks.func1 (pkg/gops/cell.go:43)" subsys=hive
level=info msg="Started gops server" address="127.0.0.1:9890" subsys=gops
level=info msg="Start hook executed" duration="529.124µs" function="gops.registerGopsHooks.func1 (pkg/gops/cell.go:43)" subsys=hive
level=debug msg="Executing start hook" function="metrics.NewRegistry.func1 (pkg/metrics/registry.go:86)" subsys=hive
level=info msg="Start hook executed" duration="2.11µs" function="metrics.NewRegistry.func1 (pkg/metrics/registry.go:86)" subsys=hive
level=debug msg="Executing start hook" function="client.(*compositeClientset).onStart" subsys=hive
level=info msg="Establishing connection to apiserver" host="https://10.244.64.1:443" subsys=k8s-client
level=info msg="Serving prometheus metrics on :9962" subsys=metrics
level=info msg="Connected to apiserver" subsys=k8s-client
level=debug msg="Starting new controller" name=k8s-heartbeat subsys=controller uuid=79126fc9-a23e-4e0c-a9db-ac5f83ecc418
level=debug msg="Controller func execution time: 894ns" name=k8s-heartbeat subsys=controller uuid=79126fc9-a23e-4e0c-a9db-ac5f83ecc418
level=debug msg="Skipping Leases support fallback discovery" subsys=k8s
level=info msg="Start hook executed" duration=18.082662ms function="client.(*compositeClientset).onStart" subsys=hive
level=debug msg="Executing start hook" function="authmap.newAuthMap.func1 (pkg/maps/authmap/cell.go:27)" subsys=hive
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_auth_map subsys=ebpf
level=info msg="Start hook executed" duration="145.955µs" function="authmap.newAuthMap.func1 (pkg/maps/authmap/cell.go:27)" subsys=hive
level=debug msg="Executing start hook" function="configmap.newMap.func1 (pkg/maps/configmap/cell.go:23)" subsys=hive
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_runtime_config subsys=bpf
level=info msg="Start hook executed" duration="77.791µs" function="configmap.newMap.func1 (pkg/maps/configmap/cell.go:23)" subsys=hive
level=debug msg="Executing start hook" function="signalmap.newMap.func1 (pkg/maps/signalmap/cell.go:44)" subsys=hive
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_signals subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_signals subsys=bpf
level=info msg="Start hook executed" duration="246.673µs" function="signalmap.newMap.func1 (pkg/maps/signalmap/cell.go:44)" subsys=hive
level=debug msg="Executing start hook" function="nodemap.newNodeMap.func1 (pkg/maps/nodemap/cell.go:23)" subsys=hive
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_node_map subsys=ebpf
level=info msg="Start hook executed" duration="55.083µs" function="nodemap.newNodeMap.func1 (pkg/maps/nodemap/cell.go:23)" subsys=hive
level=debug msg="Executing start hook" function="eventsmap.newEventsMap.func1 (pkg/maps/eventsmap/cell.go:35)" subsys=hive
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_events subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_events subsys=bpf
level=info msg="Start hook executed" duration="162.103µs" function="eventsmap.newEventsMap.func1 (pkg/maps/eventsmap/cell.go:35)" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*v1.Node].Start" subsys=hive
level=info msg="Start hook executed" duration="54.959µs" function="*resource.resource[*v1.Node].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=info msg="Start hook executed" duration="4.72µs" function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=debug msg="Executing start hook" function="node.NewLocalNodeStore.func1 (pkg/node/local_node_store.go:95)" subsys=hive
level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.193.0.0/16
level=info msg="Start hook executed" duration=9.110683ms function="node.NewLocalNodeStore.func1 (pkg/node/local_node_store.go:95)" subsys=hive
level=debug msg="Executing start hook" function="*statedb.DB.Start" subsys=hive
level=info msg="Start hook executed" duration="22.312µs" function="*statedb.DB.Start" subsys=hive
level=debug msg="Executing start hook" function="hive.New.func1.2 (pkg/hive/hive.go:105)" subsys=hive
level=info msg="Start hook executed" duration="24.148µs" function="hive.New.func1.2 (pkg/hive/hive.go:105)" subsys=hive
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="6.029µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="*linux.devicesController.Start" subsys=hive
level=info msg="Devices changed" devices="[ens17 ens16]" subsys=devices-controller
level=info msg="Start hook executed" duration=1.86918ms function="*linux.devicesController.Start" subsys=hive
level=debug msg="Executing start hook" function="tables.(*nodeAddressController).register.func1 (pkg/datapath/tables/node_address.go:210)" subsys=hive
level=info msg="Node addresses updated" device=ens16 node-addresses="$PUBLIC_IP (ens16)" subsys=node-address
level=info msg="Node addresses updated" device=ens17 node-addresses="10.244.0.2 (ens17)" subsys=node-address
level=info msg="Node addresses updated" device=cilium_host node-addresses="10.244.192.198 (cilium_host), fe80::5898:1ff:fe62:70a4 (cilium_host)" subsys=node-address
level=info msg="Start hook executed" duration="515.275µs" function="tables.(*nodeAddressController).register.func1 (pkg/datapath/tables/node_address.go:210)" subsys=hive
level=debug msg="Executing start hook" function="*bandwidth.manager.Start" subsys=hive
level=debug msg="Starting one-shot job" func="tables.(*nodeAddressController).run" name=node-address-update subsys=jobs
level=info msg="Start hook executed" duration="421.092µs" function="*bandwidth.manager.Start" subsys=hive
level=debug msg="Executing start hook" function="modules.(*Manager).Start" subsys=hive
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 30.242µs ago, Message: }" subsys=hive
level=info msg="Start hook executed" duration=1.151688ms function="modules.(*Manager).Start" subsys=hive
level=debug msg="Executing start hook" function="*iptables.Manager.Start" subsys=hive
level=info msg="Start hook executed" duration=44.141344ms function="*iptables.Manager.Start" subsys=hive
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="4.193µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="endpointmanager.newDefaultEndpointManager.func1 (pkg/endpointmanager/cell.go:216)" subsys=hive
level=debug msg="Starting new controller" name=endpoint-gc subsys=controller uuid=ae0777a9-cf6d-4430-ad3f-1badc06f61ee
level=info msg="Start hook executed" duration="28.601µs" function="endpointmanager.newDefaultEndpointManager.func1 (pkg/endpointmanager/cell.go:216)" subsys=hive
level=debug msg="Executing start hook" function="cmd.newPolicyTrifecta.func1 (cmd/policy.go:130)" subsys=hive
level=debug msg="creating new EventQueue" name=repository-change-queue numBufferedEvents=100 subsys=eventqueue
level=debug msg="creating new EventQueue" name=repository-reaction-queue numBufferedEvents=100 subsys=eventqueue
level=info msg="Start hook executed" duration="243.94µs" function="cmd.newPolicyTrifecta.func1 (cmd/policy.go:130)" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2.CiliumEgressGatewayPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="3.883µs" function="*resource.resource[*cilium.io/v2.CiliumEgressGatewayPolicy].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=info msg="Start hook executed" duration="19.474µs" function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=debug msg="Controller func execution time: 106.179µs" name=endpoint-gc subsys=controller uuid=ae0777a9-cf6d-4430-ad3f-1badc06f61ee
level=debug msg="Executing start hook" function="*resource.resource[*types.CiliumEndpoint].Start" subsys=hive
level=info msg="Start hook executed" duration="9.399µs" function="*resource.resource[*types.CiliumEndpoint].Start" subsys=hive
level=debug msg="Executing start hook" function="datapath.newDatapath.func1 (pkg/datapath/cells.go:170)" subsys=hive
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 7.694µs ago, Message: }" subsys=hive
level=info msg="Restored 6 node IDs from the BPF map" subsys=linux-datapath
level=info msg="Start hook executed" duration="266.246µs" function="datapath.newDatapath.func1 (pkg/datapath/cells.go:170)" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*v1.Service].Start" subsys=hive
level=info msg="Start hook executed" duration="11.948µs" function="*resource.resource[*v1.Service].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*k8s.Endpoints].Start" subsys=hive
level=info msg="Start hook executed" duration="3.661µs" function="*resource.resource[*k8s.Endpoints].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*v1.Pod].Start" subsys=hive
level=info msg="Start hook executed" duration=899ns function="*resource.resource[*v1.Pod].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*v1.Namespace].Start" subsys=hive
level=info msg="Start hook executed" duration="1.209µs" function="*resource.resource[*v1.Namespace].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*v1.NetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="1.155µs" function="*resource.resource[*v1.NetworkPolicy].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2.CiliumNetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="1.154µs" function="*resource.resource[*cilium.io/v2.CiliumNetworkPolicy].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2.CiliumClusterwideNetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="1.815µs" function="*resource.resource[*cilium.io/v2.CiliumClusterwideNetworkPolicy].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2alpha1.CiliumCIDRGroup].Start" subsys=hive
level=info msg="Start hook executed" duration="1.089µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumCIDRGroup].Start" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2alpha1.CiliumEndpointSlice].Start" subsys=hive
level=info msg="Start hook executed" duration="2.761µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumEndpointSlice].Start" subsys=hive
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="1.733µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="*manager.manager.Start" subsys=hive
level=debug msg="Performing regular background work" subsys=nodemanager syncInterval=1m0s
level=info msg="Start hook executed" duration="47.97µs" function="*manager.manager.Start" subsys=hive
level=debug msg="Executing start hook" function="*cni.cniConfigManager.Start" subsys=hive
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 2.658µs ago, Message: }" subsys=hive
level=debug msg="Starting new controller" name=write-cni-file subsys=controller uuid=4c7e1de9-f37e-4e7b-9901-eafc2dd9a4b3
level=info msg="Start hook executed" duration="290.485µs" function="*cni.cniConfigManager.Start" subsys=hive
level=debug msg="Executing start hook" function="k8s.newServiceCache.func1 (pkg/k8s/service_cache.go:144)" subsys=hive
level=info msg="Start hook executed" duration="2.547µs" function="k8s.newServiceCache.func1 (pkg/k8s/service_cache.go:144)" subsys=hive
level=debug msg="Executing start hook" function="agent.newMonitorAgent.func1 (pkg/monitor/agent/cell.go:61)" subsys=hive
level=info msg="Generating CNI configuration file with mode none" subsys=cni-config
level=debug msg="Existing CNI configuration file /host/etc/cni/net.d/05-cilium.conflist unchanged" subsys=cni-config
level=debug msg="Group not found" error="group: unknown group cilium" file-path=/var/run/cilium/monitor1_2.sock group=cilium subsys=api
level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
level=info msg="Start hook executed" duration="541.73µs" function="agent.newMonitorAgent.func1 (pkg/monitor/agent/cell.go:61)" subsys=hive
level=debug msg="Executing start hook" function="*resource.resource[*cilium.io/v2alpha1.CiliumL2AnnouncementPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="6.922µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumL2AnnouncementPolicy].Start" subsys=hive
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="5.98µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="13.259µs" function="*job.group.Start" subsys=hive
level=debug msg="Controller func execution time: 436.287µs" name=write-cni-file subsys=controller uuid=4c7e1de9-f37e-4e7b-9901-eafc2dd9a4b3
level=debug msg="Executing start hook" function="envoy.newEnvoyAccessLogServer.func1 (pkg/envoy/cell.go:107)" subsys=hive
level=debug msg="Controller run succeeded; waiting for next controller update or stop" name=write-cni-file subsys=controller uuid=4c7e1de9-f37e-4e7b-9901-eafc2dd9a4b3
level=debug msg="Starting one-shot job" func="l2announcer.(*L2Announcer).leaseGC" name="l2-announcer lease-gc" subsys=l2-announcer
level=info msg="Start hook executed" duration="86.173µs" function="envoy.newEnvoyAccessLogServer.func1 (pkg/envoy/cell.go:107)" subsys=hive
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 48.002µs ago, Message: }" subsys=hive
level=debug msg="Executing start hook" function="envoy.newArtifactCopier.func1 (pkg/envoy/cell.go:178)" subsys=hive
level=debug msg="Envoy: No artifacts to copy to envoy - source path doesn't exist" source-path=/envoy-artifacts subsys=envoy-manager
level=info msg="Start hook executed" duration="62.265µs" function="envoy.newArtifactCopier.func1 (pkg/envoy/cell.go:178)" subsys=hive
level=debug msg="Executing start hook" function="envoy.newEnvoyXDSServer.func1 (pkg/envoy/cell.go:65)" subsys=hive
level=info msg="Envoy: Starting access log server listening on /var/run/cilium/envoy/sockets/access_log.sock" subsys=envoy-manager
level=info msg="Start hook executed" duration="236.797µs" function="envoy.newEnvoyXDSServer.func1 (pkg/envoy/cell.go:65)" subsys=hive
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="4.08µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="signal.provideSignalManager.func1 (pkg/signal/cell.go:25)" subsys=hive
level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/envoy/sockets/xds.sock" subsys=envoy-manager
level=info msg="Start hook executed" duration="369.423µs" function="signal.provideSignalManager.func1 (pkg/signal/cell.go:25)" subsys=hive
level=debug msg="Executing start hook" function="auth.registerAuthManager.func1 (pkg/auth/cell.go:112)" subsys=hive
level=debug msg="Starting cache restore" subsys=auth
level=info msg="Datapath signal listener running" subsys=signal
level=debug msg="Restored entries" cached_entries=0 subsys=auth
level=info msg="Start hook executed" duration=6.080412ms function="auth.registerAuthManager.func1 (pkg/auth/cell.go:112)" subsys=hive
level=debug msg="Executing start hook" function="auth.registerGCJobs.func1 (pkg/auth/cell.go:162)" subsys=hive
level=debug msg="Nodes synced" subsys=auth
level=info msg="Start hook executed" duration="18.55µs" function="auth.registerGCJobs.func1 (pkg/auth/cell.go:162)" subsys=hive
level=debug msg="Executing start hook" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="16.508µs" function="*job.group.Start" subsys=hive
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="1.827µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="bigtcp.newBIGTCP.func1 (pkg/datapath/linux/bigtcp/bigtcp.go:240)" subsys=hive
level=debug msg="Observer job started" func="auth.(*authMapGarbageCollector).handleIdentityChange" name="auth gc-identity-events" subsys=auth
level=info msg="Start hook executed" duration="137.841µs" function="bigtcp.newBIGTCP.func1 (pkg/datapath/linux/bigtcp/bigtcp.go:240)" subsys=hive
level=debug msg="Starting timer job" func="auth.(*authMapGarbageCollector).cleanup" name="auth gc-cleanup" subsys=auth
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: Degraded, Since: 6.902µs ago, Message: }" subsys=hive
level=debug msg="Observer job started" func="auth.(*AuthManager).handleAuthRequest" name="auth request-authentication" subsys=auth
level=debug msg="Executing start hook" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="8.415µs" function="*cell.reporterHooks.Start" subsys=hive
level=debug msg="Executing start hook" function="*ipsec.keyCustodian.Start" subsys=hive
level=info msg="Start hook executed" duration="582.91µs" function="*ipsec.keyCustodian.Start" subsys=hive
level=debug msg="Executing start hook" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="3.385µs" function="*job.group.Start" subsys=hive
level=debug msg="Executing start hook" function="mtu.newForCell.func1 (pkg/mtu/cell.go:40)" subsys=hive
level=info msg="Inheriting MTU from external network interface" device=ens17 ipAddr=10.244.0.2 mtu=1500 subsys=mtu
level=info msg="Start hook executed" duration=1.116409ms function="mtu.newForCell.func1 (pkg/mtu/cell.go:40)" subsys=hive
level=debug msg="Executing start hook" function="cmd.newDaemonPromise.func1 (cmd/daemon_main.go:1685)" subsys=hive
level=debug msg="enabling events buffer" file-path= name=cilium_lb4_services_v2 size=128 subsys=bpf ttl=0s
level=debug msg="enabling events buffer" file-path= name=cilium_lb4_backends_v2 size=128 subsys=bpf ttl=0s
level=debug msg="enabling events buffer" file-path= name=cilium_lb4_backends_v3 size=128 subsys=bpf ttl=0s
level=debug msg="enabling events buffer" file-path= name=cilium_lb4_reverse_nat size=128 subsys=bpf ttl=0s
level=debug msg="enabling events buffer" file-path= name=cilium_lb_affinity_match size=128 subsys=bpf ttl=0s
level=debug msg="enabling events buffer" file-path= name=cilium_lb4_source_range size=128 subsys=bpf ttl=0s
level=debug msg="creating new EventQueue" name=config-modify-queue numBufferedEvents=10 subsys=eventqueue
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ipcache subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ipcache subsys=bpf
level=debug msg="Withholding numeric identities for later restoration" identity="[16777217]" subsys=identity-cache
level=debug msg="Starting new controller" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="enabling events buffer" file-path= name=cilium_lxc size=128 subsys=bpf ttl=0s
level=debug msg="Controller func execution time: 120.935µs" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller run failed" consecutiveErrors=1 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller func execution time: 4.735µs" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller run failed" consecutiveErrors=2 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_lxc subsys=bpf
level=info msg="Removed map pin at /sys/fs/bpf/tc/globals/cilium_ipcache, recreating and re-pinning map cilium_ipcache" file-path=/sys/fs/bpf/tc/globals/cilium_ipcache name=cilium_ipcache subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ipcache subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_metrics subsys=ebpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_lb4_services_v2 subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_lb4_backends_v3 subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_lb4_reverse_nat subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_call_policy subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_call_policy subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ct4_global subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ct4_global subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ct_any4_global subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ct_any4_global subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ipv4_frag_datagrams subsys=bpf
level=debug msg="Unregistered BPF map" path=/sys/fs/bpf/tc/globals/cilium_ipv4_frag_datagrams subsys=bpf
level=debug msg="Registered BPF map" path=/sys/fs/bpf/tc/globals/cilium_lb4_source_range subsys=bpf
level=debug msg="Restoring service" serviceID=1 serviceIP="10.244.64.1:443" subsys=service
level=debug msg="Restoring service" l3n4Addr="{AddrCluster:10.244.64.1 L4Addr:{Protocol:NONE Port:443} Scope:0}" subsys=service
level=debug msg="Restoring service" serviceID=3 serviceIP="10.244.64.10:9153" subsys=service
level=debug msg="Restoring service" l3n4Addr="{AddrCluster:10.244.64.10 L4Addr:{Protocol:NONE Port:9153} Scope:0}" subsys=service
level=debug msg="Restoring service" serviceID=2 serviceIP="10.244.64.10:53" subsys=service
level=debug msg="Restoring service" l3n4Addr="{AddrCluster:10.244.64.10 L4Addr:{Protocol:NONE Port:53} Scope:0}" subsys=service
level=info msg="Restored services from maps" failedServices=0 restoredServices=3 subsys=service
level=debug msg="Restoring backend" backendID=12 backendPreferred=false backendState=0 l3n4Addr="10.244.193.28:53" subsys=service
level=debug msg="Restoring backend" backendID=13 backendPreferred=false backendState=0 l3n4Addr="10.244.193.28:9153" subsys=service
level=debug msg="Restoring backend" backendID=11 backendPreferred=false backendState=0 l3n4Addr="10.244.193.150:9153" subsys=service
level=debug msg="Restoring backend" backendID=10 backendPreferred=false backendState=0 l3n4Addr="10.244.193.150:53" subsys=service
level=debug msg="Restoring backend" backendID=1 backendPreferred=false backendState=0 l3n4Addr="10.244.0.1:6443" subsys=service
level=info msg="Restored backends from maps" failedBackends=0 restoredBackends=5 skippedBackends=0 subsys=service
level=info msg="Reading old endpoints..." subsys=daemon
level=debug msg="Found endpoint C header file" endpointID=230 file-path=/var/run/cilium/state/230/ep_config.h subsys=endpoint
level=debug msg="Endpoint restoring" ciliumEndpointName=/ code=OK containerID= containerInterface= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=230 endpointState=restoring identity=4 ipv4=10.244.192.246 ipv6= k8sPodName=/ policyRevision=0 subsys=endpoint type=0
level=debug msg="Found endpoint C header file" endpointID=607 file-path=/var/run/cilium/state/607/ep_config.h subsys=endpoint
level=debug msg="Endpoint restoring" ciliumEndpointName=/ code=OK containerID= containerInterface= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=607 endpointState=restoring identity=1 ipv4= ipv6= k8sPodName=/ policyRevision=0 subsys=endpoint type=0
level=debug msg="Starting new controller" name=dns-garbage-collector-job subsys=controller uuid=ca86f43d-5a44-452c-85b3-57d5c80e1588
level=debug msg="Running 'iptables -t mangle -n -L CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Controller func execution time: 59.413µs" name=dns-garbage-collector-job subsys=controller uuid=ca86f43d-5a44-452c-85b3-57d5c80e1588
level=debug msg="Controller func execution time: 8.416µs" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller run failed" consecutiveErrors=3 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="DNS Proxy bound to addresses" addresses=2 port=46841 subsys=fqdn/dnsproxy
level=info msg="Reusing previous DNS proxy port: 46841" subsys=daemon
level=debug msg="Restored rules for endpoint 230: map[]" subsys=fqdn/dnsproxy
level=debug msg="Restored rules for endpoint 607: map[]" subsys=fqdn/dnsproxy
level=debug msg="Trying to start the tcp4 DNS proxy on 127.0.0.1:46841" subsys=fqdn/dnsproxy
level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
level=debug msg="Trying to start the udp4 DNS proxy on 127.0.0.1:46841" subsys=fqdn/dnsproxy
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumpodippools.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumnodes.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumendpoints.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumloadbalancerippools.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliuml2announcementpolicies.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumnetworkpolicies.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumclusterwidenetworkpolicies.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumidentities.cilium.io" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="crd:ciliumcidrgroups.cilium.io" subsys=k8s
level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
level=info msg="Creating or updating CiliumNode resource" node=node-pool0-0 subsys=nodediscovery
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumnetworkpolicies.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumpodippools.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumidentities.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumendpoints.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliuml2announcementpolicies.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumloadbalancerippools.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumclusterwidenetworkpolicies.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumnodes.cilium.io" subsys=k8s
level=debug msg="canceled cache synchronization" kubernetesResource="crd:ciliumcidrgroups.cilium.io" subsys=k8s
level=info msg="Retrieved node information from cilium node" nodeName=node-pool0-0 subsys=daemon
level=info msg="Received own node information from API server" ipAddr.ipv4=10.244.0.2 ipAddr.ipv6="<nil>" k8sNodeIP=10.244.0.2 labels="map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:917f869b-77a5-40f7-aeb5-26ec52465361 beta.kubernetes.io/os:linux io.cilium.migration/cilium-default:true kubernetes.io/arch:amd64 kubernetes.io/hostname:node-pool0-0 kubernetes.io/os:linux node.kubernetes.io/instance-type:917f869b-77a5-40f7-aeb5-26ec52465361]" nodeName=node-pool0-0 subsys=daemon v4Prefix=10.244.192.0/24 v6Prefix="<nil>"
level=info msg="Restored router IPs from node information" ipv4=10.244.192.198 ipv6="<nil>" subsys=daemon
level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
level=info msg="Direct routing device detected" direct-routing-device=ens17 subsys=linux-datapath
level=info msg="Enabling k8s event listener" subsys=k8s-watcher
level=debug msg="waiting for cache to synchronize" kubernetesResource="core/v1::Service" subsys=k8s
level=info msg="Using discoveryv1.EndpointSlice" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="core/v1::Namespace" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="core/v1::Pods" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource=EndpointSliceOrEndpoint subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="networking.k8s.io/v1::NetworkPolicy" subsys=k8s
level=debug msg="Processing 1 endpoints for EndpointSlice kubernetes" subsys=k8s
level=debug msg="EndpointSlice kubernetes has 1 backends" subsys=k8s
level=debug msg="Processing 2 endpoints for EndpointSlice kube-dns-5ctlp" subsys=k8s
level=debug msg="EndpointSlice kube-dns-5ctlp has 2 backends" subsys=k8s
level=debug msg="Controller func execution time: 1.678µs" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller run failed" consecutiveErrors=4 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Kubernetes service definition changed" action=service-updated endpoints="10.244.0.1:6443/TCP" k8sNamespace=default k8sSvcName=kubernetes old-endpoints="10.244.0.1:6443/TCP" old-service=nil service="frontends:[10.244.64.1]/ports=[https]/selector=map[]" subsys=k8s-watcher
level=debug msg="Upserting service" backends="[10.244.0.1:6443]" l7LBFrontendPorts="[]" l7LBProxyPort=0 loadBalancerSourceRanges="[]" serviceIP="{10.244.64.1 {TCP 443} 0}" serviceName=kubernetes serviceNamespace=default sessionAffinity=false sessionAffinityTimeout=0 subsys=service svcExtTrafficPolicy=Cluster svcHealthCheckNodePort=0 svcIntTrafficPolicy=Cluster svcType=ClusterIP
level=debug msg="Acquired service ID" backends="[10.244.0.1:6443]" l7LBFrontendPorts="[]" l7LBProxyPort=0 loadBalancerSourceRanges="[]" serviceID=1 serviceIP="{10.244.64.1 {TCP 443} 0}" serviceName=kubernetes serviceNamespace=default sessionAffinity=false sessionAffinityTimeout=0 subsys=service svcExtTrafficPolicy=Cluster svcHealthCheckNodePort=0 svcIntTrafficPolicy=Cluster svcType=ClusterIP
level=debug msg="Upserted service entry" backendSlot=1 subsys=map-lb svcKey="10.244.64.1:47873" svcVal="1 0 (256) [0x0 0x0]"
level=debug msg="Upserted service entry" backendSlot=0 subsys=map-lb svcKey="10.244.64.1:47873" svcVal="0 1 (256) [0x0 0x0]"
level=debug msg="Kubernetes service definition changed" action=service-updated endpoints="10.244.193.150:53/TCP,10.244.193.150:53/UDP,10.244.193.150:9153/TCP,10.244.193.28:53/TCP,10.244.193.28:53/UDP,10.244.193.28:9153/TCP" k8sNamespace=kube-system k8sSvcName=kube-dns old-endpoints="10.244.193.150:53/TCP,10.244.193.150:53/UDP,10.244.193.150:9153/TCP,10.244.193.28:53/TCP,10.244.193.28:53/UDP,10.244.193.28:9153/TCP" old-service=nil service="frontends:[10.244.64.10]/ports=[dns dns-tcp metrics]/selector=map[k8s-app:kube-dns]" subsys=k8s-watcher
level=debug msg="Upserting service" backends="[10.244.193.28:53 10.244.193.150:53]" l7LBFrontendPorts="[]" l7LBProxyPort=0 loadBalancerSourceRanges="[]" serviceIP="{10.244.64.10 {UDP 53} 0}" serviceName=kube-dns serviceNamespace=kube-system sessionAffinity=false sessionAffinityTimeout=0 subsys=service svcExtTrafficPolicy=Cluster svcHealthCheckNodePort=0 svcIntTrafficPolicy=Cluster svcType=ClusterIP
level=debug msg="Acquired service ID" backends="[10.244.193.28:53 10.244.193.150:53]" l7LBFrontendPorts="[]" l7LBProxyPort=0 loadBalancerSourceRanges="[]" serviceID=2 serviceIP="{10.244.64.10 {UDP 53} 0}" serviceName=kube-dns serviceNamespace=kube-system sessionAffinity=false sessionAffinityTimeout=0 subsys=service svcExtTrafficPolicy=Cluster svcHealthCheckNodePort=0 svcIntTrafficPolicy=Cluster svcType=ClusterIP
level=debug msg="Upserted service entry" backendSlot=1 subsys=map-lb svcKey="10.244.64.10:13568" svcVal="10 0 (512) [0x0 0x0]"
level=debug msg="Upserted service entry" backendSlot=2 subsys=map-lb svcKey="10.244.64.10:13568" svcVal="12 0 (512) [0x0 0x0]"
level=debug msg="Upserted service entry" backendSlot=0 subsys=map-lb svcKey="10.244.64.10:13568" svcVal="0 2 (512) [0x0 0x0]"
level=debug msg="Upserting service" backends="[10.244.193.28:9153 10.244.193.150:9153]" l7LBFrontendPorts="[]" l7LBProxyPort=0 loadBalancerSourceRanges="[]" serviceIP="{10.244.64.10 {TCP 9153} 0}" serviceName=kube-dns serviceNamespace=kube-system sessionAffinity=false sessionAffinityTimeout=0 subsys=service svcExtTrafficPolicy=Cluster svcHealthCheckNodePort=0 svcIntTrafficPolicy=Cluster svcType=ClusterIP
level=debug msg="Acquired service ID" backends="[10.244.193.28:9153 10.244.193.150:9153]" l7LBFrontendPorts="[]" l7LBProxyPort=0 loadBalancerSourceRanges="[]" serviceID=3 serviceIP="{10.244.64.10 {TCP 9153} 0}" serviceName=kube-dns serviceNamespace=kube-system sessionAffinity=false sessionAffinityTimeout=0 subsys=service svcExtTrafficPolicy=Cluster svcHealthCheckNodePort=0 svcIntTrafficPolicy=Cluster svcType=ClusterIP
level=debug msg="Upserted service entry" backendSlot=1 subsys=map-lb svcKey="10.244.64.10:49443" svcVal="11 0 (768) [0x0 0x0]"
level=debug msg="Upserted service entry" backendSlot=2 subsys=map-lb svcKey="10.244.64.10:49443" svcVal="13 0 (768) [0x0 0x0]"
level=debug msg="Upserted service entry" backendSlot=0 subsys=map-lb svcKey="10.244.64.10:49443" svcVal="0 2 (768) [0x0 0x0]"
level=debug msg="Skip pod event using host networking" hostIP=10.244.0.2 k8sNamespace=kube-system k8sPodName=cilium-fx2hs podIP=10.244.0.2 podIPs="[{10.244.0.2}]" subsys=k8s-watcher
level=debug msg="Skip pod event using host networking" hostIP=10.244.0.2 k8sNamespace=kube-system k8sPodName=csi-node-th77h podIP=10.244.0.2 podIPs="[{10.244.0.2}]" subsys=k8s-watcher
level=debug msg="Skip pod event using host networking" hostIP=10.244.0.2 k8sNamespace=kube-system k8sPodName=kube-proxy-899xk podIP=10.244.0.2 podIPs="[{10.244.0.2}]" subsys=k8s-watcher
level=debug msg="cache synced" kubernetesResource=EndpointSliceOrEndpoint subsys=k8s
level=debug msg="cache synced" kubernetesResource="core/v1::Service" subsys=k8s
level=debug msg="cache synced" kubernetesResource="core/v1::Namespace" subsys=k8s
level=debug msg="cache synced" kubernetesResource="networking.k8s.io/v1::NetworkPolicy" subsys=k8s
level=debug msg="cache synced" kubernetesResource="core/v1::Pods" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="cilium/v2::CiliumNetworkPolicy" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="cilium/v2alpha1::CiliumCIDRGroup" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="cilium/v2::CiliumEndpoint" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="cilium/v2::CiliumClusterwideNetworkPolicy" subsys=k8s
level=debug msg="waiting for cache to synchronize" kubernetesResource="cilium/v2::CiliumNode" subsys=k8s
level=info msg="Removing stale endpoint interfaces" subsys=daemon
level=info msg="Skipping kvstore configuration" subsys=daemon
level=info msg="Restored router address from node_config" file=/var/run/cilium/state/globals/node_config.h ipv4=10.244.192.198 ipv6="<nil>" subsys=node
level=info msg="Initializing node addressing" subsys=daemon
level=info msg="Initializing cluster-pool IPAM" subsys=ipam v4Prefix=10.244.192.0/24 v6Prefix="<nil>"
level=info msg="Waiting until local node addressing before starting watchers depending on it" subsys=k8s-watcher
level=info msg="Restoring endpoints..." subsys=daemon
level=debug msg="Removing old health endpoint state directory" endpointID=230 file-path=/var/run/cilium/state/230 subsys=daemon
level=debug msg="Restoring endpoint" ciliumEndpointName=/ endpointID=607 subsys=daemon
level=debug msg="Restoring endpoint from previous cilium instance" ciliumEndpointName=/ code=OK containerID= containerInterface= datapathPolicyRevision=0 desiredPolicyRevision=0 endpointID=607 endpointState=restoring identity=1 ipv4= ipv6= k8sPodName=/ policyRevision=0 subsys=endpoint type=0
level=info msg="Endpoints restored" failed=0 restored=1 subsys=daemon
level=debug msg="Removed outdated endpoint 230 from endpoint map" subsys=daemon
level=debug msg="Allocated specific IP" ip=10.244.192.198 owner=router pool=default subsys=ipam
level=info msg="Node updated" clusterName=default nodeName=node-pool0-1 subsys=nodemanager
level=info msg="Addressing information:" subsys=daemon
level=info msg="  Cluster-Name: default" subsys=daemon
level=info msg="  Cluster-ID: 0" subsys=daemon
level=info msg="  Local node-name: node-pool0-0" subsys=daemon
level=info msg="  Node-IPv6: <nil>" subsys=daemon
level=info msg="  External-Node IPv4: 10.244.0.2" subsys=daemon
level=info msg="  Internal-Node IPv4: 10.244.192.198" subsys=daemon
level=debug msg="Upserting IP into ipcache layer" identity="{18005 custom-resource [] false false}" ipAddr=10.244.193.150 k8sNamespace=kube-system k8sPodName=coredns-f9955cc79-47469 key=0 namedPorts="map[dns:{53 17} dns-tcp:{53 6} metrics:{9153 6}]" subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{18005 custom-resource [] false false}" ipAddr="{10.244.193.150 ffffffff}" modification=Upsert subsys=datapath-ipcache
level=debug msg="Received node update event from custom-resource" node="{\"Name\":\"node-pool0-1\",\"Cluster\":\"default\",\"IPAddresses\":[{\"Type\":\"InternalIP\",\"IP\":\"10.244.0.3\"},{\"Type\":\"ExternalIP\",\"IP\":\"$NODE_PUBLIC_IP\"},{\"Type\":\"CiliumInternalIP\",\"IP\":\"10.244.193.34\"}],\"IPv4AllocCIDR\":{\"IP\":\"10.244.193.0\",\"Mask\":\"////AA==\"},\"IPv4SecondaryAllocCIDRs\":null,\"IPv6AllocCIDR\":null,\"IPv6SecondaryAllocCIDRs\":null,\"IPv4HealthIP\":\"10.244.193.234\",\"IPv6HealthIP\":\"\",\"IPv4IngressIP\":\"\",\"IPv6IngressIP\":\"\",\"ClusterID\":0,\"Source\":\"custom-resource\",\"EncryptionKey\":0,\"Labels\":{\"beta.kubernetes.io/arch\":\"amd64\",\"beta.kubernetes.io/instance-type\":\"4796a7c9-5475-4799-82b7-ab399f6cddad\",\"beta.kubernetes.io/os\":\"linux\",\"io.cilium.migration/cilium-default\":\"true\",\"kubernetes.io/arch\":\"amd64\",\"kubernetes.io/hostname\":\"node-pool0-1\",\"kubernetes.io/os\":\"linux\",\"node.kubernetes.io/instance-type\":\"4796a7c9-5475-4799-82b7-ab399f6cddad\"},\"Annotations\":null,\"NodeIdentity\":0,\"WireguardPubKey\":\"\"}" subsys=nodemanager
level=debug msg="Upserting IP into ipcache layer" identity="{18005 custom-resource [] false false}" ipAddr=10.244.193.28 k8sNamespace=kube-system k8sPodName=coredns-f9955cc79-nnkzr key=0 namedPorts="map[dns:{53 17} dns-tcp:{53 6} metrics:{9153 6}]" subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{18005 custom-resource [] false false}" ipAddr="{10.244.193.28 ffffffff}" modification=Upsert subsys=datapath-ipcache
level=info msg="  IPv4 allocation prefix: 10.244.192.0/24" subsys=daemon
level=info msg="  IPv4 native routing prefix: 10.244.0.0/16" subsys=daemon
level=info msg="  Loopback IPv4: 169.254.42.1" subsys=daemon
level=info msg="  Local IPv4 addresses:" subsys=daemon
level=info msg="  - 10.244.0.2" subsys=daemon
level=info msg="  - 10.244.192.198" subsys=daemon
level=info msg="  - $PUBLIC_IP" subsys=daemon
level=debug msg="Allocated random IP" ip=10.244.192.247 owner=health pool=default subsys=ipam
level=debug msg="IPv4 health endpoint address: 10.244.192.247" subsys=daemon
level=debug msg="Running 'ipset create cilium_node_set_v4 iphash family inet -exist' command" subsys=iptables
level=info msg="Node updated" clusterName=default nodeName=node-pool0-0 subsys=nodemanager
level=debug msg="Received node update event from local" node="{\"Name\":\"node-pool0-0\",\"Cluster\":\"default\",\"IPAddresses\":[{\"Type\":\"InternalIP\",\"IP\":\"10.244.0.2\"},{\"Type\":\"ExternalIP\",\"IP\":\"$PUBLIC_IP\"},{\"Type\":\"CiliumInternalIP\",\"IP\":\"10.244.192.198\"}],\"IPv4AllocCIDR\":{\"IP\":\"10.244.192.0\",\"Mask\":\"////AA==\"},\"IPv4SecondaryAllocCIDRs\":null,\"IPv6AllocCIDR\":null,\"IPv6SecondaryAllocCIDRs\":null,\"IPv4HealthIP\":\"10.244.192.247\",\"IPv6HealthIP\":\"\",\"IPv4IngressIP\":\"\",\"IPv6IngressIP\":\"\",\"ClusterID\":0,\"Source\":\"local\",\"EncryptionKey\":0,\"Labels\":{\"beta.kubernetes.io/arch\":\"amd64\",\"beta.kubernetes.io/instance-type\":\"917f869b-77a5-40f7-aeb5-26ec52465361\",\"beta.kubernetes.io/os\":\"linux\",\"io.cilium.migration/cilium-default\":\"true\",\"kubernetes.io/arch\":\"amd64\",\"kubernetes.io/hostname\":\"node-pool0-0\",\"kubernetes.io/os\":\"linux\",\"node.kubernetes.io/instance-type\":\"917f869b-77a5-40f7-aeb5-26ec52465361\"},\"Annotations\":{},\"NodeIdentity\":1,\"WireguardPubKey\":\"\"}" subsys=nodemanager
level=debug msg="Running 'ipset create cilium_node_set_v4 iphash family inet -exist' command" subsys=iptables
level=info msg="Adding local node to cluster" node=node-pool0-0 subsys=nodediscovery
level=debug msg="Running 'ipset add cilium_node_set_v4 10.244.0.3 -exist' command" subsys=iptables
level=debug msg="Running 'ipset add cilium_node_set_v4 10.244.0.2 -exist' command" subsys=iptables
level=debug msg="Controller func execution time: 9.993µs" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller run failed" consecutiveErrors=5 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller func execution time: 3.202µs" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller run failed" consecutiveErrors=6 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Node discovered - mark to keep" cluster=default name=node-pool0-1 node_ids="[28561]" subsys=auth
level=debug msg="Node discovered - mark to keep" cluster=default name=node-pool0-0 node_ids="[0]" subsys=auth
level=info msg="Creating or updating CiliumNode resource" node=node-pool0-0 subsys=nodediscovery
level=debug msg="Controller func execution time: 8.423µs" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller run failed" consecutiveErrors=7 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=info msg="Waiting until all pre-existing resources have been received" subsys=k8s-watcher
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="core/v1::Namespace" subsys=k8s
level=debug msg="resource \"core/v1::Namespace\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="Controller func execution time: 182.72µs" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="core/v1::Pods" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource=EndpointSliceOrEndpoint subsys=k8s
level=debug msg="resource \"EndpointSliceOrEndpoint\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="Controller run failed" consecutiveErrors=8 error="k8s cache not fully synced" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="networking.k8s.io/v1::NetworkPolicy" subsys=k8s
level=debug msg="resource \"networking.k8s.io/v1::NetworkPolicy\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="resource \"core/v1::Pods\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="core/v1::Service" subsys=k8s
level=debug msg="resource \"core/v1::Service\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="Annotate k8s node is disabled." subsys=daemon
level=info msg="Initializing identity allocator" subsys=identity-cache
level=info msg="Allocating identities between range" cluster-id=0 max=65535 min=256 subsys=identity-cache
level=debug msg="Identity allocation backed by CRD" subsys=identity-cache
level=debug msg="Starting new controller" name=template-dir-watcher subsys=controller uuid=341e424c-df30-4e6d-9e95-c954fd44c3ec
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.forwarding sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.accept_local sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_host.send_redirects sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.forwarding sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.accept_local sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.cilium_net.send_redirects sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.core.bpf_jit_enable sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.conf.all.rp_filter sysParamValue=0
level=info msg="Setting sysctl" subsys=sysctl sysParamName=net.ipv4.fib_multipath_use_neigh sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.unprivileged_bpf_disabled sysParamValue=1
level=info msg="Setting sysctl" subsys=sysctl sysParamName=kernel.timer_migration sysParamValue=0
level=debug msg="writing configuration" file-path=netdev_config.h subsys=datapath-loader
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_connect', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupInet4Connect" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_sendmsg', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupUDP4Sendmsg" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_recvmsg', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupUDP4Recvmsg" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_getpeername', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CgroupInet4GetPeername" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_post_bind', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupInet4PostBind" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock4_pre_bind', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupInet4Bind" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_connect', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupInet6Connect" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_sendmsg', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupUDP6Sendmsg" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_recvmsg', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupUDP6Recvmsg" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_getpeername', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CgroupInet6GetPeername" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_post_bind', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupInet6PostBind" subsys=socketlb
level=debug msg="No pinned link '/sys/fs/bpf/cilium/socketlb/links/cgroup/cil_sock6_pre_bind', querying cgroup" subsys=socketlb
level=debug msg="No programs in cgroup /run/cilium/cgroupv2 with attach type CGroupInet6Bind" subsys=socketlb
level=debug msg="Launching compiler" args="[-I/var/run/cilium/state/globals -I/var/run/cilium/state -I/var/lib/cilium/bpf -I/var/lib/cilium/bpf/include -g -O2 --target=bpf -std=gnu89 -nostdinc -D__NR_CPUS__=4 -Wall -Wextra -Werror -Wshadow -Wno-address-of-packed-member -Wno-unknown-warning-option -Wno-gnu-variable-sized-type-not-at-end -Wdeclaration-after-statement -Wimplicit-int-conversion -Wenum-conversion -mcpu=v3 -c /var/lib/cilium/bpf/bpf_alignchecker.c -o -]" subsys=datapath-loader target=clang
level=debug msg="UpdateIdentities: Adding a new identity" identity=18005 labels="[k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns]" subsys=policy
level=debug msg="Regenerating all endpoints" subsys=policy
level=info msg="regenerating all endpoints" reason="one or more identities created or deleted" subsys=endpoint-manager
level=debug msg="cache synced" kubernetesResource="cilium/v2::CiliumNode" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="cilium/v2::CiliumNode" subsys=k8s
level=debug msg="resource \"cilium/v2::CiliumNode\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="Initial list of identities received" subsys=allocator
level=debug msg="Identity discovered - mark to keep" identity=18005 labels="k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system,k8s:io.cilium.k8s.policy.cluster=default,k8s:io.cilium.k8s.policy.serviceaccount=coredns,k8s:io.kubernetes.pod.namespace=kube-system,k8s:k8s-app=kube-dns" subsys=auth
level=debug msg="Identities synced" subsys=auth
level=debug msg="Compilation had peak RSS of 111632 bytes" compiler-pid=45 output=/var/run/cilium/state/bpf_alignchecker.o subsys=datapath-loader
level=debug msg="Updating direct route" addedCIDRs="[10.244.193.0/24]" newIP=10.244.0.3 oldIP="<nil>" removedCIDRs="[]" subsys=linux-datapath
level=debug msg="Running 'iptables -t nat -S' command" subsys=iptables
level=debug msg="Running 'ip6tables -t nat -S' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S' command" subsys=iptables
level=debug msg="Running 'ip6tables -t mangle -S' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S' command" subsys=iptables
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 5.034µs ago, Message: }" subsys=hive
level=debug msg="cache synced" kubernetesResource="cilium/v2::CiliumClusterwideNetworkPolicy" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="cilium/v2::CiliumClusterwideNetworkPolicy" subsys=k8s
level=debug msg="cache synced" kubernetesResource="cilium/v2alpha1::CiliumCIDRGroup" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="cilium/v2alpha1::CiliumCIDRGroup" subsys=k8s
level=debug msg="resource \"cilium/v2alpha1::CiliumCIDRGroup\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="resource \"cilium/v2::CiliumClusterwideNetworkPolicy\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="cache synced" kubernetesResource="cilium/v2::CiliumNetworkPolicy" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="cilium/v2::CiliumNetworkPolicy" subsys=k8s
level=debug msg="resource \"cilium/v2::CiliumNetworkPolicy\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="cache synced" kubernetesResource="cilium/v2::CiliumEndpoint" subsys=k8s
level=debug msg="stopped waiting for caches to be synced" kubernetesResource="cilium/v2::CiliumEndpoint" subsys=k8s
level=debug msg="resource \"cilium/v2::CiliumEndpoint\" cache has synced, stopping timeout watcher" subsys=k8s
level=debug msg="Running 'ip6tables -t filter -S' command" subsys=iptables
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 4.363µs ago, Message: }" subsys=hive
level=debug msg="Running 'iptables -t filter -S OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'ip6tables -t nat -S OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S OLD_CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S OLD_CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S OLD_CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'ip6tables -t mangle -S OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -E CILIUM_INPUT OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -E CILIUM_OUTPUT OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -E CILIUM_OUTPUT_raw OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -E CILIUM_POST_nat OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'ip6tables -t nat -S CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -E CILIUM_OUTPUT_nat OLD_CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -E CILIUM_PRE_nat OLD_CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -E CILIUM_POST_mangle OLD_CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -E CILIUM_PRE_mangle OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'ip6tables -t mangle -S CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -E CILIUM_PRE_raw OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -E CILIUM_FORWARD OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -N CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -N CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 4.947µs ago, Message: }" subsys=hive
level=debug msg="Running 'iptables -t raw -N CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -N CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -N CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -N CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -N CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -N CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -N CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -N CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -A CILIUM_PRE_raw -m mark --mark 0x00000200/0x00000f00 -m comment --comment cilium: NOTRACK for proxy traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -A CILIUM_INPUT -m mark --mark 0x00000200/0x00000f00 -m comment --comment cilium: ACCEPT for proxy traffic -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0x00000a00/0xfffffeff -m comment --comment cilium: NOTRACK for proxy return traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0x00000a00/0xfffffeff -m comment --comment cilium: NOTRACK for proxy return traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0x00000800/0x00000e00 -m comment --comment cilium: NOTRACK for L7 proxy upstream traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0x00000800/0x00000e00 -m comment --comment cilium: NOTRACK for L7 proxy upstream traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -A CILIUM_OUTPUT -m mark --mark 0x00000a00/0xfffffeff -m comment --comment cilium: ACCEPT for proxy return traffic -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -A CILIUM_OUTPUT -m mark --mark 0x00000800/0x00000e00 -m comment --comment cilium: ACCEPT for l7 proxy upstream traffic -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -A CILIUM_PRE_mangle -m socket --transparent -m comment --comment cilium: any->pod redirect proxied traffic to host proxy -j MARK --set-mark 0x00000200' command" subsys=iptables
level=debug msg="Running 'iptables -A CILIUM_FORWARD -o cilium_host -m comment --comment cilium: any->cluster on cilium_host forward accept -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -A CILIUM_FORWARD -i cilium_host -m comment --comment cilium: cluster->any on cilium_host forward accept (nodeport) -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -A CILIUM_FORWARD -i lxc+ -m comment --comment cilium: cluster->any on lxc+ forward accept -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -A CILIUM_FORWARD -i cilium_net -m comment --comment cilium: cluster->any on cilium_net forward accept (nodeport) -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -A CILIUM_OUTPUT -m mark ! --mark 0x00000e00/0x00000f00 -m mark ! --mark 0x00000d00/0x00000f00 -m mark ! --mark 0x00000a00/0x00000e00 -m mark ! --mark 0x00000800/0x00000e00 -m mark ! --mark 0x00000f00/0x00000f00 -m comment --comment cilium: host->any mark as from host -j MARK --set-xmark 0x00000c00/0x00000f00' command" subsys=iptables
level=debug msg="Running 'ipset create cilium_node_set_v4 iphash family inet -exist' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -o ens+ -m set --match-set cilium_node_set_v4 dst -m comment --comment exclude traffic to cluster nodes from masquerade -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -m mark --mark 0x00000a00/0x00000e00 -m comment --comment exclude proxy return traffic from masquerade -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -s 127.0.0.1 -o cilium_host -m comment --comment cilium host->cluster from 127.0.0.1 masquerade -j SNAT --to-source 10.244.192.198' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -A CILIUM_POST_nat -m mark --mark 0x00000f00/0x00000f00 -o cilium_host -m conntrack --ctstate DNAT -m comment --comment hairpin traffic that originated from a local pod -j SNAT --to-source 10.244.192.198' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -I INPUT -m comment --comment cilium-feeder: CILIUM_INPUT -j CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -I OUTPUT -m comment --comment cilium-feeder: CILIUM_OUTPUT -j CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -I OUTPUT -m comment --comment cilium-feeder: CILIUM_OUTPUT_raw -j CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -I POSTROUTING -m comment --comment cilium-feeder: CILIUM_POST_nat -j CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -I OUTPUT -m comment --comment cilium-feeder: CILIUM_OUTPUT_nat -j CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -I PREROUTING -m comment --comment cilium-feeder: CILIUM_PRE_nat -j CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -I POSTROUTING -m comment --comment cilium-feeder: CILIUM_POST_mangle -j CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -I PREROUTING -m comment --comment cilium-feeder: CILIUM_PRE_mangle -j CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -I PREROUTING -m comment --comment cilium-feeder: CILIUM_PRE_raw -j CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -I FORWARD -m comment --comment cilium-feeder: CILIUM_FORWARD -j CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -A CILIUM_PRE_mangle -p tcp -m mark --mark 0xf9b60200 -m comment --comment cilium: TPROXY to host cilium-dns-egress proxy -j TPROXY --on-port 46841 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -A CILIUM_PRE_mangle -p udp -m mark --mark 0xf9b60200 -m comment --comment cilium: TPROXY to host cilium-dns-egress proxy -j TPROXY --on-port 46841 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D PREROUTING -m comment --comment cilium-feeder: CILIUM_PRE_nat -j OLD_CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OUTPUT -m comment --comment cilium-feeder: CILIUM_OUTPUT_nat -j OLD_CILIUM_OUTPUT_nat' command" subsys=iptables
level=info msg="Node updated" clusterName=default nodeName=node-pool0-1 subsys=nodemanager
level=debug msg="Received node update event from custom-resource" node="{\"Name\":\"node-pool0-1\",\"Cluster\":\"default\",\"IPAddresses\":[{\"Type\":\"InternalIP\",\"IP\":\"10.244.0.3\"},{\"Type\":\"ExternalIP\",\"IP\":\"$NODE_PUBLIC_IP\"},{\"Type\":\"CiliumInternalIP\",\"IP\":\"10.244.193.34\"}],\"IPv4AllocCIDR\":{\"IP\":\"10.244.193.0\",\"Mask\":\"////AA==\"},\"IPv4SecondaryAllocCIDRs\":null,\"IPv6AllocCIDR\":null,\"IPv6SecondaryAllocCIDRs\":null,\"IPv4HealthIP\":\"\",\"IPv6HealthIP\":\"\",\"IPv4IngressIP\":\"\",\"IPv6IngressIP\":\"\",\"ClusterID\":0,\"Source\":\"custom-resource\",\"EncryptionKey\":0,\"Labels\":{\"beta.kubernetes.io/arch\":\"amd64\",\"beta.kubernetes.io/instance-type\":\"4796a7c9-5475-4799-82b7-ab399f6cddad\",\"beta.kubernetes.io/os\":\"linux\",\"io.cilium.migration/cilium-default\":\"true\",\"kubernetes.io/arch\":\"amd64\",\"kubernetes.io/hostname\":\"node-pool0-1\",\"kubernetes.io/os\":\"linux\",\"node.kubernetes.io/instance-type\":\"4796a7c9-5475-4799-82b7-ab399f6cddad\"},\"Annotations\":null,\"NodeIdentity\":0,\"WireguardPubKey\":\"\"}" subsys=nodemanager
level=debug msg="Running 'ipset create cilium_node_set_v4 iphash family inet -exist' command" subsys=iptables
level=debug msg="Running 'ipset add cilium_node_set_v4 10.244.0.3 -exist' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D POSTROUTING -m comment --comment cilium-feeder: CILIUM_POST_nat -j OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Updating direct route" addedCIDRs="[]" newIP=10.244.0.3 oldIP=10.244.0.3 removedCIDRs="[]" subsys=linux-datapath
level=debug msg="Merged labels for reserved:host identity" labels="reserved:host" subsys=ipcache
level=debug msg="Resolving identity" identityLabels="cidr:10.244.0.1/32,reserved:kube-apiserver,reserved:world" subsys=identity-cache
level=debug msg="Reallocated restored local identity: 16777217" subsys=identity-cache
level=debug msg="Resolving identity" identityLabels="reserved:health" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=health identityLabels="reserved:health" isNew=false subsys=identity-cache
level=debug msg="Merged labels for reserved:host identity" labels="reserved:host" subsys=ipcache
level=debug msg="Merged labels for reserved:host identity" labels="reserved:host" subsys=ipcache
level=debug msg="Resolving identity" identityLabels="reserved:health" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=health identityLabels="reserved:health" isNew=false subsys=identity-cache
level=debug msg="Resolving identity" identityLabels="reserved:remote-node" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=remote-node identityLabels="reserved:remote-node" isNew=false subsys=identity-cache
level=debug msg="Resolving identity" identityLabels="reserved:remote-node" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=remote-node identityLabels="reserved:remote-node" isNew=false subsys=identity-cache
level=debug msg="Resolving identity" identityLabels="reserved:remote-node" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=remote-node identityLabels="reserved:remote-node" isNew=false subsys=identity-cache
level=debug msg="UpdateIdentities: Skipping add of an existing identical identity" identity=remote-node subsys=policy
level=debug msg="UpdateIdentities: Skipping add of an existing identical identity" identity=host subsys=policy
level=debug msg="UpdateIdentities: Adding a new identity" identity=16777217 labels="[cidr:0.0.0.0/0 cidr:0.0.0.0/1 cidr:0.0.0.0/2 cidr:0.0.0.0/3 cidr:0.0.0.0/4 cidr:10.0.0.0/7 cidr:10.0.0.0/8 cidr:10.128.0.0/9 cidr:10.192.0.0/10 cidr:10.224.0.0/11 cidr:10.240.0.0/12 cidr:10.240.0.0/13 cidr:10.244.0.0/14 cidr:10.244.0.0/15 cidr:10.244.0.0/16 cidr:10.244.0.0/17 cidr:10.244.0.0/18 cidr:10.244.0.0/19 cidr:10.244.0.0/20 cidr:10.244.0.0/21 cidr:10.244.0.0/22 cidr:10.244.0.0/23 cidr:10.244.0.0/24 cidr:10.244.0.0/25 cidr:10.244.0.0/26 cidr:10.244.0.0/27 cidr:10.244.0.0/28 cidr:10.244.0.0/29 cidr:10.244.0.0/30 cidr:10.244.0.0/31 cidr:10.244.0.1/32 cidr:8.0.0.0/5 cidr:8.0.0.0/6 reserved:kube-apiserver reserved:world]" subsys=policy
level=debug msg="UpdateIdentities: Skipping add of an existing identical identity" identity=health subsys=policy
level=debug msg="Waiting for proxy updates to complete..." subsys=endpoint-manager
level=debug msg="Wait time for proxy updates: 44.171µs" subsys=endpoint-manager
level=debug msg="Upserting IP into ipcache layer" identity="{host local [] false true}" ipAddr=10.244.192.198/32 key=0 subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{host local [] false true}" ipAddr="{10.244.192.198 ffffffff}" modification=Upsert subsys=datapath-ipcache
level=debug msg="Upserting IP into ipcache layer" identity="{16777217 kube-apiserver [] false true}" ipAddr=10.244.0.1/32 key=0 subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{16777217 kube-apiserver [] false true}" ipAddr="{10.244.0.1 ffffffff}" modification=Upsert subsys=datapath-ipcache
level=debug msg="Upserting IP into ipcache layer" identity="{health custom-resource [] false true}" ipAddr=10.244.193.234/32 key=0 subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{health custom-resource [] false true}" ipAddr="{10.244.193.234 ffffffff}" modification=Upsert subsys=datapath-ipcache
level=debug msg="Upserting IP into ipcache layer" identity="{health local [] false true}" ipAddr=10.244.192.247/32 key=0 subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{health local [] false true}" ipAddr="{10.244.192.247 ffffffff}" modification=Upsert subsys=datapath-ipcache
level=debug msg="Upserting IP into ipcache layer" identity="{host local [] false true}" ipAddr=$PUBLIC_IP key=0 subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{host local [] false true}" ipAddr="{$PUBLIC_IP ffffffff}" modification=Upsert subsys=datapath-ipcache
level=debug msg="Upserting IP into ipcache layer" identity="{host local [] false true}" ipAddr=10.244.0.2/32 key=0 subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{host local [] false true}" ipAddr="{10.244.0.2 ffffffff}" modification=Upsert subsys=datapath-ipcache
level=debug msg="Upserting IP into ipcache layer" identity="{remote-node custom-resource [] false true}" ipAddr=10.244.0.3/32 key=0 subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{remote-node custom-resource [] false true}" ipAddr="{10.244.0.3 ffffffff}" modification=Upsert subsys=datapath-ipcache
level=debug msg="Upserting IP into ipcache layer" identity="{remote-node custom-resource [] false true}" ipAddr=10.244.193.34/32 key=0 subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{remote-node custom-resource [] false true}" ipAddr="{10.244.193.34 ffffffff}" modification=Upsert subsys=datapath-ipcache
level=debug msg="Upserting IP into ipcache layer" identity="{remote-node custom-resource [] false true}" ipAddr=$NODE_PUBLIC_IP key=0 subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{remote-node custom-resource [] false true}" ipAddr="{$NODE_PUBLIC_IP ffffffff}" modification=Upsert subsys=datapath-ipcache
level=debug msg="Controller func execution time: 2.578944ms" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller run succeeded; waiting for next controller update or stop" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller func execution time: 34.414µs" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller run succeeded; waiting for next controller update or stop" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Deleting IP from ipcache layer" ipAddr=10.244.193.234/32 subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{health custom-resource [] false true}" ipAddr="{10.244.193.234 ffffffff}" modification=Delete subsys=datapath-ipcache
level=debug msg="Controller func execution time: 92.453µs" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller run succeeded; waiting for next controller update or stop" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -o ens16,ens17 -m set --match-set cilium_node_set_v4 dst -m comment --comment exclude traffic to cluster nodes from masquerade -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -s 10.244.192.0/24 -d 8.8.4.4/32 -o ens16 -m comment --comment cilium snat non-cluster via source route -j SNAT --to-source $PUBLIC_IP' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -s 10.244.192.0/24 -d 8.8.8.8/32 -o ens16 -m comment --comment cilium snat non-cluster via source route -j SNAT --to-source $PUBLIC_IP' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -s 10.244.192.0/24 -d 10.244.0.0/19 -o ens17 -m comment --comment cilium snat non-cluster via source route -j SNAT --to-source 10.244.0.2' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -s 10.244.192.0/24 -d $PUBLIC_IP_CIDR -o ens16 -m comment --comment cilium snat non-cluster via source route -j SNAT --to-source $PUBLIC_IP' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -s 10.244.192.0/24 -d $PUBLIC_GATEWAY_IP -o ens16 -m comment --comment cilium snat non-cluster via source route -j SNAT --to-source $PUBLIC_IP' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -s 10.244.192.0/24 ! -d 10.244.0.0/16 -o ens16 -m comment --comment cilium snat non-cluster via source route -j SNAT --to-source $PUBLIC_IP' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -m mark --mark 0xa00/0xe00 -m comment --comment exclude proxy return traffic from masquerade -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -s 127.0.0.1/32 -o cilium_host -m comment --comment cilium host->cluster from 127.0.0.1 masquerade -j SNAT --to-source 10.244.192.198' command" subsys=iptables
level=info msg="Node updated" clusterName=default nodeName=node-pool0-1 subsys=nodemanager
level=debug msg="Received node update event from custom-resource" node="{\"Name\":\"node-pool0-1\",\"Cluster\":\"default\",\"IPAddresses\":[{\"Type\":\"InternalIP\",\"IP\":\"10.244.0.3\"},{\"Type\":\"ExternalIP\",\"IP\":\"$NODE_PUBLIC_IP\"},{\"Type\":\"CiliumInternalIP\",\"IP\":\"10.244.193.34\"}],\"IPv4AllocCIDR\":{\"IP\":\"10.244.193.0\",\"Mask\":\"////AA==\"},\"IPv4SecondaryAllocCIDRs\":null,\"IPv6AllocCIDR\":null,\"IPv6SecondaryAllocCIDRs\":null,\"IPv4HealthIP\":\"10.244.193.245\",\"IPv6HealthIP\":\"\",\"IPv4IngressIP\":\"\",\"IPv6IngressIP\":\"\",\"ClusterID\":0,\"Source\":\"custom-resource\",\"EncryptionKey\":0,\"Labels\":{\"beta.kubernetes.io/arch\":\"amd64\",\"beta.kubernetes.io/instance-type\":\"4796a7c9-5475-4799-82b7-ab399f6cddad\",\"beta.kubernetes.io/os\":\"linux\",\"io.cilium.migration/cilium-default\":\"true\",\"kubernetes.io/arch\":\"amd64\",\"kubernetes.io/hostname\":\"node-pool0-1\",\"kubernetes.io/os\":\"linux\",\"node.kubernetes.io/instance-type\":\"4796a7c9-5475-4799-82b7-ab399f6cddad\"},\"Annotations\":null,\"NodeIdentity\":0,\"WireguardPubKey\":\"\"}" subsys=nodemanager
level=debug msg="Running 'ipset create cilium_node_set_v4 iphash family inet -exist' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -D OLD_CILIUM_POST_nat -o cilium_host -m mark --mark 0xf00/0xf00 -m conntrack --ctstate DNAT -m comment --comment hairpin traffic that originated from a local pod -j SNAT --to-source 10.244.192.198' command" subsys=iptables
level=debug msg="Running 'ipset add cilium_node_set_v4 10.244.0.3 -exist' command" subsys=iptables
level=debug msg="Updating direct route" addedCIDRs="[]" newIP=10.244.0.3 oldIP=10.244.0.3 removedCIDRs="[]" subsys=linux-datapath
level=debug msg="Resolving identity" identityLabels="reserved:remote-node" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=remote-node identityLabels="reserved:remote-node" isNew=false subsys=identity-cache
level=debug msg="Resolving identity" identityLabels="reserved:remote-node" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=remote-node identityLabels="reserved:remote-node" isNew=false subsys=identity-cache
level=debug msg="Resolving identity" identityLabels="reserved:remote-node" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=remote-node identityLabels="reserved:remote-node" isNew=false subsys=identity-cache
level=debug msg="Resolving identity" identityLabels="reserved:health" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=health identityLabels="reserved:health" isNew=false subsys=identity-cache
level=debug msg="UpdateIdentities: Skipping add of an existing identical identity" identity=health subsys=policy
level=debug msg="Waiting for proxy updates to complete..." subsys=endpoint-manager
level=debug msg="Wait time for proxy updates: 49.795µs" subsys=endpoint-manager
level=debug msg="Upserting IP into ipcache layer" identity="{health custom-resource [] false true}" ipAddr=10.244.193.245/32 key=0 subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{health custom-resource [] false true}" ipAddr="{10.244.193.245 ffffffff}" modification=Upsert subsys=datapath-ipcache
level=debug msg="Controller func execution time: 765.427µs" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller run succeeded; waiting for next controller update or stop" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller func execution time: 3.23µs" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Controller run succeeded; waiting for next controller update or stop" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=debug msg="Running 'ip6tables -t nat -S' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -D PREROUTING -m comment --comment cilium-feeder: CILIUM_PRE_mangle -j OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -D POSTROUTING -m comment --comment cilium-feeder: CILIUM_POST_mangle -j OLD_CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -D OLD_CILIUM_PRE_mangle -m socket --transparent -m comment --comment cilium: any->pod redirect proxied traffic to host proxy -j MARK --set-xmark 0x200/0xffffffff' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -D OLD_CILIUM_PRE_mangle -p tcp -m mark --mark 0xf9b60200 -m comment --comment cilium: TPROXY to host cilium-dns-egress proxy -j TPROXY --on-port 46841 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -D OLD_CILIUM_PRE_mangle -p udp -m mark --mark 0xf9b60200 -m comment --comment cilium: TPROXY to host cilium-dns-egress proxy -j TPROXY --on-port 46841 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff' command" subsys=iptables
level=debug msg="Running 'ip6tables -t mangle -S' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D PREROUTING -m comment --comment cilium-feeder: CILIUM_PRE_raw -j OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D OUTPUT -m comment --comment cilium-feeder: CILIUM_OUTPUT_raw -j OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D OLD_CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0xa00/0xfffffeff -m comment --comment cilium: NOTRACK for proxy return traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D OLD_CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0xa00/0xfffffeff -m comment --comment cilium: NOTRACK for proxy return traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D OLD_CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0x800/0xe00 -m comment --comment cilium: NOTRACK for L7 proxy upstream traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D OLD_CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0x800/0xe00 -m comment --comment cilium: NOTRACK for L7 proxy upstream traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -D OLD_CILIUM_PRE_raw -m mark --mark 0x200/0xf00 -m comment --comment cilium: NOTRACK for proxy traffic -j CT --notrack' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D INPUT -m comment --comment cilium-feeder: CILIUM_INPUT -j OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D FORWARD -m comment --comment cilium-feeder: CILIUM_FORWARD -j OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OUTPUT -m comment --comment cilium-feeder: CILIUM_OUTPUT -j OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_FORWARD -o cilium_host -m comment --comment cilium: any->cluster on cilium_host forward accept -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_FORWARD -i cilium_host -m comment --comment cilium: cluster->any on cilium_host forward accept (nodeport) -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_FORWARD -i lxc+ -m comment --comment cilium: cluster->any on lxc+ forward accept -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_FORWARD -i cilium_net -m comment --comment cilium: cluster->any on cilium_net forward accept (nodeport) -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_INPUT -m mark --mark 0x200/0xf00 -m comment --comment cilium: ACCEPT for proxy traffic -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_OUTPUT -m mark --mark 0xa00/0xfffffeff -m comment --comment cilium: ACCEPT for proxy return traffic -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_OUTPUT -m mark --mark 0x800/0xe00 -m comment --comment cilium: ACCEPT for l7 proxy upstream traffic -j ACCEPT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -D OLD_CILIUM_OUTPUT -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0xd00/0xf00 -m mark ! --mark 0xa00/0xe00 -m mark ! --mark 0x800/0xe00 -m mark ! --mark 0xf00/0xf00 -m comment --comment cilium: host->any mark as from host -j MARK --set-xmark 0xc00/0xf00' command" subsys=iptables
level=debug msg="Processed new health status" status="Status{ModuleID: , Level: OK, Since: 5.071µs ago, Message: }" subsys=hive
level=debug msg="Running 'ip6tables -t filter -S' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -F OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -X OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S OLD_CILIUM_INPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -F OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -X OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S OLD_CILIUM_OUTPUT' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -F OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -X OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S OLD_CILIUM_OUTPUT_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -F OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -X OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'ip6tables -t nat -S OLD_CILIUM_POST_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S OLD_CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -F OLD_CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -X OLD_CILIUM_OUTPUT_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -S OLD_CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -F OLD_CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t nat -X OLD_CILIUM_PRE_nat' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S OLD_CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -F OLD_CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -X OLD_CILIUM_POST_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -S OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -F OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t mangle -X OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'ip6tables -t mangle -S OLD_CILIUM_PRE_mangle' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -S OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -F OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t raw -X OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'ip6tables -t raw -S OLD_CILIUM_PRE_raw' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -S OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -F OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'iptables -t filter -X OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'ip6tables -t filter -S OLD_CILIUM_FORWARD' command" subsys=iptables
level=debug msg="Running 'ipset create cilium_node_set_v4 iphash family inet -exist' command" subsys=iptables
level=info msg="Iptables rules installed" subsys=iptables
level=info msg="Adding new proxy port rules for cilium-dns-egress:46841" id=cilium-dns-egress subsys=proxy
level=debug msg="Running 'iptables -t mangle -S' command" subsys=iptables
level=info msg="Iptables proxy rules installed" subsys=iptables
level=debug msg="AckProxyPort: acked proxy port 46841 ({true dns false 46841 1 true 46841 true})" id=cilium-dns-egress subsys=proxy
level=debug msg="Starting new controller" name=sync-host-ips subsys=controller uuid=9a5c54bd-a478-4f38-8f15-df68de692487
level=debug msg="Controller func execution time: 584.222µs" name=sync-host-ips subsys=controller uuid=9a5c54bd-a478-4f38-8f15-df68de692487
level=debug msg="Merged labels for reserved:host identity" labels="reserved:host" subsys=ipcache
level=debug msg="Merged labels for reserved:host identity" labels="reserved:host" subsys=ipcache
level=debug msg="Merged labels for reserved:host identity" labels="reserved:host" subsys=ipcache
level=debug msg="Resolving identity" identityLabels="reserved:world" subsys=identity-cache
level=debug msg="Resolved reserved identity" identity=world identityLabels="reserved:world" isNew=false subsys=identity-cache
level=debug msg="UpdateIdentities: Skipping add of an existing identical identity" identity=host subsys=policy
level=debug msg="UpdateIdentities: Skipping add of an existing identical identity" identity=world subsys=policy
level=debug msg="Waiting for proxy updates to complete..." subsys=endpoint-manager
level=debug msg="Wait time for proxy updates: 53.133µs" subsys=endpoint-manager
level=debug msg="Upserting IP into ipcache layer" identity="{world local [] false true}" ipAddr=0.0.0.0/0 key=0 subsys=ipcache
level=debug msg="Daemon notified of IP-Identity cache state change" identity="{world local [] false true}" ipAddr="{0.0.0.0 00000000}" modification=Upsert subsys=datapath-ipcache
level=debug msg="Controller func execution time: 742.581µs" name=ipcache-inject-labels subsys=controller uuid=20784c2f-594d-4949-a946-d4c1cd87d7c3
level=info msg="Initializing daemon" subsys=daemon
...

I hope the information helps.
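For anyone trying to reproduce this: a quick way to check whether the masquerade rules actually landed on a node is to inspect the Cilium NAT chain and the node ipset directly. The chain and ipset names below are taken from the log above; the exact commands are my own sketch for verification, not something from the original report.

# Run on an affected node (or in the cilium-agent pod, which uses host networking).
# List the SNAT rules Cilium installed for non-cluster traffic; with the working
# explicit-interface config these show "-o ens16" / "-o ens17" entries.
iptables -t nat -S CILIUM_POST_nat | grep -- '-j SNAT'

# Show the node addresses that are excluded from masquerading.
ipset list cilium_node_set_v4

With the eth+ prefix configured, the first command returned no SNAT rules on our nodes, which matches the behaviour described above.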

@github-actions github-actions bot added info-completed The GH issue has received a reply from the author and removed need-more-info More information is required to further debug or fix the issue. labels Apr 30, 2024
@youngnick youngnick added sig/datapath Impacts bpf/ or low-level forwarding details, including map management and monitor messages. feature/egress-gateway Impacts the egress IP gateway feature. labels May 1, 2024
@julianwiedmann julianwiedmann removed the feature/egress-gateway Impacts the egress IP gateway feature. label May 3, 2024