
Source IP is lost when accessing a Service from a newly created container group IP pool #6088

Open
ccyuvin opened this issue Apr 29, 2024 · 0 comments


Operating system information
Cloud virtual machine, Ubuntu 22.04, 4C/16G

Kubernetes version information
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:34:27Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}

Container runtime
Docker Engine 24.0.6

KubeSphere version
v3.4.1. Install online. Use kk to install.

What is the problem
Service CIDR: 10.188.0.0/18 (IP range 10.188.0.0 - 10.188.63.255)
Initial Pod CIDR: 10.188.192.0/18 (IP range 10.188.192.0 - 10.188.255.255)
After installation, the container group IP pool feature was enabled and a new IP pool, ippool-dev, was created in the console.
ippool-dev CIDR: 10.189.0.0/20 (IP range 10.189.0.0 - 10.189.15.255)
There are now two IP pools on the container IP pool page: default-ipv4-ippool and ippool-dev.
(screenshot: container IP pool page)
Pod A calls Pod B through Service

Pod A → Service → Pod B

Pod B is an nginx pod. Checking its access log shows the following:
(screenshot: nginx access log)
If Pod A is assigned an IP from the default-ipv4-ippool pool, the access log shows the container (pod) IP, as expected.

If Pod A is assigned an IP from the newly created ippool-dev pool, the access log shows one of two addresses:
1. The IP of the node's own eth0 interface (when Pod A and Pod B are on the same node)
2. The IP of the Calico virtual tunnel device (when Pod A and Pod B are on different nodes)

In other words, the container's source IP is lost: the traffic is being SNATed.

This does not match expected behavior: service calls within the cluster should not be NATed.

And why does it work correctly with the default default-ipv4-ippool pool?
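One difference between the two pools that may be relevant (an observation from the numbers above, not a confirmed root cause): default-ipv4-ippool lies inside the kubePodsCIDR given to kk, while ippool-dev does not. This can be checked with Python's ipaddress module:

```python
import ipaddress

# CIDRs taken from this issue's description
kube_pods_cidr = ipaddress.ip_network("10.188.192.0/18")  # kk kubePodsCIDR
default_pool = ipaddress.ip_network("10.188.192.0/18")    # default-ipv4-ippool
dev_pool = ipaddress.ip_network("10.189.0.0/20")          # ippool-dev

# A network counts as a subnet of itself, so the default pool is "inside"
print(default_pool.subnet_of(kube_pods_cidr))  # True: inside the pod CIDR
print(dev_pool.subnet_of(kube_pods_cidr))      # False: outside the pod CIDR
```

Several components treat traffic whose source IP falls outside the cluster's pod CIDR differently (kube-proxy masquerade rules, Calico natOutgoing), so this mismatch is a plausible place to start looking.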

I tried assigning two Pods to the same project (namespace), but the same problem occurred.

Where is my configuration wrong, and how can I change it so that the new pool behaves the same as default-ipv4-ippool? Thank you.

Service:

kind: Service
apiVersion: v1
metadata:
  name: font
  namespace: dev
  labels:
    app: font
  annotations:
    kubesphere.io/creator: admin
spec:
  ports:
    - name: tcp-80
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: font
  clusterIP: 10.188.21.35
  clusterIPs:
    - 10.188.21.35
  type: ClusterIP
  sessionAffinity: None
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster

IP pool:

apiVersion: network.kubesphere.io/v1alpha1
kind: IPPool
metadata:
  annotations:
    kubesphere.io/creator: admin
    kubesphere.io/description: Test environment
  finalizers:
    - finalizers.network.kubesphere.io/ippool
  labels:
    ippool.network.kubesphere.io/default: ''
    ippool.network.kubesphere.io/id: '4099'
    ippool.network.kubesphere.io/name: ippool-xft-dev
    ippool.network.kubesphere.io/type: calico
  name: ippool-dev
spec:
  cidr: 10.189.0.0/20
  dns: {}
  type: calico
  vlanConfig:
    master: ''
    vlanId: 0
---
apiVersion: network.kubesphere.io/v1alpha1
kind: IPPool
metadata:
  finalizers:
    - finalizers.network.kubesphere.io/ippool
  labels:
    ippool.network.kubesphere.io/default: ''
    ippool.network.kubesphere.io/id: '4099'
    ippool.network.kubesphere.io/name: default-ipv4-ippool
    ippool.network.kubesphere.io/type: calico
  name: default-ipv4-ippool
spec:
  blockSize: 24
  cidr: 10.188.192.0/18
  dns: {}
  type: calico
  vlanConfig:
    master: ''
    vlanId: 0

kk config (network section):

network:
    plugin: calico
    kubePodsCIDR: 10.188.192.0/18
    kubeServiceCIDR: 10.188.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
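If the cause is kube-proxy's cluster-CIDR masquerade rule, this config would explain the symptoms: kube-proxy in iptables mode marks Service traffic from sources outside its configured cluster CIDR for masquerading, and ippool-dev addresses fall outside the kubePodsCIDR above. This is a guess, not confirmed on the cluster. A simplified Python model of that check:

```python
import ipaddress

# Simplified sketch of kube-proxy's iptables-mode masquerade decision:
# with a cluster CIDR configured, Service traffic from a source OUTSIDE
# that CIDR is marked (KUBE-MARK-MASQ) and SNATed. This models the rule
# only; it is not kube-proxy source code.
CLUSTER_CIDR = ipaddress.ip_network("10.188.192.0/18")  # kubePodsCIDR from kk

def would_masquerade(source_ip: str) -> bool:
    """True if a Service client with this source IP would be SNATed."""
    return ipaddress.ip_address(source_ip) not in CLUSTER_CIDR

# Hypothetical pod IPs, one from each pool's range
print(would_masquerade("10.188.200.10"))  # False: default-ipv4-ippool pod keeps its IP
print(would_masquerade("10.189.0.10"))    # True: ippool-dev pod would be SNATed
```

If that is what is happening, recreating the pool inside 10.188.192.0/18, or adjusting kube-proxy's clusterCIDR, would be the direction to investigate; the actual iptables rules on a node would confirm or rule this out.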

Any help would be greatly appreciated.
