
Receive Upstream Timeout from IP which not present on k8s #5445

Open
ibadullaev-inc4 opened this issue Apr 24, 2024 · 8 comments

@ibadullaev-inc4

Describe the bug
Hi, we are using the VirtualServer CRD to configure routes on k8s that send traffic to the upstream backend.
After a restart of the upstream server (backend), which has a temporary IP address, our NGINX Ingress continues to send traffic to the old IP address, which is no longer present.

To Reproduce
Steps to reproduce the behavior:

  1. Deploy the following VirtualServer (some.yaml):
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: pn-front-prod-arbitrum-sepolia-rpc
  namespace: public
spec:
  server-snippets: |
    proxy_request_buffering off;
    ssl_buffer_size 4k;

  host: arbitrum-sepolia-rpc.example.com
  tls:
    secret: example.com
  upstreams:
  - name: backend
    service: pn-backend
    port: 4000
  - name: frontend
    service: pn-frontend
    port: 3000
  routes:

  - path: /api/metrics
    matches:
    - conditions:
      - variable: $request_method
        value: GET
      action:
        redirect:
          url: https://arbitrum-sepolia-rpc.example.com/
          code: 301
    action:
      pass: frontend


  - path: /api
    matches:
    - conditions:
      - variable: $request_method
        value: GET
      action:
        pass: frontend
    action:
      pass: frontend

  - path: /favicon.ico
    matches:
    - conditions:
      - variable: $request_method
        value: GET
      action:
        pass: frontend
    action:
      pass: frontend

  - path: /platforms
    matches:
    - conditions:
      - variable: $request_method
        value: GET
      action:
        pass: frontend
    action:
      pass: frontend

  - path: /_next
    matches:
    - conditions:
      - variable: $request_method
        value: GET
      action:
        pass: frontend
    action:
      pass: frontend

  - path: /
    matches:
    - conditions:
      - header: Upgrade
        value: websocket
      action:
        pass: backend 
    - conditions:
      - variable: $request_method
        value: GET
      action:
        proxy: 
          upstream: frontend
          rewritePath: /arbitrum-sepolia
    action:
      pass: backend
  2. View the access logs of the ingress controller:
date="2024-04-04T14:03:01+00:00" status=200 request_completion=OK msec=1712239381.780 connections_active=48 connections_reading=0 connections_writing=138 connections_waiting=24 connection=21143130 connection_requests=80 connection_time=1423.508 client=10.244.8.117 method=POST request="POST /?testbot&time=1712239321732 HTTP/2.0" request_length=240 status=200 bytes_sent=205 body_bytes_sent=46 referer= user_agent="Go-http-client/2.0" upstream_addr=10.244.21.150:4000, 10.244.24.58:4000 upstream_status=504, 200 request_time=60.008 upstream_response_time=60.002, 0.006 upstream_connect_time=-, 0.000 upstream_header_time=-, 0.006 request_body="{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"eth_blockNumber\",\"params\":[]} host="[arbitrum-sepolia-rpc.example.com](http://arbitrum-sepolia-rpc.example.com/)" user_ip="164.90.160.159"
  3. See the error:
upstream_status=504, 200

Expected behavior

[nariman@notebook new-only-back]$ kubectl -n public get endpoints
NAME                    ENDPOINTS                                                             AGE
pn-backend              10.244.0.11:4001,10.244.0.163:4001,10.244.11.130:4001 + 92 more...    387d

If the upstream IP address is not present in the endpoints, why does NGINX try to send traffic to a non-existent IP?
Your environment

  • Version of the Ingress Controller - release version or a specific commit
[nariman@notebook new-only-back]$ helm ls -n nginx 
NAME   	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART              	APP VERSION
ingress	nginx    	19      	2024-03-29 12:17:00.734183664 +0400 +04	deployed	nginx-ingress-1.1.0	3.4.0 
  • Version of Kubernetes
[nariman@notebook new-only-back]$ kubectl version
Client Version: v1.29.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.27.10
  • Kubernetes platform (e.g. Mini-kube or GCP)
    DigitalOcean
  • Using NGINX or NGINX Plus
    NGINX

Additional context
Config inside the ingress controller:

nginx@ingress-nginx-ingress-controller-565c6849d5-4k6kf:/etc/nginx/conf.d$ cat vs_public_pn-front-prod-arbitrum-sepolia-rpc.conf 

upstream vs_public_pn-front-prod-arbitrum-sepolia-rpc_backend {zone vs_public_pn-front-prod-arbitrum-sepolia-rpc_backend 256k;random two least_conn;
    server 10.244.0.11:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.0.163:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.11.130:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.2.141:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.24.217:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.24.99:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.3.157:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.37.252:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.39.214:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.4.89:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.40.54:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.42.195:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.43.186:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.47.216:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.48.184:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.49.232:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.51.146:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.51.85:4000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.52.54:4000 max_fails=1 fail_timeout=10s max_conns=0;
}

upstream vs_public_pn-front-prod-arbitrum-sepolia-rpc_frontend {zone vs_public_pn-front-prod-arbitrum-sepolia-rpc_frontend 256k;random two least_conn;
    server 10.244.21.9:3000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.3.22:3000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.36.163:3000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.38.151:3000 max_fails=1 fail_timeout=10s max_conns=0;
    server 10.244.5.10:3000 max_fails=1 fail_timeout=10s max_conns=0;
}

map $request_method $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_0_match_0_cond_0 {
    "GET" 1;
    default 0;
}
map $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_0_match_0_cond_0 $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_0 {
    ~^1 /internal_location_matches_0_match_0;
    default /internal_location_matches_0_default;
}
map $request_method $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_1_match_0_cond_0 {
    "GET" 1;
    default 0;
}
map $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_1_match_0_cond_0 $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_1 {
    ~^1 /internal_location_matches_1_match_0;
    default /internal_location_matches_1_default;
}
map $request_method $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_2_match_0_cond_0 {
    "GET" 1;
    default 0;
}
map $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_2_match_0_cond_0 $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_2 {
    ~^1 /internal_location_matches_2_match_0;
    default /internal_location_matches_2_default;
}
map $request_method $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_3_match_0_cond_0 {
    "GET" 1;
    default 0;
}
map $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_3_match_0_cond_0 $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_3 {
    ~^1 /internal_location_matches_3_match_0;
    default /internal_location_matches_3_default;
}
map $request_method $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_4_match_0_cond_0 {
    "GET" 1;
    default 0;
}
map $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_4_match_0_cond_0 $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_4 {
    ~^1 /internal_location_matches_4_match_0;
    default /internal_location_matches_4_default;
}
map $http_Upgrade $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_5_match_0_cond_0 {
    "websocket" 1;
    default 0;
}
map $request_method $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_5_match_1_cond_0 {
    "GET" 1;
    default 0;
}
map $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_5_match_0_cond_0$vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_5_match_1_cond_0 $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_5 {
    ~^1 /internal_location_matches_5_match_0;
    ~^01 /internal_location_matches_5_match_1;
    default /internal_location_matches_5_default;
}
server {
    listen 80;
    listen [::]:80;


    server_name arbitrum-sepolia-rpc.example.com;

    set $resource_type "virtualserver";
    set $resource_name "pn-front-prod-arbitrum-sepolia-rpc";
    set $resource_namespace "public";
    listen 443 ssl;
    listen [::]:443 ssl;

    http2 on;
    ssl_certificate $secret_dir_path/public-example.com;
    ssl_certificate_key $secret_dir_path/public-example.com;

    server_tokens "on";
    location /api/metrics {
        rewrite ^ $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_0 last;
    }
    location /api {
        rewrite ^ $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_1 last;
    }
    location /favicon.ico {
        rewrite ^ $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_2 last;
    }
    location /platforms {
        rewrite ^ $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_3 last;
    }
    location /_next {
        rewrite ^ $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_4 last;
    }
    location / {
        rewrite ^ $vs_public_pn_front_prod_arbitrum_sepolia_rpc_matches_5 last;
    }

    

    
    location /internal_location_matches_0_match_0 {
        set $service "";

        
        error_page 418 =301 "https://arbitrum-sepolia-rpc.example.com/";
        proxy_intercept_errors on;
        proxy_pass http://unix:/var/lib/nginx/nginx-418-server.sock;
    }
    location /internal_location_matches_0_default {
        set $service "pn-frontend";
        internal;

        
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 100m;

        proxy_buffering off;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host "$host";
        proxy_pass http://vs_public_pn-front-prod-arbitrum-sepolia-rpc_frontend$request_uri;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
    location /internal_location_matches_1_match_0 {
        set $service "pn-frontend";
        internal;

        
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 100m;

        proxy_buffering off;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host "$host";
        proxy_pass http://vs_public_pn-front-prod-arbitrum-sepolia-rpc_frontend$request_uri;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
    location /internal_location_matches_1_default {
        set $service "pn-frontend";
        internal;

        
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 100m;

        proxy_buffering off;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host "$host";
        proxy_pass http://vs_public_pn-front-prod-arbitrum-sepolia-rpc_frontend$request_uri;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
    location /internal_location_matches_2_match_0 {
        set $service "pn-frontend";
        internal;

        
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 100m;

        proxy_buffering off;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host "$host";
        proxy_pass http://vs_public_pn-front-prod-arbitrum-sepolia-rpc_frontend$request_uri;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
    location /internal_location_matches_2_default {
        set $service "pn-frontend";
        internal;

        
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 100m;

        proxy_buffering off;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host "$host";
        proxy_pass http://vs_public_pn-front-prod-arbitrum-sepolia-rpc_frontend$request_uri;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
    location /internal_location_matches_3_match_0 {
        set $service "pn-frontend";
        internal;

        
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 100m;

        proxy_buffering off;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host "$host";
        proxy_pass http://vs_public_pn-front-prod-arbitrum-sepolia-rpc_frontend$request_uri;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
    location /internal_location_matches_3_default {
        set $service "pn-frontend";
        internal;

        
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 100m;

        proxy_buffering off;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host "$host";
        proxy_pass http://vs_public_pn-front-prod-arbitrum-sepolia-rpc_frontend$request_uri;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
    location /internal_location_matches_4_match_0 {
        set $service "pn-frontend";
        internal;

        
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 100m;

        proxy_buffering off;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host "$host";
        proxy_pass http://vs_public_pn-front-prod-arbitrum-sepolia-rpc_frontend$request_uri;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
    location /internal_location_matches_4_default {
        set $service "pn-frontend";
        internal;

        
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 100m;

        proxy_buffering off;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host "$host";
        proxy_pass http://vs_public_pn-front-prod-arbitrum-sepolia-rpc_frontend$request_uri;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
    location /internal_location_matches_5_match_0 {
        set $service "pn-backend";
        internal;

        
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 100m;

        proxy_buffering off;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host "$host";
        proxy_pass http://vs_public_pn-front-prod-arbitrum-sepolia-rpc_backend$request_uri;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
    location /internal_location_matches_5_match_1 {
        set $service "pn-frontend";
        internal;

        
        rewrite ^ $request_uri_no_args;
        rewrite "^/(.*)$" "/arbitrum-sepolia$1" break;
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 100m;

        proxy_buffering off;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host "$host";
        proxy_pass http://vs_public_pn-front-prod-arbitrum-sepolia-rpc_frontend;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
    location /internal_location_matches_5_default {
        set $service "pn-backend";
        internal;

        
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 100m;

        proxy_buffering off;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_pass_request_headers on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host "$host";
        proxy_pass http://vs_public_pn-front-prod-arbitrum-sepolia-rpc_backend$request_uri;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
        
	location @grpc_deadline_exceeded {
        default_type application/grpc;
        add_header content-type application/grpc;
        add_header grpc-status 4;
        add_header grpc-message 'deadline exceeded';
        return 204;
    }

    location @grpc_permission_denied {
        default_type application/grpc;
        add_header content-type application/grpc;
        add_header grpc-status 7;
        add_header grpc-message 'permission denied';
        return 204;
    }

    location @grpc_resource_exhausted {
        default_type application/grpc;
        add_header content-type application/grpc;
        add_header grpc-status 8;
        add_header grpc-message 'resource exhausted';
        return 204;
    }

    location @grpc_unimplemented {
        default_type application/grpc;
        add_header content-type application/grpc;
        add_header grpc-status 12;
        add_header grpc-message unimplemented;
        return 204;
    }

    location @grpc_internal {
        default_type application/grpc;
        add_header content-type application/grpc;
        add_header grpc-status 13;
        add_header grpc-message 'internal error';
        return 204;
    }

    location @grpc_unavailable {
        default_type application/grpc;
        add_header content-type application/grpc;
        add_header grpc-status 14;
        add_header grpc-message unavailable;
        return 204;
    }

    location @grpc_unauthenticated {
        default_type application/grpc;
        add_header content-type application/grpc;
        add_header grpc-status 16;
        add_header grpc-message unauthenticated;
        return 204;
    }

	    
    
}

Hi @ibadullaev-inc4 thanks for reporting!

Be sure to check out the docs and the Contributing Guidelines while you wait for a human to take a look at this 🙂

Cheers!

@brianehlert
Collaborator

After a restart of the upstream server (backend), which has a temporary IP address, our NGINX Ingress continues to send traffic to the old IP address, which is no longer present.

NGINX Ingress Controller configures upstreams using EndpointSlices, and only with those endpoints that are also 'ready'.
The exception to this would be ExternalName services; these rely on DNS resolution and the NGINX resolver.

Can you help me understand your scenario a bit deeper?
Are these back-end K8s services? ExternalName services?

If it is a timing issue, we recommend using a health check:
https://docs.nginx.com/nginx-ingress-controller/configuration/virtualserver-and-virtualserverroute-resources/#upstreamhealthcheck
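For illustration, a minimal healthCheck block on the upstream might look like the sketch below (NGINX Plus only; the path and timing values are placeholders, not taken from your setup):

upstreams:
- name: backend
  service: pn-backend
  port: 4000
  healthCheck:
    enable: true
    path: /healthz      # placeholder; point this at a real health endpoint of the backend
    interval: 10s
    fails: 1
    passes: 1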

@ibadullaev-inc4
Author

ibadullaev-inc4 commented Apr 25, 2024

Hi, thank you for your response.
The upstream servers are k8s pods.
ExternalName services? No, we do not use ExternalName services.

[nariman@notebook new-only-back]$ kubectl -n public get svc pn-backend -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"pn-backend","namespace":"public"},"spec":{"ports":[{"name":"http","port":4000,"protocol":"TCP","targetPort":4000},{"name":"grpc","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app":"pn-backend"},"type":"NodePort"}}
    kubernetes.io/change-cause: kubectl edit svc pn-backend --context=fra --namespace=public
      --record=true
  creationTimestamp: "2023-04-03T13:36:05Z"
  name: pn-backend
  namespace: public
  resourceVersion: "227720878"
  uid: eb76e588-b3a4-4299-bf85-ee1e6e818ada
spec:
  clusterIP: 10.245.106.220
  clusterIPs:
  - 10.245.106.220
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: 30569
    port: 4000
    protocol: TCP
    targetPort: 4000
  - name: ws
    nodePort: 30073
    port: 4001
    protocol: TCP
    targetPort: 4001
  - name: grpc
    nodePort: 30022
    port: 9090
    protocol: TCP
    targetPort: 9090
  - name: web
    nodePort: 30754
    port: 9091
    protocol: TCP
    targetPort: 9091
  - name: web-ws
    nodePort: 30693
    port: 9092
    protocol: TCP
    targetPort: 9092
  selector:
    app: pn-backend
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
[nariman@notebook new-only-back]$ kubectl -n public get endpointslices.discovery.k8s.io pn-backend-h9fwc -o yaml
addressType: IPv4
apiVersion: discovery.k8s.io/v1
endpoints:
- addresses:
  - 10.244.48.184
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jjfif
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-5sqcb
    namespace: public
    uid: 363edb7a-aa40-4468-ba30-5cfaf712262a
- addresses:
  - 10.244.0.11
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jggse
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-9szzn
    namespace: public
    uid: 350bbdef-de2d-455d-8841-306337e8ad47
- addresses:
  - 10.244.40.54
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jo28y
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-9hztz
    namespace: public
    uid: c82f3d93-9a50-4f7e-9ce7-cef6d55cf2ef
- addresses:
  - 10.244.2.141
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-j6x9c
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-gfzmd
    namespace: public
    uid: 03245b86-2905-4741-8674-7239efa3175c
- addresses:
  - 10.244.4.89
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-j6x98
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-pj9ph
    namespace: public
    uid: b54f1bcc-af6a-4e46-b7e6-7c923a3989c3
- addresses:
  - 10.244.3.157
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-j6x9u
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-k8vgf
    namespace: public
    uid: 6e83f356-836c-4872-b125-884e7f4c1d76
- addresses:
  - 10.244.51.85
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jb9u4
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-mxtq4
    namespace: public
    uid: 60167a0c-90a7-400a-8797-8f743a99a751
- addresses:
  - 10.244.52.54
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jb9c5
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-zvslm
    namespace: public
    uid: 47e7e6f2-8473-4ab9-86d8-2d87180c83f9
- addresses:
  - 10.244.49.232
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jjy7i
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-6m87p
    namespace: public
    uid: 0462dc9d-bd8c-4048-b3d9-1464c3b617a1
- addresses:
  - 10.244.0.163
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jggsa
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-rvhfc
    namespace: public
    uid: f111baaf-5701-400e-b696-49a4be3dc803
- addresses:
  - 10.244.11.130
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jols6
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-fvk98
    namespace: public
    uid: 09393517-cc36-4315-82b5-7a80c804bbfc
- addresses:
  - 10.244.24.99
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jols2
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-2b5fc
    namespace: public
    uid: e1985bea-331c-4235-9a16-9d9f7d9d01e9
- addresses:
  - 10.244.47.216
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jjfiq
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-5h56d
    namespace: public
    uid: f246aeaf-0408-450a-822c-28e861eea19b
- addresses:
  - 10.244.39.214
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jo2ns
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-t99pc
    namespace: public
    uid: 8508225f-8cf2-48e6-bdff-9304964c82bf
- addresses:
  - 10.244.37.252
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jol98
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-7dcj8
    namespace: public
    uid: 21e8293b-4a51-4701-9c42-d45619919c95
- addresses:
  - 10.244.43.186
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jo28r
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-h45nr
    namespace: public
    uid: 17a9ff72-b098-4f7f-a17e-a154a987113a
- addresses:
  - 10.244.42.195
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jo28j
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-84vqk
    namespace: public
    uid: c3db49b2-a931-4647-ac50-0ead08719f60
- addresses:
  - 10.244.51.146
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jb9ui
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-bfk8d
    namespace: public
    uid: 30f9461c-69f6-43b9-9be2-8489e8bcd13b
- addresses:
  - 10.244.24.217
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jolii
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-bxvxn
    namespace: public
    uid: 94b6cfdb-dbe5-48e0-ae06-a333be26e9c5
kind: EndpointSlice
metadata:
  annotations:
    endpoints.kubernetes.io/last-change-trigger-time: "2024-04-24T22:12:00Z"
  creationTimestamp: "2023-04-03T13:36:05Z"
  generateName: pn-backend-
  generation: 40139
  labels:
    endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io
    kubernetes.io/service-name: pn-backend
  name: pn-backend-h9fwc
  namespace: public
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    controller: true
    kind: Service
    name: pn-backend
    uid: eb76e588-b3a4-4299-bf85-ee1e6e818ada
  resourceVersion: "249331656"
  uid: cb4a980f-e75f-4dfe-a26f-aa472daa0b93
ports:
- name: grpc
  port: 9090
  protocol: TCP
- name: web
  port: 9091
  protocol: TCP
- name: web-ws
  port: 9092
  protocol: TCP
- name: ws
  port: 4001
  protocol: TCP
- name: http
  port: 4000
  protocol: TCP

@ibadullaev-inc4
Author

ibadullaev-inc4 commented May 7, 2024

Hi @brianehlert

Thank you for your previous response.

Is it not possible to add a health check if I don't use NGINX Plus?

Warning  Rejected  28s   nginx-ingress-controller  VirtualServer public/pn-front-prod-arbitrum-nova-rpc was rejected with error: spec.upstreams[0].healthCheck: Forbidden: active health checks are only supported in NGINX Plus
[nariman@notebook nginx-health]$ kubectl -n public get virtualserver pn-front-prod-arbitrum-nova-rpc 
NAME                              STATE     HOST                               IP    PORTS   AGE
pn-front-prod-arbitrum-nova-rpc   Invalid   arbitrum-nova-rpc.example.com                 41d

@danielnginx danielnginx added the needs more info Issues that require more information label May 7, 2024
@shaun-nx shaun-nx self-assigned this May 7, 2024
@brianehlert
Collaborator

Passive health checks are always present, but active health checks are a capability specific to NGINX Plus.
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/
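For reference, the passive checks are the max_fails / fail_timeout parameters already visible in the generated upstream block from your config dump, for example:

server 10.244.0.11:4000 max_fails=1 fail_timeout=10s max_conns=0;

With those values NGINX skips a peer for 10s after a single failed attempt and then tries it again.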

By default, NGINX Ingress Controller won't add pods to the service upstream group until the pod reports ready.
So, the alternative to using the enterprise version of this project is to improve the readiness probe on your service pods.
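As a sketch (the path and numbers are placeholders, not your actual probe), a stricter readiness probe on the backend pods would look something like:

readinessProbe:
  httpGet:
    path: /readyz       # placeholder; use an endpoint that only returns 200 when the pod can really serve traffic
    port: 4000
  periodSeconds: 5
  failureThreshold: 2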

@ibadullaev-inc4
Author

ibadullaev-inc4 commented May 8, 2024

Hello,

Yes, my deployment is configured with liveness and readiness probes.
Also, as you mentioned, passive health checks are automatically enabled by the NGINX virtual host template.
But I am facing this problem after a pod is no longer present:
NGINX keeps trying to send traffic to a pod that died 10-20 minutes earlier.

[nariman@notebook nginx-ingress]$ kubectl -n public get deployments.apps pn-backend -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: pn-backend
    tags.datadoghq.com/service: pn-backend
  name: pn-backend
  namespace: public
spec:
  selector:
    matchLabels:
      app: pn-backend
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 10%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        ad.datadoghq.com/pn-backend.logs: '[{"source":"pn-backend","service":"pn-backend","auto_multi_line_detection":true}]'
      creationTimestamp: null
      labels:
        app: pn-backend
        tags.datadoghq.com/env: prod-fra
        tags.datadoghq.com/service: pn-backend
    spec:
      containers:
      - name: pn-backend
        image: public/pn-backend:fe1db1c
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /api/healthcheck
            port: 4000
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 3
        ports:
        - containerPort: 4000
          protocol: TCP
        - containerPort: 9090
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /api/healthcheck
            port: 4000
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        resources:
          limits:
            cpu: "12"
            memory: 16Gi
          requests:
            cpu: "10"
            memory: 8Gi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/run/datadog
          name: apmsocketpath
        - mountPath: /app/geo_config.json
          name: cluster-configs-volume
          readOnly: true
          subPath: geo_config.json
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: docker-registry
      initContainers:
      - command:
        - /bin/sh
        - -c
        - |
          sysctl -w net.core.somaxconn=64000
          sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        image: busybox
        imagePullPolicy: Always
        name: init-sysctl
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /var/run/datadog/
          type: ""
        name: apmsocketpath
      - name: cluster-configs-volume
        secret:
          defaultMode: 420
          secretName: cluster-configs-f9c45ad972b6b8559cdc924581631d693f53d5d0

@brianehlert
Collaborator

brianehlert commented May 8, 2024

The deployment doesn't give us much information to assist with.
We would need you to share your configuration resources: the VirtualServer, VirtualServerRoute, TransportServer, or Ingress.

If a pod of a service no longer exists, it should be removed from the ingress controller's upstream group for that service,
unless there is a configuration error (such as through snippets or customizations) that is preventing NGINX from being updated.
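One way to check for that (the namespace, label, and file name below are taken from this thread; <ingress-controller-pod> is a placeholder) is to compare what Kubernetes reports as ready endpoints with the upstream block NGINX is actually running:

kubectl -n public get endpointslices -l kubernetes.io/service-name=pn-backend -o wide
kubectl -n nginx exec -it <ingress-controller-pod> -- grep 'server 10.244' /etc/nginx/conf.d/vs_public_pn-front-prod-arbitrum-sepolia-rpc.conf

If an address shows up in the second command but not in the first, the controller is failing to reload NGINX and its logs should say why.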

@ibadullaev-inc4
Author

ibadullaev-inc4 commented May 9, 2024

Hi, thank you for the fast response.
If I forgot something, please let me know.

  1. We installed our NGINX Ingress Controller via the Helm chart:
[nariman@notebook ~]$ helm ls -n nginx
NAME   	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART              	APP VERSION
ingress	nginx    	19      	2024-03-29 12:17:00.734183664 +0400 +04	deployed	nginx-ingress-1.1.0	3.4.0
  2. NGINX ConfigMap:
[nariman@notebook ~]$ kubectl -n nginx get cm ingress-nginx-ingress -o yaml
apiVersion: v1
data:
  client-max-body-size: 100m
  http2: "true"
  log-format: date="$time_iso8601" status=$status request_completion=$request_completion
    msec=$msec connections_active=$connections_active connections_reading=$connections_reading
    connections_writing=$connections_writing connections_waiting=$connections_waiting
    connection=$connection connection_requests=$connection_requests connection_time=$connection_time
    client=$remote_addr method=$request_method request="$request" request_length=$request_length
    status=$status bytes_sent=$bytes_sent body_bytes_sent=$body_bytes_sent referer=$http_referer
    user_agent="$http_user_agent" upstream_addr=$upstream_addr upstream_status=$upstream_status
    request_time=$request_time upstream_response_time=$upstream_response_time upstream_connect_time=$upstream_connect_time
    upstream_header_time=$upstream_header_time request_body="$request_body host="$host"
    user_ip="$http_x_forwarded_for"
  log-format-escaping: json
  proxy-buffering: "false"
  proxy-request-buffering: "off"
  redirect-to-https: "true"
  ssl_buffer_size: 4k
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: ingress
    meta.helm.sh/release-namespace: nginx
  creationTimestamp: "2024-03-26T14:37:04Z"
  labels:
    app.kubernetes.io/instance: ingress
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/version: 3.4.0
    helm.sh/chart: nginx-ingress-1.1.0
  name: ingress-nginx-ingress
  namespace: nginx
  resourceVersion: "222418677"
  uid: 958b1e7e-d36e-44cf-bfc1-d5aee88b767d
  3. Service in the nginx namespace; we do not use the LoadBalancer type because a Cloudflare tunnel sits in front of NGINX:
[nariman@notebook ~]$ kubectl -n nginx get svc
NAME                                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-ingress-controller           NodePort    10.245.73.204   <none>        80:32098/TCP,443:32685/TCP   43d
ingress-nginx-ingress-prometheus-service   ClusterIP   None            <none>        9113/TCP                     43d
  4. NGINX deployment:
[nariman@notebook ~]$ kubectl -n nginx get deployments.apps ingress-nginx-ingress-controller -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "10"
    meta.helm.sh/release-name: ingress
    meta.helm.sh/release-namespace: nginx
  creationTimestamp: "2024-03-26T14:37:05Z"
  generation: 158
  labels:
    app.kubernetes.io/instance: ingress
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/version: 3.4.0
    helm.sh/chart: nginx-ingress-1.1.0
  name: ingress-nginx-ingress-controller
  namespace: nginx
  resourceVersion: "250051917"
  uid: 6cda1863-0a5a-4b90-b419-74e6f26540b2
spec:
  progressDeadlineSeconds: 600
  replicas: 10
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: ingress
      app.kubernetes.io/name: nginx-ingress
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2024-03-29T10:34:14+04:00"
        prometheus.io/port: "9113"
        prometheus.io/scheme: http
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: ingress
        app.kubernetes.io/name: nginx-ingress
    spec:
      automountServiceAccountToken: true
      containers:
      - args:
        - -nginx-plus=false
        - -nginx-reload-timeout=60000
        - -enable-app-protect=false
        - -enable-app-protect-dos=false
        - -nginx-configmaps=$(POD_NAMESPACE)/ingress-nginx-ingress
        - -ingress-class=nginx
        - -health-status=false
        - -health-status-uri=/nginx-health
        - -nginx-debug=false
        - -v=1
        - -nginx-status=true
        - -nginx-status-port=8080
        - -nginx-status-allow-cidrs=127.0.0.1
        - -report-ingress-status
        - -enable-leader-election=true
        - -leader-election-lock-name=nginx-ingress-leader
        - -enable-prometheus-metrics=true
        - -prometheus-metrics-listen-port=9113
        - -prometheus-tls-secret=
        - -enable-service-insight=false
        - -service-insight-listen-port=9114
        - -service-insight-tls-secret=
        - -enable-custom-resources=true
        - -enable-snippets=false
        - -include-year=false
        - -disable-ipv6=false
        - -enable-tls-passthrough=false
        - -enable-cert-manager=false
        - -enable-oidc=false
        - -enable-external-dns=false
        - -default-http-listener-port=80
        - -default-https-listener-port=443
        - -ready-status=true
        - -ready-status-port=8081
        - -enable-latency-metrics=true
        - -ssl-dynamic-reload=true
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        image: nginx/nginx-ingress:3.4.0
        imagePullPolicy: IfNotPresent
        name: nginx-ingress
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 9113
          name: prometheus
          protocol: TCP
        - containerPort: 8081
          name: readiness-port
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /nginx-ready
            port: readiness-port
            scheme: HTTP
          periodSeconds: 1
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: "2"
            memory: 2Gi
          requests:
            cpu: "1"
            memory: 512Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          readOnlyRootFilesystem: false
          runAsNonRoot: true
          runAsUser: 101
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      serviceAccount: ingress-nginx-ingress
      serviceAccountName: ingress-nginx-ingress
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 10
  conditions:
  - lastTransitionTime: "2024-03-26T14:37:05Z"
    lastUpdateTime: "2024-03-29T06:34:21Z"
    message: ReplicaSet "ingress-nginx-ingress-controller-565c6849d5" has successfully
      progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2024-04-10T16:13:10Z"
    lastUpdateTime: "2024-04-10T16:13:10Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 158
  readyReplicas: 10
  replicas: 10
  updatedReplicas: 10

The following manifests relate to our service:

  1. VirtualServer manifest
[nariman@notebook ~]$ kubectl -n public get virtualservers.k8s.nginx.org pn-front-prod-arbitrum-sepolia-rpc 
NAME                                 STATE   HOST                                  IP    PORTS   AGE
pn-front-prod-arbitrum-sepolia-rpc   Valid   arbitrum-sepolia-rpc.public.com                 43d
[nariman@notebook ~]$ kubectl -n public get virtualservers.k8s.nginx.org pn-front-prod-arbitrum-sepolia-rpc -o yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"k8s.nginx.org/v1","kind":"VirtualServer","metadata":{"annotations":{},"name":"pn-front-prod-arbitrum-sepolia-rpc","namespace":"public"},"spec":{"host":"arbitrum-sepolia-rpc.public.com","routes":[{"action":{"pass":"frontend"},"matches":[{"action":{"redirect":{"code":301,"url":"https://arbitrum-sepolia-rpc.public.com/"}},"conditions":[{"value":"GET","variable":"$request_method"}]}],"path":"/api/metrics"},{"action":{"pass":"frontend"},"matches":[{"action":{"pass":"frontend"},"conditions":[{"value":"GET","variable":"$request_method"}]}],"path":"/api"},{"action":{"pass":"frontend"},"matches":[{"action":{"pass":"frontend"},"conditions":[{"value":"GET","variable":"$request_method"}]}],"path":"/favicon.ico"},{"action":{"pass":"frontend"},"matches":[{"action":{"pass":"frontend"},"conditions":[{"value":"GET","variable":"$request_method"}]}],"path":"/platforms"},{"action":{"pass":"frontend"},"matches":[{"action":{"pass":"frontend"},"conditions":[{"value":"GET","variable":"$request_method"}]}],"path":"/_next"},{"action":{"pass":"backend"},"matches":[{"action":{"pass":"backend"},"conditions":[{"header":"Upgrade","value":"websocket"}]},{"action":{"proxy":{"rewritePath":"/arbitrum-sepolia","upstream":"frontend"}},"conditions":[{"value":"GET","variable":"$request_method"}]}],"path":"/"}],"server-snippets":"proxy_request_buffering off;\nssl_buffer_size 4k;\n","tls":{"secret":"public.com"},"upstreams":[{"name":"backend","port":4000,"service":"pn-backend"},{"name":"frontend","port":3000,"service":"pn-frontend"}]}}
  creationTimestamp: "2024-03-26T14:42:08Z"
  generation: 1
  name: pn-front-prod-arbitrum-sepolia-rpc
  namespace: public
  resourceVersion: "222416877"
  uid: e616e0dc-3433-4be4-807d-00786e8a217d
spec:
  host: arbitrum-sepolia-rpc.public.com
  routes:
  - action:
      pass: frontend
    matches:
    - action:
        redirect:
          code: 301
          url: https://arbitrum-sepolia-rpc.publicn.com/
      conditions:
      - value: GET
        variable: $request_method
    path: /api/metrics
  - action:
      pass: frontend
    matches:
    - action:
        pass: frontend
      conditions:
      - value: GET
        variable: $request_method
    path: /api
  - action:
      pass: frontend
    matches:
    - action:
        pass: frontend
      conditions:
      - value: GET
        variable: $request_method
    path: /favicon.ico
  - action:
      pass: frontend
    matches:
    - action:
        pass: frontend
      conditions:
      - value: GET
        variable: $request_method
    path: /platforms
  - action:
      pass: frontend
    matches:
    - action:
        pass: frontend
      conditions:
      - value: GET
        variable: $request_method
    path: /_next
  - action:
      pass: backend
    matches:
    - action:
        pass: backend
      conditions:
      - header: Upgrade
        value: websocket
    - action:
        proxy:
          rewritePath: /arbitrum-sepolia
          upstream: frontend
      conditions:
      - value: GET
        variable: $request_method
    path: /
  server-snippets: |
    proxy_request_buffering off;
    ssl_buffer_size 4k;
  tls:
    secret: public.com
  upstreams:
  - name: backend
    port: 4000
    service: pn-backend
  - name: frontend
    port: 3000
    service: pn-frontend
status:
  message: 'Configuration for public/pn-front-prod-arbitrum-sepolia-rpc was added
    or updated '
  reason: AddedOrUpdated
  state: Valid
  • virtualserverroutes
[nariman@notebook ~]$ kubectl -n public get virtualserverroutes.k8s.nginx.org 
No resources found in public namespace.
  • transportservers
[nariman@notebook ~]$ kubectl -n public get transportservers.k8s.nginx.org 
No resources found in public namespace.
  • ingress
[nariman@notebook ~]$ kubectl -n public get ingress
No resources found in public namespace.
[nariman@notebook ~]$ kubectl -n public get svc
NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                      AGE
pn-backend              NodePort   10.245.106.220   <none>        4000:30569/TCP,4001:30073/TCP,9090:30022/TCP,9091:30754/TCP,9092:30693/TCP   401d
pn-connections-broker   NodePort   10.245.65.60     <none>        8888:30605/TCP,9999:32137/TCP                                                71d
pn-cron                 NodePort   10.245.206.66    <none>        5005:32416/TCP                                                               196d
pn-frontend             NodePort   10.245.253.158   <none>        3000:31404/TCP                                                               401d
pn-internal-stats       NodePort   10.245.116.36    <none>        4000:30191/TCP,4444:32162/TCP                                                174d
