
Exiting due to SVC_UNREACHABLE: service not available: no running pod for service story-service found #17027

Open
AKASH-2998 opened this issue Aug 9, 2023 · 12 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@AKASH-2998

What Happened?

I tried to run the service in minikube, but unfortunately I got the error message below. I have attached the error details. TIA.

akash@Master-AK:~/Downloads/kub-data-01-starting-setup$ kubectl get service
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP      10.96.0.1      <none>        443/TCP        65m
story-service   LoadBalancer   10.98.48.153   <pending>     80:31525/TCP   15s
akash@Master-AK:~/Downloads/kub-data-01-starting-setup$ minikube service story-service
|-----------|---------------|-------------|-----------------------------|
| NAMESPACE |     NAME      | TARGET PORT |             URL             |
|-----------|---------------|-------------|-----------------------------|
| default   | story-service |          80 | http://192.168.59.103:31525 |
|-----------|---------------|-------------|-----------------------------|

❌  Exiting due to SVC_UNREACHABLE: service not available: no running pod for service story-service found

Attach the log file

I ran the minikube logs --file=log.txt command, but it didn't produce any logs. I have attached that scenario below as well. TIA.

akash@Master-AK:~/Downloads/kub-data-01-starting-setup$ minikube logs --file=log.txt
akash@Master-AK:~/Downloads/kub-data-01-starting-setup$ minikube version
minikube version: v1.31.1
commit: fd3f3801765d093a485d255043149f92ec0a695f
akash@Master-AK:~/Downloads/kub-data-01-starting-setup$ 
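If minikube logs --file writes nothing, a plain shell redirect is another way to capture the same output (a workaround sketch; assumes a POSIX shell):

minikube logs > log.txt 2>&1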

Operating System

Ubuntu

Driver

VirtualBox

@AmitBhandari7777

Try checking the pod's logs:

kubectl logs <podname>

Let me know if it helps.
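If no pod shows up at all, it helps to look one level down (the pod name in angle brackets is a placeholder; run these in the namespace where the Deployment was applied):

kubectl get pods --show-labels        # is any pod created, and what labels does it carry?
kubectl describe pod <pod-name>       # the Events section shows image pull or scheduling errors
kubectl get endpoints story-service   # empty output means no pod matches the service selector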

@imouahrani

Hi, did you resolve this problem, please?

@vineeth-r15

Facing the same issue.

@henrikzhupani

I faced the same issue; the solution that worked for me:

  • The binding between the service and the pod was not correct. In the YAML files, the label you give the pod in the Deployment's template must be the same name used in the Service's spec - selector (app: name).

example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-dep
  labels:
    app: demo-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: test/demo:latest
          ports:
            - containerPort: {testPort}


apiVersion: v1
kind: Service
metadata:
  name: demo-k8s-service
spec:
  selector:
    app: demo
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: {testPort}
      targetPort: {testPort}
      nodePort: {testNodePort}
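Assuming the example above is applied in the default namespace, a quick way to confirm the selector actually matches a pod:

kubectl get pods -l app=demo
kubectl get endpoints demo-k8s-service   # should list a pod IP once the labels line up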

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 14, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 15, 2024
@harshan89

This usually happens when the service cannot find the deployment's pods; please refer to the simple setup provided below.

your app

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  selector:
    matchLabels:
      run: webapp
      app: webapp
  replicas: 1
  template:
    metadata:
      labels:
        run: webapp
        app: webapp
    spec:
      containers:
        - name: webapp
          image: webapp:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 80

your service

apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 31000
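Assuming both manifests above are saved as deployment.yaml and service.yaml (file names are placeholders), they can be applied and the NodePort URL retrieved with:

kubectl apply -f deployment.yaml -f service.yaml
minikube service webapp --url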

@francoo98

Services and pods are matched by comparing the service's spec.selector.app to the pod's app label.
You should make sure that service.spec.selector.app and deployment.spec.template.metadata.labels.app have the same value.
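For example, using the resource names from the earlier example (an assumption), the two values can be compared directly:

kubectl get service demo-k8s-service -o jsonpath='{.spec.selector.app}{"\n"}'
kubectl get deployment test-dep -o jsonpath='{.spec.template.metadata.labels.app}{"\n"}'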

@yvonnekw

The above suggestion works for me.

@vermabhaskar99

I am also facing the same issue. Any suggestions for this?

Deployment.yaml -

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: nginx
    tier: front-end
spec:
  replicas: 3
  selector:
    matchLabels:
      env: production # this should match the template labels' env
  template:
    metadata:
      name: bhaskar-pod
      labels:
        app: my-app # note this label; it is also used in the Service selector
        tier: front-end
        env: production # this should match matchLabels' env
    spec:
      containers:
        - name: nginx-container
          image: nginx

Service.yaml -

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: my-app
    tier: front-end
    env: production
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 30004
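With the config above, one check worth running is whether any pod actually carries all three labels the Service selects on (default namespace assumed):

kubectl get pods -l app=my-app,tier=front-end,env=production --show-labels
kubectl get endpoints myapp-service   # stays empty until the selector matches a running pod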

@henrikzhupani

Hey @vermabhaskar99 - make sure the
Deployment - spec: containers: - name: nginx-container
has the same name as the
Service - spec - selector - app: my-app

They both should have the same name, and yours don't.

@brian-henderson69

I had this issue and my names were correct and matched. However, I had another entry (tier: front-end) which had no proper match. Once I removed it, the service worked properly. Sometimes it's not what you are looking at directly that causes the issue. Hope this helps...
