
Agent pod failed #137

Open
barryzhounb opened this issue Dec 31, 2020 · 4 comments

Comments

@barryzhounb

Hi Team,

I have installed version 0.1.1

kubectl debug --version
debug version v0.0.0-master+$Format:%h$

It failed to debug my pods, so I checked the agent pods; they did not run successfully. Here is their status:

oc get pods -n default
NAME                                                   READY     STATUS             RESTARTS   AGE
debug-agent-pod-563e0614-4b0a-11eb-be05-dca904978b2f   0/1       ImagePullBackOff   0          44m
debug-agent-pod-5dc047e6-4b0d-11eb-9bc4-dca904978b2f   0/1       PodFitsHostPorts   0          22m
debug-agent-pod-a6c36144-4b0d-11eb-b6a7-dca904978b2f   0/1       PodFitsHostPorts   0          20m

Could you let me know:
(1) It failed to pull the image "aylei/debug-agent:latest". How can I fix it? What credentials are needed to pull the image?
(2) How can I fix PodFitsHostPorts?

In addition, below is my configuration:

# debug agent listening port(outside container)
# default to 10027
agentPort: 10027

# whether using agentless mode
# default to true
agentless: true
# namespace of debug-agent pod, used in agentless mode
# default to 'default'
agentPodNamespace: default
# prefix of debug-agent pod, used in agentless mode
# default to  'debug-agent-pod'
agentPodNamePrefix: debug-agent-pod
# image of debug-agent pod, used in agentless mode
# default to 'aylei/debug-agent:latest'
agentImage: aylei/debug-agent:latest

# daemonset name of the debug-agent, used in port-forward
# default to 'debug-agent'
debugAgentDaemonset: debug-agent
# daemonset namespace of the debug-agent, used in port-forward
# default to 'default'
debugAgentNamespace: kube-system
# whether using port-forward when connecting debug-agent
# default true
portForward: true
# image of the debug container
# default as showed
image: nicolaka/netshoot:latest
# start command of the debug container
# default ['bash']
command:
- '/bin/bash'
- '-l'
# private docker registry auth kubernetes secret
# default registrySecretName is kubectl-debug-registry-secret
# default registrySecretNamespace is default
registrySecretName: my-debug-secret
registrySecretNamespace: debug
# in agentless mode, you can set the agent pod's resource limits/requests:
# default is not set
agentCpuRequests: ""
agentCpuLimits: ""
agentMemoryRequests: ""
agentMemoryLimits: ""
# in fork mode, if you want the copied pod to retain the labels of the original pod, you can change this parameter
# format is []string
# If not set, this parameter is empty by default (meaning no labels of the original pod are retained, and the copied pod's labels are empty)
forkPodRetainLabels: []
# You can disable SSL certificate check when communicating with image registry by
# setting registrySkipTLSVerify to true.
registrySkipTLSVerify: false
# You can set the log level with the verbosity setting
verbosity: 0
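For reference, the kubectl-debug README states that the plugin reads its configuration from ~/.kube/debug-config. A minimal sketch of installing a subset of the settings above at that path (the exact keys to include depend on your setup):

```shell
# kubectl-debug looks for its config file at ~/.kube/debug-config.
# Writing the YAML there makes the plugin pick it up on the next run.
mkdir -p ~/.kube
cat > ~/.kube/debug-config <<'EOF'
agentless: true
agentPodNamespace: default
agentImage: aylei/debug-agent:latest
portForward: true
EOF
```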

@aylei
Owner

aylei commented Jan 2, 2021

(1) Failed to Pulling image "aylei/debug-agent:latest", how to fix it? Or What is the credential for pulling down image?

It might be that your node did not have network access to Docker Hub, or it was rate-limited at that time. You can use kubectl describe po <pod-name> to find out the exact reason.
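As a concrete sketch (the pod name is taken from the listing earlier in the thread; these commands need a live cluster):

```shell
# Full pod detail; the Events section at the bottom explains why the
# image pull failed (no network, registry rate limit, auth, typo, ...).
kubectl describe pod debug-agent-pod-563e0614-4b0a-11eb-be05-dca904978b2f -n default

# Or pull just the events for a quicker read:
kubectl get events -n default \
  --field-selector involvedObject.name=debug-agent-pod-563e0614-4b0a-11eb-be05-dca904978b2f
```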

How to fix PodFitsHostPorts?

The agent port claims a host port, so you can only run one agent pod per node; in your case the port is held by the pod suffering ImagePullBackOff.
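Since each agent pod binds hostPort 10027 and a node can host only one, a possible way forward (pod names taken from the listing above) is to delete the stuck agent pods so a fresh one can be scheduled:

```shell
# Each agent pod claims hostPort 10027, so a node can host only one.
# Removing the stuck pods frees the port for the next debug session.
kubectl delete pod -n default \
  debug-agent-pod-563e0614-4b0a-11eb-be05-dca904978b2f \
  debug-agent-pod-5dc047e6-4b0d-11eb-9bc4-dca904978b2f \
  debug-agent-pod-a6c36144-4b0d-11eb-b6a7-dca904978b2f
```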

@barryzhounb
Author

Thanks @aylei

Now I fixed the above issues.
When I ran the command "kubectl debug ecgateway-86d67cb79d-8vhnc", a new issue came up. Any ideas on how to fix it?

Agent Pod info: [Name:debug-agent-pod-d0bebc4e-4d62-11eb-9f4f-dca904978b2f, Namespace:default, Image:aylei/debug-agent:latest, HostPort:10027, ContainerPort:10027]
Waiting for pod debug-agent-pod-d0bebc4e-4d62-11eb-9f4f-dca904978b2f to run...
pod ecgateway-86d67cb79d-8vhnc PodIP 10.254.16.200, agentPodIP 10.16.36.245
wait for forward port to debug agent ready...
Forwarding from 127.0.0.1:10027 -> 10027
Forwarding from [::1]:10027 -> 10027
Handling connection for 10027
Start deleting agent pod ecgateway-86d67cb79d-8vhnc
end port-forward...
error execute remote, unable to upgrade connection: Failed to construct RuntimeManager.  Error- only docker and containerd container runtimes are suppored right now
error: unable to upgrade connection: Failed to construct RuntimeManager.  Error- only docker and containerd container runtimes are suppored right now

@aylei
Owner

aylei commented Jan 3, 2021

What's your container runtime?
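For readers unsure how to answer this: the runtime each node uses is reported by kubectl in the CONTAINER-RUNTIME column (these commands need a live cluster):

```shell
# The wide node listing includes a CONTAINER-RUNTIME column,
# e.g. docker://19.3.8, containerd://1.4.3, or cri-o://1.20.0.
kubectl get nodes -o wide

# Or query the field directly from the node status:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```

Given the error above, a cri-o:// value here would explain the failure, since the tool only supports docker and containerd.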

@barryzhounb
Author

Sorry, I don't know what a "container runtime" is, so I can't answer that. I used the default configuration and expected it to work, but unfortunately it failed. Are there any settings I need to change to fix this? Sorry for taking so much of your time; would it be possible to have a video chat (Webex or Zoom) to work through this issue? I live in Canada.
