-
i/o timeout is almost always an underlying network issue, such as a network policy or firewall blocking traffic, or some other lack of connectivity. I know you said that's not the issue here, but I really cannot imagine much else that could cause it. The one other thing could be iptables rules on the node. This traffic should bypass them, but maybe something is going wrong there. You could view the hit counters on the iptables rules (https://stackoverflow.com/questions/17548383/how-can-i-check-the-hit-count-for-each-rule-in-iptables, etc.), perhaps.
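For reference, a quick way to read those counters (a minimal sketch, assuming shell access to the node; table and chain names depend on your CNI and kube-proxy setup):

```sh
# List filter-table rules with per-rule packet/byte hit counters
iptables -L -n -v

# kube-proxy / CNI rules usually live in the nat table as well
iptables -t nat -L -n -v

# Zero all counters, reproduce the timeout, then re-list to see
# which rules the traffic is actually hitting
iptables -Z
```

Note that Istio's own redirect rules are installed inside the pod's network namespace, not the node's root namespace, so from the node you would need to enter that namespace (e.g. with nsenter) to inspect them.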
-
How can I diagnose this issue?
Two days ago, this wasn't a problem. This pod had been up for 74 days with no issues, then yesterday it started getting this. Nothing in the workload definitions has changed since November.
It is the ONLY pod in the entire cluster with the issue. I tried bouncing both the workload and the Istio pods, then recycled every node in the cluster, which rescheduled ALL workloads.
It is still the only pod with the issue.
I'm running on managed Kubernetes on DigitalOcean.
Removing the Istio label on the namespace to remove the sidecar (roughly the commands below) results in everything working fine, so it's not a network issue on the underlying infrastructure.
It works fine on 41 pods, just not this one.
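For reference, the bounce and label-removal steps were roughly the following (a sketch with placeholder names, assuming the default istio-injection label and a standard istiod install):

```sh
# Restart the workload and the Istio control plane
# (<my-app> and <my-namespace> are placeholders)
kubectl rollout restart deployment/<my-app> -n <my-namespace>
kubectl rollout restart deployment/istiod -n istio-system

# Remove the sidecar-injection label (the trailing "-" deletes a label),
# then restart so the pods come back up without the istio-proxy sidecar
kubectl label namespace <my-namespace> istio-injection-
kubectl rollout restart deployment/<my-app> -n <my-namespace>
```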