
Program should stop retrying port-forwarding on Exit Code 1 #11

Open
pophilpo opened this issue Oct 27, 2023 · 2 comments
Labels
enhancement New feature or request

Comments

@pophilpo
Contributor

The current implementation retries port-forwarding regardless of the exit code returned by the kubectl port-forward command. This leads to unnecessary retries, especially when the exit code is 1, which indicates that kubectl itself ran but the port-forwarding operation failed for a non-retryable reason, such as insufficient permissions.

If the exit code for the kubectl port-forward command is 1, the program should stop retrying and possibly alert the user, as this is typically a non-recoverable issue.
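The decision could be isolated in a small predicate. Below is a Rust sketch (the project's actual language and structure may differ; `should_retry` is an illustrative name, not an existing function):

```rust
/// Decide whether a finished `kubectl port-forward` child warrants another
/// retry, based on its exit code. Illustrative sketch only.
fn should_retry(exit_code: Option<i32>) -> bool {
    match exit_code {
        // Exit code 0: clean shutdown, nothing to retry.
        Some(0) => false,
        // Exit code 1: kubectl ran but the forward failed for a
        // non-recoverable reason (permissions, port already bound, ...).
        Some(1) => false,
        // Any other code, or None (process killed by a signal on Unix),
        // is treated as potentially transient: retry.
        _ => true,
    }
}
```

On `std::process::ExitStatus`, `code()` returns `None` when the child was terminated by a signal, which is why the `Option<i32>` case is worth handling explicitly.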

@pophilpo
Contributor Author

@sunsided do we really want to handle these separately? Again, I feel like these are non-recoverable errors that we can just surface with a message and stop.

TODO: Handle `Error from server (NotFound): pods "foo-78b4c5d554-6z55j" not found")`
TODO: Handle `Unable to listen on port 5012: Listeners failed to create with the following errors: [unable to create listener: Error listen tcp4 127.1.0.1:5012: bind: address already in use]`

Currently my PR handles these as well, while also providing a descriptive error message:

```text
Spawning child processes:
#2: Unable to listen on port 5010: Listeners failed to create with the following errors: [unable to create listener: Error listen tcp4 127.0.0.1:5010: bind: address already in use]
#2: error: unable to listen on any of the requested ports: [{5010 80}]
#2: Process exited with exit status: 1 - shutting process down
```

@sunsided
Owner

The core issue is that we forward to a service, not to a pod directly. If all the available pods of a service disappear temporarily, say because they are being recreated, then we do want to keep retrying until they're back. If we can guarantee that kubectl does not exit with code 1 (or exits with a different non-zero code) when it could indeed recover from this situation, we're good. Otherwise we may want to introduce an option/configuration such as `--max-retries=N`.
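A bounded retry loop along the lines of the proposed `--max-retries=N` could look like this Rust sketch (all names are illustrative assumptions, not the project's actual API; the runner closure stands in for spawning `kubectl port-forward` and waiting on it):

```rust
/// Run the forward up to `1 + max_retries` times. Exit code 0 means a clean
/// shutdown; exit code 1 is treated as non-recoverable and aborts the loop;
/// any other code counts as transient and consumes one retry.
/// Illustrative sketch only.
fn forward_with_retries(
    max_retries: u32,
    mut run_once: impl FnMut() -> i32,
) -> Result<(), String> {
    for attempt in 0..=max_retries {
        match run_once() {
            0 => return Ok(()),
            1 => {
                return Err(format!(
                    "non-recoverable failure (exit code 1) on attempt {}",
                    attempt + 1
                ))
            }
            _ => continue, // transient failure: retry (until retries run out)
        }
    }
    Err(format!("gave up after {} retries", max_retries))
}
```

This keeps the service-recreation case retryable (non-zero codes other than 1 keep looping) while still failing fast on exit code 1, and the bound gives users an escape hatch if kubectl's exit codes turn out not to be a reliable signal.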

@sunsided sunsided added the enhancement New feature or request label Dec 6, 2023