
Allow to limit retry write errors by timeout #663

Open
JozoVilcek opened this issue Jun 12, 2023 · 0 comments

Comments

@JozoVilcek

When a write failure occurs, the connect task handles it by backing off and retrying until writes recover. In my case, when facing infrastructure problems while writing to HDFS, the HDFS write pipeline is pinned to the same nodes. When the infrastructure takes long to recover, the connect task accumulates delay.

I would like to be able to set an upper bound on retries, after which the operation is reset and the temp file recreated. This would initiate a new write pipeline and give the write a chance to complete via different HDFS nodes.

Currently, this scenario effectively works when using e.g. a wall-clock-time-based partitioner, where a triggered rotation will very likely produce an error while attempting to close the open file, and thus initiate a reset. With a record-time-based partitioner this does not work, because "time does not move" while no new records are being processed.
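The requested behavior can be sketched as a retry loop with a deadline: keep backing off and retrying until either the write recovers or a total-time budget is exhausted, at which point the caller resets state (drops the temp file and opens a new write pipeline). This is a minimal sketch, not the connector's actual code; the class name `BoundedRetry` and the config-like parameters `maxRetryMs`/`backoffMs` are hypothetical.

```java
import java.util.function.Supplier;

// Hypothetical sketch: retry a failing write with backoff, but give up and
// signal a reset once a configurable deadline (e.g. a retry timeout) passes.
public class BoundedRetry {
    private final long maxRetryMs; // assumed upper bound on total retry time
    private final long backoffMs;  // assumed sleep between attempts

    public BoundedRetry(long maxRetryMs, long backoffMs) {
        this.maxRetryMs = maxRetryMs;
        this.backoffMs = backoffMs;
    }

    /**
     * Runs the write attempt until it succeeds or the deadline expires.
     * Returns true on success; false means the caller should reset state
     * (recreate the temp file and start a new write pipeline).
     */
    public boolean run(Supplier<Boolean> writeAttempt) throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxRetryMs;
        while (true) {
            if (writeAttempt.get()) {
                return true;  // write recovered
            }
            if (System.currentTimeMillis() + backoffMs > deadline) {
                return false; // budget exhausted: trigger a reset instead of retrying forever
            }
            Thread.sleep(backoffMs); // backoff before the next attempt
        }
    }
}
```

With such a bound, a stuck HDFS pipeline would be abandoned after `maxRetryMs` rather than delaying the task indefinitely, even when a record-time-based partitioner never triggers a rotation.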

JozoVilcek pushed a commit to JozoVilcek/kafka-connect-hdfs that referenced this issue Jun 13, 2023
Labels
None yet
Projects
None yet
Development

No branches or pull requests

1 participant