Issue in a Kerberized environment a day after ticket renewal #589

Open
dinegri opened this issue Sep 22, 2021 · 2 comments

Comments

dinegri commented Sep 22, 2021

Hi,

I have a problem in a Kerberized environment: when I start the connector everything works fine, I obtain my Kerberos credentials and the connector starts writing without issues. The problem begins about a day later, when the Kerberos ticket is renewed and the connector immediately crashes with this error:

ERROR Recovery failed at state RECOVERY_PARTITION_PAUSED (io.confluent.connect.hdfs.TopicPartitionWriter:221)
org.apache.kafka.connect.errors.ConnectException: java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "abc/10.72.176.21"; destination host is: "abc":8020;
	at io.confluent.connect.hdfs.wal.FSWAL.apply(FSWAL.java:131)
	at io.confluent.connect.hdfs.TopicPartitionWriter.applyWAL(TopicPartitionWriter.java:519)
	at io.confluent.connect.hdfs.TopicPartitionWriter.recover(TopicPartitionWriter.java:204)
	at io.confluent.connect.hdfs.TopicPartitionWriter.write(TopicPartitionWriter.java:234)
	at io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:234)
	at io.confluent.connect.hdfs.HdfsSinkTask.put(HdfsSinkTask.java:91)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:287)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:176)
	at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.iteration(WorkerSinkTaskThread.java:90)
	at org.apache.kafka.connect.runtime.WorkerSinkTaskThread.execute(WorkerSinkTaskThread.java:58)
	at org.apache.kafka.connect.util.ShutdownableThread.run(ShutdownableThread.java:82)

I am using Java 11 and kafka-connect-hdfs version 10.0.0.
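
For context, this is roughly the shape of the Kerberos-related sink configuration involved. It is a sketch only: topic, principals, keytab path and hosts are placeholders rather than my real values, and the property names are the ones documented for the Confluent HDFS sink connector.

# Sketch of a Kerberized HDFS sink connector config (placeholder values)
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
topics=my-topic
flush.size=1000
hdfs.url=hdfs://abc:8020
# Kerberos settings: the connector logs in from this keytab/principal
hdfs.authentication.kerberos=true
connect.hdfs.principal=connect-hdfs/_HOST@EXAMPLE.COM
connect.hdfs.keytab=/etc/security/keytabs/connect.keytab
hdfs.namenode.principal=hdfs/_HOST@EXAMPLE.COM
# How often the connector tries to renew its Kerberos ticket (ms)
kerberos.ticket.renew.period.ms=3600000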


dinegri commented Sep 22, 2021

After digging a little more I found this in the Hadoop issue tracker: https://issues.apache.org/jira/browse/HDFS-16165

I will migrate to HDFS Sink 3, since we have an annual contract with Confluent.

After the migration I will update here on whether this solution worked.


dinegri commented Dec 1, 2021

HDFS Sink 3 does not fix this issue. It is available only under a commercial license.

Following the instructions in #225 fixed the issue.
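
For anyone hitting this later: as I understand it, the change amounts to making the Connect worker JVM log in from a keytab instead of relying on a renewable ticket cache. Below is a minimal sketch of such a JAAS entry; this is my reading of the workaround, not a copy of #225, and the principal, keytab path and file locations are placeholders.

// Sketch only: keytab-based JGSS login entry for the Connect worker JVM.
// Passed via -Djava.security.auth.login.config=/etc/kafka/connect-jaas.conf
// together with -Djavax.security.auth.useSubjectCredsOnly=false.
com.sun.security.jgss.krb5.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="/etc/security/keytabs/connect.keytab"
    principal="connect-hdfs/abc@EXAMPLE.COM";
};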
