When using the JDBC sink connector, it would be useful if we could delete rows on tombstone messages. This would be very useful when doing CDC from another database.

I implemented this case with the pipeline PostgreSQL -> Debezium -> Kafka -> JDBC sink -> MySQL. The pipeline works with "soft deletes" (adding a `__is_deleted` column and setting it to `true`) but not with hard deletes. The soft-delete behaviour can be achieved with Debezium's New Record State transformation by changing the `transforms.unwrap.delete.handling.mode` parameter (see the sketch below).

We could add this as an additional parameter so as not to break compatibility.
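For reference, a minimal sketch of the soft-delete setup described above, assuming Debezium's `ExtractNewRecordState` SMT (the `unwrap` alias matches the parameter name in this issue; the rest of the connector configuration is omitted):

```properties
# Debezium "New Record State" SMT, aliased as "unwrap"
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
# rewrite: keep delete events and add a deletion-marker field
# instead of dropping them (enables the soft-delete pattern)
transforms.unwrap.delete.handling.mode=rewrite
# whether to drop the extra tombstone record Debezium emits after a delete
transforms.unwrap.drop.tombstones=true
```

With `rewrite`, deletes arrive at the sink as ordinary records carrying the marker field, so the sink updates a flag column rather than removing the row; hard deletes would still require tombstone support in the sink itself.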
I wonder if it is possible to reach the same capability to delete rows as the Confluent JDBC sink connector. Basically, `delete.enabled=true` in combination with `pk.mode=record_key` is a well-documented way to process tombstones.
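For comparison, a minimal sketch of the Confluent JDBC sink settings mentioned above (the topic and key field names are illustrative placeholders):

```properties
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=customers
# take the primary key from the Kafka record key
pk.mode=record_key
pk.fields=id
# on a tombstone (null-value) record, delete the row matching the key
delete.enabled=true
```

Note that `delete.enabled=true` is only valid with `pk.mode=record_key`: a tombstone's value is null, so the record key is the only way to identify the row to delete.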
This PR adds the `handle.tombstone` configuration to the JDBC Sink Connector, allowing users to control how tombstone records (null-value records that represent deletions in Kafka) are handled. When set to `true`, these records are skipped.
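A minimal usage sketch, assuming the property name introduced by this PR (the surrounding sink configuration is omitted):

```properties
# when true, tombstone (null-value) records are skipped
handle.tombstone=true
```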
#165