
Full backup was deleted when an empty incremental backup was uploaded and keep_backups_remote > 0 #871

Open
tman5 opened this issue Mar 14, 2024 · 6 comments
tman5 commented Mar 14, 2024

Running clickhouse-backup is only producing incremental backups that are 0B in size:

chi-clickhouse-replicated-0-0-0:/# clickhouse-backup list
2024/03/14 18:19:50.606100  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 18:19:50.616301  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 18:19:50.616333  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/03/14 18:19:50.648241  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/03/14 18:19:50.664521  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/03/14 18:19:50.683604  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 18:19:50.686955  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 18:19:50.688697  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 18:19:50.691628  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 18:19:50.745233 debug /tmp/.clickhouse-backup-metadata.cache.S3 load 3 elements logger=s3
2024/03/14 18:19:50.751677 debug /tmp/.clickhouse-backup-metadata.cache.S3 save 3 elements logger=s3
chi-clickhouse-replicated-0-0-increment-2024-03-12-03-00-02   0B   12/03/2024 03:00:07   remote      tar, regular
chi-clickhouse-replicated-0-0-increment-2024-03-13-03-00-01   0B   13/03/2024 03:00:06   remote      tar, regular
chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04   0B   14/03/2024 03:00:09   remote      tar, regular
2024/03/14 18:19:50.752570  info clickhouse connection closed logger=clickhouse

Configs:

              env:
                - name: CLICKHOUSE_PASSWORD
                  value: password
                - name: LOG_LEVEL
                  value: "debug"
                - name: ALLOW_EMPTY_BACKUPS
                  value: "true"
                - name: API_LISTEN
                  value: "0.0.0.0:7171"
                # INSERT INTO system.backup_actions to execute backup
                - name: API_CREATE_INTEGRATION_TABLES
                  value: "true"
                - name: BACKUPS_TO_KEEP_REMOTE
                  value: "3"
                # change it for production S3
                - name: REMOTE_STORAGE
                  value: "s3"
                - name: S3_ACL
                  value: "private"
                - name: S3_ENDPOINT
                  value: https://minio.local
                - name: S3_BUCKET
                  value: clickhouse-backups
                # {shard} macro defined by clickhouse-operator
                - name: S3_PATH
                  value: backup/shard-{shard}
                - name: S3_ACCESS_KEY
                  value: clickhouse_backups_rw
                - name: S3_DISABLE_CERT_VERIFICATION
                  value: "true"
                - name: S3_SECRET_KEY
                  valueFrom:
                    secretKeyRef:
                      name: config-mgmt-secrets-clickhouse-backup-rw
                      key: clickhouse_backup_rw
                - name: S3_FORCE_PATH_STYLE
                  value: "true"
                # remove it for production S3
                # - name: S3_DISABLE_SSL
                #   value: "true"
                # - name: S3_DEBUG
                #   value: "true"
                # required to avoid double scraping of the clickhouse and clickhouse-backup containers
              ports:
                - name: backup-rest
                  containerPort: 7171
              env:
                # use first replica in each shard, use `kubectl get svc | grep test-backups`
                - name: CLICKHOUSE_SERVICES
                  value: chi-clickhouse-replicated-0-0
                - name: CLICKHOUSE_PORT
                  value: "9000"
                - name: BACKUP_USER
                  value: backup
                - name: BACKUP_PASSWORD
                  value: "password"
                # change to 1 if you want to make a full backup only on $FULL_BACKUP_WEEKDAY (1 - Mon, 7 - Sun)
                - name: MAKE_INCREMENT_BACKUP
                  value: "1"
                - name: FULL_BACKUP_WEEKDAY
                  value: "1"
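For reference, the full-vs-increment decision the last two variables drive can be sketched as follows (a hypothetical illustration of the scheduling logic, not the actual script running in this deployment):

```shell
#!/bin/sh
# Hypothetical sketch of the decision driven by MAKE_INCREMENT_BACKUP
# and FULL_BACKUP_WEEKDAY (1 - Mon, 7 - Sun); not the deployed script.

# decide_backup_type WEEKDAY FULL_BACKUP_WEEKDAY MAKE_INCREMENT_BACKUP
decide_backup_type() {
  weekday="$1"; full_day="$2"; make_increment="$3"
  if [ "$make_increment" != "1" ] || [ "$weekday" = "$full_day" ]; then
    echo "full"        # full backup on the configured weekday
  else
    echo "increment"   # incremental backup on all other days
  fi
}

# With the values above (MAKE_INCREMENT_BACKUP=1, FULL_BACKUP_WEEKDAY=1),
# Monday produces "full" and every other day "increment".
decide_backup_type "$(date -u +%u)" 1 1
```

This matches the action history below: 2024-03-11 was a Monday and got a full backup, the surrounding days got increments.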

Logs:

2024/03/14 03:00:06.035357 debug api.status.Start -> status.commands[30] == {ActionRowStatus:{Command:list Status:in progress Start:2024-03-14 03:00:06 Finish: Error:} Ctx:context.Background.WithCancel Cancel:0x4a7b80} logger=status
2024/03/14 03:00:06.035439  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 03:00:06.057905  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 03:00:06.057940  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/03/14 03:00:06.060727  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/03/14 03:00:06.118332  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/03/14 03:00:06.199114  info clickhouse connection closed logger=clickhouse
2024/03/14 03:00:06.199157  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 03:00:06.200382  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 03:00:06.200407  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 03:00:06.210617  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 03:00:06.214859  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 03:00:06.224617  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 03:00:06.426352 debug /tmp/.clickhouse-backup-metadata.cache.S3 load 3 elements logger=s3
2024/03/14 03:00:06.506201 debug /tmp/.clickhouse-backup-metadata.cache.S3 save 3 elements logger=s3
2024/03/14 03:00:06.506413  info clickhouse connection closed logger=clickhouse
2024/03/14 03:00:06.506493 debug api.status.stop -> status.commands[30] == {ActionRowStatus:{Command:list Status:success Start:2024-03-14 03:00:06 Finish:2024-03-14 03:00:06 Error:} Ctx:<nil> Cancel:<nil>} logger=status
2024/03/14 03:00:06.648105  info API call HEAD /backup/list logger=server
2024/03/14 03:00:06.648548  info API call GET /backup/list logger=server
2024/03/14 03:00:06.649647 debug api.status.Start -> status.commands[31] == {ActionRowStatus:{Command:list Status:in progress Start:2024-03-14 03:00:06 Finish: Error:} Ctx:context.Background.WithCancel Cancel:0x4a7b80} logger=status
2024/03/14 03:00:06.649685  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 03:00:06.651949  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 03:00:06.651982  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/03/14 03:00:06.654119  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/03/14 03:00:06.670613  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/03/14 03:00:06.728107  info clickhouse connection closed logger=clickhouse
2024/03/14 03:00:06.728155  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 03:00:06.730394  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 03:00:06.730425  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 03:00:06.735337  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 03:00:06.736913  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 03:00:06.750820  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 03:00:06.780723 debug /tmp/.clickhouse-backup-metadata.cache.S3 load 3 elements logger=s3
2024/03/14 03:00:06.797838 debug /tmp/.clickhouse-backup-metadata.cache.S3 save 3 elements logger=s3
2024/03/14 03:00:06.798154  info clickhouse connection closed logger=clickhouse
2024/03/14 03:00:06.798226 debug api.status.stop -> status.commands[31] == {ActionRowStatus:{Command:list Status:success Start:2024-03-14 03:00:06 Finish:2024-03-14 03:00:06 Error:} Ctx:<nil> Cancel:<nil>} logger=status
2024/03/14 03:00:06.943812  info API call HEAD /backup/list logger=server
2024/03/14 03:00:06.944373  info API call GET /backup/list logger=server
2024/03/14 03:00:06.945465 debug api.status.Start -> status.commands[32] == {ActionRowStatus:{Command:list Status:in progress Start:2024-03-14 03:00:06 Finish: Error:} Ctx:context.Background.WithCancel Cancel:0x4a7b80} logger=status
2024/03/14 03:00:06.945498  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 03:00:06.948013  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 03:00:06.948036  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/03/14 03:00:06.951338  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/03/14 03:00:06.967592  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/03/14 03:00:06.974011  info clickhouse connection closed logger=clickhouse
2024/03/14 03:00:06.974061  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 03:00:06.975257  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 03:00:06.975285  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 03:00:07.001690  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 03:00:07.005853  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 03:00:07.021335  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 03:00:07.041554 debug /tmp/.clickhouse-backup-metadata.cache.S3 load 3 elements logger=s3
2024/03/14 03:00:07.055737 debug /tmp/.clickhouse-backup-metadata.cache.S3 save 3 elements logger=s3
2024/03/14 03:00:07.055941  info clickhouse connection closed logger=clickhouse
2024/03/14 03:00:07.056029 debug api.status.stop -> status.commands[32] == {ActionRowStatus:{Command:list Status:success Start:2024-03-14 03:00:06 Finish:2024-03-14 03:00:07 Error:} Ctx:<nil> Cancel:<nil>} logger=status
2024/03/14 03:00:07.169133  info API call POST /backup/actions logger=server
2024/03/14 03:00:07.180978  info /backup/actions call: create chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 logger=server
2024/03/14 03:00:07.181030 debug api.status.inProgress -> len(status.commands)=33, inProgress=false logger=status
2024/03/14 03:00:07.181057 debug api.status.Start -> status.commands[33] == {ActionRowStatus:{Command:create chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 Status:in progress Start:2024-03-14 03:00:07 Finish: Error:} Ctx:context.Background.WithCancel Cancel:0x4a7b80} logger=status
2024/03/14 03:00:07.183715  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 03:00:07.186791  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 03:00:07.186821  info SELECT metadata_path FROM system.tables WHERE database = 'system' AND metadata_path!='' LIMIT 1; logger=clickhouse
2024/03/14 03:00:07.194748  info SELECT name, engine FROM system.databases WHERE NOT match(name,'^(system|INFORMATION_SCHEMA|information_schema|_temporary_and_external_tables)$') logger=clickhouse
2024/03/14 03:00:07.207620  info SHOW CREATE DATABASE `default` logger=clickhouse
2024/03/14 03:00:07.211730  info SELECT name, count(*) as is_present FROM system.settings WHERE name IN (?, ?) GROUP BY name with args [show_table_uuid_in_table_create_query_if_not_nil display_secrets_in_show_and_select] logger=clickhouse
2024/03/14 03:00:07.231550  info SELECT name FROM system.databases WHERE engine IN ('MySQL','PostgreSQL','MaterializedPostgreSQL') logger=clickhouse
2024/03/14 03:00:07.234726  info    SELECT     countIf(name='data_path') is_data_path_present,     countIf(name='data_paths') is_data_paths_present,     countIf(name='uuid') is_uuid_present,     countIf(name='create_table_query') is_create_table_query_present,     countIf(name='total_bytes') is_total_bytes_present    FROM system.columns WHERE database='system' AND table='tables'   logger=clickhouse
2024/03/14 03:00:07.252597  info SELECT database, name, engine , data_paths , uuid , create_table_query , coalesce(total_bytes, 0) AS total_bytes   FROM system.tables WHERE is_temporary = 0 ORDER BY total_bytes DESC SETTINGS show_table_uuid_in_table_create_query_if_not_nil=1 logger=clickhouse
2024/03/14 03:00:07.305921  info API call HEAD /backup/actions logger=server
2024/03/14 03:00:07.306310  info API call GET /backup/actions logger=server
2024/03/14 03:00:07.344546  info SELECT metadata_path FROM system.tables WHERE database = 'system' AND metadata_path!='' LIMIT 1; logger=clickhouse
2024/03/14 03:00:07.350158  info SELECT count() as cnt FROM system.columns WHERE database='system' AND table='functions' AND name='create_query' SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 03:00:07.389182  info SELECT name, create_query FROM system.functions WHERE create_query!='' logger=clickhouse
2024/03/14 03:00:07.400927  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/03/14 03:00:07.402691  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/03/14 03:00:07.407675  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/03/14 03:00:07.422612  info SELECT value FROM `system`.`build_options` where name='VERSION_DESCRIBE' logger=clickhouse
2024/03/14 03:00:07.425463  info done                      backup=chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 duration=242ms logger=backuper operation=create
2024/03/14 03:00:07.425589  info clickhouse connection closed logger=clickhouse
2024/03/14 03:00:07.426828 debug api.status.stop -> status.commands[33] == {ActionRowStatus:{Command:create chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 Status:success Start:2024-03-14 03:00:07 Finish:2024-03-14 03:00:07 Error:} Ctx:<nil> Cancel:<nil>} logger=status
2024/03/14 03:00:07.426917  info Update backup metrics start (onlyLocal=true) logger=server
2024/03/14 03:00:07.426964  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 03:00:07.429346  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 03:00:07.429369  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/03/14 03:00:07.433299  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/03/14 03:00:07.447954  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/03/14 03:00:07.454692  info clickhouse connection closed logger=clickhouse
2024/03/14 03:00:08.464783  info API call HEAD /backup/actions logger=server
2024/03/14 03:00:08.465388  info API call GET /backup/actions logger=server
2024/03/14 03:00:08.586013  info API call HEAD /backup/actions logger=server
2024/03/14 03:00:08.586296  info API call GET /backup/actions logger=server
2024/03/14 03:00:08.713301  info API call POST /backup/actions logger=server
2024/03/14 03:00:08.716007  info /backup/actions call: upload --diff-from-remote=chi-clickhouse-replicated-0-0-increment-2024-03-13-03-00-01 chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 logger=server
2024/03/14 03:00:08.716069 debug api.status.inProgress -> len(status.commands)=34, inProgress=false logger=status
2024/03/14 03:00:08.716103 debug api.status.Start -> status.commands[34] == {ActionRowStatus:{Command:upload --diff-from-remote=chi-clickhouse-replicated-0-0-increment-2024-03-13-03-00-01 chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 Status:in progress Start:2024-03-14 03:00:08 Finish: Error:} Ctx:context.Background.WithCancel Cancel:0x4a7b80} logger=status
2024/03/14 03:00:08.718125  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 03:00:08.720399  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 03:00:08.720427  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/03/14 03:00:08.722717  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/03/14 03:00:08.765350  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/03/14 03:00:08.777842  info SELECT max(toInt64(bytes_on_disk * 1.02)) AS max_file_size FROM system.parts logger=clickhouse
2024/03/14 03:00:08.808971  info API call HEAD /backup/actions logger=server
2024/03/14 03:00:08.808990  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 03:00:08.809590  info API call GET /backup/actions logger=server
2024/03/14 03:00:08.814605  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 03:00:08.817335  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 03:00:08.824811  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 03:00:08.861104 debug /tmp/.clickhouse-backup-metadata.cache.S3 load 3 elements logger=s3
2024/03/14 03:00:08.871834 debug /tmp/.clickhouse-backup-metadata.cache.S3 save 3 elements logger=s3
2024/03/14 03:00:08.872269 debug /tmp/.clickhouse-backup-metadata.cache.S3 load 3 elements logger=s3
2024/03/14 03:00:08.884236 debug /tmp/.clickhouse-backup-metadata.cache.S3 save 3 elements logger=s3
2024/03/14 03:00:08.994942 debug prepare table concurrent semaphore with concurrency=3 len(tablesForUpload)=0 backup=chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 operation=upload
2024/03/14 03:00:09.047660  info done                      backup=chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 duration=330ms operation=upload size=521B
2024/03/14 03:00:09.048066 debug /tmp/.clickhouse-backup-metadata.cache.S3 load 3 elements logger=s3
2024/03/14 03:00:09.079170 debug /tmp/.clickhouse-backup-metadata.cache.S3 save 4 elements logger=s3
2024/03/14 03:00:09.079410  info calculate backup list for delete remote duration=32ms logger=s3 operation=RemoveOldBackupsRemote
2024/03/14 03:00:09.079472  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/03/14 03:00:09.091406  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/03/14 03:00:09.173829  info done                      backup=chi-clickhouse-replicated-0-0-full-2024-03-11-03-00-01 duration=94ms location=remote logger=s3 operation=RemoveOldBackupsRemote
2024/03/14 03:00:09.173873  info done                      duration=126ms logger=s3 operation=RemoveOldBackupsRemote
2024/03/14 03:00:09.173941  info clickhouse connection closed logger=clickhouse
2024/03/14 03:00:09.173983 debug api.status.stop -> status.commands[34] == {ActionRowStatus:{Command:upload --diff-from-remote=chi-clickhouse-replicated-0-0-increment-2024-03-13-03-00-01 chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 Status:success Start:2024-03-14 03:00:08 Finish:2024-03-14 03:00:09 Error:} Ctx:<nil> Cancel:<nil>} logger=status
2024/03/14 03:00:09.174016  info Update backup metrics start (onlyLocal=false) logger=server
2024/03/14 03:00:09.174050  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 03:00:09.175818  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 03:00:09.175875  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/03/14 03:00:09.178034  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/03/14 03:00:09.191145  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/03/14 03:00:09.200375  info clickhouse connection closed logger=clickhouse
2024/03/14 03:00:09.200410  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 03:00:09.208339  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 03:00:09.208364  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 03:00:09.227371  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 03:00:09.245567  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 03:00:09.267081  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 03:00:09.306755 debug /tmp/.clickhouse-backup-metadata.cache.S3 load 4 elements logger=s3
2024/03/14 03:00:09.322388 debug /tmp/.clickhouse-backup-metadata.cache.S3 save 3 elements logger=s3
2024/03/14 03:00:09.322694 debug /tmp/.clickhouse-backup-metadata.cache.S3 load 3 elements logger=s3
2024/03/14 03:00:09.334041 debug /tmp/.clickhouse-backup-metadata.cache.S3 save 3 elements logger=s3
2024/03/14 03:00:09.334252  info clickhouse connection closed logger=clickhouse
2024/03/14 03:00:09.334275  info Update backup metrics finish LastBackupCreateLocal=2024-03-14 03:00:07.422602626 +0000 UTC LastBackupCreateRemote=2024-03-14 03:00:07.422602626 +0000 UTC LastBackupSizeLocal=0 LastBackupSizeRemote=0 LastBackupUpload=2024-03-14 03:00:09 +0000 UTC NumberBackupsLocal=1 NumberBackupsRemote=3 duration=160ms logger=server
2024/03/14 03:00:13.947204  info API call HEAD /backup/actions logger=server
2024/03/14 03:00:13.947665  info API call GET /backup/actions logger=server
2024/03/14 03:00:14.087682  info API call HEAD /backup/actions logger=server
2024/03/14 03:00:14.088037  info API call GET /backup/actions logger=server
2024/03/14 03:00:14.218777  info API call POST /backup/actions logger=server
2024/03/14 03:00:14.221474  info /backup/actions call: delete local chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 logger=server
2024/03/14 03:00:14.221515 debug api.status.inProgress -> len(status.commands)=35, inProgress=false logger=status
2024/03/14 03:00:14.221537 debug api.status.Start -> status.commands[35] == {ActionRowStatus:{Command:delete local chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 Status:in progress Start:2024-03-14 03:00:14 Finish: Error:} Ctx:context.Background.WithCancel Cancel:0x4a7b80} logger=status
2024/03/14 03:00:14.222335  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 03:00:14.227623  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 03:00:14.227653  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/03/14 03:00:14.231109  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/03/14 03:00:14.242031  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/03/14 03:00:14.251882  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 03:00:14.258905  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 03:00:14.261038  info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros'  SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/03/14 03:00:14.266355  info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/03/14 03:00:14.410194 debug /tmp/.clickhouse-backup-metadata.cache.S3 load 3 elements logger=s3
2024/03/14 03:00:14.451925 debug /tmp/.clickhouse-backup-metadata.cache.S3 save 3 elements logger=s3
2024/03/14 03:00:14.452114 debug b.skipIfTheSameRemoteBackupPresent return skip=true logger=RemoveBackupLocal
2024/03/14 03:00:14.452139 debug remove '/var/lib/clickhouse/backup/chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04' logger=RemoveBackupLocal
2024/03/14 03:00:14.452347  info done                      backup=chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 duration=230ms location=local logger=RemoveBackupLocal operation=delete
2024/03/14 03:00:14.452450  info clickhouse connection closed logger=clickhouse
2024/03/14 03:00:14.452483 debug api.status.stop -> status.commands[35] == {ActionRowStatus:{Command:delete local chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 Status:success Start:2024-03-14 03:00:14 Finish:2024-03-14 03:00:14 Error:} Ctx:<nil> Cancel:<nil>} logger=status
2024/03/14 03:00:14.452493  info DELETED                   logger=server
2024/03/14 03:00:14.452602  info Update backup metrics start (onlyLocal=true) logger=server
2024/03/14 03:00:14.452702  info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/03/14 03:00:14.454930  info clickhouse connection open: tcp://localhost:9000 logger=clickhouse
2024/03/14 03:00:14.454952  info SELECT value FROM `system`.`build_options` where name='VERSION_INTEGER' logger=clickhouse
2024/03/14 03:00:14.456776  info SELECT countIf(name='type') AS is_disk_type_present, countIf(name='free_space') AS is_free_space_present, countIf(name='disks') AS is_storage_policy_present FROM system.columns WHERE database='system' AND table IN ('disks','storage_policies')  logger=clickhouse
2024/03/14 03:00:14.461726  info SELECT d.path, any(d.name) AS name, any(d.type) AS type, min(d.free_space) AS free_space, groupUniqArray(s.policy_name) AS storage_policies FROM system.disks AS d  INNER JOIN (SELECT policy_name, arrayJoin(disks) AS disk FROM system.storage_policies) AS s ON s.disk = d.name GROUP BY d.path logger=clickhouse
2024/03/14 03:00:14.466416  info clickhouse connection closed logger=clickhouse
@Slach
Collaborator

Slach commented Mar 16, 2024

Could you share the results of the following command?

kubectl exec -n <your-namespace>  chi-clickhouse-replicated-0-0 -c clickhouse-pod -- clickhouse-client -q "SELECT * FROM system.backup_actions"

@tman5
Author

tman5 commented Mar 18, 2024

list	2024-03-09 03:00:03	2024-03-09 03:00:04	success	
list	2024-03-09 03:00:04	2024-03-09 03:00:04	success	
list	2024-03-09 03:00:04	2024-03-09 03:00:04	success	
create chi-clickhouse-replicated-0-0-increment-2024-03-09-03-00-01	2024-03-09 03:00:04	2024-03-09 03:00:05	success	
upload --diff-from-remote=chi-clickhouse-replicated-0-0-increment-2024-03-08-03-00-01 chi-clickhouse-replicated-0-0-increment-2024-03-09-03-00-01	2024-03-09 03:00:06	2024-03-09 03:00:07	success	
delete local chi-clickhouse-replicated-0-0-increment-2024-03-09-03-00-01	2024-03-09 03:00:11	2024-03-09 03:00:11	success	
list	2024-03-10 03:00:02	2024-03-10 03:00:03	success	
list	2024-03-10 03:00:03	2024-03-10 03:00:03	success	
list	2024-03-10 03:00:03	2024-03-10 03:00:04	success	
create chi-clickhouse-replicated-0-0-increment-2024-03-10-03-00-01	2024-03-10 03:00:04	2024-03-10 03:00:04	success	
upload --diff-from-remote=chi-clickhouse-replicated-0-0-increment-2024-03-09-03-00-01 chi-clickhouse-replicated-0-0-increment-2024-03-10-03-00-01	2024-03-10 03:00:05	2024-03-10 03:00:05	success	
delete local chi-clickhouse-replicated-0-0-increment-2024-03-10-03-00-01	2024-03-10 03:00:10	2024-03-10 03:00:11	success	
list	2024-03-11 03:00:02	2024-03-11 03:00:03	success	
list	2024-03-11 03:00:03	2024-03-11 03:00:03	success	
list	2024-03-11 03:00:03	2024-03-11 03:00:03	success	
create chi-clickhouse-replicated-0-0-full-2024-03-11-03-00-01	2024-03-11 03:00:03	2024-03-11 03:00:04	success	
upload  chi-clickhouse-replicated-0-0-full-2024-03-11-03-00-01	2024-03-11 03:00:05	2024-03-11 03:00:05	success	
delete local chi-clickhouse-replicated-0-0-full-2024-03-11-03-00-01	2024-03-11 03:00:10	2024-03-11 03:00:10	success	
list	2024-03-12 03:00:05	2024-03-12 03:00:05	success	
list	2024-03-12 03:00:05	2024-03-12 03:00:05	success	
list	2024-03-12 03:00:05	2024-03-12 03:00:05	success	
create chi-clickhouse-replicated-0-0-increment-2024-03-12-03-00-02	2024-03-12 03:00:05	2024-03-12 03:00:06	success	
upload --diff-from-remote=chi-clickhouse-replicated-0-0-full-2024-03-11-03-00-01 chi-clickhouse-replicated-0-0-increment-2024-03-12-03-00-02	2024-03-12 03:00:07	2024-03-12 03:00:08	success	
delete local chi-clickhouse-replicated-0-0-increment-2024-03-12-03-00-02	2024-03-12 03:00:12	2024-03-12 03:00:12	success	
list	2024-03-13 03:00:03	2024-03-13 03:00:04	success	
list	2024-03-13 03:00:04	2024-03-13 03:00:04	success	
list	2024-03-13 03:00:04	2024-03-13 03:00:04	success	
create chi-clickhouse-replicated-0-0-increment-2024-03-13-03-00-01	2024-03-13 03:00:04	2024-03-13 03:00:05	success	
upload --diff-from-remote=chi-clickhouse-replicated-0-0-increment-2024-03-12-03-00-02 chi-clickhouse-replicated-0-0-increment-2024-03-13-03-00-01	2024-03-13 03:00:06	2024-03-13 03:00:06	success	
delete local chi-clickhouse-replicated-0-0-increment-2024-03-13-03-00-01	2024-03-13 03:00:11	2024-03-13 03:00:11	success	
list	2024-03-14 03:00:06	2024-03-14 03:00:06	success	
list	2024-03-14 03:00:06	2024-03-14 03:00:06	success	
list	2024-03-14 03:00:06	2024-03-14 03:00:07	success	
create chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04	2024-03-14 03:00:07	2024-03-14 03:00:07	success	
upload --diff-from-remote=chi-clickhouse-replicated-0-0-increment-2024-03-13-03-00-01 chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04	2024-03-14 03:00:08	2024-03-14 03:00:09	success	
delete local chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04	2024-03-14 03:00:14	2024-03-14 03:00:14	success	
list	2024-03-15 03:00:02	2024-03-15 03:00:03	success	
list	2024-03-15 03:00:03	2024-03-15 03:00:04	success	
list	2024-03-15 03:00:04	2024-03-15 03:00:04	success	
create chi-clickhouse-replicated-0-0-full-2024-03-15-03-00-01	2024-03-15 03:00:04	2024-03-15 03:00:04	success	
upload  chi-clickhouse-replicated-0-0-full-2024-03-15-03-00-01	2024-03-15 03:00:05	2024-03-15 03:00:06	success	
delete local chi-clickhouse-replicated-0-0-full-2024-03-15-03-00-01	2024-03-15 03:00:11	2024-03-15 03:00:11	success	
list	2024-03-16 03:00:02	2024-03-16 03:00:02	success	
list	2024-03-16 03:00:02	2024-03-16 03:00:03	success	
list	2024-03-16 03:00:03	2024-03-16 03:00:03	success	
create chi-clickhouse-replicated-0-0-increment-2024-03-16-03-00-01	2024-03-16 03:00:03	2024-03-16 03:00:03	success	
upload --diff-from-remote=chi-clickhouse-replicated-0-0-full-2024-03-15-03-00-01 chi-clickhouse-replicated-0-0-increment-2024-03-16-03-00-01	2024-03-16 03:00:04	2024-03-16 03:00:05	success	
delete local chi-clickhouse-replicated-0-0-increment-2024-03-16-03-00-01	2024-03-16 03:00:10	2024-03-16 03:00:10	success	
list	2024-03-17 03:00:02	2024-03-17 03:00:02	success	
list	2024-03-17 03:00:03	2024-03-17 03:00:03	success	
list	2024-03-17 03:00:03	2024-03-17 03:00:03	success	
create chi-clickhouse-replicated-0-0-increment-2024-03-17-03-00-01	2024-03-17 03:00:03	2024-03-17 03:00:03	success	
upload --diff-from-remote=chi-clickhouse-replicated-0-0-increment-2024-03-16-03-00-01 chi-clickhouse-replicated-0-0-increment-2024-03-17-03-00-01	2024-03-17 03:00:05	2024-03-17 03:00:05	success	
delete local chi-clickhouse-replicated-0-0-increment-2024-03-17-03-00-01	2024-03-17 03:00:10	2024-03-17 03:00:10	success	
list	2024-03-18 03:00:03	2024-03-18 03:00:04	success	
list	2024-03-18 03:00:04	2024-03-18 03:00:05	success	
list	2024-03-18 03:00:05	2024-03-18 03:00:05	success	
create chi-clickhouse-replicated-0-0-full-2024-03-18-03-00-02	2024-03-18 03:00:05	2024-03-18 03:00:05	success	
upload  chi-clickhouse-replicated-0-0-full-2024-03-18-03-00-02	2024-03-18 03:00:06	2024-03-18 03:00:07	success	
delete local chi-clickhouse-replicated-0-0-full-2024-03-18-03-00-02	2024-03-18 03:00:12	2024-03-18 03:00:12	success

@Slach
Collaborator

Slach commented Mar 18, 2024

Does SELECT * FROM system.backup_list currently contain chi-clickhouse-replicated-0-0-full-2024-03-18-03-00-02?

@Slach
Collaborator

Slach commented Mar 18, 2024

According to the logs, you successfully uploaded the incremental backup

2024/03/14 03:00:08.716007 info /backup/actions call: upload --diff-from-remote=chi-clickhouse-replicated-0-0-increment-2024-03-13-03-00-01 chi-clickhouse-replicated-0-0-increment-2024-03-14-03-00-04 logger=server

and deleted the old full backup

2024/03/14 03:00:09.173829 info done backup=chi-clickhouse-replicated-0-0-full-2024-03-11-03-00-01 duration=94ms location=remote logger=s3 operation=RemoveOldBackupsRemote

It's weird that your incremental backup has a size of 0B.

Do you have any data in your tables?
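The failure mode in the issue title can be sketched as follows: if remote rotation simply keeps the newest N backups without checking which earlier backup the surviving increments were diffed from, an empty incremental upload can push the full backup out of the retention window. This is a simplified illustration, not the actual clickhouse-backup code, and the backup names and the rotate() logic below are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Backup:
    name: str
    diff_from: Optional[str] = None  # backup this increment is based on


def rotate(remote: list[Backup], keep: int) -> list[Backup]:
    """Naively keep only the newest `keep` backups, ignoring diff_from links."""
    return remote[-keep:]


# Schedule from the issue: a full backup followed by daily increments.
remote = [
    Backup("full-2024-03-11"),
    Backup("increment-2024-03-12", diff_from="full-2024-03-11"),
    Backup("increment-2024-03-13", diff_from="increment-2024-03-12"),
    Backup("increment-2024-03-14", diff_from="increment-2024-03-13"),
]

kept = rotate(remote, keep=3)
kept_names = {b.name for b in kept}

# The full backup is rotated out, but the oldest surviving increment
# still references it, so the chain can no longer be restored.
print(kept_names)
print(kept[0].diff_from in kept_names)
```

Under this naive policy the full backup is deleted while increments that transitively depend on it remain, which matches what the logs above show for chi-clickhouse-replicated-0-0-full-2024-03-11-03-00-01.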

@Slach
Collaborator

Slach commented Mar 21, 2024

@tman5 any news from your side?

@Slach Slach changed the title Full backup disappeared Full backup was deleted when empty incremental backup was upload and keep_backups_remote > 0 Mar 21, 2024
@Slach Slach added this to the 2.6.0 milestone Mar 21, 2024
@Slach Slach self-assigned this Mar 21, 2024
@tman5
Author

tman5 commented Mar 21, 2024

It looks like the data wasn't there. But now that the data has been put back, it's showing a 29.79 KB backup, so it looks like it's working now. Thanks for your help.

@tman5 tman5 closed this as completed Mar 21, 2024
@Slach Slach reopened this Mar 21, 2024