Forgive me if this is a newbie question: Could someone elaborate on the best practice for scaling up a cluster?
We have a cluster generated from a recovery, and the documentation states that when doing this it's best to start with one instance, make a snapshot, then add replicas.
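For reference, this is roughly how I've been adding replicas after the snapshot step: patching spec.instances on the Cluster resource. A minimal sketch, assuming spec.instances is the right knob, using the cluster and namespace names from the logs below:

# Scale the Cluster from 1 to 3 instances by bumping spec.instances.
kubectl patch clusters.postgresql.cnpg.io pgv2-recovery3 \
  --namespace pgdb \
  --type merge \
  --patch '{"spec": {"instances": 3}}'

Please correct me if editing the manifest and reapplying it is the preferred route.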
However, with 1.23.0, one cluster is creating pods that are stuck in a crash loop, and a second, independent cluster does nothing at all after restarts or reloads.
I'd like help on how to scale up, and, when something goes wrong, on where best to look, i.e. which logs or resource statuses might help?
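For context, these are the places I've been checking so far; I may well be missing the right ones:

# Cluster-level status and conditions:
kubectl get clusters.postgresql.cnpg.io -n pgdb
kubectl describe clusters.postgresql.cnpg.io pgv2-recovery3 -n pgdb
# Recent events and the instance manager's JSON logs:
kubectl get events -n pgdb --sort-by=.lastTimestamp
kubectl logs -n pgdb pgv2-recovery3-9
# With the cnpg kubectl plugin installed, a consolidated view:
kubectl cnpg status pgv2-recovery3 -n pgdb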
Thanks.
Actually, it looks like I'm getting an error with the WAL archive, but I can't confidently say what's going on.
{"level":"debug","ts":"2024-04-26T15:02:24Z","msg":"pg_controldata stdout","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","stdout":"pg_control version number: 1300\nCatalog version number: 202307071\nDatabase system identifier: 7345552334997741588\nDatabase cluster state: shut down\npg_control last modified: Fri Apr 26 15:01:24 2024\nLatest checkpoint location: 59D/52000028\nLatest checkpoint's REDO location: 59D/52000028\nLatest checkpoint's REDO WAL file: 000000060000059D00000052\nLatest checkpoint's TimeLineID: 6\nLatest checkpoint's PrevTimeLineID: 6\nLatest checkpoint's full_page_writes: on\nLatest checkpoint's NextXID: 0:181526095\nLatest checkpoint's NextOID: 148741096\nLatest checkpoint's NextMultiXactId: 1\nLatest checkpoint's NextMultiOffset: 0\nLatest checkpoint's oldestXID: 722\nLatest checkpoint's oldestXID's DB: 1\nLatest checkpoint's oldestActiveXID: 0\nLatest checkpoint's oldestMultiXid: 1\nLatest checkpoint's oldestMulti's DB: 1\nLatest checkpoint's oldestCommitTsXid:0\nLatest checkpoint's newestCommitTsXid:0\nTime of latest checkpoint: Fri Apr 26 15:01:24 2024\nFake LSN counter for unlogged rels: 0/3E8\nMinimum recovery ending location: 0/0\nMin recovery ending loc's timeline: 0\nBackup start location: 0/0\nBackup end location: 0/0\nEnd-of-backup record required: no\nwal_level setting: logical\nwal_log_hints setting: on\nmax_connections setting: 1200\nmax_worker_processes setting: 32\nmax_wal_senders setting: 10\nmax_prepared_xacts setting: 0\nmax_locks_per_xact setting: 64\ntrack_commit_timestamp setting: off\nMaximum data alignment: 8\nDatabase block size: 8192\nBlocks per segment of large relation: 131072\nWAL block size: 8192\nBytes per WAL segment: 16777216\nMaximum length of identifiers: 64\nMaximum columns in an index: 32\nMaximum size of a TOAST chunk: 1996\nSize of a large-object chunk: 2048\nDate/time type storage: 64-bit integers\nFloat8 argument passing: by value\nData page checksum version: 0\nMock authentication nonce: b2ae8cbe39949b76539a9f04f6903695bba7311148df10284169aa832cc898ae\n"}
{"level":"info","ts":"2024-04-26T15:02:24Z","msg":"Updated replication settings","logging_pod":"pgv2-recovery3-9","filename":"override.conf"}
{"level":"info","ts":"2024-04-26T15:02:24Z","msg":"Extracting pg_controldata information","logging_pod":"pgv2-recovery3-9","reason":"postmaster start up"}
{"level":"info","ts":"2024-04-26T15:02:24Z","logger":"pg_controldata","msg":"pg_control version number: 1300\nCatalog version number: 202307071\nDatabase system identifier: 7345552334997741588\nDatabase cluster state: shut down\npg_control last modified: Fri 26 Apr 2024 03:01:24 PM UTC\nLatest checkpoint location: 59D/52000028\nLatest checkpoint's REDO location: 59D/52000028\nLatest checkpoint's REDO WAL file: 000000060000059D00000052\nLatest checkpoint's TimeLineID: 6\nLatest checkpoint's PrevTimeLineID: 6\nLatest checkpoint's full_page_writes: on\nLatest checkpoint's NextXID: 0:181526095\nLatest checkpoint's NextOID: 148741096\nLatest checkpoint's NextMultiXactId: 1\nLatest checkpoint's NextMultiOffset: 0\nLatest checkpoint's oldestXID: 722\nLatest checkpoint's oldestXID's DB: 1\nLatest checkpoint's oldestActiveXID: 0\nLatest checkpoint's oldestMultiXid: 1\nLatest checkpoint's oldestMulti's DB: 1\nLatest checkpoint's oldestCommitTsXid:0\nLatest checkpoint's newestCommitTsXid:0\nTime of latest checkpoint: Fri 26 Apr 2024 03:01:24 PM UTC\nFake LSN counter for unlogged rels: 0/3E8\nMinimum recovery ending location: 0/0\nMin recovery ending loc's timeline: 0\nBackup start location: 0/0\nBackup end location: 0/0\nEnd-of-backup record required: no\nwal_level setting: logical\nwal_log_hints setting: on\nmax_connections setting: 1200\nmax_worker_processes setting: 32\nmax_wal_senders setting: 10\nmax_prepared_xacts setting: 0\nmax_locks_per_xact setting: 64\ntrack_commit_timestamp setting: off\nMaximum data alignment: 8\nDatabase block size: 8192\nBlocks per segment of large relation: 131072\nWAL block size: 8192\nBytes per WAL segment: 16777216\nMaximum length of identifiers: 64\nMaximum columns in an index: 32\nMaximum size of a TOAST chunk: 1996\nSize of a large-object chunk: 2048\nDate/time type storage: 64-bit integers\nFloat8 argument passing: by value\nData page checksum version: 0\nMock authentication nonce: b2ae8cbe39949b76539a9f04f6903695bba7311148df10284169aa832cc898ae\n","pipe":"stdout","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:24Z","msg":"Instance is still down, will retry in 1 second","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"9393b45d-e23c-4651-9212-2919e10d250d","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:24Z","logger":"postgres","msg":"2024-04-26 15:02:24.897 UTC [21] LOG: redirecting log output to logging collector process","pipe":"stderr","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:24Z","logger":"postgres","msg":"2024-04-26 15:02:24.897 UTC [21] HINT: Future log output will appear in directory "/controller/log".","pipe":"stderr","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:24Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:24.897 UTC","process_id":"21","session_id":"662bc200.15","session_line_num":"1","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"ending log output to stderr","hint":"Future log output will go to log destination "csvlog".","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:24Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:24.897 UTC","process_id":"21","session_id":"662bc200.15","session_line_num":"2","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"starting PostgreSQL 16.2 (Debian 16.2-1.pgdg110+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:24Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:24.897 UTC","process_id":"21","session_id":"662bc200.15","session_line_num":"3","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"listening on IPv4 address "0.0.0.0", port 5432","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:24Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:24.897 UTC","process_id":"21","session_id":"662bc200.15","session_line_num":"4","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"listening on IPv6 address "::", port 5432","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:24Z","logger":"postgres","msg":"2024-04-26 15:02:24.897 UTC [21] LOG: ending log output to stderr","source":"/controller/log/postgres","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:24Z","logger":"postgres","msg":"2024-04-26 15:02:24.897 UTC [21] HINT: Future log output will go to log destination "csvlog".","source":"/controller/log/postgres","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:24Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:24.901 UTC","process_id":"21","session_id":"662bc200.15","session_line_num":"5","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"listening on Unix socket "/controller/run/.s.PGSQL.5432"","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:24Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:24.912 UTC","process_id":"25","session_id":"662bc200.19","session_line_num":"1","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"database system was shut down at 2024-04-26 15:01:24 UTC","backend_type":"startup","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:24Z","msg":"Cached object request received","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9"}
{"level":"debug","ts":"2024-04-26T15:02:24Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to host=/controller/run user=postgres database=postgres: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
{"level":"info","ts":"2024-04-26T15:02:24Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:24.941 UTC","user_name":"postgres","database_name":"postgres","process_id":"34","connection_from":"[local]","session_id":"662bc200.22","session_line_num":"1","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:25Z","msg":"Cached object request received","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:25Z","logger":"wal-restore","msg":"end-of-wal-stream flag found.Exiting with error once to let Postgres try switching to streaming replication","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:25Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:25.008 UTC","process_id":"25","session_id":"662bc200.19","session_line_num":"2","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"entering standby mode","backend_type":"startup","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:25Z","msg":"Cached object request received","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9"}
{"level":"debug","ts":"2024-04-26T15:02:25Z","msg":"Cached object request received","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:25Z","logger":"wal-restore","msg":"Restored WAL file","logging_pod":"pgv2-recovery3-9","walName":"00000006.history","startTime":"2024-04-26T15:02:25Z","endTime":"2024-04-26T15:02:25Z","elapsedWalTime":0.374075893}
{"level":"info","ts":"2024-04-26T15:02:25Z","logger":"wal-restore","msg":"WAL restore command completed (parallel)","logging_pod":"pgv2-recovery3-9","walName":"00000006.history","maxParallel":20,"successfulWalRestore":1,"failedWalRestore":19,"endOfWALStream":false,"startTime":"2024-04-26T15:02:25Z","downloadStartTime":"2024-04-26T15:02:25Z","downloadTotalTime":0.374262588,"totalTime":0.447201732}
{"level":"info","ts":"2024-04-26T15:02:25Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:25.477 UTC","process_id":"25","session_id":"662bc200.19","session_line_num":"3","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"restored log file "00000006.history" from archive","backend_type":"startup","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:25Z","msg":"Cached object request received","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9"}
{"level":"debug","ts":"2024-04-26T15:02:25Z","msg":"Cached object request received","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9"}
{"level":"debug","ts":"2024-04-26T15:02:25Z","msg":"Reconciling Cluster","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"2637f0d9-bc9b-4736-b96e-b9582d41e1d1","caller":"internal/management/controller/instance_controller.go:111","logging_pod":"pgv2-recovery3-9","cluster":{"apiVersion":"postgresql.cnpg.io/v1","kind":"Cluster","namespace":"pgdb","name":"pgv2-recovery3"}}
{"level":"debug","ts":"2024-04-26T15:02:25Z","msg":"Reconciling custom monitoring queries","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"2637f0d9-bc9b-4736-b96e-b9582d41e1d1","caller":"internal/management/controller/instance_controller.go:753","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:25Z","msg":"Updated replication settings","logging_pod":"pgv2-recovery3-9","filename":"override.conf"}
{"level":"info","ts":"2024-04-26T15:02:25Z","msg":"Updated replication settings","logging_pod":"pgv2-recovery3-9","filename":"override.conf"}
{"level":"info","ts":"2024-04-26T15:02:26Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:26.255 UTC","user_name":"postgres","database_name":"postgres","process_id":"91","connection_from":"[local]","session_id":"662bc202.5b","session_line_num":"1","session_start_time":"2024-04-26 15:02:26 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:26Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:26.267 UTC","user_name":"postgres","database_name":"postgres","process_id":"92","connection_from":"[local]","session_id":"662bc202.5c","session_line_num":"1","session_start_time":"2024-04-26 15:02:26 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:26Z","msg":"DB not available, will retry","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"2637f0d9-bc9b-4736-b96e-b9582d41e1d1","logging_pod":"pgv2-recovery3-9","err":"failed to connect to host=/controller/run user=postgres database=postgres: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
{"level":"info","ts":"2024-04-26T15:02:26Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:26.319 UTC","user_name":"postgres","database_name":"postgres","process_id":"93","connection_from":"[local]","session_id":"662bc202.5d","session_line_num":"1","session_start_time":"2024-04-26 15:02:26 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:26Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to host=/controller/run user=postgres database=postgres: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
{"level":"debug","ts":"2024-04-26T15:02:26Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to host=/controller/run user=postgres database=postgres: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
{"level":"info","ts":"2024-04-26T15:02:26Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:26.531 UTC","user_name":"postgres","database_name":"postgres","process_id":"94","connection_from":"[local]","session_id":"662bc202.5e","session_line_num":"1","session_start_time":"2024-04-26 15:02:26 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:26Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:26.574 UTC","user_name":"postgres","database_name":"postgres","process_id":"95","connection_from":"[local]","session_id":"662bc202.5f","session_line_num":"1","session_start_time":"2024-04-26 15:02:26 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:26Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to host=/controller/run user=postgres database=postgres: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
{"level":"debug","ts":"2024-04-26T15:02:26Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to host=/controller/run user=postgres database=postgres: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
{"level":"info","ts":"2024-04-26T15:02:26Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:26.667 UTC","user_name":"postgres","database_name":"postgres","process_id":"96","connection_from":"[local]","session_id":"662bc202.60","session_line_num":"1","session_start_time":"2024-04-26 15:02:26 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:26Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to host=/controller/run user=postgres database=postgres: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
{"level":"info","ts":"2024-04-26T15:02:26Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:26.971 UTC","user_name":"postgres","database_name":"postgres","process_id":"97","connection_from":"[local]","session_id":"662bc202.61","session_line_num":"1","session_start_time":"2024-04-26 15:02:26 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:27Z","msg":"Reconciling Cluster","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"100c303a-39ef-4aea-8ec6-b7f9548338ff","caller":"internal/management/controller/instance_controller.go:111","logging_pod":"pgv2-recovery3-9","cluster":{"apiVersion":"postgresql.cnpg.io/v1","kind":"Cluster","namespace":"pgdb","name":"pgv2-recovery3"}}
{"level":"debug","ts":"2024-04-26T15:02:27Z","msg":"Reconciling custom monitoring queries","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"100c303a-39ef-4aea-8ec6-b7f9548338ff","caller":"internal/management/controller/instance_controller.go:753","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:27Z","msg":"Updated replication settings","logging_pod":"pgv2-recovery3-9","filename":"override.conf"}
{"level":"info","ts":"2024-04-26T15:02:27Z","msg":"Updated replication settings","logging_pod":"pgv2-recovery3-9","filename":"override.conf"}
{"level":"info","ts":"2024-04-26T15:02:27Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:27.567 UTC","user_name":"postgres","database_name":"postgres","process_id":"99","connection_from":"[local]","session_id":"662bc203.63","session_line_num":"1","session_start_time":"2024-04-26 15:02:27 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:27Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:27.587 UTC","user_name":"postgres","database_name":"postgres","process_id":"100","connection_from":"[local]","session_id":"662bc203.64","session_line_num":"1","session_start_time":"2024-04-26 15:02:27 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:27Z","msg":"DB not available, will retry","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"100c303a-39ef-4aea-8ec6-b7f9548338ff","logging_pod":"pgv2-recovery3-9","err":"failed to connect to host=/controller/run user=postgres database=postgres: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
{"level":"debug","ts":"2024-04-26T15:02:28Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to host=/controller/run user=postgres database=postgres: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.362 UTC","user_name":"postgres","database_name":"postgres","process_id":"101","connection_from":"[local]","session_id":"662bc204.65","session_line_num":"1","session_start_time":"2024-04-26 15:02:28 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:28Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to host=/controller/run user=postgres database=postgres: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.563 UTC","user_name":"postgres","database_name":"postgres","process_id":"102","connection_from":"[local]","session_id":"662bc204.66","session_line_num":"1","session_start_time":"2024-04-26 15:02:28 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:28Z","msg":"Reconciling Cluster","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"3f344928-44eb-43b0-b2b8-61e91cede67f","caller":"internal/management/controller/instance_controller.go:111","logging_pod":"pgv2-recovery3-9","cluster":{"apiVersion":"postgresql.cnpg.io/v1","kind":"Cluster","namespace":"pgdb","name":"pgv2-recovery3"}}
{"level":"debug","ts":"2024-04-26T15:02:28Z","msg":"Reconciling custom monitoring queries","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"3f344928-44eb-43b0-b2b8-61e91cede67f","caller":"internal/management/controller/instance_controller.go:753","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.589 UTC","user_name":"postgres","database_name":"postgres","process_id":"103","connection_from":"[local]","session_id":"662bc204.67","session_line_num":"1","session_start_time":"2024-04-26 15:02:28 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:28Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to host=/controller/run user=postgres database=postgres: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Updated replication settings","logging_pod":"pgv2-recovery3-9","filename":"override.conf"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.669 UTC","user_name":"postgres","database_name":"postgres","process_id":"105","connection_from":"[local]","session_id":"662bc204.69","session_line_num":"1","session_start_time":"2024-04-26 15:02:28 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:28Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to host=/controller/run user=postgres database=postgres: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.724 UTC","user_name":"postgres","database_name":"postgres","process_id":"106","connection_from":"[local]","session_id":"662bc204.6a","session_line_num":"1","session_start_time":"2024-04-26 15:02:28 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.726 UTC","user_name":"postgres","database_name":"postgres","process_id":"107","connection_from":"[local]","session_id":"662bc204.6b","session_line_num":"1","session_start_time":"2024-04-26 15:02:28 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"DB not available, will retry","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"3f344928-44eb-43b0-b2b8-61e91cede67f","logging_pod":"pgv2-recovery3-9","err":"failed to connect to host=/controller/run user=postgres database=postgres: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"wal-restore","msg":"Restored WAL file","logging_pod":"pgv2-recovery3-9","walName":"000000060000059D00000052","startTime":"2024-04-26T15:02:25Z","endTime":"2024-04-26T15:02:28Z","elapsedWalTime":3.208852673}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"wal-restore","msg":"Set end-of-wal-stream flag as one of the WAL files to be prefetched was not found","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"wal-restore","msg":"WAL restore command completed (parallel)","logging_pod":"pgv2-recovery3-9","walName":"000000060000059D00000052","maxParallel":20,"successfulWalRestore":1,"failedWalRestore":19,"endOfWALStream":true,"startTime":"2024-04-26T15:02:25Z","downloadStartTime":"2024-04-26T15:02:25Z","downloadTotalTime":3.209284358,"totalTime":3.284845411}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.784 UTC","process_id":"25","session_id":"662bc200.19","session_line_num":"4","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"restored log file "000000060000059D00000052" from archive","backend_type":"startup","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.798 UTC","process_id":"25","session_id":"662bc200.19","session_line_num":"5","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"invalid resource manager ID in checkpoint record","backend_type":"startup","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.798 UTC","process_id":"25","session_id":"662bc200.19","session_line_num":"6","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"PANIC","sql_state_code":"XX000","message":"could not locate a valid checkpoint record","backend_type":"startup","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.803 UTC","process_id":"21","session_id":"662bc200.15","session_line_num":"6","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"startup process (PID 25) was terminated by signal 6: Aborted","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.803 UTC","process_id":"21","session_id":"662bc200.15","session_line_num":"7","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"aborting startup due to startup process failure","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.806 UTC","process_id":"21","session_id":"662bc200.15","session_line_num":"8","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"database system is shut down","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Extracting pg_controldata information","logging_pod":"pgv2-recovery3-9","reason":"postmaster has exited"}
{"level":"error","ts":"2024-04-26T15:02:28Z","msg":"PostgreSQL process exited with errors","logging_pod":"pgv2-recovery3-9","error":"exit status 1","stacktrace":"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:125\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/instance/run/lifecycle.(*PostgresLifecycle).Start\n\tinternal/cmd/manager/instance/run/lifecycle/lifecycle.go:98\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/manager/runnable_group.go:223"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Stopping and waiting for non leader election runnables"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Stopping and waiting for leader election runnables"}
{"level":"error","ts":"2024-04-26T15:02:28Z","msg":"error received after stop sequence was engaged","error":"exit status 1","stacktrace":"sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/manager/internal.go:490"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Webserver exited","logging_pod":"pgv2-recovery3-9","address":":8000"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Webserver exited","logging_pod":"pgv2-recovery3-9","address":"localhost:8010"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"roles_reconciler","msg":"Terminated RoleSynchronizer loop","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Webserver exited","logging_pod":"pgv2-recovery3-9","address":":9187"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Exited log pipe","fileName":"/controller/log/postgres.json","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Exited log pipe","fileName":"/controller/log/postgres.csv","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Exited log pipe","fileName":"/controller/log/postgres","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"All workers finished","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"All workers finished","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"All workers finished","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Stopping and waiting for caches"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"pkg/mod/k8s.io/client-go@v0.29.4/tools/cache/reflector.go:229: watch of *v1.Cluster ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Stopping and waiting for webhooks"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Stopping and waiting for HTTP servers"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Wait completed, proceeding to shutdown the manager"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"pg_controldata","msg":"pg_control version number: 1300\nCatalog version number: 202307071\nDatabase system identifier: 7345552334997741588\nDatabase cluster state: shut down\npg_control last modified: Fri 26 Apr 2024 03:01:24 PM UTC\nLatest checkpoint location: 59D/52000028\nLatest checkpoint's REDO location: 59D/52000028\nLatest checkpoint's REDO WAL file: 000000060000059D00000052\nLatest checkpoint's TimeLineID: 6\nLatest checkpoint's PrevTimeLineID: 6\nLatest checkpoint's full_page_writes: on\nLatest checkpoint's NextXID: 0:181526095\nLatest checkpoint's NextOID: 148741096\nLatest checkpoint's NextMultiXactId: 1\nLatest checkpoint's NextMultiOffset: 0\nLatest checkpoint's oldestXID: 722\nLatest checkpoint's oldestXID's DB: 1\nLatest checkpoint's oldestActiveXID: 0\nLatest checkpoint's oldestMultiXid: 1\nLatest checkpoint's oldestMulti's DB: 1\nLatest checkpoint's oldestCommitTsXid:0\nLatest checkpoint's newestCommitTsXid:0\nTime of latest checkpoint: Fri 26 Apr 2024 03:01:24 PM UTC\nFake LSN counter for unlogged rels: 0/3E8\nMinimum recovery ending location: 0/0\nMin recovery ending loc's timeline: 0\nBackup start location: 0/0\nBackup end location: 0/0\nEnd-of-backup record required: no\nwal_level setting: logical\nwal_log_hints setting: on\nmax_connections setting: 1200\nmax_worker_processes setting: 32\nmax_wal_senders setting: 10\nmax_prepared_xacts setting: 0\nmax_locks_per_xact setting: 64\ntrack_commit_timestamp setting: off\nMaximum data alignment: 8\nDatabase block size: 8192\nBlocks per segment of large relation: 131072\nWAL block size: 8192\nBytes per WAL segment: 16777216\nMaximum length of identifiers: 64\nMaximum columns in an index: 32\nMaximum size of a TOAST chunk: 1996\nSize of a large-object chunk: 2048\nDate/time type storage: 64-bit integers\nFloat8 argument passing: by value\nData page checksum version: 0\nMock authentication nonce: b2ae8cbe39949b76539a9f04f6903695bba7311148df10284169aa832cc898ae\n","pipe":"stdout","logging_pod":"pgv2-recovery3-9"}
Stream closed EOF for pgdb/pgv2-recovery3-9 (postgres)
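To pull the WAL-restore and fatal entries out of the JSON stream above, I used a jq filter along these lines (assuming each log line is a single JSON object; fromjson? skips the non-JSON lines such as the stream-closed marker):

# Keep only wal-restore entries and PANIC/FATAL Postgres records.
kubectl logs -n pgdb pgv2-recovery3-9 \
  | jq -Rc 'fromjson? | select(.logger == "wal-restore"
      or .record.error_severity == "PANIC"
      or .record.error_severity == "FATAL")'

The lines that stand out to me are "invalid resource manager ID in checkpoint record" followed by the PANIC "could not locate a valid checkpoint record", right after 000000060000059D00000052 was restored from the archive.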
{"level":"info","ts":"2024-04-26T15:02:27Z","msg":"DB not available, will retry","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"100c303a-39ef-4aea-8ec6-b7f9548338ff","logging_pod":"pgv2-recovery3-9","err":"failed to connect to
host=/controller/run user=postgres database=postgres
: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}{"level":"debug","ts":"2024-04-26T15:02:28Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to
host=/controller/run user=postgres database=postgres
: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.362 UTC","user_name":"postgres","database_name":"postgres","process_id":"101","connection_from":"[local]","session_id":"662bc204.65","session_line_num":"1","session_start_time":"2024-04-26 15:02:28 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:28Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to
host=/controller/run user=postgres database=postgres
: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.563 UTC","user_name":"postgres","database_name":"postgres","process_id":"102","connection_from":"[local]","session_id":"662bc204.66","session_line_num":"1","session_start_time":"2024-04-26 15:02:28 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:28Z","msg":"Reconciling Cluster","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"3f344928-44eb-43b0-b2b8-61e91cede67f","caller":"internal/management/controller/instance_controller.go:111","logging_pod":"pgv2-recovery3-9","cluster":{"apiVersion":"postgresql.cnpg.io/v1","kind":"Cluster","namespace":"pgdb","name":"pgv2-recovery3"}}
{"level":"debug","ts":"2024-04-26T15:02:28Z","msg":"Reconciling custom monitoring queries","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"3f344928-44eb-43b0-b2b8-61e91cede67f","caller":"internal/management/controller/instance_controller.go:753","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.589 UTC","user_name":"postgres","database_name":"postgres","process_id":"103","connection_from":"[local]","session_id":"662bc204.67","session_line_num":"1","session_start_time":"2024-04-26 15:02:28 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:28Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to
host=/controller/run user=postgres database=postgres
: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Updated replication settings","logging_pod":"pgv2-recovery3-9","filename":"override.conf"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.669 UTC","user_name":"postgres","database_name":"postgres","process_id":"105","connection_from":"[local]","session_id":"662bc204.69","session_line_num":"1","session_start_time":"2024-04-26 15:02:28 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"debug","ts":"2024-04-26T15:02:28Z","msg":"Instance status probe failing","caller":"pkg/management/log/log.go:178","logging_pod":"pgv2-recovery3-9","err":"failed to connect to
host=/controller/run user=postgres database=postgres
: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.724 UTC","user_name":"postgres","database_name":"postgres","process_id":"106","connection_from":"[local]","session_id":"662bc204.6a","session_line_num":"1","session_start_time":"2024-04-26 15:02:28 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.726 UTC","user_name":"postgres","database_name":"postgres","process_id":"107","connection_from":"[local]","session_id":"662bc204.6b","session_line_num":"1","session_start_time":"2024-04-26 15:02:28 UTC","transaction_id":"0","error_severity":"FATAL","sql_state_code":"57P03","message":"the database system is starting up","backend_type":"client backend","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"DB not available, will retry","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster","Cluster":{"name":"pgv2-recovery3","namespace":"pgdb"},"namespace":"pgdb","name":"pgv2-recovery3","reconcileID":"3f344928-44eb-43b0-b2b8-61e91cede67f","logging_pod":"pgv2-recovery3-9","err":"failed to connect to
host=/controller/run user=postgres database=postgres
: server error (FATAL: the database system is starting up (SQLSTATE 57P03))"}{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"wal-restore","msg":"Restored WAL file","logging_pod":"pgv2-recovery3-9","walName":"000000060000059D00000052","startTime":"2024-04-26T15:02:25Z","endTime":"2024-04-26T15:02:28Z","elapsedWalTime":3.208852673}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"wal-restore","msg":"Set end-of-wal-stream flag as one of the WAL files to be prefetched was not found","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"wal-restore","msg":"WAL restore command completed (parallel)","logging_pod":"pgv2-recovery3-9","walName":"000000060000059D00000052","maxParallel":20,"successfulWalRestore":1,"failedWalRestore":19,"endOfWALStream":true,"startTime":"2024-04-26T15:02:25Z","downloadStartTime":"2024-04-26T15:02:25Z","downloadTotalTime":3.209284358,"totalTime":3.284845411}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.784 UTC","process_id":"25","session_id":"662bc200.19","session_line_num":"4","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"restored log file "000000060000059D00000052" from archive","backend_type":"startup","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.798 UTC","process_id":"25","session_id":"662bc200.19","session_line_num":"5","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"invalid resource manager ID in checkpoint record","backend_type":"startup","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.798 UTC","process_id":"25","session_id":"662bc200.19","session_line_num":"6","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"PANIC","sql_state_code":"XX000","message":"could not locate a valid checkpoint record","backend_type":"startup","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.803 UTC","process_id":"21","session_id":"662bc200.15","session_line_num":"6","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"startup process (PID 25) was terminated by signal 6: Aborted","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.803 UTC","process_id":"21","session_id":"662bc200.15","session_line_num":"7","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"aborting startup due to startup process failure","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"postgres","msg":"record","logging_pod":"pgv2-recovery3-9","record":{"log_time":"2024-04-26 15:02:28.806 UTC","process_id":"21","session_id":"662bc200.15","session_line_num":"8","session_start_time":"2024-04-26 15:02:24 UTC","transaction_id":"0","error_severity":"LOG","sql_state_code":"00000","message":"database system is shut down","backend_type":"postmaster","query_id":"0"}}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Extracting pg_controldata information","logging_pod":"pgv2-recovery3-9","reason":"postmaster has exited"}
{"level":"error","ts":"2024-04-26T15:02:28Z","msg":"PostgreSQL process exited with errors","logging_pod":"pgv2-recovery3-9","error":"exit status 1","stacktrace":"github.com/cloudnative-pg/cloudnative-pg/pkg/management/log.(*logger).Error\n\tpkg/management/log/log.go:125\ngithub.com/cloudnative-pg/cloudnative-pg/internal/cmd/manager/instance/run/lifecycle.(*PostgresLifecycle).Start\n\tinternal/cmd/manager/instance/run/lifecycle/lifecycle.go:98\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/manager/runnable_group.go:223"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Stopping and waiting for non leader election runnables"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Stopping and waiting for leader election runnables"}
{"level":"error","ts":"2024-04-26T15:02:28Z","msg":"error received after stop sequence was engaged","error":"exit status 1","stacktrace":"sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.17.3/pkg/manager/internal.go:490"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Shutdown signal received, waiting for all workers to finish","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Webserver exited","logging_pod":"pgv2-recovery3-9","address":":8000"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Webserver exited","logging_pod":"pgv2-recovery3-9","address":"localhost:8010"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"roles_reconciler","msg":"Terminated RoleSynchronizer loop","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Webserver exited","logging_pod":"pgv2-recovery3-9","address":":9187"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Exited log pipe","fileName":"/controller/log/postgres.json","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Exited log pipe","fileName":"/controller/log/postgres.csv","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Exited log pipe","fileName":"/controller/log/postgres","logging_pod":"pgv2-recovery3-9"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"All workers finished","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"All workers finished","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"All workers finished","controller":"cluster","controllerGroup":"postgresql.cnpg.io","controllerKind":"Cluster"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Stopping and waiting for caches"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"pkg/mod/k8s.io/client-go@v0.29.4/tools/cache/reflector.go:229: watch of *v1.Cluster ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Stopping and waiting for webhooks"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Stopping and waiting for HTTP servers"}
{"level":"info","ts":"2024-04-26T15:02:28Z","msg":"Wait completed, proceeding to shutdown the manager"}
{"level":"info","ts":"2024-04-26T15:02:28Z","logger":"pg_controldata","msg":"pg_control version number: 1300\nCatalog version number: 202307071\nDatabase system identifier: 7345552334997741588\nDatabase cluster state: shut down\npg_control last modified: Fri 26 Apr 2024 03:01:24 PM UTC\nLatest checkpoint location: 59D/52000028\nLatest checkpoint's REDO location: 59D/52000028\nLatest checkpoint's REDO WAL file: 000000060000059D00000052\nLatest checkpoint's TimeLineID: 6\nLatest checkpoint's PrevTimeLineID: 6\nLatest checkpoint's full_page_writes: on\nLatest checkpoint's NextXID: 0:181526095\nLatest checkpoint's NextOID: 148741096\nLatest checkpoint's NextMultiXactId: 1\nLatest checkpoint's NextMultiOffset: 0\nLatest checkpoint's oldestXID: 722\nLatest checkpoint's oldestXID's DB: 1\nLatest checkpoint's oldestActiveXID: 0\nLatest checkpoint's oldestMultiXid: 1\nLatest checkpoint's oldestMulti's DB: 1\nLatest checkpoint's oldestCommitTsXid:0\nLatest checkpoint's newestCommitTsXid:0\nTime of latest checkpoint: Fri 26 Apr 2024 03:01:24 PM UTC\nFake LSN counter for unlogged rels: 0/3E8\nMinimum recovery ending location: 0/0\nMin recovery ending loc's timeline: 0\nBackup start location: 0/0\nBackup end location: 0/0\nEnd-of-backup record required: no\nwal_level setting: logical\nwal_log_hints setting: on\nmax_connections setting: 1200\nmax_worker_processes setting: 32\nmax_wal_senders setting: 10\nmax_prepared_xacts setting: 0\nmax_locks_per_xact setting: 64\ntrack_commit_timestamp setting: off\nMaximum data alignment: 8\nDatabase block size: 8192\nBlocks per segment of large relation: 131072\nWAL block size: 8192\nBytes per WAL segment: 16777216\nMaximum length of identifiers: 64\nMaximum columns in an index: 32\nMaximum size of a TOAST chunk: 1996\nSize of a large-object chunk: 2048\nDate/time type storage: 64-bit integers\nFloat8 argument passing: by value\nData page checksum version: 0\nMock authentication nonce: b2ae8cbe39949b76539a9f04f6903695bba7311148df10284169aa832cc898ae\n","pipe":"stdout","logging_pod":"pgv2-recovery3-9"}
Stream closed EOF for pgdb/pgv2-recovery3-9 (postgres)
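
To partly answer my own question about where to look: the most useful sources so far have been the instance pod logs above, the Cluster resource status, and the operator logs. A sketch of what I'm running (the cluster, namespace, and pod names are from my setup, the cnpg kubectl plugin is assumed installed, and the operator deployment/namespace names are the defaults, so adjust for your install):

# Cluster-level phase and conditions reported by the operator.
kubectl get cluster pgv2-recovery3 -n pgdb -o yaml    # inspect .status
kubectl cnpg status pgv2-recovery3 -n pgdb            # needs the cnpg plugin

# Logs from the crashed container run, plus pod-level events.
kubectl logs pgv2-recovery3-9 -n pgdb --previous
kubectl get events -n pgdb --field-selector involvedObject.name=pgv2-recovery3-9

# Operator-side view of the reconcile loop (default install names assumed).
kubectl logs -n cnpg-system deployment/cnpg-controller-manager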