Is there an existing issue already for this bug?
I have searched for an existing issue, and could not find anything. I believe this is a new bug.
I have read the troubleshooting guide
I have read the troubleshooting guide and I think this is a new bug.
I am running a supported version of CloudNativePG
I have read the troubleshooting guide and I think this is a new bug.
Contact Details
No response
Version
1.23.0
What version of Kubernetes are you using?
1.27
What is your Kubernetes environment?
Cloud: Azure AKS
How did you install the operator?
YAML manifest
What happened?
Using CNPG 1.23.1 with the postgres image 13.14-18. When creating a replica by increasing the instance count by 1, I first create an online volume snapshot. Taking an online volume snapshot appears to create an issue with the WAL files. The replica spins up using the volume snapshot, but then gets stuck processing the WAL file created after the backup finished.
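For context, a minimal sketch of the kind of configuration involved, assuming the standard CloudNativePG image registry and an Azure CSI snapshot class; the storage size, snapshot class, and image path are placeholders, not the reporter's actual manifest:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cnpg-test2
spec:
  instances: 3                      # bumped by 1 to create the new replica
  imageName: ghcr.io/cloudnative-pg/postgresql:13.14-18   # assumed registry path
  storage:
    size: 32Gi                      # placeholder size
  backup:
    volumeSnapshot:
      className: csi-azuredisk-vsc  # assumed AKS VolumeSnapshotClass
      online: true                  # take the snapshot while the database is running
```

With online snapshots, the new replica is only consistent once it replays the WAL produced while and after the snapshot was taken, which is the point where the replica described above appears to get stuck.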
Relevant log output
"Restored WAL file","logging_pod":"cnpg-test2-2","walName":"000000060000027D00000033"
"Set end-of-wal-stream flag as one of the WAL files to be prefetched was not found"
"WAL restore command completed (parallel)","logging_pod":"cnpg-test2-2","walName":"000000060000027D00000033"
restored log file "000000060000027D00000033"
invalid resource manager ID 88 at 27D/CC0000A0
invalid resource manager ID 88 at 27D/CC0000A0
terminating walreceiver process due to administrator command
end-of-wal-stream flag found. Exiting with error once to let Postgres try switching to streaming replication
(repeats forever)
Code of Conduct
I agree to follow this project's Code of Conduct
Is this server under workload? Have you tried running the backup a few minutes after the server is up? (My fear is that you don't yet have the WAL file containing the checkpoint from the primary when you run that backup and create the replica.)
Can you please share the backup and the volume snapshot resources too? Thanks.
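For anyone following along, an online volume snapshot backup of this cluster would typically be requested with a Backup resource along these lines; this is a generic sketch with a placeholder name, not the reporter's actual resource:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: cnpg-test2-online-snapshot   # placeholder name
spec:
  cluster:
    name: cnpg-test2
  method: volumeSnapshot             # use CSI volume snapshots rather than the object store
```

The resulting VolumeSnapshot objects are what the maintainer is asking to see alongside the Backup itself.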