
Exponential (GCP) CloudStorage growth #108

Open
Armyjeffries opened this issue May 8, 2024 · 1 comment

Armyjeffries commented May 8, 2024

After noticing my GCP Cloud Storage costs starting to increase, I found that the pi-hole is being backed up roughly twice daily. This was unnoticeable at first because the image was so small to start (~5 MB), but as the pi-hole persisted and kept logging, its size grew with it. From Oct 31 to May 6 the pi-hole image had reached >186 MB. It also appears that there is no retention policy outlined in the Ansible script (nor by default in GCP?). The accumulated backups ballooned to nearly 40 GB in ~6 months, and would potentially keep growing indefinitely.

Digging into the GCP playbook, there is a task named Pihole to backup timer with a [Timer] setting of OnUnitActiveSec=12h:

    - name: Pihole to backup timer
      ansible.builtin.blockinfile:
        path: /etc/systemd/system/pihole-to-backup.timer
        create: true
        owner: root
        group: root
        mode: '0644'
        block: |
          [Unit]
          Description=Archives and copies pihole to cloud storage

          [Timer]
          OnUnitActiveSec=12h
          Unit=pihole-to-backup.service

          [Install]
          WantedBy=multi-user.target

This value is "ok" (I changed it to 48 hrs, though arguably 24 hrs is more appropriate).
I believe adding a retention "policy" to the Ansible script would be appropriate.
I have begun testing an added task in the pihole-to-backup.yml file:

    - name: Delete old backups (keep the newest 30)
      ansible.builtin.shell: |
        gsutil ls gs://YOUR-BUCKET-NAME/pihole | sort -r | tail -n +31 | while read file; do
          gsutil rm "$file"
        done
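
Alternatively, a bucket lifecycle rule would handle retention without any cron-style cleanup. A minimal sketch, assuming the bucket has no other lifecycle configuration, reusing the same placeholder bucket name, and with an arbitrary 30-day example age:

    # Sketch: delete backup objects older than 30 days via a GCS lifecycle rule
    # (age value is an example, not a recommendation; bucket name is a placeholder).
    cat > pihole-backup-lifecycle.json <<'EOF'
    {
      "rule": [
        {
          "action": {"type": "Delete"},
          "condition": {"age": 30}
        }
      ]
    }
    EOF
    # Apply the rule to the backup bucket once:
    gsutil lifecycle set pihole-backup-lifecycle.json gs://YOUR-BUCKET-NAME

If retention is expressed this way, the cleanup task above would become unnecessary.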

langburd commented Jun 3, 2024

I can confirm that the same problem also exists in Oracle Cloud.

Bucket: cloudblock-bucket
Approximate Object Count: 145 versioned objects
Approximate Size: 15.19 GiB

Possible solutions:

  • Disable object versioning for the created bucket entirely (a sketch follows the existing lifecycle policy below)
  • Decrease the amount of time old object versions are kept

https://github.com/chadgeary/cloudblock/blob/67cd65dc8c1c5959fadf90124a6319b69bb9105f/oci/oci-storage.tf#L15C1-L27C2

resource "oci_objectstorage_object_lifecycle_policy" "ph-bucket-lifecycle" {
  bucket    = "${var.ph_prefix}-bucket"
  namespace = data.oci_objectstorage_namespace.ph-bucket-namespace.namespace
  rules {
    action      = "DELETE"
    is_enabled  = true
    name        = "${var.ph_prefix}-bucket-lifecycle"
    target      = "previous-object-versions"
    time_amount = 30
    time_unit   = "DAYS"
  }
  depends_on = [oci_identity_policy.ph-id-storageobject-policy, oci_objectstorage_bucket.ph-bucket]
}
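
For the first option, a rough sketch of what the bucket resource change could look like. The compartment variable name here is a guess and the rest of the existing ph-bucket arguments are omitted; also, on a bucket where versioning was already enabled, OCI only allows suspending it rather than fully disabling it:

# Sketch only: stop creating new object versions on the existing bucket.
resource "oci_objectstorage_bucket" "ph-bucket" {
  compartment_id = var.ph_compartment_ocid # assumption: the repo's actual variable name may differ
  name           = "${var.ph_prefix}-bucket"
  namespace      = data.oci_objectstorage_namespace.ph-bucket-namespace.namespace
  versioning     = "Suspended"
}

Existing previous versions would still be cleaned up by the lifecycle policy above; suspending versioning only stops new ones from accumulating.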
