[BUG] 40Gb of longhorn metadata in a pv? #8472
-
Can you elaborate more on your question?
-
Hello @derekbit, the issue is that I do not understand the discrepancy between the real usage of the PV and the actual size displayed by Longhorn. I believe a 40 GB difference isn't normal. It isn't trimmable, and it isn't an accumulation-of-snapshots issue.
-
For the actual size, please see the official documentation: https://longhorn.io/docs/1.6.1/nodes-and-volumes/volumes/volume-size/
-
Yes, I'm aware of that, which is why I tried running the filesystem trim multiple times, but it won't trim. I do not understand why there is this 40 GB overhead that won't go away.
-
You have a 90 GiB snapshot. That's why your actual size is larger than 90 GiB.
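To illustrate the point above, here is a minimal sketch (not Longhorn's actual code) of the accounting described in the volume-size documentation: a volume's actual size is roughly the space consumed by each snapshot plus the volume head, so a large snapshot keeps the reported actual size high even after trimming shrinks the head.

```python
GIB = 1024 ** 3  # bytes in one GiB

def actual_size(snapshot_sizes_bytes, head_size_bytes):
    """Approximate the actual size reported for a volume: the sum of
    all snapshot sizes plus the size of the live volume head.
    Illustrative only; Longhorn computes this from replica disk usage."""
    return sum(snapshot_sizes_bytes) + head_size_bytes

# A 90 GiB snapshot plus even a small, freshly trimmed head still
# reports more than 90 GiB of actual size.
print(actual_size([90 * GIB], 2 * GIB) / GIB)  # → 92.0
```

This is why `fstrim` alone cannot reduce the reported size here: trimming only affects the volume head, while the space held by the existing snapshot is released only when that snapshot is deleted and purged.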
-
Describe the bug
Unclear size of my PV: when it should be around 50 GB, it is 90 GB.
To Reproduce
Import a database dump into a Longhorn-backed database workload and then perform some setup operations.
Expected behavior
I expect the PV to be close to the size of the database, not +30/+40 GB larger.
Support bundle for troubleshooting
supportbundle_0e2fe36f-873b-4027-bc13-635cc8f1d41c_2024-04-29T07-40-29Z.zip
Environment
pvc-97034b0b-17a5-4c6d-9629-6e7bb029c501
Additional context
Steps already tried to "shrink" the actual used size:
Tried to create a snapshot and then delete a snapshot manually, without luck.
I expect the Longhorn volume to be close to these values: