
enable issue_discards=1 in /etc/lvm/lvm.conf? #514

Open
stormi opened this issue Sep 15, 2020 · 9 comments

@stormi
Contributor

stormi commented Sep 15, 2020

Several users have been asking us at XCP-ng to set issue_discards to 1 instead of 0 by default in lvm.conf. One of them has been running it that way on all their XenServer and XCP-ng servers for a long time.

See xcp-ng/xcp#123

This would only affect storage that handles discards, and would benefit a significant portion of it, such as SSDs or Ceph storage.
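For reference, the setting in question lives in the devices section of lvm.conf; a stock configuration (comments abridged) looks roughly like this:

```
# /etc/lvm/lvm.conf (excerpt)
devices {
    # Issue discards to an LV's underlying physical volume(s) when the LV
    # stops using the space, e.g. on lvremove or lvreduce.
    # Default is 0 (discards are not issued).
    issue_discards = 0
}
```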

Why is it set to 0 by default?

One of the users assumes that it may be related to hardware that suffers a performance penalty when discards are issued (based on the following extract from Citrix Hypervisor's documentation) and wonders whether current storage hardware still suffers from such a penalty.

Note: Reclaiming freed space is an intensive operation and can affect the performance of the storage array. You should perform this operation only when space reclamation is required on the array. Citrix recommends that you schedule this work outside the peak array demand hours.

@MarkSymsCtx
Contributor

MarkSymsCtx commented Sep 15, 2020

It is set to 0 because data corruption has been caused by SANs that implement the discard functionality incorrectly, so it isn't safe to have it on by default.

The performance issue is also valid. Some SANs are understood to lock an entire LUN or RAID set in order to complete a discard, which is obviously undesirable with active dynamic workloads.

Even if it could be proven that neither of these issues affects any of the storage currently available in our test labs, or any storage certified on the HCL, it would potentially be embarrassing to impact a customer with a previously working environment.

@stormi
Contributor Author

stormi commented Sep 15, 2020

Thanks. Is there a known list of such SANs?

@MarkSymsCtx
Contributor

Thanks. Is there a known list of such SANs?

Not that I'm aware of. This predates my involvement, so all I have to go on are the reasons I was given when asking the same questions of previous maintainers.

@MarkSymsCtx
Contributor

It might not be unreasonable to allow this to be enabled more easily than by modifying a config file, e.g. via a key in the xapi database; customers could then enable it (at their own risk) to ascertain whether it works correctly in their environment.

@stormi
Contributor Author

stormi commented Sep 15, 2020

Is that a setting that could be enabled on a per storage basis?

@MarkSymsCtx
Contributor

Is that a setting that could be enabled on a per storage basis?

It could, by optionally passing something like the following to the lvremove command:

--config devices{issue_discards=1}

This is exposed in the command API by the config_param parameter of the lvutil.remove method, so a call of lvutil.remove(<path>, config_param="issue_discards=1") would probably do the right thing.
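For illustration only, here is a rough sketch of how such a per-call override could be wired up by shelling out to lvremove directly; the helper name is hypothetical and this is not the actual sm code:

```python
import subprocess

def remove_lv_with_discard(lv_path, issue_discards=False):
    """Remove an LV, optionally overriding issue_discards for this call only.

    Hypothetical sketch of the approach described above: instead of changing
    /etc/lvm/lvm.conf globally, pass a per-command --config override so
    discards are only issued for storage known to handle them safely.
    """
    cmd = ["lvremove", "-f", lv_path]
    if issue_discards:
        # Per-command override, equivalent to issue_discards=1 in lvm.conf.
        cmd += ["--config", "devices{issue_discards=1}"]
    subprocess.check_call(cmd)

# Example: only enable discards for an SR the admin has opted in.
# remove_lv_with_discard("/dev/VG_XenStorage-example/VHD-example", issue_discards=True)
```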

@edwintorok
Contributor

Would the trim plugin help here?

@MarkSymsCtx
Contributor

That's sort of what the trim plugin does, but it's a manual operation: it trims the empty space of the VG by creating an LV that fills the entire free space, discarding it, and then removing it.
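As a rough sketch of that fill-and-discard approach (illustrative only, not the plugin's actual code; the VG and LV names are placeholders):

```python
import subprocess

def trim_vg_free_space(vg_name, lv_name="trim_tmp"):
    """Reclaim a VG's free space using the fill-and-discard approach above.

    Sketch only: create a temporary LV spanning all free extents, discard it,
    then remove it. Real code would need error handling and protection
    against concurrent changes to the VG.
    """
    lv_path = "/dev/%s/%s" % (vg_name, lv_name)
    # Allocate every free extent in the VG to a temporary LV.
    subprocess.check_call(["lvcreate", "-l", "100%FREE", "-n", lv_name, vg_name])
    try:
        # Discard the whole temporary LV, telling the array the space is unused.
        subprocess.check_call(["blkdiscard", lv_path])
    finally:
        # Remove the temporary LV, returning the extents to the free pool.
        subprocess.check_call(["lvremove", "-f", lv_path])
```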

@nagilum99

It might not be unreasonable to allow this to be enabled more easily than by modifying a config file, e.g. via a key in the xapi database; customers could then enable it (at their own risk) to ascertain whether it works correctly in their environment.

Any updates on that? SCSI discard is implemented in VMware, IMHO, and should really belong among the supported features in the 21st century. It's fine if it's not enabled by default, but a switch via XAPI management (to be used by Xen Center/XOA...) should really be offered.
