
feat: support loading eetq quantized model #393

Draft · thincal wants to merge 2 commits into main
Conversation

thincal (Contributor) commented Apr 5, 2024

What does this PR do?

Fixes #391

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Was this discussed/approved via a GitHub issue or the Discord/Slack channel? Please add a link
    to it if that's the case.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@tgaddair thanks

thincal marked this pull request as draft on April 9, 2024
thincal (Author) commented Apr 9, 2024

Will revisit this after #399 is resolved.

thincal (Author) commented Apr 12, 2024

@SidaZh would it be possible for you to help review this integration?

@@ -226,6 +226,15 @@ def get_multi_weights_col(self, prefixes: List[Union[str, Tuple]], quantize: str
             bits, groupsize = self._get_gptq_params()
             weight = (qweight, qzeros, scales, g_idx, bits, groupsize, False)
+        elif quantize == "eetq":
+            try:
+                qweight = torch.cat(self.get_sharded_list("qweight", prefixes, dim=1), dim=1)
SidaZh reviewed on Apr 15, 2024:

Is it necessary to merge the weight parameters from multiple cards here? EETQ quantization involves two steps: quantization and a cutlass relayout. If the tensor is sliced or concatenated after the relayout, the layout is destroyed and the output will be wrong.
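
To illustrate the incompatibility, here is a toy sketch (the permutation below is only a stand-in for the real cutlass layout, which differs in detail): because the relayout reorders elements across the whole tensor, relaying out each shard and concatenating them does not match relaying out the merged weight.

import torch

def toy_relayout(w: torch.Tensor) -> torch.Tensor:
    # Stand-in for the cutlass reordering: a fixed, shape-dependent
    # permutation of the flattened elements. The real relayout differs,
    # but shares the property of reordering across the whole tensor.
    g = torch.Generator().manual_seed(0)
    perm = torch.randperm(w.numel(), generator=g)
    return w.flatten()[perm].reshape(w.shape)

full = torch.arange(16).reshape(4, 4)
left, right = full[:, :2], full[:, 2:]

whole = toy_relayout(full)  # relayout the merged weight
stitched = torch.cat([toy_relayout(left), toy_relayout(right)], dim=1)  # relayout shards, then concat
print(torch.equal(whole, stitched))  # False: the two layouts disagree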

SidaZh added:

There are two options: 1. Save and load with the same TP and PP strategies; 2. Load the basic per-channel quantization parameters and perform the relayout when initializing EETQLinear, which requires some development. The following two interfaces will help:

import torch
from EETQ import quant_weights, preprocess_weights

# Quantize to per-channel int8 and apply the cutlass relayout in one call
unprocessed_quantized_weight, processed_quantized_weight, scales = quant_weights(unquantized_weight, torch.int8, True)
# Relayout a weight that is already quantized (per-channel int8)
processed_quantized_weight = preprocess_weights(unprocessed_quantized_weight)
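
Building on that, a minimal sketch of option 2 for the sharded case: keep the checkpoint in plain per-channel int8 (no relayout), concatenate the shards as the PR's diff already does, and only then relayout inside the layer. EetqShardedLinear is a hypothetical name and the device placement is an assumption; only preprocess_weights and w8_a16_gemm are EETQ interfaces.

import torch
import torch.nn as nn
from EETQ import preprocess_weights, w8_a16_gemm

class EetqShardedLinear(nn.Module):
    # Hypothetical layer for illustration; not part of EETQ or this repo.
    def __init__(self, qweight_shards, scale_shards):
        super().__init__()
        # Shards hold plain per-channel int8 weights saved WITHOUT the
        # cutlass relayout, so concatenating them is safe.
        qweight = torch.cat(qweight_shards, dim=1)  # matches the PR's dim=1 concat
        scale = torch.cat(scale_shards, dim=0)      # one fp16 scale per output channel
        # Relayout once, at load time, on the merged tensor (device handling
        # here is an assumption; adjust to EETQ's actual requirements).
        self.weight = preprocess_weights(qweight.cpu()).cuda()
        self.scale = scale.cuda()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # int8-weight / fp16-activation GEMM provided by EETQ
        return w8_a16_gemm(x, self.weight, self.scale)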

Development

Successfully merging this pull request may close these issues:

Supporting inference with EETQ quantized model (#391)