
a2t bug #754

Open
ko120 opened this issue Oct 12, 2023 · 6 comments

ko120 commented Oct 12, 2023

Describe the bug
[screenshot of the error]

To Reproduce
Steps to reproduce the behavior:

  1. Run the following command: textattack ...
  2. Run the following code: ...
  3. See the error

Expected behavior
It asks us to input truncate_words_to.
Screenshots or Traceback
[screenshot of the traceback]

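Since the exact repro command is elided above, here is a rough sketch of one way this surfaces from Python; the checkpoint name is an arbitrary placeholder and the recipe usage is assumed from the TextAttack docs, not taken from this report:

    # Hypothetical repro sketch (not from the original report).
    # Building the A2T recipe constructs GreedyWordSwapWIR with
    # truncate_words_to, which fails on affected versions.
    import transformers

    from textattack.attack_recipes import A2TYoo2021
    from textattack.models.wrappers import HuggingFaceModelWrapper

    model = transformers.AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
    tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased")
    model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

    # Expected on affected versions:
    # TypeError: __init__() got an unexpected keyword argument 'truncate_words_to'
    attack = A2TYoo2021.build(model_wrapper)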

qiyanjun (Member) commented Nov 5, 2023

@jinyongyoo, mind taking a look?

jinyongyoo (Collaborator) commented:
@ko120 @qiyanjun

Looks like the issue is with the truncate_words_to keyword argument, which isn't part of GreedyWordSwapWIR. The argument was added in PR #747. @qiyanjun Could you share the background behind the PR and why that argument might have been added?
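For context, a minimal sketch of why construction fails (assuming an installed version whose GreedyWordSwapWIR.__init__ does not declare the argument): Python rejects unknown keyword arguments at call time.

    from textattack.search_methods import GreedyWordSwapWIR

    # On versions where __init__ does not declare truncate_words_to,
    # this raises at construction time:
    # TypeError: __init__() got an unexpected keyword argument 'truncate_words_to'
    search_method = GreedyWordSwapWIR(wir_method="gradient", truncate_words_to=512)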

xcegin commented Apr 25, 2024

I came across the same issue yesterday; surprised to see this hanging here since November. Any updates?

qiyanjun (Member) commented:
Sorry for the delay... I will take a careful look.

PR #747 added a max length constraint, replacing

    search_method = GreedyWordSwapWIR(wir_method="gradient")

with

    max_len = getattr(model_wrapper, "max_length", None) or min(
        1024,
        model_wrapper.tokenizer.model_max_length,
        model_wrapper.model.config.max_position_embeddings - 2,
    )
    search_method = GreedyWordSwapWIR(wir_method="gradient", truncate_words_to=max_len)

xcegin commented Apr 25, 2024

In the GreedyWordSwapWIR class from the PR here, I don't see a truncate_words_to argument in the __init__ method of the class. Maybe an oversight? (I am not a contributor, though.)
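Until that is resolved upstream, one possible stopgap is a thin shim subclass (hypothetical, not part of TextAttack) that accepts and stores the keyword; note that it performs no actual truncation:

    from textattack.search_methods import GreedyWordSwapWIR

    class TruncatableGreedyWordSwapWIR(GreedyWordSwapWIR):
        """Hypothetical shim: accepts truncate_words_to so callers that pass
        it do not crash. The value is stored but otherwise unused, i.e. no
        truncation is performed by this class."""

        def __init__(self, wir_method="unk", truncate_words_to=None, **kwargs):
            super().__init__(wir_method=wir_method, **kwargs)
            self.truncate_words_to = truncate_words_to

A patched recipe could then construct TruncatableGreedyWordSwapWIR in place of GreedyWordSwapWIR, or simply drop the argument.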

qiyanjun (Member) commented Apr 25, 2024 via email
