This repository has been archived by the owner on Nov 3, 2023. It is now read-only.

BlenderBotSmall fluency #5084

Open
Lkh97 opened this issue Oct 16, 2023 · 1 comment

Comments

@Lkh97

Lkh97 commented Oct 16, 2023

Hi there. I have a question about BlenderBot Small 90M.

I have applied a safety framework to BlenderBot Small to force safe generations. Now I need to measure the fluency of the generated safe answers. The common practice in this case is to feed my generations as labels to a larger model and compute perplexity. I tried this with Llama 2, but the resulting perplexities are very high, around 400k. I assume the reason is the large gap between the two model sizes (BlenderBot Small vs. Llama 2). How do you think I could measure the fluency of answers generated by BlenderBot Small?
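For context on why a value like 400k is suspicious: perplexity is the exponential of the average negative log-likelihood per token, so a PPL of 400,000 means the scoring model assigns each token an average probability of roughly 1/400,000, which almost always points to a tokenizer/label mismatch rather than disfluent text. A minimal sketch of the computation (pure Python; the `perplexity` helper name is hypothetical, and the token log-probabilities would come from whatever scoring model you use):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    assert token_logprobs, "need at least one token log-probability"
    mean_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(mean_nll)

# Example: if the scoring model gives every token probability 0.1
# (log-prob ~ -2.3), the perplexity is 10 -- a plausible value for
# fluent text, far below the hundreds of thousands reported above.
print(perplexity([math.log(0.1)] * 5))  # ≈ 10.0
```

A quick sanity check with this formula is to score a plainly fluent reference sentence through the same pipeline: if it also comes out in the hundreds of thousands, the scoring setup (tokenization, label alignment) is the problem, not the generations.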

@mojtaba-komeili
Contributor

I believe there must be something wrong in your process. A PPL of that order is unreasonable.


2 participants