
Why is the calculation result of tensorrt-llm version llava1.5 different from the output of HF? #1572

Open
2 of 4 tasks
bleedingfight opened this issue May 10, 2024 · 6 comments
Assignees
Labels
triaged Issue has been triaged by maintainers

Comments

@bleedingfight

System Info

  • tensorrt-llm: 0.9.0.dev2024022700
  • GPU: L40S
  • tensorrt-llm docker image
  • driver: 535.129.03

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

  1. Train the llava model using official llava code
  2. Convert the official llava model to the huggingface llava model
  3. Convert huggingface llava model to Tensorrt-llm
    benchmark (mmbench_cn):
    With the official llava model as ground truth, hf-llava scores 99% (the HF model's outputs agree with the official model's in 99.9% of cases), while llava-trt scores 93%. (In what follows, the hf-llava output is taken as the reference.)
    I picked a random example where the TensorRT output differs from the HF output: trt-llava answers B, hf-llava answers A.
    I tried the following approaches, but none of them solved the problem:
  4. Is a calculation error in the CLIP model responsible for the incorrect final result? (No.)
  5. Hugging Face computes the visual_tower output in float32 and then converts it to float16 through the mmprojector as input to the language model. The language model's inputs_embeds are not identical; even when I feed HF's inputs_embeds into TensorRT-LLM, the output is still B rather than llava-hf's A.
    language_model input is same:
    Screenshot_20240510_111827
    hf output:
    Screenshot_20240510_152658
    llava-trt:
    Screenshot_20240510_152650
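
To rule out a mismatch at the embedding boundary (step 5 above), one way to compare the two pipelines' inputs_embeds dumps numerically is sketched below. This is a hedged, self-contained sketch: the names hf_embeds/trt_embeds, the fabricated data, and the tolerance are assumptions, not part of either API.

```python
# Hypothetical check that the inputs_embeds fed to the language model by
# the HF pipeline and by TensorRT-LLM actually agree. In practice the two
# arrays would be dumped from each pipeline (e.g. torch.save -> np.load);
# here we fabricate data so the sketch runs on its own.
import numpy as np

def embeds_match(hf_embeds, trt_embeds, atol=1e-3):
    """Return (max absolute difference, True if within tolerance)."""
    diff = np.abs(hf_embeds.astype(np.float32) - trt_embeds.astype(np.float32))
    return float(diff.max()), bool(diff.max() <= atol)

# Identical fp16 embeddings should match exactly (max diff 0.0).
rng = np.random.default_rng(0)
a = rng.standard_normal((1, 8, 16)).astype(np.float16)
max_diff, ok = embeds_match(a, a.copy())
```

If the embeddings match within tolerance but the generated token still differs, the divergence is inside the language-model forward pass rather than in the vision/projection path.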

Expected behavior

The LLM should produce the same output given the same input:
'\nPeople can use the engineering design process to solve problems. One step in that process is testing whether a potential solution meets the design requirements.\nThe passage below describes how the engineering design process was used to test a solution to a problem. Read the passage, then answer the question that follows.\n\nDevin is a mechanical engineer designing a weather station that records temperature, precipitation, and wind speed. The station will be used in a town whose record high temperature is 40°C. Devin wants to make sure the station keeps working even in unusually hot weather.\nSo he sets an indoor test chamber to 50°C, with low humidity and no wind, and leaves the weather station in the chamber overnight. The next day, he checks whether the station still shows accurate measurements after 24 hours at 50°C.\nFigure: a weather station.\nWhich of the following could Devin's test show?\nA. whether the weather station works when the temperature is 50°C\nB. how well the weather station works when it is windy\nPlease answer with the option letter only.

actual behavior

trt-llm:output(B)[[ 1, 319, 13563, 1546, 263, 12758, 1404, 322, 385, 23116.......]
(Pdb) output_ids[0,0,:2]
tensor([ 1, 319], device='cuda:0', dtype=torch.int32)
(Pdb) output_ids[0,0,input_lengths[0]:]
tensor([350, 2, 2, ..., 2, 2, 2], device='cuda:0',
dtype=torch.int32)

additional notes

hf:output(A)[tensor([319, 2], device='cuda:0')]
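
For reference, the generated answer token above (350, which decodes to "B") comes from slicing the prompt off output_ids and dropping the trailing end-of-sequence padding (token id 2 for LLaMA-family models). A minimal sketch of that slicing, using plain numpy arrays rather than the real runtime tensors:

```python
# Sketch of extracting the generated tokens shown in the pdb session:
# slice off the prompt (input_length tokens), then stop at the first EOS.
# The array below is illustrative, not a real model output.
import numpy as np

EOS_ID = 2  # LLaMA end-of-sequence token id

def extract_generation(output_ids, input_length, eos_id=EOS_ID):
    """output_ids: (beams, seq_len) int array for one request; beam 0 used."""
    gen = output_ids[0, input_length:]
    keep = gen != eos_id
    if not keep.all():
        # Truncate at the first EOS instead of filtering every occurrence.
        gen = gen[: int(np.argmin(keep))]
    return gen.tolist()

# Prompt [1, 319], then answer token 350 followed by EOS padding.
out = np.array([[1, 319, 350, 2, 2, 2]], dtype=np.int32)
```

Here `extract_generation(out, 2)` yields `[350]`, matching the trt-llm output shown above.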

@bleedingfight bleedingfight added the bug Something isn't working label May 10, 2024
@byshiue
Collaborator

byshiue commented May 10, 2024

Due to differences in kernel selection and kernel implementation, TensorRT-LLM often generates slightly different results. Unless there is an obvious accuracy regression, we consider this reasonable.

@byshiue byshiue self-assigned this May 10, 2024
@byshiue byshiue added triaged Issue has been triaged by maintainers and removed bug Something isn't working labels May 10, 2024
@bleedingfight
Author

@byshiue I have seen a significant decrease in output accuracy from TRT on my test set. I would like to know how you determined that the TRT output is reasonable.

@byshiue
Collaborator

byshiue commented May 17, 2024

We evaluate with MMLU and a summarization task. Could you try to reproduce the accuracy drop on a public model with a public example, and share your reproduction steps so that it is easier for us to reproduce your issue?
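
To make the regression concrete before sharing a reproduction, one simple (hypothetical) measurement is the per-question agreement rate between the two backends on the same benchmark split. The answer lists below are illustrative placeholders, not real mmbench_cn results:

```python
# Hypothetical agreement metric: fraction of questions where the HF model
# and the TRT-LLM engine pick the same option letter. With hf-llava taken
# as the reference, this reproduces the kind of 99% vs 93% gap reported.
def agreement(preds_a, preds_b):
    """Fraction of positions where the two prediction lists agree."""
    assert len(preds_a) == len(preds_b), "prediction lists must align"
    same = sum(x == y for x, y in zip(preds_a, preds_b))
    return same / len(preds_a)

hf  = ["A", "B", "C", "A"]   # illustrative answers only
trt = ["A", "B", "B", "A"]   # disagrees on one of four questions
```

Reporting this number alongside a handful of disagreeing prompt/answer pairs gives maintainers a concrete target to reproduce.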

@bleedingfight
Author

@byshiue Thank you very much for your reply, and I'm sorry for the delayed response. Over the past few days I have been trying to put together a Docker image and a minimal reproduction. For convenience, I upgraded to the latest TRT-LLM (0.11.0.dev2024051400), and now my previous code no longer produces results.
Screenshot_20240521_112648 (it looks like #1632)

@byshiue
Collaborator

byshiue commented May 23, 2024

temperature 0.0 is not a valid value in current TensorRT-LLM. Please use greedy search and don't set temperature directly, or set a very small temperature such as 1e-6.
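
The reason temperature 0.0 is invalid: logits are divided by the temperature before the softmax, so 0.0 divides by zero, while a tiny value like 1e-6 sharpens the distribution onto the argmax token, matching greedy search. A pure-numpy illustration of this equivalence (not TensorRT-LLM's actual kernel code):

```python
# Why temperature=0.0 breaks but temperature=1e-6 behaves like greedy:
# logits / temperature is undefined at 0.0, while a tiny temperature
# pushes essentially all probability mass onto the largest logit.
import numpy as np

def sample(logits, temperature=None):
    if temperature is None:           # greedy decoding: plain argmax
        return int(np.argmax(logits))
    scaled = logits / temperature     # temperature=0.0 would divide by zero
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(np.argmax(probs))      # deterministic pick for illustration

logits = np.array([1.0, 3.5, 2.0])
```

Both `sample(logits)` and `sample(logits, 1e-6)` select index 1, the largest logit.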

@bleedingfight
Author

@byshiue I modified temperature=1e-6 as you suggested, but I found that every inference after the first one that produces output fails; self.tokenizer.batch_decode(output_ids[0, :, input_lengths[0]:]) raises an error:
image
like #1299, but my trt version is 0.11.
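
One way to guard the failing batch_decode call while debugging is to clamp the slice bounds to the valid range before handing tokens to the tokenizer. `safe_slice` below is a hypothetical helper, not part of TensorRT-LLM; it only illustrates how an out-of-range input_lengths value silently produces an empty token list instead of an exception:

```python
# Hypothetical guard: clamp [input_length, seq_length) to the row's real
# extent so a stale or out-of-range length cannot crash the decode step.
def safe_slice(output_ids_row, input_length, seq_length):
    """Return the generated-token span of one beam's output_ids row."""
    start = max(0, min(input_length, len(output_ids_row)))
    end = max(start, min(seq_length, len(output_ids_row)))
    return output_ids_row[start:end]

row = [1, 319, 350, 2, 2]  # illustrative: prompt [1, 319], generation [350, 2]
```

If this guard returns an empty list on the second and later inferences, the bug is in how input_lengths/seq_lengths are refreshed between runs rather than in the tokenizer.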
