
Issue with Qwen2.5-VL-7B #131

Open
ffanyt opened this issue Feb 21, 2025 · 2 comments
Labels
bug Something isn't working

Comments

ffanyt commented Feb 21, 2025

My command is:

```shell
vllm serve /home/share/models/Qwen2.5-VL-3B-Instruct \
  --limit_mm_per_prompt image=5 \
  --dtype float16 \
  --port 10004 \
  --tensor-parallel-size 4 \
  --gpu-memory-utilization 0.9 \
  --max-model-len 32768
```

I found that the 7B model responds with nothing but exclamation marks.

[screenshot: 7B model output consisting of exclamation marks]

The 3B model, on the other hand, works fine.

@wangxiyuan wangxiyuan added the bug Something isn't working label Feb 21, 2025
wangxiyuan (Collaborator) commented:

@ganyi1996ppo Any update?

ganyi1996ppo (Collaborator) commented:

> (quoting the original report above)

Thanks for reporting this issue. Are you using the latest release version of vllm-ascend? If so, you could try setting a specific seed on the request; this might help with the accuracy issue.
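The seed suggestion can be tried without changing the server command: vLLM's OpenAI-compatible `/v1/chat/completions` endpoint accepts an optional `seed` field that fixes the sampling RNG. A minimal sketch of such a request payload, assuming the server from the command above is listening on port 10004 (the 7B model path and the seed value are illustrative, not from the thread):

```python
import json

# Request payload for vLLM's OpenAI-compatible chat endpoint.
# The "seed" field makes sampling deterministic for this request,
# which is the workaround suggested above. The model path below is
# a hypothetical 7B path mirroring the reporter's 3B one.
payload = {
    "model": "/home/share/models/Qwen2.5-VL-7B-Instruct",
    "messages": [
        {"role": "user", "content": "Describe this image."},
    ],
    "seed": 42,          # fixed seed for reproducible sampling
    "temperature": 0.7,
}

# To actually send it against the running server, e.g.:
#   curl http://localhost:10004/v1/chat/completions \
#     -H "Content-Type: application/json" \
#     -d '<this JSON>'
print(json.dumps(payload, indent=2))
```

Repeating the same request with the same `seed` should then produce identical output, which helps separate a genuine accuracy bug from sampling noise.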

3 participants