
vLLM: A Internal IP exposing maybe a threat #1484

Open
llmwesee opened this issue Mar 18, 2024 · 1 comment
Comments

@llmwesee

Details
While running my vLLM code, I checked the connections with netstat (netstat -ano) and noticed a self-referencing internal IP request, attached below. I tried to force the code to run on localhost only, so that no internal IP is used, by running this command:

HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 python -m vllm.entrypoints.openai.api_server --port=5000 --host=127.0.0.1 --model "/home/abcdt/.cache/huggingface/hub/models--meta-llama--Llama-2-13b-chat-hf/snapshots/c2f3ec81aac798ae26dcc57799a994dfbf521496" --tokenizer=hf-internal-testing/llama-tokenizer --tensor-parallel-size=1 --seed 1234 --max-num-batched-tokens=4096

but it still shows addresses in the 192.* range, which I don't want. Please look into this and provide a solution: I want my host address 127.0.0.1 to appear as the foreign address instead of my system IP 192.168.100.17.

[Screenshot from 2024-03-15 16-46-13: netstat -ano output]

PoC
While running the program and then checking the connections through netstat, it shows localhost with some internal IP, as well as a 192.168.* address.
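For reference, a minimal way to repeat this check (a sketch, assuming Linux and port 5000 as in the command above) is to filter the socket list for the server's port; either ss or netstat works, depending on what is installed:

# Listening TCP sockets on port 5000; the local address should read
# 127.0.0.1:5000 if the --host=127.0.0.1 bind took effect.
ss -ltn | grep ':5000'
# Equivalent with net-tools:
netstat -ltn | grep ':5000'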

Impact
The internal IP is exposed via the command line; the vulnerability is that the machine's internal IP is displayed.

@pseudotensor pseudotensor changed the title A Internal IP exposing maybe a threat vLLM: A Internal IP exposing maybe a threat Mar 18, 2024
@pseudotensor
Collaborator

pseudotensor commented Mar 18, 2024

There's nothing special we do here; it's just native vLLM, and I'm not aware of any such issue with vLLM. You can confirm by using the vLLM docker image instead, and raise an issue with the vLLM team if you think something is off. Or try a firewall and see what happens when you block those ports.
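As a rough illustration of the firewall suggestion (a sketch, assuming Linux with iptables and port 5000 as in the command above; not specific to this repo), the following drops traffic to the API port unless it arrives over the loopback interface:

# Drop TCP traffic to port 5000 that does not come in on the loopback
# interface, so only 127.0.0.1 clients can reach the API server.
sudo iptables -A INPUT -p tcp --dport 5000 ! -i lo -j DROP
# List the INPUT rules to confirm the rule is in place.
sudo iptables -L INPUT -n --line-numbers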

I'm not sure the environment variables you specify are valid for vLLM. Maybe review #1186, i.e. use an absolute path so it doesn't reach out for the model?
