
Run docker image on any machine that has no internet connection #1602

Closed · glenbhermon opened this issue May 7, 2024 · 16 comments

@glenbhermon

I built the Docker image from the repo and I'm running it with the following commands:

mkdir -p ~/.cache
mkdir -p ~/save
mkdir -p ~/user_path
mkdir -p ~/db_dir_UserData
mkdir -p ~/users
mkdir -p ~/db_nonusers
mkdir -p ~/llamacpp_path
export GRADIO_SERVER_PORT=7860
export OPENAI_SERVER_PORT=5000                             
docker run \
       --gpus '"device=0"' \
       --runtime=nvidia \
       --shm-size=2g \
       -p $GRADIO_SERVER_PORT:$GRADIO_SERVER_PORT \
       -p $OPENAI_SERVER_PORT:$OPENAI_SERVER_PORT \
       --rm --init \
       --network host \
       -v /etc/passwd:/etc/passwd:ro \
       -v /etc/group:/etc/group:ro \
       -u `id -u`:`id -g` \
       -v "${HOME}"/.cache:/workspace/.cache \
       -v "${HOME}"/save:/workspace/save \
       -v "${HOME}"/user_path:/workspace/user_path \
       -v "${HOME}"/db_dir_UserData:/workspace/db_dir_UserData \
       -v "${HOME}"/users:/workspace/users \
       -v "${HOME}"/db_nonusers:/workspace/db_nonusers \
       -v "${HOME}"/llamacpp_path:/workspace/llamacpp_path \
       -e GRADIO_SERVER_PORT=$GRADIO_SERVER_PORT \
       h2ogpt_1 /workspace/generate.py \
          --base_model=mistralai/Mistral-7B-Instruct-v0.2 \
          --use_safetensors=True \
          --prompt_type=mistral \
          --save_dir='/workspace/save/' \
          --use_gpu_id=False \
          --user_path=/workspace/user_path \
          --langchain_mode="LLM" \
          --langchain_modes="['UserData', 'LLM', 'MyData']" \
          --score_model=None \
          --max_max_new_tokens=2048 \
          --max_new_tokens=1024 \
          --openai_port=$OPENAI_SERVER_PORT

When the container runs for the first time, it downloads the models, the OCR model, and some configs from the internet, and then runs successfully. But I want to run this Docker image on another machine that has no internet connection. How can I do that?
Can you provide the steps for creating a Docker image that has all the necessary configs, the OCR model, and the required language model baked in, so it can run on any machine without an internet connection? Please also share the list of commands for running this new image.

@pseudotensor (Collaborator)

Hi, can you review this first? Then see what else you need. Thanks.

https://github.com/h2oai/h2ogpt/blob/main/docs/README_offline.md

@glenbhermon (Author)

I went through the link you mentioned. It has all the necessary steps for offline use, but without Docker. I'm asking how to make a single image that has the base model, config, parsers, etc. in it, so that it can run h2ogpt offline successfully.

@glenbhermon (Author)

The command I ran after creating the Docker image:
sudo docker run --gpus '"device=0"' --runtime=nvidia --shm-size=2g -p $GRADIO_SERVER_PORT:$GRADIO_SERVER_PORT -p $OPENAI_SERVER_PORT:$OPENAI_SERVER_PORT --rm --init --network host -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro -u id -u:id -g -v "${HOME}"/.cache:/workspace/.cache -v "${HOME}"/save:/workspace/save -v "${HOME}"/user_path:/workspace/user_path -v "${HOME}"/db_dir_UserData:/workspace/db_dir_UserData -v "${HOME}"/users:/workspace/users -v "${HOME}"/db_nonusers:/workspace/db_nonusers -v "${HOME}"/llamacpp_path:/workspace/llamacpp_path -e GRADIO_SERVER_PORT=$GRADIO_SERVER_PORT h2ogpt_3 /workspace/generate.py --base_model=mistralai/Mistral-7B-Instruct-v0.2 --use_safetensors=True --prompt_type=mistral --save_dir='/workspace/save/' --use_gpu_id=False --user_path=/workspace/user_path --langchain_mode="LLM" --langchain_modes="['UserData', 'LLM', 'MyData']" --score_model=None --max_max_new_tokens=4096 --max_new_tokens=2048 --openai_port=$OPENAI_SERVER_PORT \

After running, it tried to connect to the internet to download the embedding model and threw the following error:

WARNING: Published ports are discarded when using host network mode
Using Model mistralai/mistral-7b-instruct-v0.2
fatal: not a git repository (or any of the parent directories): .git
git_hash.txt failed to be found: [Errno 2] No such file or directory: 'git_hash.txt'
Traceback (most recent call last):
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connection.py", line 198, in _new_conn
    sock = connection.create_connection(
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/util/connection.py", line 60, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/socket.py", line 955, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -3] Temporary failure in name resolution

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 793, in urlopen
    response = self._make_request(
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 491, in _make_request
    raise new_e
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
    self._validate_conn(conn)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1099, in _validate_conn
    conn.connect()
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connection.py", line 616, in connect
    self.sock = sock = self._new_conn()
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connection.py", line 205, in _new_conn
    raise NameResolutionError(self.host, self, e) from e
urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x71188cb4d780>: Failed to resolve 'huggingface.co' ([Errno -3] Temporary failure in name resolution)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 847, in urlopen
    retries = retries.increment(
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/hkunlp/instructor-large (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x71188cb4d780>: Failed to resolve 'huggingface.co' ([Errno -3] Temporary failure in name resolution)"))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/workspace/generate.py", line 20, in <module>
    entrypoint_main()
  File "/workspace/generate.py", line 16, in entrypoint_main
    H2O_Fire(main)
  File "/workspace/src/utils.py", line 72, in H2O_Fire
    fire.Fire(component=component, command=args)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/workspace/src/gen.py", line 1882, in main
    model=get_embedding(use_openai_embedding, hf_embedding_model=hf_embedding_model,
  File "/workspace/src/gpt_langchain.py", line 528, in get_embedding
    embedding = HuggingFaceInstructEmbeddings(model_name=hf_embedding_model,
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/langchain_community/embeddings/huggingface.py", line 158, in __init__
    self.client = INSTRUCTOR(
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 87, in __init__
    snapshot_download(model_name_or_path,
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/sentence_transformers/util.py", line 442, in snapshot_download
    model_info = _api.model_info(repo_id=repo_id, revision=revision, token=token)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2300, in model_info
    r = get_session().get(path, headers=headers, timeout=timeout, params=params)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
    return self.request("GET", url, **kwargs)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 66, in send
    return super().send(request, *args, **kwargs)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: (MaxRetryError('HTTPSConnectionPool(host=\'huggingface.co\', port=443): Max retries exceeded with url: /api/models/hkunlp/instructor-large (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x71188cb4d780>: Failed to resolve \'huggingface.co\' ([Errno -3] Temporary failure in name resolution)"))'), '(Request ID: 395f6b0b-82cc-4691-a4ce-1473ce23ebab)')

I also changed the embedding model via --hf_embedding_model=sentence-transformers/all-MiniLM-L6-v2, which is already in the .cache folder inside both the container and the local system, but it still tried to connect and threw the error.
[Screenshot from 2024-05-08 17-22-35: the same connection error]

@pseudotensor (Collaborator)

Did you follow these kinds of instructions?

https://github.com/h2oai/h2ogpt/blob/93ed01b5d704f668094ede6dffcd6d64f81d6aee/docs/README_offline.md

i.e. this env should be set:

TRANSFORMERS_OFFLINE=1

and it's useful to pass --gradio_offline_level=2 --share=False to h2oGPT.
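
For reference, a minimal sketch of a bare-metal offline run with these settings (assuming all models are already in the local cache; HF_HUB_OFFLINE is an extra env along the same lines, covered further down this thread):

export TRANSFORMERS_OFFLINE=1
export HF_HUB_OFFLINE=1
python generate.py \
   --base_model=mistralai/Mistral-7B-Instruct-v0.2 \
   --gradio_offline_level=2 \
   --share=False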

@glenbhermon (Author)

I followed all the steps you mentioned, but it didn't work. This is my docker command, including TRANSFORMERS_OFFLINE=1 and --gradio_offline_level=2 --share=False:

export HF_DATASETS_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
export GRADIO_SERVER_PORT=7860
export OPENAI_SERVER_PORT=5000
sudo docker run --gpus all --runtime=nvidia --shm-size=2g -p $GRADIO_SERVER_PORT:$GRADIO_SERVER_PORT -p $OPENAI_SERVER_PORT:$OPENAI_SERVER_PORT --rm --init --network host -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro -u `id -u`:`id -g` -v "${HOME}"/.cache:/workspace/.cache -v "${HOME}"/save:/workspace/save -v "${HOME}"/user_path:/workspace/user_path -v "${HOME}"/db_dir_UserData:/workspace/db_dir_UserData -v "${HOME}"/users:/workspace/users -v "${HOME}"/db_nonusers:/workspace/db_nonusers -v "${HOME}"/llamacpp_path:/workspace/llamacpp_path -e GRADIO_SERVER_PORT=$GRADIO_SERVER_PORT h2ogpt_1 /workspace/generate.py --base_model=mistralai/Mistral-7B-Instruct-v0.2 --use_safetensors=True --prompt_type=mistral --save_dir='/workspace/save/' --use_gpu_id=False --user_path=/workspace/user_path --langchain_mode="LLM" --langchain_modes="['UserData', 'MyData', 'LLM']" --score_model=None --max_max_new_tokens=2048 --max_new_tokens=1024  --visible_visible_models=False openai_port=$OPENAI_SERVER_PORT

After running, it threw the same error. I also tried changing the embedding model to all-MiniLM-L6-v2, but that didn't work either.

@pseudotensor (Collaborator)

You need to pass the envs as docker envs, like you already do for the gradio port:

-e GRADIO_SERVER_PORT=$GRADIO_SERVER_PORT \
-e TRANSFORMERS_OFFLINE=$TRANSFORMERS_OFFLINE \
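
(Envs exported on the host shell are not visible inside the container unless passed through with -e.)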

@glenbhermon (Author)

Modified command:
sudo docker run --gpus all --runtime=nvidia --shm-size=2g -e TRANSFORMERS_OFFLINE=$TRANSFORMERS_OFFLINE -e HF_DATASETS_OFFLINE=$HF_DATASETS_OFFLINE -p $GRADIO_SERVER_PORT:$GRADIO_SERVER_PORT -p $OPENAI_SERVER_PORT:$OPENAI_SERVER_PORT --rm --init --network host -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro -u id -u:id -g -v "${HOME}"/.cache:/workspace/.cache -v "${HOME}"/save:/workspace/save -v "${HOME}"/user_path:/workspace/user_path -v "${HOME}"/db_dir_UserData:/workspace/db_dir_UserData -v "${HOME}"/users:/workspace/users -v "${HOME}"/db_nonusers:/workspace/db_nonusers -v "${HOME}"/llamacpp_path:/workspace/llamacpp_path -e GRADIO_SERVER_PORT=$GRADIO_SERVER_PORT narad_3 /workspace/generate.py --base_model=mistralai/Mistral-7B-Instruct-v0.2 --use_safetensors=True --prompt_type=mistral --save_dir='/workspace/save/' --use_gpu_id=False --user_path=/workspace/user_path --langchain_mode="LLM" --langchain_modes="['UserData', 'MyData', 'LLM']" --score_model=None --max_max_new_tokens=2048 --max_new_tokens=1024 --visible_visible_models=False openai_port=$OPENAI_SERVER_PORT

The error I got this time:

WARNING: Published ports are discarded when using host network mode
Using Model mistralai/mistral-7b-instruct-v0.2
fatal: not a git repository (or any of the parent directories): .git
git_hash.txt failed to be found: [Errno 2] No such file or directory: 'git_hash.txt'
Traceback (most recent call last):
  File "/workspace/generate.py", line 20, in <module>
    entrypoint_main()
  File "/workspace/generate.py", line 16, in entrypoint_main
    H2O_Fire(main)
  File "/workspace/src/utils.py", line 72, in H2O_Fire
    fire.Fire(component=component, command=args)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/workspace/src/gen.py", line 1882, in main
    model=get_embedding(use_openai_embedding, hf_embedding_model=hf_embedding_model,
  File "/workspace/src/gpt_langchain.py", line 528, in get_embedding
    embedding = HuggingFaceInstructEmbeddings(model_name=hf_embedding_model,
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/langchain_community/embeddings/huggingface.py", line 158, in __init__
    self.client = INSTRUCTOR(
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 87, in __init__
    snapshot_download(model_name_or_path,
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/sentence_transformers/util.py", line 442, in snapshot_download
    model_info = _api.model_info(repo_id=repo_id, revision=revision, token=token)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2300, in model_info
    r = get_session().get(path, headers=headers, timeout=timeout, params=params)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
    return self.request("GET", url, **kwargs)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 77, in send
    raise OfflineModeIsEnabled(
huggingface_hub.errors.OfflineModeIsEnabled: Cannot reach https://huggingface.co/api/models/hkunlp/instructor-large: offline mode is enabled. To disable it, please unset the `HF_HUB_OFFLINE` environment variable.

@pseudotensor (Collaborator)

Good. Now that things function, the issue is that if you are offline, you need the models already in place in the cache. Because you do not, it fails. So you need to follow the offline docs for one of the ways to get those models into the cache locations. This can range from running online first and using the product, to the smart download route.
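
As a hedged sketch of the download route from a machine that does have internet (huggingface-cli ships with the huggingface_hub package; add --token if a repo requires it):

# on an internet-connected machine: pull the repos into the standard HF cache
pip install -U huggingface_hub
huggingface-cli download mistralai/Mistral-7B-Instruct-v0.2
huggingface-cli download hkunlp/instructor-large

# pack the caches (adjust to whichever cache dirs exist) and move them over
tar -C ~ -czf hf_cache.tgz .cache/huggingface .cache/torch
# then, on the offline machine:
tar -C ~ -xzf hf_cache.tgz

Note the older sentence_transformers cache under ~/.cache/torch/sentence_transformers has a different layout than the hub cache, so running once online and using the product remains the most reliable way to populate everything.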

@glenbhermon (Author)

  1. I already have all-MiniLM-L6-v2 in my container's .cache folder, and when I pass --hf_embedding_model=sentence-transformers--all-MiniLM-L6-v2 in the command, it shows the same error as before. Now it tries to download the all-MiniLM-L6-v2 model instead of hkunlp/instructor-large.
  2. My machine is behind a captive network, so I can't connect to the internet.

@pseudotensor (Collaborator)

It's not related to h2oGPT at this point. Just do this in Python:

from InstructorEmbedding import INSTRUCTOR
model = INSTRUCTOR('hkunlp/instructor-large')

That is what the h2oGPT -> langchain code is doing. I presume this fails the same way, and somehow you have to get that model into the right place.

If you place it somewhere manually because you have no internet, that probably won't be the right layout.

It seems you can set the SENTENCE_TRANSFORMERS_HOME env to specify the location. Otherwise it should be in the torch cache etc.; see the sentence_transformers package.
E.g. it would be located here by default if nothing is set:

from torch.hub import _get_torch_home
torch_cache_home = _get_torch_home()
print(torch_cache_home)

e.g. /home/jon/.cache/torch

which looks like:

(h2ogpt) jon@pseudotensor:~/h2ogpt$ ls -alrt /home/jon/.cache/torch
total 24
drwx------  6 jon docker 4096 Feb 24 18:17 pyannote/
drwx------  3 jon jon    4096 Feb 26 08:07 hub/
drwxrwxr-x  6 jon jon    4096 Feb 26 08:07 ./
drwxrwxr-x  2 jon jon    4096 Mar 21 13:09 kernels/
drwxrwxr-x  6 jon jon    4096 Apr 30 15:35 sentence_transformers/
drwx------ 47 jon jon    4096 May  9 23:40 ../
(h2ogpt) jon@pseudotensor:~/h2ogpt$ 
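
A quick way to verify the cache is picked up (the SENTENCE_TRANSFORMERS_HOME value is an assumption; point it at wherever your files actually live):

export SENTENCE_TRANSFORMERS_HOME="${HOME}/.cache/torch/sentence_transformers"
export HF_HUB_OFFLINE=1 TRANSFORMERS_OFFLINE=1
python -c "from InstructorEmbedding import INSTRUCTOR; INSTRUCTOR('hkunlp/instructor-large')"

If that loads with networking disabled, the files are in a place h2oGPT can also use.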

@glenbhermon (Author)

Yes! It is already set in the right place.

~/.cache/torch/sentence_transformers$ ls -alrt
total 12
drwxrwxr-x 4 sourav sourav 4096 May 6 11:39 ..
drwxr-xr-x 4 sourav sourav 4096 May 6 11:41 hkunlp_instructor-large
drwxr-xr-x 3 sourav sourav 4096 May 6 11:41 .

@glenbhermon (Author)

@pseudotensor please guide me through implementing this.

@pseudotensor (Collaborator)

Hi @glenbhermon I'm really not sure what is wrong if you have the files in the expected place.

UKPLab/sentence-transformers#1725
UKPLab/sentence-transformers#2345

It seems there should be no particular issue.

@pseudotensor (Collaborator) commented May 13, 2024

You have mistakes in your run line, like the missing -- before openai_port and the missing backticks around `id -u` and `id -g` for the user/group. Also, that model doesn't use safetensors, so you need to remove that line.

This works for me:

# 1) ensure $HOME/users and $HOME/db_nonusers are writeable by user running docker 

export TRANSFORMERS_OFFLINE=1
export GRADIO_SERVER_PORT=7860
export OPENAI_SERVER_PORT=5000
export HF_HUB_OFFLINE=1
docker run --gpus all \
--runtime=nvidia \
--shm-size=2g \
-e TRANSFORMERS_OFFLINE=$TRANSFORMERS_OFFLINE \
-e HUGGING_FACE_HUB_TOKEN=$HUGGING_FACE_HUB_TOKEN \
-e HF_HUB_OFFLINE=$HF_HUB_OFFLINE \
-e HF_HOME="/workspace/.cache/huggingface/" \
-p $GRADIO_SERVER_PORT:$GRADIO_SERVER_PORT \
-p $OPENAI_SERVER_PORT:$OPENAI_SERVER_PORT \
--rm --init \
--network host \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
-u `id -u`:`id -g` \
-v "${HOME}"/.cache/huggingface/:/workspace/.cache/huggingface \
-v "${HOME}"/.cache/torch/:/workspace/.cache/torch \
-v "${HOME}"/.cache/transformers/:/workspace/.cache/transformers \
-v "${HOME}"/save:/workspace/save \
-v "${HOME}"/user_path:/workspace/user_path \
-v "${HOME}"/db_dir_UserData:/workspace/db_dir_UserData \
-v "${HOME}"/users:/workspace/users \
-v "${HOME}"/db_nonusers:/workspace/db_nonusers \
-v "${HOME}"/llamacpp_path:/workspace/llamacpp_path \
-e GRADIO_SERVER_PORT=$GRADIO_SERVER_PORT \
 gcr.io/vorvan/h2oai/h2ogpt-runtime:0.2.0 \
 /workspace/generate.py \
 --base_model=mistralai/Mistral-7B-Instruct-v0.2 \
 --use_safetensors=False \
 --prompt_type=mistral \
 --save_dir='/workspace/save/' \
 --use_gpu_id=False \
 --user_path=/workspace/user_path \
 --langchain_mode="LLM" \
 --langchain_modes="['UserData', 'MyData', 'LLM']" \
 --score_model=None \
 --max_max_new_tokens=2048 \
 --max_new_tokens=1024 \
 --visible_visible_models=False \
 --openai_port=$OPENAI_SERVER_PORT

Depending on whether you use symlinks, this may require more specific mappings to the direct location, not the linked location, which cannot be used inside the container:

-v "${HOME}"/.cache/huggingface/hub:/workspace/.cache/huggingface/hub \
-v "${HOME}"/.cache:/workspace/.cache \
 -e TRANSFORMERS_CACHE="/workspace/.cache/" \
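
A hedged way to check whether the host cache paths are symlinks, and to find the real targets to mount:

# if any of these are symlinks, mount the resolved target instead
ls -ld ~/.cache/huggingface ~/.cache/torch
readlink -f ~/.cache/huggingface
readlink -f ~/.cache/torch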

@glenbhermon (Author)

I ran exactly the same command that you mentioned:
mkdir -p ~/.cache
mkdir -p ~/save
mkdir -p ~/user_path
mkdir -p ~/db_dir_UserData
mkdir -p ~/users
mkdir -p ~/db_nonusers
mkdir -p ~/llamacpp_path
export TRANSFORMERS_OFFLINE=1
export GRADIO_SERVER_PORT=7860
export OPENAI_SERVER_PORT=5000
export HF_HUB_OFFLINE=1
docker run \
   --gpus all \
   --runtime=nvidia \
   --shm-size=2g \
   -e TRANSFORMERS_OFFLINE=$TRANSFORMERS_OFFLINE \
   -e HF_HUB_OFFLINE=$HF_HUB_OFFLINE \
   -e HF_HOME="/workspace/.cache/huggingface/" \
   -p $GRADIO_SERVER_PORT:$GRADIO_SERVER_PORT \
   -p $OPENAI_SERVER_PORT:$OPENAI_SERVER_PORT \
   --rm --init \
   --network host \
   -v /etc/passwd:/etc/passwd:ro \
   -v /etc/group:/etc/group:ro \
   -u `id -u`:`id -g` \
   -v "${HOME}"/.cache/huggingface/:/workspace/.cache/huggingface \
   -v "${HOME}"/.cache/torch/:/workspace/.cache/torch \
   -v "${HOME}"/.cache/transformers/:/workspace/.cache/transformers \
   -v "${HOME}"/save:/workspace/save \
   -v "${HOME}"/user_path:/workspace/user_path \
   -v "${HOME}"/db_dir_UserData:/workspace/db_dir_UserData \
   -v "${HOME}"/users:/workspace/users \
   -v "${HOME}"/db_nonusers:/workspace/db_nonusers \
   -v "${HOME}"/llamacpp_path:/workspace/llamacpp_path \
   -e GRADIO_SERVER_PORT=$GRADIO_SERVER_PORT \
   h2ogpt_3 \
   /workspace/generate.py \
   --base_model=mistralai/Mistral-7B-Instruct-v0.2 \
   --use_safetensors=False \
   --prompt_type=mistral \
   --save_dir='/workspace/save/' \
   --use_gpu_id=False \
   --user_path=/workspace/user_path \
   --langchain_mode="LLM" \
   --langchain_modes="['UserData', 'MyData', 'LLM']" \
   --score_model=None \
   --max_max_new_tokens=2048 \
   --max_new_tokens=1024 \
   --visible_visible_models=False \
   --openai_port=$OPENAI_SERVER_PORT

And now getting the following error:

WARNING: Published ports are discarded when using host network mode
Using Model mistralai/mistral-7b-instruct-v0.2
fatal: not a git repository (or any of the parent directories): .git
git_hash.txt failed to be found: [Errno 2] No such file or directory: 'git_hash.txt'
load INSTRUCTOR_Transformer
max_seq_length 512
Starting get_model: mistralai/Mistral-7B-Instruct-v0.2
/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
warnings.warn(
Starting get_model: mistralai/Mistral-7B-Instruct-v0.2
Starting get_model: mistralai/Mistral-7B-Instruct-v0.2
Starting get_model: mistralai/Mistral-7B-Instruct-v0.2
Traceback (most recent call last):
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connection.py", line 198, in _new_conn
sock = connection.create_connection(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/util/connection.py", line 60, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -3] Temporary failure in name resolution

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 491, in _make_request
raise new_e
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1099, in _validate_conn
conn.connect()
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connection.py", line 616, in connect
self.sock = sock = self._new_conn()
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connection.py", line 205, in _new_conn
raise NameResolutionError(self.host, self, e) from e
urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x70cc207fb5b0>: Failed to resolve 'huggingface.co' ([Errno -3] Temporary failure in name resolution)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 847, in urlopen
retries = retries.increment(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /mistralai/Mistral-7B-Instruct-v0.2/resolve/main/tf_model.h5 (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x70cc207fb5b0>: Failed to resolve 'huggingface.co' ([Errno -3] Temporary failure in name resolution)"))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/workspace/generate.py", line 20, in
entrypoint_main()
File "/workspace/generate.py", line 16, in entrypoint_main
H2O_Fire(main)
File "/workspace/src/utils.py", line 72, in H2O_Fire
fire.Fire(component=component, command=args)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/workspace/src/gen.py", line 2212, in main
model0, tokenizer0, device = get_model_retry(reward_type=False,
File "/workspace/src/gen.py", line 2564, in get_model_retry
model1, tokenizer1, device1 = get_model(**kwargs)
File "/workspace/src/gen.py", line 3191, in get_model
return get_hf_model(load_8bit=load_8bit,
File "/workspace/src/gen.py", line 3408, in get_hf_model
model = model_loader(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
return model_class.from_pretrained(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3239, in from_pretrained
if has_file(pretrained_model_name_or_path, TF2_WEIGHTS_NAME, **has_file_kwargs):
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/utils/hub.py", line 627, in has_file
r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=10)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/api.py", line 100, in head
return request("head", url, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /mistralai/Mistral-7B-Instruct-v0.2/resolve/main/tf_model.h5 (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x70cc207fb5b0>: Failed to resolve 'huggingface.co' ([Errno -3] Temporary failure in name resolution)"))
I don't know why it is appending resolve/main/tf_model.h5 to the model name.

@glenbhermon (Author)

And when I add the following arguments:

-v "${HOME}"/.cache/huggingface/hub:/workspace/.cache/huggingface/hub \
-v "${HOME}"/.cache:/workspace/.cache \
-e TRANSFORMERS_CACHE="/workspace/.cache/" \

Then it shows the following error:
WARNING: Published ports are discarded when using host network mode
/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/utils/hub.py:124: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
warnings.warn(
You are offline and the cache for model files in Transformers v4.22.0 has been updated while your local cache seems to be the one of a previous version. It is very likely that all your calls to any from_pretrained() method will fail. Remove the offline mode and enable internet connection to have your cache be updated automatically, then you can go back to offline mode.
0it [00:00, ?it/s]
Using Model mistralai/mistral-7b-instruct-v0.2
fatal: not a git repository (or any of the parent directories): .git
git_hash.txt failed to be found: [Errno 2] No such file or directory: 'git_hash.txt'
load INSTRUCTOR_Transformer
max_seq_length 512
Starting get_model: mistralai/Mistral-7B-Instruct-v0.2
/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
warnings.warn(
Not using tokenizer from HuggingFace:

Traceback (most recent call last):
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/utils/hub.py", line 398, in cached_file
resolved_file = hf_hub_download(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1325, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1817, in _raise_on_head_call_error
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/workspace/src/gen.py", line 2300, in get_config
config = AutoConfig.from_pretrained(base_model, token=use_auth_token,
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1138, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/configuration_utils.py", line 631, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/configuration_utils.py", line 686, in _get_config_dict
resolved_config_file = cached_file(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/utils/hub.py", line 441, in cached_file
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like mistralai/Mistral-7B-Instruct-v0.2 is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
Starting get_model: mistralai/Mistral-7B-Instruct-v0.2
Not using tokenizer from HuggingFace:

Traceback (most recent call last):
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/utils/hub.py", line 398, in cached_file
resolved_file = hf_hub_download(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1325, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1817, in _raise_on_head_call_error
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/workspace/src/gen.py", line 2300, in get_config
config = AutoConfig.from_pretrained(base_model, token=use_auth_token,
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1138, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/configuration_utils.py", line 631, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/configuration_utils.py", line 686, in _get_config_dict
resolved_config_file = cached_file(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/utils/hub.py", line 441, in cached_file
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like mistralai/Mistral-7B-Instruct-v0.2 is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
Starting get_model: mistralai/Mistral-7B-Instruct-v0.2
Not using tokenizer from HuggingFace:

Traceback (most recent call last):
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/utils/hub.py", line 398, in cached_file
resolved_file = hf_hub_download(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1325, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1817, in _raise_on_head_call_error
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/workspace/src/gen.py", line 2300, in get_config
config = AutoConfig.from_pretrained(base_model, token=use_auth_token,
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1138, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/configuration_utils.py", line 631, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/configuration_utils.py", line 686, in _get_config_dict
resolved_config_file = cached_file(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/utils/hub.py", line 441, in cached_file
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like mistralai/Mistral-7B-Instruct-v0.2 is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
Starting get_model: mistralai/Mistral-7B-Instruct-v0.2
Not using tokenizer from HuggingFace:

Traceback (most recent call last):
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/utils/hub.py", line 398, in cached_file
resolved_file = hf_hub_download(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1325, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1817, in _raise_on_head_call_error
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/workspace/src/gen.py", line 2300, in get_config
config = AutoConfig.from_pretrained(base_model, token=use_auth_token,
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1138, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/configuration_utils.py", line 631, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/configuration_utils.py", line 686, in _get_config_dict
resolved_config_file = cached_file(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/utils/hub.py", line 441, in cached_file
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like mistralai/Mistral-7B-Instruct-v0.2 is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
Traceback (most recent call last):
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/utils/hub.py", line 398, in cached_file
resolved_file = hf_hub_download(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1325, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1817, in _raise_on_head_call_error
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/workspace/generate.py", line 20, in
entrypoint_main()
File "/workspace/generate.py", line 16, in entrypoint_main
H2O_Fire(main)
File "/workspace/src/utils.py", line 72, in H2O_Fire
fire.Fire(component=component, command=args)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/workspace/src/gen.py", line 2212, in main
model0, tokenizer0, device = get_model_retry(reward_type=False,
File "/workspace/src/gen.py", line 2564, in get_model_retry
model1, tokenizer1, device1 = get_model(**kwargs)
File "/workspace/src/gen.py", line 3191, in get_model
return get_hf_model(load_8bit=load_8bit,
File "/workspace/src/gen.py", line 3286, in get_hf_model
config, _, max_seq_len = get_config(base_model, return_model=False, raise_exception=True, **config_kwargs)
File "/workspace/src/gen.py", line 2300, in get_config
config = AutoConfig.from_pretrained(base_model, token=use_auth_token,
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1138, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/configuration_utils.py", line 631, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/configuration_utils.py", line 686, in _get_config_dict
resolved_config_file = cached_file(
File "/h2ogpt_conda/envs/h2ogpt/lib/python3.10/site-packages/transformers/utils/hub.py", line 441, in cached_file
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like mistralai/Mistral-7B-Instruct-v0.2 is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
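
For reference, a hedged sanity check of whether the weights are actually present in the mounted cache (models--org--name is the standard huggingface_hub cache layout):

# on the host: does the cache actually contain the model repo?
ls ~/.cache/huggingface/hub
# expected entry: models--mistralai--Mistral-7B-Instruct-v0.2
find ~/.cache/huggingface/hub -name config.json | head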
