
Configured as described, but the model runs on the CPU #21

Open
bmw515i opened this issue Jun 8, 2023 · 7 comments
Labels
good first issue Good for newcomers

Comments

@bmw515i

bmw515i commented Jun 8, 2023

I used the configuration you provided, but subtitle generation turned out to be very slow: CPU usage was very high, while CUDA and VRAM sat idle. How do I make sure it runs on the GPU?

@alexbar445

Same here, but I solved it. Check whether you installed the GPU version of PyTorch.
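A quick way to verify which build is installed (a minimal sketch; `torch_build_status` is a name made up for illustration, not part of the project):

```python
import importlib.util


def torch_build_status() -> str:
    """Report whether the installed PyTorch build can see a CUDA GPU."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch
    if torch.cuda.is_available():
        return f"GPU build: CUDA {torch.version.cuda}, device {torch.cuda.get_device_name(0)}"
    return "CPU-only build (or no visible CUDA device)"


print(torch_build_status())
```

If this prints "CPU-only build" even though you have an NVIDIA card, the CPU wheel of torch is what got installed, which matches the symptom described in this issue.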

@bmw515i
Author

bmw515i commented Jul 7, 2023 via email

@Koenigsegg-One1

Same here, but I solved it. Check whether you installed the GPU version of PyTorch.

Did you all configure and run it step by step following the README? Going through the whole process, I get the error ValueError: embedded null character.

@bmw515i
Author

bmw515i commented Aug 7, 2023 via email

@Hulkhao

Hulkhao commented Sep 1, 2023

NVIDIA 3060 card; Stable Diffusion runs fine on it.
When I launch the software, the CPU is pegged at 100%.
After a while it throws an error:

[Processing] Starting subtitle generation; this step may take a long time, please wait patiently...
Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\lib\site-packages\urllib3\connectionpool.py", line 700, in urlopen
    self._prepare_proxy(conn)
  File "C:\ProgramData\Miniconda3\lib\site-packages\urllib3\connectionpool.py", line 994, in _prepare_proxy
    conn.connect()
  File "C:\ProgramData\Miniconda3\lib\site-packages\urllib3\connection.py", line 364, in connect
    self.sock = conn = self._connect_tls_proxy(hostname, conn)
  File "C:\ProgramData\Miniconda3\lib\site-packages\urllib3\connection.py", line 499, in _connect_tls_proxy
    socket = ssl_wrap_socket(
  File "C:\ProgramData\Miniconda3\lib\site-packages\urllib3\util\ssl_.py", line 453, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
  File "C:\ProgramData\Miniconda3\lib\site-packages\urllib3\util\ssl_.py", line 495, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock)
  File "C:\ProgramData\Miniconda3\lib\ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "C:\ProgramData\Miniconda3\lib\ssl.py", line 1040, in _create
    self.do_handshake()
  File "C:\ProgramData\Miniconda3\lib\ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1131)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\lib\site-packages\requests\adapters.py", line 440, in send
    resp = conn.urlopen(
  File "C:\ProgramData\Miniconda3\lib\site-packages\urllib3\connectionpool.py", line 785, in urlopen
    retries = retries.increment(
  File "C:\ProgramData\Miniconda3\lib\site-packages\urllib3\util\retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1131)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "C:\ProgramData\Miniconda3\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "gui.py", line 175, in task
    self.sg.run()
  File "C:\AIGC\video-subtitle-generator\backend\main.py", line 227, in run
    transcript = recognizer(data)
  File "C:\AIGC\video-subtitle-generator\backend\main.py", line 40, in __call__
    _, probs = self.model.detect_language(mel)
  File "C:\ProgramData\Miniconda3\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\AIGC\video-subtitle-generator\backend\whisper\decoding.py", line 35, in detect_language
    tokenizer = get_tokenizer(model.is_multilingual)
  File "C:\AIGC\video-subtitle-generator\backend\whisper\tokenizer.py", line 385, in get_tokenizer
    encoding = get_encoding(name=encoding_name)
  File "C:\AIGC\video-subtitle-generator\backend\whisper\tokenizer.py", line 355, in get_encoding
    pat_str=gpt2()["pat_str"],
  File "C:\ProgramData\Miniconda3\lib\site-packages\tiktoken_ext\openai_public.py", line 11, in gpt2
    mergeable_ranks = data_gym_to_mergeable_bpe_ranks(
  File "C:\ProgramData\Miniconda3\lib\site-packages\tiktoken\load.py", line 75, in data_gym_to_mergeable_bpe_ranks
    vocab_bpe_contents = read_file_cached(vocab_bpe_file).decode()
  File "C:\ProgramData\Miniconda3\lib\site-packages\tiktoken\load.py", line 48, in read_file_cached
    contents = read_file(blobpath)
  File "C:\ProgramData\Miniconda3\lib\site-packages\tiktoken\load.py", line 24, in read_file
    resp = requests.get(blobpath)
  File "C:\ProgramData\Miniconda3\lib\site-packages\requests\api.py", line 75, in get
    return request('get', url, params=params, **kwargs)
  File "C:\ProgramData\Miniconda3\lib\site-packages\requests\api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\ProgramData\Miniconda3\lib\site-packages\requests\sessions.py", line 529, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\ProgramData\Miniconda3\lib\site-packages\requests\sessions.py", line 645, in send
    r = adapter.send(request, **kwargs)
  File "C:\ProgramData\Miniconda3\lib\site-packages\requests\adapters.py", line 517, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /gpt-2/encodings/main/vocab.bpe (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1131)')))
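The traceback above shows tiktoken failing to download `vocab.bpe` through a TLS proxy, which is a network problem, not a GPU one. One possible workaround (a sketch: `TIKTOKEN_CACHE_DIR` is a documented tiktoken environment variable, but the SHA-1 cache-key naming is an internal detail of `tiktoken/load.py` visible in this traceback and could change between versions) is to pre-download the BPE files on a machine with working connectivity and point tiktoken at the local copies:

```python
import hashlib
import os
import urllib.request


def cache_key(url: str) -> str:
    # tiktoken's read_file_cached names cache entries after the SHA-1 of the URL
    return hashlib.sha1(url.encode()).hexdigest()


def prefill_tiktoken_cache(cache_dir: str, urls) -> None:
    """Download the given blobs into cache_dir under tiktoken's naming scheme."""
    os.makedirs(cache_dir, exist_ok=True)
    for url in urls:
        dest = os.path.join(cache_dir, cache_key(url))
        if not os.path.exists(dest):
            urllib.request.urlretrieve(url, dest)
    # Tell tiktoken to read from (and write to) this directory
    os.environ["TIKTOKEN_CACHE_DIR"] = cache_dir


# Example usage (run where the download succeeds, before importing tiktoken):
# prefill_tiktoken_cache(
#     os.path.expanduser("~/tiktoken_cache"),
#     [
#         "https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/vocab.bpe",
#         "https://openaipublic.blob.core.windows.net/gpt-2/encodings/main/encoder.json",
#     ],
# )
```

Alternatively, fixing or bypassing the proxy (unsetting HTTPS_PROXY for this process) would address the same failure.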


@bmw515i
Author

bmw515i commented Sep 1, 2023 via email

@MillerDong

Same here, but I solved it. Check whether you installed the GPU version of PyTorch.

Thanks!
I ran into the same problem, but even after installing the GPU version of PyTorch, the program still ran on the CPU. I finally found this discussion:
you have to uninstall the CPU version of PyTorch first, and then install the GPU version.

pip uninstall torch
pip cache purge
pip install torch -f https://download.pytorch.org/whl/torch_stable.html

Hope this helps anyone who comes along later.
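As a side note, newer PyTorch install instructions use a CUDA-specific wheel index instead of `torch_stable.html`; a sketch of the same fix in that form, assuming CUDA 11.8 (replace `cu118` with the index matching your driver, per the selector on pytorch.org):

```shell
# Remove the CPU-only wheel and any cached copy of it
pip uninstall -y torch
pip cache purge
# Install the CUDA 11.8 build from PyTorch's wheel index (assumption: cu118)
pip install torch --index-url https://download.pytorch.org/whl/cu118
```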
