
If you have any questions, feel free to open an issue or join our Matrix room: https://matrix.to/#/#public:matrix.qqs.tw

## Glossary
*: required

| Parameter | Description |
| --- | --- |
| `homeserver`* | Your Matrix homeserver address |
| `user_id`* | Your Matrix account ID, e.g. `@xxx:matrix.org` |
| `password` | Account password |
| `device_id`* | Your device ID (aka session ID); any value works when logging in via password |
| `access_token` | Account access token |
| `room_id` | If not set, the bot works in all rooms it has joined |
| `import_keys_path` | Location of the E2E room keys file |
| `import_keys_password` | Password of the E2E room keys |
| `model_size`* | Size of the Whisper model to use: `tiny`, `tiny.en`, `base`, `base.en`, `small`, `small.en`, `medium`, `medium.en`, `large-v1`, `large-v2` or `large-v3` (see the sketch after this table) |
| `device` | Device to use for computation: `cpu`, `cuda` or `auto`; default is `cpu` |
| `compute_type` | Quantization type to use for computation, see https://opennmt.net/CTranslate2/quantization.html |
| `cpu_threads` | Number of threads to use when running on CPU (default 4) |
| `num_workers` | When `transcribe()` is called from multiple Python threads, having multiple workers enables true parallelism (concurrent calls to the model run in parallel); this can improve overall throughput at the cost of increased memory usage |
| `download_root` | Directory where the model is saved; default is `./models` |
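
The Whisper-related parameters above map directly onto the arguments of faster-whisper's `WhisperModel` constructor. As a rough sketch of how they fit together (values are illustrative, not the bot's actual code):

```python
from faster_whisper import WhisperModel

# Illustrative values; in the bot these come from your configuration.
model = WhisperModel(
    "base",                    # model_size
    device="cpu",              # "cpu", "cuda" or "auto"
    compute_type="int8",       # see the CTranslate2 quantization docs
    cpu_threads=4,             # threads used when running on CPU
    num_workers=1,             # >1 allows concurrent transcribe() calls
    download_root="./models",  # where model files are stored
)

segments, info = model.transcribe("voice_message.ogg")
print("".join(segment.text for segment in segments))
```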

Use either `access_token` or `user_id` + `password` to log in.
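
For example, a minimal password-based configuration could look like this (assuming a JSON config file; the field names follow the glossary above, and all values are placeholders):

```json
{
  "homeserver": "https://matrix.example.org",
  "user_id": "@bot:example.org",
  "password": "your_password",
  "device_id": "STTBOT01",
  "model_size": "base"
}
```

To use token-based login instead, drop `password` and set `access_token`.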


Want to chat with ChatGPT? Try our matrix_chatgpt_bot
