Replies: 1 comment 4 replies
- Creating 1000+ collections is not recommended; it creates a huge overhead.
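For the single-collection approach, the usual fix for slow `groupid` filtering is a keyword payload index, so filtered searches don't scan raw payloads. A sketch of the two Qdrant REST request bodies involved (the collection name, group value, and query vector here are illustrative placeholders, not from the thread):

```python
import json

# Body for PUT /collections/<name>/index — creates a keyword payload
# index on "groupid" so the filter below can use the index.
index_body = {"field_name": "groupid", "field_schema": "keyword"}

# Body for POST /collections/<name>/points/search — restricts the search
# to points whose groupid matches. Vector and value are dummies.
search_body = {
    "vector": [0.0] * 768,  # 768-dim, matching the collection config
    "limit": 10,
    "filter": {"must": [{"key": "groupid", "match": {"value": "405"}}]},
}

print(json.dumps(index_body))
```

With the index in place, one collection with `groupid` in the payload is the standard multitenancy pattern, avoiding the per-collection overhead entirely.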
- My cluster has 3 nodes, each with 31 GB RAM and 4 CPUs. I have 0.15 billion (150 million) vectors of 768 dimensions. I have written most of them, but one node exited with code 139 and exits again every time it restarts. My data is grouped by groupid. At first I wrote all of it into one collection with groupid in the payload, but search was very slow and hit the 30 s timeout. So I created one collection per groupid, named like collectionName_groupid; there are nearly 1,500 collections. My collection config is as below:
"params": { "vectors": { "size": 768, "distance": "Cosine", "on_disk": true }, "shard_number": 6, "replication_factor": 1, "write_consistency_factor": 1, "on_disk_payload": true }, "hnsw_config": { "m": 16, "ef_construct": 100, "full_scan_threshold": 10000, "max_indexing_threads": 2, "on_disk": true }, "optimizer_config": { "deleted_threshold": 0.2, "vacuum_min_vector_number": 1000, "default_segment_number": 0, "max_segment_size": null, "memmap_threshold": 10000, "indexing_threshold": 10000, "flush_interval_sec": 5, "max_optimization_threads": 1 }, "wal_config": { "wal_capacity_mb": 64, "wal_segments_ahead": 0 }, "quantization_config": null }, "payload_schema": {}
My docker startup command is:

```shell
docker run -itd --name qdrant03 --oom-kill-disable --memory=24g --cpus=3.5 --restart=always \
  -p 6333:6333 -p 6334:6334 -p 6335:6335 \
  -v $(pwd)/path/to/data:/qdrant/storage \
  -v $(pwd)/path/to/custom_config.yaml:/qdrant/config/production.yaml \
  qdrant/qdrant ./qdrant --bootstrap 'http://10.0.0.49:6335'
```
docker logs:
```
Recovering collection xd_vector_service_xueqiu_1539 [00:00:00] ██ 12/12 (eta:0s)
[2023-06-11T07:45:28.280Z INFO storage::content_manager::toc] Loading collection: xd_vector_service_xueqiu_405
Recovering collection xd_vector_service_xueqiu_405 [00:00:00] █ 108/108 (eta:0s)
terminate called without an active exception
terminate called recursively
```
docker inspect:
"State": { "Status": "exited", "Running": false, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 0, "ExitCode": 139, "Error": "", "StartedAt": "2023-06-11T07:16:04.029753941Z", "FinishedAt": "2023-06-11T07:45:33.660368488Z" },
It seems buffer/cache grows to about 20 GiB, maybe because of the default segment settings. So is the memory simply not enough?
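A quick back-of-envelope check of the raw vector footprint, using the numbers from the question (float32 storage assumed, indexes and payloads not counted):

```python
# All figures below come from the question; float32 is an assumption.
vectors = 150_000_000        # "0.15 billion" vectors
dims = 768                   # vector dimensions
bytes_per_float = 4          # float32

raw_bytes = vectors * dims * bytes_per_float
raw_gb = raw_bytes / 1e9     # ≈ 460.8 GB of raw vector data

nodes = 3
per_node_gb = raw_gb / nodes # ≈ 153.6 GB per node, vs 31 GB RAM each
print(raw_gb, per_node_gb)
```

So even with `on_disk: true`, the data is roughly 5× each node's RAM, which makes heavy page-cache pressure during recovery plausible.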
So should I cut down the number of my collections by hash routing?
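The hash-routing idea could be sketched like this (the bucket count and naming scheme are made up for illustration; each bucket collection would still keep `groupid` in the payload for filtering):

```python
import hashlib

NUM_BUCKETS = 16  # assumption: a small fixed pool instead of ~1,500 collections

def collection_for_group(group_id: str) -> str:
    # Stable hash, so the same groupid always maps to the same collection,
    # across restarts and across client processes.
    digest = hashlib.md5(group_id.encode("utf-8")).hexdigest()
    return f"vectors_{int(digest, 16) % NUM_BUCKETS}"

# At search time: route to one bucket collection, then filter by groupid
# inside it, so each query touches exactly one collection.
print(collection_for_group("xueqiu_1539"))
```

Note that Python's built-in `hash()` is salted per process, so a cryptographic or otherwise deterministic hash is needed for stable routing.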