[Question] I deployed a HugeGraph server (single node); writes fail with: org.apache.hugegraph.exception.ServerException: The rest server is too busy to write
#2517
Closed
1 task done
z7658329 opened this issue
Apr 12, 2024
· 3 comments
I have confirmed and searched that there are no similar problems in the existing Issues and FAQ.
Environment
Server Version: 1.2.0 (Apache Release Version)
Backend: RocksDB, 1 node, HDD
OS: 16 CPUs, 32 GB RAM, Ubuntu 1.6.0
Data Size: 5 million vertices, 10 million edges
Your Question
I deployed a HugeGraph server (single node) with the RocksDB store. I want to benchmark my HugeGraph server, but writes fail with the error above.
my config:
backend=rocksdb
serializer=binary
store=hugegraph
raft.mode=false
raft.path=./raft-log
raft.safe_read=true
The error you're encountering, org.apache.hugegraph.exception.ServerException: The rest server is too busy to write, suggests that the HugeGraph server is under high load, which impairs its ability to process write requests. Here are targeted suggestions to address this issue:
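Separately, in some HugeGraph releases this exact message comes from the REST layer's write rate limiter. If your deployment sets a write rate limit, raising it may be the most direct fix. A sketch (the key name is an assumption based on the server options; verify it against the rest-server.properties shipped with your version):

```properties
# rest-server.properties -- rate-limit sketch
# (key name assumed; check ServerOptions in your HugeGraph release)
# Maximum write requests per second before the server rejects writes with
# "The rest server is too busy to write"; raise it to match your benchmark load.
rate_limit.write=1000
```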
Optimize HugeGraph and RocksDB Configurations: Adjust RocksDB settings like max_background_jobs, write_buffer_size, and max_open_files to better match your hardware capabilities, potentially improving performance under load.
Increase Iterator Pool Capacity: The error might be related to the capacity of the iterator pool. Increasing the ITERATOR_POOL_CAPACITY in RocksDBIteratorPool.java could help manage load more effectively by allowing more iterators to be pooled concurrently.
Enable Iterator Reusing: In ReusedRocksIterator.java, changing REUSING_ENABLED from false to true can reduce the overhead associated with iterator creation and destruction, which is beneficial under high load scenarios.
Benchmark Code Optimization: Review your benchmarking approach to ensure it doesn't overwhelm the server. Adjusting the rate of requests or the level of concurrency might prevent the server from becoming too busy to write.
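The throttling idea above can be sketched in plain Java. The class and method names below are illustrative helpers, not part of the HugeGraph API: the list is split into fixed-size batches and a pause between batches bounds the request rate.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative helper: submit items in fixed-size batches with a pause
// between batches so the REST server is not flooded with write requests.
public class ThrottledBatcher {

    // Split a list into consecutive batches of at most batchSize elements.
    public static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }

    // Feed each batch to the writer, sleeping pauseMillis between batches
    // to cap the effective request rate.
    public static <T> void submit(List<T> items, int batchSize, long pauseMillis,
                                  Consumer<List<T>> writer) throws InterruptedException {
        for (List<T> batch : partition(items, batchSize)) {
            writer.accept(batch);
            Thread.sleep(pauseMillis);
        }
    }
}
```

With hugegraph-client, this could be invoked as `submit(vertices, 500, 50, b -> hugeClient.graph().addVertices(b))`; the batch size of 500 matches the reporter's bench code.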
Server Performance Monitoring: Utilize monitoring tools to identify performance bottlenecks. This can guide further optimizations, whether in configuration or hardware adjustments.
Update HugeGraph and RocksDB: Ensure you're using the latest versions as they may include relevant performance improvements or bug fixes.
For specific RocksDB configurations:
max_background_jobs: Consider setting this to a higher value to allow more concurrent background processes.
write_buffer_size: Increasing this can help manage write workload spikes more effectively.
max_open_files: Setting this to -1 or a high number can reduce file operation overhead, aiding in performance under heavy load.
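As a sketch, these settings could go into hugegraph.properties. The option keys are assumptions based on HugeGraph's RocksDBOptions; verify the exact names and defaults in your version before applying them:

```properties
# hugegraph.properties -- RocksDB tuning sketch
# (option keys assumed from RocksDBOptions; verify exact names in your version)
# Allow more concurrent background flush/compaction jobs
rocksdb.max_background_jobs=8
# 128 MB memtable to absorb write bursts
rocksdb.write_buffer_size=134217728
# Keep all SST files open to avoid reopen overhead (-1 = unlimited)
rocksdb.max_open_files=-1
```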
These adjustments are starting points. Monitoring and iterative tuning based on actual performance are crucial for achieving optimal configurations.
Problem Type
None
bench code:
hugeClient.graph().addVertices(batch)  // submitted in batches of 500
Vertex/Edge example
No response
Schema [VertexLabel, EdgeLabel, IndexLabel]
No response