byconity server does not listen to ports #1652

Open · ziydai opened this issue Apr 26, 2024 · 1 comment · Labels: bug (Something isn't working)
ziydai commented Apr 26, 2024

I set up ByConity 0.3.2 on Kubernetes. The components (FDB, TSO, resource manager, server, ...) all start successfully, but the server does not listen on the configured ports, including the TCP port and the HTTP port. What is the problem?

Below is the server config file:

```yaml
logger:
  level: trace
  log: /var/byconity/out.log
  errorlog: /var/byconity/err.log
  testlog: /var/byconity/test.log
  size: 1000M
  count: 10
  console: true
http_port: 21557
rpc_port: 30605
tcp_port: 52145
ha_tcp_port: 26247
exchange_port: 47447
exchange_status_port: 60611
interserver_http_port: 30491
listen_host: "0.0.0.0"
cnch_type: server
max_connections: 4096
keep_alive_timeout: 3
max_concurrent_queries: 100
uncompressed_cache_size: 8589934592
mark_cache_size: 5368709120
path: /var/byconity/
tmp_path: /var/byconity/tmp_data/
users_config: /config/users.yml
default_profile: default
default_database: default
timezone: Asia/Shanghai
mlock_executable: false
enable_tenant_systemdb: false
macros:
  "-incl": macros
  "-optional": true
builtin_dictionaries_reload_interval: 3600
max_session_timeout: 3600
default_session_timeout: 60
dictionaries_config: "*_dictionary.xml"
format_schema_path: /var/byconity/format_schemas/

service_discovery:
  mode: local
  cluster: default
  disable_cache: false
  cache_timeout: 5
  tso:
    psm: 21.0.210.72
    node:
      host: 21.0.210.72
      hostname: 21.0.210.72
      ports:
        port:
          - name: PORT0
            value: 18845
          - name: PORT2
            value: 9181
  resource_manager:
    psm: 21.0.210.91
    node:
      host: 21.0.210.91
      hostname: 21.0.210.91
      ports:
        port:
          name: PORT0
          value: 28989
  daemon_manager:
    psm: 11.141.200.198
    node:
      host: 11.141.200.198
      hostname: 11.141.200.198
      ports:
        port:
          name: PORT0
          value: 17553
  vm:
    node:
      - vw_name: vw_default
        host: 21.0.211.9
        hostname: 21.0.211.9
        ports:
          port:
            - name: PORT0
              value: 52145
            - name: PORT1
              value: 30605
            - name: PORT2
              value: 21557
            - name: PORT5
              value: 47447
            - name: PORT6
              value: 60611
      - vw_name: vw_default
        host: 21.0.211.12
        hostname: 21.0.211.12
        ports:
          port:
            - name: PORT0
              value: 52145
            - name: PORT1
              value: 30605
            - name: PORT2
              value: 21557
            - name: PORT5
              value: 47447
            - name: PORT6
              value: 60611
      - vw_name: vw_default
        host: 11.141.201.115
        hostname: 11.141.201.115
        ports:
          port:
            - name: PORT0
              value: 52145
            - name: PORT1
              value: 30605
            - name: PORT2
              value: 21557
            - name: PORT5
              value: 47447
            - name: PORT6
              value: 60611
      - vw_name: vw_write
        host: 21.0.211.7
        hostname: 21.0.211.7
        ports:
          port:
            - name: PORT0
              value: 52145
            - name: PORT1
              value: 30605
            - name: PORT2
              value: 21557
            - name: PORT5
              value: 47447
            - name: PORT6
              value: 60611

catalog:
  name_space: default
  perQuery: 1
catalog_service:
  type: fdb
  fdb:
    cluster_file: /config/fdb.cluster
storage_configuration:
  disks:
    local_disk:
      path: /var/byconity/data/
      type: local
  policies:
    default:
      volumes:
        local:
          default: local_disk
          disk: local_disk
cnch_kafka_log:
  database: cnch_system
  table: cnch_kafka_log
  flush_max_row_count: 10000
  flush_interval_milliseconds: 7500
query_log:
  database: system
  table: query_log
  flush_interval_milliseconds: 10000
  partition_by: event_date
server_part_log:
  database: system
  table: server_part_log
  flush_interval_milliseconds: 10000
  partition_by: event_date
part_merge_log:
  database: system
  table: part_merge_log
  flush_interval_milliseconds: 10000
  partition_by: event_date
part_allocation_algorithm: 1
consistent_hash_ring:
  num_replicas: 16
  num_probes: 21
  load_factor: 1.3
cnch_config: "/config/cnch-config.yml"
restrict_tenanted_users_to_system_tables: false
restrict_tenanted_users_to_whitelist_settings: false
additional_services:
  FullTextSearch: true
```

Below is some of the server log:

```
2024-04-26 10:24:46 2024.04.26 10:24:46.682710 [ 42 ] {} void DB::WorkerStatusManager::heartbeat(): Code: 7114, e.displayText() = DB::Exception: The leader from election result not work well SQLSTATE: HY000, Stack trace (when copying this message, always include the lines below):
2024-04-26 10:24:46
2024-04-26 10:24:46 0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, int) @ 0x21e5d672 in /opt/byconity/bin/clickhouse
2024-04-26 10:24:46 1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&, int, bool) @ 0xea74b00 in /opt/byconity/bin/clickhouse
2024-04-26 10:24:46 2. DB::ResourceManagement::ResourceManagerClient::getAllWorkers(std::__1::vector<DB::ResourceManagement::WorkerNodeResourceData, std::__1::allocatorDB::ResourceManagement::WorkerNodeResourceData >&) @ 0x1b012054 in /opt/byconity/bin/clickhouse
2024-04-26 10:24:46 3. DB::WorkerStatusManager::heartbeat() @ 0x1c2f808a in /opt/byconity/bin/clickhouse
2024-04-26 10:24:46 4. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x1b2afd5e in /opt/byconity/bin/clickhouse
2024-04-26 10:24:46 5. DB::BackgroundSchedulePool::threadFunction() @ 0x1b2b2107 in /opt/byconity/bin/clickhouse
2024-04-26 10:24:46 6. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*, std::__1::shared_ptrDB::CpuSet)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*, std::__1::shared_ptrDB::CpuSet)::$_1&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x1b2b2917 in /opt/byconity/bin/clickhouse
2024-04-26 10:24:46 7. ThreadPoolImplstd::__1::thread::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xeab4aa0 in /opt/byconity/bin/clickhouse
2024-04-26 10:24:46 8. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_deletestd::__1::__thread_struct >, void ThreadPoolImplstd::__1::thread::scheduleImpl(std::__1::function<void ()>, int, std::__1::optional)::'lambda0'()> >(void*) @ 0xeab8f1a in /opt/byconity/bin/clickhouse
2024-04-26 10:24:46 9. start_thread @ 0x7ea7 in /lib/x86_64-linux-gnu/libpthread-2.31.so
2024-04-26 10:24:46 10. clone @ 0xfca2f in /lib/x86_64-linux-gnu/libc-2.31.so
2024-04-26 10:24:46 (version 21.8.7.1)
2024-04-26 10:24:46 2024.04.26 10:24:46.682747 [ 42 ] {} worker_status_manager: Execution took 5001 ms.
2024-04-26 10:24:50 2024.04.26 10:24:50.499152 [ 1 ] {} FDBIterator: Transaction timeout or too old, create new transaction
2024-04-26 10:24:55 2024.04.26 10:24:55.499460 [ 1 ] {} FDBIterator: Transaction timeout or too old, create new transaction
2024-04-26 10:24:56 2024.04.26 10:24:56.682851 [ 42 ] {} WorkerStatusManager: update worker status from rm heartbeat
```
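
For reference, the listener ports the server is expected to bind can be read straight out of the config above. A minimal sketch, assuming the YAML is saved locally as cnch-server.yml (a hypothetical file name) and PyYAML is installed:

```python
# Minimal sketch: list the listener ports defined in the server config above.
# "cnch-server.yml" is a hypothetical file name for the pasted config;
# requires PyYAML (pip install pyyaml).
import yaml

with open("cnch-server.yml") as f:
    config = yaml.safe_load(f)

listen_host = config.get("listen_host", "0.0.0.0")

# Top-level keys ending in "_port" are the listeners the server should bind.
for key, value in sorted(config.items()):
    if isinstance(key, str) and key.endswith("_port"):
        print(f"{key:25s} -> {listen_host}:{value}")
```

Comparing this list against what is actually bound inside the server pod helps show whether the config is being picked up at all.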

ziydai added the bug (Something isn't working) label on Apr 26, 2024
GallifreyY (Collaborator) commented

Hi @ziydai, if you followed this guide to deploy ByConity, then this should be the default config you are using:
https://github.com/ByConity/byconity-deploy/blob/de6b1cddc2b2c7e2f07e7f79bf58cff69309b653/chart/byconity/values.yaml#L79
You can check the status of those ports. And if you changed the ports manually, make sure they are not already occupied by another process.
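
For example, one quick way to probe those ports from another pod or host is a plain TCP connect. A minimal Python sketch, where byconity-server is a placeholder for the server's actual service address and the port numbers are the ones from the config posted in this issue:

```python
# Minimal sketch: probe the configured server ports with a plain TCP connect.
# "byconity-server" is a placeholder host; the port numbers are taken from
# the config posted in this issue.
import socket

SERVER_HOST = "byconity-server"  # placeholder: replace with the server pod/service address
PORTS = {
    "tcp_port": 52145,
    "http_port": 21557,
    "rpc_port": 30605,
    "exchange_port": 47447,
    "exchange_status_port": 60611,
}

for name, port in PORTS.items():
    try:
        with socket.create_connection((SERVER_HOST, port), timeout=2):
            print(f"{name:22s} {port}: open")
    except OSError as exc:
        print(f"{name:22s} {port}: not reachable ({exc})")
```

If every port is refused or times out, the server likely never bound them; a mix of open and closed ports would instead point at the config being only partially applied.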
