
Error while dialing /dns/telemetry.polkadot.io/tcp/443/x-parity-wss/%2Fsubmit: Custom { kind: Other, error: Timeout } #471

Open
imstar15 opened this issue May 26, 2022 · 10 comments

Comments

@imstar15

My running node is getting the following error:

2022-05-19 14:13:16 [Parachain] ❌ Error while dialing /dns/telemetry.polkadot.io/tcp/443/x-parity-wss/%2Fsubmit%2F: Custom { kind: Other, error: Timeout }

2022-05-19 14:13:16 failed to associate send_message response to the sender

Environment:
Node Provider: onfinality
Cloud Provider: AWS
Region: Tokyo
Node Type: Fullnode
Syncing status: Synced
Launch params: --telemetry-url='wss://telemetry.polkadot.io/submit 0'

I found this article, which suggests the problem is related to the telemetry server's rate limit:
https://forum.subspace.network/t/error-while-dialing-dns-telemetry-polkadot-io-tcp-443-x-parity-wss-submit-custom-kind-other-error-timeout/45

How can I solve this issue, please?

Thanks!
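For reference, the multiaddr in the error message is just a percent-encoded WebSocket endpoint. A small sketch (my own helper, not part of Substrate) that decodes it into a plain URL, which can be handy when testing reachability of the telemetry endpoint directly:

```python
from urllib.parse import unquote

def multiaddr_to_ws_url(multiaddr: str) -> str:
    """Decode a telemetry multiaddr like
    /dns/telemetry.polkadot.io/tcp/443/x-parity-wss/%2Fsubmit%2F
    into a plain WebSocket URL."""
    parts = multiaddr.strip("/").split("/")
    # Expected layout: dns/<host>/tcp/<port>/x-parity-wss/<percent-encoded path>
    fields = dict(zip(parts[0::2], parts[1::2]))
    host = fields["dns"]
    port = fields["tcp"]
    path = unquote(fields["x-parity-wss"])  # "%2Fsubmit%2F" -> "/submit/"
    return f"wss://{host}:{port}{path}"

print(multiaddr_to_ws_url(
    "/dns/telemetry.polkadot.io/tcp/443/x-parity-wss/%2Fsubmit%2F"))
# -> wss://telemetry.polkadot.io:443/submit/
```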

@imstar15
Author

I just looked at telemetry and the node is showing on it.

Why does it end up in such an unstable state?

@jsdw
Collaborator

jsdw commented May 26, 2022

Do you get this error once (sometimes) when first starting up your node, or consistently?

I've seen this occasionally in the past, and I've no idea why it happens, but when the error does appear the node always connects successfully on the next attempt. I've tended to put it down to something Substrate-related, since I've never had issues connecting to telemetry in tests, but I'm not sure.

It is true that telemetry will not show more than a certain number (1000) of nodes for most chains, and so you would be limited in that case.

It would be nice to know why this error occasionally pops up in Substrate/Polkadot, but it doesn't actually lead to any issues if it's just the occasional on-first-start error I've seen.
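The reconnect-on-next-attempt behaviour described above can be mimicked client-side with a simple retry loop. A minimal sketch (the `dial` callable and the delay schedule are my own assumptions, not Substrate's actual telemetry logic):

```python
import time

def backoff_delays(max_attempts: int = 5, base: float = 1.0, cap: float = 30.0):
    """Exponential backoff schedule: base, 2*base, 4*base, ... capped at `cap`."""
    return [min(cap, base * 2 ** n) for n in range(max_attempts)]

def dial_with_retry(dial, max_attempts: int = 5, base: float = 1.0):
    """Call the (hypothetical) `dial` callable until it succeeds or
    attempts run out, sleeping between failed attempts."""
    last_err = None
    for delay in backoff_delays(max_attempts, base):
        try:
            return dial()
        except TimeoutError as err:
            last_err = err
            time.sleep(delay)
    raise last_err
```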

@irsal

irsal commented May 26, 2022

Hey @jsdw - we've only seen this with this service provider, and they relayed that it might be a rate-limiter issue with telemetry.

We are not hitting the 1000-node limit.

@imstar15
Author

These nodes still have timeout errors from time to time.

Please help.

@jsdw
Collaborator

jsdw commented May 27, 2022

@irsal what do you mean by "this service provider"? Telemetry doesn't limit connections beyond the 1000-nodes-per-non-whitelisted-chain number (though if the bandwidth of a connection is unexpectedly high it will also kill the connection, iirc, because that indicates some issue (unintentional or otherwise) with the connection).

@imstar15 do the nodes disappear and reappear in telemetry or remain there throughout? How often do the timeout errors occur? I'll need a bunch more information I think to really be able to help. Would you be able to provide some steps to reproduce what you are seeing?

@imstar15
Author

@jsdw. Thanks!
The nodes disappear from telemetry for long stretches.
I don't currently know the steps to reproduce it; I tried starting a new node and it communicated with telemetry smoothly.
The nodes that have been running for a long time continue to hit timeout errors.

If I have any other news, I will let you know.

@irsal

irsal commented May 28, 2022

We'll close this out unless we have any additional context to provide.

@jsdw
Collaborator

jsdw commented Jul 21, 2022

Thanks! Will close this unless any further information comes in. I'm not sure whether it's related to telemetry or Substrate at present (or just network issues in general). If you guys manage to find a way to reproduce it, please let me know!

@jsdw jsdw closed this as completed Jul 21, 2022
@ltfschoen
Contributor

ltfschoen commented May 11, 2023

I suggest re-opening this. I've created a reproducible example of this error: Error while dialing /dns/telemetry.polkadot.io/tcp/443/x-parity-wss/%2Fsubmit%2F: Custom { kind: Other, error: Timeout }. It happens every time I run the command in the "Run Cargo Contract Node" section of the README of https://github.com/ltfschoen/InkTest at commit d7c8a98, where I run the following. It should be reproducible, since it runs in a Docker container that is based on Parity Docker images and pins specific versions. I've configured network_mode: host in the docker-compose.yml file, so all ports in the Docker container should be exposed as on the host machine:

git clone https://github.com/ltfschoen/InkTest
git fetch origin d7c8a98:d7c8a98
git checkout d7c8a98
  • Install and run Docker

  • Generate .env file from sample file

  • Check versions used in Dockerfile:

    • Rust version
    • Cargo Contract
    • Substrate Contracts Node
  • Run from a Docker container and follow the terminal log instructions.

touch .env && cp .env.example .env
./docker.sh
  • Enter the Docker container when it's running with:
docker exec -it ink /bin/bash
  • Run Cargo Contract Node
substrate-contracts-node \
	--dev \
	--alice \
	--name "ink-test" \
	--base-path "/tmp/ink" \
	--force-authoring \
	--port 30333 \
	--rpc-port 9933 \
	--ws-port 9944 \
	--ws-external \
	--rpc-methods Unsafe \
	--rpc-cors all \
	--telemetry-url "wss://telemetry.polkadot.io/submit/ 0" \
	-lsync=debug
  • See the warning/error appear near the end of the logs below
2023-05-11 08:27:18.597  INFO main sc_cli::runner: Substrate Contracts Node    
2023-05-11 08:27:18.597  INFO main sc_cli::runner: ✌️  version 0.23.0-87a3d76c880    
2023-05-11 08:27:18.597  INFO main sc_cli::runner: ❤️  by Parity Technologies <admin@parity.io>, 2021-2023    
2023-05-11 08:27:18.597  INFO main sc_cli::runner: 📋 Chain specification: Development    
2023-05-11 08:27:18.597  INFO main sc_cli::runner: 🏷  Node name: ink-test    
2023-05-11 08:27:18.597  INFO main sc_cli::runner: 👤 Role: AUTHORITY    
2023-05-11 08:27:18.597  INFO main sc_cli::runner: 💾 Database: ParityDb at /tmp/ink/chains/dev/paritydb/full  
2023-05-11 08:27:18.597  INFO main sc_cli::runner: ⛓  Native runtime: substrate-contracts-node-100 (substrate-contracts-node-1.tx1.au1)    
2023-05-11 08:27:33.937  INFO main sc_service::client::client: 🔨 Initializing Genesis block/state (state: 0x0faf…14ef, header-hash: 0x18c5…59af)    
2023-05-11 08:27:34.571  INFO main sub-libp2p: 🏷  Local node identity is: 12D3KooWFikueK53yHtnz7KtoEFUo4Jvd4hiUZwhFRut5Eyje141    
2023-05-11 08:27:38.638 DEBUG tokio-runtime-worker sync: Propagating transactions    
2023-05-11 08:27:41.651 DEBUG tokio-runtime-worker sync: Propagating transactions    
2023-05-11 08:27:44.566 DEBUG tokio-runtime-worker sync: Propagating transactions    
2023-05-11 08:27:47.467 DEBUG tokio-runtime-worker sync: Propagating transactions    
2023-05-11 08:27:47.749  INFO                 main sc_sysinfo: 💻 Operating system: linux    
2023-05-11 08:27:47.749  INFO                 main sc_sysinfo: 💻 CPU architecture: x86_64    
2023-05-11 08:27:47.749  INFO                 main sc_sysinfo: 💻 Target environment: gnu    
2023-05-11 08:27:47.749  INFO                 main sc_sysinfo: 💻 CPU: Intel(R) Core(TM) i5-4258U CPU @ 2.40GHz    
2023-05-11 08:27:47.749  INFO                 main sc_sysinfo: 💻 CPU cores: 2    
2023-05-11 08:27:47.749  INFO                 main sc_sysinfo: 💻 Memory: 3933MB    
2023-05-11 08:27:47.749  INFO                 main sc_sysinfo: 💻 Kernel: 5.15.49-linuxkit    
2023-05-11 08:27:47.749  INFO                 main sc_sysinfo: 💻 Linux distribution: Debian GNU/Linux 11 (bullseye)    
2023-05-11 08:27:47.749  INFO                 main sc_sysinfo: 💻 Virtual machine: yes    
2023-05-11 08:27:47.751  INFO                 main sc_service::builder: 📦 Highest known block at #0    
2023-05-11 08:27:47.936  INFO tokio-runtime-worker substrate_prometheus_endpoint: 〽️ Prometheus exporter started at 127.0.0.1:9615    
2023-05-11 08:27:48.032  INFO                 main sc_rpc_server: Running JSON-RPC HTTP server: addr=127.0.0.1:9933, allowed origins=None    
2023-05-11 08:27:48.037  INFO                 main sc_rpc_server: Running JSON-RPC WS server: addr=0.0.0.0:9944, allowed origins=None    
2023-05-11 08:27:50.396 DEBUG tokio-runtime-worker sync: Propagating transactions    
2023-05-11 08:27:53.192  INFO tokio-runtime-worker substrate: 💤 Idle (0 peers), best: #0 (0x18c5…59af), finalized #0 (0x18c5…59af), ⬇ 0 ⬆ 0    
2023-05-11 08:27:53.296 DEBUG tokio-runtime-worker sync: Propagating transactions    
2023-05-11 08:27:56.197 DEBUG tokio-runtime-worker sync: Propagating transactions    
2023-05-11 08:27:58.195  INFO tokio-runtime-worker substrate: 💤 Idle (0 peers), best: #0 (0x18c5…59af), finalized #0 (0x18c5…59af), ⬇ 0 ⬆ 0    
2023-05-11 08:27:59.136 DEBUG tokio-runtime-worker sync: Propagating transactions    
2023-05-11 08:28:02.036 DEBUG tokio-runtime-worker sync: Propagating transactions    
2023-05-11 08:28:03.197  INFO tokio-runtime-worker substrate: 💤 Idle (0 peers), best: #0 (0x18c5…59af), finalized #0 (0x18c5…59af), ⬇ 0 ⬆ 0    
2023-05-11 08:28:04.938 DEBUG tokio-runtime-worker sync: Propagating transactions    
2023-05-11 08:28:07.840 DEBUG tokio-runtime-worker sync: Propagating transactions    
2023-05-11 08:28:08.199  INFO tokio-runtime-worker substrate: 💤 Idle (0 peers), best: #0 (0x18c5…59af), finalized #0 (0x18c5…59af), ⬇ 0 ⬆ 0    
2023-05-11 08:28:10.745 DEBUG tokio-runtime-worker sync: Propagating transactions    
2023-05-11 08:28:13.200  INFO tokio-runtime-worker substrate: 💤 Idle (0 peers), best: #0 (0x18c5…59af), finalized #0 (0x18c5…59af), ⬇ 0 ⬆ 0    
2023-05-11 08:28:13.307  WARN tokio-runtime-worker telemetry: ❌ Error while dialing /dns/telemetry.polkadot.io/tcp/443/x-parity-wss/%2Fsubmit%2F: Custom { kind: Other, error: Timeout }  
2023-05-11 08:28:13.653 DEBUG tokio-runtime-worker sync: Propagating transactions 
...  

@jsdw
Collaborator

jsdw commented Apr 5, 2024

I'll re-open since this seems to be an ongoing thing, though we don't have much bandwidth to look into it right now I'm afraid!

@jsdw jsdw reopened this Apr 5, 2024