Hello everybody.
We migrated our Trino K8S Helm deployment to a private cloud K8S cluster and ran into a really strange problem. We now use self-hosted S3 instead of on-prem HDFS for the Delta Lake connector.
Following these docs, https://trino.io/docs/current/object-storage/file-formats.html and https://trino.io/docs/current/object-storage/file-system-s3.html, we could not get the native S3 configuration to work for us.
This is our delta lake connector config:
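The original config was not captured here, so as a stand-in, a minimal sketch of a Delta Lake catalog file using the native S3 file system (property names per the file-system-s3 docs above; the metastore URI, endpoint, and credentials are placeholders, not our real values):

```properties
# Hypothetical catalog file, e.g. etc/catalog/delta.properties
connector.name=delta_lake
hive.metastore.uri=thrift://metastore:9083

# Native S3 file system pointed at a self-hosted S3 endpoint
fs.native-s3.enabled=true
s3.endpoint=https://s3.example.internal
s3.region=us-east-1
s3.path-style-access=true
s3.aws-access-key=<access-key>
s3.aws-secret-key=<secret-key>
```

Self-hosted S3 implementations typically require `s3.path-style-access=true`, since virtual-hosted-style bucket addressing usually relies on wildcard DNS that private deployments lack.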
After a few minutes, any CTAS query's parallelism and network throughput drop to zero, and the query then fails with this error:
io.trino.spi.TrinoException: Could not communicate with the remote task. The node may have crashed or be under too much load. This is probably a transient issue, so please retry your query in a few minutes.
Can anybody help with a suggestion or idea?