
put file failed behind a proxy #7939

Open
markqiu opened this issue Jul 20, 2022 · 9 comments

markqiu commented Jul 20, 2022

What happened?:
pachctl put file images_clever-stegodon@master:birthday-cake.jpg -f https://i.imgur.com/FOO9q43.jpg
Get "https://i.imgur.com/FOO9q43.jpg": dial tcp 103.230.123.190:443: i/o timeout
What you expected to happen?:
I have set the HTTP_PROXY and HTTPS_PROXY environment variables correctly.
How to reproduce it (as minimally and precisely as possible)?:
Rerun the command.
Anything else we need to know?:
How to set a proxy for pachctl?
Environment?:

  • Kubernetes version (use kubectl version):
  • Pachyderm CLI and pachd server version (use pachctl version):
  • Cloud provider (e.g. aws, azure, gke) or local deployment (e.g. minikube vs dockerized k8s):
  • If you deployed with helm, the values you used (helm get values pachyderm):
  • OS (e.g. from /etc/os-release):
  • Others:
markqiu added the bug label Jul 20, 2022
@jrockway (Member)

Not 100% sure what your setup is, but the GET is actually done by the pachd server, so these environment variables need to be set on the server. The helm value global.proxy defines the proxy to use. (There is also noProxy that sets the $NO_PROXY environment variable.)
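Roughly, in helm-values form (the proxy address and noProxy entries below are placeholders, not specific to your cluster):

global:
  # HTTP/S proxy used by console, pachd, and the enterprise server for outbound requests
  proxy: "http://proxy.example.com:3128"
  # Comma-separated destinations that bypass the proxy; sets the $NO_PROXY environment variable
  noProxy: "localhost,127.0.0.1,10.0.0.0/8"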


markqiu commented Jul 21, 2022

Thank you for your response. I tried that; the environment variables are set now, but the pod is not ready.
Error logs:

internal/poll.(*pollDesc).wait(0xc00131c200, 0xc001214000, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00131c200, {0xc001214000, 0x1000, 0x1000})
	/usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc00131c200, {0xc001214000, 0x0, 0x0})
	/usr/local/go/src/net/fd_posix.go:56 +0x29
net.(*conn).Read(0xc000a5e060, {0xc001214000, 0xc001034680, 0x4})
	/usr/local/go/src/net/net.go:183 +0x45
bufio.(*Reader).Read(0xc000a58480, {0xc00131e2e0, 0x5, 0x3})
	/usr/local/go/src/bufio/bufio.go:227 +0x1b4
io.ReadAtLeast({0x3447560, 0xc000a58480}, {0xc00131e2e0, 0x5, 0x200}, 0x5)
	/usr/local/go/src/io/io.go:328 +0x9a
io.ReadFull(...)
	/usr/local/go/src/io/io.go:347
github.com/lib/pq.(*conn).recvMessage(0xc00131e2c0, 0xc000e16750)
	/home/circleci/go/pkg/mod/github.com/lib/pq@v1.10.2/conn.go:983 +0xca
github.com/lib/pq.(*ListenerConn).listenerConnLoop(0xc000744080)
	/home/circleci/go/pkg/mod/github.com/lib/pq@v1.10.2/notify.go:196 +0xab
github.com/lib/pq.(*ListenerConn).listenerConnMain(0xc000744080)
	/home/circleci/go/pkg/mod/github.com/lib/pq@v1.10.2/notify.go:249 +0x25
created by github.com/lib/pq.newDialListenerConn
	/home/circleci/go/pkg/mod/github.com/lib/pq@v1.10.2/notify.go:138 +0x127
goroutine 339 [select]:
github.com/pachyderm/pachyderm/v2/src/internal/collection.(*postgresWatcher).forwardNotifications(0xc0009a23c0, {0x34b3310, 0xc000a22680})
	src/internal/collection/postgres_listener.go:114 +0xff
github.com/pachyderm/pachyderm/v2/src/internal/collection.(*postgresReadOnlyCollection).watchRoutine(0xc0009f8540, 0xc0009a23c0, {0x0, 0x0, 0x0, 0x0}, 0x0)
	src/internal/collection/postgres_collection.go:584 +0x245
created by github.com/pachyderm/pachyderm/v2/src/internal/collection.(*postgresReadOnlyCollection).watchOne
	src/internal/collection/postgres_collection.go:636 +0x22f
goroutine 341 [select]:
github.com/pachyderm/pachyderm/v2/src/internal/collection.(*postgresWatcher).forwardNotifications(0xc000b3da40, {0x34b3310, 0xc000a22680})
	src/internal/collection/postgres_listener.go:114 +0xff
github.com/pachyderm/pachyderm/v2/src/internal/collection.(*postgresReadOnlyCollection).watchRoutine(0xc0009f8558, 0xc000b3da40, {0x0, 0x0, 0x0, 0x0}, 0x0)
	src/internal/collection/postgres_collection.go:584 +0x245
created by github.com/pachyderm/pachyderm/v2/src/internal/collection.(*postgresReadOnlyCollection).watchOne
	src/internal/collection/postgres_collection.go:636 +0x22f
error setting up External Pachd GRPC Server: error setting up PFS API GRPC Server: unable to write to object storage: 503 Service Unavailable


markqiu commented Jul 21, 2022

It works now. I solved the problem by adding "local" to the noProxy variable. Thanks again! @jrockway
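Concretely, that just meant appending it to the existing list in the helm values (a sketch; the rest of the list is elided):

global:
  # "local" appended at the end; <existing entries> stands in for the rest of the list
  noProxy: "<existing entries>,local"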

markqiu closed this as completed Jul 21, 2022
@jrockway (Member)

That makes sense. We should document what "noproxy" needs to be (I think that our etcd connection attempts to respect the HTTP_PROXY environment variable, which is probably not what most people want here). I'll open an (internal) issue for that.
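For now, the entries that matter are the in-cluster hostnames that pachd and console connect to (etcd, postgres/pg-bouncer, object storage, pachd-peer). A rough sketch; the service names here are illustrative, so substitute the ones in your release and namespace:

global:
  noProxy: "localhost,127.0.0.1,pachd,pachd-peer,etcd,postgresql,pg-bouncer,minio,pachd-peer.<namespace>.svc.cluster.local"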


markqiu commented Jul 22, 2022

Sorry, I was wrong again.
The console cannot be accessed now, and the error message from the log is below. It seems some additional noProxy entries are needed.
My values.yaml is as follows:

# SPDX-FileCopyrightText: Pachyderm, Inc. <info@pachyderm.com>
# SPDX-License-Identifier: Apache-2.0
deployTarget: "MINIO"

global:
  # Sets the HTTP/S proxy server address for console, pachd, and enterprise server
  proxy: "http://123.103.74.231:20172"
  # If proxy is set, this allows you to set a comma-separated list of destinations that bypass the proxy
  noProxy: "10.1.0.0/16,10.152.183.0/24,localhost,jinniuai,127.0.0.1,172.16.0.0/15,192.168.0.0/16,jinniuai.internal,hub,jinniuai.com,local,postgresql,minio,postgres-0,pachd,etcd-0,etcd,console,pachd-peer,postgres-headless,etcd-headless,pg-bouncer"

pachd:
  storage:
    backend: "MINIO"
    minio:
      bucket: "pachyderm"
      endpoint: "minio.minio.svc.cluster.local:9000"
      id: "rootuser"
      secret: "rootpass123"
      secure: "false"
etcd:
  storageClass: openebs-hostpath
  size: 10Gi

postgresql:
  persistence:
    storageClass: openebs-hostpath
    size: 10Gi

My installation command is as follows:

 microk8s helm3 upgrade --install pachd -f pachyderm-enterprise-member-values.yaml pach/pachyderm --set pachd.enterpriseLicenseKey=$(cat pachyderm-enterprise-license.txt) --namespace pachyderm --create-namespace --set console.enabled=true

The error in the log is as follows:

dotenv-flow: "REACT_APP_RUNTIME_ISSUER_URI" is already defined in `process.env` and will not be overwritten
dotenv-flow: "REACT_APP_RUNTIME_SUBSCRIPTIONS_PREFIX" is already defined in `process.env` and will not be overwritten
dotenv-flow: "REACT_APP_RUNTIME_DISABLE_TELEMETRY" is already defined in `process.env` and will not be overwritten
dotenv-flow: "ISSUER_URI" is already defined in `process.env` and will not be overwritten
dotenv-flow: "OAUTH_REDIRECT_URI" is already defined in `process.env` and will not be overwritten
dotenv-flow: "OAUTH_CLIENT_ID" is already defined in `process.env` and will not be overwritten
dotenv-flow: "OAUTH_CLIENT_SECRET" is already defined in `process.env` and will not be overwritten
dotenv-flow: "GRAPHQL_PORT" is already defined in `process.env` and will not be overwritten
dotenv-flow: "OAUTH_PACHD_CLIENT_ID" is already defined in `process.env` and will not be overwritten
dotenv-flow: "PACHD_ADDRESS" is already defined in `process.env` and will not be overwritten
Persisted queries are enabled and are using an unbounded cache. Your server is vulnerable to denial of service attacks via memory exhaustion. Set `cache: "bounded"` or `persistedQueries: false` in your ApolloServer constructor, or see https://go.apollo.dev/s/cache-backends for other alternatives.
{"name":"dash-api","hostname":"console-7454cc697f-slncz","pid":18,"level":30,"msg":"Server ready at https://localhost:4000/graphql","time":"2022-07-21T11:31:06.042Z","v":0}
{"name":"dash-api","hostname":"console-7454cc697f-slncz","pid":18,"level":30,"msg":"Websocket server ready at wss://localhost:4000/graphql","time":"2022-07-21T11:31:06.080Z","v":0}
This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). The promise rejected with the reason:
Error: 14 UNAVAILABLE: Connection dropped
    at Object.callErrorFromStatus (/usr/src/app/backend/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
    at Object.onReceiveStatus (/usr/src/app/backend/node_modules/@grpc/grpc-js/build/src/client.js:180:52)
    at Object.onReceiveStatus (/usr/src/app/backend/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:365:141)
    at Object.onReceiveStatus (/usr/src/app/backend/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)
    at /usr/src/app/backend/node_modules/@grpc/grpc-js/build/src/call-stream.js:187:78
    at processTicksAndRejections (node:internal/process/task_queues:78:11)
{"name":"dash-api","hostname":"console-7454cc697f-slncz","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"Creating pach client","time":"2022-07-21T11:42:44.636Z","v":0}
{"name":"dash-api","hostname":"console-7454cc697f-slncz","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"whoAmI request started","time":"2022-07-21T11:42:44.637Z","v":0}
{"name":"dash-api","hostname":"console-7454cc697f-slncz","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":50,"error":"Connection dropped","msg":"whoAmI request failed","time":"2022-07-21T11:42:46.797Z","v":0}
{"name":"dash-api","hostname":"console-7454cc697f-slncz","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"whoAmI request started","time":"2022-07-21T11:42:47.022Z","v":0}
{"name":"dash-api","hostname":"console-7454cc697f-slncz","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":50,"error":"Connection dropped","msg":"whoAmI request failed","time":"2022-07-21T11:42:49.494Z","v":0}
{"name":"dash-api","hostname":"console-7454cc697f-slncz","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"whoAmI request started","time":"2022-07-21T11:42:50.796Z","v":0}
{"name":"dash-api","hostname":"console-7454cc697f-slncz","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":50,"error":"Connection dropped","msg":"whoAmI request failed","time":"2022-07-21T11:42:52.951Z","v":0}
{"name":"dash-api","hostname":"console-7454cc697f-slncz","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"whoAmI request started","time":"2022-07-21T11:42:53.766Z","v":0}
{"name":"dash-api","hostname":"console-7454cc697f-slncz","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":50,"error":"Connection dropped","msg":"whoAmI request failed","time":"2022-07-21T11:42:56.955Z","v":0}
{"name":"dash-api","hostname":"console-7454cc697f-slncz","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"whoAmI request started","time":"2022-07-21T11:42:59.830Z","v":0}
{"name":"dash-api","hostname":"console-7454cc697f-slncz","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":50,"error":"Connection dropped","msg":"whoAmI request failed","time":"2022-07-21T11:43:01.995Z","v":0}

I've tried adding many values to the noProxy variable with no luck. Any ideas?
@jrockway

markqiu reopened this Jul 22, 2022

jrockway commented Jul 22, 2022

It looks like the grpc library does an exact comparison on the host being connected to, without the port. Since PACHD_ADDRESS is pachd-peer.default.svc.cluster.local:30658, noProxy needs to contain pachd-peer.default.svc.cluster.local.

If that doesn't work, you can do some more extreme debugging of console; kubectl edit deployment console and then add these environment variables:

- name: GRPC_TRACE
  value: proxy
- name: GRPC_VERBOSITY
  value: DEBUG

That should print something like:

D 2022-07-22T02:14:09.498Z | proxy | Proxy server 127.0.0.1:80 set by environment variable https_proxy
D 2022-07-22T02:14:09.499Z | proxy | No proxy server list set by environment variable no_proxy
D 2022-07-22T02:14:09.499Z | proxy | Not using proxy for target in no_proxy list: dns:pachd-peer.default.svc.cluster.local:30653                                                                                                                         

For reference, I got a working console with these values:

deployTarget: "LOCAL"

global:
    proxy: "http://127.0.0.1"
    noProxy: "pachd-peer.default.svc.cluster.local"

127.0.0.1 is not a real proxy, but it does cause failures, which was all I was after.


markqiu commented Jul 25, 2022

It works!
Now the UI is OK and downloading the images works. But the console is still not very stable; it restarts occasionally, and I have to restart the port-forward command to reconnect to the server after a failure. @jrockway
The log is as follows:

> @pachyderm/dash-backend@0.0.1 start
> NODE_PATH=dist/ NODE_ENV=production DOTENV_FLOW_PATH=../ node -r module-alias/register -r dotenv-flow/config dist/index.js

dotenv-flow: "REACT_APP_RUNTIME_ISSUER_URI" is already defined in `process.env` and will not be overwritten
dotenv-flow: "REACT_APP_RUNTIME_SUBSCRIPTIONS_PREFIX" is already defined in `process.env` and will not be overwritten
dotenv-flow: "REACT_APP_RUNTIME_DISABLE_TELEMETRY" is already defined in `process.env` and will not be overwritten
dotenv-flow: "ISSUER_URI" is already defined in `process.env` and will not be overwritten
dotenv-flow: "OAUTH_REDIRECT_URI" is already defined in `process.env` and will not be overwritten
dotenv-flow: "OAUTH_CLIENT_ID" is already defined in `process.env` and will not be overwritten
dotenv-flow: "OAUTH_CLIENT_SECRET" is already defined in `process.env` and will not be overwritten
dotenv-flow: "GRAPHQL_PORT" is already defined in `process.env` and will not be overwritten
dotenv-flow: "OAUTH_PACHD_CLIENT_ID" is already defined in `process.env` and will not be overwritten
dotenv-flow: "PACHD_ADDRESS" is already defined in `process.env` and will not be overwritten
Persisted queries are enabled and are using an unbounded cache. Your server is vulnerable to denial of service attacks via memory exhaustion. Set `cache: "bounded"` or `persistedQueries: false` in your ApolloServer constructor, or see https://go.apollo.dev/s/cache-backends for other alternatives.
D 2022-07-25T01:49:17.107Z | proxy | Proxy server 123.103.74.231:20172 set by environment variable https_proxy
D 2022-07-25T01:49:17.108Z | proxy | No proxy server list set by environment variable no_proxy
D 2022-07-25T01:49:17.108Z | proxy | Not using proxy for target in no_proxy list: dns:pachd-peer.pachyderm.svc.cluster.local:30653
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"level":30,"msg":"Server ready at https://localhost:4000/graphql","time":"2022-07-25T01:49:17.139Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"level":30,"msg":"Websocket server ready at wss://localhost:4000/graphql","time":"2022-07-25T01:49:17.168Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"level":30,"msg":"Telemetry stream to rudderstack started","time":"2022-07-25T01:49:17.194Z","v":0}
2022-07-25T01:49:17.677Z [Rudder] error: error status: 400
2022-07-25T01:49:17.678Z [Rudder] error: got error while attempting send for 3 times, dropping 1 events
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"Creating pach client","time":"2022-07-25T01:51:22.985Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","operationId":"f66799b6-60a2-4b39-a64a-ad508a683042","account":{"id":"Cg0wLTM4NS0yODA4OS0wEgR0ZXN0","email":"kilgore@kilgore.trout"},"projectId":"","level":30,"eventSource":"apollo-server","meta":{"operationName":"getEnterpriseInfo"},"msg":"request did start","time":"2022-07-25T01:51:23.031Z","v":0}
D 2022-07-25T01:51:23.046Z | proxy | Proxy server 123.103.74.231:20172 set by environment variable https_proxy
D 2022-07-25T01:51:23.046Z | proxy | No proxy server list set by environment variable no_proxy
D 2022-07-25T01:51:23.046Z | proxy | Not using proxy for target in no_proxy list: dns:pachd-peer.pachyderm.svc.cluster.local:30653
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"getState request started","time":"2022-07-25T01:51:23.047Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","operationId":"74f75660-480d-435b-893f-d39937ee99c6","account":{"id":"Cg0wLTM4NS0yODA4OS0wEgR0ZXN0","email":"kilgore@kilgore.trout"},"projectId":"","level":30,"eventSource":"apollo-server","meta":{"operationName":"getAccount"},"msg":"request did start","time":"2022-07-25T01:51:23.054Z","v":0}
... ...
... ...
2022-07-25T01:57:24.451Z [Rudder] error: error status: 400
2022-07-25T01:57:24.451Z [Rudder] error: got error while attempting send for 3 times, dropping 20 events
2022-07-25T01:57:24.605Z [Rudder] error: error status: 400
2022-07-25T01:57:24.605Z [Rudder] error: got error while attempting send for 3 times, dropping 20 events
2022-07-25T01:57:25.817Z [Rudder] error: error status: 400
2022-07-25T01:57:25.817Z [Rudder] error: got error while attempting send for 3 times, dropping 20 events
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","operationId":"d8973663-4f57-4006-817d-856e2dbe3be2","account":{"id":"Cg0wLTM4NS0yODA4OS0wEgR0ZXN0","email":"kilgore@kilgore.trout"},"projectId":"default","level":30,"eventSource":"apollo-server","meta":{"operationName":"createRepo"},"msg":"request did start","time":"2022-07-25T01:57:27.100Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"createRepo request started","time":"2022-07-25T01:57:27.101Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"listRepo request started","time":"2022-07-25T01:57:27.131Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"listPipeline request started","time":"2022-07-25T01:57:27.132Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"listRepo request completed","time":"2022-07-25T01:57:27.135Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"listPipeline request completed","time":"2022-07-25T01:57:27.136Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","operationId":"644925fb-7602-4a80-95c1-fd67119753cc","account":{"id":"Cg0wLTM4NS0yODA4OS0wEgR0ZXN0","email":"kilgore@kilgore.trout"},"projectId":"default","level":30,"eventSource":"dag resolver","event":"deriving vertices for dag","meta":{"projectId":"default"},"msg":"","time":"2022-07-25T01:57:27.136Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"createRepo request completed","time":"2022-07-25T01:57:27.140Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"inspectRepo request started","time":"2022-07-25T01:57:27.142Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"inspectRepo request completed","time":"2022-07-25T01:57:27.148Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","operationId":"d8973663-4f57-4006-817d-856e2dbe3be2","account":{"id":"Cg0wLTM4NS0yODA4OS0wEgR0ZXN0","email":"kilgore@kilgore.trout"},"projectId":"default","level":30,"eventSource":"apollo-server","meta":{"operationName":"createRepo"},"msg":"will send response","time":"2022-07-25T01:57:27.149Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"listRepo request started","time":"2022-07-25T01:57:30.132Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"listPipeline request started","time":"2022-07-25T01:57:30.132Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"listPipeline request completed","time":"2022-07-25T01:57:30.136Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"eventSource":"grpc client","pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","level":30,"msg":"listRepo request completed","time":"2022-07-25T01:57:30.140Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","operationId":"644925fb-7602-4a80-95c1-fd67119753cc","account":{"id":"Cg0wLTM4NS0yODA4OS0wEgR0ZXN0","email":"kilgore@kilgore.trout"},"projectId":"default","level":30,"eventSource":"dag resolver","event":"deriving vertices for dag","meta":{"projectId":"default"},"msg":"","time":"2022-07-25T01:57:30.141Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"level":30,"EventSource":"Websocket","Event":"Next","meta":{"id":"ba2aebd3-e046-4b31-9742-2fb1856570f6","type":"next"},"msg":"","time":"2022-07-25T01:57:30.141Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","operationId":"9e2cb97e-fc7e-4e34-82b4-604e3f24c0b6","account":{"id":"Cg0wLTM4NS0yODA4OS0wEgR0ZXN0","email":"kilgore@kilgore.trout"},"projectId":"","level":30,"eventSource":"apollo-server","meta":{"operationName":"getAccount"},"msg":"request did start","time":"2022-07-25T01:57:30.263Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"pachdAddress":"pachd-peer.pachyderm.svc.cluster.local:30653","operationId":"9e2cb97e-fc7e-4e34-82b4-604e3f24c0b6","account":{"id":"Cg0wLTM4NS0yODA4OS0wEgR0ZXN0","email":"kilgore@kilgore.trout"},"projectId":"","level":30,"eventSource":"apollo-server","meta":{"operationName":"getAccount"},"msg":"will send response","time":"2022-07-25T01:57:30.264Z","v":0}
{"name":"dash-api","hostname":"console-6547597858-5qxnb","pid":18,"level":30,"EventSource":"Websocket","Event":"Complete","meta":{"id":"ba2aebd3-e046-4b31-9742-2fb1856570f6","type":"complete"},"msg":"","time":"2022-07-25T01:57:31.258Z","v":0}
2022-07-25T01:57:31.603Z [Rudder] error: error status: 400
2022-07-25T01:57:31.603Z [Rudder] error: got error while attempting send for 3 times, dropping 20 events
2022-07-25T01:57:44.923Z [Rudder] error: error status: 400
2022-07-25T01:57:44.923Z [Rudder] error: got error while attempting send for 3 times, dropping 6 events

markqiu closed this as completed Jul 25, 2022
markqiu reopened this Jul 25, 2022

markqiu commented Aug 1, 2022

Does anyone know how to fix it? Thanks


jrockway commented Aug 4, 2022

I don't see anything here that obviously looks like the console crashing. There are some analytics requests that appear to be blocked by the proxying rules; that shouldn't be fatal or anything. (They can be turned off entirely, though.)

If console does go away, you will have to restart the port forwarding; that's just how port forwarding works. An alternative is to configure the services to listen on a NodePort (if local) or a LoadBalancer (if remote). Depending on your local dev environment, one might be easier than the other. With a LoadBalancer service on minikube running on OS X or Windows, minikube tunnel can just make the services available at "localhost". With other setups, a NodePort might be easier.

The 2.3 alphas (soon to be marked stable) make it very easy to serve everything on one port with proxy.enabled=true in the helm values (but you'll still need to pick NodePort or LoadBalancer as the service type, depending on your environment). There is a tutorial here: https://docs.pachyderm.com/2.3.x/deploy-manage/deploy/deploy-w-proxy/#deploy-pachyderm-with-a-proxy-one-port-for-all-external-traffic
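In helm-values form that's roughly the following (proxy.enabled is the flag mentioned above; the service-type key is illustrative, so verify it against the chart's values.yaml for your version):

proxy:
  enabled: true
  service:
    # LoadBalancer for a cloud/remote cluster, NodePort for a local one; check the
    # exact key name against your chart version before relying on it.
    type: LoadBalancer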
