Describe the bug
Hi,
I've deployed the EFK stack on my bare-metal Kubernetes cluster. Initially, Fluentd works fine and keeps pushing the logs of one Kubernetes namespace (ivr-qa) to Elasticsearch. However, after ~2 hours it stops. Here's the screenshot:
Upon inspecting the logs of Fluentd's pods, I get the following warning:
fluentd-pnppg fluentd 2024-01-15 13:42:01 +0000 [warn]: #0 failed to flush the buffer. retry_time=15 next_retry_seconds=2024-01-15 13:42:29 +0000 chunk="60ef933d4304c5c70d0c9ecefc7ac58f" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.kube-logging.svc.cluster.local\", :port=>9200, :scheme=>\"http\", :user=>\"elastic\", :password=>\"obfuscated\"}): read timeout reached"
fluentd-pnppg fluentd 2024-01-15 13:42:01 +0000 [warn]: #0 suppressed same stacktrace
I don't understand why it says "read timeout reached". If I search for ES's own logs in Kibana, I get the latest data every time:
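For reference, "read timeout reached" comes from fluent-plugin-elasticsearch: the bulk request was sent, but no response arrived within the plugin's request timeout (5 s by default), so it typically means Elasticsearch is responding slowly to bulk writes rather than being unreachable. A hedged sketch of the output options that govern this behavior — the option names are real fluent-plugin-elasticsearch settings, but the values are illustrative and not taken from this deployment:

```conf
<match **>
  @type elasticsearch
  host elasticsearch.kube-logging.svc.cluster.local
  port 9200
  scheme http
  # How long to wait for a bulk response before raising "read timeout reached"
  # (plugin default is 5s; illustrative value here)
  request_timeout 30s
  # Re-establish the connection on failures instead of reusing a dead one
  reconnect_on_error true
  reload_on_failure true
  # Avoid sniffing node addresses, which can break behind a k8s Service
  reload_connections false
</match>
```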
Can someone help me with this? I've already spent multiple hours on it.
Thanks.
To Reproduce
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  namespace: kube-logging
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
      kubernetes.io/cluster-service: "true"
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd # if RBAC is enabled
      serviceAccountName: fluentd # if RBAC is enabled
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch.kube-logging.svc.cluster.local"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENT_ELASTICSEARCH_USER # even if not used they are necessary
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pv-elastic
              key: USER_NAME
        - name: FLUENT_ELASTICSEARCH_PASSWORD # even if not used they are necessary
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pv-elastic
              key: PASSWORD
        resources:
          limits:
            cpu: 2
            memory: 4Gi
          requests:
            cpu: 1
            memory: 2Gi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluentd-config
          mountPath: /fluentd/etc # path of fluentd config file
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluentd-config
        configMap:
          name: fluentd-config # name of the config map we will create
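The DaemonSet mounts a ConfigMap named fluentd-config at /fluentd/etc, but its contents aren't included in this report. As a point of reference only, a minimal hypothetical fluent.conf for this image — the tail source and env-driven elasticsearch output mirror the fluentd-kubernetes-daemonset defaults; none of it is taken from the actual cluster — would look roughly like:

```conf
# Hypothetical fluent.conf; the real ConfigMap contents are not shown in the issue.

# Tail container logs from the hostPath mounted at /var/log
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
  </parse>
</source>

# Enrich records with pod/namespace metadata
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

# Ship everything to Elasticsearch using the env vars set in the DaemonSet
<match **>
  @type elasticsearch
  host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
  port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
  scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME']}"
  user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
  password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
  logstash_format true
  <buffer>
    flush_interval 5s
  </buffer>
</match>
```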
Expected behavior
Fluentd should keep sending the latest logs to Elasticsearch.