logs are not being sent to splunk #853

Open · kavita1205 opened this issue Mar 7, 2023 · 1 comment

@kavita1205
What happened:
Hi Team,

I have installed Helm chart version 1.5.2 of Splunk Connect for Kubernetes (SCK). After installation, a few pods are in CrashLoopBackOff with the error logs below, while the pods that do reach Running status are not sending their container logs to Splunk; what Splunk does receive for those running pods is shown in the screenshot further down.

CrashLoopBackOff error logs

kubectl logs -n splunk-sck -f lv-splunk-logging-76l6d
2023-03-07 13:56:38 +0000 [info]: init supervisor logger path=nil rotate_age=nil rotate_size=nil
2023-03-07 13:56:38 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2023-03-07 13:56:38 +0000 [info]: gem 'fluentd' version '1.15.3'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-concat' version '2.4.0'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-jq' version '0.5.1'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '3.1.0'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-prometheus' version '2.0.2'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.3.1'
2023-03-07 13:56:38 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2023-03-07 13:56:38 +0000 [INFO]: Reading bearer token from /var/run/secrets/kubernetes.io/serviceaccount/token
2023-03-07 13:56:41 +0000 [error]: config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="Invalid Kubernetes API v1 endpoint https://10.96.0.1:443/api: Timed out connecting to server"
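
The config error shows the pod cannot reach the in-cluster Kubernetes API at https://10.96.0.1:443 at all, which usually points at node-level networking rather than at SCK itself. As a diagnostic sketch (not part of the chart; <affected-node> is a placeholder for a node hosting a crashing pod), a throwaway pod pinned to that node can try the same endpoint:

kubectl run api-check --rm -it --restart=Never --image=curlimages/curl \
  --overrides='{"spec":{"nodeName":"<affected-node>"}}' \
  -- curl -sk -m 10 https://10.96.0.1:443/api
# Any HTTP response (even a 403 from anonymous access) means the API server is
# reachable from that node; a timeout reproduces the fluentd failure and points
# at CNI, iptables, or firewall rules on that node.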

Running pod logs

kubectl logs -n splunk-sck lv-splunk-logging-qvvp2
2023-03-07 13:40:19 +0000 [info]: init supervisor logger path=nil rotate_age=nil rotate_size=nil
2023-03-07 13:40:19 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluent.conf"
2023-03-07 13:40:19 +0000 [info]: gem 'fluentd' version '1.15.3'
2023-03-07 13:40:19 +0000 [info]: gem 'fluent-plugin-concat' version '2.4.0'
2023-03-07 13:40:19 +0000 [info]: gem 'fluent-plugin-jq' version '0.5.1'
2023-03-07 13:40:19 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '3.1.0'
2023-03-07 13:40:19 +0000 [info]: gem 'fluent-plugin-prometheus' version '2.0.2'
2023-03-07 13:40:19 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2023-03-07 13:40:19 +0000 [info]: gem 'fluent-plugin-splunk-hec' version '1.3.1'
2023-03-07 13:40:19 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2023-03-07 13:40:20 +0000 [INFO]: Reading bearer token from /var/run/secrets/kubernetes.io/serviceaccount/token
2023-03-07 13:40:20 +0000 [info]: using configuration file: <ROOT>
  <system>
    log_level info
    root_dir "/tmp/fluentd"
  </system>
  <source>
    @id containers.log
    @type tail
    @label @CONCAT
    tag "tail.containers.*"
    path "/var/log/pods/*.log"
    pos_file "/var/log/splunk-fluentd-containers.log.pos"
    path_key "source"
    read_from_head true
    enable_stat_watcher true
    refresh_interval 60
    <parse>
      @type "json"
      time_format "%Y-%m-%dT%H:%M:%S.%NZ"
      time_key "time"
      time_type string
      localtime false
      unmatched_lines
    </parse>
  </source>
  <source>
    @id tail.file.kube-audit
    @type tail
    @label @CONCAT
    tag "tail.file.kube:apiserver-audit"
    path "/var/log/kube-apiserver-audit.log"
    pos_file "/var/log/splunk-fluentd-kube-audit.pos"
    read_from_head true
    path_key "source"
    <parse>
      @type "regexp"
      expression /^(?<log>.*)$/
      time_key "time"
      time_type string
      time_format "%Y-%m-%dT%H:%M:%SZ"
      unmatched_lines
    </parse>
  </source>
  <source>
    @id journald-docker
    @type systemd
    @label @CONCAT
    tag "journald.kube:docker"
    path "/var/log/journal"
    matches [{"_SYSTEMD_UNIT":"docker.service"}]
    read_from_head true
    <storage>
      @type "local"
      persistent true
      path "/var/log/splunkd-fluentd-journald-docker.pos.json"
    </storage>
    <entry>
      field_map {"MESSAGE":"log","_SYSTEMD_UNIT":"source"}
      field_map_strict true
    </entry>
  </source>
  <source>
    @id journald-kubelet
    @type systemd
    @label @CONCAT
    tag "journald.kube:kubelet"
    path "/var/log/journal"
    matches [{"_SYSTEMD_UNIT":"kubelet.service"}]
    read_from_head true
    <storage>
      @type "local"
      persistent true
      path "/var/log/splunkd-fluentd-journald-kubelet.pos.json"
    </storage>
    <entry>
      field_map {"MESSAGE":"log","_SYSTEMD_UNIT":"source"}
      field_map_strict true
    </entry>
  </source>
  <source>
    @id fluentd-monitor-agent
    @type monitor_agent
    @label @PARSE
    bind "0.0.0.0"
    port 24220
    tag "monitor_agent"
  </source>
  <label @CONCAT>
    <filter tail.containers.var.log.containers.dns-controller*dns-controller*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*sidecar*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*dnsmasq*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-apiserver*kube-apiserver*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-controller-manager*kube-controller-manager*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-dns-autoscaler*autoscaler*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-proxy*kube-proxy*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-scheduler*kube-scheduler*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*kubedns*.log>
      @type concat
      key "log"
      timeout_label "@PARSE"
      stream_identity_key "stream"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
      separator ""
      use_first_timestamp true
    </filter>
    <filter journald.kube:kubelet>
      @type concat
      key "log"
      timeout_label "@PARSE"
      multiline_start_regexp "/^\\w[0-1]\\d[0-3]\\d/"
      flush_interval 5
    </filter>
    <match **>
      @type relabel
      @label @PARSE
    </match>
  </label>
  <label @PARSE>
    <filter tail.containers.**>
      @type grep
      <exclude>
        key "log"
        pattern \A\z
      </exclude>
    </filter>
    <filter tail.containers.**>
      @type kubernetes_metadata
      annotation_match [".*"]
      de_dot false
      watch true
      cache_ttl 3600
    </filter>
    <filter tail.containers.**>
      @type record_transformer
      enable_ruby
      <record>
        sourcetype ${record.dig("kubernetes", "annotations", "splunk.com/sourcetype") ? record.dig("kubernetes", "annotations", "splunk.com/sourcetype") : "kube:container:"+record.dig("kubernetes","container_name")}
        container_name ${record.dig("kubernetes","container_name")}
        namespace ${record.dig("kubernetes","namespace_name")}
        pod ${record.dig("kubernetes","pod_name")}
        container_id ${record.dig("docker","container_id")}
        pod_uid ${record.dig("kubernetes","pod_id")}
        container_image ${record.dig("kubernetes","container_image")}
        cluster_name ****-ml-lv
        splunk_index ${record.dig("kubernetes", "annotations", "splunk.com/index."+record.dig("kubernetes","container_name")) ? record.dig("kubernetes", "annotations", "splunk.com/index."+record.dig("kubernetes","container_name")) : record.dig("kubernetes", "annotations", "splunk.com/index") ? record.dig("kubernetes", "annotations", "splunk.com/index") : record.dig("kubernetes", "namespace_annotations", "splunk.com/index") ? (record["kubernetes"]["namespace_annotations"]["splunk.com/index"]) : ("ml_logs")}
        label_app ${record.dig("kubernetes","labels","app")}
        label_k8s-app ${record.dig("kubernetes","labels","k8s-app")}
        label_release ${record.dig("kubernetes","labels","release")}
        exclude_list ${record.dig("kubernetes", "annotations", "splunk.com/exclude") ? record.dig("kubernetes", "annotations", "splunk.com/exclude") : record.dig("kubernetes", "namespace_annotations", "splunk.com/exclude") ? (record["kubernetes"]["namespace_annotations"]["splunk.com/exclude"]) : ("false")}
      </record>
    </filter>
    <filter tail.containers.**>
      @type grep
      <exclude>
        key "exclude_list"
        pattern /^true$/
      </exclude>
    </filter>
    <filter tail.containers.var.log.containers.dns-controller*dns-controller*.log>
      @type record_transformer
      <record>
        sourcetype kube:dns-controller
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*sidecar*.log>
      @type record_transformer
      <record>
        sourcetype kube:kubedns-sidecar
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*dnsmasq*.log>
      @type record_transformer
      <record>
        sourcetype kube:dnsmasq
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-apiserver*kube-apiserver*.log>
      @type record_transformer
      <record>
        sourcetype kube:kube-apiserver
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-controller-manager*kube-controller-manager*.log>
      @type record_transformer
      <record>
        sourcetype kube:kube-controller-manager
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-dns-autoscaler*autoscaler*.log>
      @type record_transformer
      <record>
        sourcetype kube:kube-dns-autoscaler
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-proxy*kube-proxy*.log>
      @type record_transformer
      <record>
        sourcetype kube:kube-proxy
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-scheduler*kube-scheduler*.log>
      @type record_transformer
      <record>
        sourcetype kube:kube-scheduler
      </record>
    </filter>
    <filter tail.containers.var.log.containers.kube-dns*kubedns*.log>
      @type record_transformer
      <record>
        sourcetype kube:kubedns
      </record>
    </filter>
    <filter journald.**>
      @type jq_transformer
      jq ".record.source = \"/var/log/journal/\" + .record.source | .record.sourcetype = (.tag | ltrimstr(\"journald.\")) | .record.cluster_name = \"****-ml-lv\" | .record.splunk_index = \"ml_logs\" |.record"
    </filter>
    <filter tail.file.**>
      @type jq_transformer
      jq ".record.sourcetype = (.tag | ltrimstr(\"tail.file.\")) | .record.cluster_name = \"****-ml-lv\" | .record.splunk_index = \"ml_logs\" | .record"
    </filter>
    <filter monitor_agent>
      @type jq_transformer
      jq ".record.source = \"namespace:splunk-sck/pod:lv-splunk-logging-qvvp2\" | .record.sourcetype = \"fluentd:monitor-agent\" | .record.cluster_name = \"***-ml-lv\" | .record.splunk_index = \"ml_logs\" | .record"
    </filter>
    <match **>
      @type relabel
      @label @SPLUNK
    </match>
  </label>
  <label @SPLUNK>
    <match **>
      @type splunk_hec
      protocol https
      hec_host "splunk-hec.oi.tivo.com"
      consume_chunk_on_4xx_errors true
      hec_port 8088
      hec_token xxxxxx
      index_key "splunk_index"
      insecure_ssl true
      host "las2-mlgpu32"
      source_key "source"
      sourcetype_key "sourcetype"
      app_name "splunk-kubernetes-logging"
      app_version "1.5.2"
      <fields>
        container_image
        pod_uid
        pod
        container_name
        namespace
        container_id
        cluster_name
        label_app
        label_k8s-app
        label_release
      </fields>
      <buffer index>
        @type "memory"
        chunk_limit_records 100000
        chunk_limit_size 20m
        flush_interval 5s
        flush_thread_count 1
        overflow_action block
        retry_max_times 5
        retry_type periodic
        retry_wait 30
        total_limit_size 600m
      </buffer>
      <format monitor_agent>
        @type "json"
      </format>
      <format>
        @type "single_value"
        message_key "log"
        add_newline false
      </format>
    </match>
  </label>
  <source>
    @type prometheus
  </source>
  <source>
    @type forward
  </source>
  <source>
    @type prometheus_monitor
    <labels>
      host ${hostname}
    </labels>
  </source>
  <source>
    @type prometheus_output_monitor
    <labels>
      host ${hostname}
    </labels>
  </source>
</ROOT>
2023-03-07 13:40:20 +0000 [info]: starting fluentd-1.15.3 pid=1 ruby="2.7.6"
2023-03-07 13:40:20 +0000 [info]: spawn command to main:  cmdline=["/usr/bin/ruby", "-r/usr/local/share/gems/gems/bundler-2.3.26/lib/bundler/setup", "-Eascii-8bit:ascii-8bit", "/usr/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "--under-supervisor"]
2023-03-07 13:40:20 +0000 [info]: init supervisor logger path=nil rotate_age=nil rotate_size=nil
2023-03-07 13:40:21 +0000 [info]: #0 init worker0 logger path=nil rotate_age=nil rotate_size=nil
2023-03-07 13:40:21 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.dns-controller*dns-controller*.log" type="concat"
2023-03-07 13:40:21 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-dns*sidecar*.log" type="concat"
2023-03-07 13:40:21 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-dns*dnsmasq*.log" type="concat"
2023-03-07 13:40:21 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-apiserver*kube-apiserver*.log" type="concat"
2023-03-07 13:40:21 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-controller-manager*kube-controller-manager*.log" type="concat"
2023-03-07 13:40:21 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-dns-autoscaler*autoscaler*.log" type="concat"
2023-03-07 13:40:21 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-proxy*kube-proxy*.log" type="concat"
2023-03-07 13:40:21 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-scheduler*kube-scheduler*.log" type="concat"
2023-03-07 13:40:21 +0000 [info]: adding filter in @CONCAT pattern="tail.containers.var.log.containers.kube-dns*kubedns*.log" type="concat"
2023-03-07 13:40:21 +0000 [info]: adding filter in @CONCAT pattern="journald.kube:kubelet" type="concat"
2023-03-07 13:40:21 +0000 [info]: adding match in @CONCAT pattern="**" type="relabel"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.containers.**" type="grep"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.containers.**" type="kubernetes_metadata"
2023-03-07 13:40:21 +0000 [INFO]: Reading bearer token from /var/run/secrets/kubernetes.io/serviceaccount/token
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.containers.**" type="record_transformer"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.containers.**" type="grep"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.dns-controller*dns-controller*.log" type="record_transformer"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-dns*sidecar*.log" type="record_transformer"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-dns*dnsmasq*.log" type="record_transformer"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-apiserver*kube-apiserver*.log" type="record_transformer"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-controller-manager*kube-controller-manager*.log" type="record_transformer"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-dns-autoscaler*autoscaler*.log" type="record_transformer"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-proxy*kube-proxy*.log" type="record_transformer"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-scheduler*kube-scheduler*.log" type="record_transformer"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.containers.var.log.containers.kube-dns*kubedns*.log" type="record_transformer"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="journald.**" type="jq_transformer"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="tail.file.**" type="jq_transformer"
2023-03-07 13:40:21 +0000 [info]: adding filter in @PARSE pattern="monitor_agent" type="jq_transformer"
2023-03-07 13:40:21 +0000 [info]: adding match in @PARSE pattern="**" type="relabel"
2023-03-07 13:40:21 +0000 [info]: adding match in @SPLUNK pattern="**" type="splunk_hec"
2023-03-07 13:40:21 +0000 [info]: adding source type="tail"
2023-03-07 13:40:21 +0000 [info]: adding source type="tail"
2023-03-07 13:40:21 +0000 [info]: adding source type="systemd"
2023-03-07 13:40:21 +0000 [info]: adding source type="systemd"
2023-03-07 13:40:21 +0000 [info]: adding source type="monitor_agent"
2023-03-07 13:40:21 +0000 [info]: adding source type="prometheus"
2023-03-07 13:40:21 +0000 [info]: adding source type="forward"
2023-03-07 13:40:21 +0000 [info]: adding source type="prometheus_monitor"
2023-03-07 13:40:21 +0000 [info]: adding source type="prometheus_output_monitor"
2023-03-07 13:40:21 +0000 [warn]: parameter 'de_dot' in <filter tail.containers.**>
  @type kubernetes_metadata
  annotation_match [".*"]
  de_dot false
  watch true
  cache_ttl 3600
</filter> is not used.
2023-03-07 13:40:21 +0000 [info]: #0 starting fluentd worker pid=19 ppid=1 worker=0
2023-03-07 13:40:21 +0000 [info]: #0 listening port port=24224 bind="0.0.0.0"
2023-03-07 13:40:21 +0000 [info]: #0 fluentd worker is now running worker=0
2023-03-07 13:40:33 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
2023-03-07 13:40:43 +0000 [info]: #0 Timeout flush: journald.kube:kubelet:default
[... the same "Timeout flush: journald.kube:kubelet:default" line repeats roughly every 10 seconds through 13:46:39 ...]
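
The repeated "Timeout flush" lines only show the concat filter flushing the kubelet journald stream; they say nothing about whether the splunk_hec output is delivering events. Since the chart already exposes fluentd's monitor_agent on port 24220 (the liveness probe queries it), one way to inspect the output's buffer and retry state is a quick port-forward; a diagnostic sketch:

kubectl port-forward -n splunk-sck lv-splunk-logging-qvvp2 24220:24220 &
curl -s http://localhost:24220/api/plugins.json
# In the splunk_hec plugin entry, a growing retry_count or buffer_queue_length
# would mean events are being read but not accepted on the way to HEC.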

Splunk logs

[screenshot: Splunk search results for the running pods]

DaemonSet manifest

kubectl get ds -n splunk-sck lv-splunk-logging -o yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
    meta.helm.sh/release-name: lv-splunk-connect
    meta.helm.sh/release-namespace: splunk-sck
  creationTimestamp: "2023-03-07T13:40:11Z"
  generation: 1
  labels:
    app: splunk-kubernetes-logging
    app.kubernetes.io/managed-by: Helm
    chart: splunk-kubernetes-logging-1.5.2
    engine: fluentd
    heritage: Helm
    release: lv-splunk-connect
  name: lv-splunk-logging
  namespace: splunk-sck
  resourceVersion: "390920101"
  selfLink: /apis/apps/v1/namespaces/splunk-sck/daemonsets/lv-splunk-logging
  uid: ed892500-8054-49c5-bc75-da098dbce325
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: splunk-kubernetes-logging
      release: lv-splunk-connect
  template:
    metadata:
      annotations:
        checksum/config: 6401fdcfd0a7ddd7c71e0b459aa342ebc61ed26afe237a64101f8369da6007a0
        prometheus.io/port: "24231"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app: splunk-kubernetes-logging
        release: lv-splunk-connect
    spec:
      containers:
      - env:
        - name: K8S_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: MY_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: SPLUNK_HEC_TOKEN
          valueFrom:
            secretKeyRef:
              key: splunk_hec_token
              name: splunk-kubernetes-logging
        image: docker.io/splunk/fluentd-hec:1.3.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /api/plugins.json
            port: 24220
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 1
        name: splunk-fluentd-k8s-logs
        ports:
        - containerPort: 24231
          name: metrics
          protocol: TCP
        - containerPort: 24220
          name: monitor-agent
          protocol: TCP
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          privileged: false
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/log/pods
          name: varlogdest
          readOnly: true
        - mountPath: /var/log/journal
          name: journallogpath
          readOnly: true
        - mountPath: /fluentd/etc
          name: conf-configmap
        - mountPath: /fluentd/etc/splunk
          name: secrets
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        beta.kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: lv-splunk-logging
      serviceAccountName: lv-splunk-logging
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/log/pods
          type: ""
        name: varlogdest
      - hostPath:
          path: /var/log/journal
          type: ""
        name: journallogpath
      - configMap:
          defaultMode: 420
          name: lv-splunk-logging
        name: conf-configmap
      - name: secrets
        secret:
          defaultMode: 420
          secretName: splunk-kubernetes-logging
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 53
  desiredNumberScheduled: 53
  numberAvailable: 50
  numberMisscheduled: 0
  numberReady: 50
  numberUnavailable: 3
  observedGeneration: 1
  updatedNumberScheduled: 53
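
The status above shows 53 pods scheduled but only 50 ready, matching the CrashLoopBackOff report. Listing the non-running pods together with their nodes (using the chart's own app label) would show whether the three failures cluster on particular nodes, for example the masters:

kubectl get pods -n splunk-sck -l app=splunk-kubernetes-logging -o wide | grep -v Running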

values.yaml

COMPUTED VALUES:
global:
  logLevel: info
  splunk:
    hec:
      gzip_compression: false
      host: splunk-hec.oi.tivo.com
      insecureSSL: true
      port: 8088
      protocol: https
      token: 779E*****-1473-40F8-AA19-DBEFFE****
splunk-kubernetes-logging:
  affinity: {}
  buffer:
    '@type': memory
    chunk_limit_records: 100000
    chunk_limit_size: 20m
    flush_interval: 5s
    flush_thread_count: 1
    overflow_action: block
    retry_max_times: 5
    retry_type: periodic
    retry_wait: 30
    total_limit_size: 600m
  bufferChunkKeys:
  - index
  charEncodingUtf8: false
  containers:
    enableStatWatcher: true
    localTime: false
    logFormat: null
    logFormatType: json
    path: /var/log
    pathDest: /var/log/pods
    refreshInterval: null
    removeBlankEvents: true
  customFilters: {}
  customMetadata: null
  customMetadataAnnotations: null
  enabled: true
  environmentVar: null
  extraLabels: null
  extraVolumeMounts: []
  extraVolumes: []
  fluentd:
    path: /var/log/pods/*.log
  fullnameOverride: lv-splunk-logging
  global:
    kubernetes:
      clusterName: ****-ml-lv
    logLevel: info
    metrics:
      service:
        enabled: true
        headless: true
    monitoring_agent_enabled: true
    prometheus_enabled: true
    serviceMonitor:
      additionalLabels: {}
      enabled: false
      interval: ""
      metricsPort: 24231
      scrapeTimeout: 10s
    splunk:
      hec:
        gzip_compression: false
        host: splunk-hec.oi.tivo.com
        insecureSSL: true
        port: 8088
        protocol: https
        token: 779E*****-1473-40F8-AA19-DBEFFE****
  image:
    name: splunk/fluentd-hec
    pullPolicy: IfNotPresent
    registry: docker.io
    tag: 1.3.1
    usePullSecret: false
  indexFields: []
  journalLogPath: /var/log/journal
  k8sMetadata:
    cache_ttl: 3600
    podLabels:
    - app
    - k8s-app
    - release
    propagate_namespace_labels: false
    watch: true
  kubernetes:
    clusterName: ****-ml-lv
    securityContext: false
  logLevel: null
  logs:
    dns-controller:
      from:
        pod: dns-controller
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:dns-controller
    dns-sidecar:
      from:
        container: sidecar
        pod: kube-dns
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kubedns-sidecar
    dnsmasq:
      from:
        pod: kube-dns
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:dnsmasq
    docker:
      from:
        journald:
          unit: docker.service
      sourcetype: kube:docker
    etcd:
      from:
        container: etcd-container
        pod: etcd-server
    etcd-events:
      from:
        container: etcd-container
        pod: etcd-server-events
    etcd-minikube:
      from:
        container: etcd
        pod: etcd-minikube
    kube-apiserver:
      from:
        pod: kube-apiserver
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kube-apiserver
    kube-audit:
      from:
        file:
          path: /var/log/kube-apiserver-audit.log
      sourcetype: kube:apiserver-audit
      timestampExtraction:
        format: '%Y-%m-%dT%H:%M:%SZ'
    kube-controller-manager:
      from:
        pod: kube-controller-manager
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kube-controller-manager
    kube-dns-autoscaler:
      from:
        container: autoscaler
        pod: kube-dns-autoscaler
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kube-dns-autoscaler
    kube-proxy:
      from:
        pod: kube-proxy
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kube-proxy
    kube-scheduler:
      from:
        pod: kube-scheduler
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kube-scheduler
    kubedns:
      from:
        pod: kube-dns
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kubedns
    kubelet:
      from:
        journald:
          unit: kubelet.service
      multiline:
        firstline: /^\w[0-1]\d[0-3]\d/
      sourcetype: kube:kubelet
  namespace: null
  nodeSelector:
    beta.kubernetes.io/os: linux
  podAnnotations: null
  podSecurityPolicy:
    apparmor_security: true
    create: false
  priorityClassName: null
  rbac:
    create: true
    openshiftPrivilegedSccBinding: false
  resources:
    requests:
      cpu: 100m
      memory: 200Mi
  rollingUpdate: null
  secret:
    create: true
  sendAllMetadata: false
  serviceAccount:
    create: true
  sourcetypePrefix: kube
  splunk:
    hec:
      caFile: null
      clientCert: null
      clientKey: null
      consume_chunk_on_4xx_errors: null
      fullUrl: null
      gzip_compression: null
      host: splunk-hec.oi.tivo.com
      indexName: ml_logs
      indexRouting: false
      indexRoutingDefaultIndex: default
      insecureSSL: true
      port: 8088
      protocol: https
      token: 779E*****-1473-40F8-AA19-DBEFFE****
    ingest_api: {}
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
splunk-kubernetes-metrics:
  affinity: {}
  aggregatorBuffer:
    '@type': memory
    chunk_limit_records: 10000
    chunk_limit_size: 100m
    flush_interval: 5s
    flush_thread_count: 1
    overflow_action: block
    retry_max_times: 10
    retry_type: periodic
    retry_wait: 30
    total_limit_size: 400m
  aggregatorNodeSelector:
    beta.kubernetes.io/os: linux
  aggregatorTolerations: {}
  buffer:
    '@type': memory
    chunk_limit_records: 10000
    chunk_limit_size: 100m
    flush_interval: 5s
    flush_thread_count: 1
    overflow_action: block
    retry_max_times: 10
    retry_type: periodic
    retry_wait: 30
    total_limit_size: 400m
  customFilters: {}
  customFiltersAggr: {}
  enabled: true
  environmentVar: null
  environmentVarAgg: null
  extraLabels: null
  extraLabelsAgg: null
  fullnameOverride: lv-splunk-metrics
  global:
    kubernetes:
      clusterName: ****-ml-lv
    logLevel: info
    monitoring_agent_enabled: false
    monitoring_agent_index_name: false
    prometheus_enabled: true
    splunk:
      hec:
        gzip_compression: false
        host: splunk-hec.oi.tivo.com
        insecureSSL: true
        port: 8088
        protocol: https
        token: 779E*****-1473-40F8-AA19-DBEFFE****
  image:
    name: splunk/k8s-metrics
    pullPolicy: IfNotPresent
    registry: docker.io
    tag: 1.2.1
    usePullSecret: false
  imageAgg:
    name: splunk/k8s-metrics-aggr
    pullPolicy: IfNotPresent
    registry: docker.io
    tag: 1.2.1
    usePullSecret: false
  kubernetes:
    bearerTokenFile: null
    caFile: null
    clusterName: ****-ml-lv
    insecureSSL: true
    kubeletAddress: '"#{ENV[''KUBERNETES_NODE_IP'']}"'
    kubeletPort: 10250
    kubeletPortAggregator: null
    secretDir: null
    useRestClientSSL: true
  logLevel: null
  metricsInterval: 60s
  namespace: null
  nodeSelector:
    beta.kubernetes.io/os: linux
  podAnnotations: null
  podAnnotationsAgg: null
  podSecurityPolicy:
    apparmor_security: true
    create: false
  priorityClassName: null
  priorityClassNameAgg: null
  rbac:
    create: true
  resources:
    fluent:
      limits:
        cpu: 200m
        memory: 300Mi
      requests:
        cpu: 200m
        memory: 300Mi
  rollingUpdate: null
  secret:
    create: true
  serviceAccount:
    create: true
    name: splunk-kubernetes-metrics
    usePullSecrets: false
  splunk:
    hec:
      caFile: null
      clientCert: null
      clientKey: null
      consume_chunk_on_4xx_errors: null
      fullUrl: null
      host: splunk-hec.oi.tivo.com
      indexName: em_metrics
      insecureSSL: true
      port: 8088
      protocol: null
      token: 779*****-1473-40F8-AA19-DBEFF******
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
splunk-kubernetes-objects:
  enabled: false
  fullnameOverride: lv-splunk-object
  kubernetes:
    clusterName: ****-ml-lv
    insecureSSL: true
  objects:
    apps:
      v1:
      - interval: 60s
        name: daemon_sets
    core:
      v1:
      - interval: 60s
        name: pods
      - interval: 60s
        name: nodes
  rbac:
    create: true
  serviceAccount:
    create: true
    name: splunk-kubernetes-objects
  splunk:
    hec:
      indexName: em_meta
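
Because even the Running pods deliver nothing, it is also worth testing the HEC endpoint from the values above independently of fluentd. A hand-rolled check (substitute the real token for the <HEC_TOKEN> placeholder), using Splunk's standard HEC health and event endpoints:

curl -sk https://splunk-hec.oi.tivo.com:8088/services/collector/health
# expected: {"text":"HEC is healthy","code":17}
curl -sk https://splunk-hec.oi.tivo.com:8088/services/collector/event \
  -H "Authorization: Splunk <HEC_TOKEN>" \
  -d '{"event":"sck connectivity test","index":"ml_logs"}'
# expected: {"text":"Success","code":0}; an "Incorrect index" error would mean
# the ml_logs index is not enabled for this token.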

What you expected to happen: Logs should be sent to Splunk.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): 1.17
  • Ruby version (use ruby --version):
  • OS (e.g: cat /etc/os-release):
  • Splunk version:
  • Splunk Connect for Kubernetes helm chart version: 1.5.2
  • Others:
@kavita1205 (Author)

@rockb1017 @chaitanyaphalak can you please help me here?
