I am not sure whether this should be categorized as a bug or a feature request; I think it's just undesirable behavior. I am also not sure whether this is an issue with MicroK8s itself or with an upstream component.
daemon.kubelite generates excessive logs in certain cases for CronJobs. I have a CronJob with concurrencyPolicy: Forbid set. A machine ran out of disk space because the following message was repeatedly written to syslog:
Apr 8 03:59:59 lambda-quad microk8s.daemon-kubelite[839288]: I0408 03:59:59.994995 839288 event.go:294] "Event occurred" object="default/cron-sitemap" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="JobAlreadyActive" message="Not starting job because prior execution is running and concurrency policy is Forbid"
This message gets generated thousands of times per second (my estimate).
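For a rough sense of how quickly this can fill a disk (both inputs are assumptions: the sample syslog line above is about 340 bytes, and 1000 messages per second is taken from my own estimate), a back-of-the-envelope calculation:

```shell
# Rough syslog growth: bytes per message * messages per second
# * seconds per day, converted down to whole GiB.
echo $(( 340 * 1000 * 86400 / 1024 / 1024 / 1024 ))  # prints 27
```

Even if the real rate were a tenth of that estimate, the volume could still outpace typical log rotation settings.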
What Should Happen Instead?
I don't think it's my place to be too prescriptive about how this problem might be solved. I couldn't find a way to control the log level of kubelite (in contrast to containerd). I would like the machine not to fill up with this log message.
Reproduction Steps
Create a CronJob that runs for a long time with concurrencyPolicy: Forbid
Have the second attempt to run the job start while the previous one is still running.
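The two steps above can be reproduced with a manifest along these lines (the name, schedule, image, and sleep duration are illustrative assumptions; the job is scheduled every minute but runs for ten, so each new attempt starts while the previous one is still running):

```shell
# Hypothetical CronJob whose runs always overlap the next schedule tick.
microk8s kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: long-running-cron   # illustrative name
spec:
  schedule: "*/1 * * * *"   # fire every minute
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: sleeper
            image: busybox
            command: ["sleep", "600"]   # run for ten minutes
EOF
```

With this in place, watching syslog should show the "JobAlreadyActive" message being emitted.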
Hi @sseveran, the fact that this log message is printed, as well as its frequency, should be raised with the upstream Kubernetes project.
What we can offer is some info on how to alter the log verbosity. kubelite starts all the Kubernetes services. The log level can be set via the -v <log level, ranging from 0 to 9> flag in the files kube-apiserver, kubelet, kube-proxy, kube-scheduler, and kube-controller-manager under /var/snap/microk8s/current/args/. A microk8s.stop followed by a microk8s.start is needed after setting the log level.
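As a concrete sketch of the steps above (the flag syntax follows the description in this comment; level 1 is just an illustrative choice, and the controller manager is the component that emits the CronJob events):

```shell
# Append a verbosity flag to the controller manager's args file
# (the component that logs the "JobAlreadyActive" events).
echo '-v=1' | sudo tee -a /var/snap/microk8s/current/args/kube-controller-manager

# Restart all services so kubelite picks up the new flag.
sudo microk8s.stop
sudo microk8s.start
```

The same edit can be applied to the other listed args files if other components are too chatty.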
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Introspection Report
inspection-report-20230408_065938.tar.gz
Can you suggest a fix?
No.
Are you interested in contributing with a fix?
Potentially. I looked around to see if I could track down exactly where this happens, but I was not successful.