Excessive Logging from CronJob #3908

Open
sseveran opened this issue Apr 8, 2023 · 3 comments

sseveran commented Apr 8, 2023

Summary

I am not sure if this should be categorized as a bug or a feature request, as I think it's just undesirable behavior. I am also not sure whether this is an issue with just microk8s or with an upstream component.

daemon.kubelite generates excessive logs in certain cases for CronJobs. I have a CronJob with concurrencyPolicy: Forbid set. A machine ran out of disk space because the following message was repeatedly written to syslog:

Apr 8 03:59:59 lambda-quad microk8s.daemon-kubelite[839288]: I0408 03:59:59.994995 839288 event.go:294] "Event occurred" object="default/cron-sitemap" fieldPath="" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="JobAlreadyActive" message="Not starting job because prior execution is running and concurrency policy is Forbid"

This message gets generated thousands of times per second (my estimate).

What Should Happen Instead?

I don't think it's my place to be too prescriptive about how this problem might be solved. I couldn't find a way to control the log level of kubelite (in contrast to containerd). I would like to not have a machine fill up with this log message.

Reproduction Steps

  1. Create a CronJob that runs for a long time with concurrencyPolicy: Forbid (a minimal manifest sketch follows below).
  2. Have the next scheduled run come due while the previous job is still running.
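
For reference, a minimal manifest sketch matching these steps; the name, schedule, image, and sleep duration are illustrative choices, not taken from the original report:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: long-running-cron          # illustrative name
spec:
  schedule: "*/1 * * * *"          # fires every minute
  concurrencyPolicy: Forbid        # skip new runs while a previous job is still active
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: work
              image: busybox
              command: ["sleep", "600"]   # runs much longer than the schedule interval
```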

Introspection Report

inspection-report-20230408_065938.tar.gz

Can you suggest a fix?

No.

Are you interested in contributing with a fix?

Potentially. I looked around to see if I could track down exactly where this happens, but I was not successful.

@ktsakalozos (Member) commented

Hi @sseveran, the fact that this log message is printed, and its frequency, should be raised with the upstream Kubernetes project.

What we can offer is some info on how to alter the log verbosity. kubelite starts all the k8s services. The log level can be set via the -v <log level ranging from 0 to 9> flag in the kube-apiserver, kubelet, kube-proxy, kube-scheduler, and kube-controller-manager files under /var/snap/microk8s/current/args/. A microk8s.stop followed by a microk8s.start is needed after setting the log level.
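
A sketch of those steps for the kube-controller-manager, which runs the CronJob controller; the level shown is only an example, and whether this particular event line is affected by a lower verbosity is not guaranteed:

```sh
# Append a verbosity flag to the controller-manager args file
# (level 1 here is an example; valid values range from 0 to 9).
echo "-v=1" | sudo tee -a /var/snap/microk8s/current/args/kube-controller-manager

# Restart the services so kubelite picks up the new flag.
sudo microk8s.stop
sudo microk8s.start
```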

@daviddenis-stx commented

Reported upstream here (we observed the same thing): kubernetes/kubernetes#118789


stale bot commented May 16, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the inactive label May 16, 2024