{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":21225407,"defaultBranch":"master","name":"kubernetes","ownerLogin":"smarterclayton","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2014-06-26T02:28:13.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/1163175?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1681750500.0","currentOid":""},"activityList":{"items":[{"before":"b3c95b659bef27f6ff302d2324077d57ba11aa5c","after":"6ac1bae28154b6f81023d6e783d91728687f7aff","ref":"refs/heads/minimal_podmanager","pushedAt":"2023-05-12T20:41:59.419Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"test: Improve debug output of init container tests\n\nWhen certain status conditions are not expected, we need to see\nthe nested objects, but %#v doesn't handle pointers well. 
Output\nas simple encoded JSON.","shortMessageHtmlLink":"test: Improve debug output of init container tests"}},{"before":"5447621682517f062caa22b93055edcb88a78467","after":"b3c95b659bef27f6ff302d2324077d57ba11aa5c","ref":"refs/heads/minimal_podmanager","pushedAt":"2023-05-12T16:49:29.910Z","pushType":"push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"}},{"before":"cb83e2125a4e38b8e96dbe006254e07b0ed11ebf","after":"5447621682517f062caa22b93055edcb88a78467","ref":"refs/heads/minimal_podmanager","pushedAt":"2023-05-11T21:13:28.231Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"}},{"before":"d7975d903ae7e0b91fdaea3e269bd0816a03a5fc","after":"cb83e2125a4e38b8e96dbe006254e07b0ed11ebf","ref":"refs/heads/minimal_podmanager","pushedAt":"2023-05-08T14:23:08.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"}},{"before":"f37a80561b77fe5fe62bf0bfa5e422257b978e9f","after":"d7975d903ae7e0b91fdaea3e269bd0816a03a5fc","ref":"refs/heads/minimal_podmanager","pushedAt":"2023-04-17T18:04:59.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"}},{"before":null,"after":"453f81d1caeeb39a4904353d76ef04342dcd7d42","ref":"refs/heads/volume_cancel","pushedAt":"2023-04-17T16:55:00.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton 
Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"kubelet: pass context to VolumeManager.WaitFor*\n\nThis allows us to return with a timeout error as soon as the\ncontext is canceled. Previously in cases where the mount will\nnever succeed pods can get stuck deleting for 2 minutes.\n\nIn the Sync*Pod methods that call VolumeManager.WaitFor*, we\nmust filter out wait.Interrupted errors from being logged as\nthey are part of control flow, not runtime problems. Any\nearly interruption should result in exiting the Sync*Pod method\nas quickly as possible without logging intermediate errors.","shortMessageHtmlLink":"kubelet: pass context to VolumeManager.WaitFor*"}},{"before":"7365c489434c97526cb8a2b3404128093db5f6d5","after":"f37a80561b77fe5fe62bf0bfa5e422257b978e9f","ref":"refs/heads/minimal_podmanager","pushedAt":"2023-04-14T19:50:14.000Z","pushType":"push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"}},{"before":null,"after":"7365c489434c97526cb8a2b3404128093db5f6d5","ref":"refs/heads/minimal_podmanager","pushedAt":"2023-04-14T19:44:46.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"}},{"before":null,"after":"861e1935e2acfb1cdfac099a761b4b7defe7b4ee","ref":"refs/heads/automated-cherry-pick-of-#116995-upstream-release-1.27","pushedAt":"2023-04-14T18:30:42.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"kubelet: Ensure pods that have not started track a pendingUpdate\n\nA pod that cannot be started yet 
(due to static pod fullname\nexclusion when UIDs are reused) must be accounted for in the\npod worker since it is considered to have been admitted and will\neventually start.\n\nDue to a bug we accidentally cleared pendingUpdate for pods that\ncannot start yet which means we can't report the right metric to\nusers in kubelet_working_pods and in theory we might fail to start\nthe pod in the future (although we currently have not observed\nthat in tests that should catch such an error). Describe, implement,\nand test the invariant that when startPodSync returns in every path\nthat either activeUpdate OR pendingUpdate is set on the status, but\nnever both, and is only nil when the pod can never start.\n\nThis bug was detected by a \"programmer error\" assertion we added\non metrics that were not being reported, suggesting that we should\nbe more aggressive on using log assertions and automating detection\nin tests.","shortMessageHtmlLink":"kubelet: Ensure pods that have not started track a pendingUpdate"}},{"before":null,"after":"55df2f809f60e1398fb966da8c665c380b606589","ref":"refs/heads/automated-cherry-pick-of-#116482-upstream-release-1.24","pushedAt":"2023-04-12T21:00:12.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"kubelet: Do not mutate pods in the pod manager\n\nThe pod manager is a cache and modifying objects returned from the\npod manager can cause race conditions in the Kubelet. 
In this case,\nit causes static pod status from the mirror pod to leak back to\nthe config source, which means a static pod whose mirror pod is\nset to a terminal phase (succeeded or failed) cannot restart.","shortMessageHtmlLink":"kubelet: Do not mutate pods in the pod manager"}},{"before":null,"after":"f1a2a6c1e553056fe4cafa3182666fdc7518b555","ref":"refs/heads/automated-cherry-pick-of-#116482-upstream-release-1.25","pushedAt":"2023-04-12T20:57:00.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"kubelet: Do not mutate pods in the pod manager\n\nThe pod manager is a cache and modifying objects returned from the\npod manager can cause race conditions in the Kubelet. In this case,\nit causes static pod status from the mirror pod to leak back to\nthe config source, which means a static pod whose mirror pod is\nset to a terminal phase (succeeded or failed) cannot restart.","shortMessageHtmlLink":"kubelet: Do not mutate pods in the pod manager"}},{"before":null,"after":"e28b77e3a9584f9ae36d588acda1ed27b24702e0","ref":"refs/heads/automated-cherry-pick-of-#116482-upstream-release-1.26","pushedAt":"2023-04-12T20:56:29.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"kubelet: Do not mutate pods in the pod manager\n\nThe pod manager is a cache and modifying objects returned from the\npod manager can cause race conditions in the Kubelet. 
In this case,\nit causes static pod status from the mirror pod to leak back to\nthe config source, which means a static pod whose mirror pod is\nset to a terminal phase (succeeded or failed) cannot restart.","shortMessageHtmlLink":"kubelet: Do not mutate pods in the pod manager"}},{"before":null,"after":"d662e339aa506387d3522028585a88fa0acc683b","ref":"refs/heads/automated-cherry-pick-of-#116482-upstream-release-1.27","pushedAt":"2023-04-12T20:55:56.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"kubelet: Do not mutate pods in the pod manager\n\nThe pod manager is a cache and modifying objects returned from the\npod manager can cause race conditions in the Kubelet. In this case,\nit causes static pod status from the mirror pod to leak back to\nthe config source, which means a static pod whose mirror pod is\nset to a terminal phase (succeeded or failed) cannot restart.","shortMessageHtmlLink":"kubelet: Do not mutate pods in the pod manager"}},{"before":null,"after":"ed48dcd2d71d9b25a6aab83c07487b4038cc647e","ref":"refs/heads/pending_update","pushedAt":"2023-03-29T19:47:46.679Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"kubelet: Ensure pods that have not started track a pendingUpdate\n\nA pod that cannot be started yet (due to static pod fullname\nexclusion when UIDs are reused) must be accounted for in the\npod worker since it is considered to have been admitted and will\neventually start.\n\nDue to a bug we accidentally cleared pendingUpdate for pods that\ncannot start yet which means we can't report the right metric to\nusers in kubelet_working_pods and in theory we might fail to start\nthe 
pod in the future (although we currently have not observed\nthat in tests that should catch such an error). Describe, implement,\nand test the invariant that when startPodSync returns in every path\nthat either activeUpdate OR pendingUpdate is set on the status, but\nnever both, and is only nil when the pod can never start.\n\nThis bug was detected by a \"programmer error\" assertion we added\non metrics that were not being reported, suggesting that we should\nbe more aggressive on using log assertions and automating detection\nin tests.","shortMessageHtmlLink":"kubelet: Ensure pods that have not started track a pendingUpdate"}},{"before":null,"after":"91f09458d7df71d1576a19fc8a0739ed520ac4b9","ref":"refs/heads/admission_on_pod_worker","pushedAt":"2023-03-16T23:05:42.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"WIP: Explore how admission should change","shortMessageHtmlLink":"WIP: Explore how admission should change"}},{"before":"d62c08d653fbab7b29b885a3b2c5f004189d21ab","after":"bd252178eb169fc1ace96167f82527c93cc2b77a","ref":"refs/heads/z3","pushedAt":"2023-03-16T22:10:42.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"WIP: reduce the interface pod.Manager consumers accept\n\nEvery component that uses a pod.Manager should use a stub interface\n(like we do for podWorker) that explicitly describes what methods\nthey use. 
This will allow podWorker to implement the minimum set\nof manager interfaces.","shortMessageHtmlLink":"WIP: reduce the interface pod.Manager consumers accept"}},{"before":"366d51e4f9756b7b20d85b167ad0a2b2193b7629","after":"d25572c38927da74a795d20220f37dde1c6673d0","ref":"refs/heads/handle_twice","pushedAt":"2023-03-16T21:18:50.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"kubelet: HandlePodCleanups takes an extra sync to restart pods\n\nHandlePodCleanups is responsible for restarting pods that are no\nlonger running (usually due to delete and recreation with the same\nUID in quick succession). We have to filter the list of pods to\nrestart from podManager to get the list of admitted pods, which\nuses filterOutInactivePods on the kubelet. That method excludes\npods the pod worker has already terminated. Since a restarted\npod will be in the terminated state before HandlePodCleanups\ncalls SyncKnownPods, we have to call filterOutInactivePods after\nSyncKnownPods, otherwise the to-be-restarted pod is ignored and\nwe have to wait for the next housekeeping cycle to restart it.\n\nSince static pods are often critical system components, this\nextra 2s wait is undesirable and we should restart as soon as\nwe can. 
Add a failing test that passes after we move the filter\ncall after SyncKnownPods.","shortMessageHtmlLink":"kubelet: HandlePodCleanups takes an extra sync to restart pods"}},{"before":null,"after":"6d7f71b5db20f13a675c0b3b4bc0e0ceb2895ed7","ref":"refs/heads/force_gather","pushedAt":"2023-03-16T21:07:11.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"DO NOT MERGE: Seeing what output gather-test-metrics reports","shortMessageHtmlLink":"DO NOT MERGE: Seeing what output gather-test-metrics reports"}},{"before":null,"after":"366d51e4f9756b7b20d85b167ad0a2b2193b7629","ref":"refs/heads/handle_twice","pushedAt":"2023-03-16T20:53:51.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"}},{"before":"9d57ba88c06785187930b092ca6b4a857ff75043","after":"c015d6e434cc9b9b8882bda4c779708e8c2b151e","ref":"refs/heads/remove_channel","pushedAt":"2023-03-16T00:44:51.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"kubelet: Remove status manager channel\n\nThe status manager channel forces all container status to be\nprocessed, even if multiple updates are generated in succession.\nInstead of queueing the updates, just remember which ones changed\nand process them in a batch. This should reduce QPS load from\nthe Kubelet for status, reduce latency of status propagation to\nthe API in general, and is easier to reason about.\n\nThis also prevents status from being lost when the channel is\nfull - all updates sent by SetPodStatus are guaranteed to be\nrecorded. 
Changing to remove the channel allows us to set a\nmarker flag when the pod worker state machine completes that\navoids the status manager having to call into the pod worker\ndirectly.","shortMessageHtmlLink":"kubelet: Remove status manager channel"}},{"before":null,"after":"9d57ba88c06785187930b092ca6b4a857ff75043","ref":"refs/heads/remove_channel","pushedAt":"2023-03-14T21:03:06.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"}},{"before":"422025c1cd869201f132829711cc1ef1918b98ba","after":"133dd6157887f26aa91f648ea3103936d67d747b","ref":"refs/heads/context_wait","pushedAt":"2023-03-14T19:14:41.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"wait: Deprecate legacy Poll methods for new context aware methods\n\nThe Poll* methods predate context in Go, and the current implementation\nwill return ErrWaitTimeout even if the context is cancelled, which\nprevents callers who are using Poll* from handling that error directly\n(for instance, if you want to cancel a function in a controlled fashion\nbut still report cleanup errors to logs, you want to know the difference\nbetween 'didn't cancel', 'cancelled cleanly', and 'hit an error').\n\nThis commit adds two new methods that reflect how modern Go uses\ncontext in polling while preserving all Kubernetes-specific behavior:\n\n\tPollUntilContextCancel\n\tPollUntilContextTimeout\n\nThese methods can be used for infinite polling (normal context),\ntimed polling (deadline context), and cancellable poll (cancel context).\nAll other Poll/Wait methods are marked as deprecated for removal in\nthe future. 
The ErrWaitTimeout error will no longer be returned from the\nPoll* methods, but will continue to be returned from ExponentialBackoff*.\nUsers updating to use these new methods are responsible for converting\ntheir error handling as appropriate. A convenience helper\n`Interrupted(err) bool` has been added that should be used instead of\nchecking `err == ErrWaitTimeout`. In a future release ErrWaitTimeout will\nbe made private to prevent incorrect use. The helper can be used with all\npolling methods since context cancellation and deadline are semantically\nequivalent to ErrWaitTimeout. A new `ErrorInterrupted(cause error)` method\nshould be used instead of returning ErrWaitTimeout in custom code.\n\nThe convenience method PollUntilContextTimeout is added because deadline\ncontext creation is verbose and the cancel function must be called to\nproperly cleanup the context - many of the current poll users would see\ncode sizes increase. To reduce the overall method surface area, the\ndistinction between PollImmediate and Poll has been reduced to a single\nboolean on PollUntilContextCancel so we do not need multiple helper methods.\n\nThe existing methods were not altered because ecosystem callers have been\nobserved to use ErrWaitTimeout to mean \"any error that my condition func\ndid not return\" which prevents cancellation errors from being returned\nfrom the existing methods. Callers must make a deliberate migration.\n\nCallers migrating to `PollWithContextCancel` should:\n\n1. Pass a context with a deadline or timeout if they were previously using\n\t`Poll*Until*` and check `err` for `context.DeadlineExceeded` instead of\n\t`ErrWaitTimeout` (more specific) or use `Interrupted(err)` for a generic\n\tcheck.\n2. 
Callers that were waiting forever or for context cancellation should\n\tensure they are checking `context.Canceled` instead of `ErrWaitTimeout`\n\tto detect when the poll was stopped early.\n\nCallers of `ExponentialBackoffWithContext` should use `Interrupted(err)`\ninstead of directly checking `err == ErrWaitTimeout`. No other changes are\nneeded.\n\nCode that returns `ErrWaitTimeout` should instead define a local cause\nand return `wait.ErrorInterrupted(cause)`, which will be recognized by\n`wait.Interrupted()`. If nil is passed the previous message will be used\nbut clients are highly recommended to use typed checks vs message checks.\n\nAs a consequence of this change the new methods are more efficient - Poll\nuses one less goroutine.","shortMessageHtmlLink":"wait: Deprecate legacy Poll methods for new context aware methods"}},{"before":"1f74daccbcd317c9cf779a842e0c71b93414b60f","after":"8bc8f0c4f3b15b889ac34e0d7e6c5afe12fe9367","ref":"refs/heads/deployment_available","pushedAt":"2023-03-13T23:14:31.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"wip comments","shortMessageHtmlLink":"wip comments"}},{"before":null,"after":"71a36529d12986f625c9f31db659f8463d347078","ref":"refs/heads/sync_known_race","pushedAt":"2023-03-13T22:25:03.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"kubelet: TestSyncKnownPods should not race\n\nSyncKnownPods began triggering UpdatePod() for pods that have been\norphaned by desired config to ensure pods run to termination. 
This\ntest reads a mutex protected value while pod workers are running\nin the background and as a consequence triggers a data race.\n\nWait for the workers to stabilize before reading the value. Other\ntests validate that the correct sync events are triggered (see\nkubelet_pods_test.go#TestKubelet_HandlePodCleanups for full\nverification of this behavior).\n\nIt is slightly concerning that I was unable to recreate the race\nlocally even under stress testing, but I cannot identify why.","shortMessageHtmlLink":"kubelet: TestSyncKnownPods should not race"}},{"before":"2ac6c4323f10961a81a67e93d69a33afa6683723","after":"422025c1cd869201f132829711cc1ef1918b98ba","ref":"refs/heads/context_wait","pushedAt":"2023-03-13T18:06:11.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"}},{"before":"cbe5d0e1219dbae9ba8ac00cb4be701085e89b3b","after":"2ac6c4323f10961a81a67e93d69a33afa6683723","ref":"refs/heads/context_wait","pushedAt":"2023-03-10T20:17:58.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"}},{"before":"48400ac08bd3fd1db64d9bc0377b8e4e58bccbf3","after":"cbe5d0e1219dbae9ba8ac00cb4be701085e89b3b","ref":"refs/heads/context_wait","pushedAt":"2023-03-10T19:22:23.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"}},{"before":null,"after":"aadb87bdcdbc57317e23c753b5683bee192bb5a1","ref":"refs/heads/no_mutate","pushedAt":"2023-03-10T19:17:03.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton 
Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"kubelet: Do not mutate pods in the pod manager\n\nThe pod manager is a cache and modifying objects returned from the\npod manager can cause race conditions in the Kubelet. In this case,\nit causes static pod status from the mirror pod to leak back to\nthe config source, which means a static pod whose mirror pod is\nset to a terminal phase (succeeded or failed) cannot restart.","shortMessageHtmlLink":"kubelet: Do not mutate pods in the pod manager"}},{"before":"7c823e10b4b8dd0a2a65cdf6afe23c7dcdaa6b04","after":"a02c55402b4c9fcd901f9dd5ea90bb8306bcfd71","ref":"refs/heads/pr-115331","pushedAt":"2023-03-10T19:15:42.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"},"commit":{"message":"kubelet: Do not mutate pods in the pod manager","shortMessageHtmlLink":"kubelet: Do not mutate pods in the pod manager"}},{"before":"77dce5511194a2d9d931103054de01bff33be180","after":"48400ac08bd3fd1db64d9bc0377b8e4e58bccbf3","ref":"refs/heads/context_wait","pushedAt":"2023-03-10T17:32:12.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"smarterclayton","name":"Clayton Coleman","path":"/smarterclayton","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1163175?s=80&v=4"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAADK8tdfAA","startCursor":null,"endCursor":null}},"title":"Activity · smarterclayton/kubernetes"}