
Scheduler panic in unexpected places and forced termination of Jobs due to Data Race #3324

Open
belo4ya opened this issue Feb 22, 2024 · 1 comment · May be fixed by #3325
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments


belo4ya commented Feb 22, 2024

What happened:

While operating the Volcano Scheduler service, we discovered a bug that leads to a data race on the shared variable var IgnoredDevicesList []string. The race causes an unexpected panic and application termination, which forcibly terminates running user Jobs.

The write paths invoked from the informer handlers guard IgnoredDevicesList with a mutex (setNodeOthersResource in pkg/scheduler/api/node_info.go), but the read paths take no lock, so reads can occur concurrently with modifications (NewResource in pkg/scheduler/api/resource_info.go, GetInqueueResource in pkg/scheduler/plugins/util/util.go).

The data race manifests as panic: runtime error: invalid memory address or nil pointer dereference with a misleading stack trace, leading to a Scheduler restart and forced termination of user Jobs. The stack trace points into the standard library's string comparison (strings.Compare(...), strings/compare.go:24).

What you expected to happen:

No data races occur, and the system behaves deterministically and predictably at all times. An unexpected Scheduler termination does not lead to the termination of all user Jobs.

How to reproduce it:

E2E Reproduction:

1. Build vc-scheduler with the Data Race Detector (-race): CGO_ENABLED=1 go build -race -o ${BIN_DIR}/vc-scheduler ./cmd/scheduler
2. Deploy Volcano v1.8.1 on a Kubernetes cluster with multiple nodes
3. In a cluster with many nodes, the informers receive a high rate of node-update events (e.g., Node Heartbeats). To emulate this, you can continuously update node labels with a script.

Script example
package main

import (
	"context"
	"fmt"
	"math/rand"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	corev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

var nodes = []string{"v26-worker", "v26-worker2", "v26-worker3", "v26-worker4"}
var patchTmpl = `[{"op":"replace","path":"/metadata/labels/rand-int","value":"%d"}]`

func main() {
	ctx := context.Background()
	clientset, _ := corev1.NewForConfig(config.GetConfigOrDie())
	for next := range time.Tick(200 * time.Millisecond) {
		patch := []byte(fmt.Sprintf(patchTmpl, rand.Int()))
		for _, node := range nodes {
			if _, err := clientset.Nodes().Patch(ctx, node, types.JSONPatchType, patch, metav1.PatchOptions{}); err != nil {
				fmt.Printf("[%s] %s - %s\n", next.Format(time.TimeOnly), node, err)
			} else {
				fmt.Printf("[%s] %s - OK\n", next.Format(time.TimeOnly), node)
			}
		}
	}
}

4. Run several instances of user Jobs:

k apply -f ./jobs/volcano/16
job.batch.volcano.sh/completed-job-0 created
job.batch.volcano.sh/completed-job-1 created
...
job.batch.volcano.sh/completed-job-15 created
Job example
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: completed-job-xxx
  namespace: volcano-jobs
spec:
  minAvailable: 4
  schedulerName: volcano
  tasks:
    - name: mpimaster
      replicas: 1
      policies:
        - events: [ "PodFailed", "PodEvicted", "Unknown" ]
          action: "TerminateJob"
        - event: "TaskCompleted"
          action: "CompleteJob"
      template:
        spec:
          restartPolicy: Never
          tolerations:
            - effect: NoSchedule
              operator: Exists
          initContainers:
            - command:
                - sh
                - -c
                - |
                  echo "Prestart (sleep 20)"
                  sleep 20
              image: busybox
              name: prestart
              resources:
                limits:
                  cpu: 50m
                  memory: 64Mi
                requests:
                  cpu: 50m
                  memory: 64Mi
          containers:
            - command: [ "flog" ]
              args: [ "-n", "300", "-d", "1s" ]
              image: mingrammer/flog
              name: mpimaster
              resources:
                limits:
                  cpu: 50m
                  memory: 64Mi
                  nvidia.com/gpu: "0"
                requests:
                  cpu: 50m
                  memory: 64Mi
                  nvidia.com/gpu: "0"
    - name: worker
      replicas: 3
      policies:
        - event: "*"
          action: "TerminateJob"
      template:
        spec:
          restartPolicy: Never
          tolerations:
            - effect: NoSchedule
              operator: Exists
          initContainers:
            - command:
                - sh
                - -c
                - |
                  echo "Prestart (sleep 20)"
                  sleep 20
              image: busybox
              name: prestart
              resources:
                limits:
                  cpu: 50m
                  memory: 64Mi
                requests:
                  cpu: 50m
                  memory: 64Mi
          containers:
            - command:
                - sh
                - -c
                - |
                  echo "I'm worker (sleep 10000)"
                  sleep 10000
              image: busybox
              name: worker
              resources:
                limits:
                  cpu: 50m
                  memory: 64Mi
                  nvidia.com/gpu: "1"
                requests:
                  cpu: 50m
                  memory: 64Mi
                  nvidia.com/gpu: "1"

5. Detect Data Race Detector messages in the Scheduler logs: k logs -f deploy/volcano-scheduler | grep "WARNING: DATA RACE" -B 4 -A 40

Output example
I0216 02:26:15.008392       1 proportion.go:350] job completed-job-6-24e28f9f-9614-4d60-805b-7b480d484a6f inqueue true
==================
WARNING: DATA RACE
Read at 0x00000565d810 by goroutine 276:
  volcano.sh/volcano/pkg/scheduler/api.NewResource()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/resource_info.go:89 +0x40a
  volcano.sh/volcano/pkg/scheduler/api.(*JobInfo).GetMinResources()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/job_info.go:490 +0x1705
  volcano.sh/volcano/pkg/scheduler/plugins/proportion.(*proportionPlugin).OnSessionOpen.func5()
      /go/src/volcano.sh/volcano/pkg/scheduler/plugins/proportion/proportion.go:352 +0x16a1
  volcano.sh/volcano/pkg/scheduler/framework.(*Session).JobEnqueueable()
      /go/src/volcano.sh/volcano/pkg/scheduler/framework/session_plugins.go:401 +0x24a
  volcano.sh/volcano/pkg/scheduler/actions/enqueue.(*Action).Execute()
      /go/src/volcano.sh/volcano/pkg/scheduler/actions/enqueue/enqueue.go:95 +0xddd
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).runOnce()
      /go/src/volcano.sh/volcano/pkg/scheduler/scheduler.go:126 +0x478
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).runOnce-fm()
      <autogenerated>:1 +0x39
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:226 +0x48
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:227 +0xce
  k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:204 +0x10d
  k8s.io/apimachinery/pkg/util/wait.Until()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:161 +0x48
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).Run.func2()
      /go/src/volcano.sh/volcano/pkg/scheduler/scheduler.go:95 +0x58

Previous write at 0x00000565d810 by goroutine 163:
  volcano.sh/volcano/pkg/scheduler/api.(*NodeInfo).setNodeOthersResource()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/node_info.go:348 +0x2e4
  volcano.sh/volcano/pkg/scheduler/api.(*NodeInfo).setNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/node_info.go:355 +0x6f
  volcano.sh/volcano/pkg/scheduler/api.(*NodeInfo).SetNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/node_info.go:330 +0xa4
  volcano.sh/volcano/pkg/scheduler/cache.(*SchedulerCache).updateNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/cache/event_handlers.go:353 +0x164
  volcano.sh/volcano/pkg/scheduler/cache.(*SchedulerCache).UpdateNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/cache/event_handlers.go:417 +0x14b
  volcano.sh/volcano/pkg/scheduler/cache.(*SchedulerCache).UpdateNode-fm()
      <autogenerated>:1 +0x6d
  k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnUpdate()
--
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:226 +0x48
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:227 +0xce
  k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:204 +0x10d
  k8s.io/apimachinery/pkg/util/wait.Until()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:161 +0x78
  k8s.io/client-go/tools/cache.(*processorListener).run()
      /go/pkg/mod/k8s.io/client-go@v0.27.2/tools/cache/shared_informer.go:967 +0x35
  k8s.io/client-go/tools/cache.(*processorListener).run-fm()
      <autogenerated>:1 +0x39
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/wait.go:72 +0x73

Goroutine 276 (running) created at:
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).Run()
      /go/src/volcano.sh/volcano/pkg/scheduler/scheduler.go:95 +0x32e
  volcano.sh/volcano/cmd/scheduler/app.Run.func2()
      /go/src/volcano.sh/volcano/cmd/scheduler/app/server.go:102 +0x4e
  volcano.sh/volcano/cmd/scheduler/app.Run()
      /go/src/volcano.sh/volcano/cmd/scheduler/app/server.go:107 +0xdab
  main.main()
      /go/src/volcano.sh/volcano/cmd/scheduler/main.go:71 +0x4c4

Goroutine 163 (running) created at:
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/wait.go:70 +0xe4
  k8s.io/client-go/tools/cache.(*sharedProcessor).run.func1()
      /go/pkg/mod/k8s.io/client-go@v0.27.2/tools/cache/shared_informer.go:797 +0x1e7
  k8s.io/client-go/tools/cache.(*sharedProcessor).run()
      /go/pkg/mod/k8s.io/client-go@v0.27.2/tools/cache/shared_informer.go:801 +0x51
  k8s.io/client-go/tools/cache.(*sharedProcessor).run-fm()
      <autogenerated>:1 +0x44
  k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/wait.go:55 +0x3e
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/wait.go:72 +0x73
==================
==================
WARNING: DATA RACE
Read at 0x00c0009da0d0 by goroutine 276:
  volcano.sh/volcano/pkg/scheduler/api.NewResource()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/resource_info.go:89 +0x5f9
  volcano.sh/volcano/pkg/scheduler/api.(*JobInfo).GetMinResources()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/job_info.go:490 +0x1705
  volcano.sh/volcano/pkg/scheduler/plugins/proportion.(*proportionPlugin).OnSessionOpen.func5()
      /go/src/volcano.sh/volcano/pkg/scheduler/plugins/proportion/proportion.go:352 +0x16a1
  volcano.sh/volcano/pkg/scheduler/framework.(*Session).JobEnqueueable()
      /go/src/volcano.sh/volcano/pkg/scheduler/framework/session_plugins.go:401 +0x24a
  volcano.sh/volcano/pkg/scheduler/actions/enqueue.(*Action).Execute()
      /go/src/volcano.sh/volcano/pkg/scheduler/actions/enqueue/enqueue.go:95 +0xddd
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).runOnce()
      /go/src/volcano.sh/volcano/pkg/scheduler/scheduler.go:126 +0x478
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).runOnce-fm()
      <autogenerated>:1 +0x39
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:226 +0x48
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:227 +0xce
  k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:204 +0x10d
  k8s.io/apimachinery/pkg/util/wait.Until()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:161 +0x48
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).Run.func2()
      /go/src/volcano.sh/volcano/pkg/scheduler/scheduler.go:95 +0x58

Previous write at 0x00c0009da0d0 by goroutine 163:
  runtime.slicecopy()
      /usr/local/go/src/runtime/slice.go:310 +0x0
  volcano.sh/volcano/pkg/scheduler/api.(*NodeInfo).setNodeOthersResource()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/node_info.go:349 +0x429
  volcano.sh/volcano/pkg/scheduler/api.(*NodeInfo).setNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/node_info.go:355 +0x6f
  volcano.sh/volcano/pkg/scheduler/api.(*NodeInfo).SetNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/node_info.go:330 +0xa4
  volcano.sh/volcano/pkg/scheduler/cache.(*SchedulerCache).updateNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/cache/event_handlers.go:353 +0x164
  volcano.sh/volcano/pkg/scheduler/cache.(*SchedulerCache).UpdateNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/cache/event_handlers.go:417 +0x14b
  volcano.sh/volcano/pkg/scheduler/cache.(*SchedulerCache).UpdateNode-fm()
--
I0216 02:26:16.165353       1 overcommit.go:73] Enter overcommit plugin ...
==================
WARNING: DATA RACE
Read at 0x00000565d810 by goroutine 276:
  volcano.sh/volcano/pkg/scheduler/api.NewResource()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/resource_info.go:89 +0x40a
  volcano.sh/volcano/pkg/scheduler/plugins/overcommit.(*overcommitPlugin).OnSessionOpen()
      /go/src/volcano.sh/volcano/pkg/scheduler/plugins/overcommit/overcommit.go:95 +0x110a
  volcano.sh/volcano/pkg/scheduler/framework.OpenSession()
      /go/src/volcano.sh/volcano/pkg/scheduler/framework/framework.go:45 +0x494
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).runOnce()
      /go/src/volcano.sh/volcano/pkg/scheduler/scheduler.go:118 +0x33e
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).runOnce-fm()
      <autogenerated>:1 +0x39
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:226 +0x48
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:227 +0xce
  k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:204 +0x10d
  k8s.io/apimachinery/pkg/util/wait.Until()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:161 +0x48
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).Run.func2()
      /go/src/volcano.sh/volcano/pkg/scheduler/scheduler.go:95 +0x58

Previous write at 0x00000565d810 by goroutine 163:
  volcano.sh/volcano/pkg/scheduler/api.(*NodeInfo).setNodeOthersResource()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/node_info.go:348 +0x2e4
  volcano.sh/volcano/pkg/scheduler/api.(*NodeInfo).setNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/node_info.go:355 +0x6f
  volcano.sh/volcano/pkg/scheduler/api.(*NodeInfo).SetNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/node_info.go:340 +0x39c
  volcano.sh/volcano/pkg/scheduler/cache.(*SchedulerCache).updateNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/cache/event_handlers.go:353 +0x164
  volcano.sh/volcano/pkg/scheduler/cache.(*SchedulerCache).UpdateNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/cache/event_handlers.go:417 +0x14b
  volcano.sh/volcano/pkg/scheduler/cache.(*SchedulerCache).UpdateNode-fm()
      <autogenerated>:1 +0x6d
  k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnUpdate()
      /go/pkg/mod/k8s.io/client-go@v0.27.2/tools/cache/controller.go:250 +0x8b
  k8s.io/client-go/tools/cache.(*ResourceEventHandlerFuncs).OnUpdate()
      <autogenerated>:1 +0x29
  k8s.io/client-go/tools/cache.FilteringResourceEventHandler.OnUpdate()
--
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:226 +0x48
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:227 +0xce
  k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:204 +0x10d
  k8s.io/apimachinery/pkg/util/wait.Until()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:161 +0x78
  k8s.io/client-go/tools/cache.(*processorListener).run()
      /go/pkg/mod/k8s.io/client-go@v0.27.2/tools/cache/shared_informer.go:967 +0x35
  k8s.io/client-go/tools/cache.(*processorListener).run-fm()
      <autogenerated>:1 +0x39
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/wait.go:72 +0x73

Goroutine 276 (running) created at:
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).Run()
      /go/src/volcano.sh/volcano/pkg/scheduler/scheduler.go:95 +0x32e
  volcano.sh/volcano/cmd/scheduler/app.Run.func2()
      /go/src/volcano.sh/volcano/cmd/scheduler/app/server.go:102 +0x4e
  volcano.sh/volcano/cmd/scheduler/app.Run()
      /go/src/volcano.sh/volcano/cmd/scheduler/app/server.go:107 +0xdab
  main.main()
      /go/src/volcano.sh/volcano/cmd/scheduler/main.go:71 +0x4c4

Goroutine 163 (running) created at:
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/wait.go:70 +0xe4
  k8s.io/client-go/tools/cache.(*sharedProcessor).run.func1()
      /go/pkg/mod/k8s.io/client-go@v0.27.2/tools/cache/shared_informer.go:797 +0x1e7
  k8s.io/client-go/tools/cache.(*sharedProcessor).run()
      /go/pkg/mod/k8s.io/client-go@v0.27.2/tools/cache/shared_informer.go:801 +0x51
  k8s.io/client-go/tools/cache.(*sharedProcessor).run-fm()
      <autogenerated>:1 +0x44
  k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/wait.go:55 +0x3e
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/wait.go:72 +0x73
==================
==================
WARNING: DATA RACE
Read at 0x00c000ae8bd0 by goroutine 276:
  volcano.sh/volcano/pkg/scheduler/api.NewResource()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/resource_info.go:89 +0x5f9
  volcano.sh/volcano/pkg/scheduler/plugins/overcommit.(*overcommitPlugin).OnSessionOpen()
      /go/src/volcano.sh/volcano/pkg/scheduler/plugins/overcommit/overcommit.go:95 +0x110a
  volcano.sh/volcano/pkg/scheduler/framework.OpenSession()
      /go/src/volcano.sh/volcano/pkg/scheduler/framework/framework.go:45 +0x494
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).runOnce()
      /go/src/volcano.sh/volcano/pkg/scheduler/scheduler.go:118 +0x33e
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).runOnce-fm()
      <autogenerated>:1 +0x39
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:226 +0x48
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:227 +0xce
  k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:204 +0x10d
  k8s.io/apimachinery/pkg/util/wait.Until()
      /go/pkg/mod/k8s.io/apimachinery@v0.27.2/pkg/util/wait/backoff.go:161 +0x48
  volcano.sh/volcano/pkg/scheduler.(*Scheduler).Run.func2()
      /go/src/volcano.sh/volcano/pkg/scheduler/scheduler.go:95 +0x58

Previous write at 0x00c000ae8bd0 by goroutine 163:
  runtime.slicecopy()
      /usr/local/go/src/runtime/slice.go:310 +0x0
  volcano.sh/volcano/pkg/scheduler/api.(*NodeInfo).setNodeOthersResource()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/node_info.go:349 +0x429
  volcano.sh/volcano/pkg/scheduler/api.(*NodeInfo).setNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/node_info.go:355 +0x6f
  volcano.sh/volcano/pkg/scheduler/api.(*NodeInfo).SetNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/api/node_info.go:340 +0x39c
  volcano.sh/volcano/pkg/scheduler/cache.(*SchedulerCache).updateNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/cache/event_handlers.go:353 +0x164
  volcano.sh/volcano/pkg/scheduler/cache.(*SchedulerCache).UpdateNode()
      /go/src/volcano.sh/volcano/pkg/scheduler/cache/event_handlers.go:417 +0x14b
  volcano.sh/volcano/pkg/scheduler/cache.(*SchedulerCache).UpdateNode-fm()
      <autogenerated>:1 +0x6d
  k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnUpdate()
      /go/pkg/mod/k8s.io/client-go@v0.27.2/tools/cache/controller.go:250 +0x8b
  k8s.io/client-go/tools/cache.(*ResourceEventHandlerFuncs).OnUpdate()

Additionally: leave the system running in a large cluster with constant rotation of user workloads and, with some luck, after 5-30 days you can observe a Scheduler restart caused by the panic:

I0120 20:57:12.806734       1 panic.go:884] End scheduling ...
E0120 20:57:12.806797       1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 306 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1edfac0?, 0x3665e50})
	/go/src/volcano.sh/volcano/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xfffffffe?})
	/go/src/volcano.sh/volcano/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75
panic({0x1edfac0, 0x3665e50})
	/usr/local/go/src/runtime/panic.go:884 +0x213
strings.Compare(...)
	/usr/local/go/src/strings/compare.go:24
volcano.sh/volcano/pkg/scheduler/api.NewResource(0xc0001f5990?)
	/go/src/volcano.sh/volcano/pkg/scheduler/api/resource_info.go:90 +0x3d4
volcano.sh/volcano/pkg/scheduler/api.(*JobInfo).GetMinResources(...)
	/go/src/volcano.sh/volcano/pkg/scheduler/api/job_info.go:490
volcano.sh/volcano/pkg/scheduler/plugins/proportion.(*proportionPlugin).OnSessionOpen(0xc02889c8a0, 0xc0053ba3c0)
	/go/src/volcano.sh/volcano/pkg/scheduler/plugins/proportion/proportion.go:145 +0xbf4
volcano.sh/volcano/pkg/scheduler/framework.OpenSession({0x24d4618?, 0xc0002ca000?}, {0xc0004a1140, 0x2, 0x2}, {0x0, 0x0, 0x0})
	/go/src/volcano.sh/volcano/pkg/scheduler/framework/framework.go:45 +0x32b
volcano.sh/volcano/pkg/scheduler.(*Scheduler).runOnce(0xc000134c60)
	/go/src/volcano.sh/volcano/pkg/scheduler/scheduler.go:118 +0x265
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001f5f00?)
	/go/src/volcano.sh/volcano/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:226 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x3664740?, {0x24abb60, 0xc01cad32f0}, 0x1, 0xc000254780)
	/go/src/volcano.sh/volcano/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:227 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000788fb0?, 0x3b9aca00, 0x0, 0x0?, 0x442b65?)
	/go/src/volcano.sh/volcano/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:204 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0xa1a02a?, 0xa00000001?, 0xc000761fa0?)
	/go/src/volcano.sh/volcano/vendor/k8s.io/apimachinery/pkg/util/wait/backoff.go:161 +0x25
created by volcano.sh/volcano/pkg/scheduler.(*Scheduler).Run
	/go/src/volcano.sh/volcano/pkg/scheduler/scheduler.go:95 +0x19b
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x402f2c]

Localizing the behavior:

SafeWrite emulates the node-information updates performed in the informer handlers; UnsafeRead emulates the unguarded read of var IgnoredDevicesList []string. Running the code below produces a similar stack trace:

package main

import (
	"fmt"
	"strings"
	"sync"
	"time"
)

func main() {
	for i := 0; i < 8; i++ {
		go SafeWrite()
	}
	go UnsafeRead()
	for next := range time.Tick(1 * time.Second) {
		fmt.Printf("[%s] I'm alive\n", next.Format(time.TimeOnly))
	}
}

var IgnoredDevicesList []string
var mu sync.Mutex

func SafeWrite() {
	for range time.Tick(50 * time.Microsecond) {
		mu.Lock()
		IgnoredDevicesList = []string{}
		IgnoredDevicesList = append(IgnoredDevicesList, []string{"dev0", "dev1", "dev2", "dev3", "dev4", "dev5"}...)
		IgnoredDevicesList = append(IgnoredDevicesList, []string{"dev6", "dev7", "dev8", "dev9", "dev10", "dev11"}...)
		mu.Unlock()
	}
}

func UnsafeRead() {
	left := "left"
	for range time.Tick(50 * time.Microsecond) {
		for _, val := range IgnoredDevicesList {
			_ = strings.Compare(left, val) == 0
		}
	}
}
Output example
[05:56:15] I'm alive
[05:56:16] I'm alive
[05:56:17] I'm alive
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x0 pc=0x1005ce760]

goroutine 14 [running]:
strings.Compare(...)
        /opt/homebrew/opt/go/libexec/src/strings/compare.go:24
main.UnsafeRead()
        ./cmd/main.go:37 +0xc0
created by main.main in goroutine 1
        ./cmd/main.go:14 +0x4c

Process finished with the exit code 2

Anything else we need to know?:

Additionally, when the Scheduler terminates with such a panic and stack trace, we found that all running user Jobs are forcibly terminated (Running -> Terminated). We do not yet understand why this happens, since ordinary restarts do not cause this behavior. Any ideas❓

I have prepared a PR that fixes the bugs: #3325

@belo4ya belo4ya added the kind/bug Categorizes issue or PR as related to a bug. label Feb 22, 2024

belo4ya commented Feb 22, 2024

Additionally, in PR #3325, I replaced the string comparison method with a slightly more efficient one that uses fewer instructions:

strings.Compare(a, b) == 0   --->   a == b

Ironically, using a == b would have initially reduced the number of panics and made it less likely to find the bug.
