
storage capacity: patch vs. update of CSIStorageCapacity #467

Open
pohly opened this issue Aug 17, 2020 · 3 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@pohly
Contributor

pohly commented Aug 17, 2020

capacity.go currently uses Update to change the capacity value. It might be more efficient to use Patch:

  • write a benchmark
  • compare both approaches
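
A minimal sketch of what such a benchmark could look like against the fake clientset; the file layout, names, and setup below are hypothetical, not the repository's actual benchmark code:

package capacity_test

import (
	"context"
	"fmt"
	"testing"

	storagev1beta1 "k8s.io/api/storage/v1beta1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	k8stypes "k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes/fake"
)

// newFakeClient seeds a fake clientset with one CSIStorageCapacity object.
func newFakeClient() *fake.Clientset {
	return fake.NewSimpleClientset(&storagev1beta1.CSIStorageCapacity{
		ObjectMeta:       metav1.ObjectMeta{Name: "csisc-test", Namespace: "default"},
		StorageClassName: "fast",
		Capacity:         resource.NewQuantity(1000, resource.BinarySI),
	})
}

// BenchmarkCapacityUpdateFull measures the read-modify-write path:
// the whole object is sent back to the server.
func BenchmarkCapacityUpdateFull(b *testing.B) {
	client, ctx := newFakeClient(), context.Background()
	for i := 0; i < b.N; i++ {
		capacity, err := client.StorageV1beta1().CSIStorageCapacities("default").Get(ctx, "csisc-test", metav1.GetOptions{})
		if err != nil {
			b.Fatal(err)
		}
		capacity.Capacity = resource.NewQuantity(int64(i+1), resource.BinarySI)
		if _, err := client.StorageV1beta1().CSIStorageCapacities("default").Update(ctx, capacity, metav1.UpdateOptions{}); err != nil {
			b.Fatal(err)
		}
	}
}

// BenchmarkCapacityUpdatePatch measures the partial-write path:
// a JSON patch that only replaces /capacity.
func BenchmarkCapacityUpdatePatch(b *testing.B) {
	client, ctx := newFakeClient(), context.Background()
	for i := 0; i < b.N; i++ {
		patch := []byte(fmt.Sprintf(`[{"op":"replace","path":"/capacity","value":"%d"}]`, i+1))
		if _, err := client.StorageV1beta1().CSIStorageCapacities("default").Patch(ctx, "csisc-test", k8stypes.JSONPatchType, patch, metav1.PatchOptions{}); err != nil {
			b.Fatal(err)
		}
	}
}

Note that against the fake clientset this mostly measures in-process (de)serialization and patch application; round-trip and apiserver effects only show up against a real cluster.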
@pohly pohly mentioned this issue Aug 17, 2020
@pohly pohly added this to To do in Storage Capacity GA Sep 25, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 15, 2020
@pohly
Contributor Author

pohly commented Nov 16, 2020

/remove-lifecycle stale
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 16, 2020
@pohly
Contributor Author

pohly commented Feb 18, 2022

The benchmark is in PR #706.

Here's a preliminary patch for using patching (sic!):

diff --git a/pkg/capacity/capacity.go b/pkg/capacity/capacity.go
index 815b00679..8516164dd 100644
--- a/pkg/capacity/capacity.go
+++ b/pkg/capacity/capacity.go
@@ -25,6 +25,7 @@ import (
 	"sort"
+	"strings"
 	"sync"
 	"time"
 
 	"github.com/container-storage-interface/spec/lib/go/csi"
 	"github.com/kubernetes-csi/external-provisioner/pkg/capacity/topology"
@@ -45,6 +46,7 @@ import (
 	"k8s.io/client-go/util/workqueue"
 	"k8s.io/component-base/metrics"
 	"k8s.io/klog/v2"
+	k8stypes "k8s.io/apimachinery/pkg/types"
 )
 
 const (
@@ -648,15 +650,24 @@ func (c *Controller) syncCapacity(ctx context.Context, item workItem) error {
 		return nil
 	} else {
 		// Update existing object. Must not modify object in the informer cache.
-		capacity := capacity.DeepCopy()
-		capacity.Capacity = quantity
-		capacity.MaximumVolumeSize = maximumVolumeSize
-		if c.owner != nil && !c.isOwnedByUs(capacity) {
-			capacity.OwnerReferences = append(capacity.OwnerReferences, *c.owner)
-		}
+		// capacity := capacity.DeepCopy()
+		// capacity.Capacity = quantity
+		// capacity.MaximumVolumeSize = maximumVolumeSize
+		// if c.owner != nil && !c.isOwnedByUs(capacity) {
+		// 	capacity.OwnerReferences = append(capacity.OwnerReferences, *c.owner)
+		// }
 		var err error
 		klog.V(5).Infof("Capacity Controller: updating %s for %+v, new capacity %v, new maximumVolumeSize %v", capacity.Name, item, quantity, maximumVolumeSize)
-		capacity, err = c.client.StorageV1beta1().CSIStorageCapacities(capacity.Namespace).Update(ctx, capacity, metav1.UpdateOptions{})
+		ops := []string{
+			fmt.Sprintf(`{"op":"replace","path":"/capacity","value":"%d"}`, quantity.Value()),
+		}
+		if maximumVolumeSize != nil {
+			ops = append(ops, fmt.Sprintf(`{"op":"replace","path":"/maximumVolumeSize","value":"%d"}`, maximumVolumeSize.Value()))
+		} else if capacity.MaximumVolumeSize != nil {
+			ops = append(ops, `{"op":"remove","path":"/maximumVolumeSize"}`)
+		}
+		patch := "[" + strings.Join(ops, ",") + "]"
+		capacity, err = c.client.StorageV1beta1().CSIStorageCapacities(capacity.Namespace).Patch(ctx, capacity.Name, k8stypes.JSONPatchType, []byte(patch), metav1.PatchOptions{})
 		if err != nil {
 			return fmt.Errorf("update CSIStorageCapacity for %+v: %v", item, err)
 		}

It's not complete because it doesn't handle adding the ownerref, but that is irrelevant for the benchmark.
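
For illustration, if the new quantity is 100Gi and the existing object carries a maximumVolumeSize that is no longer reported, the code above produces this JSON patch (values hypothetical):

[{"op":"replace","path":"/capacity","value":"107374182400"},{"op":"remove","path":"/maximumVolumeSize"}]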

Comparing the baseline against the modified version shows no improvement:

$ $GOPATH/bin/benchstat /tmp/log /tmp/log-patch 
name               old time/op  new time/op  delta
CapacityUpdate-36  3.93ms ± 1%  3.92ms ± 1%   ~     (p=0.548 n=5+5)
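
For reference, benchstat compares repeated runs of the same benchmark; inputs like the two logs above can be produced along these lines (package path and branch handling assumed):

$ go test ./pkg/capacity -run='^$' -bench=CapacityUpdate -count=5 | tee /tmp/log
$ # check out the patch-based variant, then:
$ go test ./pkg/capacity -run='^$' -bench=CapacityUpdate -count=5 | tee /tmp/log-patch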

I've not tried to measure the load on the apiserver. Processing a patch must be more work, so I would expect that load to be higher.

My conclusion is that the time per update is dominated by round-trip time and request-processing overhead. Making the payload of a single request smaller doesn't help here, probably because a single CSIStorageCapacity object isn't large to begin with.

I therefore suggest closing this issue by merging the benchmark PR, without changing the implementation.
