What version of Knative?

Kubernetes v1.29, Knative v1.13

Expected Behavior

When I send five requests to an existing Knative service that has seven running pods, five pods should be retained when scale-down is triggered.

Actual Behavior

Only four pods are retained. Sometimes I even see four of the seven pods deleted and one pod created immediately, again ending up at four pods.

Steps to Reproduce the Problem

Run this script:

```bash
#!/bin/bash
set -ex

echo "Create the app"
cat > /tmp/service <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: delete
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "1"
        autoscaling.knative.dev/target-utilization-percentage: "100"
        autoscaling.knative.dev/target-burst-capacity: "1"
        autoscaling.knative.dev/metric: "concurrency"
    spec:
      timeoutSeconds: 180
      containers:
        - image: docker.io/hisy/delete:latest
          imagePullPolicy: IfNotPresent
      terminationGracePeriodSeconds: 300
EOF
kn service apply -f /tmp/service
sleep 5

export APP=$(kubectl get service.serving.knative.dev/delete | grep http | awk '{print $2}')

echo "Wait for pods to be terminated"
while [ $(kubectl get pods 2>/dev/null | wc -l) -ne 0 ]; do sleep 5; done

echo "Hit the autoscaler with a burst of requests"
for i in $(seq 7); do curl -s "$APP?wait=10" 1>/dev/null & done

echo "Wait for the autoscaler to kick in and the bursty requests to finish"
sleep 30

echo "Send longer requests"
for i in $(seq 5); do curl "$APP?wait=120" & sleep 1; done
```

You can see seven pods created first; then only four pods are retained.
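For reference, the five-pod expectation follows from the concurrency-based sizing rule, desired pods = ceil(concurrency / (target × utilization)). The formula here is my reading of the Knative autoscaler documentation for the annotations used above, not something stated in this report; a quick integer-arithmetic sanity check:

```shell
#!/bin/sh
# Sanity-check the expected scale for this repro: five long-running requests
# against target=1 at 100% utilization should map to five pods.
concurrency=5      # long requests in flight (the "?wait=120" curls)
target=1           # autoscaling.knative.dev/target
utilization=100    # target-utilization-percentage
# ceil(a/b) computed as (a + b - 1) / b, with everything scaled by 100:
desired=$(( (concurrency * 100 + target * utilization - 1) / (target * utilization) ))
echo "desired pods: $desired"   # prints: desired pods: 5
```

Since the observed steady state is four pods, the scale-down appears to undercount the in-flight requests by one.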