
Releases: knative/serving

Knative Serving release v0.18.1

20 Oct 11:36
81cd60d
Pre-release

Meta

Serving v1alpha1 & v1beta1 will EOL in our next release (v0.19)

  • This applies to the resources: Service, Route, Revision, Configuration
  • You will want to migrate your storage version for these resources to v1 using our post-install job

Monitoring Bundle is deprecated

This bundle was deprecated in our 0.14 release and will be removed in our next release

Kubernetes minimum version has changed to v1.17

net-contour has moved to stable status

GitHub Actions FTW

  • faster, varied and easier to support: linting, vetting, coverage, unit tests, e2e tests, etc.

Breaking EnableVarLogCollection behaviour

We always mount an emptyDir volume at /var/log in our user containers. This impacts some popular container images (e.g. nginx), preventing them from starting.

In the next release (v0.19) we plan to change this default behaviour and mount a volume only when EnableVarLogCollection is set to true.

Please reach out in #7881 if you have issues/comments about the approach & timeline.
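
To make the eventual opt-in concrete, here is a minimal sketch of what enabling collection might look like once the default changes, assuming the EnableVarLogCollection setting continues to be exposed through the logging.enable-var-log-collection key of the config-observability ConfigMap (check the _example block of your installed ConfigMap for the exact key):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-observability
      namespace: knative-serving
    data:
      # Assumed key for EnableVarLogCollection; with the planned v0.19 behaviour,
      # the /var/log emptyDir volume would only be mounted when this is "true".
      logging.enable-var-log-collection: "true"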

Removed divisive language from most of the codebase.

Autoscaling

Core API

  • Add support for serviceAccountToken in projected volumes. #9264 (thanks @skonto)
  • Added RuntimeClassName feature flag. #9072 (thanks @ianlewis)
  • Fixes a race where the Route controller would report readiness prematurely. #9325 (thanks @mattmoor)
  • For security reasons, registries that are shipping image metadata on TLS version 1.0 or 1.1 are no longer supported. #9489 (thanks @markusthoemmes)
  • Digest resolution improvements & timeout. #9455, #9354, #9442 (thanks @julz, @mattmoor)
  • Responsive revision garbage collection is on (allowed) by default. #9335 (thanks @whaught)
  • Reduce the cardinality of our webhook metrics to reduce memory usage. knative/pkg#1464 (thanks @tragiclifestories)
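
As a sketch of the projected serviceAccountToken support above (#9264): a Service whose revision template mounts a projected token volume might look roughly like this (the Service name, audience, and image are illustrative, and any feature-flag prerequisites are not shown):

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: token-example            # illustrative name
    spec:
      template:
        spec:
          containers:
            - image: gcr.io/knative-samples/helloworld-go   # illustrative image
              volumeMounts:
                - name: token-vol
                  mountPath: /var/run/secrets/tokens
                  readOnly: true
          volumes:
            - name: token-vol
              projected:
                sources:
                  - serviceAccountToken:
                      path: my-token             # file name inside the mount
                      expirationSeconds: 3600
                      audience: my-audience      # illustrative audience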

Networking

  • tagHeaderBasedRouting flag in config-network is moved to config-features as tag-header-based-routing. (#8856, @igsong)
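
If you previously set tagHeaderBasedRouting in config-network, the equivalent setting now lives in config-features; a sketch (the "enabled" value follows the usual config-features convention, but check the _example block for the authoritative spelling):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-features
      namespace: knative-serving
    data:
      # Previously the tagHeaderBasedRouting flag in config-network.
      tag-header-based-routing: "enabled"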

Knative Serving release v0.17.4

20 Oct 11:25
4ed85f3
Pre-release

Meta

initialScale annotation to control the initial deployment size

There is a new annotation that can be used to control the number of pods that are initially deployed when new Revisions are rolled out.
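
A minimal sketch of using it, with the annotation name given in the Core API notes below (the Service name and image are illustrative):

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: initial-scale-example    # illustrative name
    spec:
      template:
        metadata:
          annotations:
            # Each new Revision starts with 3 pods instead of the default.
            autoscaling.internal.knative.dev/initialScale: "3"
        spec:
          containers:
            - image: gcr.io/knative-samples/helloworld-go   # illustrative image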

net-contour and net-kourier have moved to Beta

In addition to net-istio, we now have three networking layers that we classify as (at least) Beta.

Kubernetes minimum version has NOT changed

It remains 1.16, but we may bump to 1.17 in the coming release depending on its availability for qualification testing (per our release principles).

Autoscaling

Core API

  • Leader Election enabled by default (thanks @mattmoor)
    • By default control plane components now enable leader election, which can be disabled (for now) with --disable-ha.
  • New feature flags are now available - see config-features for details
  • Adopt a two-lane work queue for our controllers to prevent starvation during global re-syncs knative/pkg#1512 (thanks @vagababov)
  • Add config knob "max-value," which allows for setting a cluster-wide value for the max scale of any revision that doesn't have the "autoscaling.knative.dev/maxScale" annotation. #8951 (thanks @arturenault)
  • Add a 60-second timeout for image digest resolution to guard against slow registries. #8724 (thanks @julz)
  • Implemented a new garbage collector that allows either time-based or min/max count bounds for automatic deletion of old revisions. #8621 (thanks @whaught)
    • To enable this, a new v2 Labeler populates RoutingState and RoutingStateModified annotations on Revisions.
  • PodSpec DryRun also validates unparented (service-less) Configurations. #8828 (thanks @whaught)
  • Users can specify the size of the initial deployment with both the cluster-wide flag initial-scale and the annotation "autoscaling.internal.knative.dev/initialScale". The cluster-wide flag allow-zero-initial-scale controls whether the cluster-wide and per-revision initial scale can be zero. #8846 (thanks @taragu)
  • When enabled, the ResponsiveGC feature flag disables lastPinned annotation timestamp refreshes. #8757 (thanks @whaught)
  • Added a workaround so Knative will work on AKS 1.17+ knative/pkg#1592 (thanks @n3wscott)
  • Webhooks now drain for longer when shutting down knative/pkg#1517 (thanks @mattmoor)
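
For the cluster-wide initial-scale settings above (#8846), a sketch of the corresponding keys, assuming they live alongside the other autoscaling settings in the config-autoscaler ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-autoscaler
      namespace: knative-serving
    data:
      # Default initial scale for every new Revision without the annotation.
      initial-scale: "1"
      # Whether the cluster-wide or per-revision initial scale may be zero.
      allow-zero-initial-scale: "true"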

Networking

Knative Serving release v0.18.0

29 Sep 18:10
0a890ef
Pre-release

Meta

Serving v1alpha1 & v1beta1 will EOL in our next release (v0.19)

  • This applies to the resources: Service, Route, Revision, Configuration
  • You will want to migrate your storage version for these resources to v1 using our post-install job

Monitoring Bundle is deprecated

This bundle was deprecated in our 0.14 release and will be removed in our next release

Kubernetes minimum version has changed to v1.17

net-contour has moved to stable status

GitHub Actions FTW

  • faster, varied and easier to support: linting, vetting, coverage, unit tests, e2e tests, etc.

Breaking EnableVarLogCollection behaviour

We always mount an emptyDir volume at /var/log in our user containers. This impacts some popular container images (e.g. nginx), preventing them from starting.

In the next release (v0.19) we plan to change this default behaviour and mount a volume only when EnableVarLogCollection is set to true.

Please reach out in #7881 if you have issues/comments about the approach & timeline.

Removed divisive language from most of the codebase.

Autoscaling

Core API

  • Add support for serviceAccountToken in projected volumes. #9264 (thanks @skonto)
  • Added RuntimeClassName feature flag. #9072 (thanks @ianlewis)
  • Fixes a race where the Route controller would report readiness prematurely. #9325 (thanks @mattmoor)
  • For security reasons, registries that are shipping image metadata on TLS version 1.0 or 1.1 are no longer supported. #9489 (thanks @markusthoemmes)
  • Digest resolution improvements & timeout. #9455, #9354, #9442 (thanks @julz, @mattmoor)
  • Responsive revision garbage collection is on (allowed) by default. #9335 (thanks @whaught)
  • Reduce the cardinality of our webhook metrics to reduce memory usage. knative/pkg#1464 (thanks @tragiclifestories)

Networking

  • tagHeaderBasedRouting flag in config-network is moved to config-features as tag-header-based-routing. (#8856, @igsong)

Knative Serving release v0.17.3

22 Sep 10:57
7b28cee
Pre-release

Meta

initialScale annotation to control the initial deployment size

There is a new annotation that can be used to control the number of pods that are initially deployed when new Revisions are rolled out.

net-contour and net-kourier have moved to Beta

In addition to net-istio, we now have three networking layers that we classify as (at least) Beta.

Kubernetes minimum version has NOT changed

It remains 1.16, but we may bump to 1.17 in the coming release depending on its availability for qualification testing (per our release principles).

Autoscaling

Core API

  • Leader Election enabled by default (thanks @mattmoor)
    • By default control plane components now enable leader election, which can be disabled (for now) with --disable-ha.
  • New feature flags are now available - see config-features for details
  • Adopt a two-lane work queue for our controllers to prevent starvation during global re-syncs knative/pkg#1512 (thanks @vagababov)
  • Add config knob "max-value," which allows for setting a cluster-wide value for the max scale of any revision that doesn't have the "autoscaling.knative.dev/maxScale" annotation. #8951 (thanks @arturenault)
  • Add a 60-second timeout for image digest resolution to guard against slow registries. #8724 (thanks @julz)
  • Implemented a new garbage collector that allows either time-based or min/max count bounds for automatic deletion of old revisions. #8621 (thanks @whaught)
    • To enable this, a new v2 Labeler populates RoutingState and RoutingStateModified annotations on Revisions.
  • PodSpec DryRun also validates unparented (service-less) Configurations. #8828 (thanks @whaught)
  • Users can specify the size of the initial deployment with both the cluster-wide flag initial-scale and the annotation "autoscaling.internal.knative.dev/initialScale". The cluster-wide flag allow-zero-initial-scale controls whether the cluster-wide and per-revision initial scale can be zero. #8846 (thanks @taragu)
  • When enabled, the ResponsiveGC feature flag disables lastPinned annotation timestamp refreshes. #8757 (thanks @whaught)
  • Added a workaround so Knative will work on AKS 1.17+ knative/pkg#1592 (thanks @n3wscott)
  • Webhooks now drain for longer when shutting down knative/pkg#1517 (thanks @mattmoor)

Networking

Knative Serving release v0.17.2

01 Sep 11:01
c868ae8
Pre-release

Meta

initialScale annotation to control the initial deployment size

There is a new annotation that can be used to control the number of pods that are initially deployed when new Revisions are rolled out.

net-contour and net-kourier have moved to Beta

In addition to net-istio, we now have three networking layers that we classify as (at least) Beta.

Kubernetes minimum version has NOT changed

It remains 1.16, but we may bump to 1.17 in the coming release depending on its availability for qualification testing (per our release principles).

Autoscaling

Core API

  • Leader Election enabled by default (thanks @mattmoor)
    • By default control plane components now enable leader election, which can be disabled (for now) with --disable-ha.
  • New feature flags are now available - see config-features for details
  • Adopt a two-lane work queue for our controllers to prevent starvation during global re-syncs knative/pkg#1512 (thanks @vagababov)
  • Add config knob "max-value," which allows for setting a cluster-wide value for the max scale of any revision that doesn't have the "autoscaling.knative.dev/maxScale" annotation. #8951 (thanks @arturenault)
  • Add a 60-second timeout for image digest resolution to guard against slow registries. #8724 (thanks @julz)
  • Implemented a new garbage collector that allows either time-based or min/max count bounds for automatic deletion of old revisions. #8621 (thanks @whaught)
    • To enable this, a new v2 Labeler populates RoutingState and RoutingStateModified annotations on Revisions.
  • PodSpec DryRun also validates unparented (service-less) Configurations. #8828 (thanks @whaught)
  • Users can specify the size of the initial deployment with both the cluster-wide flag initial-scale and the annotation "autoscaling.internal.knative.dev/initialScale". The cluster-wide flag allow-zero-initial-scale controls whether the cluster-wide and per-revision initial scale can be zero. #8846 (thanks @taragu)
  • When enabled, the ResponsiveGC feature flag disables lastPinned annotation timestamp refreshes. #8757 (thanks @whaught)
  • Added a workaround so Knative will work on AKS 1.17+ knative/pkg#1592 (thanks @n3wscott)
  • Webhooks now drain for longer when shutting down knative/pkg#1517 (thanks @mattmoor)

Networking

Knative Serving release v0.15.3

01 Sep 10:56
f95f38b
Pre-release

Meta

go mod migration

Knative has now completely migrated to Go modules.

Serving release artifact deprecations

serving.yaml and serving-cert-manager.yaml will be shipped for the last time in this release. They have been broken out into separate artifacts. Please refer to the current installation docs for guidance on how to install Knative Serving and its optional components.

Minimum supported Kubernetes version bumped to 1.16

As per the Kubernetes minimum version principle, our current minimum supported Kubernetes version is now 1.16.

Autoscaling

Activator Subsetting (thanks @vagababov)

We compute a subset of Activator pods for each revision in a consistent manner, rather than assigning all of them. This noticeably improves load balancing for smaller revisions with low container-concurrency values.

  • Improved pod scraping latency by directly scraping pods if available #7804 (thanks @vagababov)
  • Autoscaling Documentation (thanks @markusthoemmes)
  • Last pod retention period #7931 (thanks @vagababov)
  • Unify Activator and QueueProxy stats reporting libraries and report more precise concurrency values from Activator #7775 (thanks @markusthoemmes)
  • Add a global setting which prohibits setting container concurrency to 0 #7932 (thanks @julz)
  • Progress deadline is now a configurable parameter #7649 (thanks @vagababov)
  • Burst capacity is calculated over the panic window now (thanks @vagababov)
  • General code cleanup, test stabilization, etc. (thanks @julz, @markusthoemmes, @vagababov, @nak3)

Core API

  • Our Revision shape has slightly changed to support multiple containers in the future #7373 (thanks @savitaashture)
    • Revision.Status.ImageDigest is deprecated and the digest will appear in Revision.Status.ContainerStatus.
  • Enable K8s dry-run as an experimental feature to provide faster feedback when your template won't create a valid Pod #3425 (thanks @whaught)
    • This is currently opt-in via the following annotations (subject to change):
      • features.knative.dev/podspec-dryrun: enabled
      • features.knative.dev/podspec-dryrun: strict
    • Strict mode will return failures if dry-run is not supported. This happens when webhooks have side-effects.
  • Webhook infrastructure now supports receiving a callback when a deletion occurs knative/pkg#1219 (thanks @whaught)
  • Some lingering and deprecated v1alpha1 properties have been removed from our go types
  • Reduced some churn reconciling deleted objects when they were tracking dependent resources #7679 (thanks @markusthoemmes)
  • genreconciler now allows developers to override the controller’s name knative/pkg#1137 (thanks @shashwathi @andrew-su)
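
To make the dry-run opt-in above concrete, a minimal sketch of a Service carrying one of the annotations listed (assuming the annotation is placed on the Service's own metadata; the name and image are illustrative):

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: dryrun-example                              # illustrative name
      annotations:
        # Opt this Service into PodSpec dry-run validation; use "strict" to
        # fail instead of skipping when dry-run is not supported.
        features.knative.dev/podspec-dryrun: enabled
    spec:
      template:
        spec:
          containers:
            - image: gcr.io/knative-samples/helloworld-go  # illustrative image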

Networking

  • Remove /var/log symlink logic from the queue proxy #7882 (thanks @dprotaso)
    • /var/log log capture now supports containers that aren't named user-container.
  • Add support for labels in DomainTemplate #7647 (thanks @duglin)
    • This allows users to create custom URLs via the template and to choose custom domains in the config-domain configMap via labels.
  • net-certmanager repository setup and code migration (thanks @ZhiminXiang)
    • Cert-manager related resources for AutoTLS are generated and released from the net-certmanager repository now.
  • KIngress no longer uses retries #7842 (thanks @tcnghia)
  • Operation name for activator's proxy span and queue-proxy's span are renamed to {activator,queue}_proxy #7934 (thanks @nak3)
  • Ingress conformance test for visibility and path #7666 (thanks @andrew-su)
  • Better timeouts for the ingress prober: use the default http.Transport and a context with timeout #7702 (thanks @JRBANCEL)
  • Use "go mod" within net-istio, net-contour, net-certmanager, net-http01 (thanks @andrew-su, @mattmoor, @tcnghia, @ZhiminXiang)
  • Propagate status from KCert to Route #7163 (thanks @nak3)
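
As a sketch of the label-based domain selection mentioned above (#7647), config-domain can map domains to Routes by label selector roughly like this (domains and label values are illustrative):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-domain
      namespace: knative-serving
    data:
      # Routes whose labels match the selector get this domain.
      prod.example.org: |
        selector:
          environment: prod
      # Default domain for everything else.
      example.com: ""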

Knative Serving release v0.17.1

21 Aug 20:07
3372d58
Pre-release

Meta

initialScale annotation to control the initial deployment size

There is a new annotation that can be used to control the number of pods that are initially deployed when new Revisions are rolled out.

net-contour and net-kourier have moved to Beta

In addition to net-istio, we now have three networking layers that we classify as (at least) Beta.

Kubernetes minimum version has NOT changed

It remains 1.16, but we may bump to 1.17 in the coming release depending on its availability for qualification testing (per our release principles).

Autoscaling

Core API

  • Leader Election enabled by default (thanks @mattmoor)
    • By default control plane components now enable leader election, which can be disabled (for now) with --disable-ha.
  • New feature flags are now available - see config-features for details
  • Adopt a two-lane work queue for our controllers to prevent starvation during global re-syncs knative/pkg#1512 (thanks @vagababov)
  • Add config knob "max-value," which allows for setting a cluster-wide value for the max scale of any revision that doesn't have the "autoscaling.knative.dev/maxScale" annotation. #8951 (thanks @arturenault)
  • Add a 60-second timeout for image digest resolution to guard against slow registries. #8724 (thanks @julz)
  • Implemented a new garbage collector that allows either time-based or min/max count bounds for automatic deletion of old revisions. #8621 (thanks @whaught)
    • To enable this, a new v2 Labeler populates RoutingState and RoutingStateModified annotations on Revisions.
  • PodSpec DryRun also validates unparented (service-less) Configurations. #8828 (thanks @whaught)
  • Users can specify the size of the initial deployment with both the cluster-wide flag initial-scale and the annotation "autoscaling.internal.knative.dev/initialScale". The cluster-wide flag allow-zero-initial-scale controls whether the cluster-wide and per-revision initial scale can be zero. #8846 (thanks @taragu)
  • When enabled, the ResponsiveGC feature flag disables lastPinned annotation timestamp refreshes. #8757 (thanks @whaught)
  • Added a workaround so Knative will work on AKS 1.17+ knative/pkg#1592 (thanks @n3wscott)
  • Webhooks now drain for longer when shutting down knative/pkg#1517 (thanks @mattmoor)

Networking

Knative Serving release v0.17.0

18 Aug 19:57
427b2bf
Pre-release

Meta

initialScale annotation to control the initial deployment size

There is a new annotation that can be used to control the number of pods that are initially deployed when new Revisions are rolled out.

net-contour and net-kourier have moved to Beta

In addition to net-istio, we now have three networking layers that we classify as (at least) Beta.

Kubernetes minimum version has NOT changed

It remains 1.16, but we may bump to 1.17 in the coming release depending on its availability for qualification testing (per our release principles).

Autoscaling

Core API

  • Leader Election enabled by default (thanks @mattmoor)
    • By default control plane components now enable leader election, which can be disabled (for now) with --disable-ha.
  • New feature flags are now available - see config-features for details
  • Adopt a two-lane work queue for our controllers to prevent starvation during global re-syncs knative/pkg#1512 (thanks @vagababov)
  • Add config knob "max-value," which allows for setting a cluster-wide value for the max scale of any revision that doesn't have the "autoscaling.knative.dev/maxScale" annotation. #8951 (thanks @arturenault)
  • Add a 60-second timeout for image digest resolution to guard against slow registries. #8724 (thanks @julz)
  • Implemented a new garbage collector that allows either time-based or min/max count bounds for automatic deletion of old revisions. #8621 (thanks @whaught)
    • To enable this, a new v2 Labeler populates RoutingState and RoutingStateModified annotations on Revisions.
  • PodSpec DryRun also validates unparented (service-less) Configurations. #8828 (thanks @whaught)
  • Users can specify the size of the initial deployment with both the cluster-wide flag initial-scale and the annotation "autoscaling.internal.knative.dev/initialScale". The cluster-wide flag allow-zero-initial-scale controls whether the cluster-wide and per-revision initial scale can be zero. #8846 (thanks @taragu)
  • When enabled, the ResponsiveGC feature flag disables lastPinned annotation timestamp refreshes. #8757 (thanks @whaught)
  • Added a workaround so Knative will work on AKS 1.17+ knative/pkg#1592 (thanks @n3wscott)
  • Webhooks now drain for longer when shutting down knative/pkg#1517 (thanks @mattmoor)

Networking

Knative Serving release v0.14.3

18 Aug 11:04
bcda051
Pre-release

Meta

Monitoring Bundle is deprecated

We have made the decision to deprecate the bundled monitoring tools that have remained unchanged since 2018 due to a lack of community interest. We will stop releasing them in a coming release and will instead focus on documenting how to integrate with existing monitoring systems using OpenTelemetry.

V1 is now our storage version

We have included a new migration Job to migrate existing resources. See the serving-storage-version-migration.yaml release artifact.

Several new net-* repos!

  • Our Istio integration has moved out of Serving and into knative/net-istio.
  • Kourier has moved to knative/net-kourier.
  • We have a new knative/net-http01 project for implementing auto-TLS.

We have NOT bumped our minimum Kubernetes dependency (still 1.15)

We were unable to bump our minimum Kubernetes dependency to 1.16 this release as planned due to its lack of availability in GKE (on which we have a hard dependency for CI/CD). The principle behind our choice of minimum upstream version remains the same, and users should expect future releases to attempt to “catch up”.

Autoscaling

  • Disable metric scraping in situations where the activator is always in the request path, for increased efficiency #7431 (thanks @dsimansk)
  • Added a metric for measuring metric scraping overhead #7232 (thanks @rmoe)
  • The “Metric” resource now surfaces potential errors in its status #7525 (thanks @markusthoemmes)
  • Activator tracks revision public service endpoints to assign downstream pods #7208 (thanks @vagababov)
  • Documented the internal autoscaling systems #7126 (thanks @markusthoemmes)
  • Cleanups and improvements (logging, metrics, config map, unit and e2e tests, etc.); many PRs (thanks @julz, @mgencur, @vagababov, @markusthoemmes)

Fixed various bugs

  • Fixed races where a revision briefly scales below minScale only to immediately scale up again #7110, #7214 (thanks @tanzeeb)
  • Fixed a bug where a revision would never become ready if minScale was set > 1 #7514 (thanks @markusthoemmes)
  • Fixed a bug where request counts were reported off by one on scale-from-0 #7109 (thanks @vagababov)
  • Fixed potential panics around timeout handling in the queue-proxy #7138, #7146 (thanks @JRBANCEL)
  • Fixed a rare race condition where the activator would fail to schedule new requests even though there was capacity in the system #7360 (thanks @markusthoemmes)

Core API

V1 is now our storage version #7204, #7499 (thanks @dprotaso)

After installing 0.14, a new migration Job must be run to migrate pre-existing resources, and remove v1alpha1 as a stored version from our CRDs.

Support for resolving AWS ECR images #7244 (thanks @mattmoor)

Fixes a long-standing issue where our tag resolution did not work properly for AWS ECR.

Assorted Cleanups:

  • Leader election config map cleaned up, defaulting implemented, and the example verified to match the default values (thanks @vagababov)

Networking

Introducing knative/net-istio repository (thanks @mattmoor, @nghia, @tshafer):

The Istio KIngress reconciler is now separated into its own repository, knative/net-istio, enabling more focused testing on presubmits. In the future, Istio integration bugs should be filed against this new repository.

Introducing knative/net-http01 repository (thanks @mattmoor):

knative/net-http01 is a simple standalone ACME HTTP01 solver for the Knative Certificate abstraction.

Introducing knative/net-kourier repository (thanks @dortiz, @jmprussi):

A new home for Kourier - a lightweight Envoy-based Knative Ingress reconciler previously hosted at https://github.com/3scale/kourier.

Support Istio canonical service and revision #6832 (thanks @tshafer):

Adding Istio canonical service labels (istio/istio#20943) to Knative objects for better integration with Istio UX.

Use /healthz as the probe path for easier whitelisting #5918 (thanks @itsmurugappa, @shreejad)

We changed our probe path from /_internal/knative/activator/probe to /healthz and made that consistent across all probe receivers in Knative Serving.

Best-effort Istio probing #6962 (thanks @JRBANCEL)

Any scenario where probing would previously fail forever is now treated as successful, allowing fail-open behaviour in cases such as a three-legged OAuth setup that would otherwise cause probing to fail indefinitely.

Generated VirtualService contains wrong gateways field knative-extensions/net-istio#44 (thanks @yanniszark)

Previously, we sometimes referenced unused Gateways in a VirtualService. That caused issues with Istio validation logic if those unused Gateways did not exist. Unused Gateways are no longer referenced from VirtualServices.

Assorted cleanups:

Knative Serving release v0.16.0

07 Jul 17:54
d74ecbe
Pre-release

Meta

Minimum Kubernetes version supported is 1.16

  • The previous release documented a minimum version of 1.16, whereas this release actually raises the hard limit, as we have begun to take advantage of 1.16 features (namely the CRD v1 API).

PodAutoscaler custom metrics API is dropped

  • The autoscaler no longer implements the custom metrics API contract and we also no longer ship the APIService necessary to enable generic metric clients (like the HPA) to fetch those metrics from the autoscaler. Revisions can no longer be scaled using concurrency and/or request-per-second metrics when using the HPA.
  • Multiple shoutouts via mailing lists and the community meeting yielded no usage of that feature.

We no longer release a serving.yaml manifest

  • We stopped documenting this manifest some time ago, and it has long been the concatenation of several of the other manifests.

Post-Install Jobs

  • The new serving-post-install-jobs.yaml is expected to be used with kubectl create; the jobs are idempotent.

Autoscaling

  • Improved load-balancing behavior for revisions that have the activator in their networking path #8226 #8263 (thanks @vagababov)
  • Dropped support for HPA-scaling based on concurrency/RPS metrics #8318 (thanks @markusthoemmes)
  • Stop renewing panic mode if it’s not necessary #8125 (thanks @vagababov)
  • Added validation of autoscaler classes under the “knative.dev” domain #8224 (thanks @yanweiguo)
  • Optimized the protocol between the Activator and the Autoscaler #8266 (thanks @julz)
  • Made autoscaler calculations consistent between CPU architectures #8341 (thanks @mundaym)
  • Keep connections alive during scrapes, if possible #8367 (thanks @julz)
  • Scale non-routable revisions down quicker #8389 (thanks @vagababov)
  • Remove the endpoints informer from the autoscaler, reducing our API server load for watches, memory usage and GC (thanks @vagababov)

Core API

  • Support for multiple containers is now alpha (many PRs, thanks @savitaashture, @skonto)
    • You can now use multiple containers in the pod spec of a Knative Service when you set “multi-container” to “enabled” in our config-features ConfigMap.
  • Support for disabling “service links” #8439, #8498, #8499 (thanks @dprotaso, @mattmoor, @vagababov)
    • This lets through a field of the pod spec that was added in K8s 1.13 to disable a feature of the Kubernetes runtime environment called “service links”. This early service discovery feature injects 8 environment variables into the Pod’s containers for each Kubernetes Service in the same namespace, which leads to serious problems when many services are deployed. We have started to socialize a change to the default runtime behavior here, which would take effect in 0.19: #8563.
  • Support for using the downward API in environment variables #8126 (thanks @JRBANCEL)
    • This lets users use fieldRef in their environment variable spec to project information like namespace into their containers.
    • This feature must be explicitly enabled by setting “kubernetes.podspec-fieldref” to “enabled” in our config-features ConfigMap.
  • We have extended our leader election support to apply to Knative webhooks (many PRs, thanks @mattmoor, @yanweiguo)
  • Ongoing improvements to our generated controller infrastructure (many PRs, thanks @whaught)
    • In reconcilers for “Knative-shaped” resources, the generated controller logic takes on a number of additional “best-practice” responsibilities (now by default!), including management of “observed generation”
  • Reduce the idle queue-proxy CPU usage 10x #8148 (thanks @mattmoor)
    • The default exec probe frequency on minScale revisions led to high CPU usage by the queue proxy. We reduced the frequency from 1s to 10s, cutting this overhead by roughly 10x.
  • Users are now warned if they change the “_example” block in ConfigMaps seemingly by accident #8123 (thanks @markusthoemmes)
  • Ingress conformance tests have been moved out of test files so they can be consumed downstream #8150 (thanks @dprotaso)
  • Operators can now set queue proxy resource requests/limits in the config-deployment.yaml config map #8195 (thanks @julz)
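
Putting the multi-container and kubernetes.podspec-fieldref flags above together, enabling them is roughly a matter of setting the keys named in these notes in the config-features ConfigMap (a sketch; consult the _example block for the authoritative spelling):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-features
      namespace: knative-serving
    data:
      # Allow more than one container in a Knative Service's pod spec.
      multi-container: "enabled"
      # Allow fieldRef (downward API) in environment variable specs.
      kubernetes.podspec-fieldref: "enabled"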

Networking

  • Fix Unknown cert status issue when cluster-local visibility is set #8043 (thanks @nak3)
  • Support tag header based routing (thanks @igsong, @tanzeeb, @tcnghia, @ZhiminXiang)
  • Split networking related resources from knative/serving repo into knative/networking repo (thanks @tcnghia)
  • Reduce the high CPU usage of idle queue-proxy #8147 #8149 (thanks @mattmoor, @vagababov)
  • Increase the QPS limit of networking probing #8054 (thanks @JRBANCEL)
  • Fix an issue where the namespace-level auto TLS feature did not work with web browsers because of HTTP connection reuse #7495 (thanks @ZhiminXiang)
  • Drop istio-injection=enabled label in knative-serving namespace from serving-core.yaml. #8482 (thanks @nak3)
  • Add documentation on how to use Istio Authorization with Knative (thanks @nak3)