Releases: karmada-io/karmada

karmada v0.10.0 release

20 Nov 15:28
7a34c2f

What's New

Resource Interpreter Webhook

The newly introduced Resource Interpreter Webhook framework allows users to implement their own CRD plugins, which are consulted at each step of the propagation process. With this feature, CRDs and CRs are propagated just like Kubernetes native resources, which means all scheduling primitives also support custom resources. An example as well as some helpful utilities are provided to help users better understand how this framework works.

Refer to the proposal for more details.

(Feature contributor: @RainbowMango, @XiShanYongYe-Chang, @gy95)

Significant Scheduling Enhancement

  1. Introduced the dynamicWeight primitive to PropagationPolicy and ClusterPropagationPolicy. With this feature, replicas can be divided according to a dynamic weight list, where the weight of each cluster is calculated from its available replicas during scheduling.
    This feature can significantly balance utilization across clusters. (#841)

  2. Introduced Job schedule (divide) support. A Job that requests many replicas can now be divided across multiple clusters, just like a Deployment.
    This feature makes it possible to run huge Jobs across small clusters. (#898)

(Feature contributor: @Garrybest )

Workloads Observation from Karmada Control Plane

After workloads (e.g. Deployments) are propagated to member clusters, users may also want to get the overall workload status across clusters, especially the status of each pod. In this release, a get subcommand was introduced to kubectl-karmada. With this command, users are now able to get all kinds of resources deployed in member clusters from the Karmada control plane.

For example (get deployment and pods across clusters):

$ kubectl karmada get deployment
NAME    CLUSTER   READY   UP-TO-DATE   AVAILABLE   AGE   ADOPTION
nginx   member2   1/1     1            1           19m   Y
nginx   member1   1/1     1            1           19m   Y
$ kubectl karmada get pods
NAME                     CLUSTER   READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-vzdvt   member1   1/1     Running   0          31m
nginx-6799fc88d8-l55kk   member2   1/1     Running   0          31m

(Feature contributor: @lfbear @QAQ-rookie)

Other Notable Changes

  • karmada-scheduler-estimator: The number of pods becomes an important reference when calculating available replicas for the cluster. (@Garrybest, #777)
  • The labels (resourcebinding.karmada.io/namespace, resourcebinding.karmada.io/name, clusterresourcebinding.karmada.io/name) which were previously added on the Work object now have been moved to annotations. (@XiShanYongYe-Chang, #752)
  • Bugfix: Fixed the impact of cluster unjoining on resource status aggregation. (@dddddai, #817)
  • Instrumentation: Introduced events (SyncFailed and SyncSucceed) to the Work object. (@wawa0210, #800)
  • Instrumentation: Introduced condition (Scheduled) to the ResourceBinding and ClusterResourceBinding. (@dddddai, #823)
  • Instrumentation: Introduced events (CreateExecutionNamespaceFailed and RemoveExecutionNamespaceFailed) to the Cluster object. (@pigletfly, #749)
  • Instrumentation: Introduced several metrics (workqueue_adds_total, workqueue_depth, workqueue_longest_running_processor_seconds, workqueue_queue_duration_seconds_bucket) for karmada-agent and karmada-controller-manager. (@Garrybest, #831)
  • Instrumentation: Introduced condition (FullyApplied) to the ResourceBinding and ClusterResourceBinding. (@lonelyCZ, #825)
  • karmada-scheduler: Introduced feature gates. (@iawia002, #805)
  • karmada-controller-manager: Deleted resources from member clusters that use "Background" as the default delete option. (@RainbowMango, #970)

Contributors

Thank you to everyone who contributed to this release!

Users whose commits are in this release (alphabetically by user name)

karmada v0.9.0 release

30 Sep 15:31
721215c

What's New

Upgrading support
Users are now able to upgrade from the previous version smoothly. With the multiple-version feature of CRDs, objects with different schemas can be automatically converted between versions. Karmada follows semantic versioning and will provide workarounds for inevitable breaking changes.

In this release, ResourceBinding and ClusterResourceBinding are promoted to v1alpha2, and the previous v1alpha1 version remains available for one more release. With the upgrade instructions, previous versions of Karmada can be upgraded smoothly.

(Feature contributor: @RainbowMango )

Introduced karmada-scheduler-estimator to facilitate end-to-end scheduling accuracy
The Karmada scheduler aims to assign workloads to clusters according to constraints and the available resources of each member cluster. The kube-scheduler running in each cluster then takes responsibility for assigning Pods to Nodes.
Although Karmada has the capacity to reschedule failed workloads between member clusters, the community still commits a lot of effort to improving the accuracy of end-to-end scheduling.

The karmada-scheduler-estimator is an effective assistant to the karmada-scheduler: it provides prediction-based scheduling decisions that can significantly improve scheduling efficiency and avoid waves of rescheduling among clusters. Note that this feature is implemented as a pluggable add-on. For instructions, please refer to the scheduler estimator guideline.
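
To give a feel for what such an estimator computes, the sketch below bounds a cluster's capacity for a workload by every requested resource dimension and by the remaining pod slots. This is an illustrative Python sketch with made-up numbers, not the estimator's real per-node, gRPC-based implementation:

```python
def estimate_available_replicas(request, allocatable, free_pod_slots):
    """Estimate how many replicas of a workload a cluster can accept:
    the minimum fit over every requested resource, capped by free pod slots."""
    fit = min(allocatable[r] // amount for r, amount in request.items())
    return min(fit, free_pod_slots)

# Hypothetical numbers: each replica requests 500 millicores CPU and 512Mi memory.
request = {"cpu": 500, "memory": 512}
allocatable = {"cpu": 4000, "memory": 8192}
print(estimate_available_replicas(request, allocatable, 5))    # → 5 (pod slots are the bottleneck)
print(estimate_available_replicas(request, allocatable, 110))  # → 8 (CPU is the bottleneck)
```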

(Feature contributor: @Garrybest )

Maintainability improvements
A bunch of significant maintainability improvements were added to this release, including:

  • Simplified Karmada installation with helm chart.
    (Feature contributor: @algebra2k @jrkeen )

  • Provided metrics to observe scheduler status; metrics are now served at /metrics of karmada-scheduler.
    With these metrics, users are now able to evaluate the scheduler's performance and identify the bottlenecks.
    (Feature contributor: @qianjun1993 )

  • Provided events to Karmada API objects as supplemental information to debug problems.
    (Feature contributor: @pigletfly )

Other Notable Changes

  • karmada-controller-manager: The ResourceBinding/ClusterResourceBinding won't be deleted after the associated PropagationPolicy/ClusterPropagationPolicy is removed; it remains available until the resource template is removed. (@qianjun1993, #601)
  • Introduced --leader-elect-resource-namespace, which specifies the namespace of the election object, to the karmada-controller-manager/karmada-scheduler/karmada-agent components. (@XiShanYongYe-Chang #698)
  • Deprecation: The API ReplicaSchedulingPolicy has been deprecated and will be removed in a following release. The feature has now been integrated into ReplicaScheduling.
  • Introduced kubectl-karmada commands as extensions for kubectl. (@XiShanYongYe-Chang #686)
  • karmada-controller-manager introduced a version command to print version information. (@RainbowMango #717)
  • karmada-scheduler/karmada-webhook/karmada-agent/karmada-scheduler-estimator introduced a version command to print version information. (@lonelyCZ #719)
  • Provided instructions about how to use Submariner to connect the networks between member clusters. (@XiShanYongYe-Chang #737)
  • Added four metrics to the karmada-scheduler to monitor scheduler performance. (@qianjun1993 #747)

Contributors

Thank you to everyone who contributed to this release!

Users whose commits are in this release (alphabetically by user name)

karmada v0.8.0 release

20 Aug 10:09
c37bedc

What's New

Automatic cluster discovery with cluster-api
For users who are using cluster-api (sigs.k8s.io/cluster-api), Karmada is now able to automatically discover and join clusters when they are provisioned, and unjoin them when they are destroyed.

Note that this feature is implemented as a built-in plugin. To enable it, simply set the following two flags in the karmada-controller-manager config:

--cluster-api-kubeconfig string        Path to the cluster-api management cluster kubeconfig file.
--cluster-api-context string           Name of the cluster context in cluster-api management cluster kubeconfig file.

(Feature contributor: @XiShanYongYe-Chang )

Introduced CommandOverrider and ArgsOverrider to simplify commands customization per cluster
For multi-cluster applications, it's quite common to set different arguments when running on different clusters or environments.
In this release, two overrider plugins, CommandOverrider and ArgsOverrider, are introduced based on industry best practices. These two handy tools allow users to express complex customizations and avoid configuration mistakes.

The workload types supported now are Deployment, ReplicaSet, DaemonSet, StatefulSet, and Pod; more types, including CRDs, will be supported in later releases.

(Feature contributor: @lfbear @betaincao )

Better integration support with Kubernetes ecosystem
Karmada's support for Kubernetes native APIs and its patterns for running cloud-native applications make it quite easy to integrate with other projects in the Kubernetes ecosystem.

This release adds several useful features that help Karmada work seamlessly with other systems:

  • ResourceBinding and ClusterResourceBinding now support presenting the applied status. (@pigletfly #595)
  • More types of resources now support aggregating status to the resource template, including Job, Service, and Ingress. (@mrlihanbo #609)
  • argo-cd is also verified to run fully featured with Karmada to achieve multi-cluster GitOps.

Other Notable Changes

  • karmadactl: introduced cordon and uncordon commands to mark a cluster schedulable or unschedulable. (#464, @algebra2k)
  • karmada-controller-manager: introduced --skipped-propagating-namespaces flag to skip resources in certain namespaces from propagating. (#533, @pigletfly )
  • karmada-controller-manager/karmada-agent/karmada-scheduler: Introduced flags to configure the QPS and burst used to control client traffic to Karmada or a member cluster's kube-apiserver. (#611, @Garrybest)
    • --cluster-api-qps QPS to use while talking with cluster kube-apiserver.
    • --cluster-api-burst Burst to use while talking with cluster kube-apiserver.
    • --kube-api-qps QPS to use while talking with karmada-apiserver.
    • --kube-api-burst Burst to use while talking with karmada-apiserver.
  • Karmada quick-start scripts now support running on macOS. (#538, @lfbear)

Contributors

Thank you to everyone who contributed to this release!

Users whose commits are in this release (alphabetically by user name)

karmada v0.7.0 release

12 Jul 13:55
87afdbb

What's New

Support multi-cluster service discovery
In many cases, a Kubernetes user may want to split their deployments across multiple clusters, but still retain mutual dependencies between workloads running in those clusters.

Users are now able to export and import services between clusters with Multi-Cluster Service API (MCS-API). (@XiShanYongYe-Chang)

Support more precise cluster status management
Besides reporting cluster status, the cluster status controller now also renews a lease. The newly introduced cluster monitor watches the lease and will mark the cluster's ready status as Unknown if the cluster status controller stops working. (@Garrybest)

Support replica scheduling based on cluster resources
In some scenarios, users want to divide the replicas of a deployment across multiple clusters when a single cluster doesn't have sufficient resources.
Users are now able to declare the replica scheduling preference via the new ReplicaDivisionPreference field in PropagationPolicy and ClusterPropagationPolicy. (@qianjun1993)

Support more convenient APIs to divide replicas by weight list
Users are now able to declare cluster weights via ReplicaDivisionPreference in PropagationPolicy and ClusterPropagationPolicy; with the Weighted preference, the scheduler will divide replicas according to the WeightPreference. (@qianjun1993)

This feature is designed to replace the standalone ReplicaSchedulingPolicy API in the future.
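
The Weighted preference can be pictured with a small sketch: replicas are split in proportion to a static, user-declared weight list. This is illustrative Python, not Karmada's actual implementation; in particular, handing leftover replicas to the heaviest clusters is an assumption:

```python
def divide_by_weight(replicas, weights):
    """Split replicas according to a static weight list (Weighted preference)."""
    total = sum(weights.values())
    shares = {c: replicas * w // total for c, w in weights.items()}
    # Leftover replicas from integer rounding go to the heaviest clusters.
    for c in sorted(weights, key=weights.get, reverse=True):
        if sum(shares.values()) == replicas:
            break
        shares[c] += 1
    return shares

print(divide_by_weight(6, {"member1": 2, "member2": 1}))
# → {'member1': 4, 'member2': 2}
```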

Other Notable Changes

  • karmada-agent: Introduced --karmada-context flag to indicate the cluster context in karmada kubeconfig file. (#415, @mrlihanbo)
  • karmada-agent and karmada-controller-manager: Introduced --cluster-lease-duration and --cluster-lease-renew-interval-fraction flags to specify the lease expiration period and renew interval fraction. (#421, @pigletfly)
  • karmada-scheduler: Added a filter plugin to prevent the cluster from scheduling if the required API is not installed. (#470, @vincent-pli)
  • karmada-controller-manager: Introduced --skipped-propagating-apis flag to skip the resources from propagating. (#345, @pigletfly)
  • Installation: The hack/deploy-karmada.sh and hack/deploy-karmada-agent.sh scripts now support installing Karmada components on both Kind clusters and standalone clusters. (#458, @lfbear)
  • For resources that already exist in member clusters, Karmada will refuse to propagate to and adopt them by default, in order to avoid conflicts. (#471, @mrlihanbo)

Contributors

Thank you to everyone who contributed to this release!

Users whose commits are in this release (alphabetically by user name)

karmada v0.6.0 release

29 May 15:09
bc7cfd8

What's New

Support syncing with member cluster behind proxy
In some scenarios, certain clusters cannot be reached directly over the Internet, for example:

  • The member clusters are behind a NAT gateway from the Karmada control plane
  • The member clusters are in an on-prem Intranet while Karmada runs in the cloud

By setting proxy-url in the kubeconfig when registering member clusters, Karmada will talk to member clusters through the indicated proxy. (#307, @liufen90)

Introduced ImageOverrider for simplifying image replacement
In scenarios where clusters run in different clouds or data centers, workloads often require different image registries. ImageOverrider is a handy tool to override the images of a workload before it is propagated to clusters. (#370, @XiShanYongYe-Chang)

Support scheduling based on cluster taint toleration
The karmada-scheduler now takes into account taints on member clusters and tolerations defined in PropagationPolicy and ClusterPropagationPolicy
when scheduling resources. (#320, @mrlihanbo)

Support scheduling based on cluster topology
The karmada-scheduler now supports scheduling resources according to the topology information (cluster/provider/region/zone)
defined in cluster objects. (#357, @mrlihanbo)

Other Notable Changes

  • Installation: introduced hack/remote-up-karmada.sh to install Karmada on a specified Kubernetes cluster as the host. (#367, @lfbear)
  • karmadactl: introduced the version command to show the version it was built from. Try it with: karmadactl version. (#285, @algebra2k)
  • API: added short names for most APIs. (#376, @pigletfly)
  • The resource templates now match PropagationPolicy or ClusterPropagationPolicy in alphabetical order
    when there are multiple policies that match. (#306, @XiShanYongYe-Chang)
  • ResourceBinding objects are now always generated for namespace-scoped resource templates. (#315, @vincent-pli)
  • karmada-controller-manager: introduced the leader-elect command line flag to enable or disable leadership election. (#321, @pigletfly)
  • The Work object names now consist of the resource template's name, kind, and namespace. (#359, @Garrybest)

Contributors

Thank you to everyone who contributed to this release!

Users whose commits are in this release (alphabetically by user name)

karmada v0.5.0 release

20 Apr 14:54
29c307c

What's New

Support resource status aggregation from Karmada
Users are now able to query the aggregated status of resources (propagated by Karmada) from the Karmada API server, with no need to connect to each member cluster.
The status of all resources in member clusters is aggregated to their binding objects.
In addition, if the resource type is Deployment, the Deployment status will also be reflected.

karmada-agent to support pull-based synchronization between the control plane and member clusters
karmada-agent is introduced in this release to support cases where member clusters are not directly reachable from the Karmada control plane.
The agent pulls all relevant configurations from the Karmada control plane and applies them to the member clusters it serves.
The karmada-agent also completes cluster registration automatically.

ReplicaSchedulingPolicy API to customize replica scheduling constraints of Deployments
Users are now able to customize the replica scheduling constraints of Deployments with the ReplicaSchedulingPolicy API.
The replicas will be divided among member clusters according to the weight list indicated by the policy.

Other Notable Changes

  • The labels karmada.io/override and karmada.io/cluster-override have been deprecated and replaced by policy.karmada.io/applied-overrides and policy.karmada.io/applied-cluster-overrides to indicate applied override rules.
  • The ResourceBinding and ClusterResourceBinding names now consist of resource kind and resource name.
  • Both PropagationPolicy and ClusterPropagationPolicy names are now restricted to no more than 63 characters.
  • OverridePolicy and ClusterOverridePolicy changes will take effect immediately now.
  • Users are now able to use the new --cluster-status-update-frequency flag when configuring karmada-agent and karmada-controller-manager, to specify the cluster status update frequency.

Contributors

Thank you to everyone who contributed to this release!

Users whose commits are in this release (alphabetically by user name)

karmada v0.4.0 release

13 Mar 10:44
0a9a52c

What's New

New policy APIs have been added to support cluster level resources propagation and customization
Users are now able to use ClusterPropagationPolicy to propagate both cluster-scoped and namespace-scoped resources. In addition, users are able to use ClusterOverridePolicy to define the overall policy to realize differentiation propagation.

Support resource and policy detector
The detector watches both resources and policy (PropagationPolicy and ClusterPropagationPolicy) changes, all changes on resources or policies will take effect immediately.

Namespace auto-provision feature gets on board
Namespaces created in Karmada will be synced to all member clusters automatically. Users don't need to propagate namespaces anymore.

Scheduler is now able to reschedule resources when policies change
Once the Placement rule in a PropagationPolicy changes, the scheduler will reschedule to meet the declaration.

Scheduler now supports failure recovery
Once any of the clusters fails, the scheduler is now able to reschedule resources to available clusters.
This feature is controlled by the --failover flag and is disabled by default.

Other Notable Changes

  • The PropagationWork API is now Work and located at the work.karmada.io group.
  • The PropagationBinding API is now ResourceBinding and located at the work.karmada.io group.
  • The label karmada.io/driven-by has been deprecated and replaced by propagationpolicy.karmada.io/namespace, propagationpolicy.karmada.io/name, and clusterpropagationpolicy.karmada.io/name.
  • The label karmada.io/created-by has been deprecated and replaced by propagationpolicy.karmada.io/namespace, propagationpolicy.karmada.io/name, clusterpropagationpolicy.karmada.io/name, resourcebinding.karmada.io/namespace, resourcebinding.karmada.io/name, clusterresourcebinding.karmada.io/name, work.karmada.io/namespace, work.karmada.io/name.
  • Added new annotation policy.karmada.io/applied-placement for both ResourceBinding and ClusterResourceBinding resources, to indicate the placement rule.
  • Added Validating Admission Webhook to restrict resource selector change for PropagationPolicy and ClusterPropagationPolicy objects.

Contributors

Thank you to everyone who contributed to this release!

Users whose commits are in this release (alphabetically by user name)

karmada v0.3.0 release

08 Feb 10:08
eb6265a

What's New

Support override resources when propagating to member clusters

Users are now able to specify override policies to customize specific resource fields for different clusters. (#130, @RainbowMango, @mrlihanbo)

Support labelselector in cluster affinity

Users are now able to use ClusterAffinity.LabelSelector in the PropagationPolicy API to restrict target clusters when propagating resources. (#149, @mrlihanbo)

Support spread constraints

Users are now able to specify resource spread constraints in propagation policies.

More constraint options will be introduced in later releases:

  • SpreadByFieldRegion: resource will be spread by region.
  • SpreadByFieldZone: resource will be spread by zone.
  • SpreadByFieldProvider: resource will be spread by cloud providers.
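
To illustrate what a field-based spread constraint means, the sketch below selects member clusters so the selection spans at least a minimum number of distinct regions. This is a hypothetical Python sketch; the cluster names, regions, and greedy selection are assumptions, not Karmada's algorithm:

```python
def spread_by_region(clusters, min_regions):
    """Pick clusters so the selection covers at least `min_regions`
    distinct regions (a SpreadByFieldRegion-style constraint)."""
    chosen, regions = [], set()
    for name, region in clusters.items():
        if region not in regions:  # one representative cluster per region
            chosen.append(name)
            regions.add(region)
    if len(regions) < min_regions:
        raise ValueError("not enough regions to satisfy the spread constraint")
    return chosen

clusters = {"m1": "us-east", "m2": "us-east", "m3": "eu-west"}
print(spread_by_region(clusters, 2))
# → ['m1', 'm3']
```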

Added webhook components to mutate and validate resources automatically
Introduced a new component named karmada-webhook implementing mutating and validating webhooks. (#133, @RainbowMango)

Other Notable Changes

  • E2E testing time consumption has been significantly reduced. (#119, @mrlihanbo)
  • Provided generic client for operating both Kubernetes and Karmada APIs. (#126, @RainbowMango)
  • The MemberCluster API is now Cluster. (#139, @kevin-wangzefeng)
  • The API group propagationstrategy.karmada.io is now policy.karmada.io. (#142, @kevin-wangzefeng)
  • Supported skipping member cluster TLS verification. (#159, @mrlihanbo)
  • Any unexpected modification of resources in member clusters will be amended automatically. (#127, @mrlihanbo)

karmada v0.2.0 release

07 Jan 12:43
c59afde
Added scheduler framework and basic functionality (#108)

Signed-off-by: xuzhonghu <xuzhonghu@huawei.com>

karmada v0.1.0 release

04 Dec 13:30
update architecture and concepts

Signed-off-by: Kevin Wang <kevinwzf0126@gmail.com>