
Releases: oracle/coherence-operator

v3.2.6

11 May 19:15

Coherence Operator Release 3.2.6

Tested on Kubernetes:

  • v1.23.4
  • v1.22.7
  • v1.21.10
  • v1.20.15
  • v1.19.16
  • v1.18.20
  • v1.17.17
  • v1.16.15

Tested on OpenShift:

  • 4.10.4

Changes

  • The build depends on Go 1.17
  • Depends on K8s v1.24 APIs
  • Added support for a number of additional Pod spec fields (see the sketch after this list):
    • ActiveDeadlineSeconds
    • EnableServiceLinks
    • Overhead
    • PreemptionPolicy
    • Priority
    • PriorityClassName
    • RestartPolicy
    • RuntimeClassName
    • SchedulerName
    • TopologySpreadConstraints
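
As a rough illustration, a Coherence resource might set some of these fields as below. This is a sketch only: it assumes the new fields are exposed directly under spec, mirroring their Kubernetes Pod spec counterparts, and the exact field placement is not confirmed by these notes.

```yaml
apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
  name: storage
spec:
  replicas: 3
  # Assumed direct pass-through of the newly supported Pod spec fields;
  # check the 3.2.6 CRD documentation for the exact field paths.
  priorityClassName: high-priority
  schedulerName: default-scheduler
  enableServiceLinks: false
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          coherenceCluster: storage   # illustrative selector label
```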

v3.2.5

08 Feb 18:18

Coherence Operator Release 3.2.5

Fixes

  • Fixed an issue where RetryingWkaAddressProvider could fail to resolve any addresses if the coherence.wka property was not set correctly.
  • Fixed an issue where the Operator failed to patch ServiceMonitor resources, which then caused the rest of a Coherence deployment update to fail to be applied.
  • Fixed an issue where failure of one or more secondary resource reconcilers blocked other reconcilers from running.

Enhancements

  • Added Grafana dashboards for Coherence Executors.
  • Added the ability to trigger an action when a Coherence cluster becomes ready, for example a Job that loads data from a database (see the sketch after this list).
  • Published a new image, ghcr.io/oracle/coherence-operator:3.2.5-test-base, suitable for use as a base image for Coherence test images.
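
The ready-trigger might be declared roughly as below. This is a hedged sketch: the actions field name, its job sub-field and the loader image are assumptions for illustration, not details taken from these notes.

```yaml
apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
  name: storage
spec:
  replicas: 3
  # Assumed schema: a list of actions run once the cluster reports ready,
  # each wrapping a standard Kubernetes Job spec.
  actions:
    - name: load-from-db
      job:
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: loader
                  image: mycompany/db-loader:1.0.0   # hypothetical image that loads data from a DB
```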

v3.2.4

01 Oct 12:09

Coherence Operator Release 3.2.4

Fixes

  • Fixed an issue where adding additional volumeClaimTemplates caused the StatefulSet to never be ready due to the additional PVC's name being blanked out.
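
For context, additional claims are declared on the Coherence spec roughly as below. This sketch assumes the volumeClaimTemplates and volumeMounts fields mirror their StatefulSet and container counterparts; the claim name, size and mount path are illustrative.

```yaml
apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
  name: storage
spec:
  replicas: 3
  # Each template yields one PVC per Pod; the 3.2.4 fix stops the template
  # name being blanked out, which previously left the StatefulSet never ready.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
  volumeMounts:
    - name: data
      mountPath: /data
```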

Enhancements

  • Added labels to the CRD to make it easier to identify information such as the corresponding Operator version.

v3.2.3

24 Sep 16:14

Coherence Operator Release 3.2.3

Fixes

  • Fixed an issue reconciling ServiceMonitor resources.
  • The Operator no longer attempts to cleanly shut down Pods if the whole namespace is being deleted.
  • Fixed an issue where Coherence services could be suspended on start-up and Pods failed to become ready. These services are now automatically resumed on start-up of storage-enabled members. When suspending services on shutdown, or when scaling to zero, the Operator now only suspends partitioned cache services using active persistence.
  • Fixed an issue in the Grafana dashboards to allow for cache names containing a $ character.

Enhancements

  • Added support for specifying startup probes to be added to Coherence cluster Pods.
  • Added support for specifying readiness gates to be added to Coherence cluster Pods (both are shown in the sketch after this list).
  • Added support for JVM argument files created when building images with JIB. This allows the Operator to run JIB images with the class path and main class configured when the image was created.
  • Added additional examples.
  • Added a section in the examples for those unfortunate enough to need to run Coherence clusters in Kubernetes without the benefits of the Operator.
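
A startup probe and a readiness gate might be declared roughly as below. This sketch assumes the spec exposes startupProbe and readinessGates fields in the same shape as their Kubernetes Pod equivalents; the port, path, thresholds and gate condition are illustrative.

```yaml
apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
  name: storage
spec:
  replicas: 3
  # Assumed pass-through of the Kubernetes startupProbe field.
  startupProbe:
    httpGet:
      path: /ready
      port: 6676          # assumed default health port used by the Operator
    failureThreshold: 30
    periodSeconds: 10
  # Assumed pass-through of the Kubernetes readinessGates field.
  readinessGates:
    - conditionType: "example.com/custom-ready"   # hypothetical gate condition
```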

v3.2.2

03 Sep 09:52

Coherence Operator Release 3.2.2

Fixes

  • Fixed an issue in the controller's reconcile functionality where the Operator could fail to properly reconcile all resources if it was restarted during a previous reconcile.

Other Changes

  • Added support for setting the SecurityContext for the Coherence container (see the sketch after this list). Setting the PodSecurityContext was already supported, but the container context had been missed.
  • Removed the MicroProfile and Micrometer Grafana dashboards. Going forward there will be a single set of dashboards supporting the default metric names published by Coherence metrics. Having multiple sets of dashboards and making customers figure out which to use was confusing for them and a maintenance headache for us.
  • Changed how Prometheus and Grafana are installed in tests to use the latest Prometheus Operator Quick Start instructions.
  • Updated the examples documentation that uses Coherence metrics to point to the Prometheus Operator Quick Start instructions for installing Prometheus and Grafana.
  • Created a shell script to perform a bulk upload of Grafana dashboards.
  • Included an example of managing Coherence resources using a Helm chart, including a work-around for the fact that Helm's --wait argument does not support Coherence resources.
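
Setting both contexts might look roughly like this. The Pod-level securityContext field was already supported; the containerSecurityContext name for the container-level context is an assumption for illustration, so check the 3.2.2 documentation for the actual field.

```yaml
apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
  name: storage
spec:
  replicas: 3
  # Pod-level security context (supported before 3.2.2).
  securityContext:
    runAsNonRoot: true
    fsGroup: 2000
  # Container-level security context for the Coherence container
  # (field name assumed, added by this release per the note above).
  containerSecurityContext:
    runAsUser: 1000
    allowPrivilegeEscalation: false
```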

v3.2.1

28 Jul 11:26

Coherence Operator Release 3.2.1

⚠️ There was an issue with the Helm chart in the previous version, 3.2.0, that meant the helm upgrade command could not be used to upgrade from a previous version. This has now been fixed, and it is possible to upgrade from 3.1.5 and earlier to 3.2.1 using helm upgrade. It is not possible to upgrade from 3.2.0 to 3.2.1 using helm upgrade.

Fixes

  • Fixed the Operator Helm chart to work with the helm upgrade command when upgrading from v3.1.5 to v3.2.1. Previously helm upgrade could not be used to upgrade from 3.1.5 to 3.2.0.

Changes and Enhancements

  • The readiness probe no longer uses the internal Coherence MBean model to obtain cache service attributes, but instead pulls the required attributes directly from local services.
  • Added a Cache Evictions Over Time graph to the Grafana dashboards.

v3.2.0

01 Jul 15:02

Coherence Operator Release 3.2.0

This version of the Coherence Operator no longer supports running on Kubernetes clusters prior to v1.16.

⚠️ The Helm chart for this version has breaking changes, which means the helm upgrade command cannot be used to upgrade a 3.1.5 or earlier Operator deployment to 3.2.0. This has been fixed in v3.2.1, which means that helm upgrade cannot be used to upgrade from 3.2.0 to 3.2.1, but can be used to upgrade from 3.1.5 or earlier to 3.2.1.

Changes and Enhancements

  • Added support for running in Kubernetes on Linux/arm64. The Operator images are multi-arch images, so the correct image for the platform architecture will be used. The caveat is that the Coherence application image used to run Coherence clusters must also support ARM.
  • The Coherence IPMonitor is disabled by default in Coherence clusters managed by the Coherence Operator.
  • Upgraded the version of the Operator SDK used by this project to v1.7.2, with a corresponding bump in the Kubernetes API used.
  • Added documentation on running Coherence clusters with Istio.
  • Upgraded to Operator SDK 1.9.0 and Go 1.16.
  • The default Coherence image used to deploy Coherence clusters is version 21.06.

Fixes

  • Further hardened rolling upgrades when there is only a single Coherence storage member.
  • Improved the readiness probe to remove needless checks for storage-disabled members.

v3.1.5

09 Apr 09:04

Coherence Operator Release 3.1.5

Fixes

  • Fixed an issue where the Operator REST endpoint was causing Coherence to be initialised too early, which did not play nicely with things like Micronaut applications.

  • Fixed an issue where a reconcile event may fail to be re-queued during a rolling upgrade.

  • Fixed an issue where the readiness probe never signalled ready when using federated caches.

  • Fixed an issue where the readiness probe never signalled ready in clusters where different cache services are enabled on different members.

  • Hardened the creation/update of CRDs when the Operator starts by using the same client that the controller manager uses.

  • Fixed an issue where the Coherence Pod's callback to the Operator to get site/rack names failed when using Istio, because the Istio side-car container was not ready when the HTTP request was made from the Coherence container. The request now retries with backoff to allow Istio time to start.

Enhancements

  • Added a configurable timeout for the request to suspend services prior to deleting a Coherence resource (see the sketch after this list).

  • Various changes to play more nicely with Istio, e.g. consistent app and version labels, port names that follow the Istio convention as much as possible, use of Services instead of direct Pod communication, etc.
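
Configuring that timeout might look roughly like this; the suspendServiceTimeout field name and its units are hypothetical placeholders, so check the 3.1.5 documentation for the real setting.

```yaml
apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
  name: storage
spec:
  replicas: 3
  # Hypothetical field name for the bound on the suspend-services call
  # made before the Coherence resource is deleted.
  suspendServiceTimeout: 60
```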

v3.1.4

05 Mar 04:47

Coherence Operator Release 3.1.4

Changes

  • Added support for environment variable expansion in JVM args, program args and classpath entries (see the sketch after this list). Any values in the format $VAR or ${VAR} in any of those elements in the Coherence yaml will be replaced with the corresponding environment variable values at runtime. These environment variables are taken from the Coherence container in the Coherence cluster Pods, not from the Operator's environment.

  • Fixed an issue where a Coherence deployment could not be deleted if all of the Pods failed to start (for example, if all Pods were stuck in an ImagePullBackOff loop).

  • Fixed an issue where updating the Coherence yaml to both scale down to one replica and cause a rolling update could result in data loss, even if persistence is enabled. Obviously this scenario is guaranteed to cause complete data loss in cache services where persistence is not enabled, but in clusters with active persistence an upgrade of a single member should not lose data.

  • Allowed the Operator to be installed without requiring ClusterRoles and ClusterRoleBindings. Whilst this is really not recommended, some customers have queried whether it is possible due to particularly tight corporate security policies. It is now possible to run the Operator without installing cluster-wide RBAC roles, but with some caveats - see the RBAC section of the installation documentation.
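
The expansion described above might be used roughly as below, assuming the env, jvm.args and application.args fields; the MY_HEAP and APP_MODE variables and their values are illustrative.

```yaml
apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
  name: storage
spec:
  # Environment variables set on the Coherence container; ${VAR} references
  # below are resolved from these at runtime, not from the Operator's environment.
  env:
    - name: MY_HEAP
      value: "2g"
    - name: APP_MODE
      value: "batch"
  jvm:
    args:
      - "-Xmx${MY_HEAP}"        # expands to -Xmx2g in the container
  application:
    args:
      - "--mode=${APP_MODE}"    # expands to --mode=batch
```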

v3.1.3

28 Jan 14:12

Coherence Operator Release 3.1.3

This release contains a single bug fix that removes some debug logging that was left in the readiness probe that the Operator injects into the Coherence containers. This does not cause functional issues, but it is irritating to see a lot of these messages in the Coherence logs.