Releases: openebs/openebs
0.7.1
Getting Started
Prerequisite to install
- Kubernetes 1.9.7+ is installed
- Make sure that you run the below installation steps with cluster admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
- Make sure iSCSI Initiator is installed on the Kubernetes nodes.
- NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters on NDM to exclude those paths before installing OpenEBS; an illustrative filter snippet follows this list.
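The exclude filters live in the node-disk-manager ConfigMap that ships with the operator YAML. Below is a minimal sketch of the relevant fragment, assuming the filterconfigs layout used by NDM; the device paths are illustrative and should be adjusted to your nodes:

node-disk-manager.config: |
  filterconfigs:
    - key: path-filter
      name: path filter
      state: true
      include: ""
      exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-"

Any device whose path matches an exclude entry is skipped during discovery.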
Using kubectl
kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.1.yaml
Using helm stable charts
helm install --namespace openebs --name openebs stable/openebs
Using OpenEBS Helm Charts (will be deprecated in the coming releases)
helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs
Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs
For more details refer to the documentation at: https://docs.openebs.io/
Change Summary
Minor enhancements
- Support for using OpenEBS PVs as Block Devices for Application Pods
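Block mode is requested through the standard Kubernetes volumeMode field on the PVC; the application pod then attaches the device via volumeDevices instead of volumeMounts. A minimal sketch, with illustrative names and size:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: block-pvc
spec:
  storageClassName: openebs-jiva-default
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G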
Bug Fixes
- Fixed an issue with PVs not getting created when the capacity had an "i" suffix (for example, 5Gi)
- Fixed an issue with cStor Target Pod stuck in terminating state due to shared hostPath
- Fixed an issue with FSType from StorageClass not being configured on PV
- Fixed an issue with NDM discovering capacity of disks via CDB16
- Fixed an issue with PV name generation exceeding 64 characters. PVC UUID will be used as PV Name.
- Fixed an issue with cStor Pool Pod terminating when there is an abrupt connection break
- Fixed an issue with cStor Volume clean-up failure blocking new volumes from being created.
Detailed release notes are maintained in Project Tracker Wiki.
Limitations
- Jiva target to Replica message protocol has been enhanced to handle the write errors. This change in the data exchanges causes the older replicas to be incompatible with the newer target and vice versa. The upgrade involves shutting down all the replicas before launching them with the new version. Since the volume requires the target and at least 2 replicas to be online, chances of volumes getting into the read-only state during upgrade are higher. A manual intervention will be required to recover the volume.
- For OpenEBS volumes configured with more than 1 replica, more than half of the replicas must be online for the Volume to allow Read and Write. In the upcoming releases, with the cStor data engine, Volumes can be allowed to Read/Write when there is at least one replica in the ready state.
- This release contains a preview support for cloning an OpenEBS Volume from a snapshot. This feature only supports single replica for a cloned volume, which is intended to be used for temporarily spinning up a new application pod for recovering lost data from the previous snapshot.
- While testing for different platforms, with a three-node/replica OpenEBS volume and shutting down one of the three nodes, there was an intermittent case where one of the 2 remaining replicas also had to be restarted.
- The OpenEBS target (controller) pod depends on the Kubernetes node tolerations to reschedule the pod in the event of node failure. For this feature to work, the TaintNodesByCondition alpha feature must be enabled in Kubernetes (see the note after this list). In a scenario where the OpenEBS target (controller) is not rescheduled or brought back to running within 120 seconds, the volume gets into a read-only state and manual intervention is required to make the volume read-write.
- The current version of OpenEBS volumes is not optimized for performance-sensitive applications.
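As a reference for the TaintNodesByCondition item above: alpha features are enabled in Kubernetes by passing a feature gate flag to the control plane components (typically the kube-apiserver, kube-controller-manager, and kube-scheduler); where exactly the flag goes depends on how your cluster was provisioned. The flag itself takes the form:

--feature-gates=TaintNodesByCondition=true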
For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.
v0.7
Getting Started
Prerequisite to install
- Kubernetes 1.9.7+ is installed
- Make sure that you run the below installation steps with cluster admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
- Make sure iSCSI Initiator is installed on the Kubernetes nodes.
- NDM helps in discovering the devices attached to Kubernetes nodes, which can be used to create storage pools. If you would like to exclude some of the disks from being discovered, update the filters on NDM to exclude those paths before installing OpenEBS.
Using kubectl
kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.0.yaml
Using OpenEBS Helm Charts (will be deprecated in the coming releases)
helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs
For more details refer to the documentation at: https://docs.openebs.io/
Note: The Kubernetes stable/openebs Helm chart and other charts still point to 0.6; efforts are underway to update them to 0.7.
Quick Summary of Changes
- Node Disk Manager that helps with discovering block devices attached to nodes
- Alpha support for cStor Storage Engines
- Updated CRDs for supporting cStor as well as pluggable storage control plane
- Jiva Storage Pool called default and StorageClass called openebs-jiva-default
- cStor Storage Pool Claim called cstor-sparse-pool and StorageClass called openebs-cstor-sparse
- There has been a change in the way volume storage policies can be specified, with the addition of new policies like the following (see the illustrative StorageClass after this list):
- Number of Data copies to be made
- Specify the nodes on which the Data copies should be persisted
- Specify the CPU or Memory Limits per PV
- Choice of Storage Engine: cStor or Jiva
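In 0.7, these policies are carried as annotations on the StorageClass. A minimal sketch, assuming the openebs.io/cas-type and cas.openebs.io/config annotation format; the name and values are illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-custom
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-sparse-pool"
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi

A PVC that references this StorageClass gets a cStor volume with three data copies carved from the named pool.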
Sample Storage Pool Claims, Storage Class and PVC configurations to make use of new features can be found here: Sample YAMLs
Detailed release notes are maintained in Project Tracker Wiki.
Limitations
- cStor Target or Pool pods can at times be stuck in a Terminating state. They will need to be manually cleaned up using kubectl delete with a 0-second grace period (see the example command after this list).
- Jiva target to Replica message protocol has been enhanced to handle the write errors. This change in the data exchanges causes the older replicas to be incompatible with the newer target and vice versa. The upgrade involves shutting down all the replicas before launching them with the new version. Since the volume requires the target and at least 2 replicas to be online, chances of volumes getting into the read-only state during upgrade are higher. A manual intervention will be required to recover the volume.
- For OpenEBS volumes configured with more than 1 replica, more than half of the replicas must be online for the Volume to allow Read and Write. In the upcoming releases, with the cStor data engine, Volumes can be allowed to Read/Write when there is at least one replica in the ready state.
- This release contains a preview support for cloning an OpenEBS Volume from a snapshot. This feature only supports single replica for a cloned volume, which is intended to be used for temporarily spinning up a new application pod for recovering lost data from the previous snapshot.
- While testing for different platforms, with a three-node/replica OpenEBS volume and shutting down one of the three nodes, there was an intermittent case where one of the 2 remaining replicas also had to be restarted.
- The OpenEBS target (controller) pod depends on the Kubernetes node tolerations to reschedule the pod in the event of node failure. For this feature to work, the TaintNodesByCondition alpha feature must be enabled in Kubernetes. In a scenario where the OpenEBS target (controller) is not rescheduled or brought back to running within 120 seconds, the volume gets into a read-only state and manual intervention is required to make the volume read-write.
- The current version of OpenEBS volumes is not optimized for performance-sensitive applications.
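For the stuck Terminating pods noted in the first item above, a zero-grace-period force delete usually clears them; the pod name is a placeholder:

kubectl delete pod <pod-name> -n openebs --grace-period=0 --force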
For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.
v0.7-RC2
Please Note: This is a release candidate build. If you are looking at deploying from a stable release, please follow the instructions at Quick Start Guide.
Getting Started with OpenEBS v0.7-RC2
Prerequisite to install
- Kubernetes 1.9.7+ is installed
- Make sure you run the following kubectl command with cluster admin context. The installation will involve creating a new Service Account and assigning it to OpenEBS components.
Install and Setup
kubectl apply -f https://openebs.github.io/charts/openebs-operator-0.7.0-RC2.yaml
The above command will install OpenEBS Control Plane components and all the required Kubernetes CRDs. With 0.7, the following new services will be installed:
- Node Disk Manager that helps with discovering block devices attached to nodes
- Configuration Files required for supporting both Jiva and cStor Storage Engines
- A default Jiva Storage Pool and a StorageClass called openebs-standard
- A default cStor Storage Pool and a StorageClass called openebs-cstor-sparse
You are all set!
You can now install your Stateful applications that make use of either of the above StorageClasses or you can create a completely new StorageClass that can be configured with Storage Policies like:
- Number of Data copies to be made
- Specify the nodes on which the Data copies should be persisted
- Specify the CPU or Memory Limits per PV
- Choice of Storage Engine: cStor or Jiva
Some of the sample Storage Class and PVC configurations can be found here: Sample YAMLs
Additional details and release notes are available on Project Tracker Wiki.
v0.7-RC1
Getting Started
Prerequisite to install
Make sure that the user is assigned the cluster-admin ClusterRole to run the install steps provided below.
Using kubectl
Install the 0.7.0-RC1 OpenEBS with CAS Templates.
kubectl apply -f https://raw.githubusercontent.com/openebs/store/master/docs/openebs-operator-0.7.0-RC1.yaml
kubectl apply -f https://raw.githubusercontent.com/openebs/store/master/docs/openebs-pre-release-features-0.7.0-RC1.yaml
Download the following file, update the disks, and apply it to create cStor Pools.
wget https://raw.githubusercontent.com/openebs/store/master/docs/openebs-config-0.7.0-RC1.yaml
kubectl apply -f openebs-config-0.7.0-RC1.yaml
v0.6
Getting Started
Using kubectl
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.6/k8s/openebs-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.6/k8s/openebs-storageclasses.yaml
Using Kubernetes Stable Helm charts
helm install --namespace openebs --name openebs -f https://openebs.github.io/charts/helm-values-0.6.0.yaml stable/openebs
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.6/k8s/openebs-storageclasses.yaml
Using OpenEBS Helm Charts (will be deprecated in the coming releases)
helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs
For more details refer to the documentation at: https://docs.openebs.io/
New Capabilities / Enhancements
- Integrate the Volume Snapshot capabilities with Kubernetes Snapshot controller
- Enhance maya-apiserver to use CAS Templates for orchestrating new Storage Engines
- Enhance mayactl to provide additional details about volumes, such as replica status and the nodes where replicas are running.
- Enhance maya-apiserver to schedule Replica Pods on specific nodes using nodeSelector
- Enhance provisioner and maya-apiserver to allow specifying cross-AZ scheduling of Replica Pods.
- Support for deploying OpenEBS via Kubernetes stable Helm Charts
- openebs-operator.yaml is modified to run OpenEBS pods in their own namespace, openebs
- Enhance e2e tests to simulate chaos at different layers, such as CPU, RAM, Disk, Network, and Node
Major Issues Fixed
- Fixed an issue where intermittent connectivity errors between controller and replica caused iSCSI initiator to mark the volume as read-only. openebs/gotgt#15
- Fixed an issue where intermittent connectivity errors were causing the controller to silently drop the replicas and mark the Volumes as read-only. The replicas dropped in this way were not getting re-added to the Controller. openebs/jiva#45
- Fixed an issue where volume would be marked as read-only if one of the three replicas returned an error to IO. openebs/jiva#56
- Fixed an issue where replica fails to register back with the controller if the attempt to register occurred before the controller cleared the replica's previous state. openebs/jiva#56
- Fixed an issue where a volume with a single replica would get stuck in the read-only state once the replica was restarted. openebs/jiva#45
Upgrade from older releases
Since 0.6 has made changes to the way controller and replica pods communicate with each other, the older volumes need to be upgraded with scheduled downtime for applications.
Limitations
- For OpenEBS volumes configured with more than 1 replica, more than half of the replicas must be online for the Volume to allow Read and Write. In the upcoming releases, with the cStor data engine, Volumes can be allowed to Read/Write when there is at least one replica in the ready state.
- This release contains a preview support for cloning an OpenEBS Volume from a snapshot. This feature only supports single replica for a cloned volume, which is intended to be used for temporarily spinning up a new application pod for recovering lost data from the previous snapshot.
- While testing for different platforms, with a three-node/replica OpenEBS volume and shutting down one of the three nodes, there was an intermittent case where one of the 2 remaining replicas also had to be restarted.
- The OpenEBS target (controller) pod depends on the Kubernetes node tolerations to reschedule the pod in the event of node failure. For this feature to work, the TaintNodesByCondition alpha feature must be enabled in Kubernetes. In a scenario where the OpenEBS target (controller) is not rescheduled or brought back to running within 120 seconds, the volume gets into a read-only state and manual intervention is required to make the volume read-write.
- The current version of OpenEBS volumes is not optimized for performance-sensitive applications.
- For a more comprehensive list of open issues uncovered through e2e, please refer to open issues.
Additional details are available on Project Tracker Wiki.
v0.5.4
Issues Fixed in v0.5.4
- Added a provision to specify filesystems other than ext4 (default) in the OpenEBS provisioner spec (#1454); see the illustrative StorageClass below
- Support for the xfs filesystem format for the MongoDB StatefulSet using an OpenEBS Persistent Volume (#1446)
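The filesystem is selected via a StorageClass parameter. A minimal sketch, assuming the openebs.io/fstype parameter name used by the 0.5.x provisioner; the other names and values are illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-mongodb
provisioner: openebs.io/provisioner-iscsi
parameters:
  openebs.io/storage-pool: "default"
  openebs.io/jiva-replica-count: "3"
  openebs.io/fstype: "xfs"

PVCs referencing this class get their volumes formatted as xfs instead of ext4.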
Known Issues in v0.5.4
For a complete list of known issues, go to v0.5.4 known issues
- xfs-formatted volumes are not remounted after snapshot reverts/forced restarts (bugs)
- Requires Kubernetes 1.7.5+
- Requires iSCSI initiator to be installed on the Kubernetes nodes or in the kubelet container
- Not recommended for mission-critical workloads
- Not recommended for performance-sensitive workloads. Ongoing efforts are underway to improve performance
Enhancements
- OpenEBS is now available as a stable chart from Kubernetes (https://github.com/kubernetes/charts/tree/master/stable/openebs)
- Increased integration test & e2e coverage in the CI
Installation
Using kubectl
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.5.4/k8s/openebs-operator.yaml
Using helm
helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs
Alternatively, refer to: https://docs.openebs.io/docs/next/installation.html#install-openebs-using-helm-charts
v0.5.3
Issues Fixed in v0.5.3
- Fixed a StoragePool usage issue when RBAC settings are applied (#1189).
- Fixed the hardcoded maya-apiserver-service name by making it configurable, as it conflicted with other services running on the same cluster (#1227).
- Fixed an issue where the OpenEBS iSCSI volume showed a progressive increase in memory consumed by the controller pod (#1298).
Known Issues in v0.5.3
For a complete list of known issues, go to v0.5.3 known issues.
- Requires Kubernetes 1.7.5+
- Requires iSCSI initiator to be installed on the Kubernetes nodes or in the kubelet container
- Not recommended for mission-critical workloads
- Not recommended for performance-sensitive workloads. Ongoing efforts are underway to improve performance
Enhancement to Documentation
The OpenEBS documentation is now available at https://docs.openebs.io/. You can provide your feedback comments by clicking the Feedback button provided on every page.
Installation
Using kubectl
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.5.3/k8s/openebs-operator.yaml
Using helm
helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs
v0.5.2
This is a single-fix release on top of v0.5.1 that allows maya-apiserver and openebs-provisioner to work with a non-SSL Kubernetes configuration.
Issue Fixed:
- #1184: You can set the non-SSL Kubernetes endpoints to use by specifying the ENV variables OPENEBS_IO_KUBE_CONFIG and OPENEBS_IO_K8S_MASTER on maya-apiserver and openebs-provisioner (illustrated below).
To use the above ENV variables, the following image versions have to be used:
- openebs/m-apiserver:0.5.2: OpenEBS Maya API Server along with the latest maya cli.
- openebs/openebs-k8s-provisioner:0.5.2: Dynamic OpenEBS Volume Provisioner for Kubernetes.
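These are ordinary container ENV entries on the respective Deployments. A minimal sketch of the fragment to add; the endpoint and kubeconfig path are placeholders:

env:
  - name: OPENEBS_IO_K8S_MASTER
    value: "http://10.128.0.12:8080"
  - name: OPENEBS_IO_KUBE_CONFIG
    value: "/home/ubuntu/.kube/config"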
v0.5.1
This is an incremental release on top of v0.5. It fixes bugs and adds support for running OpenEBS on CentOS-based Kubernetes clusters, including OpenShift 3.7+.
Issues Fixed in v0.5.1
- Fix interoperability issues when connecting to OpenEBS Volumes from the CentOS iSCSI Initiator (#1087)
- Fix openebs-k8s-provisioner to be launched in non-default namespace (#1055)
- Update the documentation with steps to use OpenEBS on OpenShift Kubernetes Cluster (#1102) and Kubernetes on CentOS (#1104)
- Update helm charts to use OpenEBS 0.5.1 (#1100)
Known Limitations
- Requires Kubernetes 1.7.5+
- Requires iSCSI initiator to be installed on the Kubernetes nodes or in the kubelet container
- Not recommended for mission-critical workloads
- Not recommended for performance-sensitive workloads. Ongoing efforts are underway to improve performance
Installation
Using kubectl
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.5.1/k8s/openebs-operator.yaml
Using helm
helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs
Images
- openebs/jiva:0.5.1 : Containerized Storage Controller
- openebs/m-apiserver:0.5.1 : OpenEBS Maya API Server along with the latest maya cli.
- openebs/openebs-k8s-provisioner:0.5.1 : Dynamic OpenEBS Volume Provisioner for Kubernetes.
- openebs/m-exporter:0.5.1 : OpenEBS Volume metrics exporter.
Setup OpenEBS Volume Monitoring
If you are running your own Prometheus, please update it with the following job configuration:
- job_name: 'openebs-volumes'
  scheme: http
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_label_monitoring]
      regex: volume_exporter_prometheus
      action: keep
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name
    - source_labels: [__meta_kubernetes_pod_label_vsm]
      action: replace
      target_label: openebs_pv
    - source_labels: [__meta_kubernetes_pod_container_port_number]
      action: drop
      regex: '(.*)9501'
    - source_labels: [__meta_kubernetes_pod_container_port_number]
      action: drop
      regex: '(.*)3260'
If you don't have Prometheus running, you can use the following YAML file to run Prometheus and Grafana.
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.5.0/k8s/openebs-monitoring-pg.yaml
You can import the following grafana-dashboard file to view the OpenEBS Volume metrics.
v0.5.0
This release marks a significant milestone for OpenEBS. We are excited about the new capabilities, like policy-based Volume Provisioning and Customizations, that finally give DevOps teams the missing tools to automate Storage Operations. We are even more excited about the contributions that poured in from 50+ new community members who made this release possible.
Changelog
- Storage Policy Enforcement Framework that allows DevOps teams to deploy customized storage. Some policies supported are:
- Storage Policy for using a custom Storage Engine like Jiva
- Storage Policy for exposing volume metrics in Prometheus format using a sidecar to the volume controller
- Storage Policy for defining Capacity
- Storage Policy for defining the persistent storage location, like /var/openebs (default) or a directory mounted on EBS or GPD
- Extend OpenEBS API Server to expose a volume snapshot API
- Support for deploying OpenEBS via helm charts
- Sample Prometheus configuration for collecting OpenEBS Volume Metrics
- Sample Grafana OpenEBS Volume Dashboard using the Prometheus metrics
- Sample Deployment YAMLs and corresponding Storage Classes for different types of applications (see Project Tracker Wiki for the detailed list)
- Sample Deployment YAMLs for launching Kubernetes Dashboard for a preview of the changes done by OpenEBS Team to Kubernetes Dashboard (see Project Tracker Wiki for the PRs raised and merged)
- Sample Deployment YAMLs for Prometheus and Grafana - in case they are not already part of your deployment.
- Several Documentation and Code Re-factoring Changes for improving code quality
Additional Details are available on Project Tracker Wiki
Changes from earlier releases to v0.5.0
- Some of the ENV variables for customizing default options have changed (openebs/openebs #927); see the illustrative snippet after this list:
- DEFAULT_CONTROLLER_IMAGE -> OPENEBS_IO_JIVA_CONTROLLER_IMAGE
- DEFAULT_REPLICA_IMAGE -> OPENEBS_IO_JIVA_REPLICA_IMAGE
- DEFAULT_REPLICA_COUNT -> OPENEBS_IO_JIVA_REPLICA_COUNT
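Any customization should use the new names in the maya-apiserver Deployment ENV section. A minimal sketch; the image tags and replica count are illustrative:

env:
  - name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE
    value: "openebs/jiva:0.5.0"
  - name: OPENEBS_IO_JIVA_REPLICA_IMAGE
    value: "openebs/jiva:0.5.0"
  - name: OPENEBS_IO_JIVA_REPLICA_COUNT
    value: "3"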
Known Limitations
- Requires Kubernetes 1.7.5+
- Requires iSCSI initiator to be installed on the Kubernetes nodes or in the kubelet container
- Has been tested primarily with OpenEBS and its volumes (PVCs) enabled in the default namespace
- Not recommended for mission-critical workloads
- Not recommended for performance-sensitive workloads. Ongoing efforts are underway to improve performance
Installation
Using kubectl
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.5.0/k8s/openebs-operator.yaml
Using helm
helm repo add openebs-charts https://openebs.github.io/charts/
helm repo update
helm install openebs-charts/openebs
Images
- openebs/jiva:0.5.0 : Containerized Storage Controller
- openebs/m-apiserver:0.5.0 : OpenEBS Maya API Server along with the latest maya cli.
- openebs/openebs-k8s-provisioner:0.5.0 : Dynamic OpenEBS Volume Provisioner for Kubernetes.
- openebs/m-exporter:0.5.0 : OpenEBS Volume metrics exporter.
Setup OpenEBS Volume Monitoring
If you are running your own Prometheus, please update it with the following job configuration:
- job_name: 'openebs-volumes'
  scheme: http
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_label_monitoring]
      regex: volume_exporter_prometheus
      action: keep
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name
    - source_labels: [__meta_kubernetes_pod_label_vsm]
      action: replace
      target_label: openebs_pv
    - source_labels: [__meta_kubernetes_pod_container_port_number]
      action: drop
      regex: '(.*)9501'
    - source_labels: [__meta_kubernetes_pod_container_port_number]
      action: drop
      regex: '(.*)3260'
If you don't have Prometheus running, you can use the following YAML file to run Prometheus and Grafana.
kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/v0.5.0/k8s/openebs-monitoring-pg.yaml
You can import the following grafana-dashboard file to view the OpenEBS Volume metrics.