In the Log Storage Toolbox project on GitLab (behind VPN), we have a set of scripts that allow us to provision OCP clusters in multiple cloud providers. The README.md under `log-storage-toolbox/manifests/ocp` contains detailed instructions on how to use the scripts. NOTE: be sure to follow the instructions in the base section, as they are prerequisites.
For development on the multicluster-observability-addon, most of the time it's handy to provision two clusters, one hub and one spoke.
```shell
# Download the openshift-install client
./scripts/ocp-download-release.sh 4.14.7

# Prepare bootstrap resources for the first cluster
./scripts/ocp-install.sh aws eu-central-1 4.14.7
# Launch the cluster (takes ~40 min)
openshift-install-linux-4.14.7 create cluster --dir ./output/jmarcalaws24011261

# Repeat for the second cluster
./scripts/ocp-install.sh aws eu-central-1 4.14.7
openshift-install-linux-4.14.7 create cluster --dir ./output/jmarcalaws24011221
```
Both clusters are backed by 6 `m6a.4xlarge` nodes (3 masters, 3 workers).
It doesn't matter which cluster you pick to be the hub and which the spoke. When you are done developing, be sure to destroy both clusters:
```shell
openshift-install-linux-4.14.7 destroy cluster --dir ./output/jmarcalaws24011261
openshift-install-linux-4.14.7 destroy cluster --dir ./output/jmarcalaws24011221
```
All steps are meant to be run on the hub cluster except when explicitly stated.
- Use the OpenShift Installer to create and set up two OCP clusters.
- Install the Advanced Cluster Management for Kubernetes operator.
- Create a `MultiClusterHub` resource using the web console.
- Import each spoke cluster into RHACM via the web console (top left, to the right of the RH OpenShift logo), using the commands option and running the generated commands on each spoke cluster.
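If you'd rather create the `MultiClusterHub` from the CLI than the web console, a minimal sketch looks like the following (this assumes the RHACM operator was installed in its default `open-cluster-management` namespace):

```yaml
# Minimal MultiClusterHub; an empty spec uses the operator defaults
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}
```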
Note: the addon has a dependency on the cert-manager operator, which should be installed on the hub cluster.
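A sketch of installing cert-manager through OLM on the hub; the namespace, package, and channel names below are assumptions, so check OperatorHub on your cluster for the exact values:

```yaml
# Assumed OLM objects for the cert-manager operator; verify names and channel in OperatorHub
apiVersion: v1
kind: Namespace
metadata:
  name: cert-manager-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cert-manager-operator
  namespace: cert-manager-operator
spec:
  targetNamespaces:
    - cert-manager-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-cert-manager-operator
  namespace: cert-manager-operator
spec:
  channel: stable-v1
  name: openshift-cert-manager-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```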
When working on the addon, it's nice to be able to test changes quickly. To do this, you can:

```shell
export REGISTRY_BASE=quay.io/YOUR_QUAY_ID
# Build and push the addon images
make oci
# Deploy the necessary CRDs and the addon using your built image
make addon-deploy
```
Then every time you want to test a new version, you can just:

```shell
make oci
# Delete the mcoa pod; the Deployment will then pull the new image
oc -n open-cluster-management delete pod -l app=multicluster-observability-addon-manager
```
The addon supports disabling signals using the `AddOnDeploymentConfig` resource. For instance, to disable the logging signal, create the following resource on the hub cluster:
```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnDeploymentConfig
metadata:
  name: multicluster-observability-addon
  namespace: open-cluster-management
spec:
  customizedVariables:
    - name: loggingDisabled
      value: "true"
```
Supported keys are `metricsDisabled`, `loggingDisabled`, and `tracingDisabled`.
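Note that in open-cluster-management, an `AddOnDeploymentConfig` generally only takes effect when it is referenced from the addon's configs; whether this addon picks it up by default or needs an explicit reference is an assumption here. A sketch of referencing it from a `ManagedClusterAddOn`:

```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: multicluster-observability-addon
  namespace: spoke-1
spec:
  installNamespace: open-cluster-management-agent-addon
  configs:
    # Assumed reference; AddOnDeploymentConfig references need the group set explicitly
    - group: addon.open-cluster-management.io
      resource: addondeploymentconfigs
      name: multicluster-observability-addon
      namespace: open-cluster-management
```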
To actually install the addon on a spoke cluster, you need to:

- Have the addon manager running on the hub cluster.
- Create the necessary Kubernetes resources (e.g., `secrets`, `configmaps`) in the namespace of the spoke cluster; these will be used by the addon to generate the `ManifestWorks`.
- Create the `ManagedClusterAddOn` resource in the namespace of the spoke cluster:
```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: multicluster-observability-addon
  namespace: spoke-1
spec:
  installNamespace: open-cluster-management-agent-addon
  configs:
    - resource: configmaps
      name: spoke-1
      namespace: spoke-1
    - resource: secrets
      name: spoke-1
      namespace: spoke-1
```
Once a `ManagedClusterAddOn` is reconciled successfully by the addon, we can look for the `ManifestWorks`:

```shell
oc -n spoke-1 get manifestworks addon-multicluster-observability-addon-deploy-0
```
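To dig further, you can inspect the generated work and its status; a sketch, assuming the standard `ManifestWork` status conditions (these commands require access to the hub cluster):

```shell
# Dump the full manifests the addon generated for the spoke
oc -n spoke-1 get manifestwork addon-multicluster-observability-addon-deploy-0 -o yaml

# Check whether the work has been applied on the spoke
oc -n spoke-1 get manifestwork addon-multicluster-observability-addon-deploy-0 \
  -o jsonpath='{.status.conditions[?(@.type=="Applied")].status}'
```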
For metrics, the addon currently doesn't support any configuration, so nothing is needed at the `ManagedClusterAddOn` level; however, the addon has a dependency on MCO. The addon supports collecting metrics from the spoke clusters, and these metrics are sent to an MCO instance running on the hub.
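The MCO side is typically a `MultiClusterObservability` resource on the hub; a minimal sketch, where the object-storage secret name and key are assumptions based on common RHACM examples:

```yaml
apiVersion: observability.open-cluster-management.io/v1beta2
kind: MultiClusterObservability
metadata:
  name: observability
spec:
  observabilityAddonSpec: {}
  storageConfig:
    metricObjectStorage:
      # Assumed secret name/key; point this at your own object-storage config
      name: thanos-object-storage
      key: thanos.yaml
```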
Currently the addon supports configuration to send logs to either:

- CloudWatch: requires the auth configmap to be specified
- Loki: requires the auth configmap, the URL configmap, and optionally the inject-CA configmap
Currently the addon supports configuration to send traces to:

- OpenTelemetryCollector: requires the auth configmap, the URL configmap, and optionally the inject-CA configmap