
SNO with OpenShift Agent Based Installer

Overview

Note: This repo only works for OpenShift 4.12+. For 4.9/4.10, please use this helper script

Sister repo for Multiple Nodes OpenShift:

This repo provides a set of helper scripts to install SNO (Single Node OpenShift) with the OpenShift Agent Based Installer, install operators, and apply the tunings that are recommended and required by vDU applications. A typical end-to-end flow is sketched after the list below.

  • sno-iso: Generates a bootable ISO image based on the Agent Based Installer; some operators and node tunings for vDU applications are enabled as day-1 operations.
  • sno-install: Mounts the ISO image generated by sno-iso to the BMC console as virtual media via the Redfish API and boots the node from the image to trigger the SNO installation. Tested on HPE, ZT and KVM with Sushy tools; other servers may or may not work, please create an issue if they do not.
  • sno-day2: Applies day-2 configurations. Most of the operators and tunings required by vDU applications are enabled as day 1, but some of them can only be applied as day-2 configurations.
  • sno-ready: Validates whether the SNO cluster has all the configurations and tunings required by vDU applications.
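
A typical end-to-end flow looks like the sketch below. The config file name is illustrative, and sno-ready's invocation is assumed to mirror the other scripts; see the sections that follow for details on each step:

# 1. generate the bootable ISO (day-1 operators and tunings baked in)
./sno-iso.sh config-sno130.yaml
# 2. mount the ISO as virtual media via Redfish and boot the node from it
./sno-install.sh config-sno130.yaml
# 3. apply the remaining day-2 configurations once the cluster is up
./sno-day2.sh config-sno130.yaml
# 4. validate that all vDU-required tunings and operators are in place
./sno-ready.sh config-sno130.yaml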

Dependencies

Some software and tools need to be installed before running the scripts:

Configuration

Prepare config.yaml to fit your lab environment; here is an example:

cluster:
  domain: outbound.vz.bos2.lab
  name: sno148
  #optional: set NTP servers 
  #ntps:
    #- 0.rhel.pool.ntp.org
    #- 1.rhel.pool.ntp.org
host:
  hostname: sno148.outbound.vz.bos2.lab
  interface: ens1f0
  mac: b4:96:91:b4:9d:f0
  ipv4:
    enabled: true
    dhcp: false
    ip: 192.168.58.48
    dns: 
      - 192.168.58.15
    gateway: 192.168.58.1
    prefix: 25
    machine_network_cidr: 192.168.58.0/25
    #optional, default 10.128.0.0/14
    #cluster_network_cidr: 10.128.0.0/14
    #optional, default 23
    #cluster_network_host_prefix: 23
  ipv6:
    enabled: false
    dhcp: false
    ip: 2600:52:7:58::48
    dns: 
      - 2600:52:7:58::15
    gateway: 2600:52:7:58::1
    prefix: 64
    machine_network_cidr: 2600:52:7:58::/64
    #optional, default fd01::/48
    #cluster_network_cidr: fd01::/48
    #optional, default 64
    #cluster_network_host_prefix: 64
  vlan:
    enabled: false
    name: ens1f0.58
    id: 58
  disk: /dev/nvme0n1

cpu:
  isolated: 2-31,34-63
  reserved: 0-1,32-33

proxy:
  enabled: false
  http:
  https:
  noproxy:

pull_secret: ./pull-secret.json
ssh_key: /root/.ssh/id_rsa.pub

bmc:
  address: 192.168.13.148
  username: Administrator
  password: dummy

iso:
  address: http://192.168.58.15/iso/agent-148.iso

By default, the following tunings and operators will be enabled during day 1 (the installation phase):

  • Workload partitioning
  • SNO boot accelerate
  • Kdump service/config
  • crun (4.13+)
  • rcu_normal (4.14+)
  • sriov_kernel (4.14+)
  • sync_time_once (4.14+)
  • Local Storage Operator
  • PTP Operator
  • SR-IOV Network Operator

You can turn the day-1 operations on or off and specify the desired versions in the config file under the day1 section.
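
As an illustration only, a day1 section might look like the sketch below; the key names here are hypothetical, so check config.yaml.sample in this repo for the actual schema and defaults:

day1:
  # hypothetical toggles; consult config.yaml.sample for the real key names
  workload_partition: true
  kdump: true
  crun: true
  operators:
    local-storage:
      enabled: true
    ptp:
      enabled: true
    sriov:
      enabled: true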

In some cases you may want to include more customizations for the cluster during day 1. Create a folder named extra-manifests and put those CRs (Custom Resources) inside before you run sno-iso.sh; the script will copy them into the ISO image. See the advanced usage of the configurations.
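
For example (the CR file name below is illustrative, and the folder is assumed to live in the working directory from which you run the script):

# drop your custom CRs into extra-manifests before generating the ISO
mkdir -p extra-manifests
cp 99-my-custom-machineconfig.yaml extra-manifests/
# sno-iso.sh will copy the CRs into the generated ISO image
./sno-iso.sh config-sno130.yaml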

Get other sample configurations.

Generate ISO image

You can run sno-iso.sh [config file] to generate a bootable ISO image so that you can boot the node from the BMC console to install SNO. By default, stable-4.12 will be downloaded and installed if no version is specified.

# ./sno-iso.sh -h
Usage: ./sno-iso.sh [config file] [ocp version]
config file and ocp version are optional, examples:
- ./sno-iso.sh sno130.yaml              equals: ./sno-iso.sh sno130.yaml stable-4.12
- ./sno-iso.sh sno130.yaml 4.12.10

Prepare a configuration file by following the example in config.yaml.sample           
-----------------------------------
# content of config.yaml.sample
...
-----------------------------------
Example to run it: ./sno-iso.sh config-sno130.yaml   

Demo

Boot node from ISO image

Once you have generated the ISO image, you can boot the node from it in your preferred way, and OCP will be installed automatically.

A helper script sno-install.sh is available in this repo to boot the node from the ISO and trigger the installation automatically, assuming you have an HTTP server (http://192.168.58.15/iso in our case) hosting the ISO image.
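
For example, you can publish the generated ISO on that HTTP server; the ISO file name and the web server's document root below are illustrative:

# copy the generated ISO into the HTTP server's document root
scp agent-130.iso root@192.168.58.15:/var/www/html/iso/
# iso.address in the config file must then point at the resulting URL,
# e.g. http://192.168.58.15/iso/agent-130.iso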

Define your BMC info and ISO location in the configuration file first:

bmc:
  address: 192.168.14.130
  username: Administrator
  password: Redhat123!
  kvm_uuid:

iso:
  address: http://192.168.58.15/iso/agent-130.iso
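
Optionally, you can sanity-check that the BMC answers Redfish requests with the credentials above before running the script (illustrative; the exact resource layout varies by vendor):

# list the systems exposed through the standard Redfish entry point
curl -k -u Administrator:Redhat123! https://192.168.14.130/redfish/v1/Systems/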

Then run it:

# ./sno-install.sh 
Usage : ./sno-install.sh config-file
Example : ./sno-install.sh config-sno130.yaml

Demo

Day2 operations

Some CRs are not supported as day-1 operations during the installation phase, including PerformanceProfile (supported as day 1 from 4.13 onward); those can or shall be applied as day-2 operations once the SNO is deployed.

# ./sno-day2.sh -h
Usage: ./sno-day2.sh [config.yaml]
config.yaml is optional, will use config.yaml in the current folder if not being specified.

You can turn day-2 operations on or off with the configuration in the day2 section.
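
After sno-day2.sh completes (and the node has rebooted, if required), you can spot-check that the expected CRs are in place, for example:

# check the PerformanceProfile and the Tuned CRs applied for the vDU profile
oc get performanceprofile
oc get tuned -n openshift-cluster-node-tuning-operator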

Demo

Validation

After applying all day-2 operations, the node may be rebooted once. Check whether all vDU-required tunings and operators are in place.
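
The output below comes from the readiness check; the invocation shown here is assumed to mirror the other scripts (config file optional, defaulting to config.yaml):

# ./sno-ready.sh config-sno148.yaml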

NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.14.2    True        False         8d      Cluster version is 4.14.2

NAME                          STATUS   ROLES                         AGE   VERSION
sno148.outbound.vz.bos2.lab   Ready    control-plane,master,worker   8d    v1.27.6+f67aeb3

NAME                                                      AGE
local-storage-operator.openshift-local-storage            8d
ptp-operator.openshift-ptp                                8d
sriov-network-operator.openshift-sriov-network-operator   8d

Checking node:
[+]Node                                                        ready     

Checking cluster operators:
[+]cluster operator authentication                             healthy   
[+]cluster operator cloud-controller-manager                   healthy   
[+]cluster operator cloud-credential                           healthy   
[+]cluster operator config-operator                            healthy   
[+]cluster operator dns                                        healthy   
[+]cluster operator etcd                                       healthy   
[+]cluster operator ingress                                    healthy   
[+]cluster operator kube-apiserver                             healthy   
[+]cluster operator kube-controller-manager                    healthy   
[+]cluster operator kube-scheduler                             healthy   
[+]cluster operator kube-storage-version-migrator              healthy   
[+]cluster operator machine-approver                           healthy   
[+]cluster operator machine-config                             healthy   
[+]cluster operator marketplace                                healthy   
[+]cluster operator monitoring                                 healthy   
[+]cluster operator network                                    healthy   
[+]cluster operator node-tuning                                healthy   
[+]cluster operator openshift-apiserver                        healthy   
[+]cluster operator openshift-controller-manager               healthy   
[+]cluster operator operator-lifecycle-manager                 healthy   
[+]cluster operator operator-lifecycle-manager-catalog         healthy   
[+]cluster operator operator-lifecycle-manager-packageserver   healthy   
[+]cluster operator service-ca                                 healthy   

Checking all pods:
[+]No failing pods.                                                      

Checking required machine configs:
[+]mc 02-master-workload-partitioning                          exists    
[+]mc 06-kdump-enable-master                                   exists    
[-]kdump blacklist_ice is not enabled in config-sno148.yaml              
[+]mc container-mount-namespace-and-kubelet-conf-master        exists    
[+]mc 04-accelerated-container-startup-master                  not exist 
[+]MachineConfig 99-crio-disable-wipe-master                   exists    

Checking machine config pool:
[+]mcp master                                                  updated and not degraded

Checking required performance profile:
[+]PerformanceProfile openshift-node-performance-profile exists.           
[+]topologyPolicy is single-numa-node                                    
[+]realTimeKernel is enabled                                             

Checking required tuned:
[+]Tuned performance-patch                                     exists    

Checking SRIOV operator status:
[+]sriovnetworknodestate sync status                           succeeded 

Checking PTP operator status:
[+]Ptp linuxptp-daemon                                         ready     

Checking chronyd.service:
[-]ptpconfig is not enabled in config-sno148.yaml.                       
[+]chronyd service                                             active    
[+]chronyd service                                             enabled   

Checking openshift monitoring:
[+]Grafana                                                     not enabled
[+]AlertManager                                                enabled   
[+]PrometheusK8s retention                                     24h       

Checking openshift capabilities:
[+](cluster capability)operator marketplace                    enabled   
[+](cluster capability)operator node-tuning                    enabled   
[+](cluster capability)operator console                        disabled  

Checking network diagnostics:
[+]Network diagnostics                                         disabled  

Checking Operator hub:
[+]Catalog community-operators                                 disabled  
[+]Catalog redhat-marketplace                                  disabled  

Checking /proc/cmdline:
[+]systemd.cpu_affinity presents: systemd.cpu_affinity=0,1,32,33           
[+]isolcpus presents: isolcpus=managed_irq,2-31,34-63                    
[+]Isolated cpu in cmdline: 2-31,34-63 matches with the ones in performance profile: 2-31,34-63           
[+]Reserved cpu in cmdline: 0,1,32,33 matches with the ones in performance profile: 0-1,32-33           

Checking RHCOS kernel:
[+]Node kernel                                                 realtime  

Checking kdump.service:
[-]kdump service                                               not active
[+]kdump service                                               enabled   

Checking InstallPlans:
[+]All InstallPlans have been approved or auto-approved.                 

Checking container runtime:
[+]Container runtime                                           crun      

Completed the checking.

Demo

Demos

sno-iso

sno-install

sno-day2

sno-ready
