ComputeCanada/puppet-magic_castle
Puppet Magic Castle

This repo contains the Puppet environment and the classes that are used to define the roles of the instances in a Magic Castle cluster.

Roles are attributed to instances based on their tags. For each tag, a list of classes to include is defined. This mechanism is explained in section magic_castle::site.

The parameters of the classes can be customized by defining values in the hieradata. The profile:: sections list the available classes, their roles, and their parameters.

For classes with parameters, a folded default values subsection provides the default value of each parameter as it would be defined in hieradata. For some parameters, the value is displayed as ENC[PKCS7,...]. This corresponds to an encrypted random value generated by bootstrap.sh on the Puppet server's initial boot. These values are stored in /etc/puppetlabs/code/environment/data/bootstrap.yaml - a file also created on the Puppet server's initial boot.

magic_castle::site

parameters

  • all (Array[String]): List of classes that are included by all instances
  • tags (Hash[Array[String]]): Mapping of tags to classes - instances that have the tag include the classes
  • enable_chaos (Boolean): Shuffle the class inclusion order - used for debugging purposes
default values
magic_castle::site::all:
  - profile::base
  - profile::consul
  - profile::users::local
  - profile::sssd::client
  - profile::metrics::node_exporter
  - swap_file
magic_castle::site::tags:
  dtn:
    - profile::globus
    - profile::nfs::client
    - profile::freeipa::client
    - profile::rsyslog::client
  login:
    - profile::fail2ban
    - profile::cvmfs::client
    - profile::slurm::submitter
    - profile::ssh::hostbased_auth::client
    - profile::nfs::client
    - profile::freeipa::client
    - profile::rsyslog::client
  mgmt:
    - mysql::server
    - profile::freeipa::server
    - profile::metrics::server
    - profile::metrics::slurm_exporter
    - profile::rsyslog::server
    - profile::squid::server
    - profile::slurm::controller
    - profile::freeipa::mokey
    - profile::slurm::accounting
    - profile::accounts
    - profile::users::ldap
  node:
    - profile::cvmfs::client
    - profile::gpu
    - profile::jupyterhub::node
    - profile::slurm::node
    - profile::ssh::hostbased_auth::client
    - profile::ssh::hostbased_auth::server
    - profile::metrics::slurm_job_exporter
    - profile::nfs::client
    - profile::freeipa::client
    - profile::rsyslog::client
  nfs:
    - profile::nfs::server
    - profile::cvmfs::alien_cache
  proxy:
    - profile::jupyterhub::hub
    - profile::reverse_proxy
    - profile::freeipa::client
    - profile::rsyslog::client
  efa:
    - profile::efa
example 1: enabling CephFS client in a complete Magic Castle cluster
magic_castle::site::tags:
  cephfs:
    - profile::ceph::client

Requires adding the cephfs tag in main.tf to all instances that should mount the Ceph filesystem.
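As an illustration, the tag could be added to the instances map in main.tf like this (instance names, types, and counts are examples only):

```hcl
instances = {
  mgmt  = { type = "p4-6gb",   tags = ["puppet", "mgmt", "nfs"] }
  login = { type = "p2-3gb",   tags = ["login", "proxy", "public", "cephfs"] }
  node  = { type = "c2-7.5gb", tags = ["node", "cephfs"], count = 2 }
}
```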

example 2: barebone Slurm cluster with external LDAP authentication
magic_castle::site::all:
  - profile::base
  - profile::consul
  - profile::sssd::client
  - profile::users::local
  - swap_file

magic_castle::site::tags:
  mgmt:
    - profile::slurm::controller
    - profile::nfs::server
  login:
    - profile::slurm::submitter
    - profile::nfs::client
  node:
    - profile::slurm::node
    - profile::nfs::client
    - profile::gpu

profile::accounts

This class configures two services to bridge LDAP users, Slurm accounts and users' folders in filesystems. The services are:

  • mkhome: monitors new uid entries in the slapd access logs and creates the corresponding /home and, optionally, /scratch folders.
  • mkproject: monitors new gid entries in the slapd access logs and creates the corresponding /project folders and Slurm accounts when the group name matches the project regex.

parameters

  • project_regex (String): Regex identifying FreeIPA groups that require a corresponding Slurm account
  • skel_archives (Array[Struct[{filename => String[1], source => String[1]}]]): Archives extracted in each FreeIPA user's home when created
default values
profile::accounts::project_regex: '(ctb|def|rpp|rrg)-[a-z0-9_-]*'
profile::accounts::skel_archives: []
example
profile::accounts::project_regex: '(slurm)-[a-z0-9_-]*'
profile::accounts::skel_archives:
  - filename: hss-programing-lab-2022.zip
    source: https://github.com/ComputeCanada/hss-programing-lab-2022/archive/refs/heads/main.zip
  - filename: hss-training-topic-modeling.tar.gz
    source: https://github.com/ComputeCanada/hss-training-topic-modeling/archive/refs/heads/main.tar.gz

optional dependencies

This class works at its full potential if these classes are also included:

profile::base

This class installs packages, creates files, and installs services that have not yet justified the creation of a class of their own but are very useful to Magic Castle cluster operations.

parameters

  • version (String): Current version number of Magic Castle
  • admin_email (String): Email of the cluster administrator, used to send logs and report cluster-related issues
default values
profile::base::version: '13.0.0'
profile::base::admin_email: ~ #undef
example
profile::base::version: '13.0.0-rc.2'
profile::base::admin_email: "you@email.com"

dependencies

When profile::base is included, these classes are included too:

profile::base::azure

This class ensures the Microsoft Azure Linux Guest Agent is not installed, as it tends to interfere with Magic Castle configuration. The class also installs the Azure udev storage rules that would normally be provided by the Linux Guest Agent.

profile::base::etc_hosts

This class ensures that each instance declared in Magic Castle's main.tf has an entry in /etc/hosts. The IP addresses, FQDNs, and short hostnames are taken from the terraform.instances data structure provided by /etc/puppetlabs/data/terraform_data.yaml.

profile::base::powertools

This class ensures the DNF Powertools repo is enabled when using EL8. For all other EL versions, this class does nothing.

profile::ceph::client

Ceph is a free and open-source software-defined storage platform that provides object storage, block storage, and file storage built on a common distributed cluster foundation. reference

This class installs Ceph packages, and configures and mounts a CephFS share.

parameters

  • share_name (String): CephFS share name
  • access_key (String): CephFS share access key
  • export_path (String): Path of the share as exported by the monitors
  • mon_host (Array[String]): List of Ceph monitor hostnames
  • mount_binds (Array[String]): List of share folders that will be bind-mounted under /
  • mount_name (String): Name to give to the share once mounted under /mnt
  • binds_fcontext_equivalence (String): SELinux file context equivalence for the share
default values
profile::ceph::client::mount_binds: []
profile::ceph::client::mount_name: 'cephfs01'
profile::ceph::client::binds_fcontext_equivalence: '/home'
example
profile::ceph::client::share_name: "your-project-shared-fs"
profile::ceph::client::access_key: "MTIzNDU2Nzg5cHJvZmlsZTo6Y2VwaDo6Y2xpZW50OjphY2Nlc3Nfa2V5"
profile::ceph::client::export_path: "/volumes/_nogroup/"
profile::ceph::client::mon_host:
  - 192.168.1.3:6789
  - 192.168.2.3:6789
  - 192.168.3.3:6789
profile::ceph::client::mount_binds:
  - home
  - project
  - software
profile::ceph::client::mount_name: 'cephfs'
profile::ceph::client::binds_fcontext_equivalence: '/home'

profile::consul

Consul is a service networking platform developed by HashiCorp. reference

This class installs Consul and configures the service. An instance becomes a Consul server agent if its local IP address is declared in profile::consul::servers. Otherwise, it becomes a Consul client agent.

parameters

  • servers (Array[String]): IP addresses of the Consul servers
default values
profile::consul::servers: "%{alias('terraform.tag_ip.puppet')}"
example
profile::consul::servers:
  - 10.0.1.2
  - 10.0.1.3
  - 10.0.1.4

dependencies

When profile::consul is included, these classes are included too:

profile::consul::puppet_watch

This class configures a Consul watch that restarts the Puppet agent when triggered. It is used mainly by Terraform to restart all Puppet agents across the cluster when the hieradata source files uploaded by Terraform are updated.

dependencies

When profile::consul::puppet_watch is included, this class is included too:

profile::cvmfs::client

The CernVM File System (CVMFS) provides a scalable, reliable and low-maintenance software distribution service. It was developed to assist High Energy Physics (HEP) collaborations to deploy software on the worldwide-distributed computing infrastructure used to run data processing applications. CernVM-FS is implemented as a POSIX read-only file system in user space (a FUSE module). Files and directories are hosted on standard web servers and mounted in the universal namespace /cvmfs. reference

This class installs the CVMFS client and configures its repositories.

parameters

  • quota_limit (Integer): Instance local cache directory soft quota (MB)
  • repositories (Array[String]): List of CVMFS repositories to mount
  • alien_cache_repositories (Array[String]): List of CVMFS repositories that need an alien cache
default values
profile::cvmfs::client::quota_limit: 4096
profile::cvmfs::client::repositories:
  - pilot.eessi-hpc.org
  - software.eessi.io
  - cvmfs-config.computecanada.ca
  - soft.computecanada.ca
profile::cvmfs::client::alien_cache_repositories: [ ]
example
profile::cvmfs::client::quota_limit: 8192
profile::cvmfs::client::repositories:
  - atlas.cern.ch
profile::cvmfs::client::alien_cache_repositories:
  - grid.cern.ch

dependencies

When profile::cvmfs::client is included, these classes are included too:

profile::cvmfs::local_user

This class configures a cvmfs local user. This guarantees a consistent UID and GID for user cvmfs across the cluster when using CVMFS Alien Cache.

parameters

  • cvmfs_uid (Integer): cvmfs user id
  • cvmfs_gid (Integer): cvmfs group id
  • cvmfs_group (String): cvmfs group name
default values
profile::cvmfs::local_user::cvmfs_uid: 13000004
profile::cvmfs::local_user::cvmfs_gid: 8000131
profile::cvmfs::local_user::cvmfs_group: "cvmfs-reserved"

profile::cvmfs::alien_cache

This class determines the location of the CVMFS alien cache.

parameters

  • alien_fs_root (String): Shared filesystem where the alien cache will be created
  • alien_folder_name (String): Alien cache folder name
default values
profile::cvmfs::alien_cache::alien_fs_root: "/scratch"
profile::cvmfs::alien_cache::alien_folder_name: "cvmfs_alien_cache"

profile::efa

This class installs the Elastic Fabric Adapter drivers on an AWS instance with an EFA network interface. reference

parameters

  • version (String): EFA driver version
default values
profile::efa::version: 'latest'
example
profile::efa::version: '1.30.0'

profile::fail2ban

Fail2ban is an intrusion prevention software framework. Written in the Python programming language, it is designed to prevent brute-force attacks. reference

This class installs and configures fail2ban.

parameters

  • ignoreip (Array[String]): List of IP addresses that can never be banned (compatible with CIDR notation)

Refer to puppet-fail2ban for more parameters to configure.

default values
profile::fail2ban::ignoreip: []
example
profile::fail2ban::ignoreip:
  - 132.203.0.0/16
  - 10.0.0.0/8

dependencies

When profile::fail2ban is included, these classes are included too:

profile::freeipa

FreeIPA is a free and open source identity management system. FreeIPA is the upstream open-source project for Red Hat Identity Management. reference

This class configures the instance as either a FreeIPA client or a FreeIPA server based on the value of profile::freeipa::client::server_ip. If this value matches the instance's local IP address, the server class is included - profile::freeipa::server; otherwise, the client class is included - profile::freeipa::client.

dependencies

When profile::freeipa is included, these classes can be included too:

profile::freeipa::base

This class configures files and services that are common to FreeIPA client and FreeIPA server.

parameters

  • domain_name (String): FreeIPA primary domain
default values
profile::freeipa::base::domain_name: "%{alias('terraform.data.domain_name')}"

profile::freeipa::client

This class installs packages, and configures the files and services of a FreeIPA client.

parameters

  • server_ip (String): FreeIPA server IP address
default values

By default, the FreeIPA server IP address corresponds to the local IP address of the first instance with the tag mgmt.

profile::freeipa::client::server_ip: "%{alias('terraform.tag_ip.mgmt.0')}"

profile::freeipa::server

This class configures files and services of a FreeIPA server.

parameters

  • id_start (Integer): Starting user and group id number
  • admin_password (String): Password of the FreeIPA admin account
  • ds_password (String): Password of the directory server
  • hbac_services (Array[String]): Names of services to control with HBAC rules
default values
profile::freeipa::server::id_start: 60001
profile::freeipa::server::admin_password: ENC[PKCS7,...]
profile::freeipa::server::ds_password: ENC[PKCS7,...]
profile::freeipa::server::hbac_services: ["sshd", "jupyterhub-login"]
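As an illustration, the defaults could be overridden in hieradata like this (values are examples only):

```yaml
profile::freeipa::server::id_start: 70001
profile::freeipa::server::hbac_services: ["sshd"]
```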

profile::freeipa::mokey

Mokey is a web application that provides self-service user account management tools for FreeIPA. reference

This class installs Mokey, configures its files, and manages its service.

parameters

  • password (String): Password of the Mokey table in MariaDB
  • port (Integer): Mokey internal web server port
  • enable_user_signup (Boolean): Allow users to create an account on the cluster
  • require_verify_admin (Boolean): Require a FreeIPA admin to enable a Mokey-created account before it can be used
  • access_tags (Array[String]): HBAC rule access tags for users created via Mokey self-signup
default values
profile::freeipa::mokey::password: ENC[PKCS7,...]
profile::freeipa::mokey::port: 12345
profile::freeipa::mokey::enable_user_signup: true
profile::freeipa::mokey::require_verify_admin: true
profile::freeipa::mokey::access_tags: "%{alias('profile::users::ldap::access_tags')}"
example
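A possible configuration disabling self-signup (illustrative values, not defaults):

```yaml
profile::freeipa::mokey::enable_user_signup: false
profile::freeipa::mokey::require_verify_admin: false
```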

profile::gpu

This class installs and configures the NVIDIA GPU drivers if an NVIDIA GPU is detected. The class configures the nvidia-persistenced and nvidia-dcgm daemons when the GPU is connected via PCI passthrough, or configures nvidia-gridd when dealing with an NVIDIA vGPU.

For PCI passthrough, the class installs the latest CUDA drivers available in NVIDIA's yum repos. For vGPU, the driver source is cloud-provider specific and has to be specified via either profile::gpu::install::vgpu::rpm::source for RPMs or profile::gpu::install::vgpu::bin::source for a binary installer.
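For example, a vGPU driver source could be provided in hieradata like this (the URL is hypothetical and must point at the driver package supplied by your cloud provider):

```yaml
# hypothetical URL - replace with your provider's vGPU driver package
profile::gpu::install::vgpu::rpm::source: "https://example.com/nvidia-vgpu-kmod.rpm"
```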

profile::jupyterhub::hub

JupyterHub is a multi-user server for Jupyter Notebooks. It is designed to support many users by spawning, managing, and proxying many singular Jupyter Notebook servers. reference

This class installs and configures the hub part of JupyterHub.

parameters

  • register_url (String): URL that links to the registration page. An empty string means no visible link.
  • reset_pw_url (String): URL that links to the password reset page. An empty string means no visible link.
default values
profile::jupyterhub::hub::register_url: "https://mokey.%{lookup('terraform.data.domain_name')}/auth/signup"
profile::jupyterhub::hub::reset_pw_url: "https://mokey.%{lookup('terraform.data.domain_name')}/auth/forgotpw"
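For instance, both links can be hidden by setting the URLs to empty strings:

```yaml
profile::jupyterhub::hub::register_url: ""
profile::jupyterhub::hub::reset_pw_url: ""
```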

dependency

When profile::jupyterhub::hub is included, this class is included too:

profile::jupyterhub::node

This class installs and configures the single-user notebook part of JupyterHub.

dependency

When profile::jupyterhub::node is included, these classes are included too:

profile::metrics::node_exporter

Prometheus is a free software application used for event monitoring and alerting. It records metrics in a time series database built using an HTTP pull model, with flexible queries and real-time alerting. reference

This class configures a Prometheus exporter that exports server usage metrics, for example CPU and memory usage. It should be included on every instance of the cluster.

dependencies

When profile::metrics::node_exporter is included, these classes are included too:

profile::metrics::slurm_job_exporter

This class configures a Prometheus exporter that exports the Slurm compute node metrics, for example:

  • job memory usage
  • job memory max
  • job memory limit
  • job core usage total
  • job process count
  • job threads count
  • job power gpu

This exporter needs to run on compute nodes.

parameter

  • version (String): Version of the Slurm job exporter to install
default values
profile::metrics::slurm_job_exporter::version: '0.0.10'

dependency

When profile::metrics::slurm_job_exporter is included, this class is included too:

  • [profile::consul](#profileconsul)

profile::metrics::slurm_exporter

This class configures a Prometheus exporter that exports the Slurm scheduling metrics, for example:

  • allocated nodes
  • allocated gpus
  • pending jobs
  • completed jobs

This exporter typically runs on the Slurm controller server, but it can run on any server with a functional Slurm command-line installation.

profile::nfs

Network File System (NFS) is a distributed file system protocol [...] allowing a user on a client computer to access files over a computer network much like local storage is accessed. reference

This class instantiates either an NFS client or an NFS server. If profile::nfs::client::server_ip matches the instance's local IP address, the server class is included - profile::nfs::server; otherwise, the client class is included - profile::nfs::client.

profile::nfs::client

This class installs NFS and configures the client to mount all shares exported by a single NFS server identified by its IP address.

parameters

  • server_ip (String): IP address of the NFS server
default values
profile::nfs::client::server_ip: "%{alias('terraform.tag_ip.nfs.0')}"

dependency

When profile::nfs::client is included, these classes are included too:

  • nfs (client_enabled => true)

profile::nfs::server

This class installs NFS and configures an NFS server that exports all provided devices. The class also makes sure that devices sharing a common export name form an LVM volume group, exported as a single LVM logical volume formatted as XFS.

If the size of a volume associated with an NFS server device is expanded after the initial configuration, the class will not expand the LVM volume automatically. These operations currently have to be done manually.

parameters

  • devices (Hash[String, Array[String]]): Mapping between NFS shares and the devices to export
default values
profile::nfs::server::devices: "%{alias('terraform.volumes.nfs')}"
example
profile::nfs::server::devices:
  home:
    - /dev/disk/by-id/b0b686f6-62c8-11ee-8c99-0242ac120002
    - /dev/disk/by-id/b65acc52-62c8-11ee-8c99-0242ac120002
  scratch:
    - bfd50252-62c8-11ee-8c99-0242ac120002
  project:
    - c3b99e00-62c8-11ee-8c99-0242ac120002

dependency

When profile::nfs::server is included, these classes are included too:

  • nfs (server_enabled => true)

profile::reverse_proxy

Caddy is an extensible, cross-platform, open-source web server written in Go. [...] It is best known for its automatic HTTPS features. reference

This class installs and configures Caddy as a reverse proxy to expose Magic Castle cluster internal services to the Internet.

parameters

  • domain_name (String): Domain name corresponding to the main registered DNS A record
  • main2sub_redir (String): Subdomain to redirect to when hitting the domain name directly. Empty means no redirect.
  • subdomains (Hash[String, String]): Subdomain names used to create vhosts to internal HTTP endpoints
  • remote_ips (Hash[String, Array[String]]): List of allowed IP addresses per subdomain. Undef means no restrictions.
default values
profile::reverse_proxy::domain_name: "%{alias('terraform.data.domain_name')}"
profile::reverse_proxy::subdomains:
  ipa: "ipa.int.%{lookup('terraform.data.domain_name')}"
  mokey: "%{lookup('terraform.tag_ip.mgmt.0')}:%{lookup('profile::freeipa::mokey::port')}"
  jupyter: "https://127.0.0.1:8000"
profile::reverse_proxy::main2sub_redir: "jupyter"
profile::reverse_proxy::remote_ips: {}
example
profile::reverse_proxy::remote_ips:
  ipa:
    - 132.203.0.0/16

profile::rsyslog::base

Rsyslog is an open-source software utility used on UNIX and Unix-like computer systems for forwarding log messages in an IP network. reference

This class installs rsyslog and launches the service.

profile::rsyslog::client

This class installs and configures the rsyslog service to forward the instance's logs to the rsyslog servers. The rsyslog servers are discovered by the instance via Consul.

dependencies

When profile::rsyslog::client is included, these classes are included too:

profile::rsyslog::server

This class installs and configures the rsyslog service to receive forwarded logs from all rsyslog clients in the cluster.

dependencies

When profile::rsyslog::server is included, these classes are included too:

profile::slurm::base

The Slurm Workload Manager, formerly known as Simple Linux Utility for Resource Management, or simply Slurm, is a free and open-source job scheduler for Linux and Unix-like kernels, used by many of the world's supercomputers and computer clusters. reference

MUNGE (MUNGE Uid 'N' Gid Emporium) is an authentication service for creating and validating credentials. It is designed to be highly scalable for use in an HPC cluster environment. reference

This class installs the base packages and config files that are essential to all Slurm roles. It also installs and configures the Munge service.

parameters

  • cluster_name (String): Name of the cluster
  • munge_key (String): Base64-encoded Munge key
  • slurm_version (Enum[20.11, 21.08, 22.05, 23.02]): Slurm version to install
  • os_reserved_memory (Integer): Memory (MB) reserved for the operating system on the compute nodes
  • suspend_time (Integer): Idle time (seconds) for nodes to become eligible for suspension
  • resume_timeout (Integer): Maximum time permitted (seconds) between a node resume request and its availability
  • force_slurm_in_path (Boolean): Enable Slurm's bin path in all users' (local and LDAP) PATH environment variable
  • enable_x11_forwarding (Boolean): Enable Slurm's built-in X11 forwarding capabilities
  • config_addendum (String): Additional parameters included at the end of slurm.conf
default values
profile::slurm::base::cluster_name: "%{alias('terraform.data.cluster_name')}"
profile::slurm::base::munge_key: ENC[PKCS7, ...]
profile::slurm::base::slurm_version: '21.08'
profile::slurm::base::os_reserved_memory: 512
profile::slurm::base::suspend_time: 3600
profile::slurm::base::resume_timeout: 3600
profile::slurm::base::force_slurm_in_path: false
profile::slurm::base::enable_x11_forwarding: true
profile::slurm::base::config_addendum: ''
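A sketch of an override using config_addendum (the slurm.conf parameters shown are illustrative; any valid slurm.conf line can be appended):

```yaml
profile::slurm::base::suspend_time: 7200
profile::slurm::base::config_addendum: |
  PriorityType=priority/multifactor
  PriorityWeightAge=1000
```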

dependencies

When profile::slurm::base is included, these classes are included too:

profile::slurm::accounting

This class installs and configures the Slurm database daemon - slurmdbd. It also installs and configures MariaDB for slurmdbd to store its tables.

parameters

  • password (String): Password used by slurmdbd to connect to MariaDB
  • admins (Array[String]): List of Slurm administrator usernames
  • accounts (Hash[String, Hash]): Slurm account names and their specifications
  • users (Hash[String, Array[String]]): Associations between usernames and accounts
  • options (Hash[String, Any]): Additional cluster-wide Slurm accounting options
  • dbd_port (Integer): slurmdbd service listening port
default values
profile::slurm::accounting::password: ENC[PKCS7, ...]
profile::slurm::accounting::admins: ["centos"]
profile::slurm::accounting::accounts: {}
profile::slurm::accounting::users: {}
profile::slurm::accounting::options: {}
profile::slurm::accounting::dbd_port: 6869
example

Example of the definition of Slurm accounts and their association with users:

profile::slurm::accounting::admins: ['oppenheimer']

profile::slurm::accounting::accounts:
  physics:
    Fairshare: 1
    MaxJobs: 100
  engineering:
    Fairshare: 2
    MaxJobs: 200
  humanities:
    Fairshare: 1
    MaxJobs: 300

profile::slurm::accounting::users:
  oppenheimer: ['physics']
  rutherford: ['physics', 'engineering']
  sartre: ['humanities']

Each username in profile::slurm::accounting::users and profile::slurm::accounting::admins has to correspond to an LDAP or a local user. Refer to profile::users::ldap::users and profile::users::local::users for more information.

dependencies

When profile::slurm::accounting is included, these classes are included too:

profile::slurm::controller

This class installs and configures the Slurm controller daemon - slurmctld.

parameters

  • autoscale_version (String): Version of the Slurm Terraform cloud autoscale software to install
  • tfe_token (String): Terraform Cloud API token. Required to enable autoscaling.
  • tfe_workspace (String): Terraform Cloud workspace id. Required to enable autoscaling.
  • tfe_var_pool (String): Variable name in the Terraform Cloud workspace that controls the autoscaling pool
  • selinux_context (String): SELinux context for jobs (Slurm > 20.11)
default values
profile::slurm::controller::autoscale_version: "0.4.0"
profile::slurm::controller::selinux_context: "user_u:user_r:user_t:s0"
profile::slurm::controller::tfe_token: ""
profile::slurm::controller::tfe_workspace: ""
profile::slurm::controller::tfe_var_pool: "pool"
example
profile::slurm::controller::tfe_token: "7bf4bd10-1b62-4389-8cf0-28321fcb9df8"
profile::slurm::controller::tfe_workspace: "ws-jE6Lq2hggNPyRJcJ"

For more information on how to configure Slurm autoscaling with Terraform cloud, refer to the Terraform Cloud section of Magic Castle manual.

dependencies

When profile::slurm::controller is included, these classes are included too:

profile::software_stack

This class configures the initial shell profile that users load on login and the default set of Lmod modules to be loaded. The software stack selected depends on the Puppet fact software_stack, which is set by the Magic Castle Terraform variable software_stack.

  • min_uid (Integer): Minimum UID value required to load the software environment init script on login
  • initial_profile (String): Path to the shell script initializing the software environment variables
  • extra_site_env_vars (Hash[String, String]): Map of environment variables exported before sourcing the profile shell scripts
  • lmod_default_modules (Array[String]): List of Lmod default modules
default values
profile::software_stack::min_uid: "%{alias('profile::freeipa::server::id_start')}"

computecanada software stack

profile::software_stack::initial_profile: "/cvmfs/soft.computecanada.ca/config/profile/bash.sh"
profile::software_stack::extra_site_env_vars: {}
profile::software_stack::lmod_default_modules:
    - gentoo/2020
    - imkl/2020.1.217
    - gcc/9.3.0
    - openmpi/4.0.3

eessi software stack

profile::software_stack::initial_profile: "/cvmfs/software.eessi.io/versions/2023.06/init/Magic_Castle/bash"
profile::software_stack::extra_site_env_vars: {}
profile::software_stack::lmod_default_modules:
  - GCC

dependencies

When profile::software_stack is included, these classes are included too:

profile::squid::server

Squid is a caching and forwarding HTTP web proxy. It has a wide variety of uses, including speeding up a web server by caching repeated requests. reference

This class installs and configures the Squid service. Its main usage is to act as an HTTP cache for the CVMFS clients in the cluster.

parameters

  • port (Integer): Squid service listening port
  • cache_size (Integer): Amount of disk space (MB) used for caching
  • cvmfs_acl_regex (Array[String]): List of allowed CVMFS stratums as regexes
default values
profile::squid::server::port: 3128
profile::squid::server::cache_size: 4096
profile::squid::server::cvmfs_acl_regex:
  - '^(cvmfs-.*\.computecanada\.ca)$'
  - '^(cvmfs-.*\.computecanada\.net)$'
  - '^(.*-cvmfs\.openhtc\.io)$'
  - '^(cvmfs-.*\.genap\.ca)$'
  - '^(.*\.cvmfs\.eessi-infra\.org)$'
  - '^(.*s1\.eessi\.science)$'
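An illustrative override increasing the cache size and restricting the ACL to a single stratum (the regex shown is an example; note that defining cvmfs_acl_regex in hieradata replaces the default list rather than extending it):

```yaml
profile::squid::server::cache_size: 8192
profile::squid::server::cvmfs_acl_regex:
  - '^(cvmfs-.*\.computecanada\.ca)$'
```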

dependencies

When profile::squid::server is included, these classes are included too:

profile::sssd::client

The System Security Services Daemon is software originally developed for the Linux operating system that provides a set of daemons to manage access to remote directory services and authentication mechanisms. reference

This class configures external authentication domains.

parameters

  • domains (Hash[String, Any]): Config dictionary of domains that can authenticate
  • access_tags (Array[String]): List of host tags that domain users can connect to
  • deny_access (Optional[Boolean]): Deny access to the domains on the host including this class; if undef, access is defined by the tags
default values
profile::sssd::client::domains: { }
profile::sssd::client::access_tags: ['login', 'node']
profile::sssd::client::deny_access: ~
example
profile::sssd::client::domains:
  MyOrgLDAP:
    id_provider: ldap
    auth_provider: ldap
    ldap_schema: rfc2307
    ldap_uri:
      - ldaps://server01.ldap.myorg.net
      - ldaps://server02.ldap.myorg.net
      - ldaps://server03.ldap.myorg.net
    ldap_search_base: ou=People,dc=myorg,dc=net
    ldap_group_search_base: ou=Group,dc=myorg,dc=net
    ldap_id_use_start_tls: False
    cache_credentials: true
    ldap_tls_reqcert: never
    access_provider: ldap
    filter_groups: 'cvmfs-reserved'

The domain keys in this example are indicative and may not all be mandatory; some SSSD domain keys might also be missing. Refer to the domain sections of the sssd.conf manual for more information.

profile::ssh::base

This class optimizes the SSH server daemon (sshd) configuration to achieve an A+ audit score on https://www.sshaudit.com/.

profile::ssh::known_hosts

This class populates the file /etc/ssh/ssh_known_hosts with the ed25519 hostkeys of the cluster's instances, using data provided by Terraform.

profile::ssh::hostbased_auth::client

This class allows an instance to connect with SSH, using hostbased authentication, to instances that include profile::ssh::hostbased_auth::server.

profile::ssh::hostbased_auth::server

This class enables SSH hostbased authentication on the instance including it.

parameter

  • shosts_tags (Array[String]): Tags of instances that can connect to this server using hostbased authentication
default values
profile::ssh::hostbased_auth::server::shosts_tags: ['login', 'node']

dependency

When profile::ssh::hostbased_auth::server is included, this class is included too:

profile::users::ldap

This class allows the definition of FreeIPA users directly in YAML. The alternatives are the FreeIPA command line, the FreeIPA web interface, or Mokey.

parameters

  • users (Hash[profile::users::ldap_user]): Dictionary of users to be created in LDAP
  • access_tags (Array[String]): List of 'tag:service' pairs that LDAP users can connect to

A profile::users::ldap_user is defined as a dictionary with the following keys:

  • groups (Array[String], required): List of groups the user has to be part of
  • public_keys (Array[String], optional): List of SSH authorized keys for the user
  • passwd (String, optional): User's password
  • manage_password (Boolean, optional): If enabled, agents verify that the password hashes match

By default, Puppet manages the LDAP users' passwords and changes them in LDAP if their hashes no longer match what is prescribed in YAML. To disable this feature, add manage_password: false to the user(s) definition.

default values
profile::users::ldap::users:
  'user':
    count: "%{alias('terraform.data.nb_users')}"
    passwd: "%{alias('terraform.data.guest_passwd')}"
    groups: ['def-sponsor00']
    manage_password: true

profile::users::ldap::access_tags: ['login:sshd', 'node:sshd', 'proxy:jupyterhub-login']

If profile::users::ldap::users is present in more than one YAML file in the hierarchy, all hashes for that parameter will be combined using Puppet's deep merge strategy.

examples

A batch of 10 users, user01 to user10, can be defined as:

profile::users::ldap::users:
  user:
    count: 10
    passwd: user.password.is.easy.to.remember
    groups: ['def-sponsor00']

A single user alice which can authenticate with SSH public key only can be defined as:

profile::users::ldap::users:
  alice:
    groups: ['def-sponsor00']
    public_keys: ['ssh-rsa ... user@local', 'ssh-ed25519 ...']

Allowing LDAP users to connect to the cluster only via JupyterHub:

profile::users::ldap::access_tags: ['proxy:jupyterhub-login']

profile::users::local

This class allows the definition of local users outside of FreeIPA realm.

A local user's home is local to the machine where it is created and can be found at the root of the filesystem, i.e. /username. Local users are the only type of users in Magic Castle allowed to be sudoers.

parameters

  • users (Hash[profile::users::local_user]): Dictionary of users to be created locally

A profile::users::local_user is defined as a dictionary with the following keys:

  • groups (Array[String], required): List of groups the user has to be part of
  • public_keys (Array[String], required): List of SSH authorized keys for the user
  • sudoer (Boolean, optional): If enabled, the user can sudo without a password
  • selinux_user (String, optional): SELinux context for the user
  • mls_range (String, optional): MLS range for the user
default values
profile::users::local::users:
  "%{alias('terraform.data.sudoer_username')}":
    public_keys: "%{alias('terraform.data.public_keys')}"
    groups: ['adm', 'wheel', 'systemd-journal']
    sudoer: true

If profile::users::local::users is present in more than one YAML file in the hierarchy, all hashes for that parameter will be combined using Puppet's deep merge strategy.

examples

A local user bob can be defined in hieradata as:

profile::users::local::users:
  bob:
    groups: ['group1', 'group2']
    public_keys: ['ssh-rsa...', 'ssh-dsa']
    # sudoer: false
    # selinux_user: 'unconfined_u'
    # mls_range: 's0-s0:c0.c1023'