This repository has been archived by the owner on Jan 10, 2024. It is now read-only.

Update main docs
actions-user committed Nov 23, 2023
1 parent 87f2003 commit bfd2250
Showing 107 changed files with 18,593 additions and 0 deletions.
4 changes: 4 additions & 0 deletions static/docs/main/.buildinfo
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 585fa367c30d1c78afe54041dff68a3d
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file added static/docs/main/_images/beowulf_architecture.png
38 changes: 38 additions & 0 deletions static/docs/main/_sources/contents/background.rst.txt
@@ -0,0 +1,38 @@
==========
Background
==========

Warewulf is based on the original Beowulf Cluster design (and thus the
name: soft\ **WARE** implementation of the beo\ **WULF**).

The `Beowulf Cluster <https://en.wikipedia.org/wiki/Beowulf_cluster>`_
design was developed in 1996 by Dr. Thomas Sterling and Dr. Donald
Becker at NASA. The architecture is defined as a group of similar
compute worker nodes all connected together using standard commodity
equipment on a private network segment. The control node (historically
referred to as the "master" or "head" node) is dual homed (has two
network interface cards) with one of these network interface cards
attached to the upstream public network and the other connected to a
private network which connects to all of the compute worker nodes (as
seen in the figure below).

.. image:: beowulf_architecture.png
:alt: Beowulf architecture

This simple topology is the foundation for creating every scalable HPC
cluster resource. Even today, almost 30 years after the inception of
this architecture, it remains the baseline architecture on which
traditional HPC systems are built.

Other considerations for a working HPC-type cluster are storage,
scheduling and resource management, monitoring, interactive systems,
etc. For smaller systems, many of these requirements can be managed
from a single control node, but as the system scales, it may need
groups of nodes dedicated to these different services.

Warewulf is capable of building everything from simple, turnkey HPC
clusters to large, complex, multi-purpose computing clusters and
next-generation computing platforms.

Anytime a cluster of systems is needed, Warewulf is your tool!
178 changes: 178 additions & 0 deletions static/docs/main/_sources/contents/configuration.rst.txt
@@ -0,0 +1,178 @@
======================
Warewulf Configuration
======================

The default installation of Warewulf will put all of the configuration
files into ``/etc/warewulf/``. In that directory, you will find the
primary configuration files needed by Warewulf.

warewulf.conf
=============

The Warewulf configuration exists as follows in the current version of
Warewulf (4.3.0):

.. code-block:: yaml

   WW_INTERNAL: 43
   ipaddr: 192.168.200.1
   netmask: 255.255.255.0
   network: 192.168.200.0
   warewulf:
     port: 9873
     secure: false
     update interval: 60
     autobuild overlays: true
     host overlay: true
     syslog: false
   dhcp:
     enabled: true
     range start: 192.168.200.50
     range end: 192.168.200.99
     systemd name: dhcpd
   tftp:
     enabled: true
     systemd name: tftp
   nfs:
     enabled: true
     export paths:
       - path: /home
         export options: rw,sync
         mount options: defaults
         mount: true
       - path: /opt
         export options: ro,sync,no_root_squash
         mount options: defaults
         mount: false
     systemd name: nfs-server

Generally you can leave this file as is, as long as you set the
appropriate networking information. Specifically, check the following
settings:

* ``ipaddr``: This is the control node's networking interface
  connecting to the cluster's **PRIVATE** network. This configuration
  must match the host's network IP address for the cluster's private
  interface.

* ``netmask``: Similar to ``ipaddr``, this is the subnet mask for the
  cluster's **PRIVATE** network, and it must also match the host's
  subnet mask for the cluster's private interface.

* ``dhcp:range start`` and ``dhcp:range end``: This specifies the
  range of addresses you want DHCP to use. The range must exist within
  the network defined above; if it falls outside that network,
  failures will occur.

.. note::

   The network configuration listed above assumes the network layout
   described in the :doc:`background` section of the documentation.
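The constraint that the DHCP range must fall inside the configured network can be checked mechanically before restarting services. A minimal sketch using Python's standard ``ipaddress`` module (the function name is illustrative; the values are the example ones from the ``warewulf.conf`` above):

```python
import ipaddress

def dhcp_range_ok(network: str, netmask: str, start: str, end: str) -> bool:
    """Return True if [start, end] is an ordered range inside the network."""
    net = ipaddress.ip_network(f"{network}/{netmask}")
    lo, hi = ipaddress.ip_address(start), ipaddress.ip_address(end)
    return lo in net and hi in net and lo <= hi

# Values from the example warewulf.conf above:
print(dhcp_range_ok("192.168.200.0", "255.255.255.0",
                    "192.168.200.50", "192.168.200.99"))  # → True
```

A range that strays outside the private network (e.g., an end address of ``192.168.201.99``) would fail this check, which is exactly the failure mode the note above warns about.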

The other configuration options are usually not touched, but they are
explained as follows:

* ``*:enabled``: This enables or disables Warewulf's control of an
  external service. Disabling it is useful if you want to manage that
  service directly.

* ``*:systemd name``: This is so Warewulf can control some of the
  host's services. For the distributions that we've built and tested
  this on, these will require no changes.

* ``warewulf:port``: This is the port that the Warewulf web server
  listens on. It is recommended not to change this so there is no
  misalignment with the nodes' expectations of how to contact the
  Warewulf service.

* ``warewulf:secure``: When ``true``, this limits the Warewulf server
  to only respond to runtime overlay requests originating from a
  privileged port. This prevents non-root users from requesting the
  runtime overlay, which may contain sensitive information.

  When ``true``, ``wwclient`` uses TCP port 987.

  Changing this option requires rebuilding node overlays and rebooting
  compute nodes to configure them to use a privileged port.

* ``warewulf:update interval``: This defines the frequency (in
  seconds) with which the Warewulf client on the compute node fetches
  overlay updates.

* ``warewulf:autobuild overlays``: This determines whether per-node
  overlays will automatically be rebuilt, e.g., when an underlying
  overlay is changed.

* ``warewulf:host overlay``: This determines whether the special
  ``host`` overlay is applied to the Warewulf server during
  configuration. (The host overlay is used to configure the dependent
  services.)

* ``warewulf:syslog``: This determines whether Warewulf server logs go
  to syslog or are written directly to a log file (e.g.,
  ``/var/log/warewulfd.log``).

* ``nfs:export paths``: Warewulf will automatically set up these NFS
  exports on the server if you wish for it to do so.
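For example, exporting an additional directory is a matter of adding one more entry under ``nfs:export paths``. The ``/scratch`` path and its options below are illustrative, not defaults:

```yaml
nfs:
  enabled: true
  export paths:
    - path: /home
      export options: rw,sync
      mount options: defaults
      mount: true
    # illustrative additional export; adjust the path and options for your site
    - path: /scratch
      export options: rw,sync,no_root_squash
      mount options: defaults
      mount: true
  systemd name: nfs-server
```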

nodes.conf
==========

The ``nodes.conf`` file is the primary database file for all compute
nodes. It is a flat-text YAML configuration file that is managed by
the ``wwctl`` command, though some sites manage the compute nodes and
infrastructure via configuration management. Because this file is flat
text and very lightweight, managing the node configurations is easy no
matter what your configuration paradigm is.

For the purposes of this document, we will not go into the detailed
format of this file, as it is recommended to edit it with the
``wwctl`` command.

.. note::

   This configuration is not written at install time; the first time
   you run ``wwctl``, this file will be generated if it does not
   already exist.
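For orientation only, a minimal ``nodes.conf`` entry looks roughly like the following. This is a hedged sketch: the node name ``n0000`` and its address are placeholders, and the field names are assumed to mirror those shown in ``defaults.conf`` below, so the file generated on your system may differ:

```yaml
nodeprofiles:
  default:
    comment: A profile shared by the listed nodes (illustrative)
nodes:
  n0000:
    profiles:
      - default
    network devices:
      default:
        ipaddr: 192.168.200.50
```

In practice, the equivalent result is achieved with ``wwctl`` rather than by hand-editing this file.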

defaults.conf
=============

The ``defaults.conf`` file configures default values used when none
are specified. For example: if a node does not have a "runtime
overlay" specified, the respective value from ``defaultnode`` is
used. If a network device does not specify a "device," the device
value of the ``dummy`` device is used.

If ``defaults.conf`` does not exist, the following values, compiled
into Warewulf at build time, are used:

.. code-block:: yaml

   ---
   defaultnode:
     runtime overlay:
       - generic
     system overlay:
       - wwinit
     kernel:
       args: quiet crashkernel=no vga=791 net.naming-scheme=v238
     init: /sbin/init
     root: initramfs
     ipxe template: default
     profiles:
       - default
     network devices:
       dummy:
         device: eth0
         type: ethernet
         netmask: 255.255.255.0

There should never be a need to change this file: all site-local
parameters should be specified using either nodes or profiles.
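For instance, rather than changing the built-in kernel arguments in ``defaults.conf``, the same setting can be carried on a profile so that it applies to all nodes using that profile. The fragment below is a hedged illustration: it assumes the profile-level fields mirror the ``defaultnode`` structure shown above, and the added ``console=ttyS0`` argument is only an example:

```yaml
nodeprofiles:
  default:
    kernel:
      args: quiet crashkernel=no vga=791 net.naming-scheme=v238 console=ttyS0
```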

Directories
===========

The ``/etc/warewulf/ipxe/`` directory contains *text/template* files
that are used by the Warewulf configuration process to configure the
iPXE service.
