Building NUT for in‐place upgrades or non‐disruptive tests

NOTE: Since PR https://github.com/networkupstools/nut/pull/1845, a variant of this article is tracked in the https://github.com/networkupstools/nut/blob/master/INSTALL.nut document. That copy is expected to reflect the procedures for the accompanying NUT source code revision more closely than this version-agnostic Wiki article does.

Overview

Since late 2022/early 2023 the NUT codebase supports "in-place" builds, which try their best to discover the configuration of an earlier build (configuration and run-time paths, and the OS accounts involved; possibly the exact configuration, if it is stored in the deployed binaries).

This optional mode is primarily intended for several use-cases:

  • Test recent GitHub "master" branch or proposed PR to see if it solves a practical problem for a particular user;
  • Replace an existing deployment, e.g. if OS-provided packages deliver obsolete code, to use newer NUT locally in "production mode". (In such cases ideally get your distribution, NAS vendor, etc. to provide current NUT -- and benefit from a better integrated and tested product)

Note that "just testing" often involves building the codebase and new drivers or tools in question, and running them right from the build workspace (without installing into the system and so risking an unpredictable-stability state). In case of testing new drivers, note you would need to stop the normally running instances to free up the communications resources (USB/serial ports, etc.), run the new driver in data-dump mode, and restart the normal systems operations. Such tests still benefit from matching the build configuration to what is already deployed, in order to request same configuration files and system access permissions (e.g. to device nodes for physical-media ports involved, and to read the production configuration files).

Pre-requisites

The https://github.com/networkupstools/nut/blob/master/docs/config-prereqs.txt document (also available as a rendered page on the NUT website) details the tools and dependencies that were installed into NUT CI build environments, which now cover many operating systems. This should provide a decent starting point for the build on yours (PRs to update the document are welcome!)
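
For instance, on a Debian/Ubuntu-like system a minimal tool set might be installed along these lines (the package names here are an assumption and differ on other platforms; treat config-prereqs.txt as the authoritative list, especially for optional driver and documentation dependencies):

:; sudo apt-get update
:; sudo apt-get install -y build-essential autoconf automake libtool \
    pkg-config git libusb-1.0-0-dev libssl-dev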

Note that unlike distribution tarballs, Git sources do not include a configure script and some other files -- these should be generated by running autogen.sh (or ci_build.sh that calls it).

Getting the right sources

To build the current tip of development iterations (usually after PR merges that passed CI, reviews and/or other tests), just clone the NUT repository; the "master" branch should get checked out by default (you can also request it explicitly, as in the example below).

If you want to quickly test a particular pull request, see the link at the top of the PR page that says "...wants to merge ... from ..." and copy the proposed-source URL of that "from" part. For example, it may say "jimklimov:issue-1234" and link to "https://github.com/jimklimov/nut/tree/issue-1234". For git-cloning, just paste that URL into the shell and replace the "/tree/" part with the "-b" CLI option, like this:

:; cd /tmp
### Checkout https://github.com/jimklimov/nut/tree/issue-1234
:; git clone https://github.com/jimklimov/nut -b issue-1234

Testing with CI helper

NOTE: this uses the ci_build.sh script to arrange some rituals and settings, in this case primarily to default the choice of drivers to auto-detection of what can be built, and to skip building documentation. Also note that this script supports many other scenarios for CI and developers, managed by BUILD_TYPE and other environment variables, which are not explored here.

An "in-place" testing build would probably go along the lines of:

:; cd /tmp
:; git clone -b master https://github.com/networkupstools/nut
:; cd nut
:; ./ci_build.sh inplace
### Temporarily stop your original drivers
:; ./drivers/nutdrv_qx -a DEVNAME_FROM_UPS_CONF -d1 -DDDDDD # -x override...=... -x subdriver=...
### Can start back your original drivers
### Analyze and/or post back the data-dump

NOTE: to probe a device for which you do not have an ups.conf section yet, you must specify -s name and all config options (including port) on the command line with -x arguments, e.g.:

:; ./drivers/nutdrv_qx -s tempups \
    -d1 -DDDDDD -x port=auto \
    -x vendorid=... -x productid=... \
    -x subdriver=...

Replacing a NUT deployment

While ci_build.sh inplace can be a viable option for preparing local builds, you may want precise control over the configure options (e.g. the choice of required drivers, or enabled documentation).

A sound starting point would be to track down the packaging recipes used by your distribution (e.g. RPM spec or DEB rules files, etc.) to reproduce the same paths if you intend to replace those, and to copy the parameters for the configure script from there -- especially if your system is not currently running NUT v2.8.1 or newer (which embeds this information to facilitate in-place upgrade rebuilds).
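
A hedged sketch of inspecting such a recipe on a Debian/Ubuntu-like system (assuming deb-src repositories are enabled; on RPM-based systems the spec file from the source package serves the same purpose):

:; apt-get source nut
:; less nut-*/debian/rules       # look for ./configure or dh_auto_configure options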

Note that the primary focus of the in-place automated configuration mode is critical run-time options, such as OS user accounts, the configuration location, and state/PID paths; so it alone might not replace the driver binaries that your package put into an obscure location like /lib/nut. It would however install init-scripts or systemd units that refer to the new locations specified by the current build, so such old binaries would just consume disk space but not run.

This goes similarly to a usual build and install from Git:

:; cd /tmp
:; git clone https://github.com/networkupstools/nut
:; cd nut
:; ./autogen.sh
:; ./configure --enable-inplace-runtime # --maybe-some-other-options
:; make -j 4 all && make -j 4 check && sudo make install

Note that make install does not currently handle all the nuances that packaging installation scripts would, such as customizing filesystem object ownership, restarting daemons, etc., or even creating locations like /var/state/ups and /var/run/nut as part of the make target (though e.g. the delivered systemd-tmpfiles configuration can handle that for a large part of the audience) => issue #1298

At this point you should verify that the locations for PID files (e.g. /var/run/nut) and pipe files (e.g. /var/state/ups) exist and that their permissions remain suitable for the NUT run-time user selected by your configuration; then typically stop your original NUT drivers, data-server (upsd) and upsmon, and restart them using the new binaries.
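
A hedged sketch of those checks and the restart on a non-systemd setup (the paths, the "nut" account name and the permission bits below are assumptions; match them to your configure options and OS conventions):

### Create run-time directories if they are missing, writable by the NUT user
:; sudo mkdir -p /var/state/ups /var/run/nut
:; sudo chown nut:nut /var/state/ups /var/run/nut
:; sudo chmod 0770 /var/state/ups /var/run/nut
### Stop the old processes, then start the freshly installed binaries
:; sudo upsmon -c stop ; sudo upsd -c stop ; sudo upsdrvctl stop
:; sudo upsdrvctl start && sudo upsd && sudo upsmon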

Replacing a systemd deployment

For modern Linux distributions with systemd this could go as shown below, re-enabling the services (to create proper symlinks) and getting them started:

:; cd /tmp
:; git clone https://github.com/networkupstools/nut
:; cd nut
:; ./autogen.sh
:; ./configure --enable-inplace-runtime # --maybe-some-other-options
:; make -j 4 all && make -j 4 check && \
   { sudo systemctl stop nut-monitor nut-server || true ; } && \
   { sudo systemctl stop nut-driver.service || true ; } && \
   { sudo systemctl stop nut-driver.target || true ; } && \
   { sudo systemctl stop nut.target || true ; } && \
   sudo make install && \
   sudo systemctl daemon-reload && \
   sudo systemd-tmpfiles --create && \
   sudo systemctl disable nut.target nut-driver.target nut-monitor nut-server nut-driver-enumerator.path nut-driver-enumerator.service && \
   sudo systemctl enable  nut.target nut-driver.target nut-monitor nut-server nut-driver-enumerator.path nut-driver-enumerator.service && \
   { sudo systemctl restart udev || true ; } && \
   sudo systemctl restart nut-driver-enumerator.service nut-monitor nut-server

Note the several attempts to stop old service units -- the unit naming changed from 2.7.4 and older releases, through 2.8.0, and up to the current codebase.
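
A hedged sketch to see which NUT-related unit names (old or new) are actually present on your system before deciding what to stop:

:; systemctl list-units --all 'nut*'
:; systemctl list-unit-files 'nut*'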

You may also have to restart (or reload, if supported) some system services if your updates impact them, like udev for updated USB support (note also PR #1342 regarding the change from a udev.rules to a udev.hwdb file with NUT v2.8.0 or later -- you may have to remove the older file manually).
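
A hedged sketch to locate any older NUT udev rules files before removing them manually (file names and paths vary by distribution and NUT version):

:; ls /etc/udev/rules.d/*nut* /lib/udev/rules.d/*nut* 2>/dev/null
:; sudo udevadm control --reload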

Alternately, if you just want to test a newly built driver -- especially if you added support for new USB VID:PID pairs -- make sure it starts as root (sudo DRIVERNAME -u root ... on the command line, or RUN_AS_USER in ups.conf), so that it does not care much about devfs permissions.
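
A hedged example of such a one-shot run as root (the driver name and the -x options are placeholders; substitute the ones matching your device):

:; sudo ./drivers/usbhid-ups -s tempups -u root -d1 -DDDDDD \
    -x port=auto # -x vendorid=... -x productid=...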

Iterating with a systemd deployment

If you are iterating NUT builds from GitHub, or local development branches, you may get away with shorter constructs to just restart the services (if you know there were no changes to unit file definitions), e.g.:

:; cd /tmp
:; git clone -b master https://github.com/networkupstools/nut
:; cd nut
:; git checkout -b issue-1234 ### your PR branch name, arbitrary
:; ./autogen.sh
:; ./configure --enable-inplace-runtime # --maybe-some-other-options

### Iterate your code changes (e.g. PR draft), build and install with:

:; make -j 4 all && make -j 4 check && \
   sudo make install && \
   sudo systemctl daemon-reload && \
   sudo systemd-tmpfiles --create && \
   sudo systemctl restart nut-driver-enumerator.service nut-monitor nut-server

Hope this helps,
Jim Klimov
