
An all-in-one developer guide for govCMS SaaS


Please update the govCMS/GovCMS wiki on GitHub if you have any corrections or improvements.

GovCMS SaaS project guide

Current as of 16 July 2021.

IMPORTANT: Drupal 7's EOL is 28 November 2022. Drupal 8 reaches its End of Life (EOL) on 2 November 2021. All new site builds should use the latest version of Drupal 9.

While some of this documentation is specific to Drupal 7 (such as certain drush commands), much of it is still relevant. If something isn't working, please update the doco :)


You're reading the doco! Good on you, keep going!

This guide allows you to spin up a new govCMS Drupal 7 SaaS site locally using Docker, then import an existing SaaS site over it for local development. It covers using Windows, Linux or Mac OS, and assumes basic web developer knowledge and experience using a command line.

While anyone can create a new govCMS site via the official distro, importing a govCMS SaaS client site requires appropriate access to the site project repository to have already been granted by the Department of Finance.

This involves having an account created by the govCMS team, with access to the appropriate projects. Your project manager can request this by lodging a support ticket with the GovCMS Support Desk (preferred), or emailing support@govcms.gov.au.

Running commands from this documentation

When running commands listed in these instructions, anything wrapped in ${curly brackets like this} represents a placeholder that you need to swap with your own information, so something like this:

cd ${my-project-name}

would actually be run as:

cd mygreatproject

To help ensure this guide works for people of all operating systems, some commands are shown twice, with the Ahoy version appearing first:

ahoy version:    ahoy nifty shortcut mycommand
full version:    the entire command in its elongated and transparent glory

Ahoy can normally be used for MacOS and Linux, but the full versions should work for everyone.
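
For example, once Ahoy is installed you can list the shortcuts available for a project, and inspect the full commands behind them, from the project root:

# List the Ahoy shortcuts defined for this project (read from .ahoy.yml).
ahoy --help
# See the full docker-compose commands each shortcut maps to.
cat .ahoy.yml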

Be sure to read the 'Known issues and workarounds' section at the end of this document. You might also want to check out the Troubleshooting page, it contains common errors and fixes.

Contents

Official source material

You don't need to read it for this project, but sometimes it's nice to know the background :)

  1. The vanilla GovCMS (7.x-3.x) Distribution is available at GitHub Source and as Public DockerHub images
  2. Those GovCMS images are then customised for Lagoon and GovCMS, and are available at GitHub Source and as Public DockerHub images
  3. Those GovCMS Lagoon images are then retrieved by the govcms-site scaffold repository.

Documentation on debugging Docker for Windows issues lives on the Amazeeio GitHub profile.

Requirements

Version numbers are noted where applicable.

Everyone

  • Docker Desktop v18.09 min - A container management system. You only need the Community Edition, but will need to create a Docker account and note your Docker ID prior to using it.
  • Git v1.8 min - a code version control program. This can be installed either on its own (Mac/Linux) or with Git for Windows, or alternatively with a GUI such as SourceTree. Mac users with Xcode or Xcode Command Line Tools installed may already have Git. Git also comes with Windows Subsystem for Linux (WSL).

Linux/Mac users

  • Ruby - A dynamic programming language; allows running pygmy
  • pygmy v0.9.10 min - A miniature local web server (you might need sudo for this depending on your Ruby configuration) - see the install sketch below this list
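
A minimal install sketch for pygmy, assuming it is installed as a Ruby gem (which is how the 0.x releases were distributed):

# Install pygmy via RubyGems (prefix with sudo if your Ruby setup requires it).
gem install pygmy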

Windows users

  • Windows 10 Pro or Enterprise (Docker can't run on Windows Home edition as it doesn't come with Hyper-V)
  • Amazee Docker for Windows - A set of additional Docker containers related to networking that run in Windows, in the absence of pygmy (it's just a set of containers, not an Amazee-specific version of the program 'Docker for Windows'). Cloning instructions are in the 'Clone the project repo' section.

Optional

  • A package manager (this will vary depending on your OS). Linux ships with one out of the box. Some examples are:
    • Linux: npm (from NodeJS), apt, dpkg (default on Debian) or yum
    • Mac: homebrew, nix, pkgsrc
    • Windows: NodeJS provides npm on Windows
  • Ahoy - A shortcut manager for saving your precious fingers; all Docker shortcuts can be viewed in .ahoy.yml
  • NodeJS - Allows the use of npm - Node Package Manager, to run developer tooling
  • Portainer - A nice web UI for managing Docker containers

Development rules

  • You should create your theme(s) in folders under /themes
  • Tests specific to your site can be committed to the /tests folders
  • The files folder is not (currently) committed to GitLab.
  • Do not make changes to docker-compose.yml, lagoon.yml, .gitlab-ci.yml, .ahoy.yml, README.md, or the Dockerfiles under /.docker - Altering these files will result in the project being unable to deploy to GovCMS SaaS. These files are locked in GitLab, so attempting to push changes will fail anyway.
  • Some files can be extended/overridden by including additional files (see the 'Adding development modules' section)
  • You should never need to change anything inside a Docker container. Directly altering the contents of a container compromises it, meaning when a new instance of the project is spun up, those alterations won't be present.
  • Check for image updates every day! To ensure you are running the latest environment, and therefore that your local work runs in a setup that accurately reflects the production environment, be sure to pull down the latest images from the container registry every day.

Setting up a site

1. Connect to projects.govcms.gov.au

Before you can clone anything from projects.govcms.gov.au, you must have an account created by the GovCMS team. Your project manager can request this by emailing support@govcms.gov.au.

Note: You may experience connectivity issues depending on your internet connection. Government internet connections run traffic through firewalls and gateways/filters, and block certain ports. This may also vary depending on whether connecting to Wifi or a physical LAN (ethernet) connection. SSH access often requires port 22 to be available, and HTTPS often uses port 443.

For this reason, developers must often rely on external connections that do not have these restrictions, such as wireless 4G dongles or mobile hotspotting, to connect to external services.

You can either connect to projects.govcms.gov.au via HTTPS using a Personal Access Token (PAT) or via SSH using SSH keys. Using SSH keys is generally easier, as once set up they automatically authenticate you without needing to enter a password every time.

There are two scenarios where HTTPS is used:

Creating the personal access token is done via the GitLab web interface for projects.govcms.gov.au. The name of the token can be whatever you prefer, and the scope should be set to read_registry. Note that you need to keep a copy of the token when you generate it because you cannot access it later.

Check out the GitLab guides on:

If using SSH, you can test and debug your connectivity with this:

ssh -Tv git@projects.govcms.gov.au

The first time you connect to a new Host, you may get a message like this. Just enter 'yes' to continue:

The authenticity of this host has not been verified, do you want to add this host to the list of known hosts? 

If you create a new SSH key pair and it does NOT use the default SSH key names id_rsa and id_rsa.pub, you may need to update your ~/.ssh/config file to use those particular keys for this particular connection. If SSH still refuses the connection, try using your existing id_rsa key.
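
A minimal sketch of such an entry, assuming a hypothetical key pair named govcms_rsa:

# Append a host-specific entry to your SSH config (the key name is a placeholder).
cat >> ~/.ssh/config <<'EOF'
Host projects.govcms.gov.au
  User git
  IdentityFile ~/.ssh/govcms_rsa
  IdentitiesOnly yes
EOF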

2. Clone the project repo

Once you're all connected, navigate to wherever you want to keep the project and clone it down. This command will pull down the project and place it in a directory called '${project_name}':

SSH

git clone git@projects.govcms.gov.au:${profile_name}/${project_name}.git ${project_name}

HTTPS

git clone https://projects.govcms.gov.au/${profile_name}/${project_name}.git ${project_name}

Mac users only

Confirm the project root path is in Docker's file sharing config. File sharing is required for volume mounting if the project lives outside of the /Users directory.

Windows users only

In addition to the project repo, also clone the amazee-docker-windows repo - this can live outside the project root.

git clone https://github.com/amazeeio/amazeeio-docker-windows amazee-docker-windows

This is a public repo, so you don't need a PAT to access it. Note that the SSH version of the URL will fail unless you have an account on Amazee's GitHub profile.

3. Build it and run it

Make sure you don't have anything running on port 80 on the host machine (like an Apache web server from XAMPP, WAMP, LAMP etc., or Skype).

If you do, you may need to change the port it runs on or turn it off. Windows users may also need to keep port 443 available for the haproxy Docker container.

You can check what is running on these ports with:

Mac/Linux

sudo lsof -i -P -n | grep LISTEN

Windows

netstat -anb | findstr :80
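
If a local Apache turns out to be holding port 80, a couple of hedged examples for stopping it (adjust for however Apache is installed on your machine):

sudo apachectl stop          # macOS built-in Apache
sudo service apache2 stop    # Debian/Ubuntu
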
  1. Ensure Docker is running. You can do this by running docker container ps in your command line. If you have to start it, it may take a while.

  2. Navigate into the project root

  3. Start the local network

    Mac/Linux

    pygmy up

    MacOS and Ubuntu 18 users may need to append --no-resolver if the site fails to load with an 'Unable to connect' error.

    Windows (This bit must be run from within the amazee-docker-windows root)

    i. Create a new network for Docker. This only needs to be done once after Docker is installed; you can then start, stop, rebuild and destroy containers without setting up another:

    docker network create amazeeio-network

    ii. Start up the amazee-docker-windows containers:

    docker-compose up -d

    If Docker throws an error, run docker-compose restart to reboot your containers. If that doesn't work, you may need to quit Docker altogether and run it as an Administrator. See the Amazeeio GitHub profile for help.

  4. Build and start the Docker containers (must be run from within the project root):

    ahoy up
    docker-compose up -d; docker-compose exec -T cli govcms-deploy

    The govcms-deploy command enables the stage_file_proxy module from within the cli container.

  5. Install the GovCMS Drupal profile via Drush into the new website container (this container is known as 'test'):

    ahoy install
    docker-compose exec -T test drush si -y govcms

    Note: This must be done before importing a backup of another site.

  6. Update the user account password using Drush:

    ahoy drush upwd admin --password='${your-new-password}'
    docker-compose exec -T test drush upwd admin --password='${your-new-password}'
  7. Log in to your newly installed Drupal site using the domain this command outputs (it should be ${project_name}.docker.amazee.io; using the full login URL it outputs will prompt a password reset):

    ahoy login
    docker-compose exec -T test drush uli

Image updates

Once your site is running correctly, you should regularly check for updates to the images it runs on. To check for updates:

GovCMS 9 (SaaS)

ahoy pull
docker image ls --format \"{{.Repository}}:{{.Tag}}\" | grep govcms/ | grep -v none | xargs -n1 docker pull | cat

GovCMS 7

ahoy pull
docker image ls --format \"{{.Repository}}:{{.Tag}}\" | grep govcmslagoon/ | grep -v none | xargs -n1 docker pull | cat

If you are still getting old images, you can also try these commands (sometimes images are not shown as tagged):

docker-compose pull --include-deps
docker pull govcms/php:10.x-latest && 
docker pull govcms/nginx-drupal:10.x-latest && 
docker pull govcms/test:10.x-latest && 
docker pull govcms/govcms:10.x-latest && 
docker pull govcms/mariadb-drupal:10.x-latest && 
docker pull govcms/solr:10.x-latest &&
docker pull govcms/av:latest
Detailed instructions (and other environments, e.g. Windows):

Note: Make sure to change between govcmslagoon8 and govcmslagoon accordingly.

All: (stops and deletes the running containers)

    docker-compose down

Linux/Mac: (update stored images)

    ahoy pull

Windows (cmd): (update stored images)

    docker image ls --format "{{.Repository}}:{{.Tag}}" | findstr "govcms/" | for /f %f in ('findstr /V "none"') do docker pull %f 

Windows (git bash): (update stored images)

    alias docker="winpty -Xallow-non-tty -Xplain docker"
    docker image ls --format \"{{.Repository}}:{{.Tag}}\" | grep govcms/ | grep -v none | awk "{print $1}" | xargs -n1 docker pull | cat
    # Change alias back (use "unalias docker", if you don't want to use winpty)
    alias docker="winpty docker"

Windows (powershell): (update stored images)

    docker image ls --format "{{.Repository}}:{{.Tag}}" | Select-String -Pattern "govcms/" | Select-String -Pattern "none" -NotMatch | ForEach-Object -Process {docker pull $_}

All: (build the new govcms containers now from new images)

    docker-compose build --no-cache

If any updates are found, you'll need to rebuild your containers with docker-compose up -d --build.

Importing an existing site

You can install a base govCMS site from this project, then import the files and database of your production site.

Important!

  • For imported backups to work, your local site must already have a vanilla govCMS site installed; otherwise you'll get a 503 error when loading the site. This is because the installation process includes scripts that must be run in multiple containers. See Step 5 of 'Build it and run it'.
  • GovCMS Dashboard databases are automatically sanitized, so your normal login won't work! See the Notes section at the bottom for more information.

Importing a database

You can either import a local database file via Drush, or rest your fingers while Docker retrieves the latest production site database backup and imports it for you. The ability to take direct MySQL backups on demand is not currently supported.

Importing a local database dump

  1. Download a mysql container backup from the govCMS Dashboard. The resulting file should be saved into your default downloads location.

  2. Navigate to the downloaded file location, and extract the MySQL container backup from the tar file:

    tar -xf ${backup_name}.tar
  3. Navigate to the project location, and import the database into the test container

    ahoy mysql-import ${path/to/my/local-database}.sql
    docker-compose exec test bash -c 'drush sql-drop' && docker-compose exec -T test bash -c 'drush sql-cli' < ${path/to/my/local-database}.sql

    NOTE: Importing via ahoy can be painfully slow. It seems to run much faster if you run the import from inside the cli container:

    ahoy cli
    drush sql-cli < mydatabase.sql
    exit
    

    Mac/Linux users: If running ahoy mysql-import gives you an error, the command may not exist in your .ahoy.yml file. If this is the case, just run the docker-compose version of the command instead.

    Windows users: You may get this error: 'The input device is not a TTY. If you are using mintty, try prefixing the command with winpty.' If this appears, just run the command again as winpty ${command}

Using pv to view database import progress

Tested in Windows (Git Bash) / Linux:

Install the pv utility in Docker (use either the mariadb or cli container commands, to match the SQL pipe command you use below):

    docker-compose exec -T -u 0 mariadb apk update
    docker-compose exec -T -u 0 mariadb apk add pv
    docker-compose exec -T -u 0 cli apk update
    docker-compose exec -T -u 0 cli apk add pv

Import Gzipped SQL file (myfile.sql.gz) [Linux/Mac/Windows git bash version]

    # Store the path to the SQL file.
    sql_file=/path/to/database.sql.gz
    # Get size of file in bytes.
    size=$(wc -c $sql_file | awk '{print $1}')
    # If using windows alias - don't use winpty for this.
    unalias docker-compose
    cat $sql_file | docker-compose exec -T mariadb sh -c "pv -f -s $size | gunzip | mysql -u drupal -pdrupal drupal"
    # OR: (direct into mariadb container is faster)
    # cat $sql_file | docker-compose exec -T cli bash -c "pv -f -s $size | gunzip | drush sql-cli"
    # Example of the progress output you will see:
    #   10 MB 0:00:00 [2.28MiB/s] [>                                  ]  0% ETA 0:15:00
    # If using windows alias - add back winpty for this.
    alias docker-compose="winpty docker-compose"

Import Gzipped SQL file (myfile.sql.gz) [Windows cmd version]

    set sql_file=./path/relative/to/project/database.sql.gz
    FOR %I in (%sql_file%) do set size=%~zI
    FOR /F "delims=" %F IN ("%sql_file%") DO SET "sql_file_full_path=%~fF"
    type %sql_file_full_path% | docker-compose exec -T mariadb sh -c "pv -f -s %size% | gunzip | mysql -u drupal -pdrupal drupal"

Import Plain SQL file:

    # Store the path to the SQL file.
    sql_file=/path/to/database.sql
    # Get size of file in bytes.
    size=$(wc -c $sql_file | awk '{print $1}')
    # If using windows alias - don't use winpty for this.
    unalias docker-compose
    cat $sql_file | docker-compose exec -T mariadb sh -c "pv -f -s $size | mysql -u drupal -pdrupal drupal"
    # OR: (direct into mariadb container is faster)
    # cat $sql_file | docker-compose exec -T cli bash -c "pv -f -s $size | drush sql-cli"
    # Example of the progress output you will see:
    #   10 MB 0:00:00 [2.28MiB/s] [>                                  ]  0% ETA 0:15:00
    # If using windows alias - add back winpty for this.
    alias docker-compose="winpty docker-compose"
  4. Flush the caches and refresh any asset locations

    ahoy drush cc all
    docker-compose exec -T test drush cc all

Importing the latest backup of the current production website database from the GitLab Container Registry

This option lets Docker retrieve the backup container for you (which will probably take a while). Using this approach requires rebuilding the containers, which destroys the current local site's database.

GitLab automatically saves the nightly production site database backups as containers in a private Container Registry. This allows the entire container to be restored including all settings and configuration, databases, tables etc, rather than just the database dump itself.

NOTE: If you uncomment the MARIADB line in the .env file, it will enable the Build job in the GitLab CI/CD pipeline, and will test your changes against the PROD site database, rather than the feature site you are deploying to. This can lead to failures if config entities are deleted on the source site but not the destination.

To retrieve the live site's most recent database container backup:

  1. Log into the Docker Container Registry with your GitLab Personal Access Token (your account password won't work)

    docker login gitlab-registry-production.govcms.amazee.io -u ${your GitLab login email} -p ${your Personal Access Token}

    Windows users: If you get an error, try prefixing the command above with winpty

  2. There are 2 ways to retrieve the database image backup.

    2.1 In the file .env, uncomment the last line:

    MARIADB_DATA_IMAGE=gitlab-registry-production.govcms.amazee.io/${profile_name}/${project_name}/mariadb-drupal-data

    OR

    2.2 Manually download the image backup using:

    docker pull gitlab-registry-production.govcms.amazee.io/${profile_name}/${project_name}/mariadb-drupal-data
  3. Rebuild the containers

    ahoy up
    docker-compose up -d --build

This will import the latest database backup image from the Amazee Docker Registry over the top of the existing site.

If it works, your local site URL will load a copy of your production site. If this fails, you should just see a fresh govCMS site. If this happens, you can try manually pulling the latest database image backup, then rebuilding the containers:

docker pull gitlab-registry-production.govcms.amazee.io/${profile_name}/${project_name}/mariadb-drupal-data
docker-compose up -d --build

Once complete, you'll need to use Drush to make your user account an administrator, enable stage_file_proxy and other development modules (See Step 6 of 'Build it and Run it') etc.
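
For example (these commands are repeated from the sections referenced above; swap in your own account details):

# Re-enable Stage File Proxy after the import.
docker-compose exec -T test drush en -y stage_file_proxy
# Grant your account the administrator role.
docker-compose exec -T test drush urol 'administrator' ${account email, user ID or 'username in quotes'}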

Exporting a database

Exporting locally

Most of the time, exporting locally in Docker can be done with:

    ahoy mysql-dump ${path/to/my/local-database}.sql
    docker-compose exec cli bash -c 'drush sql-dump --ordered-dump' > ${path/to/my/local-database}.sql
Other ways to export a database

Tested in Windows (Git Bash) / Linux:

Export a plain SQL file:

    # If using windows alias - don't use winpty for this
    unalias docker-compose
    docker-compose exec -T cli drush sql-dump > /path/to/file.sql
    # If using windows alias - add back winpty for this.
    alias docker-compose="winpty docker-compose"

Export a Gzipped SQL file (myfile.sql.gz):

    # If using windows alias - don't use winpty for this
    unalias docker-compose
    docker-compose exec -T cli drush sql-dump | gzip > /path/to/file.sql.gz
    # If using windows alias - add back winpty for this.
    alias docker-compose="winpty docker-compose"

Importing files

Files can be included in several ways.

  1. Use the stage_file_proxy module to dynamically download any files needed to load each page you view on your Docker site (preferred method); or
  2. Download a dump of the files into the local file system - this method uses Docker Volumes and will take up far more space locally depending on the size of the filebase.

To set up Stage File Proxy

The Stage File Proxy module is already included in govCMS SaaS; it just needs to be enabled. This can be done either via Drush or by running a govcms script.

Drush:

ahoy drush en -y stage_file_proxy
docker-compose exec -T test drush en -y stage_file_proxy

govCMS script:

docker-compose exec -T cli govcms-deploy

If you have already imported a SaaS production site database, Stage File Proxy should already have the internal production domain set as the source of the files. No further action required!

If not, you'll need to set it up manually. Doing this through the UI requires your user account to have administrator access.

  1. In a web browser, log into the site with an account that has administrator access
  2. If it's not already enabled, visit ${site-url}/admin/modules and enable the stage_file_proxy module
  3. Visit ${site-url}/admin/config/system/stage_file_proxy
  4. Enter the production site URL into the origin site field and save. You may want to use an internal URL that bypasses any caching systems to ensure you get the latest versions of files (if you have one set up).

To do this through the command line, run:

For Drupal 7:

ahoy drush variable-set stage_file_proxy_origin '${https://myproductionsite.com}'

For Drupal 8:

ahoy drush config-set stage_file_proxy.settings origin '${https://myproductionsite.com}'
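
To confirm what origin is currently set, you can read the value back (config-get and variable-get are the read counterparts of the commands above):

For Drupal 7:

ahoy drush variable-get stage_file_proxy_origin

For Drupal 8:

ahoy drush config-get stage_file_proxy.settings origin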

Once Stage File Proxy is running, it will download and save copies of any assets requested by pages loaded in the Docker site, and save them automatically into the local /files directory on the Host machine. Docker maps this directory as a Volume to a corresponding location inside the containers.

Including a file dump in local Docker Volumes

'Volumes' in Docker are folders on the host machine that are mapped to a corresponding location inside one or more containers. Anything that happens to either the container folder or the host machine folder is immediately replicated in the others.

Altering or adding new Volumes to containers requires updating the project's docker-compose.yml file, which is locked from editing.

Instead, several local volumes already exist for the project, under /files and /tests. Any files added into these directories will be immediately available to the Docker containers.

The /files directory is mapped to ${site-url}/sites/default/files.

e.g. adding a PDF file to ${project-root}/files/myfile.pdf will make that file available under ${project_name}.docker.amazee.io/sites/default/files/myfile.pdf. This is what Stage File Proxy does.
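
A quick way to confirm the mapping, using a hypothetical myfile.pdf (run from the project root):

# Copy a test file into the host-side files volume...
cp ~/Downloads/myfile.pdf ./files/myfile.pdf
# ...then confirm the containers serve it at the mapped path.
curl -I http://${project_name}.docker.amazee.io/sites/default/files/myfile.pdf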

You can add a copy of the entire production site filebase; however, this is a heavy-handed solution when the majority of files included probably won't be required for your development tasks.

Adding development-only modules

For new modules (i.e. ones not included in the govCMS installation profile) to work, they need to be present in several Docker containers. Downloading them via Drush alone won't work, and you may get errors when trying to use them.

To propagate a module across several locations in several containers, we can set up additional Docker Volumes via a docker-compose.override.yml file.

This file is ignored by the GitLab deployment pipeline, so it cannot break anything if/when pushed to the remote repository.

This technique maps an additional local folder to multiple containers, using a Docker Volume.

Note: this technique involves rebuilding your containers, so any work inside them will be destroyed.

  1. Create a new file in the project root called docker-compose.override.yml, and paste in the following code:

    version: '2.3'
    
    services:
      cli:
        volumes:
          - ./dev_modules:/app/sites/default/modules/dev_modules:${VOLUME_FLAGS:-delegated}
      php:
        volumes:
          - ./dev_modules:/app/sites/default/modules/dev_modules:${VOLUME_FLAGS:-delegated}
      test:
        volumes:
          - ./dev_modules:/app/sites/default/modules/dev_modules:${VOLUME_FLAGS:-delegated}
  2. Create a folder called dev_modules in the project root. Remember to add it to .gitignore, so your dev modules don't get saved in the project repository.

  3. Rebuild the Docker containers:

    ahoy down; ahoy build
    docker-compose down; docker-compose up -d --build
  4. Download the desired module to the dev_modules folder inside the test container (alternately, you can manually download modules into dev_modules locally and unzip them there, and they should also appear in the containers):

    ahoy drush dl ${module_name} --destination='/app/sites/default/modules/dev_modules'
    docker-compose exec -T test drush dl ${module_name} --destination='/app/sites/default/modules/dev_modules'

    If you see a message showing a different path, check the Known Issues.

  5. You should be able to enable your new module from the Modules UI (if you are an Admin) or via drush:

    ahoy drush en ${module_name}
    docker-compose exec -T test drush en ${module_name}

Shutting down your computer without losing the work inside your containers

Docker containers will remain intact unless you explicitly delete them. This means you can turn off your development machine without losing your work.

NOTE: Running ahoy down or docker-compose down will DESTROY your containers and any changes within (importantly, the database). Use with caution!!

To stop your containers for later use, run:

ahoy stop
docker-compose stop

To start them again later, run:

ahoy restart
docker-compose start

When the containers are stopped, you can safely shut down your computer without losing work.

Changes made to files outside the containers, such as in /files and /themes, will remain intact if the containers are stopped or deleted.

Pushing commits

Before pushing anything back up to GitLab, you should confirm your local Git user name and email. If these differ from those used by your GitLab account, your commits will show as originating from a different user (but will still be permitted).

You can check your global Git user account details by inspecting user.name and user.email via:

git config --global --list

You can then check your local Git details with:

git config --local --list

You can then specify different user details for specific repositories using this:

git config --local user.name '${your GitLab account username}'
git config --local user.email ${your GitLab account email}

Dumping Variables

In Twig you can use dsm()

In PHP you can use ksm() to dump a variable into a status message, or kint() to dump it at the place of execution.

Advanced Dumping

When dumping deep structures, you may hit Symfony's VarCloner limit, which guarantees only 1 level of depth and up to 2500 items in total.

To increase the minimum depth, the code below can be used (this example uses a depth of 10 levels).

$input = function_to_debug();
$cloner = new \Symfony\Component\VarDumper\Cloner\VarCloner();
$cloner->setMinDepth(10);
$dumper = new \Symfony\Component\VarDumper\Dumper\HtmlDumper();
$output = fopen('php://memory', 'r+b');
$data = $cloner->cloneVar($input);
$dumper->dump($data, $output);
$output = stream_get_contents($output, -1, 0);
$output = \Drupal\devel\Render\FilteredMarkup::create($output);
\Drupal::messenger()->addMessage($output);

Known issues and workarounds

  1. This process only applies to the 7.x-3.x branch of GovCMS
  2. Currently (Nov 2018), all local projects utilise the same LOCALDEV_URL; the URL used is hardcoded. GovCMS is aware and working on a fix. To access different sites, shut down the containers of all projects except the one you want to see at that URL.
  3. The container 'test' cannot have its name changed; renaming it prevents Drupal from connecting to the database for some reason.
  4. When logging into the site for the first time, the 'Reset password' page does not allow resetting the password, complaining Password may only be changed in 24 hours from the last change. See Step 6 of Setup for the workaround.
  5. Attempting to import database dumps from the govCMS Dashboard using ahoy mysql-import ${database.sql} will fail, as currently (10 Dec 2018) the Docker configuration only works with databases called drupal. See 'Importing a database, Step 2' below for the workaround.

Issues running Docker on Windows

  1. Windows users may get build errors when running docker-compose up -d, noting the filepaths listed in the dockerfiles are not valid. This relates to how Windows interprets POSIX filepaths (which are formatted/like/this, as opposed to Windows paths, which\look\like\this).

    The fix is to add this line to the project's .env file: COMPOSE_CONVERT_WINDOWS_PATHS=1.

    Alternately, you can switch Docker between using Windows and Linux containers under Docker's settings. Changing this setting requires Docker to pull down the appropriate container versions, then reboot.

  2. If you have Docker set to start automatically when Windows boots up, you may get an error like this when starting your containers:

    Error starting userland proxy: mkdir /port/tcp:${host-ip}:${host-port}:tcp:${container-ip}:${container-port}: input/output error

    This is a known bug in Docker that relates to Windows Fast Boot. Either disable Fast Boot, or just restart Docker once Windows loads. Annoying but effective.

  3. If you're using Git Bash on Windows as your CLI, you may get an error like this when referencing locations inside Containers, such as when Adding development modules:

    The directory C:/Program Files/Git/app/sites/default/modules/dev_modules does not exist.

    This is a known bug in Docker. Git Bash misinterprets the path you specify in the command; use another CLI to run these commands, like Command Prompt.

  4. If you have Docker set to automatically start when Windows boots, it may not start in Administrator mode, even if you have the program itself set to do so. You'll need to shut it down and manually start it in Admin mode.

  5. Even if you disable the setting 'Start Docker Desktop when you log in', Docker may start at boot regardless. Check your Startup apps list (Ctrl + Shift + Esc > Startup tab) to see if Windows is forcing it to start.

Notes

  • Windows users can find the docker commands listed in this guide inside the .ahoy.yml file.
  • Unless you import a database dump from another site, the out-of-the-box govCMS site will only contain the user 'admin'.
  • GovCMS SaaS sites have the uid 1 account disabled; you can enable it on your local developer site by running the following (Linux/Mac/Windows Git Bash):

(GovCMS 8/9)

```bash
# [docker commands]
# If using Windows git bash / winpty for next line.
unalias docker-compose
docker-compose exec -T cli drush user-information --pipe --uid=1 | xargs | awk '{print $2}' | xargs docker-compose exec -T cli drush user-unblock
# If using Windows git bash / winpty for next line.
alias docker-compose="winpty docker-compose"
# [plain drush commands]
drush user-information --pipe --uid=1 | xargs | awk '{print $2}' | xargs drush user-unblock
```
  • If you import a database dump from a site where your user account is NOT an administrator, you can become one by assigning your account the administrator role:

    ahoy drush urol 'administrator' ${account email, user ID or 'username in quotes'}
    docker-compose exec -T test drush urol 'administrator' ${account email, user ID or 'username in quotes'}

    The super admin user ID will always be '1'.

  • If you import a database from the govCMS Dashboard, it will be automatically sanitized. Emails and passwords will have changed, so to change them you must use the User ID or username.

  • If a Docker build seems to be taking forever, check a password prompt from Docker hasn't opened somewhere requesting access to your hard drive. There's no progress indicator for the build steps, so it can be hard to tell if anything is happening.