
Is a Volume Needed? #3

Open
jeden25 opened this issue May 27, 2015 · 71 comments
Labels
question Usability question, not directly related to an error with the image

Comments

@jeden25
Copy link

jeden25 commented May 27, 2015

Drupal would require some local storage for images and the like, right? So in order for the container to be persistent wouldn't I need to include a volume as well as the database?

If it is needed, I'm confused as to why the official WordPress Dockerfile specifies a volume while the official Drupal Dockerfile does not.

@jonpugh
Copy link

jonpugh commented Jul 8, 2015

Yes, this is the main problem with this image. There should be a volume for the source code files as well so we can add our own Drupal codebase.

Drupal core by itself isn't going to cut it for most people.

@yosifkit
Copy link
Member

yosifkit commented Jul 8, 2015

A VOLUME line is not necessary in the Dockerfile for --volumes-from or -v to work (yes, you would need a -v on the first container for --volumes-from to work on a second container), so docker run -v /local/images/path/:/drupal/path/to/images/ drupal will work fine. But if you have a place where you think a VOLUME should be defined, a PR is welcome.

@tianon
Copy link
Member

tianon commented Jul 8, 2015 via email

@skyred
Copy link
Contributor

skyred commented Aug 5, 2015

D8 uses a different structure now. For example, contributed modules are downloaded to /modules/; in D7, by contrast, contributed modules are located in /sites/*/modules/.

I was thinking about the same question. But there are different ways people would use this image. For example, I can potentially use a Docker D8 image to:

  1. Spin up sandboxes for testing. Then I don't need a volume.
  2. Start a new project. Then I want the whole Drupal directory to be managed by Git (except user files). Whether D7 or D8, it's best practice to put the whole Drupal directory under Git. If we put the whole Drupal directory in a volume, why don't we just use a PHP container and a data container to start with?

@alexanderjulo
Copy link

In my opinion we do need volumes, so that this image is actually usable. We should define mount points for the following directories (in D8):

  • /var/www/html/modules
  • /var/www/html/profiles
  • /var/www/html/themes

These three are no-brainers, they are empty by default (besides a README.txt) and will ensure that the container can be upgraded by just bumping the image version.

We should also include /var/www/html/sites/default/files, which will make sure user content survives.

Much more difficult is the question of how to deal with settings.php and default.settings.php. From what I understand we cannot just define volumes on these files, as Docker would overwrite them with directories, and the Drupal installer expects them to be files in a certain state with certain content.

I'd be interested in any ideas. If we just mount /var/www/html/sites/ or a subdirectory, we also have the problem that the structure is lost and Drupal will either fail to install or refuse installation.
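One possible workaround for the settings.php problem, sketched here as an assumption rather than documented image behavior: Docker will bind-mount a single file, but only if the host path already exists as a file before the container starts; if it doesn't, Docker creates a directory in its place, which is exactly the failure mode described above. Assuming a pre-created ./config/settings.php on the host, a compose entry might look like:

```yaml
# Sketch only: single-file bind mount for settings.php.
# The host file ./config/settings.php must already exist as a *file*;
# otherwise Docker creates a directory at that path instead.
drupal:
    image: drupal:8
    volumes:
        - ./config/settings.php:/var/www/html/sites/default/settings.php
```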

@alexanderjulo
Copy link

Actually, scratch that, you don't need it. You can run this fine without any volumes, by using a drupal:8 as the image for the storage, too. See the following docker-compose.yml for an example. This also fixes issues with settings.php and so on. We can bump the image for drupal without bumping storage-drupal:

drupal:
    image: drupal:8
    volumes_from:
        - storage-drupal
    links:
        - "db:mysql"
    ports:
        - "80:80"
db:
    image: mysql
    volumes_from:
        - storage-mysql
    environment:
        - MYSQL_USER=someuser
        - MYSQL_PASSWORD=thispasswordsucks
        - MYSQL_DATABASE=somedb
        - MYSQL_ROOT_PASSWORD=thispasswordsuckstoo
storage-drupal:
    image: drupal:8
    volumes:
        - /var/www/html/modules
        - /var/www/html/profiles
        - /var/www/html/themes
        - /var/www/html/sites
storage-mysql:
    image: mysql
    volumes:
        - /var/lib/mysql

@eduarddotgg
Copy link

Volume for Drupal's root folder would be great!

@alexanderjulo
Copy link

Then you could not upgrade anymore by upgrading the container, because the drupal core folder would be in the volume.


@eduarddotgg
Copy link

Oh, I see. Then these folders would be great:
/var/www/html/modules
/var/www/html/profiles
/var/www/html/themes
/var/www/html/sites/default/files
that @alexex mentioned before.

@aborilov
Copy link

Can't use this image with Kubernetes, because after every restart the installation process starts again.

@juliencarnot
Copy link

Interesting issue & thread! As a newcomer to Docker, I've been struggling to find definitive answers on data/volumes/volume-container management for Drupal, let alone some kind of benchmark of the different options, so it's quite difficult to determine which one would apply best to my use case... If there's some consensus, adding some pointers to the description page would be great!

@aborilov
Copy link

The problem is that there is no docker-entrypoint.sh that copies /var/www/html to the volume if it is empty, like the one in the WordPress Dockerfile does.

@YvanDaSilva
Copy link

@alexex Your solution does fix the issue of upgrading to a new version.
However, what it does not fix is the ability to sync your data with the host.
/var/www/html/sites still can't be mounted from the host, as it contains default settings that Drupal needs.

So you are still prone to losing your data by removing the data container, or, in the case of k8s users like in the previous comments, losing your pod and with it your data.

Something still needs to be done here, so that Drupal becomes a real containerized application that can survive stop & destroy.

Update: I just noticed that in your example you are using the mariadb and drupal images for your storage ambassador containers and not passing a command to execute, which has the effect of keeping those containers running. It is a good idea to use the same images, though, as they are already pulled and don't need extra space.
You can change this by adding command: bash, or by using another image (busybox, for example).

@ahuffman
Copy link

I'd move the Drupal core download and extraction into an entrypoint.sh (instead of the Dockerfile), with a VOLUME at /var/www/html; as part of the entrypoint, check whether core exists, and if not, pull it down and extract it into place.

@ahuffman
Copy link

You could also use environment variables to pull in all the config that might get lost on restarting a container. Check out the WordPress or Joomla containers. There should be something like DRUPAL_DB_HOST, DRUPAL_DB_PASSWORD, and DRUPAL_DATABASE at minimum.
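For illustration only: the variables named above are a proposal from this thread, not something the official image reads. If an entrypoint consumed them, a compose service might pass them like this (names and values are hypothetical):

```yaml
# Hypothetical: these variables are a proposal from this thread;
# the official drupal image does not consume them.
drupal:
    image: drupal:8
    links:
        - "db:mysql"
    environment:
        - DRUPAL_DB_HOST=db
        - DRUPAL_DB_PASSWORD=thispasswordsucks
        - DRUPAL_DATABASE=drupal
```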

@ahuffman
Copy link

@aborilov The copy doesn't have to be in entrypoint.sh. They need a VOLUME entry (or multiple VOLUME entries) in the Dockerfile. A VOLUME entry copies the existing data into the mounted volume at runtime. I think this would solve image-change concerns as well. Please see the reference here, as it explains it perfectly: https://docs.docker.com/engine/reference/builder/#volume

@aborilov
Copy link

@ahuffman As you said here #3 (comment), there must be a VOLUME plus an extract (or copy) step in entrypoint.sh. Usually you download and extract in the Dockerfile, but extract into some src dir, and copy to /var/www/html only if it is empty. There is nothing new here; just see how it works in the WordPress docker-entrypoint.sh:

if ! [ -e index.php -a -e wp-includes/version.php ]; then
        echo >&2 "WordPress not found in $(pwd) - copying now..."
        if [ "$(ls -A)" ]; then
            echo >&2 "WARNING: $(pwd) is not empty - press Ctrl+C now if this is an error!"
            ( set -x; ls -A; sleep 10 )
        fi
        tar cf - --one-file-system -C /usr/src/wordpress . | tar xf -
        echo >&2 "Complete! WordPress has been successfully copied to $(pwd)"
fi

This is it, and it will work everywhere, with any storage.
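Adapted for Drupal, the same copy-if-empty idea might look like the sketch below. The paths and the index.php sentinel are assumptions for illustration, not the image's actual entrypoint:

```shell
# copy_if_empty SRC DST: populate DST from SRC only when no Drupal
# codebase is present yet (index.php is used as the sentinel here).
copy_if_empty() {
    src="$1"; dst="$2"
    if [ ! -e "$dst/index.php" ]; then
        echo >&2 "Drupal not found in $dst - copying now..."
        # tar preserves permissions and ownership better than cp -r
        tar cf - -C "$src" . | tar xf - -C "$dst"
    fi
}

# In a real entrypoint this might be called as:
# copy_if_empty /usr/src/drupal /var/www/html
```

Because the copy only runs when the sentinel file is missing, user modifications under DST survive container restarts.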

@ahuffman
Copy link

I've taken a crack at fixing this in my fork. I've almost got it, but could use some assistance troubleshooting, as I'm not too familiar with how the php-fpm image works/serves.

You can check out my fork here: https://github.com/ahuffman/drupal/tree/master/8/fpm

Made some changes to the Dockerfile build and created a 1/3-complete drupal_entrypoint.sh (it seems to build settings.php properly from the provided environment variables).

I've only created one VOLUME at /var/www/html for now during testing.

Environment Variables to provide for MySQL:
MYSQL_DB_PASS
MYSQL_DB_HOST
MYSQL_DB_USER (falls back to root if not provided)
MYSQL_DB_PORT (falls back to 3306 if not provided)
MYSQL_DB_NAME (falls back to drupal if not provided)
DRUPAL_TBL_PREFIX (falls back to blank if not provided)
DRUPAL_DB_TYPE (falls back to 'mysql' if not provided, choices are mysql, postgres, and sqlite ***I've only written MySQL so far)

@yosifkit
Copy link
Member

yosifkit commented Mar 1, 2016

The docker run command initializes the newly created volume with any data that exists at the specified location within the base image.
https://docs.docker.com/engine/reference/builder/#volume (emphasis added)

Just to ensure that it is understood, docker only copies files within a container's directory to new volumes created at docker run and this never happens on a bind mount.

  • -v /container/dir/ or VOLUME /container/dir/ will copy
  • -v /my/local/dir/:/container/dir/ will only contain what was in /my/local/dir/

It does seem like we need to define a VOLUME; but you should be able to get around it by defining them when it is run (using the folders suggested previously):

$ docker run -d -v /var/www/html/modules  -v /var/www/html/profiles -v /var/www/html/themes -v /var/www/html/sites/default/files drupal

@ahuffman
Copy link

ahuffman commented Mar 2, 2016

https://github.com/ahuffman/drupal/tree/master/8/apache

Check my fork there. I now have a working apache and MySQL setup with an entrypoint.sh and VOLUME at /var/www/html.

It can also do auto-upgrades if the container's Drupal source changes.

For Kubernetes support we need to add some code to my entrypoint script to check the DB for tables and, if they're not there, kick off the schema install. I need help on that piece, because I'm not a PHP guy, so I don't really know what that would look like.

The entrypoint.sh builds the settings.php off of the provided environment variables.
This seems to be working so far for MySQL, as I haven't written the postgres stuff yet.

Let me know what you think.

@karelbemelmans
Copy link

@alexex How do you solve the permission issue with running 2 mysql containers that use the same database files? When I start my compose I get this:

[ERROR] InnoDB: Unable to lock ./ibdata1 error: 11
[Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.

@karelbemelmans
Copy link

@ahuffman Nice work there! Make a pull request on the main repo to get this into the hub image.

@alexanderjulo
Copy link

@kbemelmans that's not a permissions issue; that's two mysql servers trying to store their databases in the same folder, which will definitely lead to conflicts and not work. What are you trying to achieve?

@karelbemelmans
Copy link

@alexex I literally copy/pasted your docker-compose.yml from this thread, where you use the mysql image for both the db and the storage-mysql container.

I've been reading up on it yesterday, and what you need is some kind of "do not actually start this container, just use it for data" option. But I wonder if that file actually worked for you like this?

@alexanderjulo
Copy link

@kbemelmans ah, now I can see what you mean, just set /bin/true as command for the storage container. Also read up on the docker volume command and docker-compose v2 files, this is probably not the preferred method of storing data anymore. :-)

@ahuffman
Copy link

ahuffman commented Mar 7, 2016

@kbemelmans there's still a little bit of work that needs to be done here, which I need help on.

First, we need some PHP code written to check the postgres/mysql DB connections to see if the tables exist, and if not, run through the table install procedure (similar to what the WordPress entrypoint does).

The second piece is that we need to see if there's a better (more Drupal-native) way to check the Drupal version and, if an upgrade is being performed, automatically run the PHP code to upgrade the table schemas. I'm not familiar enough with the Drupal code to know the answers to these questions. Other than that, my entrypoint script takes care of building settings.php pretty well, and the Drupal container can be scaled in a Kubernetes environment.

I'm able to kill the running drupal containers (or their settings.php) and they return to normal after restarting.

@rjbrown99
Copy link

@ahuffman if you haven't explored drush that would be a good place to start.
https://github.com/drush-ops/drush

Assuming you package drush with your Drupal container, it can already check the database health, check the Drupal version, or even perform full upgrades of core or packages.

'drush core-status' will dump you a response that looks like this:

Drupal version : 7.43
Site URI : http://default
Database driver : mysql
Database username : dbusername
Database name : dbname
Database : Connected
Drupal bootstrap : Successful
Drupal user : Anonymous
Default theme : themename
Administration theme : seven
PHP executable : /usr/bin/php
PHP configuration : /etc/php.ini
PHP OS : Linux
Drush version : 6.2.0
Drush configuration :
Drush alias files :
Drupal root : /path/to/drupal/root
Site path : sites/default
File directory path : sites/default/files
Temporary file directory path : sites/default/tmp

Newer versions of drush support a '--format json' or '--format yaml' option to dump the above results in a format that is friendlier for parsing by other scripts or tools.

In nearly all cases you will want to package drush with drupal anyway, so this might kill a few birds with one stone. For example, drush can also be used to kick off the Drupal cron jobs that need to run periodically.
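As a sketch of how an entrypoint might act on that output (the parsing below assumes the plain-text format quoted above; newer drush versions offer --format json, which would be more robust):

```shell
# db_connected STATUS_TEXT: succeed when `drush core-status`-style output
# reports an established database connection.
db_connected() {
    printf '%s\n' "$1" | grep -q '^ *Database *: *Connected'
}

# In a container this might drive the install decision, e.g.:
# status="$(drush core-status)"
# db_connected "$status" || echo "database not initialized yet"
```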

@ahuffman
Copy link

@rjbrown99 thanks, that looks like the best way to get this done. I'll start digging into it when I have more free time. If anyone else has some experience with drush and wants to add to the fork I have, please do. I have the script working up to recreating a missing settings.php off of environment variables. The next step is to check the db and if not there initialize it off of the env variables provided/settings.php.

@tianon
Copy link
Member

tianon commented Jan 26, 2017

Please keep the tone of conversation positive.

I'm happy to reopen this to track the "further enhancements in the image" the additional docs text references. I made the docs PR because that's a simple concrete change we can add now without agreement on what additional behavior the image should include to make this issue simpler to handle.

@tianon tianon reopened this Jan 26, 2017
@skyred
Copy link
Contributor

skyred commented Jan 26, 2017

Keep in mind, Drupal is more like a framework than something that would just work out of the box. Drupal has its own best practices and tools for DevOps, and they keep changing (in D8 there is a trend to let Composer manage dependencies, including core, hence a new file structure). Docker comes in as an additional or new approach to managing DevOps for Drupal. However, so far it's really hard to standardize Drupal-related DevOps on a single image.

If you are a developer who knows Drupal, then this image can already offer a lot of efficiency.

If you are a system admin who would treat Drupal as a black box, then you need to work with your dev team to figure out the file structure and what's considered "data", then extend this image or use its parent image, PHP.

@geerlingguy
Copy link

However, so far it's really hard to standardize Drupal-related DevOps on a single image.

Very true; this image's main focus, I should hope, is to make it easy to build a new Drupal [current version] site locally, quickly.

Secondarily, it can be made flexible enough to support more modes of local development. But there's almost no chance there will be 'one Drupal Docker environment to rule them all', as Drupal is way more complex in the real world than any simple node, go, java, python, etc. app.

@pirog
Copy link

pirog commented Jan 26, 2017 via email

@jrmoserbaltimore
Copy link

Actually, you can always docker exec [drupal container] [command], so it is in fact possible to use drush or Composer or whatever is actually installed, so long as it's installed inside the Docker container.

As long as there's a separation between the system (the version-controlled code, installed packages, etc.) and the user files (paths to which users write data), you can build whatever you want. Replacing a Docker image is analogous to upgrading the deployed software version or upgrading the operating system: the files you change should be files the user hasn't changed.

Think like how Linux has configuration files in /etc and state data in /var, and expects the user to not modify /usr/share. The user might install some custom administration scripts or compiled software into /usr/local, which we expect.

You seem to be thinking of a Drupal Docker image as a fancy command to make Drupal happen. It's actually a method to supply software with its entire supporting system.

Consider node. A node container will pull down node modules to provide new commands like watchify or browserify, and uses tools like git and curl; node containers often volume-mount the node module cache, although that can be re-installed readily and so clearing it just means the container takes 2-3 minutes to run node install and restore everything you wiped out. Many node applications do use advanced commands made available by first installing required node modules, and the same container acts as a build system, a unit testing environment, and even the server for a node app.

Rather than claiming the problem is unapproachable, I suggest first defining the problem in terms of what use cases you expect to encounter. What do you expect administrators to do when managing Drupal?

@raymond880824
Copy link

May I know how to set up Drupal high availability with Docker?
Is it possible using Docker Swarm?

Thanks

@jayachandralingam
Copy link

I'm getting the errors below when using mount points, which I'm declaring as follows, where /jaya is my mount point. Please help as soon as possible.
"docker run -v /jaya:/var/www/html/modules -v /jaya:/var/www/html/profiles -v /jaya:/var/www/html/sites -v /jaya:/var/www/html/themes -v /jaya:/var/www/html/sites/default/files -p 8080:80 drupal:8.2-apache"

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
[Thu Sep 14 18:03:58.753281 2017] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.10 (Debian) PHP/7.1.5 configured -- resuming normal operations
[Thu Sep 14 18:03:58.753318 2017] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'

@gateway
Copy link

gateway commented Apr 24, 2018

I'm a bit late to this discussion, and since the last post was back in Sept 2017 I thought I would see if anyone has tried to mount an AWS EFS file system, which is perfect for keeping things like /sites/default/files intact, especially if you're scaling your containers; each one can mount the user content, CSS, and JS. See https://aws.amazon.com/efs/.

any thoughts?

@geerlingguy
Copy link

@gateway - I am using EFS for a shared volume mount for Drupal files and Magento media directories and it works pretty well. One caveat—if you mount a shared folder that does a lot of small file writes, or flocks (file locks), then it can cause some performance problems.

But for serving up image, CSS, JS, etc., EFS is a pretty good solution. Note that I would recommend using some sort of CDN in front of your site if it gets a lot of traffic, as EFS reads can be a lot slower than local filesystems when you have a lot of load—and you can run into limits with EFS unless you drop some giant files inside your filesystem (see Getting the best performance out of Amazon EFS).

@wglambert wglambert added the question Usability question, not directly related to an error with the image label Apr 24, 2018
@gateway
Copy link

gateway commented Apr 24, 2018

Thanks @geerlingguy, it seems silly not to have a viable solution for Drupal's /default/files section. When you need to scale horizontally, each instance needs to connect to some sort of shared file system for user content, CSS, JS, etc. I'm somewhat new to Docker and struggling with this part of allowing for proper scaling. Thanks for the link and the hard work on it!

@fabthegreat
Copy link

May I sum up a little bit:

  • On D8: to ensure it works with data persistency, one has to mount volumes for /var/www/html/{modules,profiles,themes} and /var/www/html/sites/{default,other_site}/files as well. settings.php is copied in the Dockerfile directly. Without doing this I was redirected to install.php after each container restart...
    Does that seem right? Would it work for the full /var/www/html/sites? If not, why not?
  • On D7: I have the feeling this is more of a mess, since modules/themes can be located in several places, right? What are your guidelines?

Personally I have jumped to D8 because it seems to work flawlessly with the volumes described earlier. No loss of configuration or data.

@dimasusername
Copy link

If you bind-mount /var/www/html/sites you'll get an error because Drupal expects some seed data there.

On the Drupal page on Docker Hub (search for: stack.yml) there is a suggestion to use anonymous data volumes for mounting directories such as /var/www/html/sites. If a volume doesn't exist, Docker will use the data from the image (which contains the seed data).

But anonymous volumes can be a pain, so you can use named volumes instead, e.g.

services:
  drupal:
    image: drupal
    ...
    volumes:
      - drupal_sites:/var/www/html/sites
      ...
volumes:
  drupal_sites:

in your docker-compose.yml or in your stack.yml

OR

docker container run -v drupal_sites:/var/www/html/sites ... (which the Drupal page on DockerHub mentions)

I hope this is helpful!

@bimalgrewal
Copy link

I'm trying to deploy Drupal 7.72 on kubernetes but it fails with Fatal error: require_once(): Failed opening required '/var/www/html/modules/system/system.install' (include_path='.:/usr/local/lib/php') in /var/www/html/includes/install.core.inc. My deployment.yaml looks something like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata: 
  labels: 
    app: drupal
  name: drupal
spec: 
  replicas: 1
  template: 
    metadata: 
      labels: 
        app: drupal
    spec: 
      initContainers:
        - name: init-sites-volume
          image: drupal:7.72
          command: ['/bin/bash', '-c']
          args: ['cp -r /var/www/html/sites/ /data/; chown www-data:www-data /data/ -R']
          volumeMounts:
          - mountPath: /data
            name: vol-drupal
      containers: 
        - image: drupal:7.72
          name: drupal
          ports: 
            - containerPort: 80
          volumeMounts:
          - mountPath: /var/www/html/modules
            name: vol-drupal
            subPath: modules
          - mountPath: /var/www/html/profiles
            name: vol-drupal
            subPath: profiles
          - mountPath: /var/www/html/sites
            name: vol-drupal
            subPath: sites
          - mountPath: /var/www/html/themes
            name: vol-drupal
            subPath: themes 

      volumes:
        - name: vol-drupal
          persistentVolumeClaim: 
            claimName: drupal-pvc

However, it works when I remove the volumeMounts from the drupal container or I update the image to drupal:8.6. Does anyone know the solution to make it work with Drupal 7 with the volumeMounts?

@kaoet
Copy link

kaoet commented Jul 20, 2020

Here is what I use in Kubernetes, based on @bimalgrewal's solution. It presents the installation guide at first startup. Once you have finished the guide, further Pod restarts won't prompt you to configure it again.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: drupal
  template:
    metadata:
      labels:
        app: drupal
    spec:
      initContainers:
      - name: init-sites-volume
        image: drupal
        command: ['/bin/bash', '-c']
        args:
        - "[[ -f /data/default/settings.php ]] || cp -av /var/www/html/sites/. /data/"
        volumeMounts:
        - name: drupal
          mountPath: /data
          subPath: sites
      containers:
      - image: drupal
        name: drupal
        volumeMounts:
        - name: drupal
          mountPath: /var/www/html/modules
          subPath: modules
        - name: drupal
          mountPath: /var/www/html/profiles
          subPath: profiles
        - name: drupal
          mountPath: /var/www/html/themes
          subPath: themes
        - name: drupal
          mountPath: /var/www/html/sites
          subPath: sites
      volumes:
      - name: drupal
        persistentVolumeClaim:
          claimName: drupal
      securityContext:
        fsGroup: 33
---
apiVersion: v1
kind: Service
metadata:
  name: drupal
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: drupal

Caveats

  • securityContext.fsGroup is used to ensure correct ownership of the volumes.
  • An initContainer is run before the main application to populate the initially empty sites volumes.
  • strategy.type: Recreate since at most one Pod should mount the volumes at the same time.

@sameer-kumar
Copy link

@gateway - I am using EFS for a shared volume mount for Drupal files and Magento media directories and it works pretty well. One caveat—if you mount a shared folder that does a lot of small file writes, or flocks (file locks), then it can cause some performance problems.

But for serving up image, CSS, JS, etc., EFS is a pretty good solution. Note that I would recommend using some sort of CDN in front of your site if it gets a lot of traffic, as EFS reads can be a lot slower than local filesystems when you have a lot of load—and you can run into limits with EFS unless you drop some giant files inside your filesystem (see Getting the best performance out of Amazon EFS).

@geerlingguy I am trying to find a solution and found your post here. I am using Azure WebApp for Containers to host my single-container image for Drupal. Every time the container restarts, I lose all CSS, JS, etc. links. My guess is that the folder sites/default/files is lost with every container exit. I read that Azure provides persistent storage at /home which we can use in our container. Do you know by any chance how to solve this problem? Any pointers are appreciated. Thanks.

@serhat-ozkara
Copy link

It's interesting that nobody actually thought about distros/ core patches. :) How can you even use these images if you ever need a core module patched or want to use a drupal distro.

@yookoala
Copy link

yookoala commented Sep 16, 2021

It's interesting that nobody actually thought about distros/ core patches. :) How can you even use these images if you ever need a core module patched or want to use a drupal distro.

You can easily fork this and write your own Docker container configuration for another distro. Unless it's actually easy to support every distro in a single Docker image, it's usually better to build your own image for your own use case than to have one image serve infinite purposes poorly.

@mstenta
Copy link

mstenta commented Sep 16, 2021

It's interesting that nobody actually thought about distros/ core patches.

We use this image for the farmOS Distro. It works great! We extend it with our own Dockerfile that handles building the codebase from a custom project composer.json file (similar to the Drupal recommended-project approach). We apply core and contrib module patches in our installation profile composer.json: https://github.com/farmOS/farmOS/blob/3db2aa485c6abbad84bb15aebceeee2d62238f2f/composer.json#L53

This allows for a lot of flexibility. And I agree this image should not try to handle any of these downstream decisions for you.

So I would say it's best to think of this image as both a base and an example. It includes all the dependencies necessary to run Drupal - therefore it is a "base" to build on, and it also builds a simple/default Drupal codebase - which can serve as an example, but can be completely replaced/overwritten downstream in your own derivative images.

@eRazeri
Copy link

eRazeri commented Oct 22, 2021

@geerlingguy I am trying to find a solution and found your post here. I am using Azure WebApp for Containers to host my single-container image for Drupal. Every time the container restarts, I lose all CSS, JS, etc. links. My guess is that the folder sites/default/files is lost with every container exit. I read that Azure provides persistent storage at /home which we can use in our container. Do you know by any chance how to solve this problem? Any pointers are appreciated. Thanks.

@sameer-kumar Here's my solution as a script that runs when container starts:

# move sites folder only once
if [ -d "/home/drupal/sites" ]; then
    echo "sites folder already persisted, removing sites from image."
    rm -rf /config/www/drupal/web/sites
else
	mkdir -p /home/drupal
	mv /config/www/drupal/web/sites /home/drupal
	echo "copied sites folder to home just this once"
fi

# make symbolic link in the old location
ln -sfiv /home/drupal/sites/ /config/www/drupal/web/

Also remember to set the WEBSITES_ENABLE_APP_SERVICE_STORAGE to true.

@sameer-kumar
Copy link

Thanks @eRazeri. Will give it a try.

@kjostling
Copy link

No, you don't need a VOLUME. It's actually wrong to have one defined in the image: it breaks updates.
WordPress has one, yes, but that's probably because most WP users upgrade WP and plugins from the GUI.

@emircanerkul
Copy link

How can I manage the composer.json and composer.lock files that live under /opt/drupal? I can't volume-mount these files without creating the exact same ones on the host machine (or at least I don't know how).

Logic: copy composer.(json|lock) from the container's /opt/drupal/composer.(json|lock) if not present on the host machine.

After that, volume binding will work.

drupal:
    image: drupal:9.4
    volumes:
        - ../src/composer.json:/opt/drupal/composer.json
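One way to implement that copy-if-missing logic on the host, sketched under the assumptions from the snippet above (image drupal:9.4, files under /opt/drupal; the seed_file and fetch_from_image helper names are hypothetical):

```shell
# FETCH abstracts how a file leaves the image; the default uses `docker cp`
# via a throwaway container, but it can be overridden (e.g. for testing).
: "${FETCH:=fetch_from_image}"

fetch_from_image() {
    # docker cp can't read straight from an image, so create a stopped
    # container just long enough to copy the file out
    cid=$(docker create drupal:9.4)
    docker cp "$cid:$1" "$2"
    docker rm "$cid" >/dev/null
}

# seed_file PATH_IN_IMAGE HOST_PATH: create the host file only if it is
# absent, so a later bind mount sees either the seeded copy or your edits.
seed_file() {
    [ -f "$2" ] || "$FETCH" "$1" "$2"
}

# seed_file /opt/drupal/composer.json ../src/composer.json
# seed_file /opt/drupal/composer.lock ../src/composer.lock
```

Run the seeding step once before `docker compose up`; after that the bind mounts in the snippet above have real files to attach to.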

@LeeKorbisCa
Copy link

LeeKorbisCa commented Jun 29, 2023

@kaoet thank you, you saint.

Edit: I was able to use their Kubernetes configuration with zero issues along with external-dns to get a hosted Drupal in seconds. Truly amazing.

@gagarine
Copy link

gagarine commented Oct 13, 2023

No, you don't need a VOLUME. It's actually wrong to have one defined in the image: it breaks updates. WordPress has one, yes, but that's probably because most WP users upgrade WP and plugins from the GUI.

Auto-update and core GUI updates are features users have been asking for in Drupal for decades now. WP is pragmatic there: some users update from the GUI, some with Docker. Being religious about it will only bring problems.
