Inconvenient setup, even harder upgrades #57
Comments
Plugins get installed into the shared volume, and as far as I can tell there is no "install all required plugins" step on container startup, so an ephemeral volume would not work at the moment. If you know which parts have to be stored on a shared volume and which should not, I would contribute to make upgrades smoother. Personally, though, I like a setup that uses two containers (and a cron job, somewhere).
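A startup hook along these lines could make an ephemeral volume workable. This is only a sketch under assumptions: the `MATOMO_PLUGINS` variable and the hook itself are hypothetical and not part of the image; `plugin:activate` is Matomo's console command.

```shell
#!/bin/sh
# Hypothetical "activate required plugins on startup" hook.
# MATOMO_PLUGINS (comma-separated plugin names) and CONSOLE are
# illustrative names, not something the image actually provides.
CONSOLE="${CONSOLE:-php /var/www/html/console}"

activate_plugins() {
    # $1: comma-separated plugin list, e.g. "QueuedTracking,CustomDimensions"
    for plugin in $(printf '%s' "$1" | tr ',' ' '); do
        $CONSOLE plugin:activate "$plugin"
    done
}

activate_plugins "${MATOMO_PLUGINS:-}"
```

An entrypoint could run this before starting PHP-FPM, so a fresh container converges to the same plugin set every time.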
It's a point of view I can understand, but I would prefer an AIO container too. I don't really care whether they use Apache2, Nginx, or something else; I just want something that works out of the box and can be updated without losing any data (external DB). I will try to build a Docker image to achieve that; it would be a good exercise for me.
I have created a Docker image here with advanced features: a cron to archive Matomo reports and update the GeoLite data, and SSMTP as an SMTP relay to send emails. It is based on Alpine 3.7 with Nginx (GeoIP HTTP module enabled). I have also added a docker-compose.yml with an Nginx proxy plus Let's Encrypt, and some instructions for using Redis for caching. Hope that helps.
PR #52 has added an Apache version (see #47 for the discussion), and the Matomo Docker Hub page now includes info on how to run PHP-FPM with Docker Compose. Shall we close this issue?
I'm closing this. The main part of this issue has already been fixed, and the update issue is the same as #161.
Having the code base in a volume makes upgrades and migrations a pain; additionally, the file copies on initial startup can be really slow on NFS drives. A volume allows for dynamic installation of plugins, but those plugins are then specific to that instance: if you run a multi-stage environment (dev/stage/prod), you have to manually check that the plugins exist in each environment. It would be better to extend the image and pack the plugins in with the installation. @see matomo-org/docker#57 @see matomo-org/docker#161
* No longer use a volume for the Matomo install. Having the code base in a volume makes upgrades and migrations a pain, initial file copies can be really slow on NFS drives, and per-instance plugin installs drift between dev/stage/prod environments; better to extend the image and pack the plugins in with the installation. @see matomo-org/docker#57 @see matomo-org/docker#161 * Matomo: create a per-site administrator in addition to the super administrator for all sites.
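"Extend the image and pack the plugins in" could look roughly like this. This is only a sketch: the `matomo:3` tag is taken as an example, `ExamplePlugin` is a placeholder, and it assumes the official image copies its source tree from `/usr/src/matomo` into the web root on startup.

```dockerfile
# Sketch: extend the official image and bake a plugin in at build time,
# so every dev/stage/prod instance ships the same plugin set.
# "ExamplePlugin" is a placeholder for a locally vendored plugin.
FROM matomo:3

COPY plugins/ExamplePlugin /usr/src/matomo/plugins/ExamplePlugin
```

Built this way, upgrading is a rebuild with a newer base tag rather than a volume surgery per instance.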
I guess others have already pointed out that the setup is quite hard: you have to set up a separate Nginx container just to handle FastCGI, plus a third container just for a simple cron.
There is an even bigger downside to this. Since the Piwik code base has to be shared between the Nginx container and the Piwik container, a volume is defined:
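The original snippet isn't reproduced in this thread; a sketch of what such a shared-volume definition typically looks like (service and volume names here are illustrative, not the exact compose file from this repo):

```yaml
# Sketch only: both containers mount the same named volume over the
# web root, so the Piwik code lives in the volume, not in the image.
version: '3'

services:
  app:
    image: piwik:fpm
    volumes:
      - piwik:/var/www/html

  web:
    image: nginx
    volumes:
      - piwik:/var/www/html:ro

volumes:
  piwik:
```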
This puts the Piwik code into a volume. The problem is that when an update comes out, the volume is not replaced: you just run a newer container with the code from the old one. To upgrade, you have to manually delete the volume and the container, hoping that the configuration stays intact.
I think it would be much better to include everything in one image, including the cron, making installation and upgrades as easy as we've come to expect from Docker.
Is there a reason why it is set up like this? Perhaps I'm overlooking something?
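For reference, the cron in question is the report archiver. Matomo's docs recommend an entry along these lines; the install path, user, and `--url` value below are placeholders for a given setup.

```
# Illustrative crontab entry: run Matomo's report archiver every hour.
# Path, user and URL are placeholders, not values from this repo.
5 * * * * www-data php /var/www/html/console core:archive --url=https://matomo.example.com/ > /dev/null 2>&1
```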