Container exits with no log messages if configuration file is provided #204

Open

jnordberg opened this issue Sep 1, 2021 · 7 comments

Expected Behavior

The container should start using the config files provided.

Current Behavior

The container exits with code 1 and no log messages.

Possible Solution

Make the configs in /opt/couchdb/etc/local.d/* readable before running couchdb in the docker entrypoint.
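
A rough sketch of what that could look like in docker-entrypoint.sh (illustrative only, not current image behaviour; note that Swarm config mounts may be read-only, so the chown/chmod may simply be a no-op):

# Hypothetical entrypoint addition: try to make mounted config snippets
# accessible to the couchdb user before the server starts.
if [ -d /opt/couchdb/etc/local.d ]; then
    chown -R couchdb:couchdb /opt/couchdb/etc/local.d 2>/dev/null || true
    chmod -R u+rw /opt/couchdb/etc/local.d 2>/dev/null || true
fi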

Steps to Reproduce (for bugs)

stack.yml

services:
  couch1:
    image: couchdb:3.1
    configs:
      - source: admins
        target: /opt/couchdb/etc/local.d/10-admins.ini
configs:
  admins:
    file: ./admins.ini

admins.ini

[admins]
admin = -pbkdf2-2006b7b27f5f3624c54e37f5126f3db2bd515a0f,ac2121044edb47438573c80764ba7dc5,10000
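
For completeness, deploying the two files above reproduces the failure with something like this (the stack name is arbitrary; add a top-level version: "3.3" key to stack.yml if your Docker version requires one):

docker stack deploy -c stack.yml couch
docker service ps couch_couch1      # shows the task failing with a non-zero exit
docker service logs couch_couch1    # prints nothing, per the report above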

Context

This bug prevents me from launching a cluster of couchdb instances in my docker swarm without setting the password as plaintext in the environment variables.

Workaround can be found here: #73 (comment)

Your Environment

Docker 20.10.8

@wohali
Member

wohali commented Sep 1, 2021

Ensuring files have the right access is an ongoing challenge for CouchDB, because it expects to be able to write to the last config file in the chain.

If Docker Swarm can't mount a file in a way that the couchdb process running inside the container can write to it with non-root permissions, this is a WONTFIX.

Am I misunderstanding the issue?

@jnordberg
Author

How about having the docker entrypoint script copy config files placed in, e.g., /opt/couchdb/etc/copy.d/* into /opt/couchdb/etc/local.d/, something like the sketch below? That would allow Swarm users to provide config files without workarounds.
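
A rough sketch of the idea (copy.d is a hypothetical directory, not something the image supports today):

# Hypothetical docker-entrypoint.sh addition: copy read-only configs
# (e.g. Swarm configs/secrets mounted into copy.d) into local.d so
# CouchDB gets writable copies it owns.
if [ -d /opt/couchdb/etc/copy.d ]; then
    cp -f /opt/couchdb/etc/copy.d/*.ini /opt/couchdb/etc/local.d/ 2>/dev/null || true
    chown couchdb:couchdb /opt/couchdb/etc/local.d/*.ini 2>/dev/null || true
fi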

@wohali
Member

wohali commented Sep 1, 2021

The docker container is mature at this point, and used in many environments other than Swarm, so changing the functionality that dramatically is a non-starter.

The whole point of the file being external (in our recommended approach, where you externalize the entire etc/local.d directory) is so that it is persisted after the container exits, too. This is important in non-Swarm scenarios. Copying it in loses that advantage.
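
For reference, that recommended approach looks roughly like this outside of Swarm, a bind mount of the whole directory that outlives the container (paths are illustrative):

docker run -d --name couchdb \
  -v /srv/couchdb/local.d:/opt/couchdb/etc/local.d \
  couchdb:3.1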

@jnordberg
Author

Adding what I suggested won't change functionality for anyone not explicitly mounting config files into etc/copy.d.

@wohali
Member

wohali commented Sep 1, 2021

We'll take it under consideration, but I would not expect a change soon.

@sridharlreddy

sridharlreddy commented Feb 16, 2022

Hi @wohali,

I tried the same thing: updating the configuration by putting a new file in /opt/couchdb/etc/local.d/ through a ConfigMap in Kubernetes. The container crashed without any error message at all, and I could not figure out why it was failing.
I checked how the Helm chart handles this: it copies the file to the target path via an init container, and that works.
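
Roughly, the copy step such an init container performs looks like this (paths and ownership are illustrative; check the chart for the exact details):

# Runs in an init container that shares the local.d volume with the couchdb
# container: copy the read-only ConfigMap mount into local.d so CouchDB gets
# files it can own and write to.
cp /tmp/couchdb-configmap/*.ini /opt/couchdb/etc/local.d/
chown -R 5984:5984 /opt/couchdb/etc/local.d   # adjust to the couchdb UID/GID in your image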

From your comment, only the last config file loaded has to be writable by the non-root user. Is the last file decided by alphabetical order? If so, would mounting a file whose name sorts before 10-xxxx solve the issue?

Could we also add some error output to indicate the reason in this case?

@josegonzalez

Would a PR adding trace mode to the entrypoint scripts be accepted? This would at least allow folks to figure out where the entrypoint is bailing out. Something like the following is what I'm thinking:

[ -n "$TRACE" ] && set -x

Users can then set the TRACE env var to figure out why it's not starting. This pattern is used across many Heroku buildpacks for debugging purposes.
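
For illustration, usage could then look like this (TRACE is the proposed variable, not something the image supports today):

docker run -e TRACE=1 couchdb:3.1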
