
Need environment variable with the "number" of the container in a scaled service #626

Closed
jamshid opened this issue Nov 7, 2014 · 6 comments


jamshid commented Nov 7, 2014

When scaling a service (fig scale myserver=3), is there any way for each container to "know" which node it is?

I have a service where each node writes to a file in a shared volume (e.g., /var/mydata/file.dat), but I cannot figure out how to make each node use a unique filename.

I could use $HOSTNAME (the random container ID), but that changes whenever the container is rebuilt.

myserver:   # starts a node that creates and uses /var/mydata/file.dat
  volumes:
    # "fig scale myserver=3" causes each node to see the same file.dat!
    - /var/mydata/

A related feature would be the ability to see the node "number" in a variable that can be used in fig.yml. E.g., if I have a bank of drives attached to the Docker server at /dev/USB_1, /dev/USB_2, etc., then I can make each node use a different drive with:

myserver:   # starts a node that creates and uses /var/mydata/file.dat
  volumes:
    - /dev/USB_${FIG_NODE_NUMBER}:/var/mydata/

I'm probably missing something, or this is an intentional restriction...


jamshid commented Nov 7, 2014

Just noticed one problem with my desired approach: although fig maintains the correct number of containers running at a time, it does not number them consistently. So, after some scaling up and down, I ended up with myserver=3 nodes, but they were named myserver_3, myserver_4, and myserver_5.

Any ideas? Since this is for small environments anyway (until fig spans Docker servers), I guess I could just duplicate the services in fig.yml (myserver1 through myserver3) and not use "scale", as sketched below.
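Something like the following, I suppose, combining this with the USB-drive idea above (the image name here is just a placeholder):

myserver1:   # duplicated instead of scaled; each node gets its own drive
  image: myserver-image
  volumes:
    - /dev/USB_1:/var/mydata
myserver2:
  image: myserver-image
  volumes:
    - /dev/USB_2:/var/mydata
myserver3:
  image: myserver-image
  volumes:
    - /dev/USB_3:/var/mydata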


aanand commented Nov 7, 2014

You're right that Fig doesn't guarantee a consistent name. I'm not sure it's a good idea to have your app logic depend on a consistent name. Is there a reason your containers can't just write to a file with a random name?
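For example, each container could just pick a per-container name at startup. A minimal sketch (the myserver command and its --data-file flag are made up):

myserver:
  volumes:
    - /var/mydata/
  # each container writes its own file instead of a shared fixed name
  command: sh -c 'exec myserver --data-file=/var/mydata/$(hostname).dat'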

(Incidentally, multiple containers for a single service all sharing a volume is not a scenario we designed for.)

yonkeltron commented

Actually, for the purposes of diagnosing issues (in addition to logs, which do have labels), it would really help to know which container a message or report came from.


jamshid commented Feb 15, 2015

Another case, similar to the "bank of numbered USB drives": each container in my scaled service writes a database file. I could write it with a random/unique name, but how do new containers (e.g., after restarting the service with an upgraded container image) know which database file to use?

I guess that's why you're saying the containers should not all share one volume? Instead, each container should have its own volume and use whatever database file it finds there.

Btw, to iterate over all these database files, e.g. for backup, I guess I would have to run "fig ps myservice" and access each container's volume, something like the sketch below. Not as easy as a directory listing in a shared volume, but doable.
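(Untested sketch, using the usual volumes-from backup pattern; the archive names are arbitrary:)

# archive each container's /var/mydata volume into the current directory
for id in $(fig ps -q myservice); do
  docker run --rm --volumes-from "$id" -v "$(pwd)":/backup busybox \
    tar cf "/backup/mydata-$id.tar" /var/mydata
done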

jdmarshall commented

> I'm not sure it's a good idea to have your app logic depend on a consistent name.

How do you configure nginx upstreams unless you know the machine names? Jenkins, nginx, MongoDB, and a whole bunch of other tools prefer to "farm out" work rather than wait for workers to phone home and ask for it, and these are all the sorts of things you might want to use Docker containers for, if not in prod then at least through dev and QA.

In nginx (and I think in mongos) this is just a line in a very short config file, but in Jenkins and some other cases it's normally configured through a UI, and you wouldn't want to have to re-enter that every time the Docker host goes down or someone spins it up on another machine.
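For instance, the nginx side is just something like this (service name and port are illustrative), and it's exactly the part that breaks when the _N suffixes shift:

upstream myserver {
    server myserver_1:8080;
    server myserver_2:8080;
    server myserver_3:8080;
}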


dnephin commented Aug 31, 2015

I think the way to accomplish this is using something like https://github.com/crosbymichael/skydock, https://github.com/jwilder/docker-gen, or https://github.com/ehazlett/interlock (there are a bunch of others as well, this is just a short list).

Every time a container starts or stops, you get an event, and you update the configuration to match. That way you never have a hardcoded list, you can scale out with a single command, and everything else is reactive.
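For example, with docker-gen the whole loop can be as small as this (the template and output paths are placeholders; the template renders upstream entries from whatever containers are running):

docker-gen -watch -notify "nginx -s reload" nginx.tmpl /etc/nginx/conf.d/default.conf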

Since we're likely going to be removing the number with #1516, I'm going to close this issue.

dnephin closed this as completed on Aug 31, 2015