Replies: 5 comments
-
One obvious way to solve this without the help of docker-compose would be:
-
Do you need to use IP addresses specifically, or would hostnames be sufficient? If hostnames are okay, then the default network aliases each service with its name, and you can use that to communicate between the two services. If you must use an IP, then you'll likely want to configure static IP addresses in your compose file. You will need to tell your service the IP address that you expect the container to be using, but hopefully that isn't too burdensome.
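A sketch of both options mentioned above (service names, image names, the port, and the `BACKEND_URL` variable are all assumptions for illustration, not from the thread):

```yaml
services:
  frontend:
    image: my-frontend            # hypothetical image name
    networks: [appnet]
    environment:
      # reach the backend by its service name instead of an IP
      BACKEND_URL: "http://backend:8080"
  backend:
    image: my-backend             # hypothetical image name
    networks:
      appnet:
        ipv4_address: 172.18.0.3  # only needed if a fixed IP is required

networks:
  appnet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/24
```

With the hostname approach the `ipam` block and `ipv4_address` can be dropped entirely; compose resolves `backend` to the right container on the shared network.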
-
Thank you for answering. I solved this now using the approach I described: I am using a single container containing a simple controller process. The controller process starts and manages the two (or more) processes which perform the actual work. While I wrote the controller process from scratch in Python, more elaborate controllers that work in a similar way are available, e.g. pypm, chaperone or gmp.
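A minimal sketch of such a controller, assuming the children should all stay up and the container should stop as soon as one of them dies (`run_children` and the command lists are my own naming, not the thread author's actual code):

```python
import signal
import subprocess
import time

def run_children(commands):
    """Start one child process per command and supervise them.

    If any child exits, the remaining ones are terminated so the
    container as a whole stops instead of running half-broken.
    Returns the exit codes in the order the commands were given.
    """
    procs = [subprocess.Popen(cmd) for cmd in commands]

    def stop_all(signum=None, frame=None):
        for p in procs:
            if p.poll() is None:
                p.terminate()

    # Forward SIGTERM (what `docker stop` sends to PID 1) to the children.
    signal.signal(signal.SIGTERM, stop_all)

    # Poll until every child has exited; as soon as one is gone,
    # ask the others to shut down too.
    while any(p.poll() is None for p in procs):
        if any(p.poll() is not None for p in procs):
            stop_all()
        time.sleep(0.1)
    return [p.returncode for p in procs]
```

In the container, such a controller would be the entrypoint, e.g. `run_children([["backend"], ["frontend"]])` with whatever the real commands are.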
-
Compared with Podman & co., the default on the Docker side is "rootful" mode. That's why the default network mode can be the implicit "bridge", and it's a genuinely useful default: so useful that compose, unless told otherwise, creates one such bridge per compose project. This allows the applications in the containers to freely bind to their "preferred" ports.

Yet the "host" network mode is only one configuration option away: https://docs.docker.com/compose/compose-file/05-services/#network_mode Setting this mode for your services would allow different containers to share the same loopback address. Otherwise each container has its very own, distinct loopback/127.0.0.1 address, and binding to it would also prevent any "outside" access.

Maybe it's worth noting that for Docker Desktop users, host network mode uses the VM's loopback address, which is distinct from the one on the "client" side.

Also, speaking of enterprise Linux, the "init" variants of these images https://catalog.redhat.com/software/base-images#overview come with systemd inside, and that's definitely a more elaborate controller :D
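On Linux, the host network mode from the link above could look like this (image names are assumptions); both services then share the host's network namespace, including 127.0.0.1:

```yaml
services:
  backend:
    image: my-backend     # hypothetical image name
    network_mode: host    # binds directly on the host; no "ports:" mapping in this mode
  frontend:
    image: my-frontend    # hypothetical image name
    network_mode: host    # shares the same loopback as the backend
```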
-
From the answers given so far, I don't see an obvious solution, or perhaps I just don't understand them.
-
Disclaimer: I started with Podman and am trying to use docker-compose instead for older enterprise Linux releases where Podman cannot be installed.
My application consists of two components, "Frontend" and "Backend". These components are designed to run on the same machine.
The "Backend" listens on a port on 127.0.0.1.
The "Backend" receives commands from the "Frontend" over this socket. It creates files which the "Frontend" then uses.
In reality it's a bit more complex, but this suffices to demonstrate the issue; modifying the application would be possible, but quite an effort.
Later we started to make the application container-friendly. This worked fine with Podman, using volumes for file sharing.
Note that in a Podman pod, both containers are running on the same "machine", so using the loopback address 127.0.0.1 just works.
Now I am trying to get it running with docker-compose.
Here, "Frontend" and "Backend" run on different "machines" (sorry, I'm not used to the terminology); they use IP addresses like 172.18.0.2 and 172.18.0.3 instead.
So 127.0.0.1 does not work for communication between "Frontend" and "Backend" in a docker-compose environment.
Is there any way to make it work with docker-compose similar to Podman?
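For reference, the setup described above might look like this in compose terms (service, image, and volume names are assumptions); on the default bridge network, each service gets its own network namespace, which is why 127.0.0.1 fails:

```yaml
services:
  backend:
    image: my-backend          # hypothetical image name
    volumes:
      - shared-data:/data      # files the backend creates for the frontend
  frontend:
    image: my-frontend         # hypothetical image name
    volumes:
      - shared-data:/data      # same named volume, as with Podman

volumes:
  shared-data:
```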