Different database password in Django and PostgreSQL containers #159

Open
rvishna opened this issue Aug 5, 2020 · 2 comments

rvishna commented Aug 5, 2020

In the current example template, the Django and PostgreSQL containers are in separate DeploymentConfigs. I have a CI pipeline that processes the template, and since I leave the DATABASE_PASSWORD parameter blank (intentionally), it is regenerated every time the pipeline runs. However, this doesn't trigger a ConfigChange in either DeploymentConfig because the environment variables are secret references (it seems like this would be the obvious thing to do). The Django container does get rebuilt as part of my CI pipeline and triggers an ImageChange, so the result is that I end up with a Django container whose environment variables hold a different database password than my PostgreSQL container's.
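
To illustrate the kind of wiring I mean, the password reaches both containers through secretKeyRef env vars along these lines (a sketch; the secret and key names here are illustrative, not the exact ones from the template):

# Fragment of a DeploymentConfig pod template (illustrative names)
spec:
  template:
    spec:
      containers:
        - name: django
          env:
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: django-db-secret      # secret generated from the template parameter
                  key: database-password

Since the DeploymentConfig manifest itself is unchanged when only the secret's value changes, nothing in the pod template differs and no ConfigChange fires.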

Has anyone figured out how to solve this problem? I thought maybe we could put both containers in the same DeploymentConfig, or have both containers restart when the Django image changes, but neither seems ideal. Is there a way to specify an annotation that results in the Deployments being recreated when the Secret is updated?

bparees commented Aug 5, 2020

However, this doesn't trigger a ConfigChange in either DeploymentConfig because the environment variables are secret references (it seems like this would be the obvious thing to do).

obvious, perhaps, but probably not trivial to implement since it means watching a separate resource for changes, when that resource is referenced by a primary resource.

Is there a way to specify an annotation that results in the Deployments being recreated when the Secret is updated?

You could also have a post-deployment hook on your django deploymentconfig that triggers the postgres deploymentconfig.
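
Roughly, such a hook could look like this (a sketch, assuming the django image ships the oc client and the deployer service account is allowed to trigger rollouts; the dc names are illustrative):

# Rolling-strategy post hook on the django DeploymentConfig (sketch)
strategy:
  type: Rolling
  rollingParams:
    post:
      failurePolicy: Ignore
      execNewPod:
        containerName: django
        command:
          - oc
          - rollout
          - latest
          - dc/db   # the postgres DeploymentConfig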

or you could have your pipeline twiddle an otherwise unused value (dummy env var, or an annotation on the pod template) on the postgres deploymentconfig to force it to redeploy.
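
For instance (a sketch; the env var and annotation names are arbitrary), the pipeline could run something like the following, and a ConfigChange trigger on the postgres dc would then kick off a new deployment because its pod template changed:

STAMP=$(date +%s)
# bump a dummy env var so the postgres pod template changes
oc set env dc/db REDEPLOY_STAMP="$STAMP"
# or, equivalently, patch an annotation onto the pod template
oc patch dc/db -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy-stamp\":\"$STAMP\"}}}}}"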

rvishna commented Aug 7, 2020

@bparees I should've mentioned that I'm trying to do a rolling update on the Django container to avoid any downtime at all, but this doesn't seem possible with the current template (at least I don't know how). If I start the PostgreSQL container during a post-deployment hook of the Django DeploymentConfig, the old replication controller has already scaled to zero and the new replication controller is serving requests, so until the database container is ready, my application is not available. If I do it as a pre-deployment hook, the application becomes unavailable until my new replication controller has scaled up. I have 2 replicas for Django and 1 replica for PostgreSQL.

I have disabled all triggers on my deployments, so the only way new deployments are rolled out is manually (or through my CI pipeline). If I run the following steps in my CI, I would expect zero downtime, but I can confirm this is not the case.

oc process -f template.yml --param-file params.env | oc apply -f - # this step changes the database password secret
oc rollout latest django
oc rollout latest db

Here, the database container is not started during a pre/post hook, but right after oc rollout latest django returns. What I expect to happen:

  1. The first oc process command changes the secret.
  2. In all likelihood, the oc rollout latest db command recreates the postgres container first (before the django rollout finishes).
  3. While the old replication controller for django is scaling down, it may find that the database connection is broken because the new postgres container refuses the old password. However, as soon as the new replication controller has spun up 1 pod, my application should be available again.

What happens instead is that not only do I see a 500 error from my django app (expected, due to step 3 above), but for a period of time my application is simply unavailable (I get an OpenShift-generated page with the message "The application is currently not serving requests at this endpoint. It may not have been started or is still starting.").
