
Pre populate data in opa container on startup. #221

Open
eshepelyuk opened this issue Jun 21, 2023 · 8 comments

Comments

eshepelyuk (Contributor) commented Jun 21, 2023

Hello @opalmer, created this issue from your comment in #211


I came to this issue from #207, which was marked as a duplicate of #189. After reading through the initial description, I'm not sure that this issue (#211) is going to address #207. I do agree that liveness probes that automatically restart the management container would be an improvement, but I don't think it will solve #207.

Some specific scenarios in which this approach might not work:

  • The time between the OPA container coming back online, the management container's health check failing (multiple times), and the management container being restarted (causing the policies to be pushed) can be seconds at best. During that time, the opa container will continue to process requests without any configuration being present.
  • The OPA container's own health check could fail and cause the container to be restarted; this can eventually cause the management container's health check to fail since it can't connect to the opa container. Like the point above, however, this will take time.

Even if it were less than a second that opa didn't have policies loaded, hundreds of requests could get through without being run through the proper policies. For something like opa where a policy could be blocking privileged containers, ensuring images can't come from an unknown registry, ensuring pods end up on the right nodes, etc this can have some major side effects from both a security and an operational perspective.

Now, I've tried thinking of ways to work around the current state:

  • Use the existing bootstrap & extra volume arguments in the chart to mount policies. We have a chart that creates configmaps with regos that are tagged so the management container can push those into opa. We could turn those into volumes too, except then we don't have a single source of truth, and if we update the policies, which are controlled by a different chart, we also need to update opa, causing a restart of the pods.
  • Add extra arguments to pull policy bundles from a remote source.

The extra argument path might work except the docs state:

By default, the OPA REST APIs will prevent you from modifying policy and data loaded via bundles. If you need to load policy and data from multiple sources, see the section below.

... so that would mean you can't really use remote bundles together with the current REST API approach? Looking around a bit more, I found #76, which talks about adding a bundle API. Then, after reading the ensuring operational readiness docs:

On the other hand, if you use the Bundle service OPA will start up without any policies and immediately start downloading a bundle. But even before the bundle has successfully downloaded, OPA will answer policy queries if asked (which is in every case except the bootstrap case the right thing to do).
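For context, pointing OPA at a remote bundle service is done through its configuration file. A minimal sketch of such a config; the service name, URL, resource path, and polling intervals here are placeholders, not values from this project:

```yaml
# Hypothetical OPA config: service name, URL, and bundle path are
# illustrative placeholders. Started with:
#   opa run --server --config-file config.yaml
services:
  mgmt:
    url: http://localhost:8080
bundles:
  authz:
    service: mgmt
    resource: bundles/bundle.tar.gz
    polling:
      min_delay_seconds: 10
      max_delay_seconds: 20
```

With this shape of config, OPA starts without policies and downloads the bundle asynchronously, which is exactly the readiness gap the quoted docs describe.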

So.... that got me thinking. What if there was an init container, or an entrypoint for the opa container, that either runs a subcommand of the management container command or reaches out to the management container (may have to wait for it to be up) and pulls down the policies and drops those on disk before opa starts?

This way, when opa comes up, all the policies are pre-loaded and it can't serve a request without those policies. The management container could then continue as normal, pushing policies as they are updated in Kubernetes. This would also have the advantage that you still have a single source of truth (config maps), and if there's something else wrong (k8s API issues, kubelet problems, etc.) your pod will never be in a ready state. This should also tightly couple the chain of events leading up to opa starting, so no matter how the pod dies or how the chart is configured, it should always load the policies you've defined, every time.

... at least that's the idea. I'll admit, I'm not an OPA expert so I could be missing a glaring issue with this idea somewhere. If that's the case, happy to learn something new haha 😄

@eshepelyuk eshepelyuk changed the title to Pre populate data in opa container on startup. Jun 21, 2023
eshepelyuk (Contributor, Author) commented Jun 21, 2023

Hello @opalmer

I would like to propose a different view on the solution of the issue you've described.
It is my personal preference because I assume it's easier to implement.

  1. The solution is to introduce a new operational mode for the kube-mgmt binary, let's call it "one-shot".
  2. Currently kube-mgmt works as a daemon, listening to k8s events and calling the OPA REST API.
  3. In one-shot mode, enabled by new command line options, kube-mgmt will instead execute a one-time query against the k8s API to get all data / policy config maps, as well as any other k8s resources that have to be replicated, serializing them to a directory in the file system as an OPA bundle. And then exit.
  4. This new mode can later be used as an init container for the OPA container. Via a shared volume, and OPA container configuration that loads the bundle from that volume, we can guarantee that the OPA container will start with pre-populated data.
  5. The OPA container should be configured not to reload data from the file system, and the readiness probe should be configured to not allow traffic until the init container completes.
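Sketched as a pod spec, the steps above might look like this. The `--one-shot` and `--out-dir` flags are hypothetical (they are what this issue proposes, not flags kube-mgmt has today), and image names are illustrative:

```yaml
# Sketch only: --one-shot / --out-dir do not exist yet; this is the
# proposed behavior, not a working manifest.
apiVersion: v1
kind: Pod
metadata:
  name: opa
spec:
  volumes:
    - name: bundle
      emptyDir: {}
  initContainers:
    - name: kube-mgmt-init
      image: openpolicyagent/kube-mgmt   # with hypothetical one-shot support
      args: ["--one-shot", "--out-dir=/bundle"]
      volumeMounts:
        - name: bundle
          mountPath: /bundle
  containers:
    - name: opa
      image: openpolicyagent/opa
      args: ["run", "--server", "--bundle", "/bundle"]
      volumeMounts:
        - name: bundle
          mountPath: /bundle
    - name: kube-mgmt
      image: openpolicyagent/kube-mgmt
```

The shared emptyDir couples the two steps: the opa container cannot start until the init container has written the bundle, so opa never serves a request without policies on disk.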

opalmer commented Jun 21, 2023

That's pretty much exactly what I was going for, the only difference would be a step 6:

  6. kube-mgmt sidecar will start but not run the kube-mgmt binary until opa is up and has been loaded with policies.

This could maybe be accomplished by sharing a volume between the two containers and having kube-mgmt in the opa container write out a file after the rules are loaded. Or kube-mgmt could simply attempt to POST rules to opa until it's ready (all the while, kube-mgmt's health check would fail until that's successful).

eshepelyuk (Contributor, Author) commented Jun 21, 2023

> That's pretty much exactly what I was going for, the only difference would be a step 6:
>
>   6. kube-mgmt sidecar will start but not run the kube-mgmt binary until opa is up and has been loaded with policies.
>
> This could maybe be accomplished by sharing a volume between the two containers and having kube-mgmt in the opa container write out a file after the rules are loaded. Or kube-mgmt could simply attempt to POST rules to opa until it's ready (all the while, kube-mgmt's health check would fail until that's successful).

This is already implemented in #210, i.e. kube-mgmt is not started until the OPA readiness check passes.

eshepelyuk (Contributor, Author) commented:

@opalmer are you able / willing to contribute?

opalmer commented Jun 21, 2023

I can give it a shot, yeah. It may not be immediate, but given this is an issue I've come across in production, I should be able to make time for it.

I'll dig into the code base and see if I've got questions about where to start. If you have any pointers though in terms of the bits I'll want to look at first, that would be helpful.

eshepelyuk (Contributor, Author) commented Jun 21, 2023

> I can give it a shot, yeah. It may not be immediate, but given this is an issue I've come across in production, I should be able to make time for it.
>
> I'll dig into the code base and see if I've got questions about where to start. If you have any pointers though in terms of the bits I'll want to look at first, that would be helpful.

Hello

I'd appreciate it very much if you can try to address this.

I am not a Golang dev, so I won't be able to suggest anything useful; I usually work in trial-and-error mode.
From a testing perspective, take a look at the justfile; you'll find commands there to set up a local k8s cluster, and to build and test the project.

Also, on the Helm side, you will need to create unit tests for chart rendering (test/unit) and e2e tests (test/e2e); we can discuss scenarios later.

opalmer commented Jun 21, 2023

No worries, that's enough to start with, thanks!

I dove into some of the code yesterday while trying to track down exactly what the management container was doing, so I'm familiar enough that I can start with that, I think. If you have some specifics, that would be great, but I spend most of my day in Go so I can dig around. Pointers to those tests are useful though!

saranyareddy24 (Contributor) commented:

Hi @opalmer, I guess you are working on this issue. We are also waiting for this fix.
