
Feature request: Ability to add custom labels to EE pods #242

Open
RylandDeGregory opened this issue Apr 23, 2021 · 8 comments
Labels
help wanted Extra attention is needed type:enhancement New feature or request


RylandDeGregory commented Apr 23, 2021

I am attempting to use AAD Pod Identity to grant my AWX environment implicit access to Azure Key Vault using a user-assigned Managed Identity. One of the requirements is that the pods be labeled with the name of the Managed Identity to use, like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    aadpodidbinding: <MANAGED_IDENTITY_NAME>
```

The process works as expected; I validated it by running `kubectl label pod awx-pod aadpodidbinding=MANAGED_IDENTITY_NAME` against a running pod. However, AWX generates a new pod for each job, so the authentication only works if the operator can create the EE pods with that label already applied.

I understand that there are already variables that set pod labels for use in node selection, and we already have variables that control annotations for ingress resource definitions. My question is, can there be another AWX spec variable added to set custom labels on EE pods? I know that my ask is specifically about AAD pod identity, but I'm sure there are other use cases for user-defined pod labels.

@lukasertl

I'd like to add to this request the ability to add custom annotations to the EE pod.

With a custom annotation the EE could use the Hashicorp Vault agent injector to access secrets in Vault.
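For illustration, the Vault agent injector is driven by pod annotations. A minimal sketch following HashiCorp's documented annotation scheme (the role name and secret path here are placeholders, not values from this thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
  annotations:
    # Ask the injector webhook to add a Vault agent sidecar to this pod.
    vault.hashicorp.com/agent-inject: "true"
    # Vault role the agent authenticates as (placeholder).
    vault.hashicorp.com/role: "awx-ee"
    # Render the secret at this Vault path into /vault/secrets/credentials.
    vault.hashicorp.com/agent-inject-secret-credentials: "secret/data/awx/credentials"
```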

@lukasertl

Meanwhile I found that you can already do this by changing the pod definition in the Tower instance groups interface.

@RylandDeGregory
Author

@lukasertl thank you so much, that worked perfectly!

@tchellomello
Contributor

tchellomello commented Apr 29, 2021

Just for documentation purposes, here is a screenshot of how you add custom labels to your EE pods:

Instance Groups --> edit the instance group --> Customize pod specification

[screenshot: the instance group editor with "Customize pod specification" enabled]

So I guess we already have a place for this in AWX itself, which allows more freedom than setting it on the spec.

Are you good with this approach @RylandDeGregory?
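For reference, the pod spec entered in that "Customize pod specification" field can carry arbitrary labels. A minimal sketch (the namespace, identity name, and container details are placeholders modeled on the default container group spec, not values confirmed in this thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: awx
  labels:
    # Label consumed by AAD Pod Identity (placeholder value).
    aadpodidbinding: <MANAGED_IDENTITY_NAME>
spec:
  containers:
    - name: worker
      image: quay.io/ansible/awx-ee:latest
      args: ['ansible-runner', 'worker', '--private-data-dir=/runner']
```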

@tchellomello tchellomello added type:enhancement New feature or request and removed state:needs_info labels Apr 29, 2021
@RylandDeGregory
Author

RylandDeGregory commented Apr 29, 2021

Hey @tchellomello I understand that this is possible, and I've gotten it working using this approach.

But telling me to go make a pod spec customization in the GUI is not the answer I was looking for. This is squarely a code change, not a DB or UI change, and it should be possible to set in the code that I use to deploy AWX.

@tchellomello tchellomello added the help wanted Extra attention is needed label Apr 29, 2021
@tchellomello
Contributor

I hear you @RylandDeGregory. Since this can be overridden via the web UI, this change would need to be coordinated with AWX to define what happens when the value is set in both places.

For now, I'm leaving it open with help wanted; PRs are always welcome.

@pabelanger
Contributor

Looks like I may be hitting this issue too. Our use case is that we want to set up a dedicated node pool in AKS, specifically for execution environments, so we'd need the ability to configure all EE pods to use this label (and namespace), ideally via awx-operator.

I think the solution listed here is great, we just need to take the next step to expose it to automation.

kdelee added a commit to kdelee/awx that referenced this issue Nov 29, 2021
This will allow us to control the default container group created via settings, meaning
we could set this in the operator and the default container group would get created with it applied.

We need this for ansible/awx-operator#242
@kdelee
Member

kdelee commented Nov 29, 2021

@pabelanger for applying changes to the pod spec for the default container group (which is what gets applied to pods provisioned for jobs) I think we need something like ansible/awx#11395. I don't think any changes to the operator are necessary, as we can use extra_settings https://github.com/ansible/awx-operator#extra-settings

I'm not 100% sure of the formatting we will need if it is multi-line.

From my reading, #676 would apply to the awx deployment pods themselves. That is probably a good use case but I don't think it solves your problem
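For reference, the operator's extra_settings mechanism (documented in the awx-operator README) injects settings into the AWX deployment via the custom resource spec. A sketch of how a pod-spec override could be passed through it; the setting name here is a hypothetical placeholder pending ansible/awx#11395:

```yaml
spec:
  extra_settings:
    # Hypothetical setting name; the actual name depends on ansible/awx#11395.
    - setting: DEFAULT_CONTAINER_GROUP_POD_SPEC_OVERRIDE
      value: >-
        {"metadata": {"labels": {"aadpodidbinding": "my-identity"}}}
```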

kdelee added a commit to kdelee/awx that referenced this issue Dec 7, 2021
kdelee added a commit to kdelee/awx that referenced this issue Dec 10, 2021

Deepmerge the default podspec and the override

Without this, providing the `spec` for the podspec would override everything
it contains, which ends up including the container used, which is not desired.

Also, use the same deepmerge function definition, as the code seems to be copy-pasted from
the utils.
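The deep merge described above can be sketched in a few lines. This is an illustrative stand-in for the function the commit refers to, not AWX's actual implementation; the example pod specs are placeholders:

```python
def deepmerge(base, override):
    """Recursively merge override into base.

    Override wins on conflicts, but nested dicts are merged key-by-key
    instead of being replaced wholesale, so an override that only adds
    labels does not clobber the default container definition.
    """
    if isinstance(base, dict) and isinstance(override, dict):
        merged = dict(base)
        for key, value in override.items():
            merged[key] = deepmerge(base[key], value) if key in base else value
        return merged
    # Non-dict values (lists, scalars) are taken from the override as-is.
    return override


# Default pod spec keeps its container; the override only adds a label.
default_spec = {
    "metadata": {"namespace": "awx"},
    "spec": {"containers": [{"name": "worker", "image": "awx-ee:latest"}]},
}
override = {"metadata": {"labels": {"aadpodidbinding": "my-identity"}}}
merged = deepmerge(default_spec, override)
```

With a naive shallow merge, `override["metadata"]` would replace the default `metadata` entirely and drop the namespace; the recursive merge keeps both.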
TinLe pushed a commit to TinLe/awx that referenced this issue Jan 21, 2022

5 participants