
[Question] How to assume IAM role inside the escalator pod? Getting 403 despite instructions #231

Open
FilipSwiatczak opened this issue Oct 10, 2023 · 10 comments
Labels
question Further information is requested

Comments

@FilipSwiatczak

Hello guys,
It's a wonderful project and I've almost got it working, having followed the Readme instructions in https://github.com/atlassian/escalator/blob/master/docs/deployment/aws/README.md.
I have these ticked off:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: escalator
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: escalator
      role: escalator
  template:
    metadata:
    # I'm really not sure all three are required as below: https://github.com/atlassian/escalator/blob/master/docs/deployment/aws/README.md#deployment
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::XXX:role/bitbucket-pipelines-escalator-role
      labels:
        app: escalator
        role: escalator
    spec:
      serviceAccountName: escalator
      containers:
      - image: atlassian/escalator
        command:
        - ./main
        - --nodegroups
        - /opt/conf/nodegroups/nodegroups_config.yaml
        - --cloud-provider
        - aws
        # this bit: https://github.com/atlassian/escalator/blob/master/docs/deployment/aws/README.md#sts-assume-role
        - --aws-assume-role-arn
        - arn:aws:iam::XXX:role/bitbucket-pipelines-escalator-role
        name: escalator
        ports:
        - containerPort: 8080
        env:
        # this bit: https://github.com/atlassian/escalator/blob/master/docs/deployment/aws/README.md#aws-credentials
        - name: AWS_ROLE_ARN
          value: arn:aws:iam::XXX:role/bitbucket-pipelines-escalator-role
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: AWS_REGION
          value: eu-west-1
        volumeMounts:
        - name: escalator-nodegroups
          mountPath: /opt/conf/nodegroups
          readOnly: true

Given all that, I'm still getting a 403 when attempting to assume the role:
AccessDenied: User: arn:aws:sts::XXX:assumed-role/eksctl-bitbucketpipelines-nodegro-NodeInstanceRole-XXX is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXX:role/bitbucket-pipelines-escalator-role\n\tstatus code: 403

  1. Am I missing something? Is the documentation complete?
  2. Other sources suggest creating an OIDC provider for the cluster (https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html). I've done that with eksctl and it has no impact on its own.
  3. Is there a specific trust relationship required on the IAM role before the escalator pod can assume it, please?

Any pointers would be much appreciated. Thank you!

@FilipSwiatczak FilipSwiatczak added the question Further information is requested label Oct 10, 2023
@awprice
Member

awprice commented Oct 10, 2023

Thanks for giving Escalator a go @FilipSwiatczak!

Based on the following error:

AccessDenied: User: arn:aws:sts::XXX:assumed-role/eksctl-bitbucketpipelines-nodegro-NodeInstanceRole-XXX is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXX:role/bitbucket-pipelines-escalator-role\n\tstatus code: 403

I'd say the trust relationship isn't set up correctly between the two roles to allow eksctl-bitbucketpipelines-nodegro-NodeInstanceRole-XXX to assume bitbucket-pipelines-escalator-role.

Have a look at this page on how to allow a role to assume another role: https://nelson.cloud/aws-iam-allowing-a-role-to-assume-another-role/. It has instructions for allowing a role to be assumed either in the same account or in a different account.

@awprice
Member

awprice commented Oct 10, 2023

I'd also like to mention that documentation on how to configure a role to assume another role is going to be missing from our documentation, as it depends on the configuration of the end user's cluster/AWS accounts and we can't cater for all scenarios.

@FilipSwiatczak
Author

Thanks @awprice, it worked with these changes:

  1. Run eksctl to create an OIDC provider for the cluster:
    eksctl utils associate-iam-oidc-provider --cluster <cluster-name> --approve --region <your-region>

  2. and then modify the trust relationship on your AWS role by adding the statement below. The principal is the exact name of the STS role the pod starts under; right now it can be gleaned from the initial error log on the pod:

        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:sts::ACCOUNT:assumed-role/eksctl-CLUSTER_NAME-nodegro-NodeInstanceRole-RANDOM_VALUE_PER_CLUSTER"
            },
            "Action": "sts:AssumeRole"
        }
  3. modify the policy which the role references with:
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::ACCOUNT:role/eksctl-CLUSTER_NAME-nodegro-NodeInstanceRole-*"
        }
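
If it helps anyone, the trust-policy edit in step 2 can also be applied from the CLI. A sketch, assuming the full trust policy document above has been saved to a local trust.json file (role name is a placeholder):

```shell
# Replace the role's trust (assume-role) policy with the updated document
aws iam update-assume-role-policy \
  --role-name bitbucket-pipelines-escalator-role \
  --policy-document file://trust.json
```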

So while this works, it's not fully automated, as I can't find a way to fetch the STS role the pod starts under from the cluster.
If you know how to do that, or how to structure this better, please share :)
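
One way to discover which role the pod actually starts under, a sketch assuming the AWS CLI is available in a container running on the node:

```shell
# Prints the ARN of the identity the pod's credentials resolve to,
# e.g. the assumed node instance role when no pod-level role is injected
aws sts get-caller-identity --query Arn --output text
```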

I've mostly raised this question to save other people time, so there is a copy-paste solution that is as easy as the rest of the instructions in the project Readme!

@FilipSwiatczak
Author

FilipSwiatczak commented Oct 20, 2023

Also @awprice, if escalator runs in the same node group that it controls, how can it prevent tainting its own node and forcing an escalator re-deployment? I really can't find an answer in the docs!
On scale down, using the oldest-first approach, my setup taints the original node on which the escalator pod runs first:

time="2023-10-20T16:10:42Z" level=info msg="Sent delete request to 1 nodes" nodegroup=bitbucketpipelines-on-demand-escalator
time="2023-10-20T16:10:42Z" level=info msg="Reaper: There were -1 empty nodes deleted this round"
time="2023-10-20T16:10:42Z" level=info msg="untainted nodes close to minimum (1). Adjusting taint amount to (0)"
time="2023-10-20T16:10:42Z" level=info msg="Scaling Down: tainting 0 nodes" nodegroup=bitbucketpipelines-on-demand-escalator
time="2023-10-20T16:10:42Z" level=info msg="Tainted a total of 0 nodes" nodegroup=bitbucketpipelines-on-demand-escalator
time="2023-10-20T16:11:08Z" level=info msg="Signal received: terminated"
time="2023-10-20T16:11:08Z" level=info msg="Stopping autoscaler gracefully"
time="2023-10-20T16:11:08Z" level=info msg="Stop signal received. Stopping cache watchers"
time="2023-10-20T16:11:08Z" level=fatal msg="main loop stopped"
rpc error: code = NotFound desc = an error occurred when try to find container "50d71de1cd6378c134bcc3870d3c378860855a379a40d3a7163cf4a913733a6a": not found%  

I apologise if those are noobish questions, I'm not a kubernetes expert! (yet!)

@FilipSwiatczak
Author

Using instance protection like:

# protect instance on which escalator is running from termination
aws autoscaling set-instance-protection --instance-ids XXX --auto-scaling-group-name eks-bitbucketpipelines-ng-on-demand-XXX --protected-from-scale-in --region eu-west-1

also does not work; the node is terminated after being tainted. Though even if it did work, it would probably leave escalator stuck trying to remove the node over and over.

@awprice
Member

awprice commented Oct 22, 2023

@FilipSwiatczak No problem!

So while this works, it's not fully automated as I can't find a way to fetch the sts role the pod starts under from the cluster.
If you know that or how to structure that better, please share :)

We tend to use IAM roles for service accounts on EKS, as this will prevent the need to deal with node instance roles. This documentation from AWS gives a good introduction and steps to use them: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
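
For reference, a minimal IRSA setup with eksctl might look like the following. This is a sketch: cluster name, policy ARN and account ID are placeholders, and flags may differ between eksctl versions:

```shell
# Create an OIDC provider for the cluster (safe to re-run)
eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve

# Create a Kubernetes service account bound to an IAM role via OIDC
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace kube-system \
  --name escalator \
  --attach-policy-arn arn:aws:iam::ACCOUNT:policy/escalator-policy \
  --approve
```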

if escalator runs in the same node group that it controls, how can it prevent tainting it's own node and forcing escalator re-deployment? Really can't find an answer in the docs! On scale down, using the Oldest-first approach, my setup taints the original node on which escalator pod runs first:

We avoid this by running multiple node groups in our clusters and placing Escalator on a node group that it doesn't scale, so it can never terminate the node it is running on.

Escalator is primarily designed for scaling node groups that are running job-based workloads - so ones that will end. Escalator itself could be considered a service based workload - meaning that it will run forever. So it isn't really the sort of thing that should be run on the node groups that Escalator is scaling.
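
A minimal sketch of pinning Escalator to a node group it doesn't scale, assuming the non-scaled nodes carry a hypothetical role: infra label:

```yaml
# In the Escalator Deployment's pod template spec:
spec:
  nodeSelector:
    role: infra   # only schedule onto the non-scaled node group
```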

@FilipSwiatczak
Author

We tend to use IAM roles for service accounts on EKS, as this will prevent the need to deal with node instance roles. This documentation from AWS gives a good introduction and steps to use them: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html

thank you @awprice! I've followed the above link, and at the very end of the pod checks I realised the Escalator pod does not have AWS_WEB_IDENTITY_TOKEN_FILE set.
Those docs suggest the amazon-eks-pod-identity-webhook is required to inject the token, but I suspect you are using kube2iam instead, right? Thanks again for your patience
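
In case it's useful to others: without the webhook, the same injection can be done by hand in the Deployment. A sketch based on what the pod identity webhook would add; the role ARN is a placeholder:

```yaml
env:
- name: AWS_ROLE_ARN
  value: arn:aws:iam::ACCOUNT:role/bitbucket-pipelines-escalator-role
- name: AWS_WEB_IDENTITY_TOKEN_FILE
  value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
volumeMounts:
- name: aws-iam-token
  mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
  readOnly: true
volumes:
- name: aws-iam-token
  projected:
    sources:
    - serviceAccountToken:
        audience: sts.amazonaws.com
        path: token
```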

@FilipSwiatczak
Author

It appears that when escalator is deployed in a separate node group, with the custom label escalator: worker at both node and pod level, escalator doesn't see any CPU or memory utilisation (0). It only works when it's in the same node group for me.

apiVersion: v1
kind: ConfigMap
metadata:
  name: escalator-config
  namespace: kube-system
data:
  nodegroups_config.yaml: |
    node_groups:
      - name: "bitbucketpipelines-ng-spot"
        label_key: "escalator"
        label_value: "worker"
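
For the record, my understanding is that the configured label pair must also be present on the worker nodes themselves, and on the pods Escalator should count, e.g. (node name is a placeholder):

```shell
# Escalator only considers nodes/pods carrying the configured label pair
kubectl label node ip-10-0-1-23.eu-west-1.compute.internal escalator=worker
```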

With this and the IAM injection issue I'm a bit stuck. Are there any more complete deployment examples in existence please?

@FilipSwiatczak
Author

When escalator is attempting to scale a node group different from the one it's deployed in, it throws:

time="2023-10-24T10:46:18Z" level=info msg="Node IP.eu-west-1.compute.internal, aws:///eu-west-1c/ID ready to be deleted" drymode=false nodegroup=bitbucketpipelines-ng-spot
time="2023-10-24T10:46:18Z" level=error msg="failed to terminate node in cloud provider IP.eu-west-1.compute.internal, aws:///eu-west-1c/ID" error="node ip.eu-west-1.compute.internal, aws:///eu-west-1c/id belongs in a different node group than eks-bitbucketpipelines-ng-spot-id"
time="2023-10-24T10:46:18Z" level=fatal msg="node ip.eu-west-1.compute.internal, aws:///eu-west-1c/id belongs in a different node group than eks-bitbucketpipelines-ng-spot-id"

@awprice
Member

awprice commented Oct 26, 2023

@FilipSwiatczak Some answers to your questions:

  • Escalator definitely works with IAM roles for service accounts, as we have it working that way at the moment, without kube2iam. You can either add that environment variable manually to the Escalator deployment yourself or rely on something like the pod identity webhook to add it automatically. Up to you, but adding it manually is much simpler.
  • In terms of running Escalator in a different node group: this is definitely possible, as we have it running this way internally. It's hard to say what the exact issue is without access to your cluster, but I would check the following: the labels on the nodes are correct, the nodeSelectors on the pods are correct, the nodeAffinities on the pods are correct, and the IAM permissions are correct. The values for all of these will depend on your environment, so I can't say what they should be set to. I'd also recommend having a read of https://github.com/atlassian/escalator/blob/master/docs/pod-node-selectors.md, as this explains how Escalator selects pods/nodes.
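
For anyone debugging the same thing, the first two checks above can be done with standard kubectl commands (a sketch; the namespace and label are placeholders matching the deployment earlier in this thread):

```shell
# Check the labels present on each node
kubectl get nodes --show-labels

# Check the nodeSelector actually applied to the Escalator pods
kubectl get pod -n kube-system -l app=escalator \
  -o jsonpath='{.items[*].spec.nodeSelector}'
```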
