
API Authorization from Outside EKS Cluster throws Unauthorized error after 20+ minutes #590

Open
Zhenye-Na opened this issue Apr 12, 2023 · 8 comments

@Zhenye-Na

We have tried to implement, in Go, a method similar to the one described in https://github.com/kubernetes-sigs/aws-iam-authenticator#api-authorization-from-outside-a-cluster:

func GetEKSToken(ctx context.Context, clusterName string) (*KubeToken, error) {
	// Build an sts:GetCallerIdentity request and add the cluster name header
	// expected by aws-iam-authenticator.
	request, _ := NewStsClientFrom(ctx).GetCallerIdentityRequest(&sts.GetCallerIdentityInput{})
	request.HTTPRequest.Header.Add("x-k8s-aws-id", clusterName)

	// Presign the request for 60 seconds.
	presignUrl, err := request.Presign(60 * time.Second)
	if err != nil {
		return nil, err
	}

	return &KubeToken{
		Kind:       "ExecCredential",
		ApiVersion: "client.authentication.k8s.io/v1beta1",
		Status: &KubeTokenStatus{
			ExpirationTimestamp: time.Now().Local().Add(time.Hour * time.Duration(1)).Format("2006-01-02T15:04:05Z"),
			// base64.RawURLEncoding already omits '=' padding.
			Token: "k8s-aws-v1." + base64.RawURLEncoding.EncodeToString([]byte(presignUrl)),
		},
	}, nil
}

It works fine.

However, we are seeing an intermittent issue where the Kubernetes client created with this token throws an Unauthorized error when performing Kubernetes operations, for example the equivalent of kubectl get nodes.

We are using https://kubernetes.io/docs/reference/config-api/client-authentication.v1beta1/#client-authentication-k8s-io-v1beta1-ExecCredential instead of passing the token directly in a header, as in the tutorial's headers = {'Authorization': 'Bearer ' + get_bearer_token('my_cluster', 'us-east-1')}.
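For reference, a minimal sketch of how a token like the one returned by GetEKSToken above can be emitted as an ExecCredential by an exec plugin. This assumes the KubeToken and KubeTokenStatus types carry JSON tags matching the v1beta1 ExecCredential schema (kind, apiVersion, status.token, status.expirationTimestamp); "my-cluster" is a placeholder:

package main

import (
	"context"
	"encoding/json"
	"log"
	"os"
)

func main() {
	tok, err := GetEKSToken(context.Background(), "my-cluster")
	if err != nil {
		log.Fatal(err)
	}
	// kubectl / client-go exec plugins read the ExecCredential JSON from stdout.
	if err := json.NewEncoder(os.Stdout).Encode(tok); err != nil {
		log.Fatal(err)
	}
}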

I am wondering what the potential reason for this Unauthorized error could be. The API calls succeed for almost 20 minutes and then this error is suddenly thrown.

I am thinking:

  1. Does the bearer token expire exactly at the timestamp defined in ExpirationTimestamp, or after some additional time delta? Currently we configure ExpirationTimestamp to be 1 hour after the token is generated. Does this conflict with the STS presign duration of 60 seconds? (See the sketch after this list.)
  2. The README notes that "the IAM Authenticator explicitly omits base64 padding to avoid any = characters thus guaranteeing a string safe to use in URLs", and its Python example
    # remove any base64 encoding padding:
    return 'k8s-aws-v1.' + re.sub(r'=*', '', base64_url)

explicitly replaces = with an empty string. This stripping is absent from our Go method, yet so far everything related to Kubernetes operations has been working fine.
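On question 1, a minimal sketch of tying the reported ExpirationTimestamp to the presign lifetime rather than a fixed hour. The 15-minute window and one-minute safety buffer are assumptions (the upstream authenticator presigns for a window on this order), not the documented behavior of EKS or STS:

import "time"

// Sketch only: derive the ExecCredential expiration from the presign
// duration instead of hardcoding one hour. The constants are assumptions.
const presignDuration = 15 * time.Minute

func tokenExpiration(now time.Time) string {
	// Report a slightly earlier expiry than the presigned URL so the client
	// refreshes the credential before the URL itself can be rejected.
	return now.Add(presignDuration - 1*time.Minute).UTC().Format(time.RFC3339)
}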

I am also looking for feedback on whether there is anything else I am missing.

@Zhenye-Na
Author

Bumping this issue again since there have been no replies after a month.

@iamnoah

iamnoah commented May 18, 2023

Also seeing this. We see a window where a generated token sometimes gets Unauthorized for 3-5 minutes before it expires. The structure of the Go client auth plugins keeps us from detecting the problem and regenerating the token, so we have short-lived outages. It seems to happen after about an hour.

Our client is running in an EKS cluster using IRSA and communicating with a different EKS cluster.

@iamnoah

iamnoah commented May 19, 2023

I think I understand what is going on in our case. We are using the token.Generator without passing in a Session, in a k8s Pod using IRSA. The IAM Role's MaxSessionDuration is 1 hour. So what happens is:

  1. t0: Pod Starts Up
    a. Using the WEB_IDENTITY_TOKEN_FILE, does an sts:AssumeRoleWithWebIdentity, getting a session that is valid for 1 hour
    b. Using that session, generate an EKS token (by presigning a GetCallerIdentity request)
  2. +14m - token expires, the same session is used to generate a new token
  3. +28m, 42m, 56m - generate a new token with the same session
  4. At 1 hour, the original session is expired, but the last token we generated says it has 10m before it expires.
  5. Around 1h3m, EKS starts to reject the last generated token. Not sure why there is a 3m delay, but it's pretty consistent. Probably some kind of grace period in STS.
  6. When that token expires, the SDK detects that the session is also expired, creates a new session, and uses that to generate a new token and everything is fine again.

So the problem is that if you presign with a session that is about to expire, the resulting URL is valid for less than the 15 minutes assumed in the code. The correct expiration would be min(session.Expiry, time.Now() + 15m).
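A rough sketch of that clamping, assuming aws-sdk-go v1, where the session's credentials expose their expiry via ExpiresAt() when the underlying provider supports it; the 15-minute constant and names are placeholders:

import (
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
)

// Sketch: clamp the token expiry reported to Kubernetes to the assumed-role
// session's own expiry, so a token minted from a nearly-expired session is
// not advertised as valid for the full 15 minutes.
func clampedExpiry(sess *session.Session, now time.Time) time.Time {
	expiry := now.Add(15 * time.Minute)
	if credExpiry, err := sess.Config.Credentials.ExpiresAt(); err == nil && credExpiry.Before(expiry) {
		expiry = credExpiry
	}
	return expiry
}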

@iamnoah

iamnoah commented May 19, 2023

One workaround is to expire the session early:

s, err := session.NewSessionWithOptions(session.Options{
	SharedConfigState: session.SharedConfigEnable,
	CredentialsProviderOptions: &session.CredentialsProviderOptions{
		WebIdentityRoleProviderOptions: func(provider *stscreds.WebIdentityRoleProvider) {
			// When the session expires, pre-signed tokens seem to become invalid within 3 minutes,
			// even if they were created <15 minutes ago. Expiring the session 12.5 minutes early
			// should keep the token from falling into this window.
			provider.ExpiryWindow = 12*time.Minute + 30*time.Second
		},
	},
})
tok, err := gen.GetWithOptions(&token.GetTokenOptions{
	Session: s,
	// set ClusterID, etc.
})
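For context, gen above is presumably a token.Generator from aws-iam-authenticator's pkg/token package; a minimal sketch of the surrounding setup, under that assumption ("my-cluster" is a placeholder):

import (
	"github.com/aws/aws-sdk-go/aws/session"

	"sigs.k8s.io/aws-iam-authenticator/pkg/token"
)

func newEKSToken(s *session.Session) (token.Token, error) {
	// NewGenerator(forwardSessionName, cache): both disabled here.
	gen, err := token.NewGenerator(false, false)
	if err != nil {
		return token.Token{}, err
	}
	return gen.GetWithOptions(&token.GetTokenOptions{
		ClusterID: "my-cluster",
		Session:   s,
	})
}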

(last comment, sorry for hijacking this ticket)

iamnoah added a commit to SecurityJourney/aws-iam-authenticator that referenced this issue May 19, 2023
Fixes kubernetes-sigs#590

This comes up when using a long-lived `token.Generator` instance where the underlying assume-role session might expire, invalidating the token.
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 20, 2024
@Zhenye-Na
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 21, 2024
@hjkatz

hjkatz commented Feb 6, 2024

We're also experiencing this issue using kubectl --kubeconfig with these settings:

        {
            "name": "example-user",
            "user": {
                "exec": {
                    "command": "aws",
                    "args": [
                        "--profile",
                        "my-profile",
                        "--region",
                        "us-west-2",
                        "eks",
                        "get-token",
                        "--cluster-name",
                        "my-cluster"
                    ],
                    "env": [],
                    "apiVersion": "client.authentication.k8s.io/v1beta1",
                    "provideClusterInfo": false
                }
            }
        }

Should we enable any additional config options?

@pravinchandar

Hey folks, I'm running into this issue as well, wondering if there's an update?

@iamnoah I also tried your patch, but I'm still seeing Unauthorized a few minutes after the pod starts.
