No way to tackle Karpenter when KMS is mandatory by default #3037
Comments
Can you format your code by surrounding it with code fences (```hcl) and provide pseudocode of what you are trying to do? It's not very clear what you are trying to do or how you are approaching it. |
Hi Bryan, thanks for your response. I added the Karpenter code too. Does that explain it now? |
Not really - there are a bunch of variables that are unknown. If you are trying to re-use the cluster KMS key (the one used to encrypt cluster secrets), you will need to add the permissions necessary for use with EBS volumes - the module does not do this by default. |
Yes, I am trying to re-use the cluster KMS key, and I need to know the right way to tackle this, since when I add permissions to the key outside of the module, they get overwritten on the next Terraform run. I think creating a separate key for Karpenter might be a better choice. Let me try that out; meanwhile, you can comment on whether that's the best way. |
Why not just add the required permissions to the key that is created via Terraform? Check the variables that the module provides. |
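A minimal sketch of that approach, assuming the module's `kms_key_source_policy_documents` input (named later in this issue) accepts rendered policy-document JSON, and mirroring the key-policy statement from the Karpenter troubleshooting guide; the name `ebs_via_ec2` is illustrative:

```hcl
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

# Extra key-policy statement allowing principals in this account to use the
# cluster key through EC2/EBS, per the Karpenter troubleshooting guide.
data "aws_iam_policy_document" "ebs_via_ec2" {
  statement {
    sid    = "AllowAccessThroughEBS"
    effect = "Allow"

    # KMS key policies require a Principal; the aws_iam_policy_document data
    # source supports this via principals blocks.
    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    actions = [
      "kms:Encrypt",
      "kms:Decrypt",
      "kms:ReEncrypt*",
      "kms:GenerateDataKey*",
      "kms:CreateGrant",
      "kms:DescribeKey",
    ]
    resources = ["*"]

    # Scope the wildcard principal down to EBS calls from this account.
    condition {
      test     = "StringEquals"
      variable = "kms:ViaService"
      values   = ["ec2.${data.aws_region.current.name}.amazonaws.com"]
    }
    condition {
      test     = "StringEquals"
      variable = "kms:CallerAccount"
      values   = [data.aws_caller_identity.current.account_id]
    }
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  # ... cluster configuration elided ...

  # Merged into the policy of the KMS key the module creates, so the extra
  # statement survives subsequent applies instead of being overwritten.
  kms_key_source_policy_documents = [data.aws_iam_policy_document.ebs_via_ec2.json]
}
```

Because the statement is merged into the key policy the module manages, it is not overwritten on the next `terraform apply`, which addresses the drift problem described above.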
That indeed worked, thanks! Maybe add this to the documentation somewhere so it will help others facing the same issue later on. |
The variables are in the documentation - any variable definitions are automatically added to our documentation |
I meant mentioning it here in the example, the same way it is described there. |
We can't duplicate all of the docs for Karpenter, EKS, MNG, Fargate, etc.; we focus on documentation related to the module itself. |
@zohairraza How did you get this resolved? I am having the same issue. |
By creating another key dedicated to Karpenter:

```hcl
# Dedicated KMS key for Karpenter-provisioned volumes.
# local.merged_policy is defined elsewhere in my config (not shown here).
resource "aws_kms_key" "KarpenterKMSKey" {
  description = "Karpenter KMS Key"
  policy      = local.merged_policy
  depends_on  = [module.eks]
}

resource "aws_kms_alias" "KarpenterKMSKey" {
  name          = "alias/eks-karpenter-key"
  target_key_id = aws_kms_key.KarpenterKMSKey.key_id
}

# EC2NodeClass pointing blockDeviceMappings at the dedicated key.
resource "kubectl_manifest" "karpenter_node_class" {
  yaml_body = <<-YAML
    apiVersion: karpenter.k8s.aws/v1beta1
    kind: EC2NodeClass
    metadata:
      name: default
    spec:
      amiFamily: AL2
      role: ${module.karpenter.node_iam_role_name}
      subnetSelectorTerms:
        - tags:
            karpenter.sh/discovery: ${module.eks.cluster_name}
      securityGroupSelectorTerms:
        - tags:
            karpenter.sh/discovery: ${module.eks.cluster_name}
      tags:
        karpenter.sh/discovery: ${module.eks.cluster_name}
      blockDeviceMappings:
        - deviceName: /dev/xvda
          ebs:
            volumeSize: 30Gi
            volumeType: gp3
            iops: 10000
            encrypted: true
            kmsKeyID: ${aws_kms_key.KarpenterKMSKey.key_id}
            deleteOnTermination: true
            throughput: 125
  YAML

  depends_on = [
    helm_release.karpenter
  ]
}
``` |
Just to be clear - there is no KMS key for Karpenter. There are two use cases where KMS keys are involved within EKS: encrypting secrets within the cluster, and encrypting the EBS volumes of nodes.
If you want to re-use the KMS key created by this module for encrypting secrets within the cluster, you MUST update the key policy to ensure it will work for encrypting EBS volumes with whatever solution is creating the instances (EKS managed node group, self-managed node group, Karpenter, etc.). (See lines 238 to 243 at f90f15e.)
|
@zohairraza thanks for the response.
(Referencing terraform-aws-eks/examples/eks_managed_node_group/main.tf, lines 429 to 451, at f90f15e.)
The issue I am facing is that when Karpenter tries to scale up an instance, the instance gets terminated immediately. On further investigation, it seems to use the default KMS key set on the account instead of the one created as part of this module, even though on the launch template I can see the new EBS KMS key being used. This is how the EC2NodeClass looks:
By the way, I am trying to use Pod Identity (not sure if any extra settings are needed). Thanks |
Looks like a configuration error on your end - I would check the Karpenter documentation: https://karpenter.sh/docs/troubleshooting/#node-terminates-before-ready-on-failed-encrypted-ebs-volume |
Thanks, I did look into it but couldn't resolve it. I have added that policy to my KMS key; still the same. |
Have you set a default on the account/region? https://docs.aws.amazon.com/cli/latest/reference/ec2/get-ebs-default-kms-key-id.html |
How can Karpenter pick a random/default KMS key, even though the EC2NodeClass clearly says to use the KMS key created as part of this module? Actually, I can see from the get-ebs-default-kms-key-id command that the AWS-managed alias/aws/ebs key is set as the default, so back to square one... how is it even getting that KMS key? :( I am missing a critical point here. |
Most likely your account is configured with encryption by default: https://docs.aws.amazon.com/ebs/latest/userguide/work-with-ebs-encr.html#encryption-by-default |
Maybe that's the case... any idea how I can override it and make it use the newly created KMS key? Also, I am using the create_instance_profile option with Karpenter (not sure if that makes any difference). |
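For what it's worth, the account-level encryption-by-default settings linked above can also be managed from Terraform rather than the CLI; a short sketch using the AWS provider's `aws_ebs_encryption_by_default` and `aws_ebs_default_kms_key` resources, referencing the dedicated key created earlier in this thread:

```hcl
# Keep EBS encryption-by-default enabled for this account/region, but point
# the account default at our own CMK instead of the AWS-managed alias/aws/ebs.
resource "aws_ebs_encryption_by_default" "this" {
  enabled = true
}

resource "aws_ebs_default_kms_key" "this" {
  key_arn = aws_kms_key.KarpenterKMSKey.arn
}
```

Note this changes the default for every EBS volume in the region, so it may not be appropriate when other workloads rely on alias/aws/ebs.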
That was my situation too, so I created another key dedicated to Karpenter and used it in the Karpenter configuration, which worked. Before, I was using the EKS module cluster key in the Karpenter NodePool.
|
The issue with my setup was related to the KMS key used while creating the AMI. Even though I specified the KMS key in the managed node group and the EC2NodeClass, it couldn't re-encrypt because the Karpenter role lacked permission for the KMS key the AMI was created with. My initial idea was not to use one KMS key for all the clusters, but this issue has brought me back to square one. |
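A sketch of one way to address that last point, under the assumption that the AMI's snapshots were encrypted with a CMK you control, and that the Karpenter controller role is the principal launching nodes (`module.karpenter.iam_role_arn` is assumed to be the submodule's controller-role output):

```hcl
# Key-policy statement for the AMI's CMK, allowing the role that launches
# Karpenter nodes to decrypt the AMI's snapshots and re-encrypt root volumes.
# Merge this document into the AMI key's policy wherever that key is managed.
data "aws_iam_policy_document" "ami_key_use" {
  statement {
    sid    = "AllowKarpenterUseOfAmiKey"
    effect = "Allow"

    principals {
      type        = "AWS"
      identifiers = [module.karpenter.iam_role_arn] # assumed controller-role output
    }

    actions = [
      "kms:Decrypt",
      "kms:DescribeKey",
      "kms:GenerateDataKey*",
      "kms:ReEncrypt*",
      "kms:CreateGrant",
    ]

    # In a key policy, "*" refers to the key the policy is attached to.
    resources = ["*"]
  }
}
```

Which principal actually needs the grant depends on who calls RunInstances in your setup, so verify against CloudTrail before applying.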
Description
In my AWS account, KMS encryption is mandatory for EBS volumes. According to the Karpenter documentation (https://karpenter.sh/docs/troubleshooting/#node-terminates-before-ready-on-failed-encrypted-ebs-volume), when encountering issues with encrypted EBS volumes, an additional policy needs to be added to the KMS Key of the EKS cluster. However, I encountered difficulties implementing this additional policy using the kms_key_source_policy_documents feature provided by the Terraform AWS EKS module.
The issue arises from the fact that kms_key_source_policy_documents expects an IAM policy document as input, but the policy on the KMS key includes a Principal, which identity-based IAM policies do not support. When I attempted to create an IAM policy containing the Principal, I received the following error: "MalformedPolicyDocument: Policy document should not specify a principal."
Additionally, since the KMS key is created by the module, any modifications made to it are overwritten by subsequent Terraform runs. I have been unable to find a solution to ignore these changes.
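For what it's worth, the Principal does not have to pass through an aws_iam_policy resource at all. A hedged sketch, assuming kms_key_source_policy_documents accepts raw policy-document JSON strings and that `module.karpenter.node_iam_role_arn` is the node-role output from the Karpenter submodule:

```hcl
# KMS key policies accept Principal elements; identity-based IAM policies
# (aws_iam_policy) reject them, which is what produces the
# "MalformedPolicyDocument" error when the statement is first created as an
# IAM policy. Building the JSON directly avoids that resource entirely.
locals {
  karpenter_ebs_statement = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowKarpenterNodesToUseKey"
      Effect    = "Allow"
      Principal = { AWS = module.karpenter.node_iam_role_arn } # assumed output
      Action = [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:GenerateDataKey*",
        "kms:ReEncrypt*",
        "kms:CreateGrant",
      ]
      Resource = "*"
    }]
  })
}

# Then, on the module:
#   kms_key_source_policy_documents = [local.karpenter_ebs_statement]
```

The aws_iam_policy_document data source (which has a principals block) works equally well, as in the sketch earlier in the thread.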
Versions
Module version [Required]: v20.10.0
Terraform version: v1.5.7
Provider version(s): 5.40.0
Reproduction Code [Required]
Steps to Reproduce the Behavior
1. Enable KMS encryption by default for EBS volumes.
2. Use the Terraform AWS EKS module to manage the EKS cluster.
3. Attempt to add an additional policy to the KMS key using kms_key_source_policy_documents.
4. Encounter difficulties due to the inclusion of a Principal in the KMS key policy.
Expected Behavior
I expected to be able to seamlessly add an additional policy to the KMS Key of the EKS cluster, as recommended in the Karpenter documentation, without encountering errors related to IAM policy definitions.
Actual Behavior
I encountered difficulties when attempting to create an IAM policy containing the Principal that the KMS key policy requires. This resulted in a "MalformedPolicyDocument" error.
Additional Context
I have already cleared the local cache and ensured that I am not using workspaces. Any assistance in resolving this issue would be greatly appreciated.