Error: Unauthorized #340

mc-stack88 opened this issue Sep 1, 2023 · 0 comments

  • [x] I am running the latest version
  • [x] I checked the documentation and found no answer
  • [x] I checked to make sure that this issue has not already been filed

Expected Behavior

terragrunt run-all apply should complete the deployment without errors.

Current Behavior

The command terragrunt run-all apply terminates with Error: Unauthorized while creating the resource kubernetes_ingress_v1.default[0] in layer2-k8s; layer 1 completes successfully.

Failure Information (for bugs)

  1. When running terragrunt apply on the second layer, the apply fails after the ingress has been creating for over 16 minutes:
    kubernetes_ingress_v1.default[0]: Still creating... [16m31s elapsed]
    kubernetes_ingress_v1.default[0]: Still creating... [16m41s elapsed]

    │ Error: Unauthorized

    │ with kubernetes_ingress_v1.default[0],
    │ on eks-aws-loadbalancer-controller.tf line 419, in resource "kubernetes_ingress_v1" "default":
    │ 419: resource "kubernetes_ingress_v1" "default" {


    ERRO[1636] Terraform invocation failed in /Users/mcstack88/Desktop/withub.nosync/terraform-k8s/terragrunt/demo/us-east-1/k8s-addons/.terragrunt-cache/6HZ7k4s-6z_huPJiLJAHl27bkk0/CbQdP3lVuAib8elqdxtBj0ChKcA/layer2-k8s prefix=[/Users/mcstack88/Desktop/withub.nosync/terraform-k8s/terragrunt/demo/us-east-1/k8s-addons]
    ERRO[1636] 1 error occurred:
    * exit status 1
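
For context, the plan log further below shows this layer authenticating through data.aws_eks_cluster_auth.main, and tokens obtained that way are only valid for roughly 15 minutes, so an apply that runs longer than that (16m+ here) can outlive the credential and start receiving Unauthorized. A minimal sketch of that kind of token-based provider wiring, where the data source and variable names are assumptions and not the exact repo code:

```hcl
# Hypothetical token-based provider wiring; names are assumptions.
variable "eks_cluster_name" {
  type = string
}

data "aws_eks_cluster" "main" {
  name = var.eks_cluster_name
}

data "aws_eks_cluster_auth" "main" {
  name = var.eks_cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
  # This token is fetched once when the run starts and expires after roughly
  # 15 minutes, which matches the 16m+ timeline in the log above.
  token                  = data.aws_eks_cluster_auth.main.token
}
```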


  • Affected module version:
  • OS: macOS 13.4.1 (c)
  • Terraform version: Terraform v1.4.4
    on darwin_arm64

Any other relevant info including logs

Running terragrunt plan on layer 2 also fails, but with a different error:
(base) mcstack88@192 k8s-addons % terragrunt plan
random_string.kube_prometheus_stack_grafana_password[0]: Refreshing state... [id=47CsS:9pzmU&9[z:eCtD]
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-podmonitors.yaml"]: Reading...
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-probes.yaml"]: Reading...
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-alertmanagers.yaml"]: Reading...
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-alertmanagerconfigs.yaml"]: Reading...
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-thanosrulers.yaml"]: Reading...
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-prometheusrules.yaml"]: Reading...
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-servicemonitors.yaml"]: Reading...
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-prometheuses.yaml"]: Reading...
tls_private_key.aws_loadbalancer_controller_webhook[0]: Refreshing state... [id=ad7c55e8386198b5a5f15994ddf0cff313d2c119]
tls_private_key.aws_loadbalancer_controller_webhook_ca[0]: Refreshing state... [id=c6030f289eddab5b8915639128508318edbd17c0]
tls_self_signed_cert.aws_loadbalancer_controller_webhook_ca[0]: Refreshing state... [id=337862517115337565471544379678345511710]
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-prometheusrules.yaml"]: Read complete after 1s [id=https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-prometheusrules.yaml]
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-servicemonitors.yaml"]: Read complete after 1s [id=https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-servicemonitors.yaml]
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-podmonitors.yaml"]: Read complete after 1s [id=https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-podmonitors.yaml]
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-probes.yaml"]: Read complete after 1s [id=https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-probes.yaml]
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-alertmanagerconfigs.yaml"]: Read complete after 1s [id=https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-alertmanagerconfigs.yaml]
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-thanosrulers.yaml"]: Read complete after 1s [id=https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-thanosrulers.yaml]
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-alertmanagers.yaml"]: Read complete after 1s [id=https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-alertmanagers.yaml]
data.http.kube_prometheus_stack_operator_crds["https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-prometheuses.yaml"]: Read complete after 1s [id=https://raw.githubusercontent.com/prometheus-community/helm-charts/kube-prometheus-stack-45.11.0/charts/kube-prometheus-stack/crds/crd-prometheuses.yaml]
data.aws_caller_identity.current: Reading...
module.aws_iam_aws_loadbalancer_controller[0].aws_iam_role.this: Refreshing state... [id=withub-k8s-demo-use1-aws-lb-controller20230901081013670000000002]
module.aws_iam_kube_prometheus_stack_grafana[0].aws_iam_role.this: Refreshing state... [id=withub-k8s-demo-use1-grafana20230901081015300600000003]
module.aws_iam_autoscaler[0].aws_iam_role.this: Refreshing state... [id=withub-k8s-demo-use1-autoscaler20230901081013171200000001]
data.aws_eks_cluster_auth.main: Reading...
data.aws_secretsmanager_secret.infra: Reading...
data.aws_eks_cluster.main: Reading...
data.aws_eks_cluster_auth.main: Read complete after 0s [id=withub-k8s-demo-use1]
data.aws_caller_identity.current: Read complete after 1s [id=584058546818]
data.aws_eks_cluster.main: Read complete after 1s [id=withub-k8s-demo-use1]
module.ingress_nginx_namespace[0].kubernetes_namespace.this[0]: Refreshing state... [id=ingress-nginx]
module.cluster_autoscaler_namespace[0].kubernetes_namespace.this[0]: Refreshing state... [id=cluster-autoscaler]
module.aws_load_balancer_controller_namespace[0].kubernetes_namespace.this[0]: Refreshing state... [id=aws-load-balancer-controller]
module.external_secrets_namespace[0].kubernetes_namespace.this[0]: Refreshing state... [id=external-secrets]
module.reloader_namespace[0].kubernetes_namespace.this[0]: Refreshing state... [id=reloader]
module.aws_node_termination_handler_namespace[0].kubernetes_namespace.this[0]: Refreshing state... [id=aws-node-termination-handler]
data.aws_secretsmanager_secret.infra: Read complete after 2s [id=arn:aws:secretsmanager:us-east-1:584058546818:secret:/withub-k8s-demo/infra/layer2-k8s-H7n6V3]
kubernetes_storage_class.advanced: Refreshing state... [id=advanced]
module.fargate_namespace.kubernetes_namespace.this[0]: Refreshing state... [id=fargate]
module.kube_prometheus_stack_namespace[0].kubernetes_namespace.this[0]: Refreshing state... [id=monitoring]
kubectl_manifest.kube_prometheus_stack_operator_crds["prometheuses.monitoring.coreos.com"]: Refreshing state... [id=/apis/apiextensions.k8s.io/v1/customresourcedefinitions/prometheuses.monitoring.coreos.com]
kubectl_manifest.kube_prometheus_stack_operator_crds["servicemonitors.monitoring.coreos.com"]: Refreshing state... [id=/apis/apiextensions.k8s.io/v1/customresourcedefinitions/servicemonitors.monitoring.coreos.com]
kubectl_manifest.kube_prometheus_stack_operator_crds["podmonitors.monitoring.coreos.com"]: Refreshing state... [id=/apis/apiextensions.k8s.io/v1/customresourcedefinitions/podmonitors.monitoring.coreos.com]
kubectl_manifest.kube_prometheus_stack_operator_crds["probes.monitoring.coreos.com"]: Refreshing state... [id=/apis/apiextensions.k8s.io/v1/customresourcedefinitions/probes.monitoring.coreos.com]
kubectl_manifest.kube_prometheus_stack_operator_crds["thanosrulers.monitoring.coreos.com"]: Refreshing state... [id=/apis/apiextensions.k8s.io/v1/customresourcedefinitions/thanosrulers.monitoring.coreos.com]
kubectl_manifest.kube_prometheus_stack_operator_crds["prometheusrules.monitoring.coreos.com"]: Refreshing state... [id=/apis/apiextensions.k8s.io/v1/customresourcedefinitions/prometheusrules.monitoring.coreos.com]
data.aws_secretsmanager_secret_version.infra: Reading...
module.aws_iam_kube_prometheus_stack_grafana[0].aws_iam_role_policy.this: Refreshing state... [id=withub-k8s-demo-use1-grafana20230901081015300600000003:withub-k8s-demo-use1-grafana20230901081027147800000006]
kubectl_manifest.kube_prometheus_stack_operator_crds["alertmanagerconfigs.monitoring.coreos.com"]: Refreshing state... [id=/apis/apiextensions.k8s.io/v1/customresourcedefinitions/alertmanagerconfigs.monitoring.coreos.com]
kubectl_manifest.kube_prometheus_stack_operator_crds["alertmanagers.monitoring.coreos.com"]: Refreshing state... [id=/apis/apiextensions.k8s.io/v1/customresourcedefinitions/alertmanagers.monitoring.coreos.com]
data.aws_secretsmanager_secret_version.infra: Read complete after 0s [id=arn:aws:secretsmanager:us-east-1:584058546818:secret:/withub-k8s-demo/infra/layer2-k8s-H7n6V3|AWSCURRENT]
module.aws_iam_autoscaler[0].aws_iam_role_policy.this: Refreshing state... [id=withub-k8s-demo-use1-autoscaler20230901081013171200000001:withub-k8s-demo-use1-autoscaler20230901081026581500000004]
module.aws_iam_aws_loadbalancer_controller[0].aws_iam_role_policy.this: Refreshing state... [id=withub-k8s-demo-use1-aws-lb-controller20230901081013670000000002:withub-k8s-demo-use1-aws-lb-controller20230901081026851500000005]
module.cluster_autoscaler_namespace[0].kubernetes_network_policy.this[3]: Refreshing state... [id=cluster-autoscaler/allow-egress]
module.cluster_autoscaler_namespace[0].kubernetes_network_policy.this[0]: Refreshing state... [id=cluster-autoscaler/default-deny]
module.cluster_autoscaler_namespace[0].kubernetes_network_policy.this[1]: Refreshing state... [id=cluster-autoscaler/allow-this-namespace]
module.cluster_autoscaler_namespace[0].kubernetes_network_policy.this[2]: Refreshing state... [id=cluster-autoscaler/allow-monitoring]
module.reloader_namespace[0].kubernetes_network_policy.this[2]: Refreshing state... [id=reloader/allow-egress]
module.cluster_autoscaler_namespace[0].kubernetes_limit_range.this[0]: Refreshing state... [id=cluster-autoscaler/cluster-autoscaler]
module.reloader_namespace[0].kubernetes_network_policy.this[0]: Refreshing state... [id=reloader/default-deny]
module.reloader_namespace[0].kubernetes_network_policy.this[1]: Refreshing state... [id=reloader/allow-this-namespace]
module.reloader_namespace[0].kubernetes_limit_range.this[0]: Refreshing state... [id=reloader/reloader]
module.aws_load_balancer_controller_namespace[0].kubernetes_limit_range.this[0]: Refreshing state... [id=aws-load-balancer-controller/aws-load-balancer-controller]
module.aws_load_balancer_controller_namespace[0].kubernetes_network_policy.this[0]: Refreshing state... [id=aws-load-balancer-controller/default-deny]
module.aws_load_balancer_controller_namespace[0].kubernetes_network_policy.this[1]: Refreshing state... [id=aws-load-balancer-controller/allow-this-namespace]
module.aws_load_balancer_controller_namespace[0].kubernetes_network_policy.this[2]: Refreshing state... [id=aws-load-balancer-controller/allow-control-plane]
module.aws_load_balancer_controller_namespace[0].kubernetes_network_policy.this[3]: Refreshing state... [id=aws-load-balancer-controller/allow-egress]
module.ingress_nginx_namespace[0].kubernetes_network_policy.this[3]: Refreshing state... [id=ingress-nginx/allow-control-plane]
module.ingress_nginx_namespace[0].kubernetes_network_policy.this[0]: Refreshing state... [id=ingress-nginx/default-deny]
module.ingress_nginx_namespace[0].kubernetes_network_policy.this[1]: Refreshing state... [id=ingress-nginx/allow-this-namespace]
module.ingress_nginx_namespace[0].kubernetes_network_policy.this[4]: Refreshing state... [id=ingress-nginx/allow-monitoring]
module.ingress_nginx_namespace[0].kubernetes_network_policy.this[2]: Refreshing state... [id=ingress-nginx/allow-ingress]
module.ingress_nginx_namespace[0].kubernetes_network_policy.this[5]: Refreshing state... [id=ingress-nginx/allow-egress]
module.ingress_nginx_namespace[0].kubernetes_limit_range.this[0]: Refreshing state... [id=ingress-nginx/ingress-nginx]
module.fargate_namespace.kubernetes_limit_range.this[0]: Refreshing state... [id=fargate/fargate]
module.external_secrets_namespace[0].kubernetes_limit_range.this[0]: Refreshing state... [id=external-secrets/external-secrets]
module.external_secrets_namespace[0].kubernetes_network_policy.this[0]: Refreshing state... [id=external-secrets/default-deny]
module.external_secrets_namespace[0].kubernetes_network_policy.this[1]: Refreshing state... [id=external-secrets/allow-this-namespace]
module.external_secrets_namespace[0].kubernetes_network_policy.this[3]: Refreshing state... [id=external-secrets/allow-egress]
module.external_secrets_namespace[0].kubernetes_network_policy.this[2]: Refreshing state... [id=external-secrets/allow-webhooks]
module.kube_prometheus_stack_namespace[0].kubernetes_network_policy.this[1]: Refreshing state... [id=monitoring/allow-this-namespace]
module.kube_prometheus_stack_namespace[0].kubernetes_network_policy.this[2]: Refreshing state... [id=monitoring/allow-ingress]
module.kube_prometheus_stack_namespace[0].kubernetes_network_policy.this[3]: Refreshing state... [id=monitoring/allow-control-plane]
module.kube_prometheus_stack_namespace[0].kubernetes_network_policy.this[4]: Refreshing state... [id=monitoring/allow-egress]
module.kube_prometheus_stack_namespace[0].kubernetes_network_policy.this[0]: Refreshing state... [id=monitoring/default-deny]
module.kube_prometheus_stack_namespace[0].kubernetes_limit_range.this[0]: Refreshing state... [id=monitoring/monitoring]
module.aws_node_termination_handler_namespace[0].kubernetes_limit_range.this[0]: Refreshing state... [id=aws-node-termination-handler/aws-node-termination-handler]
module.aws_node_termination_handler_namespace[0].kubernetes_network_policy.this[2]: Refreshing state... [id=aws-node-termination-handler/allow-egress]
module.aws_node_termination_handler_namespace[0].kubernetes_network_policy.this[0]: Refreshing state... [id=aws-node-termination-handler/default-deny]
module.aws_node_termination_handler_namespace[0].kubernetes_network_policy.this[1]: Refreshing state... [id=aws-node-termination-handler/allow-this-namespace]
helm_release.cluster_autoscaler[0]: Refreshing state... [id=cluster-autoscaler]
helm_release.reloader[0]: Refreshing state... [id=reloader]
tls_cert_request.aws_loadbalancer_controller_webhook[0]: Refreshing state... [id=9f5c9940242c1160c0cb1e283d2cd1453e92a774]
helm_release.ingress_nginx[0]: Refreshing state... [id=ingress-nginx]
helm_release.external_secrets[0]: Refreshing state... [id=external-secrets]
tls_locally_signed_cert.aws_loadbalancer_controller_webhook[0]: Refreshing state... [id=79331537395011851649172978156419928777]
helm_release.aws_loadbalancer_controller[0]: Refreshing state... [id=aws-load-balancer-controller]
helm_release.prometheus_operator[0]: Refreshing state... [id=kube-prometheus-stack]
helm_release.aws_node_termination_handler[0]: Refreshing state... [id=aws-node-termination-handler]
kubernetes_ingress_v1.default[0]: Refreshing state... [id=ingress-nginx/ingress-nginx-controller]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:

  + create
-/+ destroy and then create replacement

Terraform planned the following actions, but then encountered a problem:

  # aws_route53_record.default_ingress[0] will be created
  + resource "aws_route53_record" "default_ingress" {
      + allow_overwrite = (known after apply)
      + fqdn            = (known after apply)
      + id              = (known after apply)
      + name            = "*.withub-cloud.com"
      + records         = (known after apply)
      + ttl             = 360
      + type            = "CNAME"
      + zone_id         = "Z04610372HQ6UVHRJFK7T"
    }

  # kubernetes_ingress_v1.default[0] is tainted, so must be replaced
-/+ resource "kubernetes_ingress_v1" "default" {
      ~ id     = "ingress-nginx/ingress-nginx-controller" -> (known after apply)
      ~ status = [
          - {
              - load_balancer = [
                  - {
                      - ingress = []
                    },
                ]
            },
        ] -> (known after apply)
        # (1 unchanged attribute hidden)

  ~ metadata {
      ~ generation       = 1 -> (known after apply)
      - labels           = {} -> null
        name             = "ingress-nginx-controller"
      ~ resource_version = "878336" -> (known after apply)
      ~ uid              = "3665f7e7-8424-4795-864f-ff0eb927be01" -> (known after apply)
        # (2 unchanged attributes hidden)
    }

  ~ spec {
      + ingress_class_name = (known after apply)

        # (1 unchanged block hidden)
    }
}

Plan: 2 to add, 0 to change, 1 to destroy.

│ Error: error running dry run for a diff: another operation (install/upgrade/rollback) is in progress

│ with helm_release.prometheus_operator[0],
│ on eks-kube-prometheus-stack.tf line 509, in resource "helm_release" "prometheus_operator":
│ 509: resource "helm_release" "prometheus_operator" {


ERRO[0142] Terraform invocation failed in /Users/mcstack88/Desktop/withub.nosync/terraform-k8s/terragrunt/demo/us-east-1/k8s-addons/.terragrunt-cache/6HZ7k4s-6z_huPJiLJAHl27bkk0/CbQdP3lVuAib8elqdxtBj0ChKcA/layer2-k8s prefix=[/Users/mcstack88/Desktop/withub.nosync/terraform-k8s/terragrunt/demo/us-east-1/k8s-addons]
ERRO[0142] 1 error occurred:
* exit status 1

(base) mcstack88@192 k8s-addons %
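
The second failure ("another operation (install/upgrade/rollback) is in progress") is usually fallout from the interrupted apply: Helm left the kube-prometheus-stack release in a pending state, and the provider's dry run will not proceed until that release is rolled back or otherwise cleared. If the underlying cause is the short-lived EKS token, one common mitigation is exec-based provider credentials, which request a fresh token whenever the provider needs one instead of reusing a single token for the whole run. A hedged sketch only, reusing the assumed data source names from the earlier snippet (the helm provider accepts the same exec block inside its kubernetes settings):

```hcl
# Sketch of exec-based EKS credentials (an assumption, not the repo's current code).
provider "kubernetes" {
  host                   = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)

  # A new token is requested on demand, so it cannot expire mid-apply.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.main.name]
  }
}
```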
