unable to retrieve the complete list of server APIs #6361

Closed · planetf1 opened this issue Sep 5, 2019 · 63 comments · Fixed by #6908
Labels: bug (Categorizes issue or PR as related to a bug), v3.x (Issues and Pull Requests related to the major version v3)


planetf1 commented Sep 5, 2019

Output of helm version:
version.BuildInfo{Version:"v3.0+unreleased", GitCommit:"180db556aaf45f34516f8ddb9ddac28d71736a3e", GitTreeState:"clean", GoVersion:"go1.13"}

Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T12:36:28Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3+IKS", GitCommit:"66a72e7aa8fd2dbf64af493f50f943d7f7067916", GitTreeState:"clean", BuildDate:"2019-08-23T08:07:38Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.):
IBM Cloud

Helm chart deployment fails with:

➜  charts git:(h2update2) helm install vdc -f ~/etc/cloud-noes.yaml vdc
coalesce.go:155: warning: skipped value for image: Not a table.
Error: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: custom.metrics.k8s.io/v1beta1: the server is currently unable to handle the request

(The first message is a warning from a Confluent chart; the second, the Error line, is the issue I discuss here.)

Looking at the error I see a similar problem with

➜  charts git:(h2update2) kubectl api-resources
NAME                              SHORTNAMES      APIGROUP                           NAMESPACED   KIND
bindings                                                                             true         Binding
componentstatuses                 cs                                                 false        ComponentStatus
configmaps                        cm                                                 true         ConfigMap
endpoints                         ep                                                 true         Endpoints
events                            ev                                                 true         Event
limitranges                       limits                                             true         LimitRange
namespaces                        ns                                                 false        Namespace
nodes                             no                                                 false        Node
persistentvolumeclaims            pvc                                                true         PersistentVolumeClaim
persistentvolumes                 pv                                                 false        PersistentVolume
pods                              po                                                 true         Pod
podtemplates                                                                         true         PodTemplate
replicationcontrollers            rc                                                 true         ReplicationController
resourcequotas                    quota                                              true         ResourceQuota
secrets                                                                              true         Secret
serviceaccounts                   sa                                                 true         ServiceAccount
services                          svc                                                true         Service
mutatingwebhookconfigurations                     admissionregistration.k8s.io       false        MutatingWebhookConfiguration
validatingwebhookconfigurations                   admissionregistration.k8s.io       false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds        apiextensions.k8s.io               false        CustomResourceDefinition
apiservices                                       apiregistration.k8s.io             false        APIService
controllerrevisions                               apps                               true         ControllerRevision
daemonsets                        ds              apps                               true         DaemonSet
deployments                       deploy          apps                               true         Deployment
replicasets                       rs              apps                               true         ReplicaSet
statefulsets                      sts             apps                               true         StatefulSet
meshpolicies                                      authentication.istio.io            false        MeshPolicy
policies                                          authentication.istio.io            true         Policy
tokenreviews                                      authentication.k8s.io              false        TokenReview
localsubjectaccessreviews                         authorization.k8s.io               true         LocalSubjectAccessReview
selfsubjectaccessreviews                          authorization.k8s.io               false        SelfSubjectAccessReview
selfsubjectrulesreviews                           authorization.k8s.io               false        SelfSubjectRulesReview
subjectaccessreviews                              authorization.k8s.io               false        SubjectAccessReview
horizontalpodautoscalers          hpa             autoscaling                        true         HorizontalPodAutoscaler
metrics                                           autoscaling.internal.knative.dev   true         Metric
podautoscalers                    kpa,pa          autoscaling.internal.knative.dev   true         PodAutoscaler
cronjobs                          cj              batch                              true         CronJob
jobs                                              batch                              true         Job
images                            img             caching.internal.knative.dev       true         Image
certificatesigningrequests        csr             certificates.k8s.io                false        CertificateSigningRequest
certificates                      cert,certs      certmanager.k8s.io                 true         Certificate
challenges                                        certmanager.k8s.io                 true         Challenge
clusterissuers                                    certmanager.k8s.io                 false        ClusterIssuer
issuers                                           certmanager.k8s.io                 true         Issuer
orders                                            certmanager.k8s.io                 true         Order
adapters                                          config.istio.io                    true         adapter
attributemanifests                                config.istio.io                    true         attributemanifest
handlers                                          config.istio.io                    true         handler
httpapispecbindings                               config.istio.io                    true         HTTPAPISpecBinding
httpapispecs                                      config.istio.io                    true         HTTPAPISpec
instances                                         config.istio.io                    true         instance
quotaspecbindings                                 config.istio.io                    true         QuotaSpecBinding
quotaspecs                                        config.istio.io                    true         QuotaSpec
rules                                             config.istio.io                    true         rule
templates                                         config.istio.io                    true         template
leases                                            coordination.k8s.io                true         Lease
brokers                                           eventing.knative.dev               true         Broker
channels                          chan            eventing.knative.dev               true         Channel
clusterchannelprovisioners        ccp             eventing.knative.dev               false        ClusterChannelProvisioner
eventtypes                                        eventing.knative.dev               true         EventType
subscriptions                     sub             eventing.knative.dev               true         Subscription
triggers                                          eventing.knative.dev               true         Trigger
events                            ev              events.k8s.io                      true         Event
daemonsets                        ds              extensions                         true         DaemonSet
deployments                       deploy          extensions                         true         Deployment
ingresses                         ing             extensions                         true         Ingress
networkpolicies                   netpol          extensions                         true         NetworkPolicy
podsecuritypolicies               psp             extensions                         false        PodSecurityPolicy
replicasets                       rs              extensions                         true         ReplicaSet
channels                          ch              messaging.knative.dev              true         Channel
choices                                           messaging.knative.dev              true         Choice
inmemorychannels                  imc             messaging.knative.dev              true         InMemoryChannel
sequences                                         messaging.knative.dev              true         Sequence
nodes                                             metrics.k8s.io                     false        NodeMetrics
pods                                              metrics.k8s.io                     true         PodMetrics
certificates                      kcert           networking.internal.knative.dev    true         Certificate
clusteringresses                                  networking.internal.knative.dev    false        ClusterIngress
ingresses                         ing             networking.internal.knative.dev    true         Ingress
serverlessservices                sks             networking.internal.knative.dev    true         ServerlessService
destinationrules                  dr              networking.istio.io                true         DestinationRule
envoyfilters                                      networking.istio.io                true         EnvoyFilter
gateways                          gw              networking.istio.io                true         Gateway
serviceentries                    se              networking.istio.io                true         ServiceEntry
sidecars                                          networking.istio.io                true         Sidecar
virtualservices                   vs              networking.istio.io                true         VirtualService
ingresses                         ing             networking.k8s.io                  true         Ingress
networkpolicies                   netpol          networking.k8s.io                  true         NetworkPolicy
poddisruptionbudgets              pdb             policy                             true         PodDisruptionBudget
podsecuritypolicies               psp             policy                             false        PodSecurityPolicy
clusterrolebindings                               rbac.authorization.k8s.io          false        ClusterRoleBinding
clusterroles                                      rbac.authorization.k8s.io          false        ClusterRole
rolebindings                                      rbac.authorization.k8s.io          true         RoleBinding
roles                                             rbac.authorization.k8s.io          true         Role
authorizationpolicies                             rbac.istio.io                      true         AuthorizationPolicy
clusterrbacconfigs                                rbac.istio.io                      false        ClusterRbacConfig
rbacconfigs                                       rbac.istio.io                      true         RbacConfig
servicerolebindings                               rbac.istio.io                      true         ServiceRoleBinding
serviceroles                                      rbac.istio.io                      true         ServiceRole
priorityclasses                   pc              scheduling.k8s.io                  false        PriorityClass
configurations                    config,cfg      serving.knative.dev                true         Configuration
revisions                         rev             serving.knative.dev                true         Revision
routes                            rt              serving.knative.dev                true         Route
services                          kservice,ksvc   serving.knative.dev                true         Service
apiserversources                                  sources.eventing.knative.dev       true         ApiServerSource
awssqssources                                     sources.eventing.knative.dev       true         AwsSqsSource
containersources                                  sources.eventing.knative.dev       true         ContainerSource
cronjobsources                                    sources.eventing.knative.dev       true         CronJobSource
githubsources                                     sources.eventing.knative.dev       true         GitHubSource
kafkasources                                      sources.eventing.knative.dev       true         KafkaSource
csidrivers                                        storage.k8s.io                     false        CSIDriver
csinodes                                          storage.k8s.io                     false        CSINode
storageclasses                    sc              storage.k8s.io                     false        StorageClass
volumeattachments                                 storage.k8s.io                     false        VolumeAttachment
clustertasks                                      tekton.dev                         false        ClusterTask
pipelineresources                                 tekton.dev                         true         PipelineResource
pipelineruns                      pr,prs          tekton.dev                         true         PipelineRun
pipelines                                         tekton.dev                         true         Pipeline
taskruns                          tr,trs          tekton.dev                         true         TaskRun
tasks                                             tekton.dev                         true         Task
error: unable to retrieve the complete list of server APIs: custom.metrics.k8s.io/v1beta1: the server is currently unable to handle the request
➜  charts git:(h2update2)

Then, looking at 'action.go' in the source, I can see that if this API call fails we exit getCapabilities(). I understand why ... but is this failure too 'hard'? In the case above the failing API belonged to a fairly minor service.

This seems to have come up recently due to some changes on the k8s service with metrics.
I will pursue that separately... but I was after thoughts on how Helm handles this situation.
Also a heads up that Helm 3 may be broken on IKS - but I'm not knowledgeable enough to dig much further.

@bacongobbler added the v3.x and question/support labels on Sep 6, 2019

kalioz commented Sep 11, 2019

I have the same issue on AKS, though the error message is

Error: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request

My config:

  • kubectl version :

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

  • helm version: alpine/helm:3.0.0-beta.2 (docker)

  • kubectl api-resources

bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
events                            ev                                          true         Event
limitranges                       limits                                      true         LimitRange
namespaces                        ns                                          false        Namespace
nodes                             no                                          false        Node
persistentvolumeclaims            pvc                                         true         PersistentVolumeClaim
persistentvolumes                 pv                                          false        PersistentVolume
pods                              po                                          true         Pod
podtemplates                                                                  true         PodTemplate
replicationcontrollers            rc                                          true         ReplicationController
resourcequotas                    quota                                       true         ResourceQuota
secrets                                                                       true         Secret
serviceaccounts                   sa                                          true         ServiceAccount
services                          svc                                         true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io   false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io   false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io           false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io         false        APIService
controllerrevisions                            apps                           true         ControllerRevision
daemonsets                        ds           apps                           true         DaemonSet
deployments                       deploy       apps                           true         Deployment
replicasets                       rs           apps                           true         ReplicaSet
statefulsets                      sts          apps                           true         StatefulSet
tokenreviews                                   authentication.k8s.io          false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io           true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io           false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io           false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io           false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling                    true         HorizontalPodAutoscaler
cronjobs                          cj           batch                          true         CronJob
jobs                                           batch                          true         Job
certificatesigningrequests        csr          certificates.k8s.io            false        CertificateSigningRequest
leases                                         coordination.k8s.io            true         Lease
events                            ev           events.k8s.io                  true         Event
daemonsets                        ds           extensions                     true         DaemonSet
deployments                       deploy       extensions                     true         Deployment
ingresses                         ing          extensions                     true         Ingress
networkpolicies                   netpol       extensions                     true         NetworkPolicy
podsecuritypolicies               psp          extensions                     false        PodSecurityPolicy
replicasets                       rs           extensions                     true         ReplicaSet
ingresses                         ing          networking.k8s.io              true         Ingress
networkpolicies                   netpol       networking.k8s.io              true         NetworkPolicy
runtimeclasses                                 node.k8s.io                    false        RuntimeClass
poddisruptionbudgets              pdb          policy                         true         PodDisruptionBudget
podsecuritypolicies               psp          policy                         false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io      false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io      false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io      true         RoleBinding
roles                                          rbac.authorization.k8s.io      true         Role
priorityclasses                   pc           scheduling.k8s.io              false        PriorityClass
csidrivers                                     storage.k8s.io                 false        CSIDriver
csinodes                                       storage.k8s.io                 false        CSINode
storageclasses                    sc           storage.k8s.io                 false        StorageClass
volumeattachments                              storage.k8s.io                 false        VolumeAttachment
error: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request

@planetf1 (Author)

I believe this issue started recently in my case... it seems to be related to having Knative installed (on IBM Cloud IKS this is a managed option). I've uninstalled Knative and am OK for now, but there could be an interop issue here.

@kalioz out of interest, are you using Knative on AKS? It looks like you aren't, actually, since I can't see the Tekton objects.

@EmperorArthur

I have just seen this issue myself. In my case it was cert-manager that triggered the problem. Still working on how to get it back to how it was.


kalioz commented Sep 12, 2019

@planetf1 I'm not using Knative (or at least I don't think I am), but the problem only exists on the new cluster I deployed for this test.
The differences between the working cluster and the non-working one are:

                          working     not-working
kube version              1.13.5      1.14.6
Azure AD authentication   disabled    enabled
RBAC                      disabled    enabled

So I have some major changes.

To me the problem is that helm3 crashes because it lacks access to some APIs, which are not even used by the chart I'm trying to deploy.

@rvairaashtak

I am using it on a k8s cluster, version 1.13.9; the same error comes up when deploying any stable chart.

helm version
version.BuildInfo{Version:"v3.0.0-beta.3", GitCommit:"5cb923eecbe80d1ad76399aee234717c11931d9a", GitTreeState:"clean", GoVersion:"go1.12.9"}

helm.go:81: [debug] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request.


kalioz commented Sep 22, 2019

After resolving the issue with the metrics pod (I can't remember exactly how I solved it; I think it had to do with hostNetwork, or simply restarting the associated pod), helm3 functions as expected.
So it might be a 'feature', as it forces you to keep the cluster in good health, but it will require someone to go into the cluster manually each time an API breaks (and it might prevent using helm3 to deploy the very pods that back those APIs).
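
For reference, a rough sketch of the pod-restart approach (the namespace and label below assume a stock metrics-server deployment, so adjust for your cluster):

kubectl -n kube-system delete pod -l k8s-app=metrics-server
kubectl get apiservice v1beta1.metrics.k8s.io    # should show AVAILABLE=True once the new pod is ready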

@EmperorArthur

It's really, really annoying for someone starting out with Kubernetes. I'm hand-rolling a solution for certificates using ACME, since I can't guarantee that cert-manager won't still be broken even after configuring it.

The really annoying part is that I can't just use helm to uninstall cert-manager and get back to where I was! Anything which allows a strongly recommended service to break it, and won't undo the change, is broken.

@brendandburns

For anyone who hits this, it's caused by api-services that no longer have backends running...

In my case it was KEDA, but there are a number of different services that install aggregated API servers.

To fix it:

kubectl get apiservice

Look for the ones where AVAILABLE is False

If you don't need those APIs any more, delete them:

kubectl delete apiservce <service-name>

Then Helm should work properly. I think improving the Helm error message for this case may be worthwhile...
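
As a rough illustration of the lookup step (a sketch only: the awk filter keys off the AVAILABLE column of kubectl get apiservice, and the apiservice name in the delete is just an example):

kubectl get apiservice | awk 'NR>1 && $3 != "True"'
kubectl delete apiservice v1beta1.custom.metrics.k8s.io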


planetf1 commented Oct 4, 2019

Thanks for the explanation - is there a way Helm could code around this too?

@technosophos (Member)

We think so, though we're still investigating. My first look suggests that this is just related to our usage of the Discovery API, which is used for the Capabilities object in template rendering. We might be able to trap this particular error and warn the user instead of failing.

@bacongobbler added the bug label and removed the question/support label on Oct 4, 2019

sjentzsch commented Oct 21, 2019

Same with 2.15.0 now:

Error: Could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request

This is pretty annoying. Warning instead of failing would be much better indeed.
Any updates on this so far?

EDIT: can someone confirm 2.15 is also affected? Then I would suggest adjusting the labels of this ticket.


kennyqn commented Oct 28, 2019

@sjentzsch I am also seeing the same using Helm 2.15.0 and k8s 1.16.0.

@EmperorArthur

If this does also affect 2.x then everyone using "cert-manager" (possibly only pre-configuration) is going to have a bad time.


hayorov commented Oct 29, 2019

Here we have two different cases with the same behavior on the Helm side.
Both 2.15.1 and the 3 beta versions are affected.

As @technosophos mentioned, Helm uses the discovery API and fails if any of the API responses fail:

helm/pkg/action/action.go

Lines 105 to 118 in f1dc847

	}
	// force a discovery cache invalidation to always fetch the latest server version/capabilities.
	dc.Invalidate()
	kubeVersion, err := dc.ServerVersion()
	if err != nil {
		return nil, errors.Wrap(err, "could not get server version from Kubernetes")
	}
	apiVersions, err := GetVersionSet(dc)
	if err != nil {
		return nil, errors.Wrap(err, "could not get apiVersions from Kubernetes")
	}
	c.Capabilities = &chartutil.Capabilities{
		APIVersions: apiVersions,

  1. cert-manager's admission.certmanager.k8s.io/v1beta1 is a good example:
kubectl get apiservice | grep certmanager
v1beta1.admission.certmanager.k8s.io   service/cert-manager-webhook   False (ServiceNotFound)   111d

and for this case you can easily fix it by kubectl delete apiservice v1beta1.admission.certmanager.k8s.io
as @brendandburns described.

  2. Another case is failure when Helm cannot retrieve the response from one of the API services,
    e.g. "metrics.k8s.io/v1beta1: the server is currently unable to handle the request", and it happens from time to time.

Currently it's alive and running, but it happened to be down during Helm's request.

⇒  k get apiservice | grep metrics
v1beta1.metrics.k8s.io                 kube-system/metrics-server     True        1y

I'm sure that Helm must be more robust against this type of issue:

  1. maybe it's a good idea to convert the error to a warning (I don't know how the info from the API service is used during template rendering)
  2. implement retries for this type of request

@dmitry-irtegov

We have a similar issue with 2.15.1 on Kubernetes 1.15.5, but NOT with helm 2.14.3.

The issue is intermittent: some charts install OK, but then they begin to fail.
Our message is:

Error: Could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request: exit status 1

kubectl get apiservice lists metrics.k8s.io/v1beta1 as available. Maybe we have a transient issue with this service, but helm 2.14.3 on a mostly identical cluster works reliably.
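
One hedged way to check whether the API service is flapping rather than steadily available is to poll its Available condition for a minute or so, e.g.:

for i in $(seq 1 60); do kubectl get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'; echo; sleep 1; done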

unguiculus (Member) commented Nov 3, 2019

We hit this issue when trying to upgrade to Helm 2.15.2 on the charts CI cluster. So, it's not only a Helm 3 issue. Deleting the missing API service fixed it. I wonder if Helm could be more graceful here, especially since this could probably pop up again any time.


NeilW commented Nov 4, 2019

Hit a similar problem installing the stable/metrics-server chart on a kubeadm-installed cluster.

When you attempt to uninstall the chart, the uninstall fails with an api-server error (because metrics server is fubar), and that leaves a load of dangling resources lying around that you have to clean up by hand - since helm has removed the release from its database anyway.

$ helm version
version.BuildInfo{Version:"v3.0.0-rc.2", GitCommit:"82ea5aa774661cc6557cb57293571d06f94aff0c", GitTreeState:"clean", GoVersion:"go1.13.3"}


jglick commented Nov 6, 2019

Started hitting this recently in freshly created GKE clusters, using 2.15.1 (might have upgraded recently via Snap). Also reported as kubernetes/kubernetes#72051 (comment). I seem to be able to work around it by preceding every helm install command with:

kubectl --namespace=kube-system wait --for=condition=Available --timeout=5m apiservices/v1beta1.metrics.k8s.io
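
A broader variant of the same idea (an untested sketch) waits on every registered APIService rather than just the metrics one; note it will block for the full timeout if any backend is permanently dead:

kubectl wait --for=condition=Available --timeout=5m apiservice --all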

@technosophos (Member)

@jglick In your case is it happening only when the cluster is first created?

The problem is deep down in the Kubernetes Go discovery client. I am experimenting with just printing a warning. However, that could have negative consequences for charts that heavily rely on the Capabilities object.

technosophos added a commit to technosophos/k8s-helm that referenced this issue Nov 7, 2019
This blocks a particular error (caused by upstream discovery client),
printing a warning instead of failing. It's not a great solution, but is
a stop-gap until Client-Go gets fixed.

Closes helm#6361

Signed-off-by: Matt Butcher <matt.butcher@microsoft.com>

nodox commented Jun 19, 2020

Can confirm I'm having this issue as well. Hoping for a fix.

@Sanket110297

Solution:

The steps I followed are:

  1. kubectl get apiservices: if the metrics-server service is down with the error CrashLoopBackOff, try to follow step 2; otherwise just restart the metrics-server API service using kubectl delete apiservice/"service_name". For me it was v1beta1.metrics.k8s.io.

  2. kubectl get pods -n kube-system: I found out that pods like metrics-server and kubernetes-dashboard were down because the main CoreDNS pod was down.

For me it was:

NAME                          READY   STATUS             RESTARTS   AGE
pod/coredns-85577b65b-zj2x2   0/1     CrashLoopBackOff   7          13m
  3. Use kubectl describe pod/"pod_name" to check the error in the CoreDNS pod. If it is down because of /etc/coredns/Corefile:10 - Error during parsing: Unknown directive proxy, then we need to use forward instead of proxy in the YAML file where the CoreDNS config lives, because CoreDNS 1.5.x (used by the image) no longer supports the proxy keyword. (See the commands sketched below.)

https://stackoverflow.com/questions/62442679/could-not-get-apiversions-from-kubernetes-unable-to-retrieve-the-complete-list
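
A sketch of the edit described in step 3, assuming the stock coredns ConfigMap and Deployment names in kube-system (adjust if your distribution names them differently):

kubectl -n kube-system get configmap coredns -o yaml | grep -n proxy
kubectl -n kube-system edit configmap coredns        # change "proxy . /etc/resolv.conf" to "forward . /etc/resolv.conf"
kubectl -n kube-system rollout restart deployment coredns    # or delete the coredns pods if your kubectl lacks "rollout restart"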

@marcelloromani

@brendandburns Glad to have found your answer after a few hours of googling! :-D Too bad this is not StackOverflow, you'd deserve quite a few upvotes ;-)

@pcgeek86

On Amazon EKS I had to uninstall their metrics server. That cleaned up the error.

kubectl delete --filename https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Use this command to verify that you don't get any more errors.

kubectl api-resources

https://docs.aws.amazon.com/eks/latest/userguide/metrics-server.html

@rubenpetrosyan1

For anyone who hits this, it's caused by api-services that no longer have backends running...

In my case it was KEDA, but there are a number of different services that install aggregated API servers.

To fix it:

kubectl get apiservice

Look for the ones where AVAILABLE is False

If you don't need those APIs any more, delete them:

kubectl delete apiservce <service-name>

Then Helm should work properly. I think improving the Helm error message for this case may be worthwhile...

This helped a lot.
In my case there was a restarting pod causing the issue. I solved the issue with the pod and everything is back operational.
Thanks

gabe-l-hart added a commit to gabe-l-hart/operator-sdk that referenced this issue Dec 15, 2022
Similar to the fix in helm (helm/helm#6361), this
fix allows GroupDiscoveryFailedError to not error out the process of
managing apiserver resource types.

https://github.com/gabe-l-hart/operator-sdk/issues/5596

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

uqix commented Feb 2, 2023

For anyone who hits this, it's caused by api-services that no longer have backends running...

In my case it was KEDA, but there are a number of different services that install aggregated API servers.

To fix it:

kubectl get apiservice

Look for the ones where AVAILABLE is False

If you don't need those APIs any more, delete them:

kubectl delete apiservce <service-name>

Then Helm should work properly. I think improving the Helm error message for this case may be worthwhile...

There is a typo: apiservce.
