
Custom: deploy prometheus-k8s error: error converting YAML to JSON: yaml: line 21 #130

Open
249043822 opened this issue May 27, 2019 · 3 comments

Comments

@249043822

My OS is CentOS 7.

```
[root@k8s-node3 prometheus-kubernetes-2.8.0]# ./deploy
Check for uncommitted changes
fatal: Not a git repository (or any of the parent directories): .git
OK! No uncommitted changes detected
Enter desired namespace to deploy prometheus [monitoring]:
Creating monitoring namespace.
Error from server (AlreadyExists): namespaces "monitoring" already exists
1. AWS
2. GCP
3. Azure
4. Custom
Please select your cloud provider:4
Deploying on custom providers without persistence
Setting components version
Enter Prometheus Operator version [v0.29.0]:
Enter Prometheus version [v2.8.1]:
Enter Prometheus storage retention period in hours [168h]:
Enter Prometheus storage volume size [40Gi]:
Enter Prometheus memory request in Gi or Mi [1Gi]:
Enter Grafana version [6.0.2]:
Enter Alert Manager version [v0.16.1]:
Enter Node Exporter version [v0.17.0]:
Enter Kube State Metrics version [v1.5.0]:
Enter Prometheus external Url [http://127.0.0.1:9090]:
Enter Alertmanager external Url [http://127.0.0.1:9093]:
Do you want to use NodeSelector to assign monitoring components on dedicated nodes?
Y/N [N]:
Do you want to set up an SMTP relay?
Y/N [N]:
Do you want to set up slack alerts?
Y/N [N]:
Removing all the sed generated files

Deploying Prometheus Operator
serviceaccount "prometheus-operator" unchanged
clusterrole.rbac.authorization.k8s.io "prometheus-operator" configured
clusterrolebinding.rbac.authorization.k8s.io "prometheus-operator" configured
service "prometheus-operator" unchanged
deployment.apps "prometheus-operator" configured
Waiting for Operator to register custom resource definitions...
done!

Deploying Alertmanager
secret "alertmanager-main" unchanged
service "alertmanager-main" unchanged
alertmanager.monitoring.coreos.com "main" configured

Deploying node-exporter
daemonset.extensions "node-exporter" unchanged
service "node-exporter" unchanged

Deploying Kube State Metrics exporter
serviceaccount "kube-state-metrics" unchanged
clusterrole.rbac.authorization.k8s.io "kube-state-metrics" configured
clusterrolebinding.rbac.authorization.k8s.io "kube-state-metrics" configured
role.rbac.authorization.k8s.io "kube-state-metrics-resizer" unchanged
rolebinding.rbac.authorization.k8s.io "kube-state-metrics" unchanged
deployment.apps "kube-state-metrics" configured
service "kube-state-metrics" unchanged

Deploying Grafana
configmap "grafana-dashboards" unchanged
configmap "grafana-dashboard-k8s-cluster-rsrc-use" unchanged
configmap "grafana-dashboard-k8s-node-rsrc-use" unchanged
configmap "grafana-dashboard-k8s-resources-cluster" unchanged
configmap "grafana-dashboard-k8s-resources-namespace" unchanged
configmap "grafana-dashboard-k8s-resources-pod" unchanged
configmap "grafana-dashboard-nodes" unchanged
configmap "grafana-dashboard-pods" unchanged
configmap "grafana-dashboard-statefulset" unchanged
configmap "grafana-dashboard-deployments" unchanged
configmap "grafana-dashboard-k8s-cluster-usage" unchanged
configmap "grafana-datasources" unchanged
deployment.apps "grafana" configured
serviceaccount "grafana" unchanged
service "grafana" unchanged

Grafana default credentials
user: admin, password: admin

Deploying Prometheus
serviceaccount "prometheus-k8s" unchanged
role.rbac.authorization.k8s.io "prometheus-k8s" unchanged
role.rbac.authorization.k8s.io "prometheus-k8s" unchanged
role.rbac.authorization.k8s.io "prometheus-k8s" unchanged
clusterrole.rbac.authorization.k8s.io "prometheus-k8s" configured
rolebinding.rbac.authorization.k8s.io "prometheus-k8s" unchanged
rolebinding.rbac.authorization.k8s.io "prometheus-k8s" unchanged
rolebinding.rbac.authorization.k8s.io "prometheus-k8s" unchanged
clusterrolebinding.rbac.authorization.k8s.io "prometheus-k8s" configured
servicemonitor.monitoring.coreos.com "kube-apiserver" configured
servicemonitor.monitoring.coreos.com "kube-controller-manager" configured
servicemonitor.monitoring.coreos.com "kube-scheduler" configured
prometheusrule.monitoring.coreos.com "prometheus-k8s-rules" configured
servicemonitor.monitoring.coreos.com "alertmanager" configured
servicemonitor.monitoring.coreos.com "kube-dns" configured
servicemonitor.monitoring.coreos.com "kube-state-metrics" configured
servicemonitor.monitoring.coreos.com "kubelet" configured
servicemonitor.monitoring.coreos.com "node-exporter" configured
servicemonitor.monitoring.coreos.com "prometheus-operator" configured
servicemonitor.monitoring.coreos.com "prometheus" configured
service "prometheus-k8s" unchanged
error: error converting YAML to JSON: yaml: line 21: mapping values are not allowed in this context

Skipping rules for self hosted clusters

Removing local changes
fatal: Not a git repository (or any of the parent directories): .git

Done
```

When I ran `sed -i -e '1,8d;32,45d' manifests/prometheus/prometheus-k8s.yaml` manually, the file was changed to the following:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  labels:
    prometheus: k8s
spec:
  replicas: 2
  version: PROMETHEUS_VERSION
  externalUrl: PROMETHEUS_EXTERNAL_URL
  serviceAccountName: prometheus-k8s
  serviceMonitorSelector:
    matchExpressions:
    - {key: k8s-app, operator: Exists}
  ruleSelector:
    matchLabels:
      role: alert-rules
      prometheus: k8s
  nodeSelector:
    node_label_key: node_label_value
  resources:
    requests:
      memory: PROMETHEUS_MEMORY_REQUEST
  storageClassName: STORAGE_CLASS_TYPE
  resources:
    requests:
      storage: PROMETHEUS_STORAGE_VOLUME_SIZE
  alerting:
    alertmanagers:
    - namespace: CUSTOM_NAMESPACE
      name: alertmanager-main
      port: web
```

Should the sed expression `1,8d;32,45d` be changed to `1,8d;32,49d`?
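For reference, sed's `M,Nd` address deletes lines M through N inclusive, so widening the second range to 49 removes four extra lines of the leftover storage stanza. A minimal sketch of the range semantics on a stand-in file (line numbers instead of the real manifest):

```shell
# sed's `M,Nd` deletes lines M..N inclusive; `1,8d;32,49d` therefore drops
# lines 1-8 and 32-49 in a single pass.
f=$(mktemp)
seq 1 50 > "$f"            # stand-in for manifests/prometheus/prometheus-k8s.yaml
sed -e '1,8d;32,49d' "$f"  # prints lines 9-31 and line 50
rm -f "$f"
```

The same expression with `-i` edits the manifest in place, as the deploy script does.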

@jxsrlsl1234

You are right: you should change `1,8d;32,45d` to `1,8d;32,49d` when you choose Custom as the cloud provider (for example, on minikube).

@KevinDavidMitnick

How did you solve this problem? I ran into it too, after selecting the Custom storage option.

@jxsrlsl1234

jxsrlsl1234 commented Aug 27, 2019 via email
