
KUBE:

webthing-iotjs was made to target MCUs, but it can also support other platforms such as GNU/Linux and its containers (e.g. Docker).

This page will explain how to create microservices and run them in a cluster.
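Before moving to Kubernetes, the published container image can be tried directly with Docker. A minimal sketch, assuming the rzrfreefr/webthing-iotjs:latest image exposes port 8888 as used throughout this page:

# Run the published image and query the Web Thing API
docker run -d --name webthing-iotjs -p 8888:8888 rzrfreefr/webthing-iotjs:latest
curl http://localhost:8888/
#| [{"name":"My Lamp", ...
docker rm -f webthing-iotjs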

There are various Kubernetes (K8s) distributions; a few are described below, from the simplest to the more hazardous environments.

MICROK8S:

Canonical provides the community with a simplified K8s snap package. Even if snap is part of Ubuntu, it is also supported by other GNU/Linux distros:

Note that MicroK8s was designed to build a single-node cluster (multi-node support is work in progress):

MICROK8S: DEPLOY AND TEST

project="webthing-iotjs"
name="$project"
org="rzrfreefr"
image="${org}/${project}:latest"
kubectl=microk8s.kubectl
port=8888

sudo apt-get install snapd curl
sudo snap install microk8s --classic --channel=1.14/stable

microk8s.status --wait-ready

${kubectl} cluster-info
#| Kubernetes master is running at https://127.0.0.1:16443

${kubectl} get services
#| kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   14h

microk8s.enable dns # Will restart kubelet

${kubectl} run "${project}" --image="${image}"

${kubectl} describe "deployment/${name}"

# Wait for the pod to be Running
time ${kubectl} get all --all-namespaces | grep "pod/$name"  | grep ' Running ' \
    || time ${kubectl} get all --all-namespaces 
    
pod=$(${kubectl} get all --all-namespaces \
  | grep -o "pod/${project}.*" | cut -d/ -f2 | awk '{ print $1}' \
  || echo failure) && echo "# pod=${pod}"

# Wait until ready
${kubectl} describe pod "$pod" | grep 'Status: * Running' \
   || ${kubectl} describe pod "$pod"
   
# Try the server (directly to the pod)
ip=$(${kubectl} describe pod "$pod" | grep 'IP:' | awk '{ print $2 }') && echo "# log: ip=${ip}"
url=http://$ip:${port}/

curl -i "${url}1/properties" \
  || curl -i "http://$ip:${port}"
#| {"level":42.8888}

# Remove service and uninstall
${kubectl} delete deployment/${name}
${kubectl} get all --all-namespaces | grep ${name}

microk8s.reset

sudo snap remove microk8s

OK, we have verified our base. The next step is to deploy a service.

MICROK8S: SERVICE

Reinstall MicroK8s:

name="webthing-iotjs"
specUrl="https://raw.githubusercontent.com/rzr/${name}/master/extra/tools/kube/$name.yml"
service_port=30080
# K8S env
unit=microk8s
export PATH="/snap/$unit/current/:/snap/bin/:$PATH"
kubectl="$unit.kubectl"

time ${kubectl} apply -f "${specUrl}" \
  || curl "${specUrl}"

#| deployment.extensions/webthing-iotjs created
#| service/webthing-iotjs created
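For reference, the spec fetched from ${specUrl} boils down to a Deployment plus a NodePort Service. A hand-written equivalent would look roughly like the sketch below (an illustration only, not the actual extra/tools/kube/webthing-iotjs.yml from the repository):

cat<<EOF | ${kubectl} apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${name}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ${name}
  template:
    metadata:
      labels:
        app: ${name}
    spec:
      containers:
      - name: ${name}
        image: rzrfreefr/webthing-iotjs:latest
        ports:
        - containerPort: 8888
---
apiVersion: v1
kind: Service
metadata:
  name: ${name}
spec:
  type: NodePort
  selector:
    app: ${name}
  ports:
  - port: 8888
    targetPort: 8888
    nodePort: 30080
EOF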

# Check ports: container port (docker) and kube port (public)
${kubectl} get svc ${name} 
#| NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
#| webthing-iotjs   NodePort   10.152.183.83   <none>        8888:30080/TCP   3s

service_port=$(${kubectl} get svc ${name} -o=jsonpath="{.spec.ports[?(@.port==$port)].nodePort}") && echo $service_port
service_url="http://127.0.0.1:${service_port}" && echo "# log: service_url=${service_url}"
#| log: service_url=http://127.0.0.1:30080

time ${kubectl} get all --all-namespaces | grep "pod/$name" # wait "Running" status
#| default     pod/webthing-iotjs-FFFFFFFFFF-FFFFF   1/1     Running   0          72s

curl -i ${service_url}/1/properties \
  || curl -i ${service_url}
#| {"level":42}

${kubectl} delete deployment ${name}
${kubectl} delete service/${name}

The next step is to set up an ingress and a public backend.

MICROK8S: LOCAL INGRESS

name="webthing-iotjs"
specUrl="https://raw.githubusercontent.com/rzr/${name}/master/extra/tools/kube/$name.yml"
port=8888
nodePort=32018
domain=localhost # May edit /etc/hosts
host=${name}.${domain}
url="http://${host}/"
# K8s env
unit=microk8s
export PATH="/snap/$unit/current/:/snap/bin/:$PATH"
kubectl="$unit.kubectl"

# Creating deployment then service from the YAML spec
curl "${specUrl}" | sed -e "s|nodePort: .*|nodePort: ${nodePort}|g" \
  | ${kubectl} apply -f -
  
microk8s.enable ingress
${kubectl} get deployment ${name} # Ready 1/1

${kubectl} delete ingress.extensions/${name} \
  || ${kubectl} get ing

cat<<EOF | ${kubectl} create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${name}
spec:
  rules:
  - host: ${host}
    http:
      paths:
      - backend:
          serviceName: ${name}
          servicePort: ${port}
EOF
ping -c1 ${host} || echo "127.0.0.1 $host" | sudo tee -a /etc/hosts

curl -kLi ${url}
#| [{"id":"urn:dev:ops:my-lamp-1234"

# Extra
microk8s.enable dns # Will restart kubelet
${kubectl} delete ingress.extensions/${name}

MICROK8S: PUBLIC INGRESS

name="webthing-iotjs"
nodePort=32018
# Common vars
specUrl="https://raw.githubusercontent.com/rzr/${name}/master/extra/tools/kube/$name.yml"
port=8888
topdomain="volatile.cloudns.org" # must be registered
host="${name}.${topdomain}" # must be registered
publicPort=80
url="http://${host}:${publicPort}"
ingress_name="${name}-ingress"
ingressObject=ingress.extensions/${ingress_name}
# K8s env
unit=microk8s
export PATH="/snap/$unit/current/:/snap/bin/:$PATH"
kubectl="$unit.kubectl"

${kubectl} version
${kubectl} get ingress.extensions

${kubectl} delete ${ingressObject}

${kubectl} get all --all-namespaces  | grep ${name} | awk '{ print $2 }' \
  | xargs -n 1 ${kubectl} delete \
  || ${kubectl} get all --all-namespaces
# wait Terminating pod:
${kubectl} get all --all-namespaces  | grep ${name}

# Creating deployment then service from the YAML spec
curl "${specUrl}" | sed -e "s|nodePort: .*|nodePort: ${nodePort}|g" \
  | ${kubectl} apply -f -
        
microk8s.enable ingress # TODO Adapt for other K8S
${kubectl} get deployment ${name} # Ready 1/1

${kubectl} delete ingress.extensions/${name} \
  || ${kubectl} get ingress.extensions

cat<<EOF | ${kubectl} apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${ingress_name}

spec:
  rules:
  - host: ${host}
    http:
      paths:
      - backend:
          serviceName: ${name}
          servicePort: ${port}
EOF
# wait from 404 to 503 to 200
ping -c 1 ${host} && curl -kLi "${url}" 
#| Server: nginx/1.15.10
#| [{"id":"urn:dev:ops:my-lamp-1234"

${kubectl} delete ingress.extensions/${name}

For multiple services it is the same: just update the variables and run the steps again, replacing:

name=webthing-go
nodePort=32019
    
name=iotjs-express
nodePort=32016

Each service will run on a separate hostname.
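Alternatively, a single Ingress object can fan out to several services by hostname. A sketch reusing the same extensions/v1beta1 schema as above; the "webthings" ingress name is arbitrary and both services are assumed to listen on port 8888:

cat<<EOF | ${kubectl} apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webthings
spec:
  rules:
  - host: webthing-iotjs.${topdomain}
    http:
      paths:
      - backend:
          serviceName: webthing-iotjs
          servicePort: 8888
  - host: iotjs-express.${topdomain}
    http:
      paths:
      - backend:
          serviceName: iotjs-express
          servicePort: 8888
EOF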

MICROK8S HELPER

From Dockerfile to cluster:

export PATH="${PATH}:/snap/bin"


git clone --depth 1 https://github.com/rzr/iotjs-express && cd iotjs-express

sudo systemctl stop apt-daily.timer
make -C extra/tools/kube delete setup user # v1.15.4
make -C extra/tools/kube user && sudo su -l ${USER} # If needed
make -C extra/tools/kube start status
make -C extra/tools/kube status client
#| curl -kLi http://iotjs-express.localhost/
#| HTTP/1.1 200 OK
#| Server: nginx/1.15.10
#| (...)
#| x-powered-by: iotjs-express
#| {}


domain="${HOSTNAME}"
curl -kLi http://iotjs-express.${domain}
#| curl: (6) Could not resolve host: iotjs-express.duo.local

make -C extra/tools/kube status domain="${domain}" ingress 
make -C extra/tools/kube status domain="${domain}" host status client
curl -kLi http://iotjs-express.${domain}
# x-powered-by: iotjs-express

domain="${HOSTNAME}.local"
make -C extra/tools/kube status domain="${domain}" ingress host host status client # OK
TODO: the client should wait for the ingress container to be ready ("ContainerCreating", then "Running"), as in the sketch below.
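One way to implement that wait (a sketch; on MicroK8s the nginx ingress controller pod runs in the "ingress" namespace):

# Poll until the ingress controller pod reports Running
until microk8s.kubectl get pods --namespace ingress | grep -q ' Running '; do
  echo "# Waiting for the ingress controller..." && sleep 5
done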

MICROK8S: HTTPS

Work in progress:

domain="example.tld.localhost" # TODO: replace with user's one
email="$USER@$domain" # TODO: replace with user's one
#
name=iotjs-express
spec_url=https://raw.githubusercontent.com/rzr/${name}/master/extra/tools/kube/${name}.yml
host=${name}.${domain}
port=8888
kubectl=/snap/bin/microk8s.kubectl
export PATH="/snap/bin:${PATH}"

sudo microk8s.reset # Will erase all env

${kubectl} get all
${kubectl} get all --all-namespaces
${kubectl} delete all
${kubectl} delete namespace cert-manager
# ${kubectl} delete apiservice --all
sudo reboot

microk8s.enable dns
microk8s.enable ingress

${kubectl} apply -f "${spec_url}"

ping -c 1 ${host} # Should resolve to public ip

${kubectl} delete ingress.networking.k8s.io/${name}
cat<<EOF | ${kubectl} apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ${name}
spec:
  rules:
  - host: ${host}
    http:
      paths:
      - backend:
          serviceName: ${name}
          servicePort: ${port}
EOF
${kubectl} get ingress.networking.k8s.io/${name} # ADDRESS
curl -ikL http://${host}/ # OK

${kubectl} get all # ContainerCreating , then Running

ping -c 1 ${domain} # TODO

# Install Cert Manager for LetsEncrypt
${kubectl} create namespace cert-manager
${kubectl} label namespace cert-manager certmanager.k8s.io/disable-validation=true --overwrite
${kubectl} apply -f https://github.com/jetstack/cert-manager/releases/download/v0.9.1/cert-manager.yaml
${kubectl} get pods --namespace cert-manager 

# Wait: Expected Status: ContainerCreating, Running (cainjector, webhook...)
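Instead of polling manually, kubectl can block until the cert-manager pods are Ready (a sketch):

${kubectl} wait --namespace cert-manager --for=condition=Ready pods --all --timeout=300s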

# Create staging https
${kubectl} delete clusterissuer letsencrypt-staging # ErrRegisterACMEAccount
cat<<EOF | ${kubectl} apply -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: ${email}
    privateKeySecretRef:
      name: letsencrypt-staging
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
${kubectl} describe clusterissuer letsencrypt-staging # ACMEAccountRegistered


${kubectl} delete certificate ${name}
cat<<EOF | ${kubectl} apply -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ${name}
spec:
  secretName: ${name}-crt
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
    namespace: cert-manager
  commonName: ${domain}
  dnsNames:
  - ${name}.${domain}
EOF
${kubectl} describe certificate ${name} # OrderCreated, wait CertIssued
${kubectl} get certificate # TODO READY

${kubectl} logs -n cert-manager deploy/cert-manager -f
#| I0903 16:48:26.160021       1 logger.go:58] Calling FinalizeOrder
#| I0903 16:48:27.029928       1 logger.go:43] Calling GetOrder
#| I0903 16:48:27.536371       1 conditions.go:143] Found status change for Certificate "iotjs-express" condition "Ready": "False" -> "True"; setting lastTransitionTime to 2019-09-03 16:48:27.536364705 +0000 UTC m=+780.621478774


${kubectl} get all

${kubectl} delete ingress.networking.k8s.io/${name}
cat<<EOF | ${kubectl} apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ${name}
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
  - hosts:
    - ${domain}
    secretName: letsencrypt-staging
  rules:
  - host: ${host}
    http:
      paths:
      - backend:
          serviceName: ${name}
          servicePort: ${port}
EOF
${kubectl} describe ingress.networking.k8s.io/${name} # TLS
#|   Normal  CreateCertificate  2s    cert-manager  Successfully created Certificate "letsencrypt-staging"

${kubectl} get all

curl -iL http://${host}/ # OK
#| HTTP/1.1 308 Permanent Redirect

curl -ikL https://${host}/ # OK

curl -i https://${host}/ # OK
#| curl: (60) SSL certificate problem: unable to get local issuer certificate
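That error is expected with the Let's Encrypt staging issuer, which is not trusted by curl or browsers. To check which certificate is actually being served (a sketch):

openssl s_client -connect ${host}:443 -servername ${host} </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates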

${kubectl} delete clusterissuer.certmanager.k8s.io/letsencrypt-prod
cat<<EOF | ${kubectl} apply -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    email: ${email}
    privateKeySecretRef:
      name: letsencrypt-prod
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
${kubectl} describe clusterissuer.certmanager.k8s.io/letsencrypt-prod

${kubectl} get secret 
# ${kubectl} describe secret letsencrypt-prod

${kubectl} delete certificate.certmanager.k8s.io/${name}
cat<<EOF | ${kubectl} apply -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ${name}
spec:
  secretName: ${name}-crt
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
    namespace: cert-manager
  commonName: ${domain}
  dnsNames:
  - ${name}.${domain}
EOF

${kubectl} get certificate ${name} # wait True
#| iotjs-express   True    iotjs-express-crt   69s

${kubectl} describe certificate ${name}
#|  Message:               Certificate is up to date and has not expired

${kubectl} describe challenges

${kubectl} logs -n cert-manager deploy/cert-manager -f
# ${kubectl} log -n cert-manager  pod/cert-manager-*

${kubectl} get ing

${kubectl} delete ingress.networking.k8s.io/${name}
cat<<EOF | ${kubectl} apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ${name}-prod
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - ${host}
    secretName: ${name}-prod
  rules:
  - host: ${host}
    http:
      paths:
      - backend:
          serviceName: ${name}
          servicePort: ${port}
EOF
${kubectl} describe ingress.networking.k8s.io/${name}

#     nginx.ingress.kubernetes.io/rewrite-target: /


${kubectl} describe ingress.networking.k8s.io/${name}

curl -i https://${host}/ 

curl -vLi  -H "Host: ${host}" https://localhost
#| * TLSv1.2 (OUT), TLS alert, unknown CA (560):
#| * SSL certificate problem: unable to get local issuer certificate

openssl s_client -connect $host:443 -showcerts
#| (...)
#| 0 s:O = Acme Co, CN = Kubernetes Ingress Controller Fake Certificate
#| (...)


${kubectl} get events
#| (...)
#| 52m         Normal    GenerateSelfSigned    certificate/iotjs-express   Generated temporary self signed certificate
#| (...)

${kubectl} get challenges --all-namespaces
#| No resources found.

${kubectl} get order
#| iotjs-express-786239982   pending   5m45s

${kubectl} describe order
#|  Warning  NoMatchingSolver  47s   cert-manager  Failed to create challenge for domain "example.tld": no configured challenge solvers can be used for this challenge

${kubectl} get certificates 
#| NAME            READY   SECRET              AGE
#| iotjs-express   False   iotjs-express-tls   7m6s

${kubectl} describe certificates 
#| Message:               Certificate issuance in progress. Temporary certificate issued.

${kubectl} logs -n cert-manager pod/cert-manager-78d45b9d8-m8rst


curl -i https://${host}/ #

${kubectl} get events

${kubectl} describe challenges

At this point Firefox reports: "Websites prove their identity via certificates. Firefox does not trust this site because it uses a certificate that is not valid for iotjs-express.example.tld. The certificate is only valid for ingress.local."

Error code: MOZILLA_PKIX_ERROR_SELF_SIGNED_CERT

${kubectl} delete clusterissuer/letsencrypt-staging # Type: Ready
 
cat<<EOF | ${kubectl} apply -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ${domain}
spec:
  secretName: ${domain}-crt
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
    namespace: cert-manager
  commonName: ${domain}
  dnsNames:
  - "*.${domain}"
EOF

${kubectl} describe certificate ${domain}

#| Status:                False
#| Type:                  Ready

${kubectl} get order
#| example.tld-324294416              pending   119s

${kubectl} get clusterissuer.certmanager.k8s.io/letsencrypt-prod

${kubectl} logs -n cert-manager deploy/cert-manager -f
#| I0903 14:09:13.202698       1 logger.go:93] Calling HTTP01ChallengeResponse
#| I0903 14:09:13.203411       1 logger.go:73] Calling GetAuthorization
#| Warning  Failed     50s   cert-manager  Accepting challenge authorization failed: acme: authorization for identifier example.tld.teuz.eu is invalid


MICROK8S TROUBLESHOOT:

export kubectl=microk8s.kubectl


make -C extra/tools/kube status
#| default       pod/iotjs-express-7c8fb4cfbf-td8cc            0/1     ImagePullBackOff    0          20m
make -C extra/tools/kube log/iotjs-express
#| Warning  Failed          1s                     kubelet, duo       Failed to pull image "tmp/iotjs-express:v0.0.11-3-ga399167": rpc error: code = Unknown desc = failed to resolve image "docker.io/tmp/iotjs-express:v0.0.11-3-ga399167": no available registry endpoint: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed

Ingress: if it does not work, make sure the node was not evicted:

make -C extra/tools/kube log/nginx-ingress
#| Warning  Evicted              5s (x3 over 85s)   kubelet, duo       The node was low on resource: ephemeral-storage.
file=/var/snap/microk8s/current/args/kubelet
grep eviction "$file"
sudo sed -b -e 's|1Gi|100Mi|g' -i "$file"
sudo systemctl restart snap.microk8s.daemon-containerd.service

Rebuilding the Docker image is optional; the latest published one can also be used. Here a public internet domain is also assigned (it must be registered beforehand):

export PATH="/snap/microk8s/current/:/snap/bin:${PATH}"
make -C extra/tools/kube delete apply enable ingress status proxy client \
 username=rzrfreefr \
 domain=volatile.${USER}.cloudns.org
make -C extra/tools/kube status
microk8s.ctr --namespace k8s.io image list | grep "tmp/iotjs-express:v0.0.11-3-ga399167"
#| ctr: failed to dial "/var/snap/microk8s/common/run/containerd.sock": context deadline exceeded
#| unpacking docker.io/rzr/iotjs-express:v0.0.11-3-ga399167 (sha256:22a3cda14fb4d35f2c6c70ccc778bc14cf015097d9f38b32de8b1909b0ac3e0c)...
#| Warning  FailedScheduling  <unknown>            default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
time ${kubectl} describe pod/iotjs-express-55ffd7c4d8-65qs2

If low on storage:

sudo journalctl --vacuum-size=100M
sudo rm -rfv /var/lib/snapd/cache
journalctl -xe
# Oct 11 10:23:15 duo microk8s.daemon-kubelet[6137]: I1011 10:23:15.461030    6137 image_gc_manager.go:300] [imageGCManager]: Disk usage on image filesystem is at 88% which is over the high threshold (85%). Trying to free 1124634624 bytes down to the low threshold (80%).

ISSUE: INGRESS SSE

microk8s.kubectl get pods --all-namespaces
# ingress       nginx-ingress-microk8s-controller-qnp45   0/1     CrashLoopBackOff   17         14h


microk8s.kubectl describe pod/nginx-ingress-microk8s-controller-qnp45  --namespace ingress
# Normal   Created         14h (x4 over 14h)     kubelet, duo       Created container nginx-ingress-microk8s
# Warning  Unhealthy       14h (x10 over 14h)    kubelet, duo       Liveness probe failed: HTTP probe failed with statuscode: 500
# Warning  BackOff         14h (x21 over 14h)    kubelet, duo       Back-off restarting failed container
# Normal   SandboxChanged  28m                   kubelet, duo       Pod sandbox changed, it will be killed and re-created.

pod=pod/nginx-ingress-microk8s-controller-qnp45 # TODO
microk8s.kubectl logs  $pod  --namespace ingress | less 
# E1011 08:19:05.189636       6 controller.go:145] Unexpected failure reloading the backend:
# -------------------------------------------------------------------------------
# Error: signal: illegal instruction (core dumped)
# (..)
# I1011 08:21:06.375221      11 nginx.go:417] Stopping NGINX process
# I1011 08:21:06.699118      11 main.go:158] Error during shutdown: signal: illegal instruction (core dumped)
# I1011 08:21:06.699598      11 main.go:162] Handled quit, awaiting Pod deletion

grep --color sse /proc/cpuinfo  # sse sse2 
dmesg
# [  839.958430] traps: nginx[15621] trap invalid opcode ip:7fbe63ac0cf2 sp:7ffff82c1ff0 error:0 in libluajit-5.1.so.2.1.0[7fbe63ac0000+73000]

Workaround:

make -C ~/local/tmp/iotjs-express/extra/tools/kube setup snap_install_args="--channel 1.15/stable"    
sudo snap info microk8s
# installed:        v1.15.4             (876) 171MB classic

MICROK8S: ISTIO (TODO)

microk8s.enable istio

MICROK8S: LINKERD (TODO)

DNS (WIP)

microk8s.status | grep dns
#| dns: disabled
microk8s.enable dns # Will restart kubelet

TODO: avoid defining each domain and only register a portal hostname.


K3S

K3s is another distribution, designed for edge devices. Make sure to uninstall other K8s distributions to avoid overlapping resources.

project="webthing-iotjs"
org="rzrfreefr"
image="${org}/${project}:latest"
kubectl="sudo kubectl"

curl -sfL https://get.k3s.io | sh -
sudo snap remove microk8s
sudo systemctl restart k3s.service
sudo systemctl status k3s.service

${kubectl} get nodes
#| ...   Ready    master   51s   v1.14.4-k3s.1

${kubectl} run "${project}" --image="${image}"

pod=$(${kubectl} get all --all-namespaces \
  | grep -o "pod/${project}.*" | cut -d/ -f2 | awk '{ print $1}' \
  || echo failure) && echo pod="$pod"
${kubectl} describe pod "$pod"  | grep 'Status:             Running' 
ip=$(${kubectl} describe pod "$pod" | grep 'IP:' | awk '{ print $2 }') && echo "ip=${ip}"

curl http://$ip:8888
#| [{"name":"My Lamp"," ...

sudo grep server /etc/rancher/k3s/k3s.yaml
#| server: https://localhost:6443

curl -k -i https://localhost:6443
#| HTTP/1.1 401 Unauthorized
#| Content-Type: application/json
#| Www-Authenticate: Basic realm="kubernetes-master"

# token=$(sudo cat /var/lib/rancher/k3s/server/node-token)
# curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=${token} sh -
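To avoid prefixing every command with sudo, the generated kubeconfig can be copied for the current user (a sketch; ~/.kube/k3s.yaml is an arbitrary location):

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml && sudo chown "$USER" ~/.kube/k3s.yaml
export KUBECONFIG="$HOME/.kube/k3s.yaml"
kubectl get nodes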

MINIKUBE

Another K8s system, designed for prototyping (WIP).

To get started, see the project's README.md, or try the webthing-go version as explained in its own documentation.

name=iotjs-express
url="https://raw.githubusercontent.com/rzr/${name}/master/extra/tools/kube/${name}.yml"

sudo snap remove kubeadm ||: 

dpkg -s minikube | grep 'Version: ' | cut -d' ' -f2
#| 1.5.0~beta.0

minikube version
#| minikube version: v1.5.0-beta.0
#| commit: c2a0ac0147abdb457dba3e3c454829ab1959b490

time minikube start || minikube logs --alsologtostderr
#| 🎉  minikube 1.5.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.5.1

#| (...)
#|     > minikube-v1.5.0-beta.0.iso: 124.25 MiB / 143.77 MiB  86.43% 9.20 MiB p/s 
#| (...)
#| 🔥  Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
#| (...)
#| 💾  Downloading kubelet v1.16.1
#| 💾  Downloading kubeadm v1.16.1
#| (...)
# 🏄  Done! kubectl is now configured to use "minikube"

kubectl=kubectl
kubectl version || kubectl="sudo kubectl" # Maybe you'd better check your env

$kubectl version
#| Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-17T17:16:09Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
#| Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T16:51:36Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

time $kubectl apply -f "${url}"
#| deployment.apps/iotjs-express created
#| service/iotjs-express created

$kubectl get services
#| iotjs-express   NodePort    10.111.152.126   <none>        8080:30253/TCP   35s

minikube service ${name} --url
#| http://192.168.99.105:30253

time minikube service ${name} # {}

# Try
# http://192.168.99.105:30253/.well-known/security.txt

minikube stop
    
minikube delete

KUBERNETES / KUBEADM

kubeadm is the tool for multi-node clustering.


minikube stop
minikube delete
sudo apt-get remove minikube 
sudo apt-get remove kubeadm
sudo apt-get remove kubectl

sudo /usr/bin/keadm init
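For the record, the usual kubeadm flow for a multi-node cluster looks roughly like this (a generic sketch, not specific to webthing-iotjs; the pod CIDR shown is the one commonly used with Flannel, and the join values are printed by kubeadm init):

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Then, on each worker node:
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>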

FAAS:

Example: making a proxy using the HTTP client API (on the Node.js runtime).

url="https://rzr.o6s.io/iotjs-express"
json_url="http://www.jsonstore.io/84382ea020cdf924ee3a030ba2182d91c7ed89ad130385b33bba52f99a04fd23"
hostname="www.jsonstore.io"
path="/84382ea020cdf924ee3a030ba2182d91c7ed89ad130385b33bba52f99a04fd23"
port="80"

echo "{ \"date\": \"$(LANG=C date -u)\"} " | curl -H "Content-Type: application/json" -X POST -d@- "$json_url"
#| {"ok":true}

curl "$json_url"
#| {"result":{"date":"Wed 16 Oct 2019 02:26:49 PM UTC"},

echo "{ \"hostname\": \"$hostname\", \"port\": $port , \"path\":\"$path\" }" \
  | curl -H "Content-Type: application/json" -X POST -d @- "$url"
#| {"result":{"date":"Wed 16 Oct 2019 02:26:49 PM UTC"},

Another, simpler example:

echo '{ "hostname": "ifconfig.io", "port": 80 , "path":"/all.json" }' \
      | time curl -H "Content-Type: application/json" -X POST -d @- \
      https://rzr.o6s.io/iotjs-express
#| 

OPENSHIFT:

Just create an application from the repository.

Administrator / Create Project: "webthing-iotjs"

Developer / Topology / From Git / URL: http://github.com/rzr/webthing-iotjs / Builder / Node / 10 / (default) / Create

Then adjust the port in the menu:

Administrator / Networking/ Service / webthing-iotjs/ YAML:

spec:
  ports:
    - name: 8080-tcp
      protocol: TCP
      port: 8888
      targetPort: 8888
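The same port change can be applied from the CLI instead of the console. A sketch using oc patch; it assumes the service is named webthing-iotjs in the webthing-iotjs project:

oc -n webthing-iotjs patch service/webthing-iotjs --type merge \
  -p '{"spec":{"ports":[{"name":"8080-tcp","protocol":"TCP","port":8888,"targetPort":8888}]}}'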

Developer / Topology / webthing-iotjs / Route / URL

VAGRANT:

vagrant init hashicorp/bionic64 || vagrant init .
time vagrant up --provision

make client/demo
vagrant ssh iotjs -h

vagrant ssh -- "sudo killall iotjs"

vagrant halt
pidof VBoxHeadless # none
cat .vagrant/machines/default/virtualbox
du -hsc "$HOME/VirtualBox VMs/iotjs-express_default_"*"_"*/ # 2.5GB
vagrant destroy -f
sudo killall VBoxHeadless

KUBEEDGE (TODO):

dpkg -s kubeedge | grep Version: 
#| Version: 0.0.0+v1.1.0+beta.0+176+g4a75b7c2-0~rzr0+v1.1.0+beta.0+184+g432f6fea

dpkg -L kubeedge

sudo apt-get install kubeadm


/usr/bin/keadm version
#| version: version.Info{Major:"1", Minor:"1+", GitVersion:"v1.1.0-beta.0.185+bcc89d618aa98e", GitCommit:"bcc89d618aa98e21349d48cc0bf5ea8964d46c0a", GitTreeState:"clean", BuildDate:"2019-10-30T18:47:16Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

/usr/bin/keadm reset
#| sh: 1: kubeadm: not found

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo 'deb https://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo /usr/bin/keadm init

sudo keadm init

# TODO
sudo ./certgen.sh genCertAndKey edge
kubectl create -f devices_v1alpha1_devicemodel.yaml
kubectl create -f devices_v1alpha1_device.yaml



LICENSE: CC-BY-SA-4.0
