Kubernetes environment where plgd-hub was tested #1123

Open
Askidea opened this issue Nov 8, 2023 · 3 comments
Labels
🪲 bug (Something isn't working) · ❓ question (Further information is requested)

Comments

Askidea commented Nov 8, 2023

@jkralik
I am trying to install plgd-hub to run on a single machine. I tried installing the official Kubernetes binaries (kubeadm, kubectl, kubelet), but I am having trouble proceeding due to some errors.

My deployment environment:

  • Ubuntu 20.04 linux/amd64
  • docker 24.0.0
  • kubeadm v1.28.2
  • CNI: flannel

So, in order to get help deploying plgd-hub, I have the following questions and would appreciate some advice.

Q1. In what Kubernetes environments can plgd-hub operate? In which Kubernetes environment has this GitHub repository been tested? What is the recommended Kubernetes environment?
e.g. minikube, k3s, kind, microk8s, official Kubernetes (kubeadm), etc.

Q2. Has plgd-hub been tested on both a single node (control plane only) and multiple nodes? In what Kubernetes environment is try.plgd.cloud currently running?

Q3. Is device onboarding possible remotely using plgd-hub/bundle?

Q4. I tried to onboard the device I built to try.plgd.cloud with client-application, but when I connect to localhost:3000 in a web browser, the page loads indefinitely. Is anyone else having this problem?

Askidea added the 🪲 bug (Something isn't working) label Nov 8, 2023
jkralik added the ❓ question (Further information is requested) label Nov 8, 2023

jkralik (Member) commented Nov 8, 2023

@Askidea

Q1. In what Kubernetes environments can plgd-hub operate? In which Kubernetes environment has this GitHub repository been tested? What is the recommended Kubernetes environment?
e.g. minikube, k3s, kind, microk8s, official Kubernetes (kubeadm), etc.

We have tested it on GCP, AWS, and microk8s.

Q2. Has plgd-hub been tested on both a single node (control plane only) and multiple nodes? In what Kubernetes environment is try.plgd.cloud currently running?

plgd-hub has been tested with 10k devices on 3 nodes, and also on a Raspberry Pi 4 with 8 GB RAM running 1k cloud_servers. https://try.plgd.cloud is running on microk8s.

Q3. Is device onboarding possible remotely using plgd-hub/bundle?

Via the FQDN environment variable, you can set the IP address or domain name where the hub is running:

docker run -d --name plgd -p 443:443 -p 5683:5683 -p 5684:5684 -e FQDN=10.110.110.12 ghcr.io/plgd-dev/hub/bundle:latest
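
As a quick sanity check that the bundle is reachable from another machine (just a sketch; the IP and ports below are taken from the command above, and the bundle's certificates are self-signed by default, hence the skipped verification):

curl -k https://10.110.110.12/ -o /dev/null -w '%{http_code}\n'   # HTTPS UI/API on 443 should answer
nc -vz 10.110.110.12 5684                                         # CoAP gateway (TLS) port is open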

For example, we have a pipeline that runs a runner with plgd-hub/bundle, and you can onboard devices there. (After approximately 30 minutes, the runner is terminated).

Link to the pipeline

Q4. I tried to onboard the device I built to try.plgd.cloud with client-application, but when I connect to localhost:3000 in a web browser, the page loads indefinitely. Is anyone else having this problem?

I temporarily moved the API domains because we exceeded Let's Encrypt rate limits. However, I have now fixed it and moved them back. If you follow the remote-access documentation, it should work.

Askidea (Author) commented Nov 8, 2023

@jkralik Thanks for your very quick comment.

I'm going to create several VODs (Virtual OCF Devices in an Android app) that are mapped to physical ZigBee devices, and I want to register them in try.plgd.cloud. However, it seems that local onboarding is possible with client-application, but registration with the cloud is not. Is there a way to register the VODs I created in try.plgd.cloud? I created my account in try.plgd.cloud and read the documentation, but I don't know the exact way to do it.

And as mentioned above, I am trying to install plgd-hub on a single control-plane node created with the official Kubernetes binaries and the Flannel CNI, but it does not work: the plgd pods fail with errors. At this point, would it be a good idea to build it using microk8s the way you did?

Of course, the errors may or may not be caused by my Kubernetes settings.

ocfcloud@p620:~/tmp$ kubectl get pods -A
NAMESPACE      NAME                                                 READY   STATUS             RESTARTS         AGE
cert-manager   cert-manager-58d8656bc5-99gwd                        1/1     Running            0                106m
cert-manager   cert-manager-cainjector-7bcbdd9b6d-t5v5h             1/1     Running            0                106m
cert-manager   cert-manager-webhook-7d87dcc755-56rqc                1/1     Running            0                106m
kube-flannel   kube-flannel-ds-phl7m                                1/1     Running            1 (7h24m ago)    8h
kube-system    coredns-5dd5756b68-5w8ww                             1/1     Running            0                8h
kube-system    coredns-5dd5756b68-kmmt2                             1/1     Running            0                8h
kube-system    etcd-p620                                            1/1     Running            12 (7h20m ago)   8h
kube-system    kube-apiserver-p620                                  1/1     Running            3 (7h20m ago)    8h
kube-system    kube-controller-manager-p620                         1/1     Running            3 (7h20m ago)    8h
kube-system    kube-proxy-7mnz5                                     1/1     Running            1 (7h24m ago)    8h
kube-system    kube-scheduler-p620                                  1/1     Running            3 (7h20m ago)    8h
plgd           hub-nats-0                                           3/3     Running            0                55m
plgd           hub-plgd-hub-certificate-authority-cbf995bb8-ks5pm   0/1     CrashLoopBackOff   14 (73s ago)     55m
plgd           hub-plgd-hub-coap-gateway-5db97f7d9c-2cjwl           0/1     CrashLoopBackOff   15 (3m31s ago)   55m
plgd           hub-plgd-hub-grpc-gateway-84b987467f-9mflk           0/1     CrashLoopBackOff   15 (3m8s ago)    55m
plgd           hub-plgd-hub-http-gateway-5698c94c67-8f47h           0/1     CrashLoopBackOff   15 (2m37s ago)   55m
plgd           hub-plgd-hub-identity-store-5c64cdcd8d-fmzbj         0/1     CrashLoopBackOff   15 (4m1s ago)    55m
plgd           hub-plgd-hub-resource-aggregate-78fbf74498-kp2lj     0/1     CrashLoopBackOff   14 (68s ago)     55m
plgd           hub-plgd-hub-resource-directory-976f9b98-kncj9       0/1     CrashLoopBackOff   15 (2m53s ago)   55m
plgd           mongodb-0                                            0/1     Pending            0                55m
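
A generic way to dig into failures like these (standard kubectl diagnostics, nothing plgd-specific): CrashLoopBackOff pods usually report the reason in their previous logs, and a Pending mongodb-0 often means its PersistentVolumeClaim cannot be bound, e.g. because no default StorageClass exists:

kubectl -n plgd logs hub-plgd-hub-identity-store-5c64cdcd8d-fmzbj --previous   # reason for the crash loop
kubectl -n plgd describe pod mongodb-0                                         # Events section shows why it stays Pending
kubectl -n plgd get pvc                                                        # is MongoDB's volume claim Bound?
kubectl get storageclass                                                       # dynamic provisioning needs a (default) StorageClass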

jkralik (Member) commented Nov 9, 2023

@Askidea

Is there a way to register the VODs I created in try.plgd.cloud?

Each VOD must have its own cloud configuration resource. Have you ported the changes to the master branch?
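
(For reference, a rough sketch of what that cloud configuration resource carries, i.e. the OCF CoAP cloud configuration resource oic.r.coapcloudconf; the property names are from the OCF spec as I recall them, and all values below are placeholders, not the real try.plgd.cloud settings:)

  apn – authorization provider name (e.g. "plgd")
  cis – cloud / CoAP-gateway endpoint URI (e.g. "coaps+tcp://<hub-address>:5684")
  sid – hub ID (UUID of the cloud the device should connect to)
  at  – access token / authorization code obtained during onboarding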

What we tested is similar, but not with VODs. We ran a cloud server with multiple devices:

  1. To test, can you register a cloud_server from the master branch to try.plgd.cloud using the client application?
  2. Run the cloud_server with multiple devices (it is very similar to a VOD and needs to be ported to the master branch) using the argument --num-devices:
    ./cloud_server --num-devices 5
    
    You will see five devices in the client application; you need to onboard each of them locally and register it to try.plgd.cloud. This is similar to how it will work for VOD devices.

At this point, would it be a good idea to build it using microk8s the way you did?

You need to properly set up Kubernetes, and an easy way to do it is to use MicroK8s with the steps described here.
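
(A rough sketch of the usual MicroK8s bootstrap on Ubuntu; the addon names and chart repository URL are what I recall from common usage and the plgd charts, so treat the linked steps as authoritative:)

sudo snap install microk8s --classic
sudo microk8s status --wait-ready
sudo microk8s enable dns hostpath-storage cert-manager helm3   # DNS, a default StorageClass and cert-manager are needed before the chart
sudo microk8s helm3 repo add plgd https://charts.plgd.dev
sudo microk8s helm3 repo update
sudo microk8s helm3 install hub plgd/plgd-hub                  # plus the domain/values overrides from the linked documentation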
