
Adding new node to minione #44

Open
Bercik1337 opened this issue Jan 26, 2020 · 2 comments
@Bercik1337
Howdy people.

I love minione, it got my setup up and running in minutes, but now I have a problem expanding it, and I'm exhausted after 2 weeks of trying to add an additional node. The node shows up with STAT on in the host list and all. I can see its resources, but no luck starting a VM there.
Example output:

Sun Jan 26 11:08:24 2020 [Z0][VM][I]: New state is ACTIVE
Sun Jan 26 11:08:24 2020 [Z0][VM][I]: New LCM state is PROLOG
Sun Jan 26 11:08:25 2020 [Z0][VM][I]: New LCM state is BOOT
Sun Jan 26 11:08:25 2020 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/31/deployment.0
Sun Jan 26 11:08:27 2020 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Sun Jan 26 11:08:28 2020 [Z0][VMM][I]: ExitCode: 0
Sun Jan 26 11:08:28 2020 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Sun Jan 26 11:08:29 2020 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/lxd/deploy '/var/lib/one//datastores/0/31/deployment.0' 'ubu1910' 31 ubu1910
Sun Jan 26 11:08:29 2020 [Z0][VMM][I]: deploy: Processing disk 0
Sun Jan 26 11:08:29 2020 [Z0][VMM][I]: deploy: Using qcow2 mapper for /var/lib/one/datastores/0/31/disk.0
Sun Jan 26 11:08:29 2020 [Z0][VMM][E]: deploy: do_map: qemu-nbd: Failed to blk_new_open '/var/lib/one/datastores/0/31/disk.0': Could not open '/var/lib/one/datastores/0/31/disk.0': Permission denied
Sun Jan 26 11:08:29 2020 [Z0][VMM][I]: deploy: Mapping disk at /var/snap/lxd/common/lxd/storage-pools/default/containers/one-31/rootfs using device
Sun Jan 26 11:08:29 2020 [Z0][VMM][I]: deploy: Processing disk 0
Sun Jan 26 11:08:29 2020 [Z0][VMM][I]: deploy: Using qcow2 mapper for /var/lib/one/datastores/0/31/disk.0
Sun Jan 26 11:08:29 2020 [Z0][VMM][I]: deploy: Unmapping disk at /var/snap/lxd/common/lxd/storage-pools/default/containers/one-31/rootfs
Sun Jan 26 11:08:29 2020 [Z0][VMM][E]: deploy: Failed to detect block device from /var/snap/lxd/common/lxd/storage-pools/default/containers/one-31/rootfs
Sun Jan 26 11:08:29 2020 [Z0][VMM][I]: deploy: Unmapping disk at /var/lib/one/datastores/0/31/mapper/disk.1
Sun Jan 26 11:08:29 2020 [Z0][VMM][E]: deploy: Failed to detect block device from /var/lib/one/datastores/0/31/mapper/disk.1
Sun Jan 26 11:08:29 2020 [Z0][VMM][I]: /var/tmp/one/vmm/lxd/deploy:64:in `<main>': failed to setup container storage (RuntimeError)
Sun Jan 26 11:08:29 2020 [Z0][VMM][I]: ExitCode: 1
Sun Jan 26 11:08:29 2020 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Sun Jan 26 11:08:29 2020 [Z0][VMM][E]: Error deploying virtual machine
Sun Jan 26 11:08:29 2020 [Z0][VM][I]: New LCM state is BOOT_FAILURE

The above output is from a node that has /var/lib/one mounted over NFS from the Sunstone/frontend host.
I tried the same without shared storage (hoping it would transfer the image files over SSH or something) and the result is similar: it doesn't work (slightly different output).
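A note on the `qemu-nbd ... Permission denied` line in the log above: the qcow2 mapper runs qemu-nbd as root, and on an NFS export with the default `root_squash` option root gets remapped to `nobody`, so root fails where oneadmin succeeds — which would match the symptoms described here. This is an assumption, not a confirmed diagnosis; a minimal sketch of what to check (the export line shown is hypothetical, the disk path is from the log):

```shell
# On the frontend (NFS server): check how /var/lib/one is exported.
# root_squash (the default) remaps root to nobody, which can break the
# root-run qemu-nbd even though oneadmin can ls/touch the share fine.
cat /etc/exports
# Hypothetical export line that does NOT squash root:
#   /var/lib/one  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# On the node: confirm root (not just oneadmin) can actually read the image.
sudo head -c 16 /var/lib/one/datastores/0/31/disk.0 > /dev/null && echo "root can read"
```

If root can read the file directly on the node, the NFS/root_squash theory is wrong and the problem is elsewhere (e.g. in the mapper itself).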

Setup is:

Hostname opennebula (well it says it all, deployed using minione)
Hostname ubu1910 (new compute node)
Hostname rt (another compute node)

oneadmin@opennebula:~$ onehost list
  ID NAME                                                   CLUSTER    TVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   5 rt                                                     default      1    100 / 400 (25%)   768M / 7.7G (9%) on
   4 ubu1910                                                default      1    100 / 400 (25%)  768M / 1.9G (38%) on
   0 localhost                                              default      0       0 / 800 (0%)     0K / 7.8G (0%) on

Tried adding nodes via the CLI and the web GUI; it constantly ends up the same way. I should note that I DO see the message about permissions, but when I log on as oneadmin and do ls, touch or anything on this NFS share, it works fine.

Any ideas? It must be something I'm missing, OR some silly mistake I've made. But I went through the documentation, YouTube and blog tutorials over and over, and everyone seems to have it working without any problems. Maybe LXD is the problem here?

Ending with a dump of the ubu1910 node:

oneadmin@opennebula:~$ onehost show 4
HOST 4 INFORMATION
ID                    : 4
NAME                  : ubu1910
CLUSTER               : default
STATE                 : MONITORED
IM_MAD                : lxd
VM_MAD                : lxd
LAST MONITORING TIME  : 01/26 11:10:56

HOST SHARES
RUNNING VMS           : 1
MEMORY
  TOTAL               : 1.9G
  TOTAL +/- RESERVED  : 1.9G
  USED (REAL)         : 442.6M
  USED (ALLOCATED)    : 768M
CPU
  TOTAL               : 400
  TOTAL +/- RESERVED  : 400
  USED (REAL)         : 0
  USED (ALLOCATED)    : 100

MONITORING INFORMATION
ARCH="x86_64"
CLUSTER_ID="0"
CPUSPEED="3325"
HOSTNAME="ubu1910"
HYPERVISOR="lxd"
IM_MAD="lxd"
LXD_PROFILES=""
MODELNAME="Intel(R) Xeon(R) CPU           X5680  @ 3.33GHz"
NAME="ubu1910"
NETRX="0"
NETTX="0"
RESERVED_CPU=""
RESERVED_MEM=""
VERSION="5.10.1"
VM_MAD="lxd"

NUMA NODES

  ID CORES USED FREE
   0 - -   0    2

NUMA MEMORY

 NODE_ID TOTAL    USED_REAL            USED_ALLOCATED       FREE
       0 1.9G     880.6M               0K                   1.1G

NUMA HUGEPAGES

 NODE_ID SIZE     TOTAL    FREE     USED
       0 2M       0        0        0

WILD VIRTUAL MACHINES

NAME                                                      IMPORT_ID  CPU     MEMORY

VIRTUAL MACHINES

  ID USER     GROUP    NAME                                   STAT UCPU    UMEM HOST                             TIME
  31 oneadmin oneadmin t2                                     fail    0      0K ubu1910                      0d 00h05

Thanks!

@xorel
Member

xorel commented Jan 28, 2020

As minione was intended to be just an evaluation tool, it supports only the localhost as a hypervisor. If you want to add an additional node, you need to either:

  1. switch to the ssh datastore (http://docs.opennebula.org/5.8/deployment/open_cloud_storage_setup/fs_ds.html#id4)
  2. share & mount the datastores on the new nodes
  3. combine both -- create a new cluster, put the new nodes into the cluster, and create an ssh datastore for them
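For option 1, the linked docs boil down to creating datastores with `TM_MAD = ssh` via `onedatastore create`. A sketch of the templates, assuming a fresh system and image datastore (the names `ssh_system`/`ssh_images` and the file names are made up for illustration):

```shell
# ssh_system.ds -- system datastore template (attributes per the fs_ds docs):
#   NAME   = ssh_system
#   TM_MAD = ssh
#   TYPE   = SYSTEM_DS
#
# ssh_images.ds -- image datastore template:
#   NAME   = ssh_images
#   DS_MAD = fs
#   TM_MAD = ssh
#   TYPE   = IMAGE_DS

# Register both as oneadmin on the frontend:
onedatastore create ssh_system.ds
onedatastore create ssh_images.ds
```

With the ssh transfer manager, images are copied to the node over SSH at deploy time, so no shared storage is needed on the new nodes.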

@Bercik1337
Author

Thanks for the reply.
Tried one more time and... failed again.

oneadmin@opennebula:~$ onedatastore list
  ID NAME              SIZE AVA CLUSTERS IMAGES TYPE DS      TM      STAT
 108 ubu1910-sys          - -   0             0 sys  -       ssh     on
 107 ubu1910-img      78.2G 78% 0             0 img  fs      ssh     on
   2 files            78.2G 78% 0             0 fil  fs      ssh     on
   1 default          78.2G 78% 0            10 img  fs      qcow2   on
   0 system           78.2G 78% 0             0 sys  -       qcow2   on

Removed all NFS mounts.
Added ssh datastores for ubu1910 as you suggested. I wasn't sure which one was needed, so I added both a system and an image datastore.

When deploying the VM I picked the node ubu1910 and the datastore ubu1910-sys (-img is not visible), and I'm back to square one:

Tue Jan 28 18:21:28 2020 [Z0][VM][I]: New state is ACTIVE
Tue Jan 28 18:21:28 2020 [Z0][VM][I]: New LCM state is PROLOG
Tue Jan 28 18:21:51 2020 [Z0][VM][I]: New LCM state is BOOT
Tue Jan 28 18:21:51 2020 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/38/deployment.0
Tue Jan 28 18:21:52 2020 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Tue Jan 28 18:21:52 2020 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Tue Jan 28 18:21:55 2020 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/lxd/deploy '/var/lib/one//datastores/108/38/deployment.0' 'ubu1910' 38 ubu1910
Tue Jan 28 18:21:55 2020 [Z0][VMM][I]: deploy: Processing disk 0
Tue Jan 28 18:21:55 2020 [Z0][VMM][I]: deploy: Using qcow2 mapper for /var/lib/one/datastores/108/38/disk.0
Tue Jan 28 18:21:55 2020 [Z0][VMM][E]: deploy: do_map: qemu-nbd: Failed to blk_new_open '/var/lib/one/datastores/108/38/disk.0': Could not open '/var/lib/one/datastores/108/38/disk.0': Permission denied

permission denied again :(
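Worth noting: with the ssh TM the disk is copied onto the node's local filesystem, so NFS permissions can no longer be the cause here — the `Permission denied` must be local to ubu1910. A sketch of diagnostics to narrow it down (paths taken from the log; the AppArmor angle is an assumption, but AppArmor denials are a known source of EACCES for qemu-nbd on Ubuntu):

```shell
# On ubu1910: check ownership/permissions of the copied disk and the
# directories leading to it (a missing execute bit blocks traversal).
ls -ln /var/lib/one/datastores/108/38/
namei -l /var/lib/one/datastores/108/38/disk.0

# Look for recent AppArmor denials around the failed deploy.
sudo dmesg | grep -iE 'apparmor.*denied' | tail

# Reproduce the failing open as root, outside the driver:
sudo head -c 16 /var/lib/one/datastores/108/38/disk.0 > /dev/null && echo "root can read"
```

If the direct read as root succeeds but qemu-nbd still fails, that points at a confinement profile rather than plain filesystem permissions.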
