Proxmox VE 8.1.3 - TrueNAS Scale 23.10 #174
OK, so I erased TrueNAS Scale 23.10.01 (Cobia), installed TrueNAS Scale 22.02.04 (Angelfish), and was able to get everything working as expected. There is definitely something that needs updating for the plugin to work with Cobia. Dennis
Does not work with 23.10 either.
Mine works; I'm on the latest TrueNAS-SCALE-23.10.1.3.
The solution I've found is simply to add your public key to the "Authorized Keys" section of the root user. To do so, in TrueNAS:
Edit: This was with TrueNAS-SCALE-23.10.1.3 and Proxmox VE 8.1.0.
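For context on the key setup above: PVE's ZFS over iSCSI backend connects to the NAS as root over SSH and looks for the key pair under `/etc/pve/priv/zfs/<portal>_id_rsa`, where `<portal>` is the portal IP configured for the storage. A minimal sketch of generating that pair on the PVE host; the portal IP 192.168.253.252 is a placeholder taken from later in this thread, so substitute your own:

```shell
# Generate a dedicated key pair for the ZFS over iSCSI plugin.
# PVE expects it at /etc/pve/priv/zfs/<portal-ip>_id_rsa.
mkdir -p /etc/pve/priv/zfs
ssh-keygen -t rsa -b 4096 -N "" -f /etc/pve/priv/zfs/192.168.253.252_id_rsa

# Print the public half; paste it into the root user's
# "Authorized Keys" field in the TrueNAS web UI
# (Credentials -> Local Users -> root -> Edit).
cat /etc/pve/priv/zfs/192.168.253.252_id_rsa.pub

# Verify the key logs in non-interactively before testing via PVE:
ssh -i /etc/pve/priv/zfs/192.168.253.252_id_rsa root@192.168.253.252 uname -a
```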
I'm getting the same result as Alcatraz077: connected and showing correctly in the Proxmox UI, but iscsiadm discovery shows nothing, and when creating a VM it fails with the error: 'TASK ERROR: unable to create VM 200 - Unable to connect to the FreeNAS API service at '192.168.100.1' using the 'http' protocol at /usr/share/perl5/PVE/Storage/LunCmd/FreeNAS.pm line 380.' The disk does actually get created on the TrueNAS server, but it never gets any further than that.
I confirm it's working as expected using PVE 8.1.10 + TrueNAS Scale 23.10.2. It took me a while to figure out a couple of things. I think @adam-beckett-1999 needs to enable "API use SSL" when creating the ZFS over iSCSI connection on the PVE end. I can post /etc/scst.conf (Scale-end configuration) and /etc/pve/storage.cfg as needed.
I did try with SSL both enabled and disabled, and it didn't seem to make any difference; same errors. If you have a working configuration, please do share.
PVE 8.1.10 IP address: 192.168.253.251
TrueNAS Scale 23.10.2 IP address: 192.168.253.252
I followed these steps:
Pool Name:
iSCSI Settings:
To confirm iSCSI connectivity is possible, you can test with `iscsiadm -m discovery -t sendtargets -p 192.168.253.252` in the PVE shell (replace the IP after the -p option as needed). Current contents of my config files. PVE host /etc/iscsi/initiatorname.iscsi:
PVE host /etc/pve/storage.cfg
TrueNAS Scale /etc/scst.conf - parts of this are filled in automatically by the TrueNAS GUI; others were added by creating a VM with ID 105 on PVE:
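For readers wanting the PVE-side reference before the full files are posted: a ZFS over iSCSI entry in /etc/pve/storage.cfg for the freenas-proxmox plugin looks roughly like the sketch below. The storage ID, pool path, target, and credentials are placeholders, and the freenas_* option names are as I recall them from the plugin's README, so double-check against your installed version:

```
zfs: truenas-iscsi
        blocksize 16k
        iscsiprovider freenas
        pool tank/pve
        portal 192.168.253.252
        target iqn.2005-10.org.freenas.ctl:pve
        content images
        sparse 1
        freenas_user root
        freenas_password secret
        freenas_use_ssl 1
```

The `freenas_use_ssl 1` line corresponds to the "API use SSL" checkbox discussed above.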
Host: Proxmox VE 8.1.3
NAS: TrueNAS Scale 23.10
Target Global Configuration:
- Base Name: iqn.2005-10.org.freenas.ctl
- Pool Available Space Threshold: 15%
- Port: 3260

Portals:
- Portal 1: 10.xx.xx.xx:3260
- Discovery/Auth: None

Initiator Groups:
- Group 2: iqn.1993-08.org.debian:01:xxxxxxxxxxx

Targets:
- datastore-ssd
  - Network: 10.x.x.x/23
  - Portal Group ID: 1
  - Initiator Group ID: 2
  - Authentication Method: None
In the Proxmox server, ZFS over iSCSI settings:
ID: SSD-PROD
Portal: 10.XXX.XXX.XXX
Pool: Actual-Pool-Name
ZFS Block Size: 16k
Target: iqn.2005-10.org.freenas.ctl:datastore-ssd
API Username: root
iSCSI Provider: FreeNAS-API
API IPv4 Host: 10.xx.xx.xx (same as portal)
API Password: password
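Since the failures in this thread involve the FreeNAS/TrueNAS API rather than iSCSI itself, it can help to confirm the API is reachable with the same credentials the plugin uses. A hedged sketch against the TrueNAS v2.0 REST API; the IP and password are placeholders:

```shell
# Query the TrueNAS REST API with the plugin's credentials.
# Try both https and http to match the "API use SSL" setting.
curl -sk -u root:password https://10.0.0.1/api/v2.0/system/info
curl -s  -u root:password http://10.0.0.1/api/v2.0/system/info
# A JSON blob with version/hostname means the API side is fine;
# a connection error points at the protocol/SSL choice or a firewall,
# not at the iSCSI target configuration.
```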
Results of restoring a VM from backup into the SSD-PROD volume:
new volume ID is 'SSD-PROD:vm-106-disk-0'
iscsiadm: No session found.
restore proxmox backup image: /usr/bin/pbs-restore --repository root@pam@10.XX.XX.XX:BKP-REPO vm/106/2024-01-06T04:54:20Z drive-scsi0.img.fidx iscsi://10.XXX.XXX.XXX/iqn.2005-10.org.freenas.ctl:datastore-ssd/0 --verbose --format raw
connecting to repository 'root@pam@10.XXX.XXX.XXX:BKP-REPO'
open block backend for target 'iscsi://10.XXX.XXX.XXX/iqn.2005-10.org.freenas.ctl:datastore-ssd/0'
iSCSI: Failed to connect to LUN : Failed to log in to target. Status: Target not found(515)
temporary volume 'SSD-PROD:vm-106-disk-0' sucessfuly removed
error before or during data restore, some or all disks were not completely restored. VM 106 state is NOT cleaned up.
TASK ERROR: command '/usr/bin/pbs-restore --repository root@pam@10.XXX.XXX.XXX:BKP-REPO vm/106/2024-01-06T04:54:20Z drive-scsi0.img.fidx iscsi://10.XXX.XXX.XXX/iqn.2005-10.org.freenas.ctl:datastore-ssd/0 --verbose --format raw' failed: exit code 255
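The "Target not found(515)" login failure above can be probed manually from the PVE shell, independent of the plugin. A sketch, using the portal IP as a placeholder and the target IQN from the post:

```shell
# Ask the portal which targets it actually exports:
iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260
# If datastore-ssd appears, try an explicit login and list sessions:
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:datastore-ssd \
         -p 10.0.0.1:3260 --login
iscsiadm -m session
# If discovery returns nothing, the problem is on the TrueNAS side
# (portal or initiator-group bindings), not in the Proxmox storage entry.
```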
Any ideas?
Regards,
Dennis