Synology iSCSI docker plugin installation: invalid argument #368
Comments
I've never attempted to build a docker plugin myself, so it's entirely possible that I'm just misconfiguring something.
You are brave! I am not much help here, but maybe @olljanat can provide some tips.
iSCSI implementation in Linux is a bit tricky. I have only used this legacy type of Docker plugin with iSCSI. It is not open source, but I know a few things based on it. Check these prerequisites first:
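(Roughly along these lines -- a sketch of the usual host-side checks, assuming open-iscsi is installed on the host; the portal address is a placeholder:)

# iscsid must be installed and running on the host
sudo systemctl status iscsid

# the node needs a valid, unique initiator name
cat /etc/iscsi/initiatorname.iscsi

# manual discovery against the NAS should list your targets
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260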
Then update your config.json:

{
"description": "democratic-csi storage driver for synology iscsi",
"entrypoint": [
"/home/csi/app/entrypoint.sh"
],
"env": [
{
"description": "the CSI endpoint to listen to internally",
"name": "CSI_ENDPOINT",
"value": "unix:///run/docker/plugins/csi-synology-iscsi.sock"
}
],
"interface": {
"socket": "csi-synology-iscsi.sock",
"types": [
"docker.csinode/1.0",
"docker.csicontroller/1.0"
]
},
"linux": {
"AllowAllDevices": true,
"capabilities": [
"CAP_SYS_ADMIN",
"CAP_CHOWN",
"CAP_SYS_PTRACE",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_NET_ADMIN",
"CAP_MKNOD",
"CAP_SYS_MODULE"
],
"devices": null
},
"mounts": [
{
"description": "Used to access the dynamically attached block devices",
"destination": "/dev",
"name": "dev",
"options": [
"rbind",
"rshared"
],
"source": "/dev/",
"type": "bind"
},
{
"destination": "/etc/iscsi",
"name": "/etc/iscsi",
"options": [
"bind"
],
"source": "/etc/iscsi",
"type": "bind"
},
{
"destination": "/lib/modules",
"name": "/lib/modules",
"options": [
"bind"
],
"source": "/lib/modules",
"type": "bind"
},
{
"destination": "/sbin/iscsiadm",
"name": "/sbin/iscsiadm",
"options": [
"bind"
],
"source": "/sbin/iscsiadm",
"type": "bind"
},
{
"destination": "/host/proc",
"name": "/proc",
"options": [
"bind"
],
"source": "/proc",
"type": "bind"
}
],
"network": {
"type": "host"
},
"PropagatedMount": "/data/published",
"workdir": "/home/csi/app"
}
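For reference, building and enabling a managed plugin from a config.json like this generally follows the pattern below; the image and plugin names are placeholders, not the ones from the actual build.sh referenced later in the issue.

# lay out the plugin directory: config.json plus a rootfs/ exported from an image
mkdir -p plugin/rootfs
cp config.json plugin/
docker create --name tmp example/democratic-csi:local true   # placeholder image name
docker export tmp | tar -x -C plugin/rootfs
docker rm tmp

# register and enable the plugin
docker plugin create example/csi-synology-iscsi plugin/
docker plugin enable example/csi-synology-iscsi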
Ah, for iscsi I assume you bind mount / to /host, and I handle the iscsiadm command using chroot via a wrapper. So you do need the daemon running, but you do not need to bind mount individual binaries or run the discovery command manually.
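(Conceptually the wrapper is just a small shim along these lines -- a sketch only, assuming / is bind-mounted at /host; the real script is linked further down:)

#!/bin/sh
# sketch of the idea: run the host's iscsiadm inside a chroot of the
# bind-mounted host filesystem instead of shipping iscsiadm in the plugin
exec chroot /host iscsiadm "$@"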
Manual discovery is of course just a test, and it is a bit tricky to troubleshoot these plugins.
@olljanat, I appreciate the reply -- I don't want you to think I'm ghosting you. I haven't had a chance to try your suggestion yet; I should have time tomorrow evening to try it out and report back.
@olljanat @travisghansen, I have some unfortunate news. The above suggestions didn't seem to help. At first I added all the individual mounts like what @olljanat listed above and tried to build that, which resulted in the same invalid argument error. I then followed what @travisghansen said about how he configured it to perform a chroot based on the entire host filesystem mounted to /host. That too resulted in the same error. Here's my current config.json:

{
"description": "democratic-csi storage driver for synology iscsi",
"entrypoint": [
"/home/csi/app/entrypoint.sh"
],
"env": [
{
"name": "CSI_ENDPOINT",
"description": "the CSI endpoint to listen to internally",
"value": "unix:///run/docker/plugins/csi-synology-iscsi.sock"
}
],
"interface": {
"types": ["docker.csinode/1.0", "docker.csicontroller/1.0"],
"socket": "csi-synology-iscsi.sock"
},
"network": {
"type": "host"
},
"linux": {
"capabilities": [
"CAP_SYS_ADMIN",
"CAP_CHOWN",
"CAP_SYS_PTRACE",
"CAP_IPC_LOCK",
"CAP_IPC_OWNER",
"CAP_NET_ADMIN",
"CAP_MKNOD",
"CAP_SYS_MODULE"
],
"AllowAllDevices": true,
"devices": null
},
"mounts": [
{
"description": "entire filesystem mounted for chroot access",
"name": "/host",
"source": "/",
"destination": "/host",
"options": [
"bind"
],
"type": "bind"
}
],
"workdir": "/home/csi/app",
"PropagatedMount": "/data/published"
}

I used a modified version of the same build.sh script for each of these attempts, and I do not get any more detail out of the logs than before. My suspicion is that the error is happening when Docker first tries to talk to the plugin over its socket. Since my previous configuration, the config change I made based on my misunderstanding of what @olljanat meant, and my configuration matching how @travisghansen set up the chroot all fail the same way, do either of you have any suggestions on how I can approach debugging the sock file?
So did you test that iSCSI discovery works? Then enable debug logging for the Docker daemon and the CSI plugin. Also check whether you get anything in dmesg. If there is no hint in any of those, then the only option is to add more debug logging to the code.
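(For the logging side, something like this:)

# Docker daemon: add "debug": true to /etc/docker/daemon.json, then
sudo systemctl restart docker
sudo journalctl -u docker -f     # managed-plugin stdout/stderr also lands here

# kernel messages from the iSCSI initiator and block layer
dmesg | tail -n 50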
@olljanat, yes, sorry -- I should have listed that in my post. I did verify that iscsid and iscsiadm are working as expected; I was able to see my Synology's iSCSI targets. I tried enabling the Docker daemon's debug logging, but it didn't turn up anything useful. I did some additional debugging after that post using the same article on plugin debugging that you posted. What I discovered is that the democratic-csi sock is not responding to curl calls, while weave, another plugin I have installed, responds to the same curl calls. My current hypothesis is that the gRPC server inside the plugin is never actually serving requests on the socket.
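The probing was roughly as follows (plugin IDs are placeholders). One caveat worth noting: curl speaks HTTP/1.1 by default, while a CSI plugin's socket speaks gRPC over HTTP/2, so silence from curl on the CSI sock is not by itself conclusive.

# managed-plugin sockets live under per-plugin directories
sudo find /run/docker/plugins -name '*.sock'

# weave answers a plain HTTP probe on its socket; democratic-csi does not
sudo curl -v --unix-socket /run/docker/plugins/<plugin-id>/csi-synology-iscsi.sock http://localhost/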
Yeah, I kind of suspected that, because I commented out the
This is interesting, since I am working on a TrueNAS version of this and I can enable it just fine. The only real difference I see is that I'm pushing to a private registry running on my laptop, but I can't imagine that's significant. Just for fun, I tried to build and run the Synology version, and I'm seeing the same errors. Does the plugin need to make a connection to the actual TrueNAS/Synology hardware to initialize properly? I haven't had time to do a deep dive on the inner workings of democratic-csi yet, but I may in the near future.
Hmm, that’s really strange. They both should be exactly the same (in the sense of building a plugin). The active config for the app is the only thing that should alter the behavior at all.
@sethicis Btw, you now have the capabilities from my example but the whole root fs mount. So most probably you need to add
@travisghansen, how does this work? Does your CSI plugin chroot automatically when it finds the /host mount, or should that be part of the entrypoint script in the plugin container?
It does this via a wrapper script built into the container: https://github.com/democratic-csi/democratic-csi/blob/master/docker/iscsiadm
Ok, something is definitely different between the two grpc servers (my currently installed weave plugin and my built democratic-csi). I was able to get the democratic-csi grpc server to respond to my connection calls, but I had to use a different client than plain curl. However, now for the bad news: even with the trace now working properly, there is no additional information about what is happening during the plugin install/enable phase. I see the server get created, but no connection attempts ever seem to arrive, and Docker, I guess, just times out and kills the plugin. Comparing behavior between the two plugins hasn't gotten me much further yet.
But again, I'm going to think on this mystery some more and start on it again fresh tomorrow. I'll probably dig into the source code of democratic-csi next.
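One more knob I may try first, assuming the gRPC library in use honors the standard tracing environment variables, is turning on verbose gRPC tracing from the entrypoint:

#!/bin/sh
# hypothetical tweak: verbose gRPC tracing (honored by the common gRPC runtimes)
export GRPC_VERBOSITY=DEBUG
export GRPC_TRACE=all
exec bin/democratic-csi \
  --driver-config-file=config/synology-iscsi.yaml \
  --log-level=debug \
  --server-socket=/run/docker/plugins/csi-synology-iscsi.sock \
  --csi-version=1.5.0 \
  --csi-name=csi-synology-iscsi \
  --server-socket-permissions-mode=0755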
The issue
I wanted to try out the Synology iSCSI driver with a docker swarm setup, but when I try to build and install the plugin, it fails with the invalid argument error in the title.
I'm kind of stumped as to what is happening, but it seems like something is going wrong during the grpc startup.
How I'm building
I'm using a modified version of the build script written by @olljanat.
My config.json

My build.sh

How I'm invoking the build.sh script:

My entrypoint.sh

#!/bin/sh
bin/democratic-csi \
  --driver-config-file=config/synology-iscsi.yaml \
  --log-level=debug \
  --server-socket=/run/docker/plugins/csi-synology-iscsi.sock \
  --csi-version=1.5.0 \
  --csi-name=csi-synology-iscsi \
  --server-socket-permissions-mode=0755
My driver config yaml
Full log output when attempting to enable the plugin
Platform Info
I'm running this on Ubuntu 22.04.3 with Docker v25.0.1.