Report bugs for Synology DSM deploy hook #2727
@tresni thanks for your work! This doesn't support automatically renewing the cert, does it?
@lippertmarkus If you mean will the Synology automatically renew the certs, no. However, renewed certificates will be updated on the Synology. If you want to do renewals on your Synology, I do this using a cron job: Control Panel -> Task Scheduler -> Create -> Scheduled Task -> User-defined Script. Under General, set User to root. Under Schedule, set a schedule you feel is appropriate (I do monthly). Under Task Settings, put the following in the Run Command box (assuming you installed to /usr/local/share/acme.sh): /usr/local/share/acme.sh/acme.sh --cron --home /usr/local/share/acme.sh/
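The Task Scheduler steps above boil down to a one-line user-defined script. A minimal sketch of what goes in the Run Command box, assuming the install path mentioned above (adjust the path if you installed acme.sh elsewhere):

```shell
#!/bin/sh
# DSM Task Scheduler "User-defined script", run as root on a schedule.
# Assumes acme.sh was installed to /usr/local/share/acme.sh as described above.
/usr/local/share/acme.sh/acme.sh --cron --home /usr/local/share/acme.sh/
```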
@tresni Thanks for the quick response. I do basically the same, see https://github.com/lippertmarkus/synology-le-dns-auto-renew . When I saw that you use the web API in your deploy hook, I wondered if we could use this API for the renewal process as well?
This does both issuance and renewal. No weird directory handling; it just uses the API for everything. Maybe an example is better than trying to explain :)
@tresni sorry for hijacking this issue for discussion. Please tell me if you want to discuss elsewhere :) While implementing #2782 I found that there is a binary which allows executing API requests without requiring authentication:
Maybe this could be used to implement a second way of importing the certificates? This wouldn't require any access information, so no scheme, host, port, username, password or device ID environment variables. On the other hand, this can only be used with a locally installed version of acme.sh. I just briefly looked into that but couldn't figure out how to prepare a request for sending the binary certificate data with this method, but I think it should work somehow.
This module is designed so it can be run from any system, not just the Synology device itself, in line with what I believe acme.sh is attempting to accomplish. Using the web API allows for that; using the binary would not. Definitely a good find though!
Nice work with the hook, much cleaner than looping over folders and certs. 👍 🙂 I was trying it today, but I ran into two separate issues:
Password: changing the password to an alphanumeric one fixed the issue, as the script proceeded further.
Certificate upload: in short, uploading fails (after quite some time) and the response has an error code of 119. I'd welcome any suggestions here... 🤔 (EDIT: trying to upload the same files through the DSM Web UI works perfectly.)
DSM: 6.2.2-24922 Update 4
Actually, according to the manual, Wget does not currently support multipart/form-data for transmitting POST data... 😖 I tried this again using cURL and it works. So I guess the options are either doing the upload in a way that works with Wget, or bailing out with a message if that's not possible and Wget is being used.
Exit code 4 is a network error, which I don't think is related to the form data itself. Also, the documentation for wget is just saying that it cannot create multipart uploads itself. We construct the payload ourselves and send it as data, so I don't think this has any impact on wget, but I'll double check.
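The approach described above, building the multipart payload by hand so wget only has to send pre-built data, can be sketched as follows. This is an illustration, not the hook's actual code; the field name, file, host, and endpoint are placeholder assumptions:

```shell
#!/bin/sh
# Sketch: hand-build a multipart/form-data body so it can be sent with
# `wget --post-file`, since wget cannot construct multipart uploads itself.
printf 'dummy key material\n' > /tmp/domain.key   # stands in for a real key file

boundary="----AcmeFormBoundary42"
body=/tmp/upload.body
{
  printf -- '--%s\r\n' "$boundary"
  printf 'Content-Disposition: form-data; name="key"; filename="domain.key"\r\n'
  printf 'Content-Type: application/octet-stream\r\n\r\n'
  cat /tmp/domain.key
  printf '\r\n--%s--\r\n' "$boundary"
} > "$body"

# The actual upload (hypothetical host/endpoint, left commented out):
# wget -q -O - \
#   --header="Content-Type: multipart/form-data; boundary=$boundary" \
#   --post-file="$body" "https://nas.example:5001/webapi/entry.cgi"

grep -c -- "--$boundary" "$body"   # prints 2 (opening and closing boundary lines)
```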
I'm using default cURL for transferring the certificates, but receiving error code 119 too. As the default Synology certificate is not verified, I first got cURL error 60 ("SSL certificate problem: unable to get local issuer certificate"). After resolving this problem, I got no further, so I tried importing the certificates with a different reload script which was linked here, but this broke my DSM certificate system completely (no certificates were shown, and even after regenerating the nginx certificate folder all the services which use the certificates are gone and I am not able to restore them). So I will now restore the DSM system to factory settings to see if that somehow helps getting rid of the 119 error. Device: DS216j. Error 119 debug log using cURL (cleared of private data):
I've fixed the URL-encoding issue for passwords; I'll open a PR for that shortly. I have confirmed there appears to be an issue using wget. Comparing differences between wget and curl now.
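The password issue above comes down to percent-encoding: characters like `@`, `&`, and spaces break the request body unless they are encoded. A minimal pure-shell percent-encoder sketch (illustrative only; acme.sh ships its own `_url_encode` helper, and this is not that implementation):

```shell
#!/bin/sh
# Minimal percent-encoder: copy unreserved URI characters through, encode
# everything else as %XX. Illustrative, not the acme.sh helper.
url_encode() {
  s=$1
  out=""
  while [ -n "$s" ]; do
    c=${s%"${s#?}"}   # first character of $s
    s=${s#?}
    case $c in
      [A-Za-z0-9._~-]) out="$out$c" ;;             # unreserved: copy as-is
      *) out="$out$(printf '%%%02X' "'$c")" ;;     # anything else: %XX byte value
    esac
  done
  printf '%s\n' "$out"
}

url_encode 'p@ss w:rd&123'   # -> p%40ss%20w%3Ard%26123
```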
@JoJota and @tatablack, both of your issues should be fixed by #2935. Thank you for the reports. @JoJota I'm not entirely sure, but I believe this is actually the same issue that is causing wget to fail all the time.
The Synology DSM deploy hook feature is broken on both the master and dev branches. The script does not detect or read the exported SYNO_Create variable/data. Only when it is added to the synology_dsm.sh script are you able to progress to the deployment function, which, after authentication and reading of the cert data, fails to add and apply the Let's Encrypt cert.
Note: xxx.xxxxxxx.xxx replaces the actual cert domain. There's also the issue of the missing sudo command, which is not mentioned in the wiki: https://github.com/acmesh-official/acme.sh/wiki/Synology-NAS-Guide No issues to report with the issuance of Let's Encrypt certs, acme.sh itself, or other exported variables, e.g. CERT_DOMAIN.
@atam1 Please give some additional information. Are you attempting to run this on a Synology NAS device or from a remote computer? If on a Synology NAS, what device and what version of DSM? I've just tested from my personal machine to a virtualized DS3615xs running DSM 6.1.7 with no issues (only the domain has been changed). With the latest dev:
With master:
Additionally, the hook does not use sudo, so I'm not sure what you mean by "missing sudo command" at all.
I just tested master on the same virtual machine (DS3615xs running DSM 6.1.7) and had no issues:
@tresni Hi Brian. Yes, I am connected to the Synology box via SSH from my computer (Mac Pro). On sudo - if I run the command without sudo:
The same goes for script installation. Once you execute 'sudo -i' and enter the password, you are logged in as root. I did not try to install the script under an administrator account as 'sudo -i' was used in the instructions. As for SYNO_Create, I just tried executing the same command line you used but with sudo, and after exporting SYNO_Username, SYNO_Password, CERT_DOMAIN and SYNO_Certificate, the result:
acme.sh created and installed the cert and its files to /root/.acme.sh/xxx.xxxxxxx.xxx_ecc/. Also, another issue I had with the master branch was that SYNO_Scheme, SYNO_Hostname and SYNO_Port were not being picked up by the deployment script, so I manually saved them in /root/.acme.sh/xxx.xxxxxxx.xxx_ecc/xxx.xxxxxxx.xxx.conf ... haven't tested dev yet.
A shell started with
The command is incorrect. If you are going to use sudo, the appropriate command would be different. Just an additional note that I've updated the Synology NAS guide to give steps that work with no workarounds necessary. Please let me know if you have problems after following the steps exactly as outlined there.
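The underlying issue is that sudo (and especially `sudo -i`) starts the target command with a scrubbed environment, so variables exported in your own shell never reach the hook. A sketch of the effect, using `env -i` as a stand-in for sudo's scrubbing (sudo's exact behaviour depends on env_reset/env_keep in sudoers):

```shell
#!/bin/sh
export SYNO_Certificate="my cert"

# A scrubbed environment - roughly what `sudo -i` gives you:
env -i sh -c 'echo "scrubbed: SYNO_Certificate=${SYNO_Certificate:-<unset>}"'

# Passing the variable explicitly on the command line survives the scrub:
env -i SYNO_Certificate="my cert" sh -c 'echo "explicit: SYNO_Certificate=$SYNO_Certificate"'
```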
@tresni Alright. I'm going to start fresh and reinstall. Appreciate your help! Thanks
@tresni Hello again Brian. I gave this another try and had followed the updated wiki. While acme.sh can now be executed without sudo and the deployment script is detecting/reading the data for SYNO_xxxxxx, it's still complaining and failing when deploying the cert to DSM.
What else can be causing this? Thank you!
Adding debug output:
Tested with synology_dsm.sh from both master and dev branches |
What Synology device are you using? What version of DSM are you using? I can't find error code 5529 defined. The surrounding defined errors that I could find are:
I've been able to successfully deploy an ECC certificate as follows (DS3615xs and DSM 6.1.7):
With the information you've provided, I can't replicate what you are seeing.
918+ with DSM 6.2.3 rel 25426 (latest release)
Could the form fields/options have changed between versions, as it is failing when uploading the new cert? The error code 5529 is very close to the error codes you posted, so I am guessing it has something to do with the cert or its fields.
Unfortunately I am not at home to test with my DSM running the latest version, and I'm having trouble getting XPEnology to run DSM 6.2 or later on VirtualBox.
You could check the synoSDSjslib/sds.js file to see if it contains the string '5529', similar to the ones I posted. It should be available at http://[synology_host]:[synology_port]/synoSDSjslib/sds.js .
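The lookup described above can be done from the command line. A sketch: the curl fetch is hypothetical (placeholder host/port, to be replaced with your NAS), and the grep is demonstrated against a sample line shaped like the sds.js entry quoted later in this thread:

```shell
#!/bin/sh
# Look up a DSM error code in the web UI's string table.
# Real fetch (placeholder host/port), left commented out:
# curl -s "http://nas.example:5000/synoSDSjslib/sds.js" > /tmp/sds.js

# Sample line in the same shape as real sds.js entries:
printf '%s\n' '5529:_T("certificate","not_support_ecc"),' > /tmp/sds.js

grep -o '5529:_T([^)]*)' /tmp/sds.js   # -> 5529:_T("certificate","not_support_ecc")
```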
I checked, and it's 5529:_T("certificate","not_support_ecc"). How does it not support ECC, when with the previous acme/Synology method (prior to the DSM deploy hook) I was able to create and install ec-256 and ec-384 certs?
I can't answer that as I don't work for Synology. That's a response from their API. Can you add the certificates manually via the DSM webui? I was able to get something set up to access my DS1515+ at home; it's running DSM 6.2.2 and supports ECC certificates. So either it's something new in 6.2.3 or something specific to the DS918+. Reviewing the DS918+ release notes, the closest I see is "Added support for Let's Encrypt wildcard certificates for Synology DDNS."
You might take a look at |
Those outputs are meant to be written. |
"--debug 3" outputs the username and password as hex strings; e.g. in Saxfusion's Jan 7 post, line 7 of the log output is the username and line 10 is the password. You might want to change your password, @Saxfusion.
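To underline the point above: a hex string in a pasted log is no protection, because decoding it is a one-liner. A sketch (the hex value below is made up for illustration and is not from any real log in this thread):

```shell
#!/bin/sh
# Decode a hex string back to text: split into byte pairs, convert each to
# octal, and let printf emit the character.
decode_hex() {
  for pair in $(printf '%s' "$1" | sed 's/../& /g'); do
    printf "\\$(printf '%03o' "0x$pair")"
  done
  echo
}

decode_hex 73336372337421   # -> s3cr3t!
```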
For the password, I'll add that to the list, i.e., I'll fix 3 instead of 2 debug outputs. However, neither ACME.sh nor this deploy extension ever hid the username, and I don't regard usernames as "sensitive information".
Ok, I mistakenly thought the username was being hidden from the log output as well. |
As @rsporsche already pointed out, the password is also encoded as an easily decoded hex string. Beyond that, the private key should also be hidden, considering that the private key should never be shared. While I can agree that a username isn't as sensitive as a password, it still makes it easier to figure out which accounts to attack, knowing they have admin privileges to function in their given context. Yes, I've changed the usernames in the logs I've shared here. All relevant sections have been replaced with
Had a look at it, and regarding:
it's none of this deploy script's business - see:
encoded_username="$(printf "%s" "$SYNO_Username" | _url_encode)"
encoded_password="$(printf "%s" "$SYNO_Password" | _url_encode)"
The same applies to the criticized debug output of both the actual login URL and the saved deploy config. Until then, it basically remains up to the user pasting a log to properly review and occasionally redact any sensitive data that's still in the log. Lastly, your aforementioned criticized log part:
body='api=SYNO.Core.Certificate.CRT&method=list&version=1&_sid=##APPARENTLY_UNHIDDEN_SYNO_DEVICE_ID_EVEN_THOUGH_OUTPUT-INSECURE_IS_NOT_ENABLED##'
simply contains no sensitive data. It remains up to NeilPang for when he approves PR #5030 - so, don't complain to me, please. 😉
The deploy command does not appear to honor SYNO_CREATE. I've set it via 'export SYNO_CREATE=1', but the script does not pick it up. Even if I set SYNO_CREATE right before the deploy call:
SYNO_CREATE=1 /root/acme.sh/acme.sh --deploy -d mydomain.com --deploy-hook synology_dsm --debug 2 ...
I still get:
[Sat Mar 2 16:27:25 UTC 2024] http services were NOT restarted
What does work is when I add SYNO_CREATE=1 on line 142, right before:
if [ -z "$id" ] && [ -z "${SYNO_Create:-}" ]; then
Check your case. You have a case mismatch in the variable name.
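Shell variable names are case-sensitive, so `SYNO_CREATE` and `SYNO_Create` are two unrelated variables, and the hook's `${SYNO_Create:-}` check (quoted in the comment above) only sees the latter. A minimal reproduction:

```shell
#!/bin/sh
# Exporting the wrongly-cased name does not set the one the hook tests.
export SYNO_CREATE=1

if [ -z "${SYNO_Create:-}" ]; then
  echo "SYNO_Create is unset - the hook would skip creating the certificate"
fi

export SYNO_Create=1   # the correctly-cased name
if [ -n "${SYNO_Create:-}" ]; then
  echo "SYNO_Create is set"
fi
```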
I trashed my Docker container setup and installed again locally under /usr/local/share/acme.sh. Trying to use the temp admin, which is created and deleted just fine. Only the login and deploy are failing somehow.
Any ideas?
@Saxfusion Try the latest patched version; just download and replace the old synology_dsm.sh (the wiki has already been updated): https://github.com/acmesh-official/acme.sh/blob/6af52933155c913d236f5be7bc1c5d4eccf05887/deploy/synology_dsm.sh
@scruel that is working. I suspect the "enforce 2FA for the admin group" setting, which is temporarily disabled in this patched version, made the difference.
Hello,
[Thu Apr 25 10:21:13 UTC 2024] Running cmd: deploy
First difference with the previous deployment: Session ID and SynoToken are not filled in, and obviously the deployment fails. Looking at my NAS, the directory /usr/local/share/acme.sh/ is not there anymore! Checking on my other NAS (DS718+) with the same configuration, it is the same. Both NAS were updated yesterday to DSM 7.2.1-69057U5. I have also updated the acme.sh Docker container to the latest. I can't tell if this directory was there before the update and, if so, which update is the culprit. Any idea what is going wrong?
I'm running acme.sh in docker using the neilpang/acme.sh docker container to deploy to my synology NAS. The docker container was updated earlier this week and out of curiosity I manually forced a renew, which failed with the following message:
I guess the grep provided by the Docker container is now from BusyBox, which doesn't seem to support the '-P' parameter. Any chance this can be adapted, or should I talk to the maintainer of the Docker container?
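BusyBox grep indeed has no `-P` (PCRE) option, but the kind of extraction `-P` is typically used for can usually be rewritten with POSIX sed, which BusyBox does provide. A sketch with a hypothetical `-P` pattern and a stand-in JSON response (not the hook's actual code or a real API reply):

```shell
#!/bin/sh
# A GNU-only extraction such as:
#   grep -oP '"synotoken" *: *"\K[^"]*'
# can be replaced with a POSIX BRE in sed -n 's/.../\1/p'.
resp='{"success":true,"synotoken":"yU5LMGVAgdCro"}'

printf '%s\n' "$resp" | sed -n 's/.*"synotoken" *: *"\([^"]*\)".*/\1/p'
# -> yU5LMGVAgdCro
```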
@Mic13710 I did not follow the updates to the deployment hook, but I had exactly the same issue. If you read the script: in my case, it was because the token and another variable were not set, blocking the script, even if they are not used for the authentication process. After many discussions, it is related to my "old" version of DSM. I proposed a modification to fix this, but I think it has been rejected...
Try |
@LordDarkneo thanks for your feedback.
I feel that the problem lies in this famous acme.sh directory.
That worked |
@Mic13710
@Mic13710 You should see something "longer" in the logs, with the constructed URL in the variable 'url=xx', just before the "Unable to authenticate to" log line. Try to copy and paste this URL into your browser from your local network (as it is using the local IP address, I guess) and see the error you get for the login issue (trying to fix the URL until the authentication succeeds might also help to find which condition is making it fail).
@LordDarkneo The URL is OK. It is the one for Docker.
The Session ID and SynoToken were correctly collected, and the process went to the end. Since the update of DSM and acme in Docker, my attempt to deploy the certificate on the same NAS (718+) fails:
Clearly, Session ID and SynoToken are not there. Sorry @scruel, I have tried hard but could not locate any log in /var/log after running with --debug 3.
When you paste the URL http://172.17.0.1:5000/webapi/query.cgi?api=SYNO.API.Info&version=1&method=query&query=SYNO.API.Auth into a browser, is it OK? I remember I had issues here because SYNO.API.Auth is a method of entry.cgi and not query.cgi....
Try to set a value (no matter what) in the SynoToken to see if it fixes the issue (the session ID is obtained later in the code, at least in the version I have).
@Mic13710 Unfortunately, we won't be able to figure out the fix if you can't provide debug logs; maybe we will have to wait for other people to reproduce your issue.
@LordDarkneo Thank you for your help. The IP 172.17.0.1 is private and internal to the NAS (Docker), so it can't be resolved in a browser. Replacing the IP with the NAS IP can be resolved but does not provide relevant info:
{"data":{"SYNO.API.Auth":{"maxVersion":7,"minVersion":1,"path":"entry.cgi"}},"success":true}
Cache-Control: max-age=0, no-cache, no-store, must-revalidate
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,/;q=0.8
@scruel I have searched all over the NAS with WinSCP as root. I can't locate any log related to the deployment except those already given. My certs are valid until 22/05 for the 220+ and the beginning of July for the 718+, so I have little time left. I am quite busy these days, but I should be able to provide more information later on. At the moment:
@Mic13710 in fact it does. It confirms that the query has been sent properly and that login should be made through entry.cgi. Now we still need to find which version is used for the authentication (I'd say 7, but not sure...). Did you try to set a value in the SynoToken and execute the script to see how it goes (just add a variable in the flat file)?
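The query.cgi response quoted above only tells you where SYNO.API.Auth actually lives. A sketch of parsing that response and building the login URL from it (host/port and the login parameters are illustrative assumptions based on this thread, not verified against the API spec):

```shell
#!/bin/sh
# Parse the SYNO.API.Info response to find which CGI serves SYNO.API.Auth,
# then build the login URL against that path instead of query.cgi.
resp='{"data":{"SYNO.API.Auth":{"maxVersion":7,"minVersion":1,"path":"entry.cgi"}},"success":true}'

auth_path=$(printf '%s' "$resp" | sed -n 's/.*"path":"\([^"]*\)".*/\1/p')
echo "SYNO.API.Auth is served by: $auth_path"   # -> entry.cgi, not query.cgi

# Hypothetical host/port; a real call would append account/passwd parameters.
login_url="http://nas.example:5000/webapi/${auth_path}?api=SYNO.API.Auth&method=login&version=7"
echo "$login_url"
```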
@LordDarkneo I have tried to add values for Session ID and SynoToken in ndd.conf like this:
SAVED_SYNO_Session_ID='27_e3kXp9lHXGnNhIXmeqdTS33P5aa8LHZBt6581WwYx-VrBtWYRDJ2u0FFPSVFm80KYN0qu_Jci0eHsXglE8U'
SAVED_SYNO_SynoToken='yU5LMGVAgdCro'
Without any result. Still empty in the log. Values are those from the previous successful deployment. Maybe I should force the values somewhere else or differently, but I have no idea where and how.
Just fill the SynoToken, not the session ID. If logs do not refresh, there's probably a problem elsewhere....
Well, what can do more, can do less. |
Hello @scruel, @LordDarkneo,
Please report bugs for the Synology DSM hook here. Remember to include debug logs.