
Network storage cannot be mounted at /data/mounts/<name> because it is not empty #4358

Open
Laho812 opened this issue Jun 8, 2023 · 80 comments
Comments

@Laho812

Laho812 commented Jun 8, 2023

The problem

Cannot mount HomeNAS_Backup at /data/mounts/HomeNAS_Backup because it is not empty.
This error message is ambiguous, because "it" could refer to either the mount point or the remote share.
Since the remote share is empty, the mount point must be meant.
So: Home Assistant keeps some data inside the mount point after the network storage connection is deleted, and there is no easy way to clear it yourself.


To reproduce:

  1. Connect a network storage
  2. Select the network storage as a default backup path (Settings -> System -> Backup -> 3 dots -> select network storage)
  3. Do a full backup
  4. Delete the network storage connection
  5. Try to connect the network storage using the same Name

What version of Home Assistant Core has the issue?

2023.06

What was the last working version of Home Assistant Core?

No response

What type of installation are you running?

Home Assistant OS

Integration causing the issue

No response

Link to integration documentation on our website

No response

Diagnostics information

No response

Example YAML snippet

No response

Anything in the logs that might be useful for us?

No response

Additional information

No response

@Laho812
Author

Laho812 commented Jun 8, 2023

I think the backup file is still in this folder, because the following works:

  1. Connect the same storage with a different "Name", e.g. HomeNAS_Backup1
  2. Remove the connection
  3. Add the connection again with the same "Name" HomeNAS_Backup1

@frenck frenck transferred this issue from home-assistant/core Jun 8, 2023
@bendestras

bendestras commented Jun 8, 2023

I have the same problem and the same observation.

because it is not empty

@JTP335d

JTP335d commented Jun 8, 2023

This worked for me by not creating the (named) folder on my NAS. "homeassistant" is the folder on my NAS. It did not create a new folder inside it called "backup", but files are created there when I do a backup. Not sure if that helps.

name: backup
server: ip.add.re.ss
remote share: homeassistant

@bendestras

bendestras commented Jun 8, 2023

identical problem with server off and with 2023.6.1

@wojo

wojo commented Jun 14, 2023

Also running into this after a network issue forced me to remove and re-add a CIFS mount with the same name as before, "Backups".

On the filesystem I don't see the folder it is trying to mount into. Not sure how to move forward to clean up what was left behind.

@Mouseskowitz

I'm having the same issue with an NFS share. It says it's mounted, but I can't actually write a backup file to it. The backup fails silently, which is a huge issue. I made several changes with the backup box checked thinking I had backups, but I didn't. Good thing nothing broke. While I really appreciate the effort of adding an external backup option, this implementation seems dangerous since there is no check to see if the backup was actually made.

@asayler

asayler commented Jul 6, 2023

I encountered this same issue after my file server crashed, bringing down the backup samba share with it. It appears HA will silently write backups to the mount location even when nothing is mounted. To clean it up, I had to connect to the host system console (I have an HA Yellow, so I followed the serial console debugging directions at https://yellow.home-assistant.io/documentation/). It appears these mount points aren't exposed to the docker container that runs the terminal add-on, so connecting to the host system directly was the only way to access them. Once connected, I ran the following to move the misplaced backups to the local backup location and then reboot the host to allow it to cleanly remount the drive:

# mv /mnt/data/supervisor/mounts/<backup mount name>/*.tar /mnt/data/supervisor/backup/
# ha host reboot

So it seems a few things need to be fixed here:

  • HA needs logic to prevent writing backups to the network mount point when nothing is mounted (and this should raise an error rather than silently failing backups); a minimal check is sketched after this list.
  • HA needs a simpler way to clean up a bad mount point in case something like this occurs anyway.
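
For illustration only, a pre-flight check along these lines would catch the unmounted case (a hypothetical shell sketch, not actual Supervisor code; the path is the host-side location mentioned above and the mount name is just an example):

MOUNT_DIR="/mnt/data/supervisor/mounts/HomeNAS_Backup"   # example mount name

# Only proceed if something is really mounted at the target path.
if mountpoint -q "$MOUNT_DIR"; then
    echo "Share is mounted - safe to write the backup."
else
    echo "ERROR: $MOUNT_DIR is not a mount point - aborting backup." >&2
    exit 1
fi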

@CAHutchins

CAHutchins commented Jul 6, 2023

I'm having the same problem. I'm not able to locate the mount point /data/mounts/backup anywhere.
I suspect that after a power outage, the backup ran without the network folder mounted and wrote a file to the folder.
Now I can't find that folder to delete the file.
I'm running HA OS on a RPi4. I don't see a supervisor folder under mnt or anywhere else.

@asayler

asayler commented Jul 6, 2023

@CAHutchins what kind of HA setup are you running? If you're using HAOS, you need to access the mount point from the underlying host itself, at the path in my previous post.

@CAHutchins

CAHutchins commented Jul 6, 2023

@asayler , I'm running on a Raspberry Pi. I just found the command a few minutes ago to access the supervisor file system.
"docker exec -it $(docker ps -f name=supervisor -q) bash"
Once there, the path /data/mounts/ is correct and there was a .tar file in there.
I was able to empty the backup folder as well as clean up some other folders that weren't used anymore.

As you mentioned above, HA needs to prevent writing if the network path is not mounted.
Would it help to set the mountpoint folder to read-only?

@dedors

dedors commented Jul 7, 2023

@asayler , I'm running on a Raspberry Pi. I just found the command a few minutes ago to access the supervisor file system. "docker exec -it $(docker ps -f name=supervisor -q) bash" Once there, the path /data/mounts/ is correct and there was a .tar file in there. I was able to empty the backup folder as well as clean up some other folders that weren't used anymore.

As you mentioned above, HA needs to prevent writing if the network path is not mounted. Would it help to set the mountpoint folder to read-only?

Thanks, this solved my issue (for now)

@Rodney-Smith

Rodney-Smith commented Aug 5, 2023

I am experiencing the same issue on HAOS 10.4, Core 2023.8.1, Supervisor 2023.07.1 after a power outage. I can't get to the /data/mounts folder to troubleshoot further.
As a workaround I had to create a new backup storage location with a different name.

@Mouseskowitz

@Rodney-Smith I never figured out how to get to the mount point, but a workaround I found is to remove the external storage and re-add it. Same idea as adding it under a new name, but then you don't have a stack of broken stuff.

@MattCheetham

Having the same issue. Home Assistant says my drive has failed to connect; if you press reload it says the issue is fixed, but then it shortly throws the error again. Attempting to press reload then throws the error mentioned above. It was working fine previously.

@VinzNL

VinzNL commented Aug 22, 2023

Same issue here. A combination of the Advanced SSH & Web Terminal Add-on running unprotected and the docker exec command above to get into the supervisor container allowed me to remove the offending back-ups by hand. Obviously it's easy to destroy stuff this way if you're not absolutely sure what you're doing, so proceed with caution.

@bart1970

For me, this problem started with one of the 2023.8.x releases. Not sure which one.

@Lolcraftspace

Lolcraftspace commented Sep 5, 2023

I am having the same problem with the new release. My last backup was on 01/09/2023

@chriszuercher

I face the same issue because my NAS was offline during a backup and the file seems to have been written to the folder anyway. Is there no way to unmount and delete the folder on HAOS?

@adrianoamalfi

same issue

@sigalou

sigalou commented Sep 17, 2023

Same problem

@davidmankin

And while we're talking about this, the error message could be more clear. It was really confusing whether it is the mount point that is not empty or the samba share that is not empty. The discussion here makes it clear it's the mount point that's not empty, but I almost tried creating a new samba share on my NAS to fix it.

@Instigater

The issue manifested for me as well. I mount my Windows workstation to do backups. The workstation isn't permanently running, so the backup location isn't always available. Normally it remounts OK, but today I got this dreaded "not empty" error! Please make the backup location auto-retry mounting, and also give us the ability to at least clean up. This is a Home Assistant Yellow - no sane procedure is available to clean up manually.

@Dieter-GitHub

Have the same problem using NFS and Synology as backup utility ...

@Laho812
Author

Laho812 commented Oct 9, 2023

And while we're talking about this, the error message could be more clear. It was really confusing whether it is the mount point that is not empty or the samba share that is not empty. The discussion here makes it clear it's the mount point that's not empty, but I almost tried creating a new samba share on my NAS to fix it.

Yeah, that's true. Using a different samba share but giving it the same "Name" (which becomes the mount point) causes the same error message.
I added it to the description.

@AoSpades

Any news yet? I have this problem too.

@derkrasseleo

Same problem here with Samba Backup to Synology Diskstation

@Kjeldgaard

Same issue on 2024.1.2. My NFS mount was working fine before. I can't mount it now, with the same error; however, trying to cd /data/mounts/... fails for me because there is no mounts directory in /data, yet I still get this error.

Did you run the docker command?

@awilliams84

Perform the steps below at your own risk! (A combined command sketch follows the list.)

So basically to fix this:

  1. Install, configure, and start "Advanced SSH & Web Terminal". I think you have to disable protected mode.
  2. Enter "docker exec -it $(docker ps -f name=supervisor -q) bash"
  3. Navigate to "/data/mounts/YOUR_REMOTE_BACKUP_NAME"
  4. Delete all .tar files in the directory
  5. Exit docker and terminal
  6. Remount network storage from UI

This worked! Thank you!
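
Put together, the steps above look roughly like this (a sketch assuming the mount is named HomeNAS_Backup - adjust the name, and double-check what you are deleting):

# From the add-on terminal, enter the supervisor container:
docker exec -it $(docker ps -f name=supervisor -q) bash

# Inside the container, inspect and then remove the stranded backups:
cd /data/mounts/HomeNAS_Backup
ls -la            # confirm only stray backup files are present
rm -v ./*.tar     # this permanently deletes those files
exit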

@IvovanWilligen

And again a freshly added backup location gets this error. Very disappointing that no core developer seems to pick up this bug report...

@pfurrie

pfurrie commented Jan 22, 2024

@IvovanWilligen I agree.

Same problem here. QNAP NAS, running HA Green. I had added a network share for backup about a week ago and had a few good backups. Then I had to bring down the power in the house to do rewiring. I shut down the NAS gracefully, and HA came down shortly after when the house power was cut. When it came back online, I get the dreaded "Cannot mount HABackups at /data/mounts/HABackups because it is not empty."


Why does the backup routine even care about that location being empty (or not)?

@spants

spants commented Jan 26, 2024

I am having the same problem with CIFS. I think if the network fails, the backup will be written to the mount point instead of the cifs drive. Subsequently, the drive cannot be mounted as the directory is not empty....

@davidmankin

davidmankin commented Jan 26, 2024 via email

@jmealo

jmealo commented Jan 30, 2024

Architecture discussion on a proposed fix for the Network Storage issues:
home-assistant/architecture#1033


There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates.
Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by adding a comment 👍
This issue has now been marked as stale and will be closed if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale label Feb 29, 2024
@pfurrie

pfurrie commented Mar 1, 2024

Thanks. I made sure my Home Assistant instance was fully updated, then tried (again) to set up network storage for backups and got the same error.

So, I have the latest version, and no, it didn't resolve the issue.

@github-actions github-actions bot removed the stale label Mar 1, 2024
@MSavisoft

I suddenly realised there had been no backups for about 2 months, and found I had this 'not empty' problem. Spent some time trying to diagnose it... found this thread. Tried the fixes here, but failed to install the Advanced Terminal add-on as I could not understand how to generate an SSH key.

Then tried adding the share again with a different name. Changed backups to that, and all seemed OK.

Today I noticed the network storage share had vanished!
Managed to add another, which seems OK.

Suggestions:

  1. Fix the 'not empty' problem, and add a notification if a backup fails.
  2. Messages like 'Failed to call /mounts - Mounting HomeAssist3 did not succeed. Check host logs for errors from mount or systemd unit mnt-data-supervisor-mounts-HomeAssist3.mount for details' need improving. It meant little to me - I tried systemd in the Terminal, but it just said 'command not found'.
  3. The Backups list only shows size, location and created date if the window (app or website) is VERY wide. I only found them by accident!
  4. The authors should improve the instructions for those not versed in Linux, e.g. in https://github.com/hassio-addons/addon-ssh/tree/main for the ssh authorized_keys setup. I gave up!

Just observations from a newish HA user!

@pfurrie

pfurrie commented Mar 9, 2024

Is anybody with Home Assistant following this thread?

Not having an off-server backup is bad. This problem isn't going away.

@Haraldvdh

The Frigate share has been working for two updates now; the Home Assistant backup share is still failing.

@chriszuercher

I hope this gets some developer attention. Also, #4662 relates to this and was closed without a fix?

@agners
Member

agners commented Mar 11, 2024

So there are a few too many similar/related issues around, and unfortunately it is hard to gauge what exactly the "me toos" in here refer to.

From the original post, there are actually two things here:

  1. There was an issue in detecting when network storage was down (and with protecting the mount point in that case). This is fixed with Improve error handling when mounts fail #4872.
  2. Is a symptom of 1): when the network storage was down, things got written to the location where the mount would go. When trying to mount the network storage again later, the error "Cannot mount xy at /data/mounts/xy because it is not empty" got thrown.

Now 1) is fixed with #4872. However, it might be that your particular installation still has data in that location leading to 2) 😢 . There are two possible fixes:

a) Use a new mount name (e.g. for the original poster, simply use something other than HomeNAS_Backup).
b) Manually clear the internal mount point.

Unfortunately, b) is not quite straightforward. The directory is internally managed by Supervisor and isn't exposed to the user. However, on Home Assistant OS it can be done from the system terminal. Use login to access the shell:

Move the current mount folder away

mv /mnt/data/supervisor/mounts/HomeNAS_Backup/ /mnt/data/supervisor/mounts/HomeNAS_Backup.safe

If mounting then works, you can delete that directory (make sure the data in there is really not needed anymore).

⚠️ Be careful with these commands - they delete data!

ls /mnt/data/supervisor/mounts/HomeNAS_Backup.safe
rm -r /mnt/data/supervisor/mounts/HomeNAS_Backup.safe

If, however, you see this:

mv: can't rename '/mnt/data/supervisor/mounts/HomeNAS_Backup/': Device or resource busy

then the target is actually mounted right now, and you should no longer see the "cannot be mounted" error at all.


FWIW, I have tested with the current stable version of Supervisor 2024.02.1 (which contains the above fix): I was not able to create a backup when the target host was down or when I had otherwise removed the share. When trying to create a backup, the backup dialog reported an error, along with this error message in the logs:

24-03-11 15:29:30 ERROR (MainThread) [supervisor.backups.manager] rpi3 is down, cannot back-up to it

Restarting the system correctly reported a non-mountable target as a repair in the UI.

Trying a backup again led to the same error mentioned above.

Once the server was online again, using "Reload" on the repair caused it to mount again, and I was able to take a backup.

@agners agners added the bug label Mar 11, 2024
@bouyssic

Hello @agners,

Thanks for the answer, and I agree with you there are two issues on this thread. The fix you mention indeed seems to solve problem 1.

That being said, you mention

Now 1) is fixed with #4872. However, it might be that your particular installation still has data in that location leading to 2) 😢 . There are two possible fixes:

a) Use a new mount name (e.g. for the original poster, simply use something other than HomeNAS_Backup). b) Manually clear the internal mount point.

Unfortunately, b) is not quite straight forward. The directory is internally managed by Supervisor, and isn't exposed to the user.

Couldn't the repair in Home Assistant offer to delete the content of the mount path, to prevent people from having to Terminal into the instance and manually delete things?
Since we already have "Remove" & "Reload", we could add a "Clean-up" option that would do it automatically, since the Supervisor has access to this data.

Just a guess, not a requirement.

@agners
Member

agners commented Mar 27, 2024

Couldn't the repair in Home Assistant offer to delete the content of the mount path, to prevent people from having to Terminal into the instance and manually delete things? Since we already have "Remove" & "Reload", we could add a "Clean-up" option that would do it automatically, since the Supervisor has access to this data.

We could add a repair which fixes that. However, the problem should never have happened in the first place. Maybe we should have added a repair (or an automatic fix) while implementing #4872, but since that was already a while ago, I wonder if it is still worth the effort today 🤔

What we could do is automatically delete that internal folder when creating a mount. This way there would be an easier way out 🤔

@pfurrie

pfurrie commented Mar 27, 2024

@agners, you had mentioned "use a new mount name." I'm probably not understanding what you mean. I've tested creating a new SMB share name on our NAS (originally "HABackups" and now "TEST"), for a brand-new folder, but I get the same error.
Or maybe you mean something different.

When you wrote "When the network storage was down, things got written to a location where the mount would go," what is that other location? Is that location not self-healing? Meaning, if the network storage comes back online, this is detected and queued file transfer jobs are completed, or at least don't impede any new jobs?

Sorry for being so dense about this.

@dasfdlinux

Couldn't the repair in Home Assistant offer to delete the content of the mount path, to prevent people from having to Terminal into the instance and manually delete things? Since we already have "Remove" & "Reload", we could add a "Clean-up" option that would do it automatically, since the Supervisor has access to this data.

We could add a repair which fixes that. However, the problem should never have happened in the first place. Maybe we should have added a repair (or an automatic fix) while implementing #4872, but since that was already a while ago, I wonder if it is still worth the effort today 🤔

What we could do is automatically delete that internal folder when creating a mount. This way there would be an easier way out 🤔

I ran into this issue a while back and had to install the terminal plugin to fix it, but then promptly removed the plugin when completed. As a relatively new user I do wish there had been a fix available that was more integrated, but I would strongly recommend not "automatically deleting" any folders. If you want to warn the user that there is already data in the folder and offer to delete it for them that would be okay, but automatically deleting anything always leads to a problem.

@bouyssic

@agners, you had mentioned "use a new mount name." I'm probably not understanding what you mean. I've tested creating a new SMB share name on our NAS (originally "HABackups" and now "TEST"), for a brand-new folder, but I get the same error. Or maybe you mean something different.

When you wrote "When the network storage was down, things got written to a location where the mount would go," what is that other location? Is that location not self-healing? Meaning, if the network storage comes back online, this is detected and queued file transfer jobs are completed, or at least don't impede any new jobs?

Sorry for being so dense about this.

This is inherited from Unix filesystems, not a problem with HA specifically. On a Unix filesystem, pretty much everything is a file or a folder. When you select a mount "location" (in other words, a path to a folder), it will point to your NAS. But if the NAS is not available, the same "location" still exists and instead points to your internal storage.
While the NAS remains offline, that folder starts receiving your backup data.
Once your NAS comes back online, HA tries to mount the remote storage there, but on detecting files already present, it refuses to do so.

I hope this helps answer your questions. Don't hesitate to ask otherwise.
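
To illustrate, you can tell the two cases apart from the host shell with mountpoint (a hypothetical session; the path depends on your mount name):

# When the NAS is mounted, writes to this folder go to the NAS:
mountpoint /mnt/data/supervisor/mounts/HomeNAS_Backup
# -> ".../HomeNAS_Backup is a mountpoint"

# When it is not mounted, the same folder is just a directory on the local disk,
# so anything written there lands on internal storage:
# -> ".../HomeNAS_Backup is not a mountpoint"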

@bouyssic

I strongly agree with @dasfdlinux . Automatically dropping data without asking would be scary ^^

@pfurrie

pfurrie commented Mar 27, 2024

@agners Thanks for that info.
This seems like a backup system oversight. The NAS could be down for a lot of different reasons, so is there a way to get HA to handle this more gracefully?
A good starting point might be a more meaningful error message. I took it to mean that the NAS share wasn't empty... and I don't know where the mount location in HA is, and the error message doesn't communicate that.

@mwjones1971

@agners Thanks for that info. This seems like a backup system oversight. The NAS could be down for a lot of different reasons, so is there a way to get HA to handle this more gracefully? A good starting point might be a more meaningful error message. I took it to mean that the NAS share wasn't empty... and I don't know where the mount location in HA is, and the error message doesn't communicate that.

I've been watching this thread since the beginning, and there may be a simple (but effective) solution (I can't take the credit for the concept, a server where I work actually does this for its backups to a remote filesystem)

When creating the mount point to the remote filesystem, HA could write a zero byte file to the backup directory. The only thing relevant is that file exists and HA knows the name (might want to make the name unique to each HA instance in case the user has multiple HA instances backing up to the same remote filesystem directory).

When the backup process starts, it should look at the contents of the backup directory, specifically looking for that file. If that file doesn't exist (because say the remote filesystem is not mounted), it could attempt to mount the remote filesystem and then check again. If it fails to find the file after the second attempt, the backup fails and a repair notification is presented to the user. Otherwise the backup runs as intended.

Alas, I'm not a programmer, just an IT engineer, so I'll let those who are programmers take the idea if they want to run with it and create an appropriate PR.
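
A rough shell-level sketch of that idea (the marker name and paths are made up for illustration; this is not existing Home Assistant behaviour):

MARKER="/data/mounts/HomeNAS_Backup/.ha_backup_marker"   # hypothetical sentinel file, could embed an instance ID

# When the network storage is configured (and really mounted), drop the marker onto the share:
touch "$MARKER"

# Before every backup, require the marker to be visible:
if [ -f "$MARKER" ]; then
    echo "Marker found - the share is really mounted, proceed with the backup."
else
    echo "Marker missing - share not mounted, abort and raise a repair." >&2
    exit 1
fi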


There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates.
Please make sure to update to the latest version and check if that solves the issue. Let us know if that works for you by adding a comment 👍
This issue has now been marked as stale and will be closed if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale label Apr 26, 2024
@mwjones1971

Not stale, issue still occurs.

@github-actions github-actions bot removed the stale label Apr 30, 2024
@agners
Member

agners commented Apr 30, 2024

When creating the mount point to the remote filesystem, HA could write a zero byte file to the backup directory. The only thing relevant is that file exists and HA knows the name (might want to make the name unique to each HA instance in case the user has multiple HA instances backing up to the same remote filesystem directory).

We do a very similar thing to this: We create a read-only, protective bind mount to the target path (see https://github.com/home-assistant/supervisor/blob/2024.04.4/supervisor/mounts/manager.py#L267).

To do the mounting we use D-Bus calls to systemd. The issue here was that Supervisor did not know, or confused, the exact state of the mounts, which left the target without a protective bind mount 😢
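
Conceptually, the protection amounts to something like this (illustrative shell only; Supervisor does the equivalent via systemd over D-Bus rather than by running these commands):

TARGET=/mnt/data/supervisor/mounts/HomeNAS_Backup   # example mount path

# Bind the empty directory onto itself, then remount that bind read-only.
mount --bind "$TARGET" "$TARGET"
mount -o remount,ro,bind "$TARGET"

# Any attempt to write a backup into the unmounted path now fails immediately
# (read-only filesystem) instead of silently filling the internal disk.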

Not stale, issue still occurs.

As outlined in #4358 (comment), the underlying problem that first caused this issue has been resolved. Nowadays, a protective mount should get created. Even if a failed mount is used as a backup target, you should no longer get into this state.

The problem is that systems which still use the same mount name still have that directory with content at the target location 😢 Currently there is no automatic cleanup/fix for these systems. Can you try the fixes outlined in #4358 (comment)?
