

Initial indexing is killing my Synology NAS (11k IOPS, 250 MB/s sustained r/w transfer) - is there a way to limit this? #429

Open
PiotrEsse opened this issue Apr 23, 2023 · 4 comments

Comments


PiotrEsse commented Apr 23, 2023

Synology 920+ (20 GB RAM)
Hi, I am in general very positive toward Photonix. This project has a lot of potential, but it has one drawback, and I think it is a serious one preventing wider adoption on people's devices.
I use the Docker installation.
Even a small set of pictures (200-500, 1-2 GB in total) kills my NAS. Upon checking, I found that Photonix, during the initial scan, generates more than 11k IOPS and around 250 MB/s of sustained transfer, which brings the device down completely, to the point of being forced to hard-reboot it manually (SSH unresponsive).

Would it be possible to introduce a parameter to limit these operations?

If an IOPS/transfer limit would be hard to introduce, would it be possible to add a batch option for indexing photos, e.g. indexing only 5 photos at a time?
It is no problem for me if my collection takes a week or two to index; I am used to waiting days or weeks for computational results. Otherwise, it's impossible for me to use Photonix.
Kind regards, Piotr

@michaelknox

I’ve seen this as well. What I did was limit the resource priority of the Docker containers. At least that way it’s not running the CPU at 100% all the time.

Maybe in the future there would be a way to engage the GPU on devices like the 920+, since they have an onboard GPU. Tdarr does something like that for transcoding. Maybe that’s something to look at?
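Along the lines of the resource-priority workaround above, here is a minimal sketch of what capping a container's CPU, memory, and block I/O could look like in docker-compose. The service name, image tag, and device path are assumptions, and Compose's `blkio_config` keys require a recent Compose version and may not be honoured on every Synology kernel:

```yaml
services:
  photonix:
    image: photonixapp/photonix:latest   # illustrative image reference
    cpus: "1.5"      # allow at most 1.5 CPU cores
    mem_limit: 2g    # cap memory at 2 GB
    blkio_config:
      # Throttle IOPS on the device backing the photo volume.
      # /dev/sda is an assumption; check the real device with `lsblk`.
      device_read_iops:
        - path: /dev/sda
          rate: 500
      device_write_iops:
        - path: /dev/sda
          rate: 500
```

Even a CPU cap alone (`cpus`) usually keeps SSH responsive, since the scheduler can still preempt the container.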

@michaelknox

I’ve had to stop it on mine for now. It’s running the CPU at a constant 98%. I’ll wait a couple of versions to see if there are improvements.


gkzsolt commented Nov 6, 2023

Same problem here. I was running it in Docker containers on my ARM server (8 CPUs, 2 GB RAM) and started with 100 photos, but it was running the CPUs at 100% for hours (the server was almost unresponsive). What surprised me was that even after stopping the 3 Docker containers, there were still a lot of Python threads running Photonix tasks. I had to reboot the server.

Do you have some lighter APIs, maybe a command line, for applying a single task to a photo? I might even contribute some time to the project. I am mainly interested in the automatic tagging.

@aashishbhanawat

Yes, you can limit CPU utilisation by using a bare-minimum supervisord.conf file.
Create your own copy of supervisord.conf (you can delete the lines for the programs you don't want to run and keep only the config required to launch the UI), then modify the docker-compose.yaml file to add your supervisord.conf path in the volumes section, as below:

  - /supervisord.conf:/etc/supervisord.conf
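As a concrete illustration of the bare-minimum supervisord.conf idea, something like the following could work. The program name and command here are hypothetical; copy the real webserver entry from Photonix's shipped supervisord.conf rather than writing it from scratch:

```ini
; Hypothetical minimal supervisord.conf: keep only the web UI and
; drop the scheduler/processor programs so indexing does not start.
[supervisord]
nodaemon=true

[program:webserver]                 ; program name assumed, not verified
command=python manage.py runserver 0.0.0.0:8888
autorestart=true
```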

To run indexing via the CLI, run the command below:
docker-compose exec photonix ./manage.py

It will give you a number of options, e.g. raw_scheduler, raw_processor, classification_face_detection_processor, classification_object_processor, etc.
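If you go the CLI route, one approach (a sketch, not an official Photonix recipe) is to run a single worker at a time under `nice`/`ionice` so it yields CPU and disk to the rest of the NAS. The management-command name comes from the list above, but verify it against `./manage.py` output for your version:

```shell
# Run one Photonix worker at the lowest CPU and I/O priority.
# ionice -c 3 (idle class) needs a suitable I/O scheduler and may
# require extra container privileges on Synology.
docker-compose exec photonix \
  nice -n 19 ionice -c 3 ./manage.py raw_scheduler
```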
