
Check if we can get rid of queue/lock/trigger on some platforms #166

Open
DerDakon opened this issue Jul 5, 2020 · 8 comments

Comments

@DerDakon
Member

DerDakon commented Jul 5, 2020

qmail-queue uses the pipe in queue/lock/trigger to signal qmail-send that a new message is ready for processing. This can be avoided on newer platforms, where one could use e.g. inotify on Linux to watch the todo directory for new files. This would get the notification automatically when new files are created, and avoids scanning the directory afterwards, as the event would already return the filename of the new file.

@mbhangui
Contributor

mbhangui commented Jul 6, 2020

I have tried something like this and ran out of limits on a busy system. We would need to add a note on setting the limit in /etc/sysctl.conf; IIRC fs.inotify.max_user_watches was the parameter I had to modify. I never did investigate how I ran out of this limit, or whether something other than my application was using inotify.

EDIT: This was the program, and it was inotify_add_watch() that returned an error with errno set to ENOSPC:
https://github.com/mbhangui/indimail-mta/blob/master/indimail-mta-x/inotify.c

@leahneukirchen
Contributor

Shouldn't 1 watch be enough to look at the directory contents?

@DerDakon
Member Author

DerDakon commented Jul 6, 2020

A normal user may just get EMFILE when calling inotify_init(), but as qmail-send runs as root it is not affected by this.

@mbhangui
Contributor

mbhangui commented Jul 6, 2020

> A normal user may just get EMFILE when calling inotify_init(), but as qmail-send runs as root it is not affected by this.

Yes, that could have been the reason. My program was running under tcpserver as non-root. It was a service that kept updating clients (via tcpclient) with the list of files being changed on the system.

@mbhangui
Contributor

mbhangui commented Jul 6, 2020

> Shouldn't 1 watch be enough to look at the directory contents?

Yes, you just need one inotify_add_watch() for a directory. But I think what happens is that when an event is generated, one is supposed to read the event to clear it; these events stay queued until you read them. The problem I faced was on an extremely busy system: it was one directory where users kept pushing video files, and my application was supposed to read the events and transfer the files to another system. I guess due to heavy I/O, my application wasn't able to process the events fast enough.

@leahneukirchen
Contributor

You get an IN_Q_OVERFLOW event in this case. I don't see how to get EMFILE.

@mbhangui
Contributor

mbhangui commented Jul 6, 2020

read(2) could return IN_Q_OVERFLOW. I never had a read failure; what was failing for me was inotify_add_watch(). The inotify_add_watch() was being called by around 5 devices, so it wasn't as if hundreds of inotify_add_watch() calls were being made, and it was just one directory. From the man page, one cannot clearly distinguish whether the error was due to the inotify watch limit or because the kernel failed to allocate a needed resource (it is vague on what resource).

Let me see if I can simulate the same by running 5 instances of the above program, commenting out the read call and writing a script to create thousands of files.

       ENOMEM Insufficient kernel memory was available.

       ENOSPC The user limit on the total number of inotify watches was reached
              or the kernel failed to allocate a needed resource.

@mbhangui
Contributor

mbhangui commented Jul 6, 2020

Came across this while looking into how one can run out of inotify resources. Apparently one can set up a trace to find out who is using and consuming inotify resources:

https://unix.stackexchange.com/questions/15509/whos-consuming-my-inotify-resources
