
File descriptors leak to xdg-open #816

Open
setharnold opened this issue Dec 6, 2023 · 2 comments

@setharnold

Hello, I'm writing an AppArmor profile for matterhorn and I believe that it isn't correctly closing file descriptors before calling the xdg-open helper program:

type=AVC msg=audit(...): apparmor="DENIED" operation="file_inherit" profile="matterhorn//opened-by-matterhorn" name="/tmp/matterhorn-subprocess372785-0.log" pid=380450 comm="xdg-open" requested_mask="wr" denied_mask="wr" fsuid=1000 ouid=1000
type=AVC msg=audit(...): apparmor="DENIED" operation="file_inherit" profile="matterhorn//opened-by-matterhorn" pid=380450 comm="xdg-open" laddr=10.172.64.71 lport=39880 faddr=185.125.189.99 fport=443 family="inet" sock_type="stream" protocol=6 requested_mask="send receive" denied_mask="send receive"
type=AVC msg=audit(...): apparmor="DENIED" operation="file_inherit" profile="matterhorn//opened-by-matterhorn" pid=380450 comm="xdg-open" laddr=10.172.64.71 lport=38944 faddr=185.125.189.100 fport=443 family="inet" sock_type="stream" protocol=6 requested_mask="send receive" denied_mask="send receive"

The first line is the subprocess log file. This is probably fine and intentional, but I thought I'd mention it just in case it isn't.

The next two lines show sockets:

10.172.64.71:39880 connected with 185.125.189.99:443
10.172.64.71:38944 connected with 185.125.189.100:443

These are probably connections to my company's Mattermost server. These sockets should not be shared outside the matterhorn process -- what happens if one of the child processes starts writing to them, or reading from them?

The best way to address this is to ensure the close-on-exec flag is set after creating the socket but before connecting to the server; see FD_CLOEXEC in the fcntl(2) man page.
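
For illustration only, here is a minimal generic POSIX C sketch of that idea -- not matterhorn's actual code, and the function name is hypothetical. It marks a client socket close-on-exec either atomically at creation time (SOCK_CLOEXEC is a Linux/BSD extension) or with fcntl() before connect():

    /* Sketch: mark a client socket close-on-exec before connecting.
       Generic POSIX C for illustration; not matterhorn's code. */
    #include <fcntl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int connect_cloexec(const struct sockaddr *addr, socklen_t addrlen)
    {
        /* Preferred: request close-on-exec atomically at creation time. */
        int fd = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0);
        if (fd < 0)
            return -1;

        /* Portable fallback: set FD_CLOEXEC explicitly before connect(). */
        int flags = fcntl(fd, F_GETFD);
        if (flags < 0 || fcntl(fd, F_SETFD, flags | FD_CLOEXEC) < 0) {
            close(fd);
            return -1;
        }

        if (connect(fd, addr, addrlen) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }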

It can also be addressed by closing all the file descriptors that shouldn't be available after the fork() and before the exec() call. This is often hard to do correctly, which is why I recommend the close-on-exec flag instead.
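
Likewise, a rough sketch of the fork()/exec() cleanup approach -- again generic C for illustration; the function name, the 1024 upper bound, and the argv are assumptions, not matterhorn's code:

    /* Sketch: close unwanted descriptors between fork() and exec().
       Keeps stdin/stdout/stderr (0-2) and closes the rest. The 1024 upper
       bound is an assumption; real code should use sysconf(_SC_OPEN_MAX),
       or close_range()/closefrom() where available. */
    #include <sys/types.h>
    #include <unistd.h>

    void spawn_xdg_open(const char *url)
    {
        pid_t pid = fork();
        if (pid == 0) {
            for (int fd = 3; fd < 1024; fd++)
                close(fd);
            execlp("xdg-open", "xdg-open", url, (char *)NULL);
            _exit(127);   /* only reached if exec failed */
        }
        /* Parent continues; reap the child with waitpid() as appropriate. */
    }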

Thanks

@jtdaugherty
Member

Interesting, thanks for the report. I looked into this a little bit, and we don't currently have control over low-level flags on the server sockets because they're handled by another package that doesn't expose them via its interface. It seems to me that the only way forward for us is to open a pull request or feature request with that package's maintainer to get this addressed.

As for the two socket lines you posted, it's interesting to me that there are two different IPv4 addresses. I don't think it's safe to assume they're both your server, because Matterhorn won't connect to more than one address for ordinary Mattermost protocol purposes: it makes one connection to the server to establish a websocket, and then numerous subsequent connections for API requests.

The subprocess log leak can probably be fixed since we have more control over that.

@setharnold
Author

How unfortunate that it's not easily exposed already. Hopefully the package maintainer will be amenable to a proposal. If not, you can always break the abstractions a bit and do the moral equivalent of:

for i in 0 .. 1024:
   close(i)

before opening any file descriptors or pipes that you do want open. (Yeah, it'd be best if the API just supported this by default.)

About the IPv4 addresses, it's entirely possible that could be a side-effect of us deploying Mattermost in Kubernetes. I can't claim to know much beyond that, so I don't know if it's multiple processes on multiple systems that are coordinating the work or if these are just hitting several load balancers that direct all the traffic to the same machine. In any event, it's just the one team.

Thanks
