Files piling up in _temporary_compressed_files folder #655

Open
brianberns opened this issue Sep 18, 2017 · 8 comments


@brianberns

I'm using Suave.io as a web server in a shared hosting environment. I've noticed that files created in the _temporary_compressed_files folder are never getting deleted. The documentation says "Suave deletes these files once they are served, so their lifetime is usually less than a second," but this is not occurring and I cannot see anywhere in the Suave code where this is implemented. What can I do to make sure these files get deleted in a timely fashion?

@jimfoye
Contributor

jimfoye commented Sep 22, 2017

It seems to me that there are probably three points where files should be cleaned up:

  1. On startup (previous session could have terminated abnormally and left files behind)
  2. On shutdown
  3. When the compressed files folder exceeds a certain size. I did a quick search and found a blog post about how IIS handles compressed files, and it says that IIS does this.

Number 1 is easy: just add code similar to the following to startWebServerAsync:

if Directory.Exists compressionFolder then
  // It's possible that a cleanup of the compressed files folder from a previous
  // session is needed. Get the list of files that are there now; these should be
  // safe to delete. Then delete them asynchronously.
  let files = Directory.GetFiles compressionFolder
  if files.Length > 0 then
    async {
      files
      |> Array.iter (fun file -> 
        // shouldn't fail, but if it does just ignore
        try
          File.Delete file
        with
        | _ -> ())
    } |> Async.Start

For number 2, add similar code to shutdown (it doesn't need to be async, obviously), but I'm not exactly sure where the best place to add it is.
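
For reference, a minimal sketch of what that synchronous variant might look like (assuming the same compressionFolder value as above; none of this exists in Suave today):

open System.IO

let cleanCompressedFilesFolder (compressionFolder: string) =
  if Directory.Exists compressionFolder then
    Directory.GetFiles compressionFolder
    |> Array.iter (fun file ->
      // a file may already be gone or still locked; just skip it
      try File.Delete file with _ -> ())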

Number 3 strikes me as a tricky problem. It's easy enough to monitor the size and determine which old files should be deleted, but how can we be sure it's safe to delete them? Unless maybe Suave starts keeping track of the compressed files being served in a queue somewhere, which could be checked to make sure a file is no longer being served and can be safely deleted?
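
A rough sketch of the "which files are over budget" part might look like this (filesOverBudget is a hypothetical helper, not anything in Suave, and it deliberately says nothing about whether a file is still being served):

open System.IO

// Hypothetical helper: pick the oldest files whose removal would bring the
// folder back under maxBytes. It does not check whether they are in use.
let filesOverBudget (folder: string) (maxBytes: int64) : FileInfo list =
  let files =
    Directory.GetFiles folder
    |> Array.map (fun path -> FileInfo path)
    |> Array.sortBy (fun fi -> fi.LastWriteTimeUtc)   // oldest first
  let totalSize = files |> Array.sumBy (fun fi -> fi.Length)
  // walk oldest-first, collecting files until the remaining size fits the budget
  let rec pick remaining acc rest =
    match rest with
    | fi :: tail when remaining > maxBytes ->
        pick (remaining - fi.Length) (fi :: acc) tail
    | _ -> List.rev acc
  pick totalSize [] (List.ofArray files)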

@haf
Contributor

haf commented Sep 22, 2017

  1. Ensure Suave uses a write lock on its files; then you can't delete in-use files.
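
On Windows that roughly means opening the compressed file for serving with a restrictive share mode, something like the following (illustrative only, not what Suave currently does; as noted further down, the sharing semantics differ on Linux):

open System.IO

// Other handles may still read the file, but nobody can write to it or delete
// it until this stream is disposed (Windows sharing semantics).
let openForServing (path: string) : FileStream =
  new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read)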

@haf
Contributor

haf commented Sep 22, 2017

  1. You can have a look at my code for the FileTarget in Logary for inspiration on cleaning up log files.

@jimfoye
Contributor

jimfoye commented Sep 22, 2017

@haf Actually even the FileShare.Read that transformStream() uses is sufficient to prevent deletion (I tested just to make sure). So I think that removes some of the trickiness of number 3. Also it means I can safely try to delete the old uncompressed file when a new one is created.
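
In other words, a cleanup pass can simply attempt the delete and treat an IOException as "still in use" (a sketch of that idea, relying on Windows sharing semantics):

open System.IO

// Attempt to delete a file; if it is still open (e.g. being served), File.Delete
// throws an IOException, so catch it and leave the file for a later pass.
let tryDelete (path: string) : bool =
  try
    File.Delete path
    true
  with :? IOException ->
    false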

I searched the Logary repo for "FileTarget" but didn't find anything :)

I don't think there is any real place where "shutdown" type code executes in Suave. The closest I could see would be Tcp.runServer(), but that would apply only if it's shut down via the cancellation token, and in any event this code doesn't really belong there.

But, I think it's sufficient that the server will clean up the next time it starts, and a developer could easily add a few lines to his own app to clean this up after stopping the server, if he thinks it's really needed. Assuming you agree, I can modify the PR and that would leave number 3, managing the size of the folder.

Any suggestions on that? Just fire off a timer and check once a minute? Add a size limit to the configuration, and what's the best default, etc.?
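
One possible shape for that, assuming the hypothetical filesOverBudget and tryDelete helpers sketched earlier in this thread, and leaving the interval and size limit to configuration:

open System
open System.IO
open System.Threading

// Hypothetical monitor: on every tick, delete the oldest files over the size
// budget, skipping any that are still in use. Stops when the token is cancelled.
let startCompressionFolderMonitor (folder: string) (maxBytes: int64)
                                  (interval: TimeSpan) (ct: CancellationToken) =
  let rec loop () = async {
    if Directory.Exists folder then
      filesOverBudget folder maxBytes
      |> List.iter (fun fi -> tryDelete fi.FullName |> ignore)
    do! Async.Sleep (int interval.TotalMilliseconds)
    return! loop ()
  }
  Async.Start (loop (), ct)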

@haf
Contributor

haf commented Sep 25, 2017

The file target is here https://github.com/logary/logary/blob/master/src/Logary/Targets_Core.fs#L816

General guidelines:

  • functions, functions, functions. If your function is bigger than two lines of code, something's off
  • compose functions together rather than writing it all in a single function
  • imagine that your functions should be callable on their own without too much context, so that you keep your computations small and to the point
  • Like here, where I've separated the deleter function from the loop that checks the file: https://github.com/logary/logary/blob/master/src/Logary/Targets_Core.fs#L1095

I generally don't like timers, but in this case they are probably necessary.

  • You need to check that the Read lock also locks in the same manner between Windows/Linux, since they are known to be different in that regard.

@brianberns
Author

Just wondering if there's been any progress on this? Seems like this could be a real problem in a production environment. I just checked and have half a gig of dead temp files on a very low-traffic website.

@haf
Contributor

haf commented Jan 13, 2018

@brianberns I don't think anyone has taken a stab at this; no.

@nightroman

nightroman commented May 6, 2018

I was playing with the WebSocket example (netcoreapp2.0) and noticed some unfortunate effects.

I run the example with

dotnet bin\Debug\netcoreapp2.0\WebSocket.dll

It looks like the .NET SDK uses Suave.dll right from the package cache directory, namely from

~\.nuget\packages\suave\2.4.0\lib\netstandard2.0

And it looks like Suave's _temporary_compressed_files folder ends up there by default.

As a result, after a few runs of the example I find several compressed files right in the package directory:

~\.nuget\packages\suave\2.4.0\lib\netstandard2.0\_temporary_compressed_files
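
If the folder really does default to a path next to Suave.dll itself, one workaround (a sketch, assuming the compressedFilesFolder option in SuaveConfig is the setting that controls it) is to point it at an explicitly chosen folder:

open System.IO
open Suave

let config =
  { defaultConfig with
      // keep the compressed temp files next to the app instead of the package cache
      compressedFilesFolder =
        Some (Path.Combine(Directory.GetCurrentDirectory(), "_temporary_compressed_files")) }

startWebServer config (Successful.OK "Hello")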
