
Refinery::Resource not removed from cache on destroy #3520

Open

evenreven opened this issue Nov 1, 2022 · 2 comments

@evenreven
Contributor

I'm unsure if this question belongs here or in Dragonfly upstream, but I'll try here.

I've uploaded hundreds of PDFs since I first created my Refinery app in 2015 (on 2.1.5 at first, later upgraded to 4.0.3), and I've noticed this issue on and off. When I delete a Refinery::Resource in the CMS panel (say, to upload a new version of a document), the file is deleted from the filestore (I just use a local file store). However, the old direct link /system/resources/base64reallylongstring/old-document.pdf still works, so the file is still being served from the Dragonfly cache.

Needless to say, I deleted the old document for a reason, and I would really like the link to disappear from the internet (people could forward a link to the old document by email, for instance). I'd also like to free up the space without waiting for a Redis LFU eviction. I don't know much about the low-level innards of the Rails cache handling, but shouldn't it invalidate the key when the record is destroyed?

My site is extremely slow without caching due to some legacy architectural issues, so it's not an option to flush the entire cache when I delete a document.

Three questions:

  1. Is this a bug?
  2. If this behaviour is intentional, is there a good way to find the cache key that matches the deleted Refinery::Resource? Maybe use the model id or something (file_uid?) to look it up and then expire that specific key? (See the sketch below for the direction I mean.)
  3. Why does destroying the model (doing it from the CMS or the Rails console yields identical results) leave a .meta file behind?

Thanks in advance! If this is upstream behaviour, feel free to tell me, and I'll file an issue there instead.
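
To be concrete about (2), this is roughly the direction I'm imagining. Untested, and both the decorator and the assumption that the URL doubles as the cache key are mine:

# app/decorators/models/refinery/resource_decorator.rb
# Untested sketch: read the Dragonfly URL while the record still exists,
# then try to evict the matching cache entry on destroy. Assumes the
# cache keys entries by request path; adjust for whatever store
# (Rack::Cache, Redis, ...) actually serves /system/... in your app.
Refinery::Resource.class_eval do
  before_destroy :purge_cached_url

  private

  def purge_cached_url
    return if file_uid.blank?
    url = file.url # e.g. "/system/resources/<base64 uid>/old-document.pdf"
    Rails.cache.delete(url) # a no-op unless the URL really is the key
  end
end

If the entries actually live in Rack::Cache rather than Rails.cache, the delete would need to target its metastore instead; I haven't dug into which one Refinery wires up.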

@Matho
Contributor

Matho commented Nov 5, 2022

Hi @evenreven

Unfortunately, I don't know how to help you with the cache-invalidation issue itself.

But if your site is slow, you can use the Nginx cache for serving Dragonfly resources. That means resources will be served from the Nginx cache (by the Nginx process) instead of by a Ruby process.

I have attached my nginx config. Check the proxy_cache_path directive.

upstream app {
  server unix:/app/puma.sock fail_timeout=0;
}

proxy_cache_path /app/shared/nginx/cache/dragonfly levels=2:2 keys_zone=dragonfly:100m inactive=30d max_size=1g;
server {
  listen 80 default_server;
  root /app/public;

  location ^~ /assets/ {
    gzip_static on;
    expires max;
    add_header Cache-Control public;
    add_header Vary Accept-Encoding;
  }
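
  # Note: the "dragonfly" keys_zone above only takes effect once a location
  # references it via proxy_cache. A sketch for Refinery's /system/... URLs
  # (the path prefix and the 30d validity are assumptions):
  location ^~ /system/ {
    proxy_cache        dragonfly;
    proxy_cache_valid  200 30d;
    proxy_set_header   Host $http_host;
    proxy_pass         http://app;
  }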

  try_files $uri/index.html $uri $uri.html @app;
  location @app {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_pass_request_headers      on;

    proxy_redirect off;
    proxy_pass http://app;

    proxy_connect_timeout       1800;
    proxy_send_timeout          1800;
    proxy_read_timeout          1800;
    send_timeout                1800;

    proxy_buffer_size   128k;
    proxy_buffers   4 256k;
    proxy_busy_buffers_size   256k;

    gzip             on;
    gzip_min_length  1000;
    gzip_proxied     expired no-cache no-store private auth;
    gzip_types       text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_disable     "MSIE [1-6]\.";
  }

  error_page 500 502 503 504 /500.html;
  client_max_body_size 4G;

  client_body_timeout 12;
  client_header_timeout 12;
  keepalive_timeout 20;
  send_timeout 10;

  client_body_buffer_size 10K;
  client_header_buffer_size 1k;
  large_client_header_buffers 4 32k;

  server_tokens off;
}

@evenreven
Contributor Author

Thanks, that's an interesting config for Dragonfly. But caching is not the part of my site that's slow; the main problem is N+1 queries, and with a warm cache the site is actually quite fast.

The problem is that orphaned cache entries are still served from the cache even though their owner was deleted with the destroy action. I don't even know how to find such an entry to delete it manually from Redis (where I assume it lives, though for all I know it could be a static file somewhere).

If anything, the app is fast enough that I'd be fine with not caching file resources at all (images do need caching, though, with all the ImageMagick processing involved).

I could try to remove the cache directive from the config (the action dispatch flag) and rely on fragment caching for the processed images. That feels wrong, though.
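
For the record, the flag I mean is something like this (from memory, so treat it as a sketch rather than a verified config; the application module name is illustrative):

# config/application.rb, the "action dispatch flag" mentioned above
module MyApp
  class Application < Rails::Application
    # Setting this to false disables the shared Rack::Cache layer entirely,
    # so deleted resources can no longer be served from it, at the cost of
    # every Dragonfly request hitting Ruby again.
    config.action_dispatch.rack_cache = false
  end
end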
