CloudFront Invalidator #3

Open · TheMattSchiller opened this issue Oct 4, 2017 · 6 comments

TheMattSchiller commented Oct 4, 2017

This project changes the game for munki! Thank you for making it.

When I implemented this in our testing environment it worked fantastically; however, subsequent changes to the munki repo in S3 were not pushed to CloudFront. The fix is to implement a CloudFront invalidator: a very small amount of code that can run as a Lambda function, triggered by S3 puts to the munki repo, which invalidates the associated CloudFront object for each file added to S3.

link to lambda function
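
For illustration, a minimal sketch of what such a Lambda handler might look like (assuming an s3:ObjectCreated:* trigger; the CLOUDFRONT_DISTRIBUTION_ID environment variable is a placeholder, not part of this project):

```python
import os
import time
from urllib.parse import unquote_plus

import boto3

cloudfront = boto3.client("cloudfront")

# Placeholder: the distribution ID is assumed to be supplied as an
# environment variable on the Lambda function.
DISTRIBUTION_ID = os.environ["CLOUDFRONT_DISTRIBUTION_ID"]


def lambda_handler(event, context):
    """Invalidate the CloudFront path for every object in an S3 put event."""
    paths = [
        # S3 event keys are URL-encoded, so decode before building the path.
        "/" + unquote_plus(record["s3"]["object"]["key"])
        for record in event.get("Records", [])
    ]
    if not paths:
        return {"invalidated": []}

    cloudfront.create_invalidation(
        DistributionId=DISTRIBUTION_ID,
        InvalidationBatch={
            "Paths": {"Quantity": len(paths), "Items": paths},
            # CallerReference must be unique per invalidation request.
            "CallerReference": str(time.time()),
        },
    )
    return {"invalidated": paths}
```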

If you would like, I can write instructions on how to include this, which you could add to the readme. I feel it is a necessary addition, as it is much too easy for CloudFront to serve stale content. For example, this issue was revealed to us when we pushed out a pkginfo with an error, and our subsequent changes to remedy the pkginfo file would make it to S3 but not to CloudFront or the clients.

Thank you again!

clburlison (Contributor) commented Oct 4, 2017

You should not invalidate the cache! That costs money and is very heavy-handed for the CloudFront CDN. You should instead look into adding time-to-live values (referred to in some docs as Cache-Control headers). For example, /manifests and /catalogs live for 2 minutes in my CloudFront distributions, while /pkgs, /pkginfo, etc. have a 24-hour time to live.

This is very easy to achieve in the CloudFront web interface. I think it is under the “behaviors” tab.
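
(For anyone who wants to sanity-check what the Behaviors tab actually sets, the per-path TTLs can be read back with boto3; the distribution ID below is a placeholder.)

```python
import boto3

cloudfront = boto3.client("cloudfront")
DIST_ID = "E1234567890ABC"  # placeholder distribution ID

config = cloudfront.get_distribution_config(Id=DIST_ID)["DistributionConfig"]
for behavior in config.get("CacheBehaviors", {}).get("Items", []):
    # Each path-pattern behavior carries its own TTL settings.
    print(
        behavior["PathPattern"],
        behavior.get("MinTTL"),
        behavior.get("DefaultTTL"),
        behavior.get("MaxTTL"),
    )
```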

DanLockcuff commented Oct 5, 2017 via email

clburlison (Contributor) commented Oct 5, 2017

So you likely aren't hitting the 1,000-invalidations-a-month limit, after which you pay. With that said, I highly recommend both of y'all read the following: https://www.cloudvps.com/helpcenter/knowledgebase/content-delivery-network-cdn/cdn-cache-control-and-invalidations

Yes, setting invalidations will work. No, you shouldn't be using them to “refresh” content; that is not how CDNs are designed. Munki already versions big files. Take an Office update, for example: that is a ~1.6 GB file if you're using the SKU-less installer. When you invalidate, you remove it from the edge servers, and the next update will require a pull from S3 to re-cache it on the CloudFront edge. Doing this multiple times a day is very inefficient.

Also, CloudFront makes it very easy to set the Cache-Control behavior for entire directory paths using the web interface. You can also script this with something like aws sync if that is more your fancy (you could even do both).
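
For the scripted route, a rough sketch of stamping Cache-Control metadata onto existing objects by prefix with boto3 might look like this (the bucket name and max-age values are placeholders, roughly mirroring the 2-minute / 24-hour split above):

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "my-munki-repo"  # placeholder bucket name
CACHE_RULES = {
    "manifests/": "max-age=120",
    "catalogs/": "max-age=120",
    "icons/": "max-age=86400",
    "pkginfo/": "max-age=86400",
    "pkgs/": "max-age=86400",
}


def apply_cache_control():
    """Rewrite each object's metadata so it carries the desired Cache-Control."""
    paginator = s3.get_paginator("list_objects_v2")
    for prefix, cache_control in CACHE_RULES.items():
        for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
            for obj in page.get("Contents", []):
                # An in-place copy with MetadataDirective=REPLACE is the
                # standard way to change metadata on an existing S3 object.
                s3.copy_object(
                    Bucket=BUCKET,
                    Key=obj["Key"],
                    CopySource={"Bucket": BUCKET, "Key": obj["Key"]},
                    CacheControl=cache_control,
                    MetadataDirective="REPLACE",
                )


if __name__ == "__main__":
    apply_cache_control()
```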

erikng commented Oct 5, 2017 via email

AaronBurchfield (Owner) commented
Thanks @TheMattSchiller for bringing this up. This behavior may not be obvious to those new to CloudFront, so I'll probably update the readme to include a note about it for future readers.

Personally, I use this script to wrap the AWS command-line tools and specify a desired Cache-Control value for each munki repo subdirectory while syncing to S3.
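
(A minimal sketch of that kind of wrapper, not the actual script referenced above; the repo path, bucket, and max-age values are placeholders:)

```python
import subprocess

REPO_ROOT = "/Users/Shared/munki_repo"  # placeholder local repo path
BUCKET = "s3://my-munki-repo"           # placeholder bucket
CACHE_CONTROL = {
    "manifests": "max-age=120",
    "catalogs": "max-age=120",
    "icons": "max-age=86400",
    "pkginfo": "max-age=86400",
    "pkgs": "max-age=86400",
}

for subdir, cache_control in CACHE_CONTROL.items():
    # aws s3 sync applies --cache-control as object metadata at upload time.
    subprocess.run(
        [
            "aws", "s3", "sync",
            f"{REPO_ROOT}/{subdir}", f"{BUCKET}/{subdir}",
            "--cache-control", cache_control,
        ],
        check=True,
    )
```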

Thanks @clburlison for providing a concise explanation for using cache control over invalidations.

I'll leave this open as a reminder to myself to update the documentation.

erikng commented Oct 5, 2017 via email
