Bypass 16kb userdata limit by uploading to s3 bucket #255
A crude fix for #252.
This is a best-effort rebase+squash of our internal branch on top of master. It seems to solve this unexpected blocker for us, but the code is brittle and moderately ugly, though not much more so than the existing code in that area :) I have no capacity to refine it further. It breaks nodepools (another reason to get rid of them!).
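For context, the general shape of this workaround (a sketch of the idea, not necessarily what this PR does line-for-line) is to upload the rendered userdata to S3 and hand EC2 a small stub that fetches and runs it at boot. A minimal sketch in Go, assuming the AWS SDK for Go v1; the function name, key layout, and stub script are hypothetical, and the instance profile is assumed to grant `s3:GetObject` on the bucket:

```go
package userdata

import (
	"bytes"
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// UploadAndStub uploads the full (oversized) userdata to S3 and returns a
// small boot stub that fits well under the 16KB userdata limit.
// Hypothetical helper for illustration only.
func UploadAndStub(bucket, key string, userdata []byte) (string, error) {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)

	if _, err := svc.PutObject(&s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   bytes.NewReader(userdata),
	}); err != nil {
		return "", err
	}

	// The stub replaces the real userdata: it pulls the uploaded payload
	// at boot and executes it. Assumes the AWS CLI exists on the AMI.
	stub := fmt.Sprintf(`#!/bin/bash -e
aws s3 cp s3://%s/%s /run/real-userdata
exec bash /run/real-userdata
`, bucket, key)
	return stub, nil
}
```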
Probably the best thing to do is to make `cluster.yml` accept an `s3Bucket` config option and remove the `s3-uri` flag, making the presence of a readable bucket for cluster needs a hard requirement (see the sketch below). As I said, I won't be able to make it better; please adopt it, edit it to your needs and fix it, throw it away, or whatever, but fix the 16KB problem in master one way or another. It is a major blocker for us, so it had to be solved ASAP, and I imagine other users feel this pain too.
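To illustrate the proposed config change, here is a sketch of what the `s3Bucket` option could look like on the cluster config struct. The field name, yaml tag, and validation rule are assumptions for illustration, not what this PR implements:

```go
package config

import "errors"

// Cluster models only the part of cluster.yml relevant here; the
// S3Bucket field and its tag are hypothetical.
type Cluster struct {
	// ... existing fields ...
	S3Bucket string `yaml:"s3Bucket,omitempty"`
}

// Validate makes a readable bucket a hard requirement, per the
// suggestion above.
func (c *Cluster) Validate() error {
	if c.S3Bucket == "" {
		return errors.New("s3Bucket is required: userdata over 16KB is staged in S3")
	}
	return nil
}
```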
No testing of this PR has been done whatsoever; I just hacked at it until it compiled. The internal branch this PR is based on seems to be working fine.