Add ability to limit bandwidth for S3 uploads/downloads #1090

Closed
jamesls opened this issue Jan 13, 2015 · 67 comments
Labels
feature-request (A feature should be added or improved.), bandwidth, s3

Comments

@jamesls (Member) commented Jan 13, 2015

Originally raised in #1078, this is a feature request to add the ability for the aws s3 commands to limit the amount of bandwidth used for uploads and downloads.

In the referenced issue, it was specifically mentioned that some ISPs charge fees if you go above a specific throughput in Mbps, so users need the ability to limit bandwidth.

I imagine this is something we'd only need to add to the aws s3 commands.

@AustinSnow

Hello jamesls,
Could you provide a timeframe for when the bandwidth limit might become available?
Thanks
austinsnow

@kjohnston

👍

3 similar comments
@beauhoyt

👍

@bhegazy commented Jul 30, 2015

👍

@seattledoug

👍

@dsclassen

@godefroi commented Oct 4, 2015

👍

3 similar comments
@rayterrill

👍

@kazeburo commented Oct 5, 2015

👍

@isaoshimizu

👍

@quiver (Contributor) commented Oct 5, 2015

On Unix-flavored systems, trickle comes in handy for ad-hoc throttling. trickle hooks the socket APIs using LD_PRELOAD and throttles bandwidth.

You can run commands something like:

$ trickle -s -u {UPLOAD_LIMIT(KB/s)} command
$ trickle -s -u {UPLOAD_LIMIT(KB/s)} -d {DOWNLOAD_LIMIT(KB/s)} command

A built-in feature would be really useful, but given the cross-platform nature of the AWS CLI, it could cost a lot to implement and maintain.
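
For illustration, a concrete invocation along those lines might look like this (the bucket name and limits below are placeholders, and as later comments note, trickle and the AWS CLI don't always cooperate):

# cap uploads to ~500 KB/s and downloads to ~1000 KB/s for a one-off sync
# (-s runs trickle in standalone mode)
$ trickle -s -u 500 -d 1000 aws s3 sync ./local-dir s3://example-bucket/backup/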

@binaryorganic

Trickle is specifically mentioned in issue #1078, which is linked in the first comment here. The two (trickle and the AWS CLI) just don't play nice together in my experience.

@isp0000 commented Oct 22, 2015

👍

1 similar comment
@l3rady commented Nov 12, 2015

👍

@andrefelipe

+1

5 similar comments
@ddehghan

+1

@joshpelz

👍

@whiteadam

+1

@mikeg0 commented Feb 1, 2016

+1

@nhumphreys

👍

@apeschar commented Mar 9, 2016

(Y)

@JulienChampseix

👍

@ikoniaris

👍 this is much needed!

@aegixx commented Apr 15, 2016

👍

@nikitasius

👍

@cobaltjacket

Over two years in, and this request is still outstanding. Is there a timeframe by which this could be implemented?

@hilyin commented Apr 6, 2017

👍

4 similar comments
@zouyixiong

👍

@zouyixiong

👍

@danielpfarmer

👍🏿

@nullobject

👍

@leonsmith

Just nuked the internet in a shared office.
This would be a nice feature for when you want to be kind to other people.
👍

@pticyn commented Jun 15, 2017

👍

@markdavidburke

You can use: trickle -s -u 100 aws s3 sync . s3://examplebucket

@ikoniaris

@Sofuca does this work correctly, though? Many people have tried trickle for this, but the results were questionable. See #1078.

@markdavidburke commented Jun 16, 2017

@ikoniaris

Works perfectly for me.

The following command nukes the internet in the office (it's a 20 Mb/s connection):

aws s3 cp /foo s3://bar

And the following command uploads at a nice 8 Mb/s:

trickle -s -u 1000 aws s3 sync /foo s3://bar

Screenshot of the outside interface of the firewall I'm using: [image]

@mxins commented Jul 6, 2017

👍

@ctappy commented Jul 30, 2017

Trickle and large S3 files will cause trickle to crash.

@wadejensen

(y)

@ctappy commented Aug 22, 2017

Sorry, to clarify: trickle and large S3 files will cause trickle to crash when boto3 runs 10 concurrent uploads (the default setting); lowering the number of concurrent uploads resolves the issue. I need to file this in boto3's GitHub, thanks!
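
If it helps anyone hitting the same crash, the S3 transfer concurrency can be lowered through the CLI's s3 configuration; the value below is just an example:

# lower the number of concurrent S3 transfer threads (the default is 10)
$ aws configure set default.s3.max_concurrent_requests 2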

@bhicks-usa

👍

So it's been over 2.5 years since this was opened. Is this request just being ignored?

@tantra35 commented Oct 6, 2017

For us, we use pv (https://linux.die.net/man/1/pv) in this manner:

/usr/bin/pv -q -L 20M $l_filepath | /usr/local/bin/aws s3 cp --region "us-east-1" - s3://<s3-bucket>/<path in s3 bucket>

This solution is not ideal (it requires extra handling for filtering and recursion, which we do inside a bash loop), but it is much better than trickle, which in our case used 100% of the CPU and behaved very unstably.

Here is our full use case for pv (we limit the upload speed to 20 MB/s == 160 Mbit/s):

# throttle each log file to 20 MB/s through pv, stream it to S3, then delete it
for l_filepath in /logs/*.log-*; do
    l_filename=$(basename "$l_filepath")
    /usr/bin/pv -q -L 20M "$l_filepath" | /usr/local/bin/aws s3 cp --region "us-east-1" - "s3://$S3BUCKET/${HOSTNAME}/$l_filename"
    /bin/rm "$l_filepath"
done

@jonoaustin

+1

Real-life use case: a very large upload to S3 over DX (Direct Connect); we do not want to saturate the link and potentially impact production applications using the DX link.

@erincerys

throttle, trickle, and pv all fail for me on Arch Linux with the latest awscli from pip when uploading to a bucket. I have additionally set max_concurrent_requests for s3 to 1 in the awscli configuration, with no difference made. This would be a much-appreciated addition!

@tantra35 commented Oct 26, 2017

@ischoonover it seems that you don't pass --expected-size to the AWS CLI when using it with pv; it is very useful when you upload very big files:

--expected-size (string) This argument specifies the expected size of a stream in terms of bytes. Note that this argument is needed only when a stream is being uploaded to s3 and the size is larger than 5GB. Failure to include this argument under these conditions may result in a failed upload due to too many parts in upload.
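
As an illustration of combining the two, a sketch along these lines should work (the file name, limit, and bucket are placeholders; stat -c%s is the GNU coreutils form, use stat -f%z on BSD/macOS):

# throttle a large upload to ~20 MB/s and tell the CLI the stream size up front,
# so it can pick a multipart chunk size that stays under the parts limit (needed for streams > 5 GB)
$ filesize=$(stat -c%s big-backup.tar)
$ pv -q -L 20M big-backup.tar | aws s3 cp --expected-size "$filesize" - s3://example-bucket/big-backup.tar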

@erincerys

@tantra35 The size was 1 GB. I ended up using s3cmd, which has rate limiting built in via --limit-rate.
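
For anyone else going that route, a minimal s3cmd invocation might look like this (the bucket and rate are placeholders; --limit-rate takes bytes per second and accepts k/m suffixes):

# upload at roughly 1 MB/s
$ s3cmd put --limit-rate=1m backup.tar s3://example-bucket/backup.tar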

@joguSD (Contributor) commented Jan 2, 2018

Implemented in #2997.
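
For readers landing here later: in AWS CLI versions that include that change, the limit is exposed as the s3.max_bandwidth configuration value. Something like the following should work (the 10MB/s value is just an example):

# cap S3 upload/download bandwidth for the default profile
$ aws configure set default.s3.max_bandwidth 10MB/s

# equivalent entry in ~/.aws/config:
# [default]
# s3 =
#   max_bandwidth = 10MB/s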
