Multiple other issues are caused by the problem that will be described in detail in this bug report: #2011, #1864, #512
This issue stems from aria2c's design being fundamentally incompatible with CDNs that use redirecting URLs with an expiration time. This is common, and you will see such URLs in use on AWS. The most easily accessible URLs with this behavior are GitHub release asset downloads.
For our example GitHub asset, GitHub sets a link expiration of 5 minutes (X-Amz-Expires=300). That means all downloads need to be INITIATED at that link within 300 seconds of the date it returns (generally the current time, since the link is created at the time of the request).
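You can see the lifetime directly in the signed URL's query string. The URL below is a hypothetical example of the shape GitHub's asset CDN returns after the redirect (the parameter values are illustrative, not a real signature):

```shell
# A signed URL of the kind returned by the GitHub asset redirect
# (X-Amz-Date and X-Amz-Signature values here are made up).
url='https://objects.githubusercontent.com/example.tar.gz?X-Amz-Date=20240101T120000Z&X-Amz-Expires=300&X-Amz-Signature=deadbeef'

# Extract the link lifetime in seconds from the query string
expires=$(printf '%s' "$url" | sed -n 's/.*X-Amz-Expires=\([0-9]*\).*/\1/p')
echo "link is valid for ${expires}s after X-Amz-Date"
```

Any request that starts after that window closes is rejected by the CDN, regardless of how the client obtained the URL.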
aria2c breaks downloads up into multiple "connections" and "pieces". The default piece length is 1M, and the download is split by this size: for example, a 300M file will be split into 300 pieces. By default these pieces are downloaded one at a time, BUT you can enable parallelism (-x and -s) to download multiple pieces at once. The fundamental flaw is that all pieces are downloaded using the URL (or URLs, when using more than one connection) negotiated at the very start, which, as pointed out, has an expiration time. So if the download is slow or the file very large, all later pieces will fail to download once the URL expires.
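The piece math above can be sketched in shell arithmetic (the 1M default piece length and the 300M example size are taken from the description above):

```shell
# Sketch of the piece split described above, assuming the default 1M piece length.
file_size=$((300 * 1024 * 1024))   # example: a 300M asset
piece_len=$((1024 * 1024))         # aria2c default piece length (1M)
pieces=$(( (file_size + piece_len - 1) / piece_len ))
echo "pieces: $pieces"             # every one of these reuses the initial signed URL
```

With 300 pieces and a 300-second URL lifetime, anything slower than roughly one piece per second guarantees that the final pieces are requested against an already-expired URL.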
Even if you have a fast internet connection, this can easily be reproduced by throttling your connection speed.
For example, on Linux you can do so with commands like these (where wlp4s0 is the name of the network device from ifconfig):
sudo tc qdisc add dev wlp4s0 ingress
sudo tc filter add dev wlp4s0 parent ffff: protocol ip u32 match ip src 0.0.0.0/0 flowid :1 police rate 5.0mbit burst 10k
Then download a file from GitHub release assets that is large enough to take longer than five minutes to complete, and you will get Authorization failed errors after five minutes.
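A minimal end-to-end reproduction might look like the following (the asset URL is a placeholder, and the download uses aria2c's real -x/-s split options; the final line removes the throttling qdisc again so your connection is restored):

```shell
# Throttle ingress on the interface (wlp4s0 is an example device name)
sudo tc qdisc add dev wlp4s0 ingress
sudo tc filter add dev wlp4s0 parent ffff: protocol ip u32 \
  match ip src 0.0.0.0/0 flowid :1 police rate 5.0mbit burst 10k

# Download a large GitHub release asset with split connections;
# pieces requested after ~300s fail with "Authorization failed".
aria2c -x 4 -s 4 'https://github.com/OWNER/REPO/releases/download/TAG/large-asset.tar.gz'

# Clean up: remove the ingress qdisc (and its filter) when done
sudo tc qdisc del dev wlp4s0 ingress
```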