llama3:70B pull error #4520
Is it just a one-off thing? Have you tried restarting it? That said, I agree that ollama should handle such issues better and retry the download a number of times before giving up, especially when we're talking about downloading massive files. It would be very disappointing to hit this error at 99% with no option to resume and have to start over from 0% :) |
I think the model registry might be a bit hosed, I can't pull any models as am getting the same error.
Might be related to #1736 (comment) |
By any chance, are you behind a proxy or VPN? |
Nope. |
Yes, 3 attempts |
I'm hitting this issue repeatedly with several llama3 models. The registry definitely seems like it needs a little help.

$ ollama pull llama3:8b-instruct-q8_0
pulling manifest
pulling 11a9680b0168... 100% ▕███████████████████████████████████████████████████████ ▏ 8.5 GB/8.5 GB
Error: max retries exceeded: EOF |
Yeah it seems the registry has been completely broken for almost a week now. I’ve pretty much given up on it and now build all my models myself which is ok but kind of negates one of the primary benefits of ollama. |
@jmorganca do you know what’s going on here? Is there a discussion thread we should be following / contributing to? |
Actually - this looks to be in an improved state this morning, now it just goes back to pulling very slowly (from 80MB/s down to 70KB/s) at 99% again like it used to - perhaps a fix was made? |
I just tested again, and I'm still seeing the issue on |
yes the problem remains |
yes, have the same problem but with all models |
I'm currently running ollama 0.1.39-rc1 from Germany. My Cloudflare target domain is dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com, where the first part seems to be your project id. The domain resolves to 2606:4700::6812:85a. Forcing an IPv4 connection does not alleviate the issue, so an IPv6 network-level problem should not be the source. I gathered this information using OpenSnitch.

What I just noticed: although the model download fails due to an unexpected EOF, the ollama server responds with HTTP 200, which seems not entirely correct, since the operation was a failure. I'm trying to pull llama3:instruct.

In Wireshark I noticed the following: Cloudflare responds with an ACK (to the packet before the FIN), and in the next packet it resets the connection while acknowledging the FIN packet. This behaviour appears more often further down as more and more connections are reset.

I don't actually know how the HTTP download is implemented, but could it be that ollama receives a byte count for the part, allocates a buffer, and terminates the connection when the buffer is full, even though the error message indicates the connection stalled?

EDIT: I'm no Go magician, but one thing stuck out to me: if the defer line runs prematurely, for whatever reason, that would result in the observed sequence of TCP packets. However, this is just an uneducated guess. |
I tried again this morning, and it still could not download. So, I tried completely removing

EDIT: well... that optimism is behind me now. I'm trying to redownload some of the other models I liked before I wiped the cache, and now they won't download completely either. |
Well... I picked an older release at random (0.1.35), and now it is able to successfully download models that 0.1.39 was unable to download. I have not tried to pinpoint the exact release that introduced this serious bug, but I'd say it was probably in the past week. Not being able to download models reliably will make ollama extremely painful to use and remove most of its value. If this isn't a high priority issue for the project, then I don't know what would be. For the moment, I'm working around the issue by downloading an old release of ollama and using that to pull models, which isn't great. |
Tagging @mxyng, since I see some changes that affect the model download code paths in the past week or so, and something in there might not be right. |
Maybe a piece of the puzzle (and a quick fix for anyone stuck on this).
This forces ollama to download the failed parts from the start and hopefully completes them this time. |
The bug is in this line:

n, err := io.CopyN(w, io.TeeReader(resp.Body, part), part.Size)

It always tries to re-download the full chunk size, even when part of the chunk has already been downloaded. Correct would be:

n, err := io.CopyN(w, io.TeeReader(resp.Body, part), part.Size-part.Completed) |
Will this problem be fixed? |
What is the issue?
Error: max retries exceeded: unexpected EOF
OS
Linux
GPU
Nvidia
CPU
Intel
Ollama version
0.1.33