
llama3:70B pull error #4520

Closed
DimIsaev opened this issue May 19, 2024 · 20 comments · Fixed by #4619
Labels
bug (Something isn't working), networking (Issues relating to ollama pull and push)

Comments

@DimIsaev

What is the issue?

[screenshot of the pull error]

Error: max retries exceeded: unexpected EOF

OS

Linux

GPU

Nvidia

CPU

Intel

Ollama version

0.1.33

@DimIsaev DimIsaev added the bug Something isn't working label May 19, 2024
@kha84

kha84 commented May 19, 2024

Is it just a one-off thing? Have you tried restarting it?
To me it looks like you might have some network issues, or the ollama "model registry" might.

But I also agree that ollama should handle such issues better and try to resume the download a number of times before giving up, especially when we're talking about downloading massive files. It would be very disappointing to get this error at 99% with no option to resume, having to start over again from 0% :)

@sammcj
Contributor

sammcj commented May 19, 2024

I think the model registry might be a bit hosed; I can't pull any models, as I'm getting the same error.

ollama pull llama3:8b-text-q6_K
pulling manifest
pulling ce446d4caf83...  99% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████  ▏ 6.5 GB/6.6 GB
Error: max retries exceeded: EOF

Might be related to #1736 (comment)

@kha84

kha84 commented May 19, 2024

By any chance, are you behind a proxy or VPN?

@sammcj
Contributor

sammcj commented May 19, 2024

By any chance, are you behind a proxy or VPN?

Nope.
Tested on 3 machines and on two different internet connections (see #1736 (comment))

@sammcj
Contributor

sammcj commented May 19, 2024

Ooo see also #1036 and #941

@DimIsaev
Author

Is it just a one-off thing? Have you tried restarting it? To me it looks like you might have some network issues, or the ollama "model registry" might.

Yes, 3 attempts.

@pdevine pdevine added the networking Issues relating to ollama pull and push label May 20, 2024
@coder543

coder543 commented May 23, 2024

I'm hitting this issue repeatedly with several llama3 models. The registry definitely seems like it needs a little help.

$ ollama pull llama3:8b-instruct-q8_0
pulling manifest
pulling 11a9680b0168... 100% ▕███████████████████████████████████████████████████████ ▏ 8.5 GB/8.5 GB
Error: max retries exceeded: EOF

@sammcj
Contributor

sammcj commented May 23, 2024

Yeah, it seems the registry has been completely broken for almost a week now. I've pretty much given up on it and now build all my models myself, which is OK but kind of negates one of the primary benefits of ollama.

@sammcj
Contributor

sammcj commented May 23, 2024

@jmorganca do you know what’s going on here? Is there a discussion thread we should be following / contributing to?

@sammcj
Contributor

sammcj commented May 23, 2024

Actually, this looks to be in an improved state this morning: now it just goes back to pulling very slowly (from 80 MB/s down to 70 KB/s) at 99%, like it used to. Perhaps a fix was made?

@coder543

coder543 commented May 23, 2024

I just tested again, and I'm still seeing the issue on llama3:8b-instruct-q8_0.

@DimIsaev
Author

Yes, the problem remains.

@ahoepf

ahoepf commented May 24, 2024

Yes, I have the same problem, but with all models.

@FairyTail2000

FairyTail2000 commented May 24, 2024

I'm currently running ollama 0.1.39-rc1 from Germany. My Cloudflare target domain is dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com, where the first part seems to be your project ID. The domain gets resolved to 2606:4700::6812:85a. Forcing an IPv4 connection does not alleviate the issue, so an IPv6 network-level issue should not be the source. I gathered this information using OpenSnitch.

What I just noticed: although the model download fails due to an unexpected EOF, the ollama server responds with HTTP 200, which doesn't seem right, since the operation failed.

I'm trying to pull llama3:instruct

In Wireshark I noticed the following:

[Wireshark screenshot: FIN packet sent by the local ollama instance]

This shows the connection being finalized (dropped), initiated from my local ollama instance.

[Wireshark screenshot: Cloudflare ACK followed by connection resets]

Wireshark then shows that Cloudflare responds with an ACK (to the packet before the FIN), and in the next packet it resets the connection while acknowledging the FIN packet. This behaviour appears more often further down, as more and more connections are reset.

I don't actually know how the HTTP download is implemented, but could it be that ollama receives a byte count for the part, allocates a buffer, and terminates the connection when the buffer is full, even though the error message suggests the connection stalled?

EDIT: I'm no Go magician; however, one thing stuck out to me:
https://github.com/ollama/ollama/blob/afd2b058b4ee36230ab2a06927bdc0ff41b1e7ae/server/download.go#L222C4-L222C26

If the defer line runs prematurely, for whatever reason, that would result in the observed sequence of TCP packets. However, this is just an uneducated guess.

@coder543

coder543 commented May 24, 2024

I tried again this morning, and it still could not download. So, I tried completely removing /usr/share/ollama and reinstalling ollama. Now, it was able to successfully download and install the model I mentioned before, which makes me think there could be some bug in ollama itself that is corrupting the local cache.

EDIT: well... that optimism is behind me now. I'm trying to redownload some of the other models I liked before I wiped the cache, and now they won't download completely either.

@coder543

Well... I picked an older release at random (0.1.35), and now it is able to successfully download models that 0.1.39 was unable to download. I have not tried to pinpoint the exact release that introduced this serious bug, but I'd say it was probably in the past week.

Not being able to download models reliably will make ollama extremely painful to use and remove most of its value. If this isn't a high priority issue for the project, then I don't know what would be.

For the moment, I'm working around the issue by downloading an old release of ollama and using that to pull models, which isn't great.

@coder543

Tagging @mxyng, since I see some changes that affect the model download code paths in the past week or so, and something in there might not be right.

@noxer
Contributor

noxer commented May 24, 2024

Maybe a piece of the puzzle (and a quick fix for anyone stuck on this).

  • Check the ollama serve log for the numbers of the parts that are stuck
  • Open the corresponding sha256-{huge hash}-partial-{nn} (nn being the part number) files in the models/blobs folder as text files
  • Now replace the number behind Completed: with a 0
  • Save the file
  • Retry the pull

This forces ollama to download the failed parts from the start and hopefully completes them this time.
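The manual reset above could be scripted. A hedged sketch, assuming the partial files contain a JSON-style `"Completed":<n>` field as described (inspect your own files first, and back them up before rewriting anything):

```go
package main

import (
	"fmt"
	"regexp"
)

// completedRe matches a JSON-style Completed counter. The exact file
// format is an assumption based on the description above; adjust the
// pattern to whatever your partial files actually contain.
var completedRe = regexp.MustCompile(`"Completed"\s*:\s*\d+`)

// resetCompleted rewrites the Completed counter to 0, which forces
// ollama to re-download that part from the start on the next pull.
func resetCompleted(data []byte) []byte {
	return completedRe.ReplaceAll(data, []byte(`"Completed":0`))
}

func main() {
	in := []byte(`{"Offset":0,"Size":104857600,"Completed":73400320}`)
	fmt.Println(string(resetCompleted(in)))
	// {"Offset":0,"Size":104857600,"Completed":0}
}
```

Read each partial file, pass its contents through resetCompleted, and write it back before retrying the pull.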

@noxer
Contributor

noxer commented May 24, 2024

The bug is in this line

n, err := io.CopyN(w, io.TeeReader(resp.Body, part), part.Size)

It always tries to re-download the full chunk size even if parts have already been downloaded. Correct would be

n, err := io.CopyN(w, io.TeeReader(resp.Body, part), part.Size-part.Completed)
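A runnable sketch of why the subtraction matters: after a resume, the server honours the Range header and sends only the missing tail of the chunk, so asking io.CopyN for the full part.Size over-reads the body and fails, while part.Size-part.Completed copies cleanly. The part type here is a simplified stand-in, not ollama's real struct.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// part is a simplified stand-in for ollama's per-part bookkeeping.
type part struct {
	Size      int64 // total bytes in this chunk
	Completed int64 // bytes already on disk from a previous attempt
}

// remaining is what a resumed Range request will actually deliver,
// and therefore what io.CopyN should be asked to copy.
func (p part) remaining() int64 { return p.Size - p.Completed }

func main() {
	p := part{Size: 10, Completed: 6}

	// The server sends only the last 4 of the chunk's 10 bytes.
	resp := bytes.NewReader([]byte("6789"))
	var w bytes.Buffer

	// Buggy: asking for the full part.Size over-reads the body.
	n, err := io.CopyN(&w, resp, p.Size)
	fmt.Println(n, err) // 4 EOF

	// Fixed: ask only for the bytes that are still missing.
	resp.Seek(0, io.SeekStart)
	w.Reset()
	n, err = io.CopyN(&w, resp, p.remaining())
	fmt.Println(n, err) // 4 <nil>
}
```

This matches the symptom in the thread: downloads that stall or error right at 99%, exactly when a resumed part has the least left to fetch.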

@DimIsaev
Author

pulling manifest
pulling 0bd51f8f0c97...  29% ▕█████████████████████                                                     ▏  11 GB/ 39 GB  2.7 MB/s   2
Error: max retries exceeded: Get "https://dd20bb891979d25aebc8bec07b2b3bbc.r2.cloudflarestorage.com/ollama/docker/registry/v2/blobs/s/0b/0bd51f8f0c975ce910ed067dcb962a9af05b77bafcdc595ef02178387f10e51d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=66040c77a7c3af820529859349a%!F(MISSING)20240530%!F(MISSING)auto%!F(MISSING)s3%!F(MISSING)aws4_request&X-Amz-Date=20240530T155259Z&X-Amz-Expire0&X-Amz-SignedHeaders=host&X-Amz-Signature=dd62ec94899015534ce1d7e7ecf720685ff06b5a7687e53b3d219ddb657e56cc": net/http: TLS handshakeout

Will the problem be solved?
