
Getting frustrated due to infinite loops. #2633

Open · v3ss0n opened this issue Feb 15, 2024 · 17 comments
Labels: 🧩 dependency resolution (Resolution failures)

v3ss0n commented Feb 15, 2024

This has happened several times while I'm using PDM.

I've wasted two days trying to fix it. I'm going to give up on PDM soon at this rate.

Here are the dependencies:

dependencies = [
    "litestar[cli,jinja,jwt,pydantic,sqlalchemy,standard]",
    "pydantic-settings>=2.0.3",
    "asyncpg>=0.28.0",
    "python-dotenv>=1.0.0",
    "passlib[argon2]>=1.7.4",
    "litestar-saq>=0.1.16",
    "litestar-vite>=0.1.4",
    "litestar-aiosql>=0.1.1",
    "boto3>=1.34.25",
    "python-ffmpeg>=2.0.10",
    "pyav>=12.0.2",
    "boto3-stubs[essential]>=1.34.27",
    "s3fs",
    "awscli>=1.32.28",
    "faster_whisper>=0.10.0",
    "pydub>=0.25.1",
    "whisperx",
    "numpy>=1.26.3"
]

PDM gets stuck in an infinite resolution loop at s3transfer.

v3ss0n added the 🐛 bug (Something isn't working) label on Feb 15, 2024
pawamoy (Sponsor Contributor) commented Feb 15, 2024

It's not PDM's fault: I tried installing these dependencies using pip and it takes a long time too. Especially with big packages such as torch (750MB), nvidia-cublas (410MB), nvidia-cudnn (730MB), etc.

A more constructive approach would be to try to reduce this set of dependencies down to the problematic ones, as an example of a dependency tree that takes a long time to resolve. Such examples can then be used to find optimizations in the libraries responsible for resolving dependencies. I have added the dependency-resolution label to this issue so it can perhaps be used for that later.
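For example, a reduced pyproject.toml fragment that keeps only a small subset of the list above (an illustrative sketch; which subset actually reproduces the slowdown has to be found by trial and error, e.g. by bisecting the list):

dependencies = [
    # Illustrative reduced set: keep only a few suspects from the original list
    # and re-run the resolution to check whether the slowdown still occurs.
    "boto3>=1.34.25",
    "boto3-stubs[essential]>=1.34.27",
    "awscli>=1.32.28",
    "s3fs",
]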

Once a minimal set of dependencies has been identified, another constructive approach is to reach out to the maintainers of the problematic dependencies and kindly ask whether their own dependency specifications could be made less strict, or more strict depending on the situation, so as to help resolvers find a solution more quickly.

pawamoy added the 🧩 dependency resolution (Resolution failures) label and removed the 🐛 bug (Something isn't working) label on Feb 15, 2024
v3ss0n (Author) commented Feb 15, 2024

Is there any way (via PDM) to resolve dependencies without downloading packages just to try and find a match?
The problem comes from s3fs <-> boto3 dependency mismatches.

v3ss0n changed the title from "Getting faustratied due to infinite resolutin loops." to "Getting frustrated due to infinite loops." on Apr 26, 2024
v3ss0n (Author) commented Apr 27, 2024

This is happening again. I think there should be a limit on how many tries are allowed before failing. I expected the deployment to be finished while I slept, but when I woke up the deployment was broken. I would rather it stop if it is taking too long.
Since the maintainers of some packages aren't even replying, your suggestion about informing them won't work.

So I think an option for how many retries a dependency resolution attempt gets would be good.

frostming (Collaborator) commented:
> So I think an option for how many retries a dependency resolution attempt gets would be good.

Why do you think there isn't? https://arc.net/l/quote/gdeikbxb

v3ss0n (Author) commented Apr 27, 2024

Thanks, going to try strategy.resolve_max_rounds. I think the default should be around 1000 rounds; the current default is too high.

frostming (Collaborator) commented:
> Thanks, going to try strategy.resolve_max_rounds. I think the default should be around 1000 rounds; the current default is too high.

That would be too small for projects with more than 10 big dependencies; a round is smaller than you'd think.
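If you do want to cap it for your project, something like the following should work (a sketch only: strategy.resolve_max_rounds is the config key mentioned above, set here in the project-local pdm.toml or with pdm config strategy.resolve_max_rounds; the value is just an example and the exact config file layout may differ between PDM versions):

[strategy]
# Abort resolution after this many rounds instead of retrying indefinitely.
# 20000 is an example value, not a recommendation.
resolve_max_rounds = 20000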

frostming (Collaborator) commented Apr 27, 2024

BTW, boto3 and the AWS family are tough ones for dependency resolution, since they have rather strict version ranges restricting each other. Try using more precise version ranges for these packages.
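Something along these lines (the ranges below are illustrative placeholders, not a known-good combination; the point is to shrink the search space the resolver has to explore):

dependencies = [
    # Narrower, mutually compatible ranges for the AWS family mean fewer
    # candidate versions for the resolver to try (example ranges only).
    "boto3>=1.34.25,<1.35",
    "boto3-stubs[essential]>=1.34.27,<1.35",
    "awscli>=1.32.28,<1.33",
    "s3fs",
]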

v3ss0n (Author) commented Apr 27, 2024

I removed them, and now it's stuck at prompthub-py:

⠼ Resolving: new pin prompthub-py 4.0.0

It has been running quite a long time now.

Here are my packages; I removed all version restrictions too:

dependencies = [
    "litestar[cli,jinja,jwt,pydantic,sqlalchemy,standard]",
    "asyncpg",
    "passlib[argon2]",
    "litestar-saq",
    "litestar-vite",
    "litestar-aiosql",
    "s3fs",
    "pyav",
    "whisperx", 
    "numpy",
    "ollama-haystack",
    "jiwer",
    "ollama",
    "gliner",
    "farm-haystack[faiss-gpu,inference]",
]

It comes from farm-haystack.

Thank you very much for the prompt replies.

v3ss0n (Author) commented Apr 27, 2024

It still couldn't resolve. Is there any way to know exactly what is screwing this up?

frostming (Collaborator) commented Apr 27, 2024

> It still couldn't resolve. Is there any way to know exactly what is screwing this up?

Add -v to enable verbose terminal logging; you will probably spot some packages being resolved repeatedly and see why they are rejected.

v3ss0n (Author) commented Apr 27, 2024

Found and fixed the first problem; it was due to linters with locked versions.

And now it leads to another one, this time a weird one.

The error:

pdm.termui: Candidate rejected: farm-haystack@1.25.5 because it introduces a new requirement pydantic<2 that conflicts with other requirements:
    pydantic (from litestar@2.8.2)
    pydantic>=2.0.1 (from pydantic-settings@2.0.1)

but:

https://github.com/deepset-ai/haystack/blob/8d04e530da24b5e5c8c11af29829714eeea47db2/pyproject.toml#L169

doesn't mention pydantic<2... why is it making that up?

v3ss0n (Author) commented Apr 27, 2024

Looks like a bug; this definitely is an infinite loop.

pawamoy (Sponsor Contributor) commented Apr 27, 2024

> doesn't mention pydantic<2

It does in version 1.25.5: https://github.com/deepset-ai/haystack/blob/a8bc7551aeb2036f87cb2a33743f3c2f71b9be52/pyproject.toml#L51.

It looks like an infinite loop, but it's probably actually trying to prune branches of a huge tree 😕 Always the same libs that are problematic 😅

v3ss0n (Author) commented Apr 27, 2024

Ah, it is fixed in the latest master then.
But cases like this are common... is there any way we can solve this programmatically, or ease it into something manageable?
We will never know when a package owner/maintainer will do this.

pawamoy (Sponsor Contributor) commented Apr 27, 2024

You can override the resolver to force specific versions of specific packages.
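A rough sketch of what that looks like in pyproject.toml, using PDM's resolution overrides (the requirement below is a placeholder; put whatever you decide to force):

[tool.pdm.resolution.overrides]
# Force this requirement regardless of what other packages declare.
# "<2" is a placeholder; replace it with the version or range you actually want.
pydantic = "<2"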

pawamoy (Sponsor Contributor) commented Apr 27, 2024

You could also disallow pydantic-settings v2, since its versions probably match pydantic's: pydantic-settings<2. This way you regain compatibility with farm-haystack, etc.
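i.e. something like this alongside the rest of your dependencies (a sketch; it assumes a pre-2 pydantic-settings release exists and that whatever pulls pydantic-settings in accepts it):

dependencies = [
    # Keep pydantic-settings below 2 so that pydantic<2 (required by
    # farm-haystack 1.25.x) stays satisfiable.
    "pydantic-settings<2",
]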

v3ss0n (Author) commented Apr 27, 2024

Thanks a lot, going to try overriding.
