
ERROR: llama_cpp_python-0.2.19-cp311-cp311-macosx_12_0_x86_64.whl is not a supported wheel on this platform. #24

Open
iOSRajaramMohanty opened this issue Nov 24, 2023 · 14 comments

Comments

@iOSRajaramMohanty

Getting an error when running the command below:

pip install -r requirements_apple_intel.txt

Could you please help me fix this error?
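For context, pip rejects a wheel when the tags in its filename don't match the local interpreter. A quick check (sketch; the failing wheel is tagged cp311 for CPython 3.11 and macosx_12_0_x86_64 for the platform, so both must line up):

```shell
# Compare your interpreter against the wheel's tags:
python3 --version                                          # needs to be 3.11.x for a cp311 wheel
python3 -c 'import platform; print(platform.machine())'    # x86_64 vs arm64/aarch64
```

If either value disagrees with the wheel's tags, pip reports "not a supported wheel on this platform".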

@jllllll
Owner

jllllll commented Nov 25, 2023

What version of Python are you using?

@gaby
Contributor

gaby commented Nov 27, 2023

@jllllll I have some users unable to install on MacOS with M1. Can't figure out why 😂

@gaby
Contributor

gaby commented Nov 27, 2023

This is the error we get:

+ echo 'Recommended install command for llama-cpp-python:'
+ echo 'UNAME_M=arm64 python -m pip install llama-cpp-python==0.2.19 --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu'
+ eval 'UNAME_M=arm64 python -m pip install llama-cpp-python==0.2.19 --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu'
++ UNAME_M=arm64
++ python -m pip install llama-cpp-python==0.2.19 --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu
Looking in indexes: https://pypi.org/simple, https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu
Collecting llama-cpp-python==0.2.19
  Downloading llama_cpp_python-0.2.19.tar.gz (7.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.8/7.8 MB 1.9 MB/s eta 0:00:00
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
  Installing backend dependencies: started
  Installing backend dependencies: finished with status 'done'
  Preparing metadata (pyproject.toml): started
  Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: typing-extensions>=4.5.0 in /usr/local/lib/python3.11/site-packages (from llama-cpp-python==0.2.19) (4.8.0)
Requirement already satisfied: numpy>=1.20.0 in /usr/local/lib/python3.11/site-packages (from llama-cpp-python==0.2.19) (1.26.2)
Collecting diskcache>=5.6.1 (from llama-cpp-python==0.2.19)
  Obtaining dependency information for diskcache>=5.6.1 from https://files.pythonhosted.org/packages/3f/27/4570e78fc0bf5ea0ca45eb1de3818a23787af9b390c0b0a0033a1b8236f9/diskcache-5.6.3-py3-none-any.whl.metadata
  Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.5/45.5 kB 2.8 MB/s eta 0:00:00
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml): started
  Building wheel for llama-cpp-python (pyproject.toml): finished with status 'error'
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [24 lines of output]
      *** scikit-build-core 0.6.1 using CMake 3.27.7 (wheel)
      *** Configuring CMake...
      loading initial cache file /tmp/tmp9tc9476m/build/CMakeInit.txt
      -- The C compiler identification is unknown
      -- The CXX compiler identification is unknown
      CMake Error at CMakeLists.txt:3 (project):
        No CMAKE_C_COMPILER could be found.

        Tell CMake where to find the compiler by setting either the environment
        variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
        the compiler, or to the compiler name if it is in the PATH.


      CMake Error at CMakeLists.txt:3 (project):
        No CMAKE_CXX_COMPILER could be found.

        Tell CMake where to find the compiler by setting either the environment
        variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
        to the compiler, or to the compiler name if it is in the PATH.


      -- Configuring incomplete, errors occurred!

      *** CMake configuration failed
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

[notice] A new release of pip is available: 23.2.1 -> 23.3.1
[notice] To update, run: pip install --upgrade pip
Failed to install llama-cpp-python
+ echo 'Failed to install llama-cpp-python'
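The root cause in this log is that pip fell back to building the sdist on a machine with no C/C++ toolchain ("No CMAKE_C_COMPILER could be found"). A minimal sketch of checking for one (the install commands in the comments are the usual suspects, not verified against this exact environment):

```shell
# CMake wants CC/CXX; locate a compiler, or install one first
# (Debian/Ubuntu containers: apt-get install -y build-essential;
#  macOS hosts: xcode-select --install).
CC="$(command -v clang || command -v gcc || true)"
CXX="$(command -v clang++ || command -v g++ || true)"
echo "CC=${CC:-<none found>} CXX=${CXX:-<none found>}"
```

Exporting `CC`/`CXX` before rerunning pip is exactly the override the CMake error message asks for.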

@gaby
Contributor

gaby commented Nov 27, 2023

@jllllll Is this the right way to install for Metal CPUs?

UNAME_M=arm64 python -m pip install llama-cpp-python==0.2.19 --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu

@jllllll
Owner

jllllll commented Nov 28, 2023

@gaby Try this command:

UNAME_M=arm64 python -m pip install llama-cpp-python==0.2.19 --only-binary llama-cpp-python --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu

@gaby
Contributor

gaby commented Nov 28, 2023

@jllllll I had to use --only-binary=:all: to get it to work with amd64. I have to find someone with an M1 to test the new command. Will report back :-) Thanks!
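For reference, the two flags behave quite differently; a quick check (assuming a reasonably recent pip):

```shell
# --prefer-binary : use a matching wheel when available, otherwise fall back
#                   to the sdist (that fallback is what triggered the CMake
#                   build failure above).
# --only-binary   : never build from source; fail loudly if no wheel matches.
python3 -m pip install --help | grep -E -- '--(only|prefer)-binary'
```

With `--only-binary`, a missing wheel surfaces as "No matching distribution found" instead of a confusing source-build error.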

@gaby
Contributor

gaby commented Nov 28, 2023

@jllllll Still didn't work; this is with py3.11 on an M2 Pro:

+ lscpu='Architecture:                    aarch64
CPU op-mode(s):                  64-bit
Byte Order:                      Little Endian
CPU(s):                          6
On-line CPU(s) list:             0-5
Vendor ID:                       0x00
Model name:                      -
Model:                           0
Thread(s) per core:              1
Core(s) per cluster:             6
Socket(s):                       -
Cluster(s):                      1
Stepping:                        0x0
BogoMIPS:                        48.00
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected'

++ dpkg --print-architecture
Recommended install command for llama-cpp-python: UNAME_M=arm64 python -m pip install llama-cpp-python==0.2.19 --only-binary=:all: --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu

+ pip_command='UNAME_M=arm64 python -m pip install llama-cpp-python==0.2.19 --only-binary=:all: --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu'

+ echo 'Recommended install command for llama-cpp-python: UNAME_M=arm64 python -m pip install llama-cpp-python==0.2.19 --only-binary=:all: --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu'

+ eval 'UNAME_M=arm64 python -m pip install llama-cpp-python==0.2.19 --only-binary=:all: --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu'

++ UNAME_M=arm64
++ python -m pip install llama-cpp-python==0.2.19 --only-binary=:all: --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu
Looking in indexes: https://pypi.org/simple, https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/basic/cpu
ERROR: Could not find a version that satisfies the requirement llama-cpp-python==0.2.19 (from versions: none)
ERROR: No matching distribution found for llama-cpp-python==0.2.19

[notice] A new release of pip is available: 23.2.1 -> 23.3.1
[notice] To update, run: pip install --upgrade pip
+ echo 'Failed to install llama-cpp-python'
+ exit 1
Failed to install llama-cpp-python

@gaby
Contributor

gaby commented Nov 28, 2023

Seems to work fine with macOS x86; I was able to create a test CI to prove it: https://github.com/gaby/testbench/actions/runs/7013936843/job/19080863382

Seems to be related to arm64/M1/M2.

@jllllll
Owner

jllllll commented Nov 28, 2023

@gaby Your Mac is recognizing itself as aarch64. I only have wheels for arm64 at the moment. They are both essentially the same thing. I'll upload some aarch64 wheels when I have the time. Can't find any other solution for that.
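The mismatch is visible directly from the interpreter; a quick sketch:

```shell
# The platform tag pip matches against comes from the running interpreter,
# so the same Apple-silicon hardware reports differently per OS:
uname -m                                   # arm64 on macOS, aarch64 on Linux
python3 -c 'import sysconfig; print(sysconfig.get_platform())'
# e.g. macosx-13-arm64 on a Mac host vs linux-aarch64 in a Linux environment
```

Since pip does exact string matching on the tag, an `arm64` wheel will never satisfy an `aarch64` interpreter even though the ISA is identical.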

@gaby
Contributor

gaby commented Nov 28, 2023

@jllllll According to Google, aarch64 is another way of saying arm64:

"aarch64" and "arm64" are the same thing. AArch64 is the official name for the 64-bit ARM architecture, but some people prefer to call it "ARM64" as a continuation of 32-bit ARM.

Several users suggested building manylinux wheels, which would solve this issue.

https://github.com/pypa/cibuildwheel
https://cibuildwheel.readthedocs.io/en/stable/faq/#how-to-cross-compile
Example here: https://github.com/Azure/azure-uamqp-python/blob/main/.github/workflows/wheel.yml

@jllllll
Owner

jllllll commented Nov 28, 2023

Not really sure how manylinux wheels are supposed to fix an issue on MacOS.

The true problem is that Python/pip pointlessly differentiates between aarch64 and arm64 despite them being the same and seemingly provides no option to override this.

Uploading new wheels is a trivial problem to fix as I just need to write a simple script to download all the MacOS wheels and upload a version that is renamed to use aarch64. Shouldn't need to rebuild anything.
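The renaming step could look roughly like this (the filename is illustrative; note that `python -m wheel tags`, available in wheel >= 0.40, also rewrites the WHEEL metadata inside the archive, which a bare file rename would not):

```shell
# Illustrative only: derive the aarch64-tagged name from an arm64-tagged one.
old=llama_cpp_python-0.2.19-cp311-cp311-macosx_11_0_arm64.whl
new=$(printf '%s\n' "$old" | sed 's/arm64\.whl$/aarch64.whl/')
echo "$new"   # llama_cpp_python-0.2.19-cp311-cp311-macosx_11_0_aarch64.whl
```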

@gaby
Contributor

gaby commented Nov 28, 2023

> Not really sure how manylinux wheels are supposed to fix an issue on MacOS.
>
> The true problem is that Python/pip pointlessly differentiates between aarch64 and arm64 despite them being the same and seemingly provides no option to override this.
>
> Uploading new wheels is a trivial problem to fix as I just need to write a simple script to download all the MacOS wheels and upload a version that is renamed to use aarch64. Shouldn't need to rebuild anything.

I see what you mean, cool! The main difference I read about manylinux is that those wheels work for both x86 and arm64.

I played with cibuildwheel a bit, and even though it builds the wheels for arm64, it fails trying to link the llama libs. Not sure how to fix that yet, lol. The CI is much simpler though:

name: CI

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  build_wheels_macos:
    name: Build wheels for macos
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
        with:
          repository: 'abetlen/llama-cpp-python'
          ref: 'v0.2.20'
          submodules: 'recursive'
      - name: Build wheels
        uses: pypa/cibuildwheel@v2.16.2
        env:
          CIBW_ARCHS_MACOS: x86_64 universal2
          CIBW_PROJECT_REQUIRES_PYTHON: ">=3.10"
          # vars exported in CIBW_BEFORE_ALL don't persist into the build step;
          # CIBW_ENVIRONMENT is the supported way to pass CMAKE_ARGS through
          CIBW_ENVIRONMENT: CMAKE_ARGS="-DLLAMA_NATIVE=off -DLLAMA_METAL=on"
          CIBW_BEFORE_BUILD: 'pip install build wheel cmake'
      - name: List wheels
        run: ls -la ./wheelhouse/*.whl  # cibuildwheel writes to ./wheelhouse by default

@jllllll
Owner

jllllll commented Nov 28, 2023

@gaby I had originally built universal2 wheels for macOS, but I encountered problems with some people being unable to install them.
Additionally, it didn't work well with the differences in CMake arg configuration between the two builds:
args for x86_64 would end up being applied to the ARM wheels, or vice versa.

I have finished building and uploading aarch64 wheels. Let me know if they work.

@gaby
Contributor

gaby commented Nov 28, 2023

@jllllll Thanks for your help! Turns out my issue is related to running Docker on macOS: inside Docker the platform is linux/arm64, not macOS, which is why it can't find any compatible wheels 😂
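A one-liner run inside the container makes this visible (sketch):

```shell
# pip matches against the *container's* platform, not the Mac host's:
python3 -c 'import sysconfig, platform; print(sysconfig.get_platform(), platform.machine())'
# inside a Linux container on Apple silicon this reports linux-aarch64, so
# macosx_*_arm64 wheels can never match there
```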
