arm & arm64 support #828

Open
kusold opened this issue Aug 21, 2023 · 14 comments
Labels
enhancement New feature or request

Comments

kusold commented Aug 21, 2023

Describe the feature you'd like to have.

I would like to be able to run VolSync on arm and arm64 processors.

I am attempting to use VolSync on a Raspberry Pi 4 cluster (arm) and on Oracle's Ampere A1 (arm64), but the manager fails to run because the image is built for the wrong architecture. The underlying storage movers have builds that support these architectures.

What is the value to the end user? (why is it a priority?)

A business justification is that AWS Graviton instances use arm64 binaries and support running EKS. Also, running on a Raspberry Pi is always fun.

How will we know we have a good solution? (acceptance criteria)

CI should successfully cross-build for the various architectures and publish multi-platform Docker images, allowing VolSync to run on arm and arm64 processors.

Additional context

kusold added the enhancement (New feature or request) label Aug 21, 2023

zimbres commented Aug 27, 2023

Until it's official, I have an image built for arm64 as well:

https://hub.docker.com/r/zimbres/volsync/tags

pl4nty commented Sep 20, 2023

This is a dupe of #574. My image has run fine on Oracle for months, so we might just need CI support.

@JohnStrunk (Member)

Looks like support for ARM may be coming to GH actions. That would allow us to test and release multi-arch.
https://github.blog/changelog/2023-10-30-accelerate-your-ci-cd-with-arm-based-hosted-runners-in-github-actions/

@ppoloskov

@zimbres can you kindly update it to 0.8.0?

zimbres commented Nov 16, 2023

It's done.

Tags "latest" and "main" track the main branch. Tag "0.8.0" tracks the release-0.8 branch.

samip5 commented Nov 17, 2023

I'm noticing that the quay.io/backube/volsync:0.8.0 image is not arm64 compatible.

exec /mover-restic/entry.sh: exec format error

Aka quay.io/backube/volsync@sha256:b0969dce78b900412303153f0761b2233204164572eff1aeebf31707db7e20db

samip5 commented Nov 17, 2023

It's done.

Tags "latest" and "main" track the main branch. Tag "0.8.0" tracks the release-0.8 branch.

Seems like not quite, at least when trying to use it for restic:

exec /mover-restic/entry.sh: no such file or directory

samip5 commented Nov 17, 2023

amd64 and arm64 supported: registry.samipsolutions.fi/library/volsync:0.8.0

onedr0p (Contributor) commented Nov 18, 2023

I also have an image built and pushed here, based on Alpine. It only includes rclone and restic, but I am open to PRs if people want any other mover.

Container: https://github.com/onedr0p/containers/pkgs/container/volsync
Source: https://github.com/onedr0p/containers/tree/main/apps/volsync

@tuxpeople

Looks like support for ARM may be coming to GH actions. That would allow us to test and release multi-arch. https://github.blog/changelog/2023-10-30-accelerate-your-ci-cd-with-arm-based-hosted-runners-in-github-actions/

@JohnStrunk just as a side note: you don't need a GitHub runner on arm to build arm images. I've built many multi-arch container images on GitHub, even for sparc and other crazy environments.

@JohnStrunk (Member)

@tuxpeople Do you have a good example you could point me to?

There's also the issue of building vs. testing. Today, we test the built containers via the e2e tests in kind. My assumption is we still wouldn't be able to test non-x86-64 images (please correct me here). I'm wondering what the community's thoughts are around that... x64 gets tested, everything else is 🤷‍♂️.

onedr0p (Contributor) commented Dec 13, 2023

@JohnStrunk would it be better to pull binaries out of the official images like this? It would lessen the support burden of maintaining and compiling these tools in this project.

https://github.com/onedr0p/containers/blob/main/apps/volsync/Dockerfile#L21-L22

https://github.com/onedr0p/containers/blob/main/apps/volsync/Dockerfile#L47-L48

On the matter of s390x support, maybe that can be dropped? I'm not sure if there are any users on that platform; it seems like an esoteric platform.
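
For illustration, a minimal sketch of the binary-extraction approach those links show, assuming the upstream restic and rclone images publish multi-arch, statically linked binaries; the source tags and binary paths here are assumptions, not a drop-in for the linked Dockerfile:

```Dockerfile
# Sketch only: reuse prebuilt multi-arch mover binaries instead of compiling them in this repo.
# Source image tags and binary paths are assumptions; see the linked Dockerfile for the real ones.
FROM docker.io/restic/restic:0.16.2 AS restic
FROM docker.io/rclone/rclone:1.65.0 AS rclone

FROM docker.io/library/alpine:3.19
# With buildx, each FROM above resolves to the variant matching the target platform,
# so the copied binaries are already the right architecture.
COPY --from=restic /usr/bin/restic /usr/local/bin/restic
COPY --from=rclone /usr/local/bin/rclone /usr/local/bin/rclone
```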

@JohnStrunk (Member)

From an upstream VolSync standpoint, yes, it would be easier (assuming they are statically linked, of course). The complication in much of the build process is that we also ship this as a supported Red Hat product, and that comes with requirements for things like CVE remediation and FIPS support. We have to trade off the simplicity of the upstream build process with what is required for the downstream builds in order to comply with our internal processes (and not have to do everything twice).

... and we're required to ship s390x 🙄... I wish we could drop it.

tuxpeople commented Dec 13, 2023

Hi @JohnStrunk

Sorry, it wasn't sparc, I was wrong. Sparc isn't in the list.

@tuxpeople Do you have a good example you could point me to?

Not sure how "good" it is. But here is one:

Definition of platforms:
https://github.com/tuxpeople/docker-podsync/blob/b98f4354aa547b69a39a38638edd1ec7408d07ff/.github/workflows/release.yml#L17-L18

Prepare build environment:
https://github.com/tuxpeople/docker-podsync/blob/b98f4354aa547b69a39a38638edd1ec7408d07ff/.github/workflows/release.yml#L113-L118

Build:
https://github.com/tuxpeople/docker-podsync/blob/b98f4354aa547b69a39a38638edd1ec7408d07ff/.github/workflows/release.yml#L133-L153

This approach supports the following platforms:

"platforms": "linux/amd64,linux/amd64/v2,linux/amd64/v3,linux/arm64,linux/riscv64,linux/ppc64le,linux/s390x,linux/386,linux/mips64le,linux/mips64,linux/arm/v7,linux/arm/v6"

There's also the issue of building vs. testing. Today, we test the built containers via the e2e tests in kind. My assumption is we still wouldn't be able to test non-x86-64 images (please correct me here). I'm wondering what the community's thoughts are around that... x64 gets tested, everything else is 🤷‍♂️.

I do it like this: I test the x86_64 container and assume that the other platforms work as well. I understand that this may be insufficient for you. I don't know whether it would be possible using QEMU and an emulated environment, or whether you would have to wait for arm runners for e2e testing.

Edit:

I've no idea about your tests, but if docker run works for you, please see this as a (potential?) solution: https://github.com/orgs/community/discussions/38728#discussioncomment-6324428
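
In workflow terms, that QEMU-based docker run approach might look like the step below; the image tag and the --version command are assumptions, just to show the shape of a smoke test:

```yaml
# Sketch: smoke-test the non-native image under QEMU on an x86-64 runner.
# Emulation is slow and no substitute for real-hardware e2e, but it would catch
# "exec format error"-style problems like the one reported above.
- uses: docker/setup-qemu-action@v3
- name: Smoke test the arm64 image
  # The command passed to the container is an assumption; any quick, non-destructive
  # invocation that proves the binary executes would do.
  run: docker run --rm --platform linux/arm64 quay.io/backube/volsync:latest --version
```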
