
staged-dockerfile: support Dockerfile builder with distributed builds #2215

Open · distorhead opened this issue Mar 10, 2020 · 15 comments

@distorhead (Member)

The Dockerfile builder should map Dockerfile instructions onto stapel build stages. Stapel build stages will be stored in the stages storage.
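
For illustration only (hypothetical helper names, not werf's actual API): the idea is that every Dockerfile instruction becomes a stapel-style stage, and each stage is addressed in the stages storage by a content digest plus a timestamp, similar to the stage tags that show up in the logs later in this issue.

// Illustrative sketch, not werf code: turning a Dockerfile instruction
// into a stage record addressed in the stages storage.
package main

import (
	"crypto/sha256"
	"fmt"
	"time"
)

type Stage struct {
	Instruction string // e.g. "RUN apk add --no-cache git"
	Digest      string // content-addressed digest of this stage
	StorageTag  string // tag used in the stages storage repo
}

func newStage(prevDigest, instruction string) Stage {
	sum := sha256.Sum256([]byte(prevDigest + "\n" + instruction))
	digest := fmt.Sprintf("%x", sum)[:56] // truncated, mimicking the stage tags seen below
	return Stage{
		Instruction: instruction,
		Digest:      digest,
		// Stage tags in the stages storage look like "<digest>-<unix-millis>".
		StorageTag: fmt.Sprintf("%s-%d", digest, time.Now().UnixMilli()),
	}
}

func main() {
	prev := ""
	for _, instr := range []string{"FROM alpine", "RUN apk add --no-cache git"} {
		st := newStage(prev, instr)
		prev = st.Digest
		fmt.Printf("%-30s -> %s\n", st.Instruction, st.StorageTag)
	}
}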

@distorhead distorhead changed the title Support Dockerfile builder with distributed builds staged-dockerfile: support Dockerfile builder with distributed builds Sep 27, 2022
@distorhead (Member Author) commented Sep 27, 2022

Implement container backend support for Dockerfile:

  • Use the driver pkg/buildah.
  • Implement BuildDockerfileStage (a rough sketch of an options-driven call follows this list):
    • no Dockerfile []byte param, no parsing.
    • pass all params using options.
    • map all Dockerfile instructions into options.

Clean up the buildah driver:

  • remove the obsolete docker-with-fuse mode.
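
A hedged sketch of what an options-driven BuildDockerfileStage could look like; the struct and interface below are illustrative assumptions, not the real pkg/container_backend or pkg/buildah signatures.

// Illustrative sketch only; the real werf/buildah types differ.
package backend

import "context"

// DockerfileStageOptions carries everything a single instruction needs,
// so the backend never receives or parses raw Dockerfile bytes.
type DockerfileStageOptions struct {
	BaseImage   string            // image the stage starts from
	Instruction string            // e.g. "RUN", "COPY", "ENV"
	Args        []string          // instruction arguments
	Env         map[string]string // already-resolved ENV/ARG values
	Labels      map[string]string // labels to set on the result
}

// ContainerBackend is a hypothetical interface: one call per instruction,
// with all parameters passed explicitly via options.
type ContainerBackend interface {
	BuildDockerfileStage(ctx context.Context, opts DockerfileStageOptions) (newImageID string, err error)
}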

@distorhead (Member Author)

Refactor mapping of werf.yaml to image+stages:

  • Move image+stages initialization into a separate package.
  • Define an interface to convert the parsed werf.yaml or Dockerfile to image+stages.
  • Do not implement the Dockerfile to image+stages mapping yet.

@distorhead (Member Author)

Refactor mapping of werf.yaml to image+stages:

Done in #4973.

@distorhead (Member Author)

The next step is to implement 3 parts of the new Dockerfile builder (a hedged sketch of the data flow follows this list):

  1. Convert config.WerfConfig to a dockerfile.Dockerfile object. dockerfile.Dockerfile is the parsed stages tree; all data from the Dockerfile should be accessed only through this struct.
  2. Construct images+stages (ImagesSets) from dockerfile.Dockerfile for use in the werf conveyor, which builds images in parallel.
  3. Implement the new stage.DockerfileInstruction stage object:
    • Calculate the digest.
    • Configure the builder for the backend.
    • Add some unit tests for this stage.
    • Implement dependencies between werf.yaml images.
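
To make the three parts concrete, here is a hedged sketch of the data flow. The type and method names are assumptions for illustration (only config.WerfConfig, dockerfile.Dockerfile and stage.DockerfileInstruction come from the plan above), not werf's actual interfaces.

// Illustrative pipeline sketch; not the actual werf interfaces.
package plan

import "context"

// Dockerfile is the parsed stages tree; all Dockerfile data is accessed
// only through this structure (part 1).
type Dockerfile struct {
	Stages []DockerfileStage
}

type DockerfileStage struct {
	Name         string        // e.g. "builder", or "" for unnamed stages
	BaseName     string        // FROM argument
	Instructions []Instruction // parsed instructions in order
}

type Instruction struct {
	Name string // "RUN", "COPY", ...
	Args []string
}

// Converter builds the images+stages sets that the conveyor can process
// in parallel (part 2).
type Converter interface {
	ImagesSets(ctx context.Context, df *Dockerfile) ([][]ImageSpec, error)
}

// ImageSpec stands in for the conveyor's image+stages description; every
// instruction-level stage must know how to calculate its digest and
// configure the backend builder (part 3).
type ImageSpec struct {
	Name   string
	Stages []InstructionStage
}

type InstructionStage interface {
	Digest(prevStageDigest string) (string, error)
	ConfigureBuilder(opts map[string]string) error
}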

distorhead added a commit that referenced this issue Oct 10, 2022
… staged dockerfile builder

* Implemented test instruction building using the new staged-dockerfile building method of the container backend.
* Refactored the conveyor so that the new builder is now correctly coupled to the existing building mechanics.

refs #2215

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
@distorhead (Member Author)

Implemented working conveyor + backend build mechanics. There is no real Dockerfile parsing yet; only the RUN instruction is implemented at the conveyor-stage level (digest calculation). A rough sketch of the digest idea follows the list below.

Next steps:

  • Implement basic primitives for Dockerfile parsing.
  • Implement all stages at the conveyor-stage level.
  • Implement container backend instruction options (for example, healthcheck options).
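
A minimal sketch of the digest idea for the RUN instruction (hypothetical helper, not the actual pkg/build/stage code):

// Hypothetical digest calculation for a RUN instruction at the
// conveyor-stage level; illustration only, not werf's implementation.
package stage

import (
	"crypto/sha256"
	"encoding/hex"
	"strings"
)

// RunInstructionDigest mixes the previous stage digest with the
// instruction name and its arguments: the command text itself is the
// cache key, so changing the command changes the digest and forces a rebuild.
func RunInstructionDigest(prevStageDigest string, command []string) string {
	h := sha256.New()
	h.Write([]byte(prevStageDigest))
	h.Write([]byte("RUN"))
	h.Write([]byte(strings.Join(command, " ")))
	return hex.EncodeToString(h.Sum(nil))
}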

distorhead added a commit that referenced this issue Oct 12, 2022
…o dockerfile parser package

Embed instructions data into container backend instructions.

refs #2215

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
distorhead added a commit that referenced this issue Oct 14, 2022
…configuration section

refs #2215

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
@distorhead (Member Author) commented Oct 25, 2022

Status update: implemented a full working skeleton of the staged Dockerfile builder.

Next steps:

  • The Buildah container backend should support most instruction options:
    • ADD
    • CMD
    • COPY
    • ENTRYPOINT
    • ENV
    • EXPOSE
    • HEALTHCHECK
    • LABEL
    • MAINTAINER
    • ONBUILD
    • RUN
      • common mounts support
      • ssh / secret mounts support
    • SHELL
    • STOPSIGNAL
    • USER
    • VOLUME
    • WORKDIR
  • Correct digest calculation and builder configuration for all instructions (full pkg/build/stage/instruction implementation), with tests for digests.
  • Dockerfile parser ADD --checksum support.
  • Basic Dockerfile build-args support.
  • Basic werf.yaml dependencies directive support.
  • WARNING: there is no way to ignore the Dockerfile when building an image from a compressed context read from STDIN, due to a Docker limitation.
  • Dependencies + meta-args + FROM instruction support.
  • E2e test for dependencies.
  • Support the add_host, network and ssh werf.yaml dockerfile config directives.
  • Support the ONBUILD instruction from the base image.
  • BUG: changing the base image in the FROM instruction does not cause image rebuilding.
  • Check the case when several werf.yaml images reference different targets of the same Dockerfile: common stages should map to the same werf images in the conveyor.
  • Check the case when two Dockerfile stages have the same common instructions: they should use common cache stages (a toy illustration follows this list).
  • Do not store the intermediate Dockerfile stages cache in the --final-repo, only target stages.
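
The toy illustration referenced in the checklist: because each stage digest depends only on the previous digest and the instruction content, two Dockerfile stages that start with the same instructions resolve to the same digests and can share cache stages; the chains diverge at the first differing instruction. This is an assumption-level sketch, not werf code.

// Toy illustration: identical instruction prefixes in two Dockerfile
// stages produce identical digest chains, so they can share cache stages.
package main

import (
	"crypto/sha256"
	"fmt"
)

func chain(prevDigest, instruction string) string {
	sum := sha256.Sum256([]byte(prevDigest + "\n" + instruction))
	return fmt.Sprintf("%x", sum[:6])
}

func main() {
	stageA := []string{"FROM golang:1.19", "RUN go env", "COPY a.go /src/"}
	stageB := []string{"FROM golang:1.19", "RUN go env", "COPY b.go /src/"}

	da, db := "", ""
	for i := range stageA {
		da, db = chain(da, stageA[i]), chain(db, stageB[i])
		fmt.Printf("step %d: A=%s B=%s shared=%v\n", i, da, db, da == db)
	}
}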

distorhead added a commit that referenced this issue Nov 7, 2022
Implemented 2-stage build-args expansion:
1. Expand all build args except dependencies args at the early Dockerfile parsing stage.
2. Expand dependencies args when the dependency image names become available during image conveyor processing.

refs #2215

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
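
The two-stage build-args expansion described in the commit above can be sketched roughly like this (hypothetical code; the arg names are made up for illustration): ordinary build args are expanded while parsing, and dependency-derived args are substituted later, once the conveyor knows the dependency image names.

// Illustration of two-pass build-arg expansion; not werf's implementation.
package main

import (
	"fmt"
	"os"
)

// expand replaces ${NAME}/$NAME with known values and keeps unknown
// variables as-is so a later pass can resolve them.
func expand(s string, values map[string]string) string {
	return os.Expand(s, func(name string) string {
		if v, ok := values[name]; ok {
			return v
		}
		return "${" + name + "}" // leave placeholder for the next pass
	})
}

func main() {
	lines := []string{
		"FROM ${BASE_IMAGE}",
		"LABEL dependency-image=${APP_DEP_IMAGE}",
	}

	for _, line := range lines {
		// Pass 1: only ordinary build args are known at parse time.
		pass1 := expand(line, map[string]string{"BASE_IMAGE": "alpine:3.17"})
		// Pass 2: dependency image names become known during conveyor processing.
		pass2 := expand(pass1, map[string]string{"APP_DEP_IMAGE": "registry.example.com/dep:tag"})
		fmt.Println(pass2)
	}
}
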
distorhead added a commit that referenced this issue Nov 7, 2022
…lding

refs #2215

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
distorhead added a commit that referenced this issue Nov 15, 2022
refs #2215

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
distorhead added a commit that referenced this issue Nov 16, 2022
refs #2215

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
distorhead added a commit that referenced this issue Nov 16, 2022
…ations)

refs #2215

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
distorhead added a commit that referenced this issue Nov 17, 2022
…ations)

refs #2215

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>

@distorhead (Member Author)

Check that the following environment variables / build arguments are available during the build of a Dockerfile with the staged builder:

  • WERF_COMMIT_HASH
  • WERF_COMMIT_TIME_HUMAN
  • WERF_COMMIT_TIME_UNIX

distorhead added a commit that referenced this issue Dec 9, 2022
…the final-repo

* Do not store non-target Dockerfile stages in the final-repo, nor set custom tags on non-target stages.
* Changed staged-dockerfile builder digest calculation algorithm: do not use instruction number in digest input calculation, only instruction name.

refs #2215

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
@distorhead (Member Author)

Bug report:

Error: phase build after image stage/builder stages handler failed: unable to add custom image tags to stages storage: unable to add stage 2aaca860feef2e0753658f282d1f05e061d4130e28c4c0aa7a033218-1670252379768 custom tag stage/builder-latest in the storage harbor.local/images/infra/dev/argo-demo/app3: PUT https://harbor.local/v2/images/infra/dev/argo-demo/app3/manifests/stage/builder-latest: unexpected status code 404 Not Found: 404 page not found

Input factors:

  • Custom tags
  • Internal stage naming stage/builder
  • Harbor registry

Fixed by #5142.

distorhead added a commit that referenced this issue Dec 12, 2022
…the final-repo

* Do not store non-target Dockerfile stages in the final-repo, nor set custom tags on non-target stages.
* Changed staged-dockerfile builder digest calculation algorithm: do not use instruction number in digest input calculation, only instruction name.

refs #2215

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
distorhead added a commit that referenced this issue Dec 17, 2022
…the final-repo

* Do not store non-target Dockerfile stages in the final-repo, nor set custom tags on non-target stages.
* Changed staged-dockerfile builder digest calculation algorithm: do not use instruction number in digest input calculation, only instruction name.

refs #2215

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
@distorhead (Member Author)

Some newly reported problems:

  1. too many open files:
Error: phase build on image app stage copy11 handler failed: unable to calculate build context globs checksum: unable to get build context dir: unable to extract context tar to tmp context dir "/tmp/werf-project-data-3782322004/juu0lgcquo/context2490369233": unable to create new file "/tmp/werf-project-data-3782322004/juu0lgcquo/context2490369233/tests/api/subway_stations/__init__.py" while extracting tar: open /tmp/werf-project-data-3782322004/juu0lgcquo/context2490369233/tests/api/subway_stations/__init__.py: too many open files

It only started working after adding limits for the user in /etc/security/limits.conf; the WERF_BUILDAH_ULIMIT variable did not help (a generic sketch of raising this limit follows the list).

  2. slirp4netns and uidmap are required, but the installation instructions lack this information.
  3. ENV PATH="${PATH}:/root/.local/bin" is not working as expected.
  4. COPY / /app/ gives an error:
Error: phase build on image app stage copy6 handler failed: unable to calculate build context globs checksum: unable to calculate build context paths checksum: unable to calculate hash: unable to stat "/tmp/werf-project-data-4184094232/fhrsacgnc6/context2894825107/tmp/werf-project-data-4184094232/fhrsacgnc6/context2894825107/docker/wait-for": stat /tmp/werf-project-data-4184094232/fhrsacgnc6/context2894825107/tmp/werf-project-data-4184094232/fhrsacgnc6/context2894825107/docker/wait-for: no such file or directory
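
The generic sketch referenced in problem 1: a Linux process can raise its own soft open-files limit up to the hard limit, which is the kind of limit the limits.conf change adjusts at the user level. This is generic Linux code, not how werf handles WERF_BUILDAH_ULIMIT.

// Generic Linux sketch (not werf code): raise the soft RLIMIT_NOFILE
// value up to the hard limit for the current process.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		panic(err)
	}
	fmt.Printf("before: soft=%d hard=%d\n", lim.Cur, lim.Max)

	lim.Cur = lim.Max // the soft limit may be raised up to the hard limit
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		panic(err)
	}
	fmt.Printf("after:  soft=%d hard=%d\n", lim.Cur, lim.Max)
}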

@distorhead (Member Author)

Bug report:

┌ ⛵ image xxx
│ Use cache image for xxx/from
│      name: localhost:5000/quickstart-application-1:a71052baf9c6ace8171e59a2ae5ea1aede3fb89aa95d160ec354b205-1672069279117
│        id: ec0477f5c739
│   created: 2022-12-26 18:41:17 +0300 MSK
│      size: 30.1 MiB
└ ⛵ image xxx (0.12 seconds) FAILED

Running time 5.71 seconds
Error: phase build on image xxx stage dependenciesAfterInstall handler failed: unable to get import 0 source checksum: unable to generate import source checksum: unable to calculate dependency import checksum in localhost:5000/quickstart-application-1:a71052baf9c6ace8171e59a2ae5ea1aede3fb89aa95d160ec354b205-1672069279117: unable to open file "/home/distorhead/.local/share/containers/storage/overlay/b64e9d8e9ff9f0ec948599288748a69537346918e32e0fe7e821a8f905ce6e9c/merged/etc/alternatives/which": open /home/distorhead/.local/share/containers/storage/overlay/b64e9d8e9ff9f0ec948599288748a69537346918e32e0fe7e821a8f905ce6e9c/merged/etc/alternatives/which: no such file or directory

Configuration:

# werf.yaml
---
artifact: arti
from: ubuntu:22.04
---
image: xxx
from: ubuntu:22.04
import:
- artifact: arti
  after: install
  add: /etc
  to: /myetc

@distorhead (Member Author) commented Dec 27, 2022

Bug report: the following Dockerfile does not build with the staged-dockerfile builder:

# Dockerfile
FROM alpine
ENV PATH="${PATH}:/opt/bin"
RUN echo $PATH
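
The tricky part in this Dockerfile is expanding ${PATH} inside ENV against the environment of the image being built (taken from the base image config) rather than the host environment. A minimal illustration of that expansion step follows; the helper is hypothetical, not werf's parser, and not necessarily the actual root cause of this bug.

// Illustration only: expand an ENV value against the image's current
// environment (from the base image config), not the host environment.
package main

import (
	"fmt"
	"os"
)

func expandEnvValue(value string, imageEnv map[string]string) string {
	return os.Expand(value, func(name string) string {
		return imageEnv[name] // unknown names expand to ""
	})
}

func main() {
	// PATH as defined in the base image config (alpine).
	imageEnv := map[string]string{
		"PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	}
	// ENV PATH="${PATH}:/opt/bin"
	imageEnv["PATH"] = expandEnvValue(`${PATH}:/opt/bin`, imageEnv)
	fmt.Println(imageEnv["PATH"])
}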

Update: fixed by #5195.

@distorhead (Member Author) commented Dec 28, 2022

Provide a meaningful message that staged: true is available only for the Buildah backend and is not available for the Docker server backend. Currently the following panic occurs:

werf build
Version: v1.2.193
##### SOMETHING ABOUT GITERMINISM
https://werf.io/documentation/usage/project_configuration/giterminism.html.
Using werf config render file: /tmp/werf/werf-config-render-1708726254

┌ ⛵ image app
-- app SetupBaseImage "alpine:3.17"
-- app SetupBaseImage "alpine:3.17" -> &image.Info{Name:"alpine:3.17", Repository:"alpine", Tag:"3.17", RepoDigest:"alpine@sha256:8914eb54f968791faf6a8638949e480fef81e697984fba772b3976835194c6d4", OnBuild:[]string(nil), ID:"sha256:49176f190c7e9cdb51ac85ab6c6d5e4512352218190cd69b08e6fd803ffbf3da", ParentID:"sha256:60643c78796d4d33b3533adf6df1994ab846fb22ca117abe6f6cbc53d93e5205", Labels:map[string]string(nil), Size:7049688, CreatedAtUnixNano:1669155569008562326}
│ ┌ Building stage app/RUN1
panic: not implemented
goroutine 1 [running]:
github.com/werf/werf/pkg/container_backend.(*DockerServerBackend).BuildDockerfileStage(0xc000984a80?, {0xc00121a7f0?, 0x0?}, {0xc0013b01d0?, 0x6?}, {{}, {0xc001222390?, 0x404e92?}}, {0xc000984b00, 0x8, ...})
	/git/pkg/container_backend/docker_server_backend.go:90 +0x27
github.com/werf/werf/pkg/container_backend/stage_builder.(*DockerfileStageBuilder).Build(0xc0001739e0, {0x3d7f928, 0xc000a52000}, {0x78?, 0x2?}

Update: fixed in main.
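
For reference, the requested behavior is roughly the following kind of guard instead of a panic. This is a simplified sketch: the method name follows the trace above, but the signature and the exact error text are illustrative assumptions.

// Simplified sketch; the real signature in pkg/container_backend differs.
package container_backend

import (
	"context"
	"fmt"
)

type DockerServerBackend struct{}

// BuildDockerfileStage is not supported by the Docker server backend, so
// return a descriptive error instead of panicking: the user should learn
// that staged: true requires the Buildah container backend.
func (b *DockerServerBackend) BuildDockerfileStage(ctx context.Context, baseImage string) (string, error) {
	return "", fmt.Errorf("staged Dockerfile builder (staged: true) is only available with the Buildah container backend and is not supported by the Docker server backend (base image %q)", baseImage)
}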

@distorhead (Member Author) commented Feb 16, 2023

Panic on main branch:

│      name: ghcr.io/distorhead/quickstart-application:1670e5d66511df3fd98ccc38261195eccb5e9d6c0a7c6ba88cb3c765-1676554514630
│        id: aa57e3159413
│   created: 2023-02-16 16:35:13 +0300 MSK
│      size: 283.6 MiB (+2.5 MiB)
└ ⛵ image worker/stage/builder (21.13 seconds)

┌ ⛵ image worker
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x50 pc=0x28b7c85]

goroutine 338 [running]:
github.com/werf/werf/pkg/build.(*Conveyor).GetImageNameForLastImageStage(0xc000639510?, {0xc0008f4d80?, 0xc001728900?})
        /home/distorhead/work/werf/pkg/build/conveyor.go:709 +0x25
github.com/werf/werf/pkg/build/stage.(*DependenciesStage).GetDependencies(0xc00152c8c0, {0x3defcc8?, 0xc00191adb0}, {0x3e0ab80, 0xc000f1bb80}, {0x3e0bc40?, 0xc000c5bd40}, 0x1af79b2?, 0x30c49c0?, {0x0, ...})
        /home/distorhead/work/werf/pkg/build/stage/dependencies.go:89 +0x6c3
github.com/werf/werf/pkg/build.(*BuildPhase).calculateStage(0xc0014812c0, {0x3defcc8, 0xc00191adb0}, 0xc000c6fce0, {0x3e0fd40, 0xc000014f00})
        /home/distorhead/work/werf/pkg/build/build_phase.go:594 +0x11a
github.com/werf/werf/pkg/build.(*BuildPhase).onImageStage(0xc0014812c0, {0x3defcc8, 0xc00191adb0}, 0xc000c6fce0, {0x3e0fd40?, 0xc000014f00?}, 0x1?)
        /home/distorhead/work/werf/pkg/build/build_phase.go:451 +0x174
github.com/werf/werf/pkg/build.(*BuildPhase).OnImageStage.func1(0xc0012d5760?, {0x3e0fd40?, 0xc000014f00?}, 0x60?)
        /home/distorhead/work/werf/pkg/build/build_phase.go:426 +0x45
github.com/werf/werf/pkg/build.(*StagesIterator).OnImageStage(0xc0004e7700, {0x3defcc8, 0xc00191adb0}, 0xc000c6fce0, {0x3e0fd40, 0xc000014f00}, 0xc000639a40)
        /home/distorhead/work/werf/pkg/build/stages_iterator.go:58 +0x2d0
github.com/werf/werf/pkg/build.(*BuildPhase).OnImageStage(0xc0012d5740?, {0x3defcc8?, 0xc00191adb0?}, 0xc001586300?, {0x3e0fd40?, 0xc000014f00?})
        /home/distorhead/work/werf/pkg/build/build_phase.go:425 +0x65
github.com/werf/werf/pkg/build.(*Conveyor).doImage.func2()
        /home/distorhead/work/werf/pkg/build/conveyor.go:586 +0xb43
github.com/werf/logboek/internal/stream.(*Stream).logProcess.func1()
        /home/distorhead/go/pkg/mod/github.com/werf/logboek@v0.5.4/internal/stream/process.go:150 +0x1b
github.com/werf/logboek/internal/stream.(*Stream).logProcess(0xc001728900, {0xc000b7d8d0?, 0x38?}, 0xc000dcf8c0, 0xc0004e75c0)
        /home/distorhead/go/pkg/mod/github.com/werf/logboek@v0.5.4/internal/stream/process.go:157 +0x1cf
github.com/werf/logboek/internal/stream.(*LogProcess).DoError(0xc0004e7580, 0xc0004e75c0)
        /home/distorhead/go/pkg/mod/github.com/werf/logboek@v0.5.4/internal/stream/process_types.go:201 +0xa5
github.com/werf/werf/pkg/build.(*Conveyor).doImage(0x32923e0?, {0x3defcc8?, 0xc00191adb0}, 0xc000c6fce0, {0xc0005b1dd0, 0x1, 0x1}, 0x55?)
        /home/distorhead/work/werf/pkg/build/conveyor.go:568 +0x1eb
github.com/werf/werf/pkg/build.(*Conveyor).doImagesInParallel.func3({0x3defcc8?, 0xc00191adb0?}, 0xc00191e040?)
        /home/distorhead/work/werf/pkg/build/conveyor.go:547 +0xec
github.com/werf/werf/pkg/util/parallel.DoTasks.func1()
        /home/distorhead/work/werf/pkg/util/parallel/parallel.go:80 +0x304
created by github.com/werf/werf/pkg/util/parallel.DoTasks
        /home/distorhead/work/werf/pkg/util/parallel/parallel.go:73 +0x225
distorhead@ordenador:~/work/quickstart-application$ vi /home/distorhead/work/werf/pkg/build/conveyor.go +709

Update: fixed in main.

distorhead added a commit that referenced this issue Mar 17, 2023
…le only for buildah backend and not available for docker server backend

Refs #2215 (comment)

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
distorhead added a commit that referenced this issue Mar 21, 2023
…le only for buildah backend and not available for docker server backend

Refs #2215 (comment)

Signed-off-by: Timofey Kirillov <timofey.kirillov@flant.com>
@distorhead (Member Author)

New case:

ADD / /app

results in error, while

ADD . /app

works ok.
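
A plausible (unconfirmed) explanation, given the earlier COPY / error where the context directory appears twice in the path, is how an absolute source path gets resolved against the extracted build context. A generic sketch of resolving a context-relative source, treating a leading "/" as the context root rather than joining absolute paths verbatim; this is not werf's actual code:

// Generic sketch (not werf code): resolve an ADD/COPY source against the
// extracted build context dir, treating "/" and "." both as the context root.
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func resolveContextPath(contextDir, src string) string {
	src = strings.TrimPrefix(src, "/") // "/" means "relative to the context root"
	return filepath.Join(contextDir, src)
}

func main() {
	contextDir := "/tmp/werf-context-123"
	for _, src := range []string{"/", ".", "/docker/wait-for"} {
		fmt.Printf("%-20q -> %s\n", src, resolveContextPath(contextDir, src))
	}
}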
