[ros] Provide exact version numbers to ensure rebuilds #112

Open
mikaelarguedas opened this issue Jan 26, 2018 · 30 comments

@mikaelarguedas
Contributor

Context in docker-library/official-images#3890.

TL;DR: the Docker images don't get rebuilt because the version of the metapackages never changes. We need a way to get the rebuilds re-triggered.

Possible solution: provide exact version numbers for the packages in the Dockerfiles so that the Docker cache is invalidated. One con is that these versions differ for each platform/arch, so we need to find a way to conditionally set the right version for each arch without duplicating Dockerfiles.
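
For illustration (an example added for clarity, not part of the original report; the version string is borrowed from tianon's examples later in this thread): an unpinned install line never changes, so Docker reuses the cached layer indefinitely, while a pinned line changes whenever a new version is released, which is exactly what invalidates the cache.

# unpinned: the instruction text never changes, so the cached layer is reused even after a sync
RUN apt-get update && apt-get install -y ros-kinetic-ros-core && rm -rf /var/lib/apt/lists/*

# pinned: the version string changes at each release, so this layer (and everything after it) is rebuilt
RUN apt-get update && apt-get install -y ros-kinetic-ros-core=1.3.1-0jessie-20171116-213027-0800 && rm -rf /var/lib/apt/lists/*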

@ruffsl
Member

ruffsl commented Jan 26, 2018

What do you suppose our options would be here, in addition to what tianon proposed in docker-library/official-images#3890? It would be a shame if updating one single arch consequently broke the cache of all the others. I'm not sure how to avoid that given a single Dockerfile and different version strings. @nuclearsandwich, do you spot a different angle?

@nuclearsandwich
Member

What if we added a property to the Dockerfile that was an environment variable with a sync timestamp, similar to the "daily invalidation" datestamp we use to conditionally rebuild Docker images on the ROS buildfarm?

If I'm not mistaken, that would be a change sufficient to re-trigger image generation; it's architecture independent, easy enough to do by hand, and it's possible to automatically generate the PR in the future.
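
A minimal sketch of that idea (the variable name and date are illustrative, not an agreed-upon convention): the value only changes at a sync, and since changing an ENV instruction invalidates the cache of every later layer, the install steps below it get re-run.

# bumped after each sync; busts the cache of all subsequent layers
ENV ROS_SYNC_DATESTAMP 2018-01-26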

@mikaelarguedas
Contributor Author

What if we added a property to the Dockerfile that was an environment variable with a sync timestamp, similar to the "daily invalidation" datestamp we use to conditionally rebuild Docker images on the ROS buildfarm?

Yeah, that'd be the approach I'd take as well 👍

@ruffsl
Member

ruffsl commented Jan 27, 2018

added a property to the Dockerfile

Can you link to the example? Do you mean a Dockerfile build arg?

@mikaelarguedas
Contributor Author

In ros_buildfarm and ros2 we invalidate Dockerfiles on a regular basis.
Some of them are invalidated daily. This is done by adding an echo statement to the Dockerfile at empy expansion time: https://github.com/ros-infrastructure/ros_buildfarm/blob/2499ffaf66e311bfd4710ac3798bf7fbef051a3d/ros_buildfarm/templates/doc/doc_independent_task.Dockerfile.em#L31
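
For readers not following the link: after empy expansion, the generated Dockerfile ends up containing a trivial line whose text embeds the current date, roughly like the following (a paraphrase, not copied from the template):

# expanded daily; the changing date string is what busts the build cache
RUN echo "2018-01-26"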

What Steven is suggesting is that we expand the templates of a given distro after each sync, print the sync date in them, and submit the new hash upstream.

Ideally this would be done in a job triggered automatically after each sync.

@ruffsl
Member

ruffsl commented Jan 27, 2018

OK, so it wouldn't avoid breaking the cache of the other arches pointing to the same Dockerfile, but it would provide a minimal foothold to trigger Docker builds on arbitrary syncs.

@nuclearsandwich
Member

OK, so it wouldn't avoid breaking the cache of the other arches pointing to the same Dockerfile,

I'm not sure that I follow here. As far as I know, syncs to the main repositories cover all architectures. If we provide Docker builds for testing syncs then that's different (but not immediately avoidable without architecture-dependent sync variables).

@mikaelarguedas
Contributor Author

Yeah, in theory it would be beneficial not to break the cache of unrelated images. In practice I don't think I've ever seen a sync where only one architecture was impacted and not another.

One thing that may need some more looking into is that most syncs don't see changes within desktop-full (even fewer for base or core). So automatically invalidating all images at each sync, even when there was no change in the lower levels of the stack, seems overkill. So maybe looking at the exact version numbers is valuable after all, as it would allow us to invalidate only when required.

@tianon

tianon commented Jan 29, 2018

Since it's going to bust the cache of all arches no matter which way it goes, it seems like the best solution is to embed all the full metapackage version numbers instead of just a simple timestamp so that the cache busts are contextually meaningful instead of simply time-based.

@nuclearsandwich
Member

@tianon the full version numbers are architecture dependent, so that would require implementing the machinery described in docker-library/official-images#3890 (comment).

I'm not current on Docker build caching. If we update the Dockerfile to bump the versions of one arch as is done in that example, won't that change in that layer bust the cache anyway, since the RUN line is changing?

In which case we might as well stick with timestamps, as they're less work and less machinery to maintain.

it seems like the best solution is to embed all the full metapackage version numbers instead of just a simple timestamp so that the cache busts are contextually meaningful instead of simply time-based.

I think the sync date itself is a meaningful release version, even if it isn't necessarily thought of as such. Package syncs are announced on our forum and come with a change summary, and no packages update in between syncs that I know of. A new sync is, essentially, a new point release of a rosdistro, e.g. ROSkinetic.2018.01.08.

@mikaelarguedas
Contributor Author

My understanding is that it would invalidate the cache for both arches.
From my perspective, the main advantage of using full metapackage versions is that the diff between Dockerfiles becomes meaningful: if there is no diff between the pre-sync and post-sync Dockerfile, there were no changes, so we don't need to submit a PR upstream or rebuild the images. If we just update the date we won't have that information and will always have a diff, so we will rebuild unnecessarily.
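
A sketch of what that check could look like (the generator script name is hypothetical):

#!/usr/bin/env bash
set -euo pipefail
# regenerate the Dockerfile with the post-sync package versions (hypothetical helper script)
./generate_dockerfile.sh > Dockerfile.new
if diff -q Dockerfile Dockerfile.new > /dev/null; then
	echo "no version changes since the last sync: skip the PR and the rebuild"
else
	echo "pinned versions changed: submit the PR upstream and rebuild the images"
fi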

@tianon

tianon commented Jan 29, 2018

Comparing something like RUN : # 2018-01-29 ... (which is a really hacky way to bust cache) to something that embeds the contextually appropriate 1.3.1-0jessie-20171116-213027-0800 instead seems like a no-brainer IMO. The machinery we're discussing here isn't terribly complex. Here's an example shell script to generate that for the jessie+kinetic combination, at least at the ros-core layer:

#!/usr/bin/env bash
set -Eeuo pipefail

arches="$(
	curl -fsSL 'http://packages.ros.org/ros/ubuntu/dists/jessie/Release' \
		| awk -F ': ' '$1 == "Architectures" { print $2 }'
)"

cat <<'EOH'
FROM debian:jessie

RUN set -eux; \
	apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 421C365BD9FF1F717815A3895523BAEEB01FA116; \
	echo 'deb http://packages.ros.org/ros/ubuntu jessie main' > /etc/apt/sources.list.d/ros-latest.list

EOH

for arch in $arches; do
	version="$(
		curl -fsSL "http://packages.ros.org/ros/ubuntu/dists/jessie/main/binary-$arch/Packages" \
			| awk -F ': ' '
				$1 == "Package" { pkg = $2 }
				pkg == "ros-kinetic-ros-core" && $1 == "Version" { print $2 }
			'
	)"
	if [ -z "$version" ]; then
		echo >&2 "# warning: skipping $arch (can't find version for 'ros-kinetic-ros-core')"
		continue
	fi

	echo "ENV ROS_CORE_VERSION_$arch $version"
done

cat <<'EODF'

RUN set -eux; \
	arch="$(dpkg --print-architecture)"; \
	eval "version=\"\$ROS_CORE_VERSION_$arch\""; \
	[ -n "$version" ]; \
	apt-get update; \
	apt-get install -y ros-kinetic-ros-core="$version"; \
	rm -rf /var/lib/apt/lists/*
EODF

Here's an example of what this generates:

FROM debian:jessie

RUN set -eux; \
	apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 421C365BD9FF1F717815A3895523BAEEB01FA116; \
	echo 'deb http://packages.ros.org/ros/ubuntu jessie main' > /etc/apt/sources.list.d/ros-latest.list

ENV ROS_CORE_VERSION_amd64 1.3.1-0jessie-20171116-213027-0800
ENV ROS_CORE_VERSION_arm64 1.3.1-0jessie-20171117-163957-0800
# warning: skipping armhf (can't find version for 'ros-kinetic-ros-core')
# warning: skipping i386 (can't find version for 'ros-kinetic-ros-core')

RUN set -eux; \
	arch="$(dpkg --print-architecture)"; \
	eval "version=\"\$ROS_CORE_VERSION_$arch\""; \
	[ -n "$version" ]; \
	apt-get update; \
	apt-get install -y ros-kinetic-ros-core="$version"; \
	rm -rf /var/lib/apt/lists/*

And the associated build output:

$ ./ros.sh | docker build -
# warning: skipping armhf (can't find version for 'ros-kinetic-ros-core')
# warning: skipping i386 (can't find version for 'ros-kinetic-ros-core')
Sending build context to Docker daemon   2.56kB
Step 1/5 : FROM debian:jessie
 ---> 2fe79f06fa6d
Step 2/5 : RUN set -eux; 	apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 421C365BD9FF1F717815A3895523BAEEB01FA116; 	echo 'deb http://packages.ros.org/ros/ubuntu jessie main' > /etc/apt/sources.list.d/ros-latest.list
 ---> Running in 52e7d17ed3a7
+ apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 421C365BD9FF1F717815A3895523BAEEB01FA116
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.xWqsUIgOSG --no-auto-check-trustdb --trust-model always --primary-keyring /etc/apt/trusted.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-jessie-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-jessie-security-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-jessie-stable.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-stretch-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-stretch-security-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-stretch-stable.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-wheezy-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-wheezy-stable.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 421C365BD9FF1F717815A3895523BAEEB01FA116
gpg: requesting key B01FA116 from hkp server keyserver.ubuntu.com
gpg: key B01FA116: public key "ROS Builder <rosbuild@ros.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1
+ echo deb http://packages.ros.org/ros/ubuntu jessie main
Removing intermediate container 52e7d17ed3a7
 ---> 8ba565f1dfb3
Step 3/5 : ENV ROS_CORE_VERSION_amd64 1.3.1-0jessie-20171116-213027-0800
 ---> Running in 86c3cad4c66f
Removing intermediate container 86c3cad4c66f
 ---> 771bf0339ec0
Step 4/5 : ENV ROS_CORE_VERSION_arm64 1.3.1-0jessie-20171117-163957-0800
 ---> Running in 4ca2bf45ce61
Removing intermediate container 4ca2bf45ce61
 ---> 84f1a6c28e86
Step 5/5 : RUN set -eux; 	arch="$(dpkg --print-architecture)"; 	eval "version=\"\$ROS_CORE_VERSION_$arch\""; 	[ -n "$version" ]; 	apt-get update; apt-get install -y ros-kinetic-ros-core="$version"; 	rm -rf /var/lib/apt/lists/*
 ---> Running in 355901371ac9
+ dpkg --print-architecture
+ arch=amd64
+ eval version="$ROS_CORE_VERSION_amd64"
+ version=1.3.1-0jessie-20171116-213027-0800
+ [ -n 1.3.1-0jessie-20171116-213027-0800 ]
+ apt-get update
Get:1 http://security.debian.org jessie/updates InRelease [63.1 kB]
Get:2 http://packages.ros.org jessie InRelease [4019 B]
Get:3 http://packages.ros.org jessie/main amd64 Packages [372 kB]
Ign http://deb.debian.org jessie InRelease
Get:4 http://deb.debian.org jessie-updates InRelease [145 kB]
Get:5 http://deb.debian.org jessie Release.gpg [2434 B]
Get:6 http://security.debian.org jessie/updates/main amd64 Packages [608 kB]
Get:7 http://deb.debian.org jessie Release [148 kB]
Get:8 http://deb.debian.org jessie-updates/main amd64 Packages [23.1 kB]
Get:9 http://deb.debian.org jessie/main amd64 Packages [9064 kB]
Fetched 10.4 MB in 5s (1746 kB/s)
Reading package lists...
+ apt-get install -y ros-kinetic-ros-core=1.3.1-0jessie-20171116-213027-0800
Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
  autotools-dev binfmt-support binutils bzip2-doc ca-certificates cmake
  cmake-data cpp cpp-4.9 docutils-common docutils-doc file gcc gcc-4.9
  icu-devtools init-system-helpers krb5-locales libalgorithm-c3-perl libapr1
  libapr1-dev libaprutil1 libaprutil1-dev libarchive-extract-perl libarchive13
  libasan0 libasan1 libatomic1 libblas-common libblas3 libboost-all-dev
  libboost-atomic-dev libboost-atomic1.55-dev libboost-atomic1.55.0
  libboost-chrono-dev libboost-chrono1.55-dev libboost-chrono1.55.0
  libboost-context-dev libboost-context1.55-dev libboost-context1.55.0
  libboost-coroutine-dev libboost-coroutine1.55-dev libboost-date-time-dev
  libboost-date-time1.55-dev libboost-date-time1.55.0 libboost-dev
  libboost-exception-dev libboost-exception1.55-dev libboost-filesystem-dev
  libboost-filesystem1.55-dev libboost-filesystem1.55.0 libboost-graph-dev
  libboost-graph-parallel-dev libboost-graph-parallel1.55-dev
  libboost-graph-parallel1.55.0 libboost-graph1.55-dev libboost-graph1.55.0
  libboost-iostreams-dev libboost-iostreams1.55-dev libboost-iostreams1.55.0
  libboost-locale-dev libboost-locale1.55-dev libboost-locale1.55.0
  libboost-log-dev libboost-log1.55-dev libboost-log1.55.0 libboost-math-dev
  libboost-math1.55-dev libboost-math1.55.0 libboost-mpi-dev
  libboost-mpi-python-dev libboost-mpi-python1.55-dev
  libboost-mpi-python1.55.0 libboost-mpi1.55-dev libboost-mpi1.55.0
  libboost-program-options-dev libboost-program-options1.55-dev
  libboost-program-options1.55.0 libboost-python-dev libboost-python1.55-dev
  libboost-python1.55.0 libboost-random-dev libboost-random1.55-dev
  libboost-random1.55.0 libboost-regex-dev libboost-regex1.55-dev
  libboost-regex1.55.0 libboost-serialization-dev
  libboost-serialization1.55-dev libboost-serialization1.55.0
  libboost-signals-dev libboost-signals1.55-dev libboost-signals1.55.0
  libboost-system-dev libboost-system1.55-dev libboost-system1.55.0
  libboost-test-dev libboost-test1.55-dev libboost-test1.55.0
  libboost-thread-dev libboost-thread1.55-dev libboost-thread1.55.0
  libboost-timer-dev libboost-timer1.55-dev libboost-timer1.55.0
  libboost-tools-dev libboost-wave-dev libboost-wave1.55-dev
  libboost-wave1.55.0 libboost1.55-dev libboost1.55-tools-dev libbz2-dev
  libc-dev-bin libc6-dev libcgi-fast-perl libcgi-pm-perl libcilkrts5
  libclass-c3-perl libclass-c3-xs-perl libcloog-isl4 libconsole-bridge-dev
  libconsole-bridge0.2 libcpan-meta-perl libcr0 libcurl3 libdata-optlist-perl
  libdata-section-perl libexpat1 libexpat1-dev libfcgi-perl libffi6
  libfreetype6 libgcc-4.8-dev libgcc-4.9-dev libgdbm3 libgfortran3
  libglib2.0-0 libglib2.0-data libgmp10 libgnutls-deb0-28 libgomp1
  libgssapi-krb5-2 libgtest-dev libhogweed2 libhwloc-dev libhwloc-plugins
  libhwloc5 libibverbs-dev libibverbs1 libicu-dev libicu52 libidn11 libisl10
  libitm1 libjbig0 libjpeg62-turbo libk5crypto3 libkeyutils1 libkrb5-3
  libkrb5support0 liblapack3 liblcms2-2 libldap-2.4-2 libldap2-dev
  liblog-message-perl liblog-message-simple-perl liblog4cxx10 liblog4cxx10-dev
  liblsan0 libltdl-dev libltdl7 liblz4-1 liblz4-dev liblzo2-2 libmagic1
  libmodule-build-perl libmodule-pluggable-perl libmodule-signature-perl
  libmpc3 libmpfr4 libmro-compat-perl libnettle4 libnuma-dev libnuma1
  libopenmpi-dev libopenmpi1.6 libp11-kit0 libpackage-constants-perl
  libpaper-utils libpaper1 libparams-util-perl libpciaccess0 libpipeline1
  libpng12-0 libpod-latex-perl libpod-readme-perl libpython-dev
  libpython-stdlib libpython2.7 libpython2.7-dev libpython2.7-minimal
  libpython2.7-stdlib libquadmath0 libregexp-common-perl librtmp1 libsasl2-2
  libsasl2-modules libsasl2-modules-db libsctp-dev libsctp1
  libsoftware-license-perl libsqlite3-0 libssh2-1 libssl1.0.0
  libstdc++-4.8-dev libsub-exporter-perl libsub-install-perl libtasn1-6
  libterm-ui-perl libtext-soundex-perl libtext-template-perl libtiff5
  libtinyxml-dev libtinyxml2.6.2 libtool libtsan0 libubsan0 libwebp5
  libwebpdemux1 libwebpmux1 libxml2 libyaml-0-2 linux-libc-dev lksctp-tools
  lsb-release make manpages manpages-dev mime-support mpi-default-bin
  mpi-default-dev ocl-icd-libopencl1 openmpi-bin openmpi-common openssl perl
  perl-modules pkg-config python python-catkin-pkg python-catkin-pkg-modules
  python-chardet python-crypto python-dateutil python-defusedxml python-dev
  python-docutils python-ecdsa python-empy python-minimal python-netifaces
  python-nose python-numpy python-paramiko python-pil python-pkg-resources
  python-pygments python-pyparsing python-roman python-rosdep python-rosdistro
  python-rosdistro-modules python-rospkg python-rospkg-modules
  python-setuptools python-six python-yaml python2.7 python2.7-dev
  python2.7-minimal rename ros-kinetic-actionlib-msgs ros-kinetic-catkin
  ros-kinetic-cmake-modules ros-kinetic-common-msgs ros-kinetic-cpp-common
  ros-kinetic-diagnostic-msgs ros-kinetic-gencpp ros-kinetic-geneus
  ros-kinetic-genlisp ros-kinetic-genmsg ros-kinetic-gennodejs
  ros-kinetic-genpy ros-kinetic-geometry-msgs ros-kinetic-message-filters
  ros-kinetic-message-generation ros-kinetic-message-runtime ros-kinetic-mk
  ros-kinetic-nav-msgs ros-kinetic-ros ros-kinetic-ros-comm ros-kinetic-rosbag
  ros-kinetic-rosbag-migration-rule ros-kinetic-rosbag-storage
  ros-kinetic-rosbash ros-kinetic-rosboost-cfg ros-kinetic-rosbuild
  ros-kinetic-rosclean ros-kinetic-rosconsole ros-kinetic-rosconsole-bridge
  ros-kinetic-roscpp ros-kinetic-roscpp-core ros-kinetic-roscpp-serialization
  ros-kinetic-roscpp-traits ros-kinetic-roscreate ros-kinetic-rosgraph
  ros-kinetic-rosgraph-msgs ros-kinetic-roslang ros-kinetic-roslaunch
  ros-kinetic-roslib ros-kinetic-roslisp ros-kinetic-roslz4
  ros-kinetic-rosmake ros-kinetic-rosmaster ros-kinetic-rosmsg
  ros-kinetic-rosnode ros-kinetic-rosout ros-kinetic-rospack
  ros-kinetic-rosparam ros-kinetic-rospy ros-kinetic-rosservice
  ros-kinetic-rostest ros-kinetic-rostime ros-kinetic-rostopic
  ros-kinetic-rosunit ros-kinetic-roswtf ros-kinetic-sensor-msgs
  ros-kinetic-shape-msgs ros-kinetic-std-msgs ros-kinetic-std-srvs
  ros-kinetic-stereo-msgs ros-kinetic-topic-tools ros-kinetic-trajectory-msgs
  ros-kinetic-visualization-msgs ros-kinetic-xmlrpcpp sbcl sgml-base
  shared-mime-info ucf uuid-dev xdg-user-dirs xml-core
Suggested packages:
  binutils-doc codeblocks eclipse ninja-build cpp-doc gcc-4.9-locales
  gcc-multilib autoconf automake flex bison gdb gcc-doc gcc-4.9-multilib
  gcc-4.9-doc libgcc1-dbg libgomp1-dbg libitm1-dbg libatomic1-dbg libasan1-dbg
  liblsan0-dbg libtsan0-dbg libubsan0-dbg libcilkrts5-dbg libquadmath0-dbg
  lrzip libboost-doc graphviz python3 libboost1.55-doc gccxml libmpfrc++-dev
  libntl-dev xsltproc doxygen docbook-xml docbook-xsl default-jdk fop
  glibc-doc blcr-dkms gnutls-bin krb5-doc krb5-user libhwloc-contrib-plugins
  icu-doc liblcms2-utils liblog4cxx10-doc libtool-doc pciutils
  libsasl2-modules-otp libsasl2-modules-ldap libsasl2-modules-sql
  libsasl2-modules-gssapi-mit libsasl2-modules-gssapi-heimdal
  libstdc++-4.8-doc libtinyxml-doc automaken gfortran fortran95-compiler
  gcj-jdk lsb make-doc man-browser opencl-icd openmpi-checkpoint perl-doc
  libterm-readline-gnu-perl libterm-readline-perl-perl libb-lint-perl
  libcpanplus-dist-build-perl libcpanplus-perl libfile-checktree-perl
  libobject-accessor-perl python-doc python-tk python-crypto-dbg
  python-crypto-doc texlive-latex-recommended texlive-latex-base
  texlive-lang-french fonts-linuxlibertine ttf-linux-libertine python-coverage
  python-nose-doc python-numpy-dbg python-numpy-doc python-pil-doc
  python-pil-dbg python-distribute python-distribute-doc ttf-bitstream-vera
  python2.7-doc sbcl-doc sbcl-source slime sgml-base-doc debhelper
Recommended packages:
  libarchive-tar-perl
The following NEW packages will be installed:
  autotools-dev binfmt-support binutils bzip2-doc ca-certificates cmake
  cmake-data cpp cpp-4.9 docutils-common docutils-doc file gcc gcc-4.9
  icu-devtools init-system-helpers krb5-locales libalgorithm-c3-perl libapr1
  libapr1-dev libaprutil1 libaprutil1-dev libarchive-extract-perl libarchive13
  libasan0 libasan1 libatomic1 libblas-common libblas3 libboost-all-dev
  libboost-atomic-dev libboost-atomic1.55-dev libboost-atomic1.55.0
  libboost-chrono-dev libboost-chrono1.55-dev libboost-chrono1.55.0
  libboost-context-dev libboost-context1.55-dev libboost-context1.55.0
  libboost-coroutine-dev libboost-coroutine1.55-dev libboost-date-time-dev
  libboost-date-time1.55-dev libboost-date-time1.55.0 libboost-dev
  libboost-exception-dev libboost-exception1.55-dev libboost-filesystem-dev
  libboost-filesystem1.55-dev libboost-filesystem1.55.0 libboost-graph-dev
  libboost-graph-parallel-dev libboost-graph-parallel1.55-dev
  libboost-graph-parallel1.55.0 libboost-graph1.55-dev libboost-graph1.55.0
  libboost-iostreams-dev libboost-iostreams1.55-dev libboost-iostreams1.55.0
  libboost-locale-dev libboost-locale1.55-dev libboost-locale1.55.0
  libboost-log-dev libboost-log1.55-dev libboost-log1.55.0 libboost-math-dev
  libboost-math1.55-dev libboost-math1.55.0 libboost-mpi-dev
  libboost-mpi-python-dev libboost-mpi-python1.55-dev
  libboost-mpi-python1.55.0 libboost-mpi1.55-dev libboost-mpi1.55.0
  libboost-program-options-dev libboost-program-options1.55-dev
  libboost-program-options1.55.0 libboost-python-dev libboost-python1.55-dev
  libboost-python1.55.0 libboost-random-dev libboost-random1.55-dev
  libboost-random1.55.0 libboost-regex-dev libboost-regex1.55-dev
  libboost-regex1.55.0 libboost-serialization-dev
  libboost-serialization1.55-dev libboost-serialization1.55.0
  libboost-signals-dev libboost-signals1.55-dev libboost-signals1.55.0
  libboost-system-dev libboost-system1.55-dev libboost-system1.55.0
  libboost-test-dev libboost-test1.55-dev libboost-test1.55.0
  libboost-thread-dev libboost-thread1.55-dev libboost-thread1.55.0
  libboost-timer-dev libboost-timer1.55-dev libboost-timer1.55.0
  libboost-tools-dev libboost-wave-dev libboost-wave1.55-dev
  libboost-wave1.55.0 libboost1.55-dev libboost1.55-tools-dev libbz2-dev
  libc-dev-bin libc6-dev libcgi-fast-perl libcgi-pm-perl libcilkrts5
  libclass-c3-perl libclass-c3-xs-perl libcloog-isl4 libconsole-bridge-dev
  libconsole-bridge0.2 libcpan-meta-perl libcr0 libcurl3 libdata-optlist-perl
  libdata-section-perl libexpat1 libexpat1-dev libfcgi-perl libffi6
  libfreetype6 libgcc-4.8-dev libgcc-4.9-dev libgdbm3 libgfortran3
  libglib2.0-0 libglib2.0-data libgmp10 libgnutls-deb0-28 libgomp1
  libgssapi-krb5-2 libgtest-dev libhogweed2 libhwloc-dev libhwloc-plugins
  libhwloc5 libibverbs-dev libibverbs1 libicu-dev libicu52 libidn11 libisl10
  libitm1 libjbig0 libjpeg62-turbo libk5crypto3 libkeyutils1 libkrb5-3
  libkrb5support0 liblapack3 liblcms2-2 libldap-2.4-2 libldap2-dev
  liblog-message-perl liblog-message-simple-perl liblog4cxx10 liblog4cxx10-dev
  liblsan0 libltdl-dev libltdl7 liblz4-1 liblz4-dev liblzo2-2 libmagic1
  libmodule-build-perl libmodule-pluggable-perl libmodule-signature-perl
  libmpc3 libmpfr4 libmro-compat-perl libnettle4 libnuma-dev libnuma1
  libopenmpi-dev libopenmpi1.6 libp11-kit0 libpackage-constants-perl
  libpaper-utils libpaper1 libparams-util-perl libpciaccess0 libpipeline1
  libpng12-0 libpod-latex-perl libpod-readme-perl libpython-dev
  libpython-stdlib libpython2.7 libpython2.7-dev libpython2.7-minimal
  libpython2.7-stdlib libquadmath0 libregexp-common-perl librtmp1 libsasl2-2
  libsasl2-modules libsasl2-modules-db libsctp-dev libsctp1
  libsoftware-license-perl libsqlite3-0 libssh2-1 libssl1.0.0
  libstdc++-4.8-dev libsub-exporter-perl libsub-install-perl libtasn1-6
  libterm-ui-perl libtext-soundex-perl libtext-template-perl libtiff5
  libtinyxml-dev libtinyxml2.6.2 libtool libtsan0 libubsan0 libwebp5
  libwebpdemux1 libwebpmux1 libxml2 libyaml-0-2 linux-libc-dev lksctp-tools
  lsb-release make manpages manpages-dev mime-support mpi-default-bin
  mpi-default-dev ocl-icd-libopencl1 openmpi-bin openmpi-common openssl perl
  perl-modules pkg-config python python-catkin-pkg python-catkin-pkg-modules
  python-chardet python-crypto python-dateutil python-defusedxml python-dev
  python-docutils python-ecdsa python-empy python-minimal python-netifaces
  python-nose python-numpy python-paramiko python-pil python-pkg-resources
  python-pygments python-pyparsing python-roman python-rosdep python-rosdistro
  python-rosdistro-modules python-rospkg python-rospkg-modules
  python-setuptools python-six python-yaml python2.7 python2.7-dev
  python2.7-minimal rename ros-kinetic-actionlib-msgs ros-kinetic-catkin
  ros-kinetic-cmake-modules ros-kinetic-common-msgs ros-kinetic-cpp-common
  ros-kinetic-diagnostic-msgs ros-kinetic-gencpp ros-kinetic-geneus
  ros-kinetic-genlisp ros-kinetic-genmsg ros-kinetic-gennodejs
  ros-kinetic-genpy ros-kinetic-geometry-msgs ros-kinetic-message-filters
  ros-kinetic-message-generation ros-kinetic-message-runtime ros-kinetic-mk
  ros-kinetic-nav-msgs ros-kinetic-ros ros-kinetic-ros-comm
  ros-kinetic-ros-core ros-kinetic-rosbag ros-kinetic-rosbag-migration-rule
  ros-kinetic-rosbag-storage ros-kinetic-rosbash ros-kinetic-rosboost-cfg
  ros-kinetic-rosbuild ros-kinetic-rosclean ros-kinetic-rosconsole
  ros-kinetic-rosconsole-bridge ros-kinetic-roscpp ros-kinetic-roscpp-core
  ros-kinetic-roscpp-serialization ros-kinetic-roscpp-traits
  ros-kinetic-roscreate ros-kinetic-rosgraph ros-kinetic-rosgraph-msgs
  ros-kinetic-roslang ros-kinetic-roslaunch ros-kinetic-roslib
  ros-kinetic-roslisp ros-kinetic-roslz4 ros-kinetic-rosmake
  ros-kinetic-rosmaster ros-kinetic-rosmsg ros-kinetic-rosnode
  ros-kinetic-rosout ros-kinetic-rospack ros-kinetic-rosparam
  ros-kinetic-rospy ros-kinetic-rosservice ros-kinetic-rostest
  ros-kinetic-rostime ros-kinetic-rostopic ros-kinetic-rosunit
  ros-kinetic-roswtf ros-kinetic-sensor-msgs ros-kinetic-shape-msgs
  ros-kinetic-std-msgs ros-kinetic-std-srvs ros-kinetic-stereo-msgs
  ros-kinetic-topic-tools ros-kinetic-trajectory-msgs
  ros-kinetic-visualization-msgs ros-kinetic-xmlrpcpp sbcl sgml-base
  shared-mime-info ucf uuid-dev xdg-user-dirs xml-core
...

(which really isn't much of a departure from the current Dockerfile)

@ruffsl
Member

ruffsl commented Jan 30, 2018

@tianon, I agree that embedding the complete version is more contextually appropriate than some timestamp, although I'm not sure how well the ENV approach above would scale when multiple packages need to be versioned in a single Dockerfile. The method above would fit well for ROS 1, where usually it's only one metapackage per Dockerfile.

Looking forward, it would be nice to support ROS 2 syncs with the same pipeline. However, ROS 2 does not provide the same metapackages, requiring many more packages to be specified in the Dockerfile, some of which (I assume, but correct me @mikaelarguedas) could diverge in package versions from each other. Having an ENV per arch per package is perhaps extreme.

# install ros2 packages
RUN apt-get update && apt-get install -y \
    ros-ardent-common-interfaces \
    ros-ardent-composition \
    ros-ardent-demo-nodes-cpp \
    ros-ardent-demo-nodes-cpp-native \
    ros-ardent-demo-nodes-py \
    ros-ardent-examples* \
    ros-ardent-launch \
    ros-ardent-lifecycle \
    ros-ardent-logging-demo \
    ros-ardent-ros2msg \
    ros-ardent-ros2node \
    ros-ardent-ros2pkg \
    ros-ardent-ros2run \
    ros-ardent-ros2service \
    ros-ardent-ros2srv \
    ros-ardent-ros2topic \
    ros-ardent-sros2 \
    ros-ardent-tf2* \
    ros-ardent-tlsf* \
    ros-ardent-topic-monitor \
    && rm -rf /var/lib/apt/lists/*

I'm beginning to suspect that nesting another sub-tree per arch to house each arch-specific Dockerfile may be the most contextually complete option. Though I do not like the added number of Dockerfiles it brings, the pipeline here is already automated, and doing so would avoid compromising the context or breaking the caches of untouched arches, e.g.:

$ tree ros
ros
├── kinetic
│   ├── debian
│   │   └── jessie
│   │       ├── amd64
│   │       │   └── ros-core
│   │       │       ├── Dockerfile
│   │       │       └── ros_entrypoint.sh
│   │       └── arm64v8
│   │           └── ros-core
│   │               ├── Dockerfile
│   │               └── ros_entrypoint.sh
...

@mikaelarguedas
Contributor Author

Looking forward, it would be nice to support ROS 2 syncs with the same pipeline. However, ROS 2 does not provide the same metapackages, requiring many more packages to be specified in the Dockerfile.

ROS 2 doesn't provide metapackages yet but will in the future. While it's still unclear how metapackages will be implemented in practice, we do want the installation process to be as streamlined for ROS 2 as it is for ROS 1, so we will provide a single deb bringing in entire stacks. As we don't plan on releasing Docker images with this pipeline before then, I don't think this should impact the decision made here.

Though I do not like the added number of Dockerfiles it brings, the pipeline here is already automated, and doing so would avoid compromising the context or breaking the caches of untouched arches.

My understanding is that we don't foresee situations where only one arch should be invalidated and not the others, and I don't think that avoiding the invalidation outweighs the number of Dockerfiles (that was my understanding from docker-library/official-images#3890). But I could go either way as long as the process is automated.

@tianon

tianon commented Jan 30, 2018

Ok, here's an updated PoC which uses a different approach and supports multiple packages:

#!/usr/bin/env bash
set -Eeuo pipefail

arches="$(
	curl -fsSL 'http://packages.ros.org/ros/ubuntu/dists/jessie/Release' \
		| awk -F ': ' '$1 == "Architectures" { print $2 }'
)"

cat <<'EOH'
FROM debian:jessie

RUN set -eux; \
	apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 421C365BD9FF1F717815A3895523BAEEB01FA116; \
	echo 'deb http://packages.ros.org/ros/ubuntu jessie main' > /etc/apt/sources.list.d/ros-latest.list

RUN set -eux; \
	apt-get update; \
	arch="$(dpkg --print-architecture)"; \
	case "$arch" in \
EOH

for arch in $arches; do
	archPackages="$(curl -fsSL "http://packages.ros.org/ros/ubuntu/dists/jessie/main/binary-$arch/Packages")"

	cat <<EOF
		$arch) \\
EOF

	for package in \
		ros-kinetic-ros-core \
		ros-kinetic-ros-base \
	; do
		version="$(
			awk -F ': ' -v findPkg="$package" '
				$1 == "Package" { pkg = $2 }
				pkg == findPkg && $1 == "Version" { print $2 }
			' <<<"$archPackages"
		)"
		if [ -z "$version" ]; then
			cat <<EOF
			echo >&2 'error: $package not found for $arch'; \\
			exit 1; \\
			;; \\
EOF
			continue 2
		fi
		cat <<EOF
			apt-get install -y $package=$version; \\
EOF
	done

	cat <<EOF
			;; \\
EOF
done

cat <<'EODF'
		*) \
			echo >&2 "error: unsupported architecture: $arch"; \
			exit 1; \
			;; \
	esac; \
	rm -rf /var/lib/apt/lists/*
EODF

Here's an example of what this generates:

FROM debian:jessie

RUN set -eux; \
	apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 421C365BD9FF1F717815A3895523BAEEB01FA116; \
	echo 'deb http://packages.ros.org/ros/ubuntu jessie main' > /etc/apt/sources.list.d/ros-latest.list

RUN set -eux; \
	apt-get update; \
	arch="$(dpkg --print-architecture)"; \
	case "$arch" in \
		amd64) \
			apt-get install -y ros-kinetic-ros-core=1.3.1-0jessie-20171116-213027-0800; \
			apt-get install -y ros-kinetic-ros-base=1.3.1-0jessie-20180125-125251-0800; \
			;; \
		arm64) \
			apt-get install -y ros-kinetic-ros-core=1.3.1-0jessie-20171117-163957-0800; \
			apt-get install -y ros-kinetic-ros-base=1.3.1-0jessie-20180125-235037-0800; \
			;; \
		armhf) \
			echo >&2 'error: ros-kinetic-ros-core not found for armhf'; \
			exit 1; \
			;; \
		i386) \
			echo >&2 'error: ros-kinetic-ros-core not found for i386'; \
			exit 1; \
			;; \
		*) \
			echo >&2 "error: unsupported architecture: $arch"; \
			exit 1; \
			;; \
	esac; \
	rm -rf /var/lib/apt/lists/*

@ruffsl
Member

ruffsl commented Jan 31, 2018

I created two sample PRs just to investigate how much we'd have to modify our pipeline to support full in-line version pinning per package. Here they are:
osrf/docker_templates#32
#115

I'm not saying it's simple or the best approach here, but it is doable. One thing I did notice is that not all metapackages are available for all arches, but perhaps that's just a one-off thing with arm32v7 on trusty.

@mikaelarguedas
Contributor Author

We talked about a machine-readable format for getting the versions of packages in the past. There are now yaml files generated by the buildfarm (as of ros-infrastructure/ros_buildfarm#521 and ros-infrastructure/ros_buildfarm#522) providing the list of packages available in each repository without requiring us to poke them.
These files are stored at http://repositories.ros.org/status_page/yaml/ and match what is described in the buildfarm configuration index, with yaml file names being 'ros_%s_%s.yaml' % (distribution_name, release_build_key).
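
A hedged sketch of how those files could be consumed (the release_build_key value and the yaml layout are assumptions to be verified against the actual generated files):

#!/usr/bin/env bash
set -euo pipefail
distro=kinetic
build_key=default   # placeholder: use the release_build_key from the buildfarm configuration index
# assumes a flat "package-name: version" style mapping inside the yaml
curl -fsSL "http://repositories.ros.org/status_page/yaml/ros_${distro}_${build_key}.yaml" \
	| awk -F ': ' '$1 ~ /ros-kinetic-ros-core/ { print $2; exit }'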

@Axel13fr

Hi guys,

I understand the proper solution to trigger rebuilds of the Docker images will be to check the newly available Debian package versions (sorry if it's more complex than that; I haven't followed the whole build farm structure). It seems to be something rather complex. Can I suggest having a dumb periodic Docker build with the --no-cache option, to ensure at least that a ROS Docker image is brought back up to date with the latest packages on a regular basis?
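
For concreteness, the suggestion boils down to something like the following scheduled job (image name, tag, build context, and schedule are illustrative only):

# crontab entry: every Sunday, rebuild without the layer cache and push the result
0 3 * * 0  docker build --no-cache -t myorg/ros:kinetic-ros-base ./ros-base && docker push myorg/ros:kinetic-ros-base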

I'm having this issue #198, where the diagnostic aggregator and the static tf transformer are unusable due to dependency breaks, and I had to rebuild the Docker images myself, which kind of ruins the comfort of just pulling the latest image and starting to use it, since the latest Debian packages are incompatible.

What do you think?

@BillWSY

BillWSY commented Nov 9, 2018

Hi guys,

I understand the proper solution to trigger rebuilds of the Docker images will be to check the newly available Debian package versions (sorry if it's more complex than that; I haven't followed the whole build farm structure). It seems to be something rather complex. Can I suggest having a dumb periodic Docker build with the --no-cache option, to ensure at least that a ROS Docker image is brought back up to date with the latest packages on a regular basis?

I'm having this issue #198, where the diagnostic aggregator and the static tf transformer are unusable due to dependency breaks, and I had to rebuild the Docker images myself, which kind of ruins the comfort of just pulling the latest image and starting to use it, since the latest Debian packages are incompatible.

What do you think?

This should be achievable (without --no-cache) as easily as putting ADD http://packages.ros.org/ros/ubuntu/dists/xenial/Release /tmp/xxx at the top of the root Dockerfile. I'd suggest the ROS folks implement this, as you are not enforcing ABI compatibility, which has caused many issues recently (ros/roscpp_core#82 (comment)). Since ROS only re-syncs periodically (once per month?), I believe the cost (unnecessary image rebuilds/invalidations) is minimal.
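
A sketch of the suggested trick (the destination path is illustrative): for an ADD with a remote URL, Docker downloads the file at build time and compares its checksum against the cached copy, so this layer and every layer after it are rebuilt whenever the repository's Release file changes, i.e. at each sync.

# re-downloaded at build time; a changed Release file invalidates this and all later layers
ADD http://packages.ros.org/ros/ubuntu/dists/xenial/Release /tmp/ros-release-stamp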

@tianon

tianon commented Nov 9, 2018 via email

@BillWSY

BillWSY commented Nov 9, 2018

I see. I thought Docker Hub had a cache mechanism that works the same way as local docker — my bad.

In that case, automatically PRing and embedding version strings as part of the sync process seems quite reasonable.

@ruffsl
Member

ruffsl commented Nov 10, 2018

you could instead embed a hash of that Release file, which is an interesting way to achieve this goal such that updates happen explicitly.

@tianon, that sounds like a nice idea. No need to add large muxing logic to the Dockerfile, nor to multiply the Dockerfiles. I went and implemented it in osrf/docker_templates#45 and #204. Feedback welcome!
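
One possible shape of that idea at template-expansion time (a sketch only; the actual implementation lives in the linked PRs):

# compute a checksum of the apt Release file and emit it into the generated Dockerfile,
# so the Dockerfile (and therefore the build cache) only changes when the repo state changed
release_sha="$(curl -fsSL 'http://packages.ros.org/ros/ubuntu/dists/jessie/Release' | sha256sum | awk '{ print $1 }')"
printf 'RUN echo "packages.ros.org Release checksum: %s"\n' "$release_sha"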

@mikaelarguedas
Contributor Author

Addressed by osrf/docker_templates#90, example of resulting automatic PR: #459

@nuclearsandwich
Member

@mikaelarguedas was this re-opened intentionally?

@mikaelarguedas
Contributor Author

Yes, reopened in response to #559 and as a consequence of #494: adding labels has been declined for the official images, so this issue still applies (no automatic rebuild happens when a ROS sync occurs unless the metapackages get a version bump).

@ruffsl
Member

ruffsl commented Apr 1, 2023

@tianon and @yosifkit, could you take a look at our new approach to resolving this issue? I've prototyped a new template that pins the ROS package versions by listing them in files, sorted per supported architecture in the manifest, which are then selectively copied from the build context. Rather than embedding all version numbers for every architecture into the Dockerfile directly, we can relegate these strings to external files that are still locked within the build context, so as to buffer version bumps and delayed syncs from each other, which would otherwise result in either greater deployment delays or needless rebuild churn. This would allow us to break the build cache only for the specific architectures affected upstream.
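
Roughly the shape of the prototype being described (the file names, layout, and TARGETARCH plumbing here are illustrative assumptions, not the actual template output):

# the build context carries one pin file per architecture, e.g. pins/amd64/ros-core.txt,
# containing "package=version" lines; only the touched arch's file changes, so only
# that arch's layer cache is invalidated
ARG TARGETARCH
COPY pins/${TARGETARCH}/ros-core.txt /tmp/ros-pins.txt
RUN apt-get update \
	&& xargs -a /tmp/ros-pins.txt apt-get install -y \
	&& rm -rf /var/lib/apt/lists/*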

See the template changes, their effects on the existing Dockerfiles, and the resulting generated output here:

Could we go ahead with this approach?

@yosifkit

yosifkit commented Apr 6, 2023

❤️ This is very clever! What a simple way to break architecture version bumps apart while keeping the build cache.

😞 Unfortunately, official images' review pipelines do not support variable substitution in COPY or FROM lines.

So, the current goal of this is to not cause a rebuild of one architecture's image when another architecture has a package change, and I am a little confused by that. I'd like to understand how often this happens and what effect it has on users. How often does one arch change and not another? Does it happen repeatedly?

The official images already get rebuilt on a regular cadence when, for example, the ubuntu base images are updated (currently, Canonical is targeting those updates every three weeks, and we have Debian on a similar cycle). So, the ros images are guaranteed to be rebuilt at least that often anyway.

@ruffsl
Member

ruffsl commented Apr 7, 2023

😞 Unfortunately, official images' review pipelines do not support variable substitution in COPY or FROM lines.

Ah shucks. Have review pipelines not been updated to use buildx, or was that feature locked down for security?

I'd like to understand how often this happens and what effect that has on users. How often does one arch change and not another? Does it happen repeatedly?

@nuclearsandwich might have a better idea, but I think I recall seeing the timestamps in the version identifiers deviate by as little as a few minutes and as much as a couple of hours or days between architectures. I think it's just a matter of when the ROS buildfarm worker was able to complete its packaging job. Sometimes there may be blocked packages on arm, for instance from C++ shenanigans, but I think most recent syncs to the apt repo have been in lockstep, give or take the time for mirroring across networks.

We have (or had) a workflow that updates the package versions to reflect what is currently released into the ROS apt repo, but it could still catch the repo mid-sync. We could of course just manually wait for it to trigger again once all architectures are updated, but that wouldn't solve the secondary template-muxing issue of choosing the installable package version based on platform architecture.

I'm not a fan of regurgitating all versions for all packages for all architectures into all Dockerfiles, as it's nice to keep them clean as a reference example for our community, but it seems like there's no getting around having to bash our way out of this shell. Do you have any reference examples we could dissect on how other official library images handle this, or are we the only oddballs using timestamped version identifiers?

Also, is there a more standard Dockerfile templating engine yet? Our current Python-based empy templates are getting a little unwieldy, and I'd like to refactor. But I think the empy project is no longer maintained, or at least its documentation site is dead:


So, the ros images are guaranteed to be rebuilt at least that often anyway.

Some in our community are rather eager to update to the latest packages, packages they've most likely already waited for to make it into our ~monthly syncs. Packages in the ROS ecosystem iterate and evolve rather quickly, most notably in our rolling distro release, but also for fixes, as newly released packages migrated to the next distro inevitably require minor patches and such. Users can of course build these from source or apt install updates in their own images, but the dependency hell of federated and multilingual packages is a barrier to the former, while losing out on the shared Docker layers cached upstream and having to build/host your own images is a barrier to the latter.

Thus, an unfortunate maintainer could end up waiting two months or more before their package makes it into the library image, if the ~monthly cadences of Ubuntu/Debian and ROS inconveniently occur one right after the other.

(currently, Canonical is targeting those updates every three weeks, and we have Debian on a similar cycle)

Ahh! In years past I've felt it was more or less monthly, but if it's down to three weeks now, perhaps this isn't as much of an issue.

@mikaelarguedas
Contributor Author

But I think the empy project is no longer maintained, or at least its documentation site is dead:
empy: https://pypi.org/project/empy
docs: http://www.alcyone.com/software/empy (as of typing: DNS_PROBE_FINISHED_NXDOMAIN)

Maybe @j-rivero has more details about the state of the project?

Happy to explore a more common tool for Dockerfile templating if there is one!

@nuclearsandwich
Member

Is the templating system worth a separate issue, as this discussion is already quite long in the tooth? 🐀

Also, is there a more standard Dockerfile templating engine yet? Our current Python-based empy templates are getting a little unwieldy, and I'd like to refactor.

empy is still baked fairly deeply into the ROS ecosystem, and I think that as templating engines go it is quite flexible with respect to refactoring, as it facilitates everything from basic @variable_substitution to @{ # full python snippets within blocks }. But that does lead to fairly heavyweight template lifting.

That being said, I think that any code-behind templating system would probably work fine in place of empy.

But I think the empy project is no longer maintained, or at least its documentation site is dead:

I don't see this as a current blocker unless there are bugs in empy itself that are affecting the project.

Official library images must build with Dockerfiles, right? The Earthfile format used by https://earthly.dev has support for composition and factoring via COMMAND and several other similar structures, which would probably cut down on copy-pasting.
