Feature Request: Cached and parallelized DKMS #9446
Sounds like you're trying to re-implement Gentoo/Funtoo (or similar source-based distributions), which have already solved handling of system-wide compile flags (even if only the number of gcc processes running in parallel). IMHO it's not a good idea to interfere with system settings regarding the compiler. If long build times are a problem for DKMS users, adding a howto ("speed up compilation") would be the reasonable approach.
You can already do what you are requesting by creating a DKMS config file. As an example, for RHEL/CentOS/Fedora and ZFS 0.7.13, look at the `MAKE` command in the generated dkms.conf.
Overriding `MAKE` with a bare `-j` will run all jobs in parallel and not limit the number which are run. A better option would be to limit it to the number of processors on the machine.
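For illustration, a dkms.conf override along those lines might look like the following. This is a sketch only: the real ZFS dkms.conf contains more than this, and the surrounding comments are mine, not from the package.

```shell
# Hypothetical dkms.conf fragment -- not the actual ZFS dkms.conf.

# Unlimited parallelism: a bare -j lets make spawn as many jobs as it can.
MAKE="make -j"

# Usually better: cap the job count at the number of online processors.
MAKE="make -j$(nproc)"
```

Because dkms.conf is sourced as shell, `$(nproc)` is evaluated when the build command runs, so the setting adapts to whatever machine the module is built on.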
However, none of the above should be necessary, as modern versions of DKMS default to running `make` in parallel.
I suspect that you'll find that neither of these changes makes much difference. There aren't many files that have to be compiled, and I recommend profiling the build process on your machine to find out what is really taking a long time and focusing on optimizing those parts.
On October 14, 2019 2:58 pm, Christopher Voltz wrote:

> I suspect that you'll find that neither of these changes make much difference. There aren't many files that have to be compiled and `gcc` is pretty fast (and DKMS should already be running `make` in parallel). My experience has been that the build step is not the one which takes a long time. The pre-build steps (e.g., `configure`) and post-build steps (e.g., `dracut`) are what take a long time on my machines.
>
> I recommend profiling the build process on your machine to find out what is really taking a long time and focus on optimizing those parts.
Also note that the biggest chunk of those pre-build steps is determining what interfaces the kernel has, and that has just been (mostly) parallelized in master. Our configure walltime dropped from ~2.5 minutes to ~20 seconds. It was the biggest part of building the modules and userspace. I suspect once that change makes its way to Debian, DKMS module building will also be greatly sped up on systems with more than, say, 2 or 4 cores.
@cvoltz Wow, I'm ashamed to have posted a feature request without digging deeper into what you've provided... you're right, those are both viable options that keep the ownership away from ZFS and even the package-management side of things (which is where I thought this thread would go). And yes, after going through the logs, you're correct: the configure step is a huge laggard for me, much more so than the compile side. Anyway, I appreciate your response, as well as everyone else's... I'll close this up. Sorry for the noise! Cheers
On slower systems (such as my own), recompiling ZFS during kernel upgrades can be an extremely lengthy process... I am currently recompiling with DKMS and am half an hour in with no end in sight yet.
In the past, I hacked up the dkms.conf file to enable ccache and added `-j#` to the make command, and saw great improvements; from my perspective there's no downside to doing this. I'm wondering whether it's doable, or desirable, to add to the dkms script an option for the job number and the ability to use ccache (if installed on the user's machine).
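For the ccache half of that hack, one common approach (a sketch, not what the poster necessarily did) is to put ccache's compiler shims ahead of the real compiler in `PATH`; the shim directory varies by distro (Debian uses /usr/lib/ccache, Arch uses /usr/lib/ccache/bin).

```shell
# Put the ccache compiler wrappers ahead of the real gcc in PATH so that
# any make launched by dkms picks them up (directory is distro-dependent).
export PATH="/usr/lib/ccache:$PATH"

# Then rebuild the module as usual; module name and version below are
# placeholders, not taken from this thread:
#   dkms build -m zfs -v 0.7.13 -k "$(uname -r)"
```

The advantage over editing dkms.conf is that nothing inside the package is touched, so an update cannot wipe the setting out.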
As for the job number, I realize this is more difficult since it is more of a user preference... I'm thinking of maybe creating a config file outside of ZFS, more a part of the packaging script on my distro (an Arch Linux PKGBUILD), that lets me set the job number without worrying about an update wiping out my settings.
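A minimal sketch of that packaging-side idea: have the package build patch the generated dkms.conf before installing it. The file contents and the `MAKE` line here are hypothetical stand-ins; in a real PKGBUILD the `sed` would run against the actual shipped file.

```shell
# Simulate a shipped dkms.conf, then patch its MAKE line to add a job
# limit; in a PKGBUILD this sed would target the real generated file.
conf=$(mktemp)
printf 'MAKE="make"\n' > "$conf"

# $(nproc) is kept literal here so it expands when dkms runs the command,
# not when the package is built.
sed -i 's/^MAKE="make"/MAKE="make -j$(nproc)"/' "$conf"
cat "$conf"   # prints: MAKE="make -j$(nproc)"
```

Because the patch lives in the packaging script rather than in the upstream source, a ZFS update regenerates the file and the PKGBUILD re-applies the tweak on the next rebuild.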
What do you think?