Add FrameworkBenchmarks performance test #3746

Open
joeyleeeeeee97 wants to merge 1 commit into master

Conversation

joeyleeeeeee97 commented Jun 9, 2022

FrameworkBenchmarks is mentioned in #1112.

Personally, I use this test a lot: it is open, popular, and covers a wide range of frameworks.

https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frameworks/Java

joeyleeeeeee97 commented Jun 9, 2022

Grinder is not working, so I will rebuild this later. https://ci.adoptopenjdk.net/view/Test_grinder/job/Grinder/4964/

Verified on my test machine, works well.

llxia commented Jun 9, 2022

Infra Issue created: adoptium/infrastructure#2597

@smlambert

It's unable to delete the workspace because of the json file from this benchmark... could it be that something still has that file open when we try to delete the workspace, which blocks the deletion?

llxia commented Jun 9, 2022

Yes, I suspect the previous benchmark job failed in a bad state and left the process running (holding the file). The new job that landed on the same machine cannot delete the workspace because of that file.

At this point, the machine needs to be cleaned up. Also, we need to enhance the test framework to terminate the leftover processes and clean up the workspace. If this cannot be properly handled at the test level, it should be handled at the TKG/Jenkins level.
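
A rough sketch of the kind of pre-cleanup step this could be, assuming the leftovers are the TFB containers attached to the tfb Docker network (the filters and names here are illustrative, not what the framework does today):

	# Illustrative only: force-remove any containers still attached to the tfb network,
	# then remove the network itself so the next job starts from a clean state.
	docker ps -aq --filter "network=tfb" | xargs -r docker rm -f
	docker network rm tfb 2>/dev/null || true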

@joeyleeeeeee97 joeyleeeeeee97 changed the title Add FrameworkBenchmarks, starting from netty Add FrameworkBenchmarks performance test Jun 10, 2022
joeyleeeeeee97 commented Jun 10, 2022

@llxia @smlambert
I have tested this locally.

The external tests run each framework's own unit tests to ensure the frameworks behave correctly.
The perf framework tests stress common workloads to ensure the frameworks perform well on the test VMs.
  • The pitfall is that some frameworks hardcode compile target 11, so most of them won't run on jdk8.
  • I am copying TEST_JDK_HOME into docker to finish the benchmark, trying to minimize our modifications (see the sketch below).
  • These benchmarks are very sensitive and take a long time to finish, so it might be worth enabling a few of them as a sanity check during weekly or GA releases.
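
For illustration only, passing the JDK under test into the TFB toolset container could look roughly like the following (the image and flags follow the TFB documentation; the /opt/java/openjdk path and the exact mechanism used by this PR are assumptions):

	# Illustrative sketch: mount the JDK under test into the TFB toolset container
	# (the target path is an assumption) and run a single framework benchmark.
	docker run --rm --network tfb \
		-v /var/run/docker.sock:/var/run/docker.sock \
		-v ${TEST_JDK_HOME}:/opt/java/openjdk \
		techempower/tfb --mode benchmark --test netty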

@smlambert

The issue seen with not being able to delete the workspace seems to be related to the results directory being created as root (instead of jenkins). See adoptium/infrastructure#2597 (comment)

llxia commented Jun 13, 2022

@joeyleeeeeee97 do you see the results folder in your local run? I suspect it is produced by Docker, since it mounts the local dir: https://github.com/TechEmpower/FrameworkBenchmarks#explanation-of-the-tfb-script

@joeyleeeeeee97

@llxia Yes. Should I change the permissions on results in the running script so the results can be removed?

llxia commented Jun 14, 2022

Yes, we need to change it to jenkins. Currently, the results folder is owned by root, so it cannot be removed by the jenkins user.

@joeyleeeeeee97

@llxia I added an extra step to make the results removable:

		docker run --entrypoint /bin/bash --rm --network tfb \
			-v /var/run/docker.sock:/var/run/docker.sock \
			-v ${TEST_RESROOT}:/FrameworkBenchmarks \
			techempower/tfb -c "chmod -R 777 /FrameworkBenchmarks/results"

FrameworkBenchmarks itself runs as root; I raised an issue with them about it.

joeyleeeeeee97 commented Jun 20, 2022

http://ci.dragonwell-jdk.io/job/Test_openjdk11_dragonwell_special.perf_x86-64_linux/9/console

18:05:44 TEST TARGETS SUMMARY
18:05:44 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
18:05:44 PASSED test targets:
18:05:44 FrameworkBenchmarks-netty_0
18:05:44 FrameworkBenchmarks-quarkus_0
18:05:44 FrameworkBenchmarks-spring_0
18:05:44 FrameworkBenchmarks-spring-jpa_0
18:05:44 FrameworkBenchmarks-spark_0
18:05:44 FrameworkBenchmarks-servlet3_0
18:05:44 FrameworkBenchmarks-servlet_0
18:05:44
18:05:44 TOTAL: 7 EXECUTED: 7 PASSED: 7 FAILED: 0 DISABLED: 0 SKIPPED: 0

@llxia @ShelleyLambert
This ran as a special.perf test; please take a look.

@smlambert left a comment


Thanks for your work on this @joeyleeeeeee97 - a couple of notes to work out, since this is a bit of a 'hybrid' test to add to the suite: it is a perf test that requires Docker. Up until now, we have been using the label ci.role.perf for performance benchmarks to guarantee that the hardware the benchmarks run on is the same (so that when we look at historical results, they do not fluctuate wildly due to hardware differences). For our external tests, all of which run in containers, we append the sw.tool.docker label to select machines that have Docker installed (as not all machines attached to our Jenkins server have it).

We will need to think about the right approach to ensure this test will be sent to a machine with Docker installed and whether or not it needs to be pinned to specific hardware (are the benchmarks sensitive to that?).

Also, you can remove the changes to the testenv.properties file as those are not needed/desired.

joeyleeeeeee97 commented Jun 23, 2022

@smlambert
Update:
Removed the changes in testenv.
This test is not pinned to any specific hardware; it only needs Docker.

To dispatch these to Docker-capable machines, maybe we could:

  1. Install Docker on all ci.role.perf machines, if this is acceptable.
  2. Add DOCKER_REQUIRED when running some test pipelines, but this also relies on having machines labeled with both ci.role.perf and sw.tool.docker; currently we have these:
		if (GROUP == "external" || (GROUP == "perf" && LEVEL == "special")) {
			DOCKER_REQUIRED = true
			DOCKERIMAGE_TAG = "nightly"
			if (LEVEL == "sanity" || LEVEL == "extended") {
				EXTRA_DOCKER_ARGS = '-v ${TEST_JDK_HOME}:/opt/java/openjdk'
			}
		}
For example, if we pin `special.perf` tests to these machines, does this affect the bumbleBench test results?

Personally, I think adding DOCKER_REQUIRED is better, because in the long term we might have other tests that rely on Docker; for example, some system tests and other perf tests might need it.

@smlambert

I am glad you are asking these questions @joeyleeeeeee97 - I agree we need more machines that can support container-based testing (i.e. with Docker installed).

If we pin special.perf to https://ci.adoptopenjdk.net/label/sw.tool.docker&&ci.role.perf/ that implies we can only run these benchmarks on 2 platforms at the project (x86-64_linux and s390x_linux); when the tests get sent to the other platforms (which, based on the playlist, we will try to do since there are no platform restrictions), they will fail with "no machines available with that tag".

I think we have to ensure that we have Docker installed on a broad base of platforms, and then add a platformCapabilities tag to the playlist in this PR to exclude the systems where we are not able to run Docker.

joeyleeeeeee97 commented Jun 27, 2022

  • ensure that we have Docker installed on a broad base of platforms
    I think we can follow this up in the infrastructure issue; I will try to add these machines using Ansible.

  • add a platformCapabilities tag to the playlist in this PR to exclude the systems
    Currently only the x86 arch is enabled:

		<platformRequirements>os.linux,arch.x86,bits.64</platformRequirements>

Do we have an OR semantic in platformRequirements? Something like:

		<platformRequirements>os.linux,(arch.x86|arch.390),bits.64</platformRequirements>
