Problems to execute pyperformance benchmark #769

Open
jirka-h opened this issue Jan 12, 2024 · 1 comment

jirka-h commented Jan 12, 2024

Hi Michael,

I'm running into a weird problem with the pts/pyperformance benchmark. Running it directly like this:

./phoronix-test-suite batch-run pts/pyperformance
or
./phoronix-test-suite batch-run pts/single-threaded

works fine. However, when it is called from another suite, it fails:

./phoronix-test-suite batch-run my
...
PyPerformance 1.0.0:
    pts/pyperformance-1.0.2
    Test 18 of 20
    Estimated Trial Run Count:    3                     
    Estimated Test Run-Time:      3 Minutes             
    Estimated Time To Completion: 9 Minutes [00:33 CET] 
        Started Run 1 @ 00:24:48
        The test quit with a non-zero exit status.
        Started Run 2 @ 00:24:52
        The test quit with a non-zero exit status.
        Started Run 3 @ 00:24:56
        The test quit with a non-zero exit status.
        E: pyperformance run: error: argument -b/--benchmarks: expected one argument

The suite my is defined as follows and stored at /var/lib/phoronix-test-suite/test-suites/local/my/suite-definition.xml:

<?xml version="1.0"?>
<!--Phoronix Test Suite v9.8.0m1-->
<PhoronixTestSuite>
  <SuiteInformation>
    <Title>My</Title>
    <Version>1.0.0</Version>
    <TestType>System</TestType>
    <Description>My Test Suite</Description>
    <Maintainer>Jirka Hladky</Maintainer>
  </SuiteInformation>
  <Execute>
    <Test>pts/single-threaded</Test>
    <Mode>BATCH</Mode>
  </Execute>
</PhoronixTestSuite>

On successful runs, the benchmark is started like this:

pyperformance run -r -b nbody
pyperformance run -r -b pathlib
...

When it fails, pyperformance is invoked without the subtest name:

pyperformance run -r -b
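
That bare -b is rejected by Python's argparse, since -b/--benchmarks requires a value. A minimal sketch (a toy parser for illustration, not pyperformance's actual CLI) that reproduces the same message:

import argparse

# Toy parser mimicking only the relevant option.
parser = argparse.ArgumentParser(prog="pyperformance run")
parser.add_argument("-b", "--benchmarks", help="benchmark name(s) to run")

# Passing -b with no value prints the usage line plus
# "pyperformance run: error: argument -b/--benchmarks: expected one argument"
# and exits with status 2, which matches the non-zero exit status in the log above.
parser.parse_args(["-b"])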

How can I fix this? I want my suite definition to include pts/single-threaded, which in turn includes pts/pyperformance, but I don't know how to stop pyperformance from being run without the subtest names.

Thanks a lot!
Jirka

jirka-h (Author) commented Jan 15, 2024

I have found a workaround. When I explicitly add the test pts/pyperformance to the suite definition, like this:

<Execute>
  <Test>pts/pyperformance</Test>
  <Mode>BATCH</Mode>
</Execute>
<Execute>
  <Test>pts/server-cpu-tests</Test>
</Execute>

then:

  1. pts/pyperformance is called from pts/server-cpu-tests and fails.
  2. pts/pyperformance is then called correctly, with all subtests.

This is far from perfect, but at least I can now test pts/pyperformance together with all the pts/server-cpu-tests benchmarks.
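
For completeness, the resulting /var/lib/phoronix-test-suite/test-suites/local/my/suite-definition.xml would presumably read like this (the same SuiteInformation header as above, with only the Execute blocks changed):

<?xml version="1.0"?>
<!--Phoronix Test Suite v9.8.0m1-->
<PhoronixTestSuite>
  <SuiteInformation>
    <Title>My</Title>
    <Version>1.0.0</Version>
    <TestType>System</TestType>
    <Description>My Test Suite</Description>
    <Maintainer>Jirka Hladky</Maintainer>
  </SuiteInformation>
  <Execute>
    <Test>pts/pyperformance</Test>
    <Mode>BATCH</Mode>
  </Execute>
  <Execute>
    <Test>pts/server-cpu-tests</Test>
  </Execute>
</PhoronixTestSuite>

With that file in place, the run looks like this: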

PyPerformance 1.0.0:
    pts/pyperformance-1.0.2
    Test 101 of 118
    Estimated Trial Run Count:    3                             
    Estimated Test Run-Time:      4 Minutes                     
    Estimated Time To Completion: 1 Hour, 7 Minutes [08:04 CET] 
    Test will timeout after ~8 minutes if any individual run incomplete/hung.
        Started Run 1 @ 06:57:39
        The test quit with a non-zero exit status.
        Started Run 2 @ 06:57:44
        The test quit with a non-zero exit status.
        Started Run 3 @ 06:57:48
        The test quit with a non-zero exit status.
        E: pyperformance run: error: argument -b/--benchmarks: expected one argument
PyPerformance 1.0.0:
    pts/pyperformance-1.0.2 [Benchmark: go]
    Test 102 of 118
    Estimated Trial Run Count:    3                             
    Estimated Test Run-Time:      4 Minutes                     
    Estimated Time To Completion: 1 Hour, 4 Minutes [08:01 CET] 
    Test will timeout after ~8 minutes if any individual run incomplete/hung.
        Started Run 1 @ 06:57:58
        Started Run 2 @ 06:59:20
        Started Run 3 @ 07:00:18

    Benchmark: go:
        303
        303
        303

    Average: 303 Milliseconds
    Deviation: 0.00%
