
Standardising Benchmarks, with support for nanobench as an option for its backend #6448

Draft · vrnimje wants to merge 21 commits into master

Conversation

@vrnimje (Contributor) commented Feb 27, 2024

Background context: #6357

Proposed Changes

  • Added a benchmarking interface, perftests_report, which also supports nanobench as the backend
  • Added an option to use nanobench, or default to a timed loop, as the backend for performance measurement
  • Fixed compilation issues with the above-mentioned PR

@StellarBot

Can one of the admins verify this patch?


codacy-production bot commented Feb 27, 2024

Coverage summary from Codacy

See diff coverage on Codacy

| Coverage variation | Diff coverage |
| --- | --- |
| +0.00% | 74.00% |

Coverage variation details

| Commit | Coverable lines | Covered lines | Coverage |
| --- | --- | --- | --- |
| Common ancestor (eb2bf57) | 217975 | 185524 | 85.11% |
| Head (2575d63) | 190903 (-27072) | 162490 (-23034) | 85.12% (+0.00%) |

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>

Diff coverage details

| | Coverable lines | Covered lines | Diff coverage |
| --- | --- | --- | --- |
| Pull request (#6448) | 100 | 74 | 74.00% |

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%


Codacy will stop sending the deprecated coverage status from June 5th, 2024. Learn more

@isidorostsa (Contributor) commented Mar 5, 2024

Congratulations on getting nanobench into the build system!
I left some minor corrections and a suggestion. Don't worry about the tests; they are not failing because of your changes.
Great PR overall :)

@vrnimje (Author) commented Mar 6, 2024

> Congratulations on getting nanobench into the build system! I left some minor corrections and a suggestion. Don't worry about the tests; they are not failing because of your changes. Great PR overall :)

Thanks for the response, but I am unable to see your suggestions in the PR. Could you please check from your side?

if(NOT nanobench_POPULATED)
  fetchcontent_populate(nanobench)
endif()
set(NANOBENCH_ROOT ${nanobench_SOURCE_DIR})
Contributor

Here you could use Nanobench_ROOT to prevent CMake from complaining.
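A sketch of what that suggestion could look like, assuming the FetchContent variables shown above. CMake's CMP0074 package-root policy expects the `<PackageName>_ROOT` spelling to match the name later passed to find_package, so the package name `Nanobench` here is a hypothetical example:

```cmake
if(NOT nanobench_POPULATED)
  fetchcontent_populate(nanobench)
endif()
# Matching the case of the package name avoids a CMP0074 warning
# if find_package(Nanobench) is used later (hypothetical name).
set(Nanobench_ROOT ${nanobench_SOURCE_DIR})
```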

GIT_TAG v4.3.11
GIT_SHALLOW TRUE)

# fetchcontent_makeavailable(nanobench)
Contributor

Did this not work initially?

@vrnimje (Author) Mar 6, 2024

No. I was just testing whether nanobench could be pulled into the build using this call alone.
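For reference, the one-call route that the commented-out line refers to looks roughly like this. This is a generic FetchContent sketch, not necessarily the form HPX's build system needs:

```cmake
include(FetchContent)
FetchContent_Declare(nanobench
  GIT_REPOSITORY https://github.com/martinus/nanobench.git
  GIT_TAG v4.3.11
  GIT_SHALLOW TRUE)
# Declares, downloads, and add_subdirectory()s the project in one step;
# the manual Populate path used in this PR gives finer control over
# what actually gets added to the build.
FetchContent_MakeAvailable(nanobench)
```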

COMPONENT cmake
)

# add_library(nanobench::nanobench ALIAS nanobench)
Contributor

Do you think you should uncomment "NAMESPACE" and "add_library" to follow the namespace+alias pattern used in other files?

e.g.

add_library(Asio::asio ALIAS asio)
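Applied to nanobench, the same namespace+alias pattern would look roughly like the following (assuming a plain `nanobench` target already exists in the build):

```cmake
# Consumers can then link against the namespaced name uniformly,
# whether nanobench comes from FetchContent or an installed package.
add_library(nanobench::nanobench ALIAS nanobench)
```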

]
}{{^-last}},{{/-last}}
{{/result}} ]
})DELIM";
Contributor

This format is suited for internal consumption, e.g. for use with a Python plotting tool. Could we use a template to output the average in a human-readable format?

Contributor Author

Yes. We could then make use of a command-line option in the benchmarks to invoke the internal template.
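As an illustration, a minimal human-readable nanobench mustache template might look like the sketch below. The token names (`average(elapsed)`, `median(elapsed)`) follow nanobench's documented measurement accessors and should be double-checked against its render documentation:

```
{{#result}}{{name}}: {{average(elapsed)}} s/iteration (median {{median(elapsed)}})
{{/result}}
```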

Contributor

It may be possible to do this with a global variable through the HPX command-line argument facilities, i.e. without having to modify each benchmark to pass the template as an explicit variable. I will get back to you when I find out.

@vrnimje (Author) Mar 7, 2024

Oh, so something like --hpx:threads?
I will look into this.

if (vm.count("hpx:threads"))
{
    threads_str = vm["hpx:threads"].as<std::string>();
    if ("all" == threads_str)
    {
        threads = init_threads;
        if (batch_threads != static_cast<std::size_t>(-1))
        {
            threads = batch_threads;
        }
    }
    else if ("cores" == threads_str)
    {
        threads = init_cores;
        if (batch_threads != static_cast<std::size_t>(-1))
        {
            threads = batch_threads;
        }
    }
    else
    {
        threads = hpx::util::from_string<std::size_t>(threads_str);
    }
    if (threads == 0)
    {
        throw hpx::detail::command_line_error(
            "Number of --hpx:threads must be greater than 0");
    }
#if defined(HPX_HAVE_MAX_CPU_COUNT)
    if (threads > HPX_HAVE_MAX_CPU_COUNT)
    {
        // clang-format off
        throw hpx::detail::command_line_error("Requested more than "
            HPX_PP_STRINGIZE(HPX_HAVE_MAX_CPU_COUNT)" --hpx:threads "
            "to use for this application, use the option "
            "-DHPX_WITH_MAX_CPU_COUNT=<N> when configuring HPX.");
        // clang-format on
    }
#endif
}

@vrnimje (Author) Mar 16, 2024

> It may be possible to do this with a global variable through the hpx command line arguments facilities, e.g. without having to modify benchmarks to pass the template as an explicit variable, I will get back to you when I find out.

I have added an hpx:verbose_bench command-line option in dd126c6 for this. Here is the corresponding CI run testing it: test-run

Signed-off-by: Vedant <vedantnimjed@gmail.com>

…x perf test
…minor changes to plotting script

* Further simplified the default template for the benchmark

…accordingly
…orrect option
4 participants