RHEL-131832

Benchmarking for libstdc++



      Goal

      • Develop a suite of microbenchmarks for libstdc++
        • As a libstdc++ developer I want to be able to measure the performance of library components. It would help when making changes to ensure that performance regressions aren't introduced and that optimizations actually have benefits. It would also allow us to track performance over time. A performant standard library implementation is important to customers and gives us a competitive advantage over other operating systems.

      Acceptance criteria

      Replace the current make check-performance tests with something better: clearer output that makes it more useful for performance measurement and optimization. See glibc's make bench for ideas, and see what libcxx is doing for its own benchmarking.

      • Running benchmarks should be as simple as make check-performance or make bench, so that developers are not discouraged from running them.
      • Selecting a subset of benchmarks to run should be easy (naming tests in testsuite/testsuite_files_performance is acceptable, but could be better; e.g. see the --benchmark_filter=regex option of Google Benchmark, or the BENCHSET makefile variable of glibc's benchtests).
      • Investigate whether it's better to run a benchmark in a loop for a fixed time and report performance based on how many times it looped in that time, or to run for a fixed number of loops and report the total time taken (see the sketches after this list).
      • The current check-performance output shows times in seconds, which is only meaningful for long-running tests where you can compare e.g. 22s to 25s. It's also unclear what a "good" time is for a given test - is 25s good, or much too slow? It would be better to report the time for a single iteration of a microbenchmark, measured in appropriate units (e.g. nanoseconds or microseconds). See Google Benchmark.
      • It should be easy to run the benchmarks with different compilation flags, e.g. different -std and -O options, and with -m32. Currently that involves editing scripts/check_performance or scripts/testsuite_flags.in.
      • There's a missing prerequisite on libtestc++.a for the current performance tests: that library is needed but won't be built by make check-performance, which just assumes you've previously run make check to build it. That also assumes it was built with the same flags; e.g. if make check finishes by testing with -m32, the library will have been built for the wrong arch.
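
      For comparison, here is a minimal sketch of what one microbenchmark could look like with Google Benchmark (the benchmark name and string sizes are made up for illustration, and this assumes the library from https://github.com/google/benchmark is available):

        #include <benchmark/benchmark.h>
        #include <string>

        // Measure std::string::find for a character near the end of the string.
        static void BM_StringFind(benchmark::State& state)
        {
          std::string haystack(state.range(0), 'a');
          haystack.back() = 'b';
          for (auto _ : state)
          {
            auto pos = haystack.find('b');
            benchmark::DoNotOptimize(pos); // keep the search from being optimized away
          }
        }
        // Run the same benchmark for several string sizes.
        BENCHMARK(BM_StringFind)->Arg(64)->Arg(4096)->Arg(1 << 20);

        BENCHMARK_MAIN();

      Built with e.g. g++ -O2 bench.cc -lbenchmark -lpthread, the binary prints one line per case with the per-iteration time in nanoseconds, decides the iteration count itself by running each case until the measurement is stable, and accepts --benchmark_filter=StringFind/4096 to select a subset - i.e. it already has the properties the criteria above ask for.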
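
      And a hand-rolled illustration of the "fixed time budget" strategy: loop until a deadline, count iterations, then report nanoseconds per iteration. The run_for helper and the 200ms budget are hypothetical, not existing libstdc++ code:

        #include <chrono>
        #include <cstddef>
        #include <cstdio>
        #include <string>

        // Run `work` repeatedly until the time budget is exhausted, then
        // report the average time per iteration.
        template<typename Work>
        void run_for(const char* name, Work work)
        {
          using clock = std::chrono::steady_clock;
          const auto budget = std::chrono::milliseconds(200); // fixed time budget
          std::size_t iters = 0;
          const auto start = clock::now();
          auto now = start;
          do {
            work();
            ++iters;
            now = clock::now();
          } while (now - start < budget);
          const auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(now - start);
          std::printf("%s: %zu iterations, %.1f ns/iter\n",
                      name, iters, double(ns.count()) / iters);
        }

        int main()
        {
          std::string s(1000, 'x');
          run_for("string::find", [&]{ volatile auto pos = s.find('y'); (void)pos; });
        }

      The fixed-time approach gives every benchmark a predictable runtime regardless of how fast the operation is; the fixed-count approach is simpler, but makes cheap operations finish too quickly to measure and expensive ones run for too long. A real harness would also check the clock less often than once per iteration to reduce measurement overhead.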

      Components that should be covered:

      • std::string - both modifying strings, and searching strings
      • containers, especially std::vector
      • std::format (and to_string and to_chars)
      • std::filesystem::path
      • concurrency - semaphores, atomic wait/notify
      • std::regex (low priority because we know it's bad)
      • memory pool resources?

      See libstdc++-v3/testsuite/performance/* for the existing benchmarks.
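
      For reference, those existing tests follow roughly this pattern, using the counters from the testsuite's testsuite_performance.h header (a condensed sketch of the common shape, not any specific test):

        #include <testsuite_performance.h>
        #include <string>

        int main()
        {
          using namespace __gnu_test;
          time_counter time;
          resource_counter resource;

          std::string s(1000, 'a');
          start_counters(time, resource);
          for (int i = 0; i < 1000000; ++i)
            s.find('b');                    // the operation under test
          stop_counters(time, resource);
          // Records elapsed wall/user/system time (in seconds) and resource
          // usage in the harness's results - the seconds-based output that
          // the acceptance criteria above want replaced.
          report_performance(__FILE__, "find", time, resource);
        }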
