p-ranav / criterion

Microbenchmarking for Modern C++

License: MIT License

C++ 97.31% CMake 0.86% Python 1.80% Shell 0.03%
microbenchmarks microbenchmark cpp17 cpp17-library mit header-only single-header-lib single-header single-header-library benchmarking

criterion's Introduction

Highlights

Criterion is a micro-benchmarking library for modern C++.

  • Convenient static registration macros for setting up benchmarks
  • Parameterized benchmarks (e.g., vary input size)
  • Statistical analysis across multiple runs
  • Requires a compiler with support for the C++17 standard or newer
  • Header-only library; a single-header version is available at single_include/
  • MIT License

Table of Contents

  • Getting Started
  • Simple Benchmark
  • Passing Arguments
  • Passing Arguments (Part 2)
  • CRITERION_BENCHMARK_MAIN and Command-line Options
  • Exporting Results (csv, json, etc.)
  • Building Library and Samples
  • Generating Single Header
  • Contributing
  • License

Getting Started

Let's say we have this merge sort implementation that needs to be benchmarked.

template<typename RandomAccessIterator, typename Compare>
void merge_sort(RandomAccessIterator first, RandomAccessIterator last,
                Compare compare, std::size_t size) {
  if (size < 2) return;
  auto middle = first + size / 2;
  merge_sort(first, middle, compare, size / 2);
  merge_sort(middle, last, compare, size - size/2);
  std::inplace_merge(first, middle, last, compare);
}
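
As a quick sanity check (a small driver of our own, not part of the original README), the function can be exercised on a short vector before wiring it into a benchmark:

// Quick sanity check (our own driver, not part of the original README).
// Assumes <algorithm>, <cassert>, <functional>, and <vector> are included
// at the top of the file, above the merge_sort definition.
int main() {
  std::vector<int> v{5, 3, 1, 4, 2};
  merge_sort(v.begin(), v.end(), std::less<int>(), v.size());
  assert(std::is_sorted(v.begin(), v.end())); // v is now {1, 2, 3, 4, 5}
  return 0;
}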

Simple Benchmark

Include <criterion/criterion.hpp> and you're good to go.

  • Use the BENCHMARK macro to declare a benchmark
  • Use SETUP_BENCHMARK and TEARDOWN_BENCHMARK to perform setup and teardown tasks
    • These tasks are not part of the measurement
#include <criterion/criterion.hpp>

BENCHMARK(MergeSort)
{
  SETUP_BENCHMARK(
    const auto size = 100;
    std::vector<int> vec(size, 0); // vector of size 100
  )
 
  // Code to be benchmarked
  merge_sort(vec.begin(), vec.end(), std::less<int>(), size);
  
  TEARDOWN_BENCHMARK(
    vec.clear();
  )
}

CRITERION_BENCHMARK_MAIN()
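
Because benchmarks are registered statically, several of them can live in the same binary. As a rough sketch (this std::sort baseline is our own illustration, not part of the original sample), a second benchmark could sit alongside MergeSort:

// Sketch of a second benchmark in the same file, using std::sort as a baseline.
// Assumes <algorithm> and <vector> are included.
BENCHMARK(StdSort)
{
  SETUP_BENCHMARK(
    const auto size = 100;
    std::vector<int> vec(size, 0);
  )

  // Code to be benchmarked
  std::sort(vec.begin(), vec.end(), std::less<int>());

  TEARDOWN_BENCHMARK(
    vec.clear();
  )
}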

What if we want to run the MergeSort benchmark on a variety of input sizes?

Passing Arguments

  • The BENCHMARK macro can take typed parameters
  • Use GET_ARGUMENT(n) to get the nth argument passed to the benchmark
  • For benchmarks that require arguments, use INVOKE_BENCHMARK_FOR_EACH to provide them
#include <criterion/criterion.hpp>

BENCHMARK(MergeSort, std::size_t) // <- one parameter to be passed to the benchmark
{
  SETUP_BENCHMARK(
    const auto size = GET_ARGUMENT(0); // <- get the argument passed to the benchmark
    std::vector<int> vec(size, 0);
  )
 
  // Code to be benchmarked
  merge_sort(vec.begin(), vec.end(), std::less<int>(), size);
  
  TEARDOWN_BENCHMARK(
    vec.clear();
  )
}

// Run the above benchmark for a number of inputs:

INVOKE_BENCHMARK_FOR_EACH(MergeSort,
  ("/10", 10),
  ("/100", 100),
  ("/1K", 1000),
  ("/10K", 10000),
  ("/100K", 100000)
)

CRITERION_BENCHMARK_MAIN()
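
The BENCHMARK macro takes a list of parameter types, so more than one argument can be passed per invocation. The following is only a sketch of what that could look like (MergeSortFilled and its arguments are illustrative, not taken from the original README):

// Sketch: two typed parameters, retrieved with GET_ARGUMENT(0) and GET_ARGUMENT(1)
BENCHMARK(MergeSortFilled, std::size_t, int)
{
  SETUP_BENCHMARK(
    const auto size = GET_ARGUMENT(0);
    const auto fill = GET_ARGUMENT(1);
    std::vector<int> vec(size, fill);
  )

  // Code to be benchmarked
  merge_sort(vec.begin(), vec.end(), std::less<int>(), size);

  TEARDOWN_BENCHMARK(
    vec.clear();
  )
}

INVOKE_BENCHMARK_FOR_EACH(MergeSortFilled,
  ("/100/zeroes", 100, 0),
  ("/100/ones", 100, 1)
)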

Passing Arguments (Part 2)

Let's say we have the following struct and we need to create a std::shared_ptr to it.

struct Song {
  std::string artist;
  std::string title;
  Song(const std::string& artist_, const std::string& title_) :
    artist{ artist_ }, title{ title_ } {}
};

Here are two implementations for constructing the std::shared_ptr:

// Functions to be tested
auto Create_With_New() { 
  return std::shared_ptr<Song>(new Song("Black Sabbath", "Paranoid")); 
}

auto Create_With_MakeShared() { 
  return std::make_shared<Song>("Black Sabbath", "Paranoid"); 
}

We can set up a single benchmark that takes a std::function<> and measures the performance of each implementation, as shown below.

BENCHMARK(ConstructSharedPtr, std::function<std::shared_ptr<Song>()>) 
{
  SETUP_BENCHMARK(
    auto test_function = GET_ARGUMENT(0);
  )

  // Code to be benchmarked
  auto song_ptr = test_function();
}

INVOKE_BENCHMARK_FOR_EACH(ConstructSharedPtr, 
  ("/new", Create_With_New),
  ("/make_shared", Create_With_MakeShared)
)

CRITERION_BENCHMARK_MAIN()
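
Since the benchmark parameter is a std::function<>, any callable convertible to it (a free function, a lambda, or a bound member function) can be used as a candidate. As a small sketch (Create_With_Lambda is a hypothetical name, not from the original README), a lambda could be added to the INVOKE_BENCHMARK_FOR_EACH list above as ("/lambda", Create_With_Lambda):

// Sketch: a lambda candidate, convertible to std::function<std::shared_ptr<Song>()>
auto Create_With_Lambda = []() {
  return std::shared_ptr<Song>(new Song("Black Sabbath", "Iron Man"));
};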

CRITERION_BENCHMARK_MAIN and Command-line Options

CRITERION_BENCHMARK_MAIN() provides a main function that:

  1. Handles command-line arguments
  2. Runs the registered benchmarks
  3. Exports results to a file if requested by the user

Here's the help/man generated by the main function:

foo@bar:~$ ./benchmarks -h

NAME
     ./benchmarks -- Run Criterion benchmarks

SYNOPSIS
     ./benchmarks
           [-w,--warmup <number>]
           [-l,--list] [--list_filtered <regex>] [-r,--run_filtered <regex>]
           [-e,--export_results {csv,json,md,asciidoc} <filename>]
           [-q,--quiet] [-h,--help]
DESCRIPTION
     This microbenchmarking utility repeatedly executes a list of benchmarks,
     statistically analyzing and reporting on the temporal behavior of the executed code.

     The options are as follows:

     -w,--warmup number
          Number of warmup runs (at least 1) to execute before the benchmark (default=3)

     -l,--list
          Print the list of available benchmarks

     --list_filtered regex
          Print a filtered list of available benchmarks (based on user-provided regex)

     -r,--run_filtered regex
          Run a filtered list of available benchmarks (based on user-provided regex)

     -e,--export_results format filename
          Export benchmark results to file. The following are the supported formats.

          csv       Comma separated values (CSV) delimited text file
          json      JavaScript Object Notation (JSON) text file
          md        Markdown (md) text file
          asciidoc  AsciiDoc (asciidoc) text file

     -q,--quiet
          Run benchmarks quietly, suppressing activity indicators

     -h,--help
          Print this help message
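
For example, the filters and export options can be combined on the command line (the benchmark names below are illustrative):

foo@bar:~$ ./benchmarks --list                 # list all registered benchmarks
foo@bar:~$ ./benchmarks -r "MergeSort/1.*"     # run only benchmarks matching the regex
foo@bar:~$ ./benchmarks -w 5 -e md results.md  # 5 warmup runs, export results to Markdown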

Exporting Results (csv, json, etc.)

Benchmark results can be exported to a number of formats: .csv, .json, .md, and .asciidoc.

Use --export_results (or -e) to export results to one of the supported formats.

foo@bar:~$ ./vector_sort -e json results.json -q # run quietly and export to JSON

foo@bar:~$ cat results.json
{
  "benchmarks": [
    {
      "name": "VectorSort/100",
      "warmup_runs": 2,
      "iterations": 2857140,
      "mean_execution_time": 168.70,
      "fastest_execution_time": 73.00,
      "slowest_execution_time": 88809.00,
      "lowest_rsd_execution_time": 84.05,
      "lowest_rsd_percentage": 3.29,
      "lowest_rsd_index": 57278,
      "average_iteration_performance": 5927600.84,
      "fastest_iteration_performance": 13698630.14,
      "slowest_iteration_performance": 11260.12
    },
    {
      "name": "VectorSort/1000",
      "warmup_runs": 2,
      "iterations": 2254280,
      "mean_execution_time": 1007.70,
      "fastest_execution_time": 640.00,
      "slowest_execution_time": 102530.00,
      "lowest_rsd_execution_time": 647.45,
      "lowest_rsd_percentage": 0.83,
      "lowest_rsd_index": 14098,
      "average_iteration_performance": 992355.48,
      "fastest_iteration_performance": 1562500.00,
      "slowest_iteration_performance": 9753.24
    },
    {
      "name": "VectorSort/10000",
      "warmup_runs": 2,
      "iterations": 259320,
      "mean_execution_time": 8833.26,
      "fastest_execution_time": 6276.00,
      "slowest_execution_time": 114548.00,
      "lowest_rsd_execution_time": 8374.15,
      "lowest_rsd_percentage": 0.11,
      "lowest_rsd_index": 7905,
      "average_iteration_performance": 113208.45,
      "fastest_iteration_performance": 159337.16,
      "slowest_iteration_performance": 8729.96
    }
  ]
}

Building Library and Samples

cmake -Hall -Bbuild
cmake --build build

# run `merge_sort` sample
./build/samples/merge_sort/merge_sort

Generating Single Header

python3 utils/amalgamate/amalgamate.py -c single_include.json -s .

Contributing

Contributions are welcome; have a look at the CONTRIBUTING.md document for more information.

License

The project is available under the MIT license.

criterion's People

Contributors

p-ranav

criterion's Issues

feature: ability to measure throughput

Hi, great work, and crisp coding style! I have a question: I am trying to add throughput measurement functionality -- so I could use your work in H5CPP and other HDF5 performance measurement code -- and I am wondering what the best way to tackle this problem is.

Would it be possible to return a long double from the criterion::benchmark_config::FN = std::function<...>? Or perhaps you have some other suggestion?
By no means am I here to distract you from your work -- but I am already a consumer of your argparse, criterion is very similar to how I would organise a code base of this type, and most importantly criterion can provide almost everything that IO benchmarking would need.

I understand if the answer is no, but I thought it is better to ask.
best wishes: steven

A few small issues on Windows

Hello there,

I've been playing around with criterion on Windows and ran into a few small problems.

The first is that it looks like criterion is using designated initializers. These were added to C some time ago (C11, I think) but are only making their way into C++ in C++20. I think they've existed as extensions in GCC/clang for a while too, but they're not standard. To use these I'd either bump the required version of the library (I updated my CMakeLists.txt file to use `target_compile_features(... cxx_std_20)`) or remove the use of them for now (the code would be a little more verbose but could be used more widely right now). I also hit some warnings about the use of some macros, but I don't remember them off the top of my head (I was building with Visual Studio 2019).

The other problem I ran into is that the loading bar shown while running a benchmark doesn't display correctly on Windows. It emits a newline character for each step, which means your results go scrolling off the screen pretty quickly. I'm not sure what the fix would be, but there is probably a cross-platform way to get it working on macOS/Linux and Windows (I was using the new Microsoft Terminal app).

Otherwise it worked like a charm and has been very interesting to experiment with!

Cheers,

Tom

Compile fails with clang 16 C++20

[ 98%] Built target unittests
In file included from /home/davidd/prog/uri/examples/benchmarks.cpp:36:
In file included from /home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/criterion.hpp:11:
In file included from /home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/main.hpp:9:
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/structopt.hpp:1106:47: fatal error: instantiating fold expression with 257 arguments exceeded expression nesting limit of 256
constexpr int count = ((valid[I] ? 1 : 0) + ...);
~~~~~~~~~~~~~~~~~~~~~~^~~~
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/structopt.hpp:1119:34: note: in instantiation of function template specialization 'magic_enum::detail::values<criterion::options::export_options::format_type, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256>' requested here
inline constexpr auto values_v = values(
^
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/structopt.hpp:1122:62: note: in instantiation of variable template specialization 'magic_enum::detail::values_v<criterion::options::export_options::format_type>' requested here
template <typename E> inline constexpr std::size_t count_v = values_v<E>.size();
^
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/structopt.hpp:1401:25: note: in instantiation of variable template specialization 'magic_enum::detail::count_v<criterion::options::export_options::format_type>' requested here
static_assert(detail::count_v<D> > 0,
^
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/structopt.hpp:2566:41: note: in instantiation of function template specialization 'magic_enum::enum_cast<criterion::options::export_options::format_type>' requested here
auto maybe_enum_value = magic_enum::enum_cast<T>(arguments[next_index]);
^
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/structopt.hpp:2213:16: note: in instantiation of function template specialization 'structopt::details::parser::parse_enum_argument<criterion::options::export_options::format_type>' requested here
result = parse_enum_argument<T>(name);
^
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/structopt.hpp:2648:31: note: (skipping 3 contexts in backtrace; use -ftemplate-backtrace-limit=0 to see all)
auto [value, success] = parse_argument(field_name.c_str());
^
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/structopt.hpp:2248:31: note: in instantiation of function template specialization 'structopt::details::parser::parse_argument<criterion::options::export_options>' requested here
auto [value, success] = parse_argument(name);
^
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/structopt.hpp:2701:19: note: in instantiation of function template specialization 'structopt::details::parser::parse_optional_argument<criterion::options::export_options>' requested here
value = parse_optional_argument(name);
^
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/main.hpp:61:1: note: in instantiation of function template specialization 'structopt::details::parser::operator()<std::optional<criterion::options::export_options>>' requested here
STRUCTOPT(criterion::options, warmup, list, list_filtered, run_filtered, export_results, quiet,
^
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/structopt.hpp:2860:19: note: expanded from macro 'STRUCTOPT'
#define STRUCTOPT VISITABLE_STRUCT
^
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/structopt.hpp:793:27: note: expanded from macro 'VISITABLE_STRUCT'
VISIT_STRUCT_PP_MAP(VISIT_STRUCT_MEMBER_HELPER, __VA_ARGS__)
^
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/structopt.hpp:2929:12: note: in instantiation of function template specialization 'structopt::app::parse<criterion::options>' requested here
return parse(arguments);
^
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/main.hpp:75:49: note: in instantiation of function template specialization 'structopt::app::parse<criterion::options>' requested here
auto options = structopt::app(program_name).parse<criterion::options>(argc, argv);
^
/home/davidd/prog/uri/build/_deps/criterion-src/include/criterion/details/structopt.hpp:1106:47: note: use -fbracket-depth=N to increase maximum nesting level
constexpr int count = ((valid[I] ? 1 : 0) + ...);
^
1 error generated.
make[2]: *** [CMakeFiles/benchmarks.dir/build.make:76: CMakeFiles/benchmarks.dir/examples/benchmarks.cpp.o] Error 1
