ucbrise / piranha

Piranha: A GPU Platform for Secure Computation

License: MIT License

Makefile 0.55% Dockerfile 0.05% C++ 77.22% Python 4.45% Shell 0.48% Cuda 17.25%
gpu-acceleration multi-party-computation privacy-preserving-machine-learning

piranha's Introduction

Piranha: A GPU Platform for Secure Computation

[cute cuddly PIRANHA >:D, courtesy of Vivian Fang @ vivi.sh]

Piranha is a C++-based platform for accelerating secure multi-party computation (MPC) protocols on the GPU in a protocol-independent manner. It is designed both for MPC developers, providing a modular structure for easily adding new protocol implementations, and for secure application developers, allowing execution on top of any Piranha-implemented protocol. This repo currently includes a secure ML inference and training application, which you can find in /nn.

Piranha is described in more detail in our USENIX Security '22 paper! If you have questions, please create git issues; you can also reach out to [email protected], though replies may take a while.

USENIX artifact evaluation badges: Available, Functional, Reproduced

Warning: This is an academic proof-of-concept prototype and has not received careful code review. This implementation is NOT ready for production use.

Build

This project requires an NVIDIA GPU and assumes you have your GPU drivers and the NVIDIA CUDA Toolkit already installed. The following has been tested on AWS with the Deep Learning Base AMI (Ubuntu 18.04) Version 53.5.

  1. Check out external modules:
git submodule update --init --recursive
  2. Build CUTLASS:
cd ext/cutlass
mkdir build && cd build
cmake .. -DCUTLASS_NVCC_ARCHS=<YOUR_GPU_ARCH_HERE> -DCMAKE_CUDA_COMPILER_WORKS=1 -DCMAKE_CUDA_COMPILER=<YOUR NVCC PATH HERE>
make -j
  3. Install GTest. We use it for unit testing:
sudo apt install libgtest-dev libssl-dev
cd /usr/src/gtest
sudo mkdir build
cd build
sudo cmake ..
sudo make
sudo make install
  4. Create some necessary directories:
mkdir output; mkdir files/MNIST; mkdir files/CIFAR10
  5. Download the MNIST/CIFAR10 datasets, if you plan to use them. This step might take a while:
cd scripts
sudo pip install torch torchvision
python download_mnist.py
python download_cifar10.py
  6. Build Piranha at a specific fixed-point precision and for a particular protocol. 3-party replicated secret sharing is the default and doesn't require a command-line flag; pass -DTWOPC or -DFOURPC to build the 2-party or 4-party protocol instead:
make -j8 PIRANHA_FLAGS="-DFLOAT_PRECISION=<NBITS> -D{TWOPC,FOURPC}"
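For example, a default 3-party build at 26 bits of fixed-point precision (the precision value here is only illustrative) would be:

make -j8 PIRANHA_FLAGS="-DFLOAT_PRECISION=26"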

Run

  1. Copy and set up a run configuration using config.json as a base. It is already set up to perform a 10-epoch SecureML training run; simply specify party IPs in the configuration.

  2. Run Piranha on each machine with a party number (0 -> n_parties - 1):

./piranha -p <PARTY NUM> -c <CONFIG FILE>
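For example, party 0 would run:

./piranha -p 0 -c config.json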

Running locally

You may want to run Piranha on a local machine for development. An example configuration for 3-party local execution can be found at files/samples/localhost_config.json with an accompanying runfile. You can modify the runfile to change which GPUs Piranha uses for each party using the CUDA_VISIBLE_DEVICES environment variable. The script uses GPUs 0-2 by default, but can be changed to run on a single GPU as well. Note that due to contention, hosting several parties on a single GPU will limit the problem sizes you can test and incur some additional overhead.

Start the computation with:

./files/samples/localhost_runner.sh
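To pin all three parties to a single GPU by hand instead, a minimal sketch (assuming each party is otherwise launched exactly as the runfile does) is:

CUDA_VISIBLE_DEVICES=0 ./piranha -p 0 -c files/samples/localhost_config.json &
CUDA_VISIBLE_DEVICES=0 ./piranha -p 1 -c files/samples/localhost_config.json &
CUDA_VISIBLE_DEVICES=0 ./piranha -p 2 -c files/samples/localhost_config.json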

Citation

You can cite the paper using the following BibTeX entry (the paper links to this repo):

@inproceedings{watson22piranha,
    author = {Watson, Jean-Luc and Wagh, Sameer and Popa, Raluca Ada},
    title = {Piranha: A {GPU} Platform for Secure Computation},
    booktitle = {31st USENIX Security Symposium (USENIX Security 22)},
    year = {2022},
    isbn = {978-1-939133-31-1},
    address = {Boston, MA},
    pages = {827--844},
    url = {https://www.usenix.org/conference/usenixsecurity22/presentation/watson},
    publisher = {USENIX Association},
    month = aug,
}

Artifact Evaluation

For our experiments, we use a cluster of GPU-provisioned AWS machines. Reviewers should have credentials to access the environment, but due to resource limits, we can only support one reviewer evaluating at a time. You can run Piranha to regenerate Figures 4, 5, 6, and 7, as well as Tables 2, 3, and 4.

Evaluation runs through experiments/run_experiment.py, which should be executed on the control instance we provide with the required dependencies. Here are the relevant options:

usage: run_experiment.py [-h] [--start] [--stop] [--figure FIGURE] [--table TABLE] [--generate] [--fast] [--verbose]

Run artifact evaluation!

optional arguments:
  -h, --help       show this help message and exit
  --start          Provision cluster for experiments. _Please suspend the cluster while not running experiments :)_
  --stop           Suspend evaluation machines.
  --figure FIGURE  Figure # to run.
  --table TABLE    Table # to run.
  --generate       Generate figure/table images.
  --fast           Run all the (relatively) fast runs, see README for more information
  --verbose        Display verbose run commands, helpful for debugging
  • You can start and stop the cluster with --start and --stop, respectively. Please use these if you're not running evaluation! GPU instances are not cheap and cost about $450/day to keep running.

  • Use the --figure and --table flags to run data generation for each of the paper's figures/tables. They're fairly automatic and should run without intervention.

  • Generate each figure/table with the --generate flag. You can run the evaluation script on partial results and the results will reflect those partial values. Figures generate .png files in artifact_figures/artifact while table replication generates JSON. You can compare to the paper figures/tables generated into artifact_figures/paper from hardcoded data.

  • Very important note on timing. Unfortunately, MPC still requires a significant amount of time (~30 hrs/training run) on a larger network like VGG16. A conservative estimate is that for Figure 5 alone, > 270 computation-hours are required to replicate the full figure. We've included a --fast flag if you'd like to replicate every other datapoint first (will still require a number of compute-hours), then come back to the VGG-based values.

  • Use --verbose if something isn't working and you want to take a look at the raw output or need an error message. In the backend, we use Ansible to communicate with each of the machines in the cluster.
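Putting it together, a typical review session for (say) Figure 5 might look like:

python experiments/run_experiment.py --start
python experiments/run_experiment.py --figure 5 --fast
python experiments/run_experiment.py --generate
python experiments/run_experiment.py --stop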

piranha's People

Contributors

jlwatson


piranha's Issues

Low accuracy of LeNet

Hey. I tried to run the experiments for the LeNet model. However, I noticed that the convergence of LeNet is odd: the test accuracy remains low (far from 98%). I only changed localhost_config.json to run locally, with the following configuration:

  1. 3PC computation based on Falcon
  2. network: files/models/lenet.json
  3. batch size: 128
  4. epoch: 5
  5. lr: 2^-4
  6. precision: 16

The output is as follows:
[screenshot of training output]

Getting min,-nan,avg,-nan,max,-nan during training

Hi. When running piranha with:

  • 3 parties
  • Command to build: make -j12 PIRANHA_FLAGS="-DFLOAT_PRECISION=64"
  • Model: lenet-norelu model with MNIST
  • Localhost with a single GPU, setting all CUDA_VISIBLE_DEVICES of localhost_runner.sh to 0.
  • Localhost config as follows:
{
    "_desc_num_parties": "Number of parties involved in the MPC computation. Should match compiled application protocol.",
    "num_parties": 3,

    "_desc_party_ips": "IP addresses of each party, ordered by party number",
    "party_ips": ["127.0.0.1", "127.0.0.1", "127.0.0.1"],

    "_desc_party_users": "Usernames for each party, ordered by party number. Can be used by scripts to SSH into machines if necessary to start piranha processes",
    "party_users": [],

    "_desc_run_unit_tests": "Run unit tests",
    "run_unit_tests": false,

    "_desc_unit_test_only": "Only run unit tests, do not try to run any full Piranha applications",
    "unit_test_only": false,

    "_desc_debug_print": "Show debug output",
    "debug_print": true,

    "_desc_debug_overflow": "Test for overflow and print related debug output. Breaks security by revealing intermediary values.",
    "debug_overflow": false,

    "_desc_debug_sqrt": "Test sqrt for invalid input and print related debug output. Breaks security by revealing intermediary values.",
    "debug_sqrt": false,

    "_desc_run_name": "Descriptive name for run, used to name log files with something useful",
    "run_name": "piranha_localhost",

    "_desc_network": "Path to NN architecture to use for the run",
    "network": "files/models/lenet-norelu.json",

    "_desc_custom_epochs": "Enable custom number of epochs. Otherwise set by size of learning rate schedule",
    "custom_epochs": true,

    "_desc_custom_epoch_count": "Number of epochs to train for, if custom_epochs is set",
    "custom_epoch_count": 10,

    "_desc_custom_iterations": "Enable custom number of iterations. Otherwise set by training dataset size.",
    "custom_iterations": false,

    "_desc_custom_iteration_count": "Number of iterators to train per epoch, if custom_iterations is set",
    "custom_iteration_count": 0,

    "_desc_custom_batch_size": "Enable custom batch size. Otherwise set by architecture configuration.",
    "custom_batch_size": true,

    "_desc_custom_batch_size_count": "Desired custom batch size",
    "custom_batch_size_count": 256,

    "_desc_nn_seed": "Seed for NN initialization",
    "nn_seed": 343934585,

    "_desc_preload": "Preload weights from a snapshot directory instead of training from scratch",
    "preload": false,

    "_desc_preload_path": "Directory path from which to preload network weights",
    "preload_path": "",

    "_desc_lr_schedule": "Learning rate schedule, in negative powers of 2 (e.g. 3 -> learning rate of 2^-3). Assumes that the number of LR exponents matches the desired number of training epochs",
    "lr_schedule": [3, 3, 3, 4, 4, 5, 6, 7, 8, 9],

    "_desc_test_only": "Only run NN test, skip training (useful if weights have been preloaded)",
    "test_only": false,

    "_desc_inference_only": "Only run inference (forward pass), not backward pass training",
    "inference_only": false,

    "_desc_no_test": "Do not run testing after training epochs",
    "no_test": false,

    "_desc_last_test": "Only run a test pass after the last training epoch",
    "last_test": true,

    "_desc_iteration_snapshots": "Take snapshots at each training iteration",
    "iteration_snapshots": false,

    "_desc_test_iteration_snapshots": "Take snapshots of a '1PC' test network running the same data",
    "test_iteration_snapshots": false,

    "_desc_epoch_snapshots": "Take snapshots after each training epoch",
    "epoch_snapshots": false,

    "_desc_eval_accuracy": "Evaluation: print training/test accuracy",
    "eval_accuracy": true,

    "_desc_eval_inference_stats": "Evaluation: print runtime and communication statistics for each inference forward pass",
    "eval_inference_stats": false,

    "_desc_eval_train_stats": "Evaluation: print runtime and communication statistics for each training forward-backward pass",
    "eval_train_stats": false,

    "_desc_eval_fw_peak_memory": "Evaluation: print peak memory usage during each forward pass",
    "eval_fw_peak_memory": false,

    "_desc_eval_bw_peak_memory": "Evaluation: print peak memory usage during each backward pass",
    "eval_bw_peak_memory": false,
    
    "_desc_eval_epoch_stats": "Evaluation: print cumulative runtime and communication statistics for each training epoch",
    "eval_epoch_stats": true,

    "_desc_print_activations": "Print output activations for each layer every forward pass",
    "print_activations": false,

    "_desc_print_deltas": "Print input gradient to each layer every backward pass",
    "print_deltas": false,
    
    "_desc_debug_all_forward": "Print debug information for all layer forward passes",
    "debug_all_forward": true,

    "_desc_debug_all_backward": "Print debug information for all layer backward passes",
    "debug_all_backward": true
}

I get the following output:

run unit tests? false
config network: "files/models/lenet-norelu.json"
network filename: files/models/lenet-norelu.json
----------------------------------------------
(1) CNN Layer             28 x 28 x 1
                          5 x 5         (Filter Size)
                          1 , 0         (Stride, padding)
                          256           (Batch Size)
                          24 x 24 x 20  (Output)
----------------------------------------------
(2) Maxpool Layer         24 x 24 x 20
                          2             (Pooling Size)
                          2             (Stride)
                          256           (Batch Size)
----------------------------------------------
(3) ReLU Layer            256 x 2880
----------------------------------------------
(4) CNN Layer             12 x 12 x 20
                          5 x 5         (Filter Size)
                          1 , 0         (Stride, padding)
                          256           (Batch Size)
                          8 x 8 x 50    (Output)
----------------------------------------------
(5) Maxpool Layer         8 x 8 x 50
                          2             (Pooling Size)
                          2             (Stride)
                          256           (Batch Size)
----------------------------------------------
(6) ReLU Layer            256 x 800
----------------------------------------------
(7) FC Layer              800 x 500
                          256            (Batch Size)
----------------------------------------------
(8) ReLU Layer            256 x 500
----------------------------------------------
(9) FC Layer              500 x 10
                          256            (Batch Size)
TRAINING, EPOCHS = 10 ITERATIONS = 234

 == Training (10 epochs) ==

 -- Epoch 0 (234 iterations, log_lr = 3) --
iteration,0
layer 0
cnn,fw activation,min,-nan,avg,-nan,max,-nan
layer 1
maxpool,fw activation,min,-nan,avg,-nan,max,-nan
layer 2
relu,fw activation,min,-nan,avg,-nan,max,-nan
layer 3
cnn,fw activation,min,-nan,avg,-nan,max,-nan
layer 4
maxpool,fw activation,min,-nan,avg,-nan,max,-nan
layer 5
relu,fw activation,min,-nan,avg,-nan,max,-nan
layer 6
fc,fw activation,min,-nan,avg,-nan,max,-nan
layer 7
relu,fw activation,min,-nan,avg,-nan,max,-nan
layer 8
fc,fw activation,min,-nan,avg,-nan,max,-nan
layer 8
fc,bw input delta,min,inf,avg,inf,max,inf
max bw dW value: -nan
max bw db value: -inf
layer 7
relu,bw input delta,min,-nan,avg,-nan,max,-nan
layer 6
fc,bw input delta,min,-nan,avg,-nan,max,-nan
max bw dW value: -nan
max bw db value: -nan
layer 5
relu,bw input delta,min,-nan,avg,-nan,max,-nan
layer 4
maxpool,bw input delta,min,-nan,avg,-nan,max,-nan
layer 3
cnn,bw input delta,min,-nan,avg,-nan,max,-nan
max bw dF value: -nan
layer 2
relu,bw input delta,min,-nan,avg,-nan,max,-nan
layer 1
maxpool,bw input delta,min,-nan,avg,-nan,max,-nan
layer 0
cnn,bw input delta,min,-nan,avg,-nan,max,-nan
max bw dF value: -nan
iteration,1
layer 0
cnn,fw activation,min,-nan,avg,-nan,max,-nan
...

Lack of randomness in shares in 2PC

Hello,

As far as I can tell, in the 2PC code, shares of Beaver triples are generated as (0, 0, 0) for both parties. Due to this (and some other code), shares of all intermediate values are of the form (X, 0). This is worrying for two reasons. First, it is insecure and may produce incorrect latency numbers. Second, all truncations are in fact exact and do not emulate the local truncation errors that occur in MPC with correct Beaver triples. Could you suggest a way to measure accuracy when the MPC suffers from errors caused by local truncation?

Thanks!
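For reference, a standalone toy sketch (not Piranha code) of one common local-truncation approach (SecureML-style share truncation over a 64-bit ring). With genuinely random shares the reconstructed result is off by one with noticeable probability, while degenerate (X, 0) shares never exhibit this error, which is exactly the discrepancy described above:

#include <cstdint>
#include <cstdio>
#include <random>

int main() {
    std::mt19937_64 rng(42);
    const unsigned f = 13;                   // fixed-point fractional bits
    const uint64_t z = (57ull << f) + 2025;  // secret value with nonzero low bits
    const int trials = 100000;
    int errors = 0;
    for (int i = 0; i < trials; i++) {
        uint64_t z0 = rng();                 // uniformly random share
        uint64_t z1 = z - z0;                // z = z0 + z1 (mod 2^64)
        uint64_t t0 = z0 >> f;               // party 0 truncates its share locally
        uint64_t t1 = 0 - ((0 - z1) >> f);   // party 1 truncates via negation
        if (t0 + t1 != (z >> f)) errors++;   // off by one, w/ prob ~(z mod 2^f)/2^f
    }
    printf("%d / %d truncations were inexact\n", errors, trials);
    // With the degenerate sharing z0 = z, z1 = 0 discussed above,
    // t0 + t1 == z >> f always, so no truncation error is ever observed.
    return 0;
}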

Timing differences in two tests: copy data from device to host

Hi @jlwatson,
When I test the timing of copying data from device to host, I get different results in the following two tests:

  1. First test, printing the uint64_t values in a DeviceData:
// util.cuh
#include <cstdio>
#include <iostream>
#include <vector>
#include <thrust/copy.h>

template<typename T, typename I>
void printDeviceDataForInteger(DeviceData<T, I> &device_data)
{
    // copy the device values back to the host before printing
    std::vector<T> host_temp(device_data.size());
    thrust::copy(device_data.begin(), device_data.end(), host_temp.begin());

    for (size_t i = 0; i < host_temp.size(); i++) {
        printf("%llu ", (unsigned long long) host_temp[i]);  // unsigned format
    }
    std::cout << std::endl;
}
// DeviceData.cu
TYPED_TEST(DeviceDataTest, IntegerDeviceData) {
    using T = typename TestFixture::ParamType;
    DeviceData<T> d1 = {1, 2, 3};
    DeviceData<T> d2 = {1, 1, 1};
    d1 += d2;
    printDeviceDataForInteger(d1);
}

The result is around 32538 ms.
  2. The second test is the original DRELU2 test in piranha:

TYPED_TEST(FuncTest, DRELU2) {

    // ... omitted ...

    //Change to <uint8_t>
    DeviceData<uint64_t> super_result(result.size());
    reconstruct(result, super_result);

    printDeviceData(super_result, "actual", false);
    assertDeviceData(super_result, expected, false);
}

while the result is around 13 ms, as shown:

[ RUN      ] FuncTest/1.DRELU2
actual:
0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 
src/test/../mpc/../gpu/../util/util.cuh:343: Failure
Expected: (fabs(host_result[i] - expected[i])) <= (epsilon), actual: 1 vs 0.001
[  FAILED  ] FuncTest/1.DRELU2, where TypeParam = TPC<unsigned long, thrust::detail::normal_iterator<thrust::device_ptr<unsigned long> > > 
(13 ms)

What's the reason for these two different timing results?
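For reference, the first CUDA call in a process pays one-time context and initialization costs. A standalone sketch (not Piranha code) of timing just the device-to-host copy with CUDA events after a warm-up pass, which keeps that one-time cost out of the measurement:

#include <cstdint>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

int main() {
    const size_t n = 1 << 20;
    uint64_t *dev;
    cudaMalloc(&dev, n * sizeof(uint64_t));
    std::vector<uint64_t> host(n);

    // warm-up: the first CUDA call absorbs context-creation cost
    cudaMemcpy(host.data(), dev, n * sizeof(uint64_t), cudaMemcpyDeviceToHost);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cudaMemcpy(host.data(), dev, n * sizeof(uint64_t), cudaMemcpyDeviceToHost);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("device-to-host copy took %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(dev);
    return 0;
}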

An error in RSS.inl

Hi,

piranha/src/mpc/RSS.inl, lines 123 to 142 at commit 8fd8219:

template<typename T, typename I>
void RSSBase<T, I>::setPublic(std::vector<double> &v) {
    std::vector<T> shifted_vals;
    for (double f : v) {
        shifted_vals.push_back((T) (f * (1 << FLOAT_PRECISION)));
    }

    switch (partyNum) {
        case PARTY_A:
            thrust::copy(shifted_vals.begin(), shifted_vals.end(), shareA->begin());
            shareB->zero();
            break;
        case PARTY_B:
            shareA->zero();
            shareB->zero();
        case PARTY_C:
            shareA->zero();
            thrust::copy(shifted_vals.begin(), shifted_vals.end(), shareB->begin());
            break;
    }
};

The PARTY_B case is missing a break, so execution falls through into the PARTY_C case, which will cause errors.
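For clarity, a sketch of the corrected switch with the missing break added:

    case PARTY_B:
        shareA->zero();
        shareB->zero();
        break;  // previously missing: without it, execution fell through
                // and PARTY_C's thrust::copy overwrote PARTY_B's shareB
    case PARTY_C:
        shareA->zero();
        thrust::copy(shifted_vals.begin(), shifted_vals.end(), shareB->begin());
        break;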

Run piranha on a single GPU machine with multiple GPUs

Hi Jean-Luc @jlwatson ,
As described, piranha runs on 2/3/4 machines in the AWS cluster.
However, I currently have only a single machine with 4 Tesla GPUs, plus some other GPU machines with different hardware configurations.
Does piranha provide a simulation for running it on a single GPU machine with multiple GPUs?
Thanks in advance!

Share conversion in Piranha

Hi, @jlwatson
For share conversions in RSS, boolean to arithmetic is implemented in the function bitexpand(DeviceData<T, I> *a, DeviceData<U, I2> *b). Does piranha implement arithmetic to boolean (A2B, decomposition)?

BTW, does piranha implement an equality test for two integers in 3PC (RSS)?

Questions on the communication size of AlexNet (on CIFAR 10) and LeNet (on MNIST) in Table 2 of the paper

We noticed that data samples in CIFAR10 have shape 32x32x3, while those in MNIST have shape 28x28x1. Moreover, AlexNet has many more parameters than LeNet. Combining these two observations, we are puzzled as to why, under the same protocol (e.g., Falcon), training LeNet on MNIST communicates 485.90 GB while training AlexNet on CIFAR10 communicates only 382.18 GB. Is there some optimization used for AlexNet?

value

When running on Colab, make fails with errors complaining about redefinitions of `value`. Part of the error output is shown below.

/usr/local/cuda/bin/nvcc -dc -Xcompiler="-O3,-w,-std=c++14,-pthread,-msse4.1,-maes,-msse2,-mpclmul,-fpermissive,-fpic,-pthread" -Xcudafe "--diag_suppress=declared_but_not_referenced" -Xcompiler="-DFLOAT_PRECISION=8 -DTWOPC" -c src/main.cu -o build/main.o -I '/usr/local/cuda-11.5/include' -I 'ext/cutlass/include' -I 'ext/cutlass/tools/util/include' -I 'include'
/usr/local/cuda/bin/nvcc -dc -Xcompiler="-O3,-w,-std=c++14,-pthread,-msse4.1,-maes,-msse2,-mpclmul,-fpermissive,-fpic,-pthread" -Xcudafe "--diag_suppress=declared_but_not_referenced" -Xcompiler="-DFLOAT_PRECISION=8 -DTWOPC" -c src/nn/AveragepoolLayer.cu -o build/AveragepoolLayer.o -I '/usr/local/cuda-11.5/include' -I 'ext/cutlass/include' -I 'ext/cutlass/tools/util/include' -I 'include'
/usr/local/cuda/bin/nvcc -dc -Xcompiler="-O3,-w,-std=c++14,-pthread,-msse4.1,-maes,-msse2,-mpclmul,-fpermissive,-fpic,-pthread" -Xcudafe "--diag_suppress=declared_but_not_referenced" -Xcompiler="-DFLOAT_PRECISION=8 -DTWOPC" -c src/nn/CNNLayer.cu -o build/CNNLayer.o -I '/usr/local/cuda-11.5/include' -I 'ext/cutlass/include' -I 'ext/cutlass/tools/util/include' -I 'include'
/usr/local/cuda/bin/nvcc -dc -Xcompiler="-O3,-w,-std=c++14,-pthread,-msse4.1,-maes,-msse2,-mpclmul,-fpermissive,-fpic,-pthread" -Xcudafe "--diag_suppress=declared_but_not_referenced" -Xcompiler="-DFLOAT_PRECISION=8 -DTWOPC" -c src/nn/FCLayer.cu -o build/FCLayer.o -I '/usr/local/cuda-11.5/include' -I 'ext/cutlass/include' -I 'ext/cutlass/tools/util/include' -I 'include'
/usr/local/cuda/bin/nvcc -dc -Xcompiler="-O3,-w,-std=c++14,-pthread,-msse4.1,-maes,-msse2,-mpclmul,-fpermissive,-fpic,-pthread" -Xcudafe "--diag_suppress=declared_but_not_referenced" -Xcompiler="-DFLOAT_PRECISION=8 -DTWOPC" -c src/nn/LNLayer.cu -o build/LNLayer.o -I '/usr/local/cuda-11.5/include' -I 'ext/cutlass/include' -I 'ext/cutlass/tools/util/include' -I 'include'
/usr/local/cuda/bin/nvcc -dc -Xcompiler="-O3,-w,-std=c++14,-pthread,-msse4.1,-maes,-msse2,-mpclmul,-fpermissive,-fpic,-pthread" -Xcudafe "--diag_suppress=declared_but_not_referenced" -Xcompiler="-DFLOAT_PRECISION=8 -DTWOPC" -c src/nn/MaxpoolLayer.cu -o build/MaxpoolLayer.o -I '/usr/local/cuda-11.5/include' -I 'ext/cutlass/include' -I 'ext/cutlass/tools/util/include' -I 'include'
/usr/local/cuda/bin/nvcc -dc -Xcompiler="-O3,-w,-std=c++14,-pthread,-msse4.1,-maes,-msse2,-mpclmul,-fpermissive,-fpic,-pthread" -Xcudafe "--diag_suppress=declared_but_not_referenced" -Xcompiler="-DFLOAT_PRECISION=8 -DTWOPC" -c src/nn/NeuralNetwork.cu -o build/NeuralNetwork.o -I '/usr/local/cuda-11.5/include' -I 'ext/cutlass/include' -I 'ext/cutlass/tools/util/include' -I 'include'
/usr/local/cuda/bin/nvcc -dc -Xcompiler="-O3,-w,-std=c++14,-pthread,-msse4.1,-maes,-msse2,-mpclmul,-fpermissive,-fpic,-pthread" -Xcudafe "--diag_suppress=declared_but_not_referenced" -Xcompiler="-DFLOAT_PRECISION=8 -DTWOPC" -c src/nn/ReLULayer.cu -o build/ReLULayer.o -I '/usr/local/cuda-11.5/include' -I 'ext/cutlass/include' -I 'ext/cutlass/tools/util/include' -I 'include'
/usr/include/c++/11/type_traits:79:52: error: redefinition of ‘constexpr const _Tp std::integral_constant<_Tp, __v>::value’
79 | template<typename _Tp, _Tp __v>
| ^
/usr/include/c++/11/type_traits:67:29: note: ‘constexpr const _Tp value’ previously declared here
67 | static constexpr _Tp value = __v;
| ^~~~~
include/json.hpp:3176:39: error: redefinition of ‘constexpr const T nlohmann::detail::static_const::value’
3176 | template
| ^
include/json.hpp:3173:27: note: ‘constexpr const T nlohmann::detail::static_const::value’ previously declared here
3173 | static constexpr T value{};
| ^~~~~
/usr/include/c++/11/ratio:282:67: error: redefinition of ‘constexpr const intmax_t std::ratio<_Num, _Den>::num’
282 | template<intmax_t _Num, intmax_t _Den>
| ^
/usr/include/c++/11/ratio:273:34: note: ‘constexpr const intmax_t std::ratio<_Num, _Den>::num’ previously declared here
273 | static constexpr intmax_t num =
| ^~~
/usr/include/c++/11/ratio:285:67: error: redefinition of ‘constexpr const intmax_t std::ratio<_Num, _Den>::den’
285 | template<intmax_t _Num, intmax_t _Den>
| ^
/usr/include/c++/11/ratio:276:34: note: ‘constexpr const intmax_t std::ratio<_Num, _Den>::den’ previously declared here
276 | static constexpr intmax_t den =
| ^~~
/usr/include/c++/11/ratio:310:59: error: redefinition of ‘constexpr const intmax_t std::__ratio_multiply<_R1, _R2>::num’
310 | template<typename _R1, typename _R2>
| ^
/usr/include/c++/11/ratio:306:34: note: ‘constexpr const intmax_t std::__ratio_multiply<_R1, _R2>::num’ previously declared here
306 | static constexpr intmax_t num = type::num;
| ^~~
/usr/include/c++/11/ratio:313:59: error: redefinition of ‘constexpr const intmax_t std::__ratio_multiply<_R1, _R2>::den’
313 | template<typename _R1, typename _R2>
| ^
/usr/include/c++/11/ratio:307:34: note: ‘constexpr const intmax_t std::__ratio_multiply<_R1, _R2>::den’ previously declared here
307 | static constexpr intmax_t den = type::den;
| ^~~
/usr/include/c++/11/ratio:337:59: error: redefinition of ‘constexpr const intmax_t std::__ratio_divide<_R1, _R2>::num’
337 | template<typename _R1, typename _R2>
| ^
/usr/include/c++/11/ratio:333:34: note: ‘constexpr const intmax_t std::__ratio_divide<_R1, _R2>::num’ previously declared here
333 | static constexpr intmax_t num = type::num;
| ^~~
/usr/include/c++/11/ratio:340:59: error: redefinition of ‘constexpr const intmax_t std::__ratio_divide<_R1, _R2>::den’
340 | template<typename _R1, typename _R2>
| ^
/usr/include/c++/11/ratio:334:34: note: ‘constexpr const intmax_t std::__ratio_divide<_R1, _R2>::den’ previously declared here
334 | static constexpr intmax_t den = type::den;
| ^~~
/usr/include/c++/11/ratio:515:59: error: redefinition of ‘constexpr const intmax_t std::__ratio_add<_R1, _R2>::num’
515 | template<typename _R1, typename _R2>
| ^
/usr/include/c++/11/ratio:511:34: note: ‘constexpr const intmax_t std::__ratio_add<_R1, _R2>::num’ previously declared here
511 | static constexpr intmax_t num = type::num;
| ^~~
/usr/include/c++/11/ratio:518:59: error: redefinition of ‘constexpr const intmax_t std::__ratio_add<_R1, _R2>::den’
518 | template<typename _R1, typename _R2>
| ^
/usr/include/c++/11/ratio:512:34: note: ‘constexpr const intmax_t std::__ratio_add<_R1, _R2>::den’ previously declared here
512 | static constexpr intmax_t den = type::den;
| ^~~
/usr/include/c++/11/ratio:540:59: error: redefinition of ‘constexpr const intmax_t std::__ratio_subtract<_R1, _R2>::num’
540 | template<typename _R1, typename _R2>
| ^
/usr/include/c++/11/ratio:536:34: note: ‘constexpr const intmax_t std::__ratio_subtract<_R1, _R2>::num’ previously declared here
536 | static constexpr intmax_t num = type::num;
| ^~~
/usr/include/c++/11/ratio:543:59: error: redefinition of ‘constexpr const intmax_t std::__ratio_subtract<_R1, _R2>::den’
543 | template<typename _R1, typename _R2>
| ^
/usr/include/c++/11/ratio:537:34: note: ‘constexpr const intmax_t std::__ratio_subtract<_R1, _R2>::den’ previously declared here
537 | static constexpr intmax_t den = type::den;
| ^~~
include/gtest/internal/gtest-internal.h:904:42: error: redefinition of ‘constexpr const bool testing::internal::HasDebugStringAndShortDebugString::value’
904 | template
| ^
include/gtest/internal/gtest-internal.h:899:38: note: ‘constexpr const bool testing::internal::HasDebugStringAndShortDebugString::value’ previously declared here
899 | static constexpr bool value =
| ^
make: *** [Makefile:46: build/AveragepoolLayer.o] Error 1
make: *** Waiting for unfinished jobs....
make: *** [Makefile:46: build/LNLayer.o] Error 1
/usr/include/c++/11/bits/random.tcc:94:100: error: redefinition of ‘constexpr const _UIntType std::linear_congruential_engine<_UIntType, __a, __c, __m>::multiplier’
94 | template<typename _UIntType, _UIntType __a, _UIntType __c, _UIntType __m>
| ^
/usr/include/c++/11/bits/random.h:271:37: note: ‘constexpr const result_type multiplier’ previously declared here
271 | static constexpr result_type multiplier = __a;
| ^~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:98:100: error: redefinition of ‘constexpr const _UIntType std::linear_congruential_engine<_UIntType, __a, __c, __m>::increment’
98 | template<typename _UIntType, _UIntType __a, _UIntType __c, _UIntType __m>
| ^
/usr/include/c++/11/bits/random.h:273:37: note: ‘constexpr const result_type increment’ previously declared here
273 | static constexpr result_type increment = __c;
| ^~~~~~~~~
/usr/include/c++/11/bits/random.tcc:102:100: error: redefinition of ‘constexpr const _UIntType std::linear_congruential_engine<_UIntType, __a, __c, __m>::modulus’
102 | template<typename _UIntType, _UIntType __a, _UIntType __c, _UIntType __m>
| ^
/usr/include/c++/11/bits/random.h:275:37: note: ‘constexpr const result_type modulus’ previously declared here
275 | static constexpr result_type modulus = __m;
| ^~~~~~~
/usr/include/c++/11/bits/random.tcc:106:100: error: redefinition of ‘constexpr const _UIntType std::linear_congruential_engine<_UIntType, __a, __c, __m>::default_seed’
106 | template<typename _UIntType, _UIntType __a, _UIntType __c, _UIntType __m>
| ^
/usr/include/c++/11/bits/random.h:276:37: note: ‘constexpr const result_type std::linear_congruential_engine<_UIntType, __a, __c, __m>::default_seed’ previously declared here
276 | static constexpr result_type default_seed = 1u;
| ^~~~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:190:1: error: redefinition of ‘constexpr const size_t std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::word_size’
190 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:510:32: note: ‘constexpr const size_t word_size’ previously declared here
510 | static constexpr size_t word_size = __w;
| ^~~~~~~~~
/usr/include/c++/11/bits/random.tcc:199:1: error: redefinition of ‘constexpr const size_t std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::state_size’
199 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:511:32: note: ‘constexpr const size_t state_size’ previously declared here
511 | static constexpr size_t state_size = __n;
| ^~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:208:1: error: redefinition of ‘constexpr const size_t std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::shift_size’
208 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:512:32: note: ‘constexpr const size_t shift_size’ previously declared here
512 | static constexpr size_t shift_size = __m;
| ^~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:217:1: error: redefinition of ‘constexpr const size_t std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::mask_bits’
217 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:513:32: note: ‘constexpr const size_t mask_bits’ previously declared here
513 | static constexpr size_t mask_bits = __r;
| ^~~~~~~~~
/usr/include/c++/11/bits/random.tcc:226:1: error: redefinition of ‘constexpr const _UIntType std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::xor_mask’
226 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:514:37: note: ‘constexpr const result_type xor_mask’ previously declared here
514 | static constexpr result_type xor_mask = __a;
| ^~~~~~~~
/usr/include/c++/11/bits/random.tcc:235:1: error: redefinition of ‘constexpr const size_t std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::tempering_u’
235 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:515:32: note: ‘constexpr const size_t tempering_u’ previously declared here
515 | static constexpr size_t tempering_u = __u;
| ^~~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:244:1: error: redefinition of ‘constexpr const _UIntType std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::tempering_d’
244 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:516:37: note: ‘constexpr const result_type tempering_d’ previously declared here
516 | static constexpr result_type tempering_d = __d;
| ^~~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:253:1: error: redefinition of ‘constexpr const size_t std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::tempering_s’
253 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:517:32: note: ‘constexpr const size_t tempering_s’ previously declared here
517 | static constexpr size_t tempering_s = __s;
| ^~~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:262:1: error: redefinition of ‘constexpr const _UIntType std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::tempering_b’
262 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:518:37: note: ‘constexpr const result_type tempering_b’ previously declared here
518 | static constexpr result_type tempering_b = __b;
| ^~~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:271:1: error: redefinition of ‘constexpr const size_t std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::tempering_t’
271 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:519:32: note: ‘constexpr const size_t tempering_t’ previously declared here
519 | static constexpr size_t tempering_t = __t;
| ^~~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:280:1: error: redefinition of ‘constexpr const _UIntType std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::tempering_c’
280 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:520:37: note: ‘constexpr const result_type tempering_c’ previously declared here
520 | static constexpr result_type tempering_c = __c;
| ^~~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:289:1: error: redefinition of ‘constexpr const size_t std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::tempering_l’
289 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:521:32: note: ‘constexpr const size_t tempering_l’ previously declared here
521 | static constexpr size_t tempering_l = __l;
| ^~~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:298:1: error: redefinition of ‘constexpr const _UIntType std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::initialization_multiplier’
298 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:522:37: note: ‘constexpr const result_type initialization_multiplier’ previously declared here
522 | static constexpr result_type initialization_multiplier = __f;
| ^~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:308:1: error: redefinition of ‘constexpr const _UIntType std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::default_seed’
308 | template<typename _UIntType,
| ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/11/bits/random.h:523:37: note: ‘constexpr const result_type std::mersenne_twister_engine<_UIntType, __w, __n, __m, __r, __a, __u, __d, __s, __b, __t, __c, __l, __f>::default_seed’ previously declared here
523 | static constexpr result_type default_seed = 5489u;
| ^~~~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:519:88: error: redefinition of ‘constexpr const size_t std::subtract_with_carry_engine<_UIntType, __w, __s, __r>::word_size’
519 | template<typename _UIntType, size_t __w, size_t __s, size_t __r>
| ^
/usr/include/c++/11/bits/random.h:710:32: note: ‘constexpr const size_t word_size’ previously declared here
710 | static constexpr size_t word_size = __w;
| ^~~~~~~~~
/usr/include/c++/11/bits/random.tcc:523:88: error: redefinition of ‘constexpr const size_t std::subtract_with_carry_engine<_UIntType, __w, __s, __r>::short_lag’
523 | template<typename _UIntType, size_t __w, size_t __s, size_t __r>
| ^
/usr/include/c++/11/bits/random.h:711:32: note: ‘constexpr const size_t short_lag’ previously declared here
711 | static constexpr size_t short_lag = __s;
| ^~~~~~~~~
/usr/include/c++/11/bits/random.tcc:527:88: error: redefinition of ‘constexpr const size_t std::subtract_with_carry_engine<_UIntType, __w, __s, __r>::long_lag’
527 | template<typename _UIntType, size_t __w, size_t __s, size_t __r>
| ^
/usr/include/c++/11/bits/random.h:712:32: note: ‘constexpr const size_t long_lag’ previously declared here
712 | static constexpr size_t long_lag = __r;
| ^~~~~~~~
/usr/include/c++/11/bits/random.tcc:531:91: error: redefinition of ‘constexpr const _UIntType std::subtract_with_carry_engine<_UIntType, __w, __s, __r>::default_seed’
531 | template<typename _UIntType, size_t __w, size_t __s, size_t __r>
| ^
/usr/include/c++/11/bits/random.h:713:37: note: ‘constexpr const result_type std::subtract_with_carry_engine<_UIntType, __w, __s, __r>::default_seed’ previously declared here
713 | static constexpr result_type default_seed = 19780503u;
| ^~~~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:670:86: error: redefinition of ‘constexpr const size_t std::discard_block_engine<_RandomNumberEngine, __p, __r>::block_size’
670 | template<typename _RandomNumberEngine, size_t __p, size_t __r>
| ^
/usr/include/c++/11/bits/random.h:898:32: note: ‘constexpr const size_t block_size’ previously declared here
898 | static constexpr size_t block_size = __p;
| ^~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:674:86: error: redefinition of ‘constexpr const size_t std::discard_block_engine<_RandomNumberEngine, __p, __r>::used_block’
674 | template<typename _RandomNumberEngine, size_t __p, size_t __r>
| ^
/usr/include/c++/11/bits/random.h:899:32: note: ‘constexpr const size_t used_block’ previously declared here
899 | static constexpr size_t used_block = __r;
| ^~~~~~~~~~
/usr/include/c++/11/bits/random.tcc:803:74: error: redefinition of ‘constexpr const size_t std::shuffle_order_engine<_RandomNumberEngine, __k>::table_size’
803 | template<typename _RandomNumberEngine, size_t __k>
| ^
/usr/include/c++/11/bits/random.h:1339:32: note: ‘constexpr const size_t table_size’ previously declared here
1339 | static constexpr size_t table_size = __k;
| ^~~~~~~~~~
/usr/include/c++/11/bits/regex.h:806:86: error: redefinition of ‘constexpr const std::regex_constants::syntax_option_type std::__cxx11::basic_regex< , >::icase’
806 | template<typename _Ch, typename _Tr>
| ^
/usr/include/c++/11/bits/regex.h:417:35: note: ‘constexpr const flag_type std::__cxx11::basic_regex< , >::icase’ previously declared here
417 | static constexpr flag_type icase = regex_constants::icase;
| ^~~~~
/usr/include/c++/11/bits/regex.h:810:86: error: redefinition of ‘constexpr const std::regex_constants::syntax_option_type std::__cxx11::basic_regex< , >::nosubs’
810 | template<typename _Ch, typename _Tr>
| ^
/usr/include/c++/11/bits/regex.h:418:35: note: ‘constexpr const flag_type std::__cxx11::basic_regex<_Ch, _Tr>::nosubs’ previously declared here
  418 |       static constexpr flag_type nosubs = regex_constants::nosubs;
      |                                   ^~~~~~
/usr/include/c++/11/bits/regex.h:814:86: error: redefinition of ‘constexpr const std::regex_constants::syntax_option_type std::__cxx11::basic_regex<_Ch, _Tr>::optimize’
  814 |     template<typename _Ch, typename _Tr>
      |                                                                                      ^
/usr/include/c++/11/bits/regex.h:419:35: note: ‘constexpr const flag_type std::__cxx11::basic_regex<_Ch, _Tr>::optimize’ previously declared here
  419 |       static constexpr flag_type optimize = regex_constants::optimize;
      |                                   ^~~~~~~~
/usr/include/c++/11/bits/regex.h:818:86: error: redefinition of ‘constexpr const std::regex_constants::syntax_option_type std::__cxx11::basic_regex<_Ch, _Tr>::collate’
  818 |     template<typename _Ch, typename _Tr>
      |                                                                                      ^
/usr/include/c++/11/bits/regex.h:420:35: note: ‘constexpr const flag_type std::__cxx11::basic_regex<_Ch, _Tr>::collate’ previously declared here
  420 |       static constexpr flag_type collate = regex_constants::collate;
      |                                   ^~~~~~~
/usr/include/c++/11/bits/regex.h:822:86: error: redefinition of ‘constexpr const std::regex_constants::syntax_option_type std::__cxx11::basic_regex<_Ch, _Tr>::ECMAScript’
  822 |     template<typename _Ch, typename _Tr>
      |                                                                                      ^
/usr/include/c++/11/bits/regex.h:421:35: note: ‘constexpr const flag_type std::__cxx11::basic_regex<_Ch, _Tr>::ECMAScript’ previously declared here
  421 |       static constexpr flag_type ECMAScript = regex_constants::ECMAScript;
      |                                   ^~~~~~~~~~
/usr/include/c++/11/bits/regex.h:826:86: error: redefinition of ‘constexpr const std::regex_constants::syntax_option_type std::__cxx11::basic_regex<_Ch, _Tr>::basic’
  826 |     template<typename _Ch, typename _Tr>
      |                                                                                      ^
/usr/include/c++/11/bits/regex.h:422:35: note: ‘constexpr const flag_type std::__cxx11::basic_regex<_Ch, _Tr>::basic’ previously declared here
  422 |       static constexpr flag_type basic = regex_constants::basic;
      |                                   ^~~~~
/usr/include/c++/11/bits/regex.h:830:86: error: redefinition of ‘constexpr const std::regex_constants::syntax_option_type std::__cxx11::basic_regex<_Ch, _Tr>::extended’
  830 |     template<typename _Ch, typename _Tr>
      |                                                                                      ^
/usr/include/c++/11/bits/regex.h:423:35: note: ‘constexpr const flag_type std::__cxx11::basic_regex<_Ch, _Tr>::extended’ previously declared here
  423 |       static constexpr flag_type extended = regex_constants::extended;
      |                                   ^~~~~~~~
/usr/include/c++/11/bits/regex.h:834:86: error: redefinition of ‘constexpr const std::regex_constants::syntax_option_type std::__cxx11::basic_regex<_Ch, _Tr>::awk’
  834 |     template<typename _Ch, typename _Tr>
      |                                                                                      ^
/usr/include/c++/11/bits/regex.h:424:35: note: ‘constexpr const flag_type std::__cxx11::basic_regex<_Ch, _Tr>::awk’ previously declared here
  424 |       static constexpr flag_type awk = regex_constants::awk;
      |                                   ^~~
/usr/include/c++/11/bits/regex.h:838:86: error: redefinition of ‘constexpr const std::regex_constants::syntax_option_type std::__cxx11::basic_regex<_Ch, _Tr>::grep’
  838 |     template<typename _Ch, typename _Tr>
      |                                                                                      ^
/usr/include/c++/11/bits/regex.h:425:35: note: ‘constexpr const flag_type std::__cxx11::basic_regex<_Ch, _Tr>::grep’ previously declared here
  425 |       static constexpr flag_type grep = regex_constants::grep;
      |                                   ^~~~
/usr/include/c++/11/bits/regex.h:842:86: error: redefinition of ‘constexpr const std::regex_constants::syntax_option_type std::__cxx11::basic_regex<_Ch, _Tr>::egrep’
  842 |     template<typename _Ch, typename _Tr>
      |                                                                                      ^
/usr/include/c++/11/bits/regex.h:426:35: note: ‘constexpr const flag_type std::__cxx11::basic_regex<_Ch, _Tr>::egrep’ previously declared here
  426 |       static constexpr flag_type egrep = regex_constants::egrep;
      |                                   ^~~~~
include/json.hpp:3176:39: error: redefinition of ‘constexpr const T nlohmann::detail::static_const<T>::value’
 3176 | template<typename T>
      |                                       ^
include/json.hpp:3173:27: note: ‘constexpr const T nlohmann::detail::static_const<T>::value’ previously declared here
 3173 |         static constexpr T value{};
      |                            ^~~~~
/usr/include/c++/11/ratio:282:67: error: redefinition of ‘constexpr const intmax_t std::ratio<_Num, _Den>::num’
  282 |     template<intmax_t _Num, intmax_t _Den>
      |                                                                   ^
/usr/include/c++/11/ratio:273:34: note: ‘constexpr const intmax_t std::ratio<_Num, _Den>::num’ previously declared here
  273 |       static constexpr intmax_t num =
      |                                  ^~~
/usr/include/c++/11/ratio:285:67: error: redefinition of ‘constexpr const intmax_t std::ratio<_Num, _Den>::den’
  285 |     template<intmax_t _Num, intmax_t _Den>
      |                                                                   ^
/usr/include/c++/11/ratio:276:34: note: ‘constexpr const intmax_t std::ratio<_Num, _Den>::den’ previously declared here
  276 |       static constexpr intmax_t den =
      |                                  ^~~
/usr/include/c++/11/ratio:310:59: error: redefinition of ‘constexpr const intmax_t std::__ratio_multiply<_R1, _R2>::num’
  310 |     template<typename _R1, typename _R2>
      |                                                           ^
/usr/include/c++/11/ratio:306:34: note: ‘constexpr const intmax_t std::__ratio_multiply<_R1, _R2>::num’ previously declared here
  306 |       static constexpr intmax_t num = type::num;
      |                                  ^~~
/usr/include/c++/11/ratio:313:59: error: redefinition of ‘constexpr const intmax_t std::__ratio_multiply<_R1, _R2>::den’
  313 |     template<typename _R1, typename _R2>
      |                                                           ^
/usr/include/c++/11/ratio:307:34: note: ‘constexpr const intmax_t std::__ratio_multiply<_R1, _R2>::den’ previously declared here
  307 |       static constexpr intmax_t den = type::den;
      |                                  ^~~

If anyone knows how to resolve this, please help.
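
All of these errors share one pattern: before C++17, a static constexpr data member required a separate out-of-class definition, while since C++17 the in-class declaration is itself an inline definition. A front end that still treats the out-of-class line as a fresh definition (which some nvcc versions appear to do when consuming GCC 11 headers) reports a redefinition. A standalone sketch of the pattern, not Piranha or libstdc++ code:

// Minimal sketch of the pattern behind the redefinition errors above.
struct Flags {
    static constexpr int nosubs = 1;   // C++17: already an inline definition
};

// Pre-C++17 style out-of-class definition, as found in older headers.
// A front end that treats this as a second definition under C++17
// semantics reports: redefinition of 'constexpr const int Flags::nosubs'.
constexpr int Flags::nosubs;

int main() { return Flags::nosubs; }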

Implementation of Rabbit (Makri et al)

The Piranha USENIX paper states that the comparison protocol of P-Falcon is Rabbit (Makri et al.). I'm quite interested in it, but I'm unable to find the code that implements Rabbit, including the LTBits protocol that evaluates a PrefixOr circuit. Could you kindly point me to the function(s) that implement it?
Thanks!
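
For reference while searching, the plaintext logic behind LTBits is short: XOR the bit decomposition of the secret value with that of the public value, run a prefix-OR from the most significant bit down to find the first differing position, and x < r exactly when r holds a 1 there. A plaintext sketch of that logic (illustrative only; the name ltbits is made up, and the real protocol runs these steps over secret shares):

#include <cstdint>
#include <vector>

// Plaintext reference of the LTBits idea from Rabbit (Makri et al.).
bool ltbits(uint64_t x, uint64_t r, int k = 64) {
    std::vector<int> diff(k), pre(k);
    for (int i = 0; i < k; ++i)   // XOR the bit decompositions, MSB first
        diff[i] = static_cast<int>(((x >> (k - 1 - i)) ^ (r >> (k - 1 - i))) & 1);
    for (int i = 0, running = 0; i < k; ++i) {   // prefix-OR over the XOR bits
        running |= diff[i];
        pre[i] = running;
    }
    for (int i = 0; i < k; ++i) {
        // the first 0 -> 1 step in the prefix-OR marks the top differing bit;
        // x < r iff r has a 1 at that position
        int first = pre[i] - (i > 0 ? pre[i - 1] : 0);
        if (first && ((r >> (k - 1 - i)) & 1))
            return true;
    }
    return false;   // all bits equal: not less-than
}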

Some documents may need to be updated

Hi @jlwatson
I tried to build piranha in the Docker container and run it locally on 3 GPUs.

  1. In the Makefile, CUDA_VERSION=11.5, while the Dockerfile installs version 11.6; these should be kept in sync (I just changed the version in the Makefile to avoid a make error).

  2. After solving several configuration issues, I can now run piranha locally. I think some commands in the Dockerfile should be updated as well (e.g., to download the datasets for training).
    Here is the running log:

root@a5cb7eceb1f6:/piranha# ./localhost_runner.sh 
run unit tests? false
config network: "files/models/secureml-norelu.json"
network filename: files/models/secureml-norelu.json
----------------------------------------------
(1) FC Layer		  784 x 128
			  512		 (Batch Size)
----------------------------------------------
(2) ReLU Layer		  512 x 128
----------------------------------------------
(3) FC Layer		  128 x 128
			  512		 (Batch Size)
----------------------------------------------
(4) ReLU Layer		  512 x 128
----------------------------------------------
(5) FC Layer		  128 x 10
			  512		 (Batch Size)
Error opening training data file at files/MNIST/train_data
Error opening training label file at files/MNIST/train_labels
Error opening test data file at files/MNIST/test_data
Error opening test label file at files/MNIST/test_label
TRAINING, EPOCHS = 10 ITERATIONS = 117
epoch,0
total time (s),63.125727
total tx comm (MB),2339.085938
total rx comm (MB),2339.085938
train accuracy,0.000000
epoch,1
total time (s),126.424196
total tx comm (MB),4678.171875
total rx comm (MB),4678.171875
train accuracy,0.000000

Here are some logs in my machine if someone wants to check:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
+-------------------------------+----------------------+----------------------+

|   1  Tesla V100-DGXS...  On   | 00000000:08:00.0 Off |                    0 |
| N/A   38C    P0    53W / 300W |    371MiB / 32768MiB |     19%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-DGXS...  On   | 00000000:0E:00.0 Off |                    0 |
| N/A   38C    P0    52W / 300W |    347MiB / 32768MiB |     18%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-DGXS...  On   | 00000000:0F:00.0 Off |                    0 |
| N/A   39C    P0    54W / 300W |    339MiB / 32768MiB |     19%      Default |
+-------------------------------+----------------------+----------------------+

| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                 GPU Memory  |
|=============================================================================|
|    1   N/A  N/A      9249      C   ./piranha                        335MiB  |
|    2   N/A  N/A      9250      C   ./piranha                        335MiB  |
|    3   N/A  N/A      9252      C   ./piranha                        335MiB  |
+-----------------------------------------------------------------------------+

Unit test case failure

Hi there!

I am currently trying to run piranha, but it fails with a segfault on the unit test case FuncTest/2.Reconstruct. Because of the segfault, nvprof doesn't work either. I have four machines, and running locally on each of them gives the same result (running with three parties does as well).

Is this failure expected?

Does Piranha support malicious protocols?

Hi, I read that your platform is protocol-agnostic, but the paper mentions that you implement and evaluate only semi-honest protocols. Does your platform also support maliciously secure protocols? Thanks!

Standalone version of integer kernels

Hi, I would like to better understand the tradeoffs between using built-in floating-point kernels, as in CryptGPU, and using Piranha's custom integer kernels. I know the paper compares the two for regular matrix multiplications; I want to benchmark the performance and memory overhead of both approaches on convolutions and matrix multiplications at the sizes that appear in popular neural networks.

I therefore wondered whether there is a standalone version of the integer kernels, or how best to extract one. That way I could call the kernels from PyTorch for a fair comparison against the built-in implementations, independent of platform and protocol.

In the Piranha code, the files conv.cuh and convolution.cuh are templated and have multiple dependencies, so the task is tricky. If you have a standalone version to share, that would be superb; otherwise, I would be grateful for any hints on how to extract the integer kernels and make them accessible to PyTorch or other frameworks.
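
For a first-order benchmark, a standalone integer matmul kernel is small enough to write directly. A minimal CUDA sketch (illustrative only; Piranha's actual kernels are CUTLASS-based and tiled through shared memory, so this is a lower bound on structure, not a substitute):

#include <cstdint>

// Naive 64-bit integer matmul: C = A * B with A (M x K), B (K x N).
// Arithmetic wraps mod 2^64, matching the fixed-point rings used for
// secret sharing.
__global__ void gemm_u64(const uint64_t* A, const uint64_t* B, uint64_t* C,
                         int M, int N, int K) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= M || col >= N) return;
    uint64_t acc = 0;
    for (int k = 0; k < K; ++k)
        acc += A[row * K + k] * B[k * N + col];
    C[row * N + col] = acc;
}

// launch sketch: dim3 block(16, 16), grid((N + 15) / 16, (M + 15) / 16);
// gemm_u64<<<grid, block>>>(dA, dB, dC, M, N, K);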
