pomerol-ed / pomerol

Exact diagonalization, Lehmann's representation, Two-particle Green's functions

Home Page: http://pomerol-ed.github.io/pomerol/

License: Mozilla Public License 2.0

Languages: C++ 97.97%, Python 0.12%, CMake 1.80%, Shell 0.11%
Topics: exact-diagonalization, hubbard, greens-functions, pomerol, quantum, condensed-matter, physics, c-plus-plus

pomerol's Introduction


pomerol is an exact diagonalization (full ED) code written in C++, aimed at solving condensed-matter second-quantized models of interacting fermions and bosons on finite-size lattices at finite temperatures. It is designed to compute thermal expectation values of observables, single- and two-particle Green's functions, as well as susceptibilities.

Features

  • High performance exact calculation of Green's functions, two-particle Green's functions and susceptibilities in Matsubara domain.
  • Many-body Hamiltonians can be specified in a natural mathematical form using libcommute's Domain-Specific Language. Hamiltonian presets for commonly used lattice models are also available.
  • Automatic symmetry analysis of the many-body Hamiltonians drastically reduces computational costs.
  • Eigen 3 template library is used for numerical linear algebra.
  • MPI + OpenMP support.

Installation

From source

  • Check the dependencies:

    • A C++11 conformant compiler
    • CMake >= 3.11.0
    • Boost >= 1.54.0 (only headers are required)
    • Eigen >= 3.1.0
    • libcommute >= 0.7.2 (will be downloaded by CMake if not found installed locally)
    • An MPI 3.0 implementation
    • Git to fetch the sources
  • Download the latest sources:

    git clone https://github.com/pomerol-ed/pomerol.git
    
  • Create a (temporary) build directory and change to it:

    mkdir build && cd build
    
  • In this build directory, run

    cmake <path_to_pomerol_sources> -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=<installation_path>
    
    • CMake tries to find an installation of libcommute, whose location can be specified by adding -Dlibcommute_DIR=<libcommute_installation_path>/lib/cmake/libcommute to the command line. If this attempt fails, it will download an appropriate version of libcommute and co-install it alongside pomerol.
    • Add -DTesting=OFF to disable compilation of unit tests (not recommended).
    • Add -DProgs=ON to compile provided executables (from progs directory). Some of the executables depend on the gftools library, which will be automatically downloaded in case it cannot be found by CMake (use -Dgftools_DIR to specify its installation path). gftools supports saving to HDF5 through ALPSCore.
    • Add -DDocumentation=OFF to disable generation of reference documentation.
    • Add -DUSE_OPENMP=OFF to disable OpenMP optimization for two-particle GF calculation.
    • Add -DBUILD_SHARED_LIBS=OFF to compile static instead of shared libraries.
  • make

  • make test (if unit tests are compiled)

  • make install

  • make doc generates the Doxygen reference documentation in the doc/html subdirectory.

This builds the library, libpomerol, which can be linked into your own executables. Some working executables are provided in the progs subdirectory.
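To consume the installed library from your own project, a minimal downstream CMakeLists.txt might look like the following (a sketch: the myprog target name is a placeholder, and the exact pomerol_DIR path may differ between installations):

```cmake
cmake_minimum_required(VERSION 3.11)
project(myprog CXX)

# If pomerol is installed in a non-standard prefix, point CMake at its
# package configuration, e.g.
#   cmake .. -Dpomerol_DIR=<installation_path>/share/pomerol
find_package(pomerol REQUIRED)

add_executable(myprog main.cpp)
# The exported pomerol::pomerol target carries the include directories and
# transitive dependencies (Eigen, MPI, libcommute) along with the library.
target_link_libraries(myprog PRIVATE pomerol::pomerol)
```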

⚠️ It has been reported that some MPICH-based MPI implementations, such as HPE Cray MPI, may not properly support the MPI_CXX_* datatypes on which pomerol's code depends. If you see failing MPI unit tests when linking against such MPI libraries, try the CMake option -DUse_MPI_C_datatypes=ON.

Interfacing with your own code and other libraries

Check the tutorial directory for an example of a pomerol-based code that is linked to external libraries.

The interface to TRIQS library is readily available: https://github.com/pomerol-ed/pomerol2triqs.

Documentation

Check https://pomerol-ed.github.io/pomerol/html/ or run make doc during the build to generate the reference documentation.

CHANGELOG.md lists main changes introduced in each release.

License

This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.

Academic usage

Please attribute this work by citing http://dx.doi.org/10.5281/zenodo.17900.

Authors & Contributors

  • Andrey Antipov <Andrey.E.Antipov\at\gmail.com>
  • Igor Krivenko <igor.s.krivenko\at\gmail.com>
  • Mikhail Alejnikov
  • Alexey Rubtsov
  • Christoph Jung
  • Aljoscha Wilhelm
  • Junya Otsuki
  • Sergei Iskakov
  • Hiroshi Shinaoka
  • Nils Wentzell
  • Hugo U.R. Strand
  • Dominik Kiese

Development/Help

Please feel free to contact us and to contribute!

pomerol's People

Contributors

aeantipov, hmenke, hugostrand, iskakoff, j-otsuki, krivenko, shinaoka, wentzell


pomerol's Issues

Extend Symmetrizer to support arbitrary integrals of motion

Copied from pomerol-ed/pomerol2triqs#3

Dear Igor,

At present, only Sz is used for state classification as in the default mode of pomerol. In the Slater Hamiltonian, for example, Lz (w/o SO) or Jz (w/ SO) could be additionally used to reduce the block size.

Is there any way to pass an Operator op to call symm.checkSymmetry(op)? Or, if not, which is the easiest way to extend the code?

It seems to me that one should extend the function pomerol_ed::diagonalize so that it can accept a list of operators to be used for additional symmetry checks.

Best regards,
Junya

Error while compiling tutorial files

Hello,

I get an error while trying to compile the tutorial. The error message is:

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
pomerol_DEP_LIBRARIES

regards
Arthur

Disabling MPI in computing two-particle Green's function

Dear Andrey,

I'm computing two-particle Green's functions for multi-orbital models. As you know, it is very costly. Since the two-particle GF has many components in multi-orbital models (N_orb^4, where N_orb is the number of orbitals), it would be efficient to use MPI for the loop over components. I can do that outside of the pomerol library (actually in a Python wrapper), but then I probably also have to disable MPI in the TwoParticleGF class.

Is my understanding correct? And, could you tell me how to do it? If I understand correctly, I need to modify only TwoParticleGF::compute() and skip using pMPI::mpi_skel in some way.

Just for your information, a similar question was raised recently in pomerol-ed/pomerol2triqs#5.

Best regards,
Junya Otsuki

Non-zero imaginary part in the latest release

anderson.pomerol --U 1 --beta 26 --levels -0.677193904997332 -1.63229282488024 0.677193904997332 1.63229282488024 --hoppings 0.155286870457831 0.312708587859944 0.155286870457831 0.312708587859944 --calc_gf 1 --calc_2pgf 1 --2pgf.indices 0 1 0 1 --2pgf.coeff_tol 1e-12 --2pgf.reduce_tol 1e-5 --wf_min -20 --wf_max 20

in cxx11 mode produces a non-zero imaginary part.
This is not related to tolerances: a single site has the same problem.

Problem linking boost_mpi and boost_serialization

Hi,

I'm currently having trouble making pomerol. During make, I get the following errors:

[ 48%] Linking CXX executable FieldOperatorTest
../libpomerol.so: undefined reference to `boost::mpi::detail::packed_archive_recv(int, int, int, boost::mpi::packed_iarchive&, MPI_Status&)'
../libpomerol.so: undefined reference to `boost::mpi::communicator::operator int() const'
../libpomerol.so: undefined reference to `boost::mpi::detail::packed_archive_send(int, int, int, boost::mpi::packed_oarchive const&)'

The most obvious source of errors would be the Boost version (1.58.0 on this machine). I am trying to install a newer version of Boost to see if that helps (but installing Boost is slow...). Do you know what the minimal required version of Boost is / which version you usually use? Do you see other obvious things that could be going wrong?

Thanks in advance,

Erik

Problem with exported pomerol::pomerol target on MacOS

This issue was discovered by Sasha Lichtenstein while he was trying to build pomerol2triqs on his MacBook Pro using Boost 1.72.

CMake was issuing the following error message,

 CMake Error at c++/CMakeLists.txt:1 (add_library):
  Target "pomerol2triqs_c" links to target "Boost::mpi" but the target was
  not found.  Perhaps a find_package() call is missing for an IMPORTED
  target, or an ALIAS target is missing?

The problem is rooted in the way the pomerol::pomerol CMake target is exported. The installed share/pomerol/pomerol.cmake contains the following lines:

set_target_properties(pomerol::pomerol PROPERTIES
  INTERFACE_INCLUDE_DIRECTORIES "/usr/include/eigen3;/usr/include;/usr/include;${_IMPORT_PREFIX}/include"
  INTERFACE_LINK_LIBRARIES "/usr/lib64/libmpi_cxx.so;/usr/lib64/libmpi.so;Boost::mpi;Boost::serialization"
)

On my own Linux laptop those lines are rather different and include explicit paths to the compiled Boost components.

set_target_properties(pomerol::pomerol PROPERTIES
  INTERFACE_INCLUDE_DIRECTORIES "/usr/include/eigen3;/usr/include;/usr/include;${_IMPORT_PREFIX}/include"
  INTERFACE_LINK_LIBRARIES "/usr/lib64/libmpi_cxx.so;/usr/lib64/libmpi.so;/usr/lib64/libboost_mpi-mt.so;/usr/lib64/libboost_serialization-mt.so"
)

I wonder whether it is the consumer CMake code's duty to always import the Boost targets by calling find_package(Boost) alongside find_package(pomerol).

Tutorial example gives zero-valued TwoParticleGF?

Dear Andrey and Igor,

I'm trying to use your tutorial example with the Hubbard dimer in ./tutorial/example2site.cpp, but I'm only getting zeros for the two-particle Green's function, see below. What am I doing wrong?

Best regards, Hugo

Sites
Site "A", 1 orbital, 2 spins.
Site "B", 1 orbital, 2 spins.
Terms
-1*c^{+}_{A,0,0}c_{B,0,0}
-1*c^{+}_{B,0,0}c_{A,0,0}
-1*c^{+}_{A,0,1}c_{B,0,1}
-1*c^{+}_{B,0,1}c_{A,0,1}
-1*c^{+}_{A,0,0}c_{A,0,0}
-1*c^{+}_{A,0,1}c_{A,0,1}
-1*c^{+}_{B,0,0}c_{B,0,0}
-1*c^{+}_{B,0,1}c_{B,0,1}
Terms with 4 operators
2*c^{+}_{A,0,1}c_{A,0,1}c^{+}_{A,0,0}c_{A,0,0}
2*c^{+}_{B,0,1}c_{B,0,1}c^{+}_{B,0,0}c_{B,0,0}
=======
Indices
=======
Index 0 = (A,0,0)
Index 1 = (A,0,1)
Index 2 = (B,0,0)
Index 3 = (B,0,1)
======================
Matrix element storage
======================
Terms
-1*C^+(0)C(0) + -1*C^+(0)C(2) + -1*C^+(1)C(1) + -1*C^+(1)C(3) + -1*C^+(2)C(0) + -1*C^+(2)C(2) + -1*C^+(3)C(1) + -1*C^+(3)C(3) + -2*C^+(0)C^+(1)C(0)C(1) + -2*C^+(2)C^+(3)C(2)C(3)
N terms
1*C^+(0)C(0) + 1*C^+(1)C(1) + 1*C^+(2)C(2) + 1*C^+(3)C(3)
H commutes with N
[ H ,1*C^+(0)C(0) + 1*C^+(1)C(1) + 1*C^+(2)C(2) + 1*C^+(3)C(3) ]=0
[ H ,-0.5*C^+(0)C(0) + 0.5*C^+(1)C(1) + -0.5*C^+(2)C(2) + 0.5*C^+(3)C(3) ]=0
Preparing Hamiltonian parts...Calculating 9 jobs using 1 procs.
Calculating 9 jobs using 1 procs.
[4/9] P0 : part 3 [4] run;
[2/9] P0 : part 1 [2] run;
[3/9] P0 : part 2 [2] run;
[6/9] P0 : part 5 [2] run;
[8/9] P0 : part 7 [2] run;
[1/9] P0 : part 0 [1] run;
[5/9] P0 : part 4 [1] run;
[7/9] P0 : part 6 [1] run;
[9/9] P0 : part 8 [1] run;
done.
The value of ground energy is -3.23607
CreationOperator_1: 6 parts will be computed
Computing 1*C^+(1) in eigenbasis of the Hamiltonian: 0  16  33  50  66  83  
AnnihilationOperator_1: 6 parts will be computed
Computing 1*C(1) in eigenbasis of the Hamiltonian: 0  16  33  50  66  83  
0 | (-5.9535e-17,-0.184554)
1 | (-3.51059e-17,-0.373873)
2 | (-2.71198e-17,-0.37882)
3 | (-2.1774e-18,-0.3349)
4 | (-9.39264e-18,-0.289338)
5 | (-1.58814e-17,-0.251146)
6 | (-7.4351e-18,-0.220407)
7 | (-1.4935e-17,-0.195678)
8 | (-9.70438e-18,-0.17557)
9 | (-7.68787e-18,-0.158998)
TwoParticleGF(1111): 24 parts will be calculated
Calculating 24 jobs using 1 procs.
[1/24] P0 : part 0 [1] run;
Total 8+8=16 terms -> 6+7=13, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[2/24] P0 : part 1 [1] run;
Total 8+8=16 terms -> 4+3=7, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[3/24] P0 : part 2 [1] run;
Total 8+8=16 terms -> 6+7=13, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[4/24] P0 : part 3 [1] run;
Total 8+8=16 terms -> 4+3=7, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[5/24] P0 : part 4 [1] run;
Total 128+122=250 terms -> 94+96=190, 	tols = 8.13008e-09 (coeff), 1e-08 (res)
[6/24] P0 : part 5 [1] run;
Total 128+122=250 terms -> 94+96=190, 	tols = 8.13008e-09 (coeff), 1e-08 (res)
[7/24] P0 : part 6 [1] run;
Total 128+122=250 terms -> 94+96=190, 	tols = 8.13008e-09 (coeff), 1e-08 (res)
[8/24] P0 : part 7 [1] run;
Total 128+122=250 terms -> 94+96=190, 	tols = 8.13008e-09 (coeff), 1e-08 (res)
[9/24] P0 : part 8 [1] run;
Total 8+8=16 terms -> 8+7=15, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[10/24] P0 : part 9 [1] run;
Total 8+8=16 terms -> 8+7=15, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[11/24] P0 : part 10 [1] run;
Total 8+8=16 terms -> 6+7=13, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[12/24] P0 : part 11 [1] run;
Total 8+8=16 terms -> 6+7=13, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[13/24] P0 : part 12 [1] run;
Total 128+122=250 terms -> 94+96=190, 	tols = 8.13008e-09 (coeff), 1e-08 (res)
[14/24] P0 : part 13 [1] run;
Total 128+122=250 terms -> 94+96=190, 	tols = 8.13008e-09 (coeff), 1e-08 (res)
[15/24] P0 : part 14 [1] run;
Total 128+122=250 terms -> 94+96=190, 	tols = 8.13008e-09 (coeff), 1e-08 (res)
[16/24] P0 : part 15 [1] run;
Total 128+122=250 terms -> 94+96=190, 	tols = 8.13008e-09 (coeff), 1e-08 (res)
[17/24] P0 : part 16 [1] run;
Total 8+8=16 terms -> 6+7=13, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[18/24] P0 : part 17 [1] run;
Total 8+8=16 terms -> 8+7=15, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[19/24] P0 : part 18 [1] run;
Total 8+8=16 terms -> 6+7=13, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[20/24] P0 : part 19 [1] run;
Total 8+8=16 terms -> 8+7=15, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[21/24] P0 : part 20 [1] run;
Total 8+8=16 terms -> 4+3=7, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[22/24] P0 : part 21 [1] run;
Total 8+8=16 terms -> 4+3=7, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[23/24] P0 : part 22 [1] run;
Total 8+8=16 terms -> 6+7=13, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
[24/24] P0 : part 23 [1] run;
Total 8+8=16 terms -> 6+7=13, 	tols = 1.11111e-07 (coeff), 1e-08 (res)
done.
-2 -2 -2|(0,0)
-2 -2 -1|(0,0)
-2 -2 0|(0,0)
-2 -2 1|(0,0)
-2 -1 -2|(0,0)
-2 -1 -1|(0,0)
-2 -1 0|(0,0)
-2 -1 1|(0,0)
-2 0 -2|(0,0)
-2 0 -1|(0,0)
-2 0 0|(0,0)
-2 0 1|(0,0)
-2 1 -2|(0,0)
-2 1 -1|(0,0)
-2 1 0|(0,0)
-2 1 1|(0,0)
-1 -2 -2|(0,0)
-1 -2 -1|(0,0)
-1 -2 0|(0,0)
-1 -2 1|(0,0)
-1 -1 -2|(0,0)
-1 -1 -1|(0,0)
-1 -1 0|(0,0)
-1 -1 1|(0,0)
-1 0 -2|(0,0)
-1 0 -1|(0,0)
-1 0 0|(0,0)
-1 0 1|(0,0)
-1 1 -2|(0,0)
-1 1 -1|(0,0)
-1 1 0|(0,0)
-1 1 1|(0,0)
0 -2 -2|(0,0)
0 -2 -1|(0,0)
0 -2 0|(0,0)
0 -2 1|(0,0)
0 -1 -2|(0,0)
0 -1 -1|(0,0)
0 -1 0|(0,0)
0 -1 1|(0,0)
0 0 -2|(0,0)
0 0 -1|(0,0)
0 0 0|(0,0)
0 0 1|(0,0)
0 1 -2|(0,0)
0 1 -1|(0,0)
0 1 0|(0,0)
0 1 1|(0,0)
1 -2 -2|(0,0)
1 -2 -1|(0,0)
1 -2 0|(0,0)
1 -2 1|(0,0)
1 -1 -2|(0,0)
1 -1 -1|(0,0)
1 -1 0|(0,0)
1 -1 1|(0,0)
1 0 -2|(0,0)
1 0 -1|(0,0)
1 0 0|(0,0)
1 0 1|(0,0)
1 1 -2|(0,0)
1 1 -1|(0,0)
1 1 0|(0,0)
1 1 1|(0,0)

Computing two-particle GF in imaginary time

Hi, we've found that it's very expensive to compute two-particle GF in Matsubara frequencies especially for a seven-orbital model.
The reason is obvious: pomerol first makes a list of terms by performing summation over all intermediate states, whose computational complexity may scale as O(N^4) (N is the dimension of the Hilbert space).
Instead, one could compute G(tau1, tau2, tau3, tau4) for a given (tau1, tau2, tau3, tau4), in which case summation over intermediate states is no longer necessary.
Is there any attempt to implement the computation of two-particle GF in imaginary time?

ZeroPoleWeight is not necessarily real

First of all, I would like to thank Junya @j-otsuki for his major contribution! The ability to compute susceptibilities has certainly been a much requested feature.

Today I discovered a compilation failure seen only in the POMEROL_COMPLEX_MATRIX_ELEMENT=ON mode.

/home/igor/Physics/pomerol/pomerol.git/src/pomerol/SusceptibilityPart.cpp: In member function ‘void Pomerol::SusceptibilityPart::compute()’:
/home/igor/Physics/pomerol/pomerol.git/src/pomerol/SusceptibilityPart.cpp:68:36: error: no match for ‘operator+=’ (operand types are ‘Pomerol::RealType’ {aka ‘double’} and ‘std::complex<double>’)
                     ZeroPoleWeight += Ainner.value() * Binner.value() * DMpartOuter.getWeight(index1);
                     ~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I guess putting an std::real() around the RHS would make the error go away, but the problem is actually deeper than that. ZeroPoleWeight is declared to be real, but this property is guaranteed only when operator B is the Hermitian conjugate of A (if I am not mistaken).
Therefore, we should either declare ZeroPoleWeight as MelemType, or check that B = A^+ in the constructor of Susceptibility.

MPI Freeze

This freezes for 160 cores.

mpirun -machinefile $TMPDIR/machines -np $NSLOTS anderson_pp_shift.cxx11.pomerol --U 1.0 --ed -0.5 --beta 26 --levels 0.997606403047774 0.142386656471943 -0.997606403047774 -0.142386656471943 --hoppings 0.243593970410131 0.227845830863211 0.243593970410131 0.227845830863211 --calc_gf 1 --calc_2pgf 1 --2pgf.indices 0 1 0 1 --2pgf.coeff_tol 1e-12 --2pgf.reduce_tol 1e-5 --wf_min -50 --wf_max 50 --wb_min -75 --wb_max 75

for executable https://gist.github.com/aeantipov/86b9a77c81d10680677339799c178307
It seems to work for a small number of bosonic frequencies.

Susceptibility

Dear Andrey,

Let me ask one question. Is there any way to compute the susceptibility defined by

chi(iw) = F < A(tau) B >

where iw is the bosonic Matsubara frequency, and F denotes the Fourier transform from tau to iw. More explicitly, what I want to compute is

chi_{ijkl}(iw) = F < c_i^+(tau) c_j(tau) c_k^+ c_l >

Best,
Junya
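For reference, such a susceptibility has a standard Lehmann representation that an ED code can evaluate directly. A sketch in common conventions, with chi(tau) = <T_tau A(tau) B(0)>; pomerol's actual signs and normalization may differ:

```latex
\chi_{AB}(i\Omega_m)
  = \frac{1}{Z} \sum_{\substack{n,\,n' \\ E_n \neq E_{n'}}}
      A_{nn'}\,B_{n'n}\,
      \frac{e^{-\beta E_{n'}} - e^{-\beta E_n}}{i\Omega_m - (E_{n'} - E_n)}
  \;+\; \delta_{\Omega_m,0}\,\frac{\beta}{Z}
      \sum_{\substack{n,\,n' \\ E_n = E_{n'}}} A_{nn'}\,B_{n'n}\,e^{-\beta E_n}
```

The second, anomalous term contributes only at zero bosonic frequency; it is the quantity accumulated in the ZeroPoleWeight variable discussed in the issue above.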

Worker didn't calculate this part.

When I try to run the anderson.pomerol program as follows

mpirun -n 3 prog/bin/anderson.pomerol --calc_2pgf 1 --levels -1.0 0.0 1.0 --hoppings 1.0 1.0 1.0

it occasionally crashes with

/home/igor/Physics/pomerol/pomerol.git/src/pomerol/Hamiltonian.cpp:92: Worker3 didn't calculate part4
terminate called after throwing an instance of 'std::logic_error'
  what():  Worker didn't calculate this part.

Failing Build

Hey Andrey,

I am trying to build the current master branch on a CentOS 7.4 Machine with

gcc 7.3.0
eigen 3.3.4
boost 1.68

I am running into the following error during compilation.
It looks like at mpi_skel.hpp:97 you are trying to boost::mpi::broadcast a std::vector<..>, which requires the vector to be serializable.

[  1%] Building CXX object CMakeFiles/pomerol.dir/src/pomerol/Hamiltonian.cpp.o
/cm/shared/sw/pkg/devel/gcc/7.3.0/bin/c++  -Dpomerol_EXPORTS -I/mnt/home/wentzell/Dropbox/Coding/pomerol/include -I/mnt/home/wentzell/Dropbox/Coding/pomerol/build/include -I/cm/shared/sw/pkg/devel/openmpi/1.10.6-hfi-slurm17/include -I/mnt/home/wentzell/opt/boost-1.68/include -I/mnt/home/wentzell/Dropbox/Coding/pomerol/build/gtest/include  -Wno-register -march=broadwell -std=c++11 -Wno-unused-local-typedefs -fopenmp -O3 -DNDEBUG -fPIC   -o CMakeFiles/pomerol.dir/src/pomerol/Hamiltonian.cpp.o -c /mnt/home/wentzell/Dropbox/Coding/pomerol/src/pomerol/Hamiltonian.cpp
In file included from /mnt/home/wentzell/opt/boost-1.68/include/boost/serialization/split_member.hpp:23:0,
                 from /mnt/home/wentzell/opt/boost-1.68/include/boost/serialization/nvp.hpp:26,
                 from /mnt/home/wentzell/opt/boost-1.68/include/boost/serialization/complex.hpp:23,
                 from /mnt/home/wentzell/Dropbox/Coding/pomerol/include/pomerol/Misc.h:28,
                 from /mnt/home/wentzell/Dropbox/Coding/pomerol/include/pomerol/Hamiltonian.h:10,
                 from /mnt/home/wentzell/Dropbox/Coding/pomerol/src/pomerol/Hamiltonian.cpp:1:
/mnt/home/wentzell/opt/boost-1.68/include/boost/serialization/access.hpp: In instantiation of 'static void boost::serialization::access::serialize(Archive&, T&, unsigned int) [with Archive = boost::mpi::packed_oarchive; T = std::vector<int>]':                             
/mnt/home/wentzell/opt/boost-1.68/include/boost/serialization/serialization.hpp:68:22:   required from 'void boost::serialization::serialize(Archive&, T&, unsigned int) [with Archive = boost::mpi::packed_oarchive; T = std::vector<int>]'                                    
/mnt/home/wentzell/opt/boost-1.68/include/boost/serialization/serialization.hpp:126:14:   required from 'void boost::serialization::serialize_adl(Archive&, T&, unsigned int) [with Archive = boost::mpi::packed_oarchive; T = std::vector<int>]'                               
/mnt/home/wentzell/opt/boost-1.68/include/boost/archive/detail/oserializer.hpp:153:40:   required from 'void boost::archive::detail::oserializer<Archive, T>::save_object_data(boost::archive::detail::basic_oarchive&, const void*) const [with Archive = boost::mpi::packed_oarchive; T = std::vector<int>]'
/mnt/home/wentzell/opt/boost-1.68/include/boost/archive/detail/oserializer.hpp:106:1:   required from 'class boost::archive::detail::oserializer<boost::mpi::packed_oarchive, std::vector<int> >'                                                                               
/mnt/home/wentzell/opt/boost-1.68/include/boost/archive/detail/oserializer.hpp:258:13:   required from 'static void boost::archive::detail::save_non_pointer_type<Archive>::save_standard::invoke(Archive&, const T&) [with T = std::vector<int>; Archive = boost::mpi::packed_oarchive]'
/mnt/home/wentzell/opt/boost-1.68/include/boost/archive/detail/oserializer.hpp:315:22:   [ skipping 4 instantiation contexts, use -ftemplate-backtrace-limit=0 to disable ]                                                                                                     
/mnt/home/wentzell/opt/boost-1.68/include/boost/mpi/packed_oarchive.hpp:112:5:   required from 'void boost::mpi::packed_oarchive::save_override(const T&) [with T = std::vector<int>]'                                                                                          
/mnt/home/wentzell/opt/boost-1.68/include/boost/archive/detail/interface_oarchive.hpp:70:9:   required from 'Archive& boost::archive::detail::interface_oarchive<Archive>::operator<<(const T&) [with T = std::vector<int>; Archive = boost::mpi::packed_oarchive]'             
/mnt/home/wentzell/opt/boost-1.68/include/boost/mpi/collectives/broadcast.hpp:113:12:   required from 'void boost::mpi::detail::broadcast_impl(const boost::mpi::communicator&, T*, int, int, mpl_::false_) [with T = std::vector<int>; mpl_::false_ = mpl_::bool_<false>]'     
/mnt/home/wentzell/opt/boost-1.68/include/boost/mpi/collectives/broadcast.hpp:141:25:   required from 'void boost::mpi::broadcast(const boost::mpi::communicator&, T&, int) [with T = std::vector<int>]'                                                                        
/mnt/home/wentzell/Dropbox/Coding/pomerol/include/mpi_dispatcher/mpi_skel.hpp:97:30:   required from 'std::map<int, int> pMPI::mpi_skel<WrapType>::run(const boost::mpi::communicator&, bool) [with WrapType = pMPI::PrepareWrap<Pomerol::HamiltonianPart>]'                    
/mnt/home/wentzell/Dropbox/Coding/pomerol/src/pomerol/Hamiltonian.cpp:34:72:   required from here
/mnt/home/wentzell/opt/boost-1.68/include/boost/serialization/access.hpp:116:11: error: 'class std::vector<int>' has no member named 'serialize'                                                                                                                                
         t.serialize(ar, file_version);

I have configured cmake with

cmake ../ -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$HOME/opt/pomerol -DProgs=ON -DPOMEROL_COMPLEX_MATRIX_ELEMENTS=ON -DPOMEROL_USE_OPENMP=ON -DCXX11=ON -DTesting=ON
