Branch cleaning for v0.6.0 (grid issue, 50 comments, closed)

aportelli commented on July 18, 2024
Branch cleaning for v0.6.0

from grid.

Comments (50)

aportelli commented on July 18, 2024

It was just to be completely sure, but $(MAKE) has probably existed for a very long time, so this should be OK.

paboyle commented on July 18, 2024

I integrated mpi3-master-slave and merged with develop, as discussed, for the 0.6.0 clean-up:

MacBook-Pro-2:mpi3l peterboyle$ ./benchmarks/Benchmark_dwf --help
[warn] kq_init: detected broken kqueue; not using.: No such file or directory
[warn] kq_init: detected broken kqueue; not using.: No such file or directory
Grid : Message : Grid MPI-3 configuration: detected 1 Ranks 1 Nodes 1 with ranks-per-node
Grid : Message : Grid MPI-3 configuration: using one lead process per node
Grid : Message : Grid MPI-3 configuration: reduced communicator has size 1
Grid : Message : Node 0 led by MPI rank 0
Grid : Message : { 0 }
Grid : Message : --help : this message
Grid : Message :
Grid : Message : Geometry:
Grid : Message : --mpi n.n.n.n : default MPI decomposition
Grid : Message : --threads n : default number of OMP threads
Grid : Message : --grid n.n.n.n : default Grid size
Grid : Message : --shm M : allocate M megabytes of shared memory for comms
Grid : Message :
Grid : Message : Verbose and debug:
Grid : Message : --log list : comma separted list of streams from Error,Warning,Message,Performance,Iterative,Integrator,Debug,Colours
Grid : Message : --decomposition : report on default omp,mpi and simd decomposition
Grid : Message : --debug-signals : catch sigsegv and print a blame report
Grid : Message : --debug-stdout : print stdout from EVERY node
Grid : Message : --timestamp : tag with millisecond resolution stamps
Grid : Message :
Grid : Message : Performance:
Grid : Message : --dslash-generic: Wilson kernel for generic Nc
Grid : Message : --dslash-unroll : Wilson kernel for Nc=3
Grid : Message : --dslash-asm : Wilson kernel for AVX512
Grid : Message : --lebesgue : Cache oblivious Lebesgue curve/Morton order/Z-graph stencil looping
Grid : Message : --cacheblocking n.m.o.p : Hypercuboidal cache blocking
Grid : Message :

paboyle commented on July 18, 2024

knl-stats can be pruned; we can put the functionality back in on a different feature branch.

aportelli commented on July 18, 2024

Thanks. On my side, I have extended the README quite a bit and polished the build system, especially to smooth the process of compiling against MKL on an Intel system. Please let me know if anything does not work or looks like it could be improved.

aportelli commented on July 18, 2024

You have been changing the build system (to include the new MPI3 model, I guess), but you removed the auto-configuration for MPI3. Any reason for doing that?

coppolachan commented on July 18, 2024

hirep can be eliminated too.

Double precision is already tested and merged in release/0.6.0.

aportelli commented on July 18, 2024

I have included the FFT optimisations and the Feynman rules (minus the QED part) in develop. I noticed that on one side Guido committed fftw3.h, and on the other side Peter removed pulling this header from their repository. Is that normal? Are we now comfortable shipping fftw3.h with the code?
Please advise.

coppolachan commented on July 18, 2024

fftw3.h should be dropped; I have no idea how it ended up in one of my commits.

azrael417 commented on July 18, 2024

Hi guys, when I compile with AVX512 on our new shiny computer I get:

checking for library containing __gmpf_init... -lgmp
checking for library containing mpfr_init... -lmpfr
checking for library containing fftw_execute... none required
checking for library containing fftwf_execute... none required
configure: error: "SIMD option AVX512MIC not supported by the Intel compiler"

Which is weird, as the README says that only Intel is supported. I think there is a guard in place that does not see that CC is in fact icc in this case. Can someone please look into this? This is somewhat new; it did not complain before.

aportelli commented on July 18, 2024

Hi Thorsten, please give the new README a good read. I have removed AVX512MIC because it was redundant; just use KNL.

azrael417 commented on July 18, 2024

Aha, I see: there is generic AVX512 and then there is KNL. Is the MPI3 support good, or is MPI2 better at this point in time? I am asking because I want to run large-scale benchmarks today. Any hints on what parameters I should set?

azrael417 commented on July 18, 2024

That compiled, but I had to add:

-I${MKLROOT}/include/fftw

otherwise it could not find fftw. Is that an issue for you, or expected? I mean, you include <fftw3.h>, so you expect it to be in the standard include path.
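For reference, the same workaround can be expressed at configure time instead of editing anything by hand. This is only a sketch, assuming MKL's FFTW wrapper headers live under ${MKLROOT}/include/fftw; the simd and comms options shown are illustrative, not a recommendation:

```shell
# Hypothetical configure invocation: forward MKL's bundled FFTW header
# directory to the compiler so that '#include <fftw3.h>' resolves.
../configure --enable-simd=KNL --enable-comms=mpi \
    CXXFLAGS="-I${MKLROOT}/include/fftw"
```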

aportelli commented on July 18, 2024

Yes, that showed up this morning; I am going to commit a fix soon, after which you won’t have to do that anymore.

azrael417 commented on July 18, 2024

Is this thread for general issues or just build issues? I am still getting MPI deadlocks during setup.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.


Grid : Message        : 58 ms : Requesting 134217728 byte stencil comms buffers 
Grid : Message        : 58 ms : Grid is setup to use 32 threads
Grid : Message        : 189 ms : Making s innermost grids

aportelli commented on July 18, 2024

Actually, the fftw3.h commit was already done when you posted. This thread is for pre-0.6 cleanup. If you have a specific issue, please open a new one and follow the checklist in the new README file.

coppolachan commented on July 18, 2024

On the MPI problem, let's check one issue at a time. Some of this information can be useful:
https://paboyle.github.io/Grid/docs/knl_build/

  • Could you confirm that you are able to run with 62 or 64 threads?

azrael417 commented on July 18, 2024

Apparently I did not have the latest release branch. Rebuilding now, then doing single-node runs; stay tuned.

paboyle commented on July 18, 2024

mpi3 and mpi3l are experimental for now -- just a warning.

--enable-comms=mpi should work, and does on my laptop with OpenMPI, but we will only go through the pre-release testing programme (which involves running on Cori) once we get the double precision assembly feature integrated.

azrael417 commented on July 18, 2024

I tried mpi for the moment and it seems to get stuck; I will open a new issue for that.

The code also builds without the additional fftw includes, so that works well for me. Thanks.

coppolachan commented on July 18, 2024

Peter: the double precision was merged into the release branch a few days ago.

azrael417 commented on July 18, 2024

OK, here is my verdict :). The performance decreased significantly: before I got about 549 Gflops per rank (single node), and now I get about 245. Something seems odd. Also, it is not good that the benchmark crashes because of a norm error for larger local volumes such as 32^4*16. I will try a smaller one, but the performance was the same, 245 Gflops. Were the flop counters modified?

This is my script:

#!/bin/bash
#SBATCH --ntasks-per-core=4
#SBATCH -p regular
#SBATCH -A mpccc
#SBATCH -N 1
#SBATCH -C knl,quad,cache
#SBATCH -t 1:00:00

export OMP_NUM_THREADS=64
export OMP_PLACES=threads
export OMP_PROC_BIND=spread


#some perfops
export ASMOPT=1

srun -n 1 -c 272 --cpu_bind=cores ./install/grid_sp_mpi/bin/Benchmark_dwf --grid 16.16.16.16 --mpi 1.1.1.1 --dslash-opt --cacheblocking=4.1.1.1

azrael417 commented on July 18, 2024

I obtain:

Grid : Message        : #### Dhop calls report 
Grid : Message        : WilsonFermion5D Number of Dhop Calls     : 200
Grid : Message        : WilsonFermion5D Total Communication time : 636 us
Grid : Message        : WilsonFermion5D CommTime/Calls           : 3.18 us
Grid : Message        : WilsonFermion5D Total Compute time       : 594237 us
Grid : Message        : WilsonFermion5D ComputeTime/Calls        : 2971.18 us
Grid : Message        : Average mflops/s per call                : 237159
Grid : Message        : Average mflops/s per call per rank       : 237159
Grid : Message        : #### Dhop calls report 
Grid : Message        : WilsonFermion5D Number of Dhop Calls     : 200
Grid : Message        : WilsonFermion5D Total Communication time : 626 us
Grid : Message        : WilsonFermion5D CommTime/Calls           : 3.13 us
Grid : Message        : WilsonFermion5D Total Compute time       : 625494 us
Grid : Message        : WilsonFermion5D ComputeTime/Calls        : 3127.47 us
Grid : Message        : Average mflops/s per call                : 225308
Grid : Message        : Average mflops/s per call per rank       : 225308
Grid : Message        : #### Dhop calls report 
Grid : Message        : WilsonFermion5D Number of Dhop Calls     : 100
Grid : Message        : WilsonFermion5D Total Communication time : 587 us
Grid : Message        : WilsonFermion5D CommTime/Calls           : 5.87 us
Grid : Message        : WilsonFermion5D Total Compute time       : 298708 us
Grid : Message        : WilsonFermion5D ComputeTime/Calls        : 2987.08 us
Grid : Message        : Average mflops/s per call                : 235897
Grid : Message        : Average mflops/s per call per rank       : 235897
Grid : Message        : #### Dhop calls report 
Grid : Message        : WilsonFermion5D Number of Dhop Calls     : 100
Grid : Message        : WilsonFermion5D Total Communication time : 592 us
Grid : Message        : WilsonFermion5D CommTime/Calls           : 5.92 us
Grid : Message        : WilsonFermion5D Total Compute time       : 288559 us
Grid : Message        : WilsonFermion5D ComputeTime/Calls        : 2885.59 us
Grid : Message        : Average mflops/s per call                : 244194
Grid : Message        : Average mflops/s per call per rank       : 244194

coppolachan commented on July 18, 2024

Can you confirm that you are using these settings?

https://paboyle.github.io/Grid/docs/running_knl/

coppolachan commented on July 18, 2024

Also, please confirm the default precision of your runs.

azrael417 commented on July 18, 2024

I am using the recommended variant of the bindings for our system. With those bindings I got about 550 Gflops the last time I tried. For example, this is what I got a while ago:

Grid : Message        : #### Dhop calls report 
Grid : Message        : WilsonFermion5D Number of Dhop Calls     : 200
Grid : Message        : WilsonFermion5D Total Communication time : 685 us
Grid : Message        : WilsonFermion5D CommTime/Calls           : 3.425 us
Grid : Message        : WilsonFermion5D Total Compute time       : 378143 us
Grid : Message        : WilsonFermion5D ComputeTime/Calls        : 1890.71 us
Grid : Message        : Average mflops/s per call                : 372686
Grid : Message        : Average mflops/s per call per rank       : 372686

That is on the same lattice as above, as far as I remember.

azrael417 commented on July 18, 2024

Precision is (or should be) single, as I run them from a single-precision build.

coppolachan commented on July 18, 2024

Thanks. Then I will ask you to open another issue for this and to provide the information described here:

https://paboyle.github.io/Grid/docs/bug_report/

coppolachan commented on July 18, 2024

I will add that if you are using the latest develop, the correct flag is --dslash-asm.

coppolachan commented on July 18, 2024

You can find a list of the options for the benchmark in
./Benchmark_dwf --help

Kernel options: --dslash-generic, --dslash-unroll, --dslash-asm.

Please also use only 1 thread per core, as described in the page I suggested. This line
#SBATCH --ntasks-per-core=4
in your script is suspicious.
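For concreteness, the suggested binding could look like the sketch below. The 64-core count, OMP placement values, and binary path are assumptions on my part, not taken verbatim from the linked page:

```shell
#!/bin/bash
# Sketch: one hardware thread per core instead of four (values assumed).
#SBATCH --ntasks-per-core=1
#SBATCH -N 1
#SBATCH -C knl,quad,cache

export OMP_NUM_THREADS=64       # one OMP thread per core in use
export OMP_PLACES=cores
export OMP_PROC_BIND=spread

srun -n 1 -c 64 --cpu_bind=cores ./Benchmark_dwf \
    --grid 16.16.16.16 --mpi 1.1.1.1 --dslash-asm
```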

paboyle commented on July 18, 2024

Thanks Guido,
I might be confused -- did the DP also go into develop? I thought I saw unimplemented DP --dslash-asm on develop this morning, but perhaps I'm mistaken. Otherwise we might have to back-sync release to develop sooner.
Peter

paboyle commented on July 18, 2024

Or perhaps we are now complete on release, and after verification on multiple nodes (beyond simple single-node, many-MPI-rank tests) we just do the back-sync to develop by finalising the release.

coppolachan commented on July 18, 2024

Not in develop. We decided two weeks ago to concentrate on the release, so I forked from the release and then merged back when finished.

paboyle commented on July 18, 2024

OK, thanks.
I will run tests on Cori over the weekend and finish the release, since I think we are done.

aportelli commented on July 18, 2024

There are still a couple of minor problems before release, I think. Travis failed for the recursive tests target that I put in the build system; this is because the Linux VMs have an archaic autotools version. That's a bit annoying...

Also, there is a test that does not compile, something related to the comms. I am sure that you, Peter, will directly know what to do when you see it.

I am going to find a solution for the tests compilation tonight.

coppolachan commented on July 18, 2024

What test is not compiling?

azrael417 commented on July 18, 2024

Thanks Guido, the asm flag did it, so the performance is OK now. I will try MPI next and open another issue if I encounter problems there.

aportelli commented on July 18, 2024

This is what I got on my laptop and on the KNL workstation:

In file included from ../../../tests/solver/Test_dwf_hdcr.cc:30:
In file included from /Users/antonin/Development/Grid/include/Grid/Grid.h:78:
In file included from /Users/antonin/Development/Grid/include/Grid/Algorithms.h:53:
/Users/antonin/Development/Grid/include/Grid/algorithms/CoarsenedMatrix.h:285:20: error: no member named
      'comm_buf' in
      'Grid::CartesianStencil<Grid::iVector<Grid::iScalar<Grid::iScalar<Grid::iScalar<Grid::Grid_simd<std::__1::complex<double>,
      __attribute__((__vector_size__(4 * sizeof(double)))) double> > > >, 32>,
      Grid::iVector<Grid::iScalar<Grid::iScalar<Grid::iScalar<Grid::Grid_simd<std::__1::complex<double>,
      __attribute__((__vector_size__(4 * sizeof(double)))) double> > > >, 32> >'
            nbr = Stencil.comm_buf[SE->_offset];
                  ~~~~~~~ ^
../../../tests/solver/Test_dwf_hdcr.cc:584:55: note: in instantiation of member function
      'Grid::CoarsenedMatrix<Grid::iScalar<Grid::iVector<Grid::iVector<Grid::Grid_simd<std::__1::complex<double>,
      __attribute__((__vector_size__(4 * sizeof(double)))) double>, 3>, 4> >,
      Grid::iScalar<Grid::iScalar<Grid::iScalar<Grid::Grid_simd<std::__1::complex<double>,
      __attribute__((__vector_size__(4 * sizeof(double)))) double> > > >, 32>::M' requested here
  CoarsenedMatrix<vSpinColourVector,vTComplex,nbasis> LDOp(*Coarse5d);
                                                      ^
1 error generated.

paboyle commented on July 18, 2024

config.summary and configure flags...

aportelli commented on July 18, 2024

I got that consistently for different configurations, so it looks like an error in the code. But if you want, here is an example:

----- PLATFORM ----------------------------------------
architecture (build)        : x86_64
os (build)                  : darwin16.1.0
architecture (target)       : x86_64
os (target)                 : darwin16.1.0
compiler vendor             : clang
compiler version            : 4.2.1
----- BUILD OPTIONS -----------------------------------
SIMD                        : AVX
Threading                   : no
Communications type         : mpi
Default precision           : double
RNG choice                  : ranlux48
GMP                         : yes
LAPACK                      : no
FFTW                        : yes
build DOXYGEN documentation : yes
graphs and diagrams         : yes
----- BUILD FLAGS -------------------------------------
CXXFLAGS:
    -I/Users/antonin/Development/Grid/include
    -I/usr/local/Cellar/open-mpi/2.0.1/include
    -mavx
    -I/Users/antonin/local/include
    -I/usr/local/include
    -I/usr/local/include
    -O3
    -std=c++11
LDFLAGS:
    -L/Users/antonin/Development/Grid/build/lib
    -L/usr/local/opt/libevent/lib
    -L/usr/local/Cellar/open-mpi/2.0.1/lib
    -L/Users/antonin/local/lib
    -L/usr/local/lib
    -L/usr/local/lib
LIBS:
    -lmpi
    -lfftw3f
    -lfftw3
    -lmpfr
    -lgmp
-------------------------------------------------------
../configure --enable-precision=double --enable-simd=AVX --enable-comms=mpi-auto --with-gmp=/usr/local --with-mpfr=/usr/local --prefix=/Users/antonin/local --with-fftw=${HOME}/local

paboyle commented on July 18, 2024

I see the error, but there are others once that is fixed. That's the danger of switching off tests in Travis, despite the compile time, I guess. Not easily fixed tonight, but likely over the weekend.

aportelli commented on July 18, 2024

I agree, we are pushing the boundaries of what we can do with Travis; hopefully with a future solution that we host, we'll be able to build all the tests systematically.

aportelli commented on July 18, 2024

OK, I have written a manual recursive target for tests. Now the Travis build should pass, and this improves backward compatibility with old automake versions. I also added a line to the README saying that people can use make tests from the root directory.

aportelli commented on July 18, 2024

Thanks for the fix; I confirm that building all the tests now works on my side!

coppolachan commented on July 18, 2024

make tests does not accept multiple concurrent compilations when run from the root directory. This is the warning:

make -C tests tests
make[1]: warning: jobserver unavailable: using -j1.  Add '+' to parent make rule.
make[1]: Entering directory '/home/neo/Codes/Builds/Grid/tests'

...

It does accept -j inside the individual directories.

coppolachan commented on July 18, 2024

Suggested modification: use $(MAKE), which passes the flags down.

In the main Makefile.am:

$(MAKE) -C tests tests

instead of the current line, and in the Makefile.am in tests:

for d in $(SUBDIRS); do $(MAKE) -C $${d} tests; done

instead of the current line.

Does this modification conflict with older automake?
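Put together, the proposed fragments would look like the sketch below; the target name follows the quoted lines, and the '+' recipe prefix mentioned in the jobserver warning would be an alternative way to pass -j down:

```make
# In the main Makefile.am: $(MAKE) inherits the jobserver, so a
# top-level 'make -j8 tests' parallelises the sub-makes.
tests:
	$(MAKE) -C tests tests

# In tests/Makefile.am: recurse into each subdirectory's tests target.
tests:
	for d in $(SUBDIRS); do $(MAKE) -C $${d} tests; done
```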

aportelli commented on July 18, 2024

Good spot, please go ahead and make the modification. Travis has a prehistoric automake version, so it will be a good test to figure out how portable this is. Thanks!

coppolachan commented on July 18, 2024

Travis does not compile the tests, so it cannot check whether this is OK.
In principle it is very standard and recommended everywhere, so it should be supported.
It works on my machine, but we need to check on other platforms.

aportelli commented on July 18, 2024

So now that we have finished 0.6, I propose we clean up the dead remote branches. I suggest that all the following branches be deleted:

  • release/v0.6.0
  • feature/hirep
  • feature/knl-stat
  • feature/KNL_double_prec
  • feature/mpi3
  • feature/mpi3-master-slave

Please scream if you think some of these should not be removed.

coppolachan commented on July 18, 2024

Good for me.

I would also like to know if we still need these directories:

Old
gcc-bug-report

paboyle commented on July 18, 2024

Old can go. gcc-bug-report is still open. Closing, as this issue is stale.
