mmgtools / parmmg

Distributed parallelization of 3D volume mesh adaptation

License: Other

Emacs Lisp 0.03% CMake 6.24% C 91.97% Fortran 0.36% C++ 0.56% Perl 0.72% Shell 0.13%
3d hpc mesh-adaptation mpi parallel

parmmg's People

Contributors

algiane, dobrzynski, iwheel, lcirrottola, pattakosn, prj-, prudhomm


parmmg's Issues

Error when using a single MPI process

It seems the distributed interface cannot handle a mesh distributed on a single MPI process.
I end up with the following trace, followed by a segmentation fault:

  -- PMMG: CHECK INPUT DATA
  -- CHECK INPUT DATA COMPLETED.     0.000s

  -- PHASE 1 : ANALYSIS
   -- PHASE 1 COMPLETED.     0.000s
 ## Error: PMMG_preprocessMesh_distributed: parallel interface nodes must be set through the API interface

  -- PHASE 2 : ISOTROPIC MESHING

but I do indeed call PMMG_Set_numberOfNodeCommunicators(mesh, 0);
What about disconnected meshes where each component resides on a single MPI process, so that PMMG_Set_numberOfNodeCommunicators(mesh, 0); is called on all processes as well?
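For reference, a minimal sketch of the calls in question (a sketch only, assuming the usual distributed-API setup, PMMG_Init_parMesh and the mesh setters, has already been done on this rank):

    /* this rank owns a whole connected component: no parallel interface */
    PMMG_Set_numberOfNodeCommunicators( parmesh, 0 );
    int ier = PMMG_parmmglib_distributed( parmesh ); /* fails as in the trace above */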

Build fails: multiple definitions of PMMG_interp[234]bar

I have failed to compile ParMMG on two different machines, with the same errors on both:

/usr/sbin/ld: CMakeFiles/libparmmg_so.dir/src/parmmgexterns.c.o:(.bss+0x0): multiple definition of `PMMG_interp2bar'; CMakeFiles/libparmmg_so.dir/src/libparmmg.c.o:(.bss+0x0): first defined here
/usr/sbin/ld: CMakeFiles/libparmmg_so.dir/src/parmmgexterns.c.o:(.bss+0x8): multiple definition of `PMMG_interp3bar'; CMakeFiles/libparmmg_so.dir/src/libparmmg.c.o:(.bss+0x8): first defined here
/usr/sbin/ld: CMakeFiles/libparmmg_so.dir/src/parmmgexterns.c.o:(.bss+0x10): multiple definition of `PMMG_interp4bar'; CMakeFiles/libparmmg_so.dir/src/libparmmg.c.o:(.bss+0x10): first defined here

Versions of programs that could be relevant:

  • cmake 3.19.3
  • make 4.3
  • gcc 10.2.0
  • MMG 5.5.2
  • ParMMG at current github.com/master (commit 715a680f0796c43c8a5e704e0481095bd40daf6e).

Configuration:

cmake /.../tmp/parmmg/src/ \
    -DCMAKE_INSTALL_PREFIX=/usr \
    -DCMAKE_BUILD_TYPE=Release \
    -DUSE_POINTMAP=ON \
    -DCMAKE_CXX_FLAGS='-O3 -mavx2 -fPIC -fopenmp' \
    -DCMAKE_C_FLAGS='-O3 -mavx2 -fPIC -fopenmp' \
    -DLIBPARMMG_SHARED=ON \
    -DLIBPARMMG_STATIC=ON \
    -DDOWNLOAD_MMG=OFF \
    -DMMG_DIR=/.../mmg/src/mmg-5.5.2 \
    -DMMG_BUILDDIR=/usr \
    -DDOWNLOAD_METIS=OFF \
    -DMETIS_DIR=/usr/include

Complete output

CMake Deprecation Warning at CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED):
  Compatibility with CMake < 2.8.12 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value or use a ...<max> suffix to tell
  CMake that the project does not need compatibility with older versions.


-- The C compiler identification is GNU 10.2.0
-- The CXX compiler identification is GNU 10.2.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/sbin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/sbin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The Fortran compiler identification is GNU 10.2.0
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Check for working Fortran compiler: /usr/sbin/gfortran - skipped
-- Checking whether /usr/sbin/gfortran supports Fortran 90
-- Checking whether /usr/sbin/gfortran supports Fortran 90 - yes
-- Found MPI_C: /usr/lib/openmpi/libmpi.so (found version "3.1") 
-- Found MPI_CXX: /usr/lib/openmpi/libmpi_cxx.so (found version "3.1") 
-- Found MPI_Fortran: /usr/lib/openmpi/libmpi_usempif08.so (found version "3.1") 
-- Found MPI: TRUE (found version "3.1") found components: C CXX Fortran 
CMake Warning at CMakeLists.txt:134 (MESSAGE):
  Possible deadlocks with open-mpi (see
  https://github.com/open-mpi/ompi/issues/6568 )...


-- Compilation with mpi
-- A cache variable, namely SCOTCH_DIR, has been set to specify the install directory of SCOTCH
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE  
-- Looking for SCOTCH_graphInit
-- Looking for SCOTCH_graphInit - found
-- Performing Test SCOTCH_Num_4
-- Performing Test SCOTCH_Num_4 - Success
-- Found SCOTCH: /lib/libscotch.so;/lib/libscotcherrexit.so;-lz;-lm;-lrt  
CMake Warning at CMakeLists.txt:192 (MESSAGE):
  VTK library not found: vtk I/O will not be available.


-- Compilation with scotch: /lib/libscotch.so;/lib/libscotcherrexit.so;-lz;-lm;-lrt
-- Manual installation of Mmg: please, specify the MMG_DIR and MMG_BUILDDIR CMake variables
-- A cache variable, namely MMG_DIR, has been set to specify the install directory of MMG
-- Looking for MMG5_loadMshMesh_part1
-- Looking for MMG5_loadMshMesh_part1 - found
-- Found MMG: /usr/lib/libmmg.so  
-- Compilation with Mmg: /usr/lib/libmmg.so
-- A cache variable, namely METIS_DIR, has been set to specify the install directory of METIS
-- Looking for METIS_NodeND
-- Looking for METIS_NodeND - found
-- Performing Test METIS_idx_t_4
-- Performing Test METIS_idx_t_4 - Success
-- Found METIS: /usr/lib/libmetis.so  
-- Compilation with metis: /usr/lib/libmetis.so
-- Found Doxygen: /usr/sbin/doxygen (found version "1.9.1") found components: doxygen dot 
-- Configuring done
-- Generating done
-- Build files have been written to: /.../tmp/parmmg/build
Scanning dependencies of target pmmgtypes_header
Scanning dependencies of target pmmgversion_header
Scanning dependencies of target genheader_pmmg
Scanning dependencies of target GenerateGitHash
Scanning dependencies of target pmmg_header
[  0%] Built target pmmgversion_header
[  1%] Built target pmmgtypes_header
[  1%] Building C object CMakeFiles/genheader_pmmg.dir/scripts/genheader.c.o
[  2%] Getting git commit hash
[  2%] Built target pmmg_header
Scanning dependencies of target copy_libpmmgtypes
Scanning dependencies of target copy_pmmgversion
   > Found a git branch: master
   > Found a git commit: 715a680f0796c43c8a5e704e0481095bd40daf6e
   > Found a git date: 2020-11-26 11:41:57 +0100
Scanning dependencies of target copy_libpmmg
[  3%] Copying /.../tmp/parmmg/src/src/libparmmgtypes.h in /.../tmp/parmmg/build/include/parmmg/libparmmgtypes.h
[  4%] Copying /.../tmp/parmmg/build/src/parmmg/pmmgversion.h in /.../tmp/parmmg/build/include/parmmg/pmmgversion.h
[  4%] Built target GenerateGitHash
[  5%] Copying /.../tmp/parmmg/src/src/libparmmg.h in /.../tmp/parmmg/build/include/parmmg/libparmmg.h
[  5%] Built target copy_libpmmgtypes
[  5%] Built target copy_pmmgversion
Scanning dependencies of target libparmmg_a
Scanning dependencies of target libparmmg_so
Scanning dependencies of target copy_pmmggithash
[  5%] Built target copy_libpmmg
[  6%] Copying /.../tmp/parmmg/build/src/parmmg/git_log_pmmg.h in /.../tmp/parmmg/build/include/parmmg/git_log_pmmg.h
[  7%] Linking C executable bin/genheader_pmmg
[  7%] Built target copy_pmmggithash
[ 10%] Building C object CMakeFiles/libparmmg_so.dir/src/API_functionsf_pmmg.c.o
[ 10%] Building C object CMakeFiles/libparmmg_so.dir/src/API_functions_pmmg.c.o
[ 11%] Building C object CMakeFiles/libparmmg_so.dir/src/barycoord_pmmg.c.o
[ 12%] Building C object CMakeFiles/libparmmg_so.dir/src/communicators_pmmg.c.o
[ 14%] Building C object CMakeFiles/libparmmg_so.dir/src/chkcomm_pmmg.c.o
[ 14%] Building C object CMakeFiles/libparmmg_so.dir/src/analys_pmmg.c.o
[ 15%] Building C object CMakeFiles/libparmmg_a.dir/src/API_functions_pmmg.c.o
[ 15%] Built target genheader_pmmg
[ 16%] Building C object CMakeFiles/libparmmg_a.dir/src/API_functionsf_pmmg.c.o
/.../tmp/parmmg/src/src/API_functions_pmmg.c: In function ‘PMMG_Init_parameters’:
/.../tmp/parmmg/src/src/API_functions_pmmg.c:430:2: warning: #warning Option -nosurf imposed by default [-Wcpp]
  430 | #warning Option -nosurf imposed by default
      |  ^~~~~~~
/.../tmp/parmmg/src/src/API_functions_pmmg.c: In function ‘PMMG_Init_parameters’:
/.../tmp/parmmg/src/src/API_functions_pmmg.c:430:2: warning: #warning Option -nosurf imposed by default [-Wcpp]
  430 | #warning Option -nosurf imposed by default
      |  ^~~~~~~
[ 17%] Building C object CMakeFiles/libparmmg_so.dir/src/coorcell_pmmg.c.o
[ 19%] Building C object CMakeFiles/libparmmg_so.dir/src/debug_pmmg.c.o
/.../tmp/parmmg/src/src/API_functions_pmmg.c: In function ‘PMMG_Set_solName’:
/.../tmp/parmmg/src/src/API_functions_pmmg.c:153:7: warning: ‘strncat’ specified bound depends on the length of the source argument [-Wstringop-overflow=]
  153 |       strncat ( defname, defroot,strlen(defroot)+1 );
      |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/.../tmp/parmmg/src/src/API_functions_pmmg.c:148:15: note: length computed here
  148 |         len = strlen(defroot)+5; // defroot+".sol"
      |               ^~~~~~~~~~~~~~~
/.../tmp/parmmg/src/src/API_functions_pmmg.c:153:7: warning: ‘strncat’ specified bound depends on the length of the source argument [-Wstringop-overflow=]
  153 |       strncat ( defname, defroot,strlen(defroot)+1 );
      |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/.../tmp/parmmg/src/src/API_functions_pmmg.c:148:15: note: length computed here
  148 |         len = strlen(defroot)+5; // defroot+".sol"
      |               ^~~~~~~~~~~~~~~
/.../tmp/parmmg/src/src/API_functions_pmmg.c:153:7: warning: ‘strncat’ specified bound depends on the length of the source argument [-Wstringop-overflow=]
  153 |       strncat ( defname, defroot,strlen(defroot)+1 );
      |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/.../tmp/parmmg/src/src/API_functions_pmmg.c:153:34: note: length computed here
  153 |       strncat ( defname, defroot,strlen(defroot)+1 );
      |                                  ^~~~~~~~~~~~~~~
/.../tmp/parmmg/src/src/API_functions_pmmg.c: In function ‘PMMG_Set_solName’:
/.../tmp/parmmg/src/src/API_functions_pmmg.c:153:7: warning: ‘strncat’ specified bound depends on the length of the source argument [-Wstringop-overflow=]
  153 |       strncat ( defname, defroot,strlen(defroot)+1 );
      |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/.../tmp/parmmg/src/src/API_functions_pmmg.c:148:15: note: length computed here
  148 |         len = strlen(defroot)+5; // defroot+".sol"
      |               ^~~~~~~~~~~~~~~
/.../tmp/parmmg/src/src/API_functions_pmmg.c:153:7: warning: ‘strncat’ specified bound depends on the length of the source argument [-Wstringop-overflow=]
  153 |       strncat ( defname, defroot,strlen(defroot)+1 );
      |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/.../tmp/parmmg/src/src/API_functions_pmmg.c:148:15: note: length computed here
  148 |         len = strlen(defroot)+5; // defroot+".sol"
      |               ^~~~~~~~~~~~~~~
/.../tmp/parmmg/src/src/API_functions_pmmg.c:153:7: warning: ‘strncat’ specified bound depends on the length of the source argument [-Wstringop-overflow=]
  153 |       strncat ( defname, defroot,strlen(defroot)+1 );
      |       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/.../tmp/parmmg/src/src/API_functions_pmmg.c:153:34: note: length computed here
  153 |       strncat ( defname, defroot,strlen(defroot)+1 );
      |                                  ^~~~~~~~~~~~~~~
[ 20%] Building C object CMakeFiles/libparmmg_so.dir/src/distributegrps_pmmg.c.o
[ 21%] Building C object CMakeFiles/libparmmg_so.dir/src/distributemesh_pmmg.c.o
[ 23%] Building C object CMakeFiles/libparmmg_so.dir/src/free_pmmg.c.o
[ 23%] Building C object CMakeFiles/libparmmg_a.dir/src/analys_pmmg.c.o
[ 24%] Building C object CMakeFiles/libparmmg_so.dir/src/grpsplit_pmmg.c.o
[ 25%] Building C object CMakeFiles/libparmmg_a.dir/src/barycoord_pmmg.c.o
/.../tmp/parmmg/src/src/grpsplit_pmmg.c: In function ‘PMMG_splitPart_grps’:
/.../tmp/parmmg/src/src/grpsplit_pmmg.c:1735:2: warning: #warning : fix this conditional jump [-Wcpp]
 1735 | #warning: fix this conditional jump
      |  ^~~~~~~
[ 26%] Building C object CMakeFiles/libparmmg_a.dir/src/chkcomm_pmmg.c.o
[ 28%] Building C object CMakeFiles/libparmmg_a.dir/src/communicators_pmmg.c.o
[ 29%] Building C object CMakeFiles/libparmmg_so.dir/src/hash_pmmg.c.o
[ 30%] Building C object CMakeFiles/libparmmg_so.dir/src/inout_pmmg.c.o
[ 31%] Building C object CMakeFiles/libparmmg_a.dir/src/coorcell_pmmg.c.o
[ 32%] Building C object CMakeFiles/libparmmg_a.dir/src/debug_pmmg.c.o
[ 33%] Building CXX object CMakeFiles/libparmmg_so.dir/src/inoutcpp_pmmg.cpp.o
[ 34%] Building C object CMakeFiles/libparmmg_so.dir/src/interactionmap_pmmg.c.o
[ 35%] Building C object CMakeFiles/libparmmg_a.dir/src/distributegrps_pmmg.c.o
[ 37%] Building C object CMakeFiles/libparmmg_so.dir/src/interpmesh_pmmg.c.o
/.../tmp/parmmg/src/src/interpmesh_pmmg.c: In function ‘PMMG_copySol_point’:
/.../tmp/parmmg/src/src/interpmesh_pmmg.c:355:2: warning: #warning Luca: when surface adapt will be ready, distinguish BDY from PARBDY [-Wcpp]
  355 | #warning Luca: when surface adapt will be ready, distinguish BDY from PARBDY
      |  ^~~~~~~
/.../tmp/parmmg/src/src/interpmesh_pmmg.c: In function ‘PMMG_interpMetricsAndFields_mesh’:
/.../tmp/parmmg/src/src/interpmesh_pmmg.c:613:2: warning: #warning Luca: make this part consistent with metrics interpolation [-Wcpp]
  613 | #warning Luca: make this part consistent with metrics interpolation
      |  ^~~~~~~
[ 38%] Building C object CMakeFiles/libparmmg_so.dir/src/libparmmg.c.o
/.../tmp/parmmg/src/src/libparmmg.c: In function ‘PMMG_preprocessMesh_distributed’:
/.../tmp/parmmg/src/src/libparmmg.c:239:2: warning: #warning hmin/hmax computed on each proc while we want a global value from the global bounding box and/or the global metric field... [-Wcpp]
  239 | #warning hmin/hmax computed on each proc while we want a global value from the global bounding box and/or the global metric field...
      |  ^~~~~~~
/.../tmp/parmmg/src/src/libparmmg.c:298:2: warning: #warning Luca: check this function [-Wcpp]
  298 | #warning Luca: check this function
      |  ^~~~~~~
[ 39%] Building C object CMakeFiles/libparmmg_a.dir/src/distributemesh_pmmg.c.o
[ 40%] Building C object CMakeFiles/libparmmg_so.dir/src/libparmmg1.c.o
[ 41%] Building C object CMakeFiles/libparmmg_so.dir/src/libparmmg_tools.c.o
/.../tmp/parmmg/src/src/libparmmg1.c: In function ‘PMMG_parmmglib1’:
/.../tmp/parmmg/src/src/libparmmg1.c:674:2: warning: #warning Luca: until analysis is not ready [-Wcpp]
  674 | #warning Luca: until analysis is not ready
      |  ^~~~~~~
[ 42%] Building C object CMakeFiles/libparmmg_so.dir/src/libparmmg_toolsf.c.o
[ 43%] Building C object CMakeFiles/libparmmg_so.dir/src/linkedlist_pmmg.c.o
[ 44%] Building C object CMakeFiles/libparmmg_so.dir/src/loadbalancing_pmmg.c.o
[ 46%] Building C object CMakeFiles/libparmmg_so.dir/src/locate_pmmg.c.o
/.../tmp/parmmg/src/src/locate_pmmg.c: In function ‘PMMG_locatePoint_foundConvex’:
/.../tmp/parmmg/src/src/locate_pmmg.c:539:2: warning: #warning Luca: check distance computation [-Wcpp]
  539 | #warning Luca: check distance computation
      |  ^~~~~~~
[ 47%] Building C object CMakeFiles/libparmmg_so.dir/src/mergemesh_pmmg.c.o
/.../tmp/parmmg/src/src/mergemesh_pmmg.c: In function ‘PMMG_merge_parmesh’:
/.../tmp/parmmg/src/src/mergemesh_pmmg.c:1603:2: warning: #warning MEMORY: small inconsistency [-Wcpp]
 1603 | #warning MEMORY: small inconsistency
      |  ^~~~~~~
[ 48%] Building C object CMakeFiles/libparmmg_so.dir/src/metis_pmmg.c.o
[ 49%] Building C object CMakeFiles/libparmmg_a.dir/src/free_pmmg.c.o
[ 50%] Building C object CMakeFiles/libparmmg_so.dir/src/moveinterfaces_pmmg.c.o
[ 51%] Building C object CMakeFiles/libparmmg_so.dir/src/mpipack_pmmg.c.o
[ 52%] Building C object CMakeFiles/libparmmg_so.dir/src/mpitypes_pmmg.c.o
/.../tmp/parmmg/src/src/moveinterfaces_pmmg.c: In function ‘PMMG_check_reachability’:
/.../tmp/parmmg/src/src/moveinterfaces_pmmg.c:689:2: warning: #warning Luca: change this tag [-Wcpp]
  689 | #warning Luca: change this tag
      |  ^~~~~~~
/.../tmp/parmmg/src/src/moveinterfaces_pmmg.c: In function ‘PMMG_part_moveInterfaces’:
/.../tmp/parmmg/src/src/moveinterfaces_pmmg.c:1390:2: warning: #warning Luca: change this tag [-Wcpp]
 1390 | #warning Luca: change this tag
      |  ^~~~~~~
[ 53%] Building C object CMakeFiles/libparmmg_a.dir/src/grpsplit_pmmg.c.o
/.../tmp/parmmg/src/src/grpsplit_pmmg.c: In function ‘PMMG_splitPart_grps’:
/.../tmp/parmmg/src/src/grpsplit_pmmg.c:1735:2: warning: #warning : fix this conditional jump [-Wcpp]
 1735 | #warning: fix this conditional jump
      |  ^~~~~~~
[ 55%] Building C object CMakeFiles/libparmmg_so.dir/src/mpiunpack_pmmg.c.o
[ 56%] Building C object CMakeFiles/libparmmg_a.dir/src/hash_pmmg.c.o
[ 57%] Building C object CMakeFiles/libparmmg_a.dir/src/inout_pmmg.c.o
[ 58%] Building C object CMakeFiles/libparmmg_so.dir/src/parmmgexterns.c.o
[ 59%] Building CXX object CMakeFiles/libparmmg_a.dir/src/inoutcpp_pmmg.cpp.o
[ 60%] Building C object CMakeFiles/libparmmg_so.dir/src/quality_pmmg.c.o
[ 61%] Building C object CMakeFiles/libparmmg_so.dir/src/tag_pmmg.c.o
[ 62%] Building C object CMakeFiles/libparmmg_so.dir/src/tools_pmmg.c.o
/.../tmp/parmmg/src/src/tag_pmmg.c: In function ‘PMMG_untag_par_node’:
/.../tmp/parmmg/src/src/tag_pmmg.c:97:2: warning: #warning Option -nosurf overrides part of the surface analysis [-Wcpp]
   97 | #warning Option -nosurf overrides part of the surface analysis
      |  ^~~~~~~
/.../tmp/parmmg/src/src/tag_pmmg.c: In function ‘PMMG_untag_par_edge’:
/.../tmp/parmmg/src/src/tag_pmmg.c:117:2: warning: #warning Option -nosurf overrides part of the surface analysis [-Wcpp]
  117 | #warning Option -nosurf overrides part of the surface analysis
      |  ^~~~~~~
[ 64%] Building C object CMakeFiles/libparmmg_so.dir/src/variadic_pmmg.c.o
[ 65%] Building C object CMakeFiles/libparmmg_a.dir/src/interactionmap_pmmg.c.o
[ 66%] Building C object CMakeFiles/libparmmg_so.dir/src/zaldy_pmmg.c.o
[ 67%] Building C object CMakeFiles/libparmmg_a.dir/src/interpmesh_pmmg.c.o
[ 68%] Building C object CMakeFiles/libparmmg_a.dir/src/libparmmg.c.o
/.../tmp/parmmg/src/src/interpmesh_pmmg.c: In function ‘PMMG_copySol_point’:
/.../tmp/parmmg/src/src/interpmesh_pmmg.c:355:2: warning: #warning Luca: when surface adapt will be ready, distinguish BDY from PARBDY [-Wcpp]
  355 | #warning Luca: when surface adapt will be ready, distinguish BDY from PARBDY
      |  ^~~~~~~
/.../tmp/parmmg/src/src/interpmesh_pmmg.c: In function ‘PMMG_interpMetricsAndFields_mesh’:
/.../tmp/parmmg/src/src/interpmesh_pmmg.c:613:2: warning: #warning Luca: make this part consistent with metrics interpolation [-Wcpp]
  613 | #warning Luca: make this part consistent with metrics interpolation
      |  ^~~~~~~
/.../tmp/parmmg/src/src/libparmmg.c: In function ‘PMMG_preprocessMesh_distributed’:
/.../tmp/parmmg/src/src/libparmmg.c:239:2: warning: #warning hmin/hmax computed on each proc while we want a global value from the global bounding box and/or the global metric field... [-Wcpp]
  239 | #warning hmin/hmax computed on each proc while we want a global value from the global bounding box and/or the global metric field...
      |  ^~~~~~~
/.../tmp/parmmg/src/src/libparmmg.c:298:2: warning: #warning Luca: check this function [-Wcpp]
  298 | #warning Luca: check this function
      |  ^~~~~~~
[ 69%] Building C object CMakeFiles/libparmmg_a.dir/src/libparmmg1.c.o
/.../tmp/parmmg/src/src/libparmmg1.c: In function ‘PMMG_parmmglib1’:
/.../tmp/parmmg/src/src/libparmmg1.c:674:2: warning: #warning Luca: until analysis is not ready [-Wcpp]
  674 | #warning Luca: until analysis is not ready
      |  ^~~~~~~
[ 70%] Building C object CMakeFiles/libparmmg_a.dir/src/libparmmg_tools.c.o
[ 71%] Building C object CMakeFiles/libparmmg_a.dir/src/libparmmg_toolsf.c.o
[ 73%] Building C object CMakeFiles/libparmmg_a.dir/src/linkedlist_pmmg.c.o
[ 74%] Linking CXX shared library lib/libparmmg.so
/usr/sbin/ld: CMakeFiles/libparmmg_so.dir/src/parmmgexterns.c.o:(.bss+0x0): multiple definition of `PMMG_interp2bar'; CMakeFiles/libparmmg_so.dir/src/libparmmg.c.o:(.bss+0x0): first defined here
/usr/sbin/ld: CMakeFiles/libparmmg_so.dir/src/parmmgexterns.c.o:(.bss+0x8): multiple definition of `PMMG_interp3bar'; CMakeFiles/libparmmg_so.dir/src/libparmmg.c.o:(.bss+0x8): first defined here
/usr/sbin/ld: CMakeFiles/libparmmg_so.dir/src/parmmgexterns.c.o:(.bss+0x10): multiple definition of `PMMG_interp4bar'; CMakeFiles/libparmmg_so.dir/src/libparmmg.c.o:(.bss+0x10): first defined here
[ 75%] Building C object CMakeFiles/libparmmg_a.dir/src/loadbalancing_pmmg.c.o
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/libparmmg_so.dir/build.make:636: lib/libparmmg.so] Error 1
make[1]: *** [CMakeFiles/Makefile2:377: CMakeFiles/libparmmg_so.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 76%] Building C object CMakeFiles/libparmmg_a.dir/src/locate_pmmg.c.o
/.../tmp/parmmg/src/src/locate_pmmg.c: In function ‘PMMG_locatePoint_foundConvex’:
/.../tmp/parmmg/src/src/locate_pmmg.c:539:2: warning: #warning Luca: check distance computation [-Wcpp]
  539 | #warning Luca: check distance computation
      |  ^~~~~~~
[ 77%] Building C object CMakeFiles/libparmmg_a.dir/src/mergemesh_pmmg.c.o
[ 78%] Building C object CMakeFiles/libparmmg_a.dir/src/metis_pmmg.c.o
[ 79%] Building C object CMakeFiles/libparmmg_a.dir/src/moveinterfaces_pmmg.c.o
[ 80%] Building C object CMakeFiles/libparmmg_a.dir/src/mpipack_pmmg.c.o
[ 82%] Building C object CMakeFiles/libparmmg_a.dir/src/mpitypes_pmmg.c.o
/.../tmp/parmmg/src/src/mergemesh_pmmg.c: In function ‘PMMG_merge_parmesh’:
/.../tmp/parmmg/src/src/mergemesh_pmmg.c:1603:2: warning: #warning MEMORY: small inconsistency [-Wcpp]
 1603 | #warning MEMORY: small inconsistency
      |  ^~~~~~~
/.../tmp/parmmg/src/src/moveinterfaces_pmmg.c: In function ‘PMMG_check_reachability’:
/.../tmp/parmmg/src/src/moveinterfaces_pmmg.c:689:2: warning: #warning Luca: change this tag [-Wcpp]
  689 | #warning Luca: change this tag
      |  ^~~~~~~
/.../tmp/parmmg/src/src/moveinterfaces_pmmg.c: In function ‘PMMG_part_moveInterfaces’:
/.../tmp/parmmg/src/src/moveinterfaces_pmmg.c:1390:2: warning: #warning Luca: change this tag [-Wcpp]
 1390 | #warning Luca: change this tag
      |  ^~~~~~~
[ 83%] Building C object CMakeFiles/libparmmg_a.dir/src/mpiunpack_pmmg.c.o
[ 84%] Building C object CMakeFiles/libparmmg_a.dir/src/parmmgexterns.c.o
[ 85%] Building C object CMakeFiles/libparmmg_a.dir/src/quality_pmmg.c.o
[ 86%] Building C object CMakeFiles/libparmmg_a.dir/src/tag_pmmg.c.o
[ 87%] Building C object CMakeFiles/libparmmg_a.dir/src/tools_pmmg.c.o
/.../tmp/parmmg/src/src/tag_pmmg.c: In function ‘PMMG_untag_par_node’:
/.../tmp/parmmg/src/src/tag_pmmg.c:97:2: warning: #warning Option -nosurf overrides part of the surface analysis [-Wcpp]
   97 | #warning Option -nosurf overrides part of the surface analysis
      |  ^~~~~~~
/.../tmp/parmmg/src/src/tag_pmmg.c: In function ‘PMMG_untag_par_edge’:
/.../tmp/parmmg/src/src/tag_pmmg.c:117:2: warning: #warning Option -nosurf overrides part of the surface analysis [-Wcpp]
  117 | #warning Option -nosurf overrides part of the surface analysis
      |  ^~~~~~~
[ 88%] Building C object CMakeFiles/libparmmg_a.dir/src/variadic_pmmg.c.o
[ 89%] Building C object CMakeFiles/libparmmg_a.dir/src/zaldy_pmmg.c.o
[ 91%] Linking CXX static library lib/libparmmg.a
[ 91%] Built target libparmmg_a
make: *** [Makefile:149: all] Error 2

Interestingly, I am able to compile ParMMG on a third machine with a different (mostly older) environment. Here are its version numbers for the same programs as above (not sure this is relevant):

  • cmake 3.13.4
  • make 4.2.1
  • gcc 8.3.0
  • MMG at latest github.com/master (commit 583b835071c6db360145bf5e49467d0f9f206244)

The ParMMG sources and the CMake configuration are identical.

Do you know how I could fix this error?
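This class of linker error typically appears with GCC 10 and newer, which enable -fno-common by default: a global variable that is defined (rather than merely declared extern) in a header ends up with one definition per translation unit that includes it. A minimal sketch of the usual fix, with an illustrative function-pointer type (the real PMMG_interp2bar signature differs):

    /* externs.h (sketch): only DECLARE the global here */
    typedef int (*interp_fn)( double *met, int n );
    extern interp_fn PMMG_interp2bar;

    /* externs.c (sketch): DEFINE it exactly once */
    #include "externs.h"
    interp_fn PMMG_interp2bar = NULL;

Compiling with -fcommon restores the old behavior as a stopgap.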

Optim doesn't improve boundary tets

Hello,

I'm using parmmg to improve the quality of an existing 3D tet mesh. However, poor-quality elements on the boundary are not improved, while the interior elements are improved substantially. Is there an option preventing the optimization of the boundary elements?

Thanks!

Automatic detection of the convergence of ParMmg iterations

For now, the number of adaptation-repartitioning iterations is fixed. It can be modified by users via the CLI or the API (see the sketch after this list).
An automatic criterion to detect convergence could:

  • simplify usage
  • give better results (if the user requests too few iterations)
  • improve efficiency by avoiding an unnecessarily large number of iterations.
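A sketch of the current manual control through the API, assuming the PMMG_IPARAM_niter parameter and the usual setter convention (1 on success):

    /* fix the number of adaptation-repartitioning iterations by hand */
    if ( !PMMG_Set_iparameter( parmesh, PMMG_IPARAM_niter, 5 ) ) {
      fprintf( stderr, "Unable to set the number of iterations\n" );
    }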

Local/global ids for interface faces in face API mode

It seems that global ids for interface faces are needed.
We never compute or use them; however, for a given face we know its id on the current proc as well as on the remote proc.

When we use PMMG_Set_triangle, should the id be local or global in distributed mode?

Should the id passed to PMMG_Set_triangle correspond to the local or the global id in the arrays passed to PMMG_Set_ithFaceCommunicator_faces when the face lies at the interface between processes?
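For reference, a hedged sketch of the face-communicator calls in question (argument names are assumptions based on the API headers, not a verified signature):

    /* one communicator toward neighbour rank color_out, holding nitem faces */
    PMMG_Set_numberOfFaceCommunicators( parmesh, 1 );
    PMMG_Set_ithFaceCommunicator_size ( parmesh, 0, color_out, nitem );
    /* local_idx[k]: triangle index as set locally through PMMG_Set_triangle;
       glob_idx[k]: id expected to match the one given on the remote process */
    PMMG_Set_ithFaceCommunicator_faces( parmesh, 0, local_idx, glob_idx, 1 );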

thanks in advance

Ghost data

In a finite element code, we often use ghost mesh data; Gmsh provides this information.
After an adaptation we lose it.
Would it be possible to provide it?
Since the interface is fixed, we know the element on the other side (we have the information in the initial mesh),
but the coordinates of the fourth vertex (the one not on the interface) may have been moved.
Is that correct?

/cc @Algiane @lcirrottola

Interfacing with parmmg using a parallel mesh data structure

We have a parallel mesh data structure in feelpp that provides all kinds of information regarding its parallel distribution (ghosts, edges, faces...).
When looking at the examples, a few things are unclear:

  • what does parmesh->info.root return?
  • does _centralized mean that the mesh is built sequentially and then distributed, and vice versa?
  • can we set up a parmmg mesh data structure directly with setters and getters, as in the sequential case?
    • how do you handle interprocess data? how do you stitch together / deal with the interface between processors?
    • what about ghost cells? don't you need them?

Slower than mmg?

I may be using it wrong, but when I run ParMmg on my computer with OpenMPI on 2 cores, the software is about 10x slower than Mmg on its own. It is also slower when running on 1, 8, or 16 cores, and it generally gets slower as the number of processes increases.
I am remeshing a 3-million-element mesh with this command: "mpirun -n 2 parmmg_O3 Mesh.mesh -met Met.sol -hgrad 0"

I should mention that I get the warning "## Warning: MMG5_scotchCall: fail to determine scotch version. No renumbering." at runtime and have not found a way to resolve it.

Removal of useless reallocations of `ext_node_comm`

Allocate parmesh->ext_node_comm at size nprocs everywhere. The memory cost should be negligible even if the number of used communicators (parmesh->next_node_comm) is much smaller than nprocs, and it avoids a lot of useless dynamic allocations.

A priori evaluation of the time / amount of work needed by Mmg to adapt an input mesh and metric

At each iteration of ParMmg, we have to wait until all groups have been adapted by Mmg. If the work is not well balanced, we end up with MPI processes waiting.

As load balancing is less important than forcing the interface migration (non-adapted interfaces lead to additional iterations), we would like the best load balancing that still guarantees the migration of the interfaces.

For now, load balancing tries to balance the input number of elements. We may want to find a better approximation of the workload from the input mesh and metric, and could investigate several directions:

  • evaluation of the targeted number of elements
  • difference between the input and targeted numbers of elements
  • a map of the areas that need to be refined and the areas that need to be coarsened, with an evaluation of the adaptation time (coarsening takes longer than refinement)
  • etc.

Add cmake support for mmg and parmmg as submodules/subprojects

In Feelpp, mmg and parmmg are integrated as submodules; this case is not handled by parmmg when it looks for mmg.
Using modern cmake, it is very easy to pass the full configuration of mmg to parmmg in order to compile with the right flags.
If you are OK with something like that, I can provide the changes I made to parmmg.

Profiling: Test effect of group size at adaptation step

With OmpSs, small groups at the adaptation step can reduce load imbalance (idle processes can take work from other processes). At the same time, small groups create a larger number of constrained interfaces and may lead to more iterations or to difficulties in adapting the whole mesh.

Build error when setting download options to OFF

Hello boys and girls,

I am trying to install ParMmg on a server that does not allow outside internet connections.
This forces me to set the download flags to OFF.
I have compiled the Metis library that is in the dependencies, and I have linked it using cmake as explained on the website.
I have also uploaded mmg to the server, installed it smoothly, and linked it as well.

However, I encounter issues at the end of the compilation:

Scanning dependencies of target copy_pmmg_headers
[ 96%] Built target copy_pmmg_headers
[ 98%] Linking CXX executable bin/parmmg_O3
lib/libparmmg.a(metis_pmmg.c.o): In function 'PMMG_part_meshElts2metis':
metis_pmmg.c:(.text+0x2e32): undefined reference to 'METIS_SetDefaultOptions'
metis_pmmg.c:(.text+0x3384): undefined reference to 'METIS_PartGraphKway'
metis_pmmg.c:(.text+0x33e9): undefined reference to 'METIS_PartGraphRecursive'
lib/libparmmg.a(metis_pmmg.c.o): In function 'PMMG_part_parmeshGrps2metis':
metis_pmmg.c:(.text+0x4a87): undefined reference to 'METIS_SetDefaultOptions'
metis_pmmg.c:(.text+0x4b09): undefined reference to 'METIS_PartGraphKway'
metis_pmmg.c:(.text+0x4b75): undefined reference to 'METIS_PartGraphRecursive'
CMakeFiles/parmmg.dir/build.make:99: recipe for target 'bin/parmmg_O3' failed
make[2]: *** [bin/parmmg_O3] Error 1
CMakeFiles/Makefile2:602: recipe for target 'CMakeFiles/parmmg.dir/all' failed
make[1]: *** [CMakeFiles/parmmg.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

Does anyone have an idea how I could fix this?

Sincerely,

  • PAM

Support for ParMmg as a subproject and modern cmake

ParMmg cannot currently be used as a subproject; a few changes in the CMakeLists.txt would allow it.
For example, use ${PROJECT_BINARY_DIR} instead of ${CMAKE_BINARY_DIR}.
target_include_directories could also be improved, and target exports could be set up.

Anisotropic remeshing load imbalance with the distributed interface

In this FreeFEM example, we successively refine a mesh using the ParMmg distributed interface. I've noticed that the output mesh is far from properly load balanced in terms of the local number of elements. See the table below for some results on 4 ranks, with 4 successive calls to ParMmg (the initial decomposition is generated by METIS).

Rank | Iteration 0 | Iteration 1 | Iteration 2 | Iteration 3 | Iteration 4
   0 |        2301 |        7848 |       15412 |       35777 |       91092
   1 |        2254 |        1548 |        2926 |        5483 |       11078
   2 |        2223 |        7199 |       12946 |       26360 |       59205
   3 |        2294 |        1915 |        3768 |        6071 |       13260

Any idea how to mitigate this issue? Is it the caller's responsibility to load balance the mesh after a call to ParMmg, e.g., with ParMETIS or PT-SCOTCH?

Crash in PMMG_Init_parMesh

Hi

ParMmg crashes when I initialize the mesh data structure.
I get this message:

malloc(): invalid next size (unsorted)

It occurs in PMMG_Init_parMesh.

This is with HEAD of master, and mmg is downloaded during the parmmg build.
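For reference, the initialization in question looks roughly like this (a sketch following the PMMG_ARG_* calling convention used in the library examples; the crash reported above happens inside this call):

    PMMG_pParMesh parmesh = NULL;
    PMMG_Init_parMesh( PMMG_ARG_start,
                       PMMG_ARG_ppParMesh, &parmesh,
                       PMMG_ARG_pMesh, PMMG_ARG_pMet,
                       PMMG_ARG_dim, 3,
                       PMMG_ARG_MPIComm, MPI_COMM_WORLD,
                       PMMG_ARG_end );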

True parallel gradation of metric

For now, the input metric is gradated inside each partition by Mmg. This very rough evaluation should be replaced by a true gradation of the input metric across partitions. We could then interpolate this computed metric at each ParMmg iteration in order to adapt on the suitable size map (the interpolation is already called, but from the metric gradated by Mmg).

MPI errors

I've been testing ParMmg and it looks promising. I managed to get it to run successfully for a small-ish problem using OpenMPI, but I haven't succeeded with MPICH for any case, or with OpenMPI for larger cases.

With OpenMPI I get a lot of warnings like

Read -1, expected 162457973, errno = 1

and

## Error: PMMG_check_extEdgeComm: rank 23:
       2 different points (dist 1.292470e-26:0.000000e+00,-1.136868e-13,0.000000e+00) in the same position (51435) of the external communicator 23 3 (6 th item):
       - point : 4.472009e+03 -5.431501e+02 1.254939e+02

For a large problem with OpenMPI, I get segfaults at -- PHASE 3 : MERGE MESHES OVER PROCESSORS.

With MPICH, I get a crash after a lot of ## Error: PMMG_check_extEdgeComm: rank 2: messages:

Fatal error in PMPI_Allreduce: Other MPI error, error stack:
PMPI_Allreduce(450)...........: MPI_Allreduce(sbuf=0x7ffc50901a88, rbuf=0x7ffc50901a8c, count=1, datatype=MPI_INT, op=MPI_MAX, comm=MPI_COMM_WORLD) failed
PMPI_Allreduce(436)...........: 
MPIR_Allreduce_impl(293)......: 
MPIR_Allreduce_intra_auto(178): 
MPIR_Allreduce_intra_auto(84).: 

Are these known issues?

Fix continuous integration

Migrate from the Inria solution (Jenkins + Inria VMs) to GitHub Actions, for reduced maintenance and for access by external users.

Idempotence of adaptation algorithm

Idempotence of the Mmg algorithm (that is, Mmg does not change the mesh if it is already a computational mesh and is suitable in terms of edge lengths and element qualities) should improve:

  • load balancing after the first iteration (the mesh is almost entirely adapted at the first iteration)
  • efficiency: Mmg will have almost no work to do after the first iteration

Idempotence may be influenced by:

  • type of metric (isotropic / anisotropic)
  • gradation
  • required gradation

Inconsistent exit statuses

The exit statuses of functions are inconsistent across the library: e.g., PMMG_Set_meshSize returns 1 on success, while PMMG_bdryUpdate returns PMMG_SUCCESS (defined as 0) on success. Would it be possible to follow the standard and always return 0 (or PMMG_SUCCESS) on success? Otherwise, error checking in calling libraries is extremely difficult to handle (see, e.g., https://gitlab.com/petsc/petsc/-/merge_requests/4889#note_863666426), as one needs to jump back and forth between checking return code != 1 and return code != 0 in the calling library's custom error handling.
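To illustrate the two conventions a caller currently has to juggle (a sketch built on the function names above; argument lists are illustrative):

    /* setter-style functions: 1 means success */
    if ( PMMG_Set_meshSize( parmesh, np, ne, nprism, nt, nquad, na ) != 1 ) {
      /* custom error handling */
    }

    /* PMMG_SUCCESS-style functions: 0 (PMMG_SUCCESS) means success */
    if ( PMMG_bdryUpdate( mesh ) != PMMG_SUCCESS ) {
      /* custom error handling */
    }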

Edge length errors in anisotropic mode

I am writing both to encourage this development and to report specific issues. ParMmg develop produces very weird results with Mmg develop in anisotropic mode (see below for nan, inf, and negative edge lengths), so I hope that once this is merged into develop, ParMmg will behave. I have no such issues with isotropic meshing.

$ '/opt/homebrew/bin/mpiexec' -n 10 /Volumes/Data/repositories/FreeFem-sources-opt/src/mpi/FreeFem++-mpi -nw 'laplace-adapt-dist-3d-PETSc.edp' -v 0 -wg -iMax 2 -verbose 5
  0 KSP Residual norm 5.856778376147e-01 
  1 KSP Residual norm 1.774332796564e-02 
  2 KSP Residual norm 2.013850875238e-03 
  3 KSP Residual norm 1.369002836462e-04 
  4 KSP Residual norm 1.039126769538e-05 
  5 KSP Residual norm 9.117826241568e-07 

  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
   MODULE PARMMGLIB_DISTRIBUTED: IMB-LJLL : 1.4.0 (Nov. 05, 2021)
  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
     git branch: HEAD
     git commit: 3972301c607e94e4250aff5cd9c964b1251c7a88
     git date:   2022-02-14 18:05:45 +0100


  -- PMMG: CHECK INPUT DATA
  -- CHECK INPUT DATA COMPLETED.     0.000s

  -- PHASE 1 : ANALYSIS

  -- RESULTING EDGE LENGTHS  1359
     AVERAGE LENGTH               1.5065
     SMALLEST EDGE LENGTH         0.8333      255    248
     LARGEST  EDGE LENGTH         3.2025      159    212 

  -- PARALLEL MESH QUALITY  1981   9072
     BEST   0.743447  AVRG.   0.416806  WRST.   0.149224 (PROC 8 - ELT 542)
     HISTOGRAMM:  100.00 % > 0.12
                   29.20 % >  0.5
       0.6 < Q <   0.8      1623    17.89 %
       0.4 < Q <   0.6      2557    28.19 %
       0.2 < Q <   0.4      4822    53.15 %
       0.0 < Q <   0.2        70     0.77 %
   -- PHASE 1 COMPLETED.     0.062s

  -- PHASE 2 : ANISOTROPIC MESHING
       group splitting                   0.001s

       adaptation: iter 1   cumul. timer 0.000s
       mmg                               0.552s
       metric and fields interpolation   0.001s
       load balancing                    0.158s

       adaptation: iter 2   cumul. timer 0.713s
       mmg                               0.435s
       metric and fields interpolation   0.001s
       load balancing                    0.071s

       adaptation: iter 3   cumul. timer 1.223s
       mmg                               0.192s
       metric and fields interpolation   0.004s
       load balancing                    0.159s


  -- PARALLEL MESH QUALITY  3583   18612
     BEST   0.997267  AVRG.   0.771946  WRST.   0.090815 (PROC 6 - ELT 2035)
     HISTOGRAMM:   99.99 % > 0.12
                   96.62 % >  0.5
       0.8 < Q <   1.0      8467    45.49 %
       0.6 < Q <   0.8      8341    44.82 %
       0.4 < Q <   0.6      1587     8.53 %
       0.2 < Q <   0.4       208     1.12 %
       0.0 < Q <   0.2         9     0.05 %

       mesh packing                      0.192s
       group merging                     0.000s
  ## Warning: MMG5_lenEdg: at least 1 negative edge length (-1.603799e-01)
  ## Warning: MMG5_lenEdg: at least 1 negative edge length (-4.139075e-02)
  ## Warning: MMG5_lenEdg: at least 1 negative edge length (-1.025905e+00)
  ## Warning: MMG5_lenEdg: at least 1 negative edge length (-5.619068e-01)
  ## Warning: MMG5_lenEdg: at least 1 negative edge length (-5.293369e-01)
  ## Warning: MMG5_lenEdg: at least 1 negative edge length (-3.432190e-02)
  ## Warning: MMG5_lenEdg: at least 1 negative edge length (-5.751909e-02)
  ## Warning: MMG5_lenEdg: at least 1 negative edge length (-1.791615e-01)

  -- RESULTING EDGE LENGTHS (ROUGH EVAL.) 27202 
     AVERAGE LENGTH                  nan
     SMALLEST EDGE LENGTH           -inf      284    277 (PROC 0)
     LARGEST  EDGE LENGTH            inf       18      4 (PROC 0)
     0.71 < L < 1.41     21087   77.52 %  

     HISTOGRAMM:
     0.00 < L < 0.30      3201   11.77 %  
     0.30 < L < 0.60       229    0.84 %  
     0.60 < L < 0.71      1219    4.48 %  
     0.71 < L < 0.90      9094   33.43 %  
     0.90 < L < 1.30     11042   40.59 %  
     1.30 < L < 1.41       951    3.50 %  
     1.41 < L < 2.00       700    2.57 %  
     2.00 < L < 5.00        44    0.16 %  
     5.   < L              722    2.65 %  

     WARNING: unable to compute the length of 23 edges
  -- PHASE 2 COMPLETED.     1.614s

   -- PHASE 3 : MESH PACKED UP
   -- PHASE 3 COMPLETED.     0.001s

   PARMMGLIB_DISTRIBUTED: ELAPSED TIME  1.676s

  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
   END OF MODULE PARMMGLIB_DISTRIBUTED: IMB-LJLL 
  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
  0 KSP Residual norm 7.699519090787e-01 
  1 KSP Residual norm 4.298789823018e-02 
  2 KSP Residual norm 7.318720576567e-03 
  3 KSP Residual norm 1.180535786134e-03 
  4 KSP Residual norm 2.049118909033e-04 
  5 KSP Residual norm 3.156081226217e-05 
  6 KSP Residual norm 4.175108326620e-06 

  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
   MODULE PARMMGLIB_DISTRIBUTED: IMB-LJLL : 1.4.0 (Nov. 05, 2021)
  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
     git branch: HEAD
     git commit: 3972301c607e94e4250aff5cd9c964b1251c7a88
     git date:   2022-02-14 18:05:45 +0100


  -- PMMG: CHECK INPUT DATA
  -- CHECK INPUT DATA COMPLETED.     0.003s

  -- PHASE 1 : ANALYSIS

  -- RESULTING EDGE LENGTHS  2673
     AVERAGE LENGTH               1.4428
     SMALLEST EDGE LENGTH         0.6648      256    255
     LARGEST  EDGE LENGTH         2.9753      161    160 

  -- PARALLEL MESH QUALITY  3583   18612
     BEST   0.987634  AVRG.   0.625663  WRST.   0.036598 (PROC 2 - ELT 630)
     HISTOGRAMM:   99.94 % > 0.12
                   79.81 % >  0.5
       0.8 < Q <   1.0      2376    12.77 %
       0.6 < Q <   0.8      8049    43.25 %
       0.4 < Q <   0.6      7082    38.05 %
       0.2 < Q <   0.4      1082     5.81 %
       0.0 < Q <   0.2        23     0.12 %
   -- PHASE 1 COMPLETED.     0.114s

  -- PHASE 2 : ANISOTROPIC MESHING
       group splitting                   0.014s

       adaptation: iter 1   cumul. timer 0.000s
       mmg                               1.146s
       metric and fields interpolation   0.002s
       load balancing                    0.161s

       adaptation: iter 2   cumul. timer 1.311s
       mmg                               0.461s
       metric and fields interpolation   0.007s
       load balancing                    0.095s

       adaptation: iter 3   cumul. timer 1.875s
       mmg                               0.716s
       metric and fields interpolation   0.003s
       load balancing                    0.200s


  -- PARALLEL MESH QUALITY  6996   37791
     BEST   0.994915  AVRG.   0.676747  WRST.   0.076175 (PROC 3 - ELT 966)
     HISTOGRAMM:   99.98 % > 0.12
                   84.47 % >  0.5
       0.8 < Q <   1.0      9568    25.32 %
       0.6 < Q <   0.8     16838    44.56 %
       0.4 < Q <   0.6      8925    23.62 %
       0.2 < Q <   0.4      2369     6.27 %
       0.0 < Q <   0.2        91     0.24 %

       mesh packing                      0.717s
       group merging                     0.000s
  ## Warning: MMG5_lenEdg: at least 1 negative edge length (-2.183505e-01)

  -- RESULTING EDGE LENGTHS (ROUGH EVAL.) 52195 
     AVERAGE LENGTH                  nan
     SMALLEST EDGE LENGTH           -inf       52      5 (PROC 0)
     LARGEST  EDGE LENGTH            inf        1    120 (PROC 0)
     0.71 < L < 1.41     35549   68.11 %  

     HISTOGRAMM:
     0.00 < L < 0.30      2887    5.53 %  
     0.30 < L < 0.60      2563    4.91 %  
     0.60 < L < 0.71      3310    6.34 %  
     0.71 < L < 0.90     11497   22.03 %  
     0.90 < L < 1.30     21436   41.07 %  
     1.30 < L < 1.41      2616    5.01 %  
     1.41 < L < 2.00      4133    7.92 %  
     2.00 < L < 5.00       554    1.06 %  
     5.   < L             3199    6.13 %  

     WARNING: unable to compute the length of 75 edges
  -- PHASE 2 COMPLETED.     2.886s

   -- PHASE 3 : MESH PACKED UP
   -- PHASE 3 COMPLETED.     0.002s

   PARMMGLIB_DISTRIBUTED: ELAPSED TIME  3.005s

  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
   END OF MODULE PARMMGLIB_DISTRIBUTED: IMB-LJLL 
  &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&

Originally posted by @prj- in MmgTools/mmg#120 (comment)

Error phase 3 merging

Hello,
When using ParMmg with very large meshes, I get the following error:

-- PHASE 3 : MERGE MESHES OVER PROCESSORS
rcv_buffer Exceeded max memory allowed: function: PMMG_gather_parmesh, file: /gpfs/projects/pr1eny00/PARMMG/ParMmg/src/mergemesh_pmmg.c, line: 1152
Fatal error in PMPI_Gatherv: Invalid buffer pointer, error stack:
PMPI_Gatherv(1001): MPI_Gatherv failed(sbuf=0x46b31fc8, scount=156390683, MPI_CHAR, rbuf=(nil), rcnts=0x1c89408, displs=0x212c808, MPI_CHAR, root=0, MPI_COMM_WORLD) failed
PMPI_Gatherv(887).: Null buffer pointer

Can anyone provide an explanation of why this happens?

Sincerely,

  • PA M

Compilation issue

Hi everyone,
I have trouble compiling ParMmg on my computer. Before compiling, both Metis and Mmg were already installed.
I followed the instructions you provided but ran into some issues.
The first error message I got was: metis.h could not be found, even though I had previously set the path with the command:
cmake -DDOWNLOAD_METIS=OFF -DMETIS_DIR=/home/pclouzet/local ..
So I manually edited the file CMakeFiles/libparmmg_a.dir/flags.make and added -I/home/pclouzet/local, which resolved that error message.

The second error message I got was: mmg3d.h could not be found. Indeed, I don't have such a file in the mmg directory, but I do have libmmg3d.h. So I thought it was a typo and changed
#include "mmg3d.h" to #include "libmmg3d.h" in the file ParMmg/src/parmmg.h.
However, that last step led to a lot of warning messages, ending with a compilation error.

This last error message (see attached file) is very generic and doesn't provide any hint about what went wrong, so I'm sharing it with the community!

My setup:
cmake version 3.22.1
gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
mmg 5.7.0
metis latest version

Attached are the warning messages I got
compilation.log

Please do tell if you need any other information!
Thanks in advance
Pierre

Robustness if the user provides non-contiguous input partitions

Test whether we support non-contiguous input partitions. I think that Metis doesn't, but I don't know about Scotch or the advancing-front interface migration. I also don't know whether we fix contiguity before the first repartitioning.

  • Creation of a continuous integration test case
  • Check whether we fix contiguity before the first partitioning
  • If not, check whether Metis is robust to this
  • whether Scotch is
  • whether the advancing-front method is

Graph coloring using scotch

Implementation:

  • Add the possibility to call Scotch instead of Metis for the initial partitioning (graph nodes are mesh elements)
  • Add the possibility to call Scotch instead of Metis for interface migration (graph nodes are groups)
  • Add weighted graph support
  • Add Scotch and PT-Scotch partitioning

Checks:

  • Robustness to non-contiguous partitions (if PT-Scotch is used)

If the Scotch graph is stored in the same format as the Metis graph, we only have to rename the PMMG_graph_meshElts2metis, PMMG_graph_parmeshGrps2metis and PMMG_graph_parmeshGrps2parmetis functions.

Otherwise, we will have to create new functions to store the graph in a suitable format and to refactor the common parts of graph construction for the Scotch and Metis formats.

Undeclared MPI symbols

There are compilation failures with old MPI implementations, see https://community.freefem.org/t/make-petsc-slepc-errors-on-centos-6-10/960.

/home/xlzheng/local/FreeFem-sources/3rdparty/ff-petsc/petsc-3.15.0/fr/externalpackages/git.parmmg/src/zaldy_pmmg.c: In function 'PMMG_parmesh_SetMemGloMax':
/home/xlzheng/local/FreeFem-sources/3rdparty/ff-petsc/petsc-3.15.0/fr/externalpackages/git.parmmg/src/zaldy_pmmg.c:65:5: warning: implicit declaration of function 'MPI_Comm_split_type' [-Wimplicit-function-declaration]
     MPI_Comm_split_type( parmesh->comm, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL,
     ^
/home/xlzheng/local/FreeFem-sources/3rdparty/ff-petsc/petsc-3.15.0/fr/externalpackages/git.parmmg/src/zaldy_pmmg.c:65:41: error: 'MPI_COMM_TYPE_SHARED' undeclared (first use in this function)
     MPI_Comm_split_type( parmesh->comm, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL,
                                         ^
/home/xlzheng/local/FreeFem-sources/3rdparty/ff-petsc/petsc-3.15.0/fr/externalpackages/git.parmmg/src/zaldy_pmmg.c:65:41: note: each undeclared identifier is reported only once for each function it appears in

It would be nice to either bypass the failures or error out more gracefully, maybe at configure time?
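One possible direction (a sketch, not the project's actual fix): guard the MPI-3 call behind the standard MPI_VERSION macro, since MPI_Comm_split_type and MPI_COMM_TYPE_SHARED only exist from MPI 3.0 on.

    #if defined(MPI_VERSION) && MPI_VERSION >= 3
      MPI_Comm comm_shm;
      MPI_Comm_split_type( parmesh->comm, MPI_COMM_TYPE_SHARED, 0,
                           MPI_INFO_NULL, &comm_shm );
    #else
      /* pre-MPI-3: fall back to per-process memory accounting, or fail
         with a clear message at configure time */
    #endif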

Build failure on platforms where char is unsigned

Refer to this issue in mmg:
MmgTools/mmg#83
I got the same compilation error in parmmg on an ARM platform; I think it is the same issue.

In file included from /home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/inoutcpp_3d.cpp:37:
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:149:93: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
  149 | _idirinv[4][4] = {{-1,0,1,2},{0,-1,2,1},{0,1,-1,2},{0,2,1,-1}};
      |                                                              ^

/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:149:93: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:149:93: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:149:93: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:153:119: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
  153 | -1,2,1,0}, {-1,1,2,-1,-1,0},{2,-1,1,-1,0,-1},{1,2,-1,0,-1,-1}};
      |                                                              ^

/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:153:119: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:153:119: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:153:119: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:153:119: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:153:119: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:153:119: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:153:119: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:153:119: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:153:119: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:153:119: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
/home/ubuntu/ParMmg/build/Mmg-prefix/src/Mmg/src/mmg3d/mmg3d.h:153:119: error: narrowing conversion of ‘-1’ from ‘int’ to ‘char’ [-Wnarrowing]
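For context, a minimal sketch of the underlying portability problem (illustrative array, not the actual mmg3d.h source): on ARM, plain char is unsigned, so brace-initializing a char array with -1 is a narrowing conversion that C++ rejects; an explicitly signed type avoids it.

    /* rejected where char is unsigned (narrowing of -1):
       static const char idirinv[4][4] = {{-1,0,1,2},{0,-1,2,1},{0,1,-1,2},{0,2,1,-1}}; */

    /* portable: */
    static const signed char idirinv[4][4] =
      {{-1,0,1,2},{0,-1,2,1},{0,1,-1,2},{0,2,1,-1}};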

Refactoring of `mpipack_pmmg.h` and `mpiunpack_pmmg.h` to use `MPI_Struct` instead of a char array

Packing the mesh inside a char array implies that we need large integers to store the message size.

An alternative that was implemented in the past (and was working, I think) used hierarchical MPI_Structs to describe the C structures used in the mesh (points, xpoints, tetra...) as well as the mesh itself. It should make the mesh communication easier and reduce the size of the integers used for the message size (because the MPI structures are larger than chars, the char being the minimal unit for this kind of message).

See issue #113 and PR #117.
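A minimal sketch of the MPI_Struct approach on a simplified, hypothetical point structure (not the actual ParMmg MMG5_Point layout):

    #include <mpi.h>
    #include <stddef.h>

    typedef struct { double c[3]; int ref; } Point;

    /* build an MPI datatype matching Point, so arrays of points can be
       sent directly instead of being packed into a char buffer */
    static MPI_Datatype create_point_type(void) {
      MPI_Datatype tmp, ptype;
      int          blocklens[2] = { 3, 1 };
      MPI_Aint     displs[2]    = { offsetof(Point, c), offsetof(Point, ref) };
      MPI_Datatype types[2]     = { MPI_DOUBLE, MPI_INT };

      MPI_Type_create_struct( 2, blocklens, displs, types, &tmp );
      /* account for trailing padding so Point arrays are strided correctly */
      MPI_Type_create_resized( tmp, 0, (MPI_Aint)sizeof(Point), &ptype );
      MPI_Type_free( &tmp );
      MPI_Type_commit( &ptype );
      return ptype;
    }

With such a type, a message of n points counts n elements rather than n*sizeof(Point) chars, which keeps the count argument of calls like MPI_Send or MPI_Gatherv far from the int overflow limit.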
