scorec / core

parallel finite element unstructured meshes

License: Other

CMake 3.52% TeX 2.94% C++ 86.18% Shell 0.12% C 6.62% HTML 0.01% Python 0.29% Fortran 0.09% SWIG 0.23%
adaptive bigger-meshes c c-plus-plus cmake finite-elements hpc meshes mpi parallel parallel-computing unstructured-meshes

core's People

Contributors

a-jp, agalli93, ajinkyadahale, angelyr, asroy, avinmoharana, barbam, bartlettroscoe, bgranzow, bobpaw, cwsmith, ibaned, jacobmerson, jaredcrean2, kennethejansen, matthb2, mortezah, mrasquin, paulrevere4, rickybalin, samanthajlee2, samiullah-malik, seegyoung, shamse, sim-saurabh, usmanriaz07, wrtobin, yangf4, yetanotherminion, zaidedan

core's Issues

Phasta restart has required dependence on apf_sim

phRestart.cc has a required dependence on the apf_sim package by including apfSim.h. This can be seen here:

https://github.com/SCOREC/core/blob/master/phasta/phRestart.cc#L13

This breaks the SCOREC/core build when ENABLE_SIMMETRIX is set to OFF. It looks like the correct thing to do would be to place the phRestart.cc source file under this if statement in the phasta/CMakeLists.txt file:

https://github.com/SCOREC/core/blob/master/phasta/CMakeLists.txt#L25

I would like SCOREC developers to be aware that many users of SCOREC/core do not build with the Simmetrix tools enabled.

What is the purpose of the getNodeXi member of a shape function?

I was looking at the getNodeXi function and noticed that it returns Vector(0,0,0) in the base class and is never overridden by the derived classes. I thought that getNodeXi would perhaps give the location of each node for a FieldElement. However, when I implemented this function for the quadratic Lagrange shape function, I started failing the unit tests for shapefun2 and bezierMesh. The failing tests do not call getNodeXi directly, so before going down the path of debugging I thought I would ask what the intended purpose of getNodeXi is. If it needs to be implemented, I will be happy to implement it and make the tests pass. Otherwise, I would like to know how to get the locations of the nodes in xi (parent-coordinate) space for a FieldShape or field element.

Here is my implementation for LagrangeQuadratic.

    void getNodeXi(int et, int node, Vector3& xi)
    {
      /* parent-coordinate (xi) locations of the 9 nodes of a quadratic
         Lagrange quadrilateral; the entity type 'et' is unused here */
      (void)et;
      switch(node) {
        case 0:
          xi = Vector3(-1,-1, 0);
          break;
        case 1:
          xi = Vector3( 1,-1, 0);
          break;
        case 2:
          xi = Vector3( 1, 1, 0);
          break;
        case 3:
          xi = Vector3(-1, 1, 0);
          break;
        case 4:
          xi = Vector3( 0,-1, 0);
          break;
        case 5:
          xi = Vector3( 1, 0, 0);
          break;
        case 6:
          xi = Vector3( 0, 1, 0);
          break;
        case 7:
          xi = Vector3(-1, 0, 0);
          break;
        case 8:
          xi = Vector3( 0, 0, 0);
          break;
        default:
          fail("node index not found");
          break;
      }
    }

Chef + repartition

Consider giving Chef the ability to increase part count by a non-integer factor

put back Trilinos support in single-build

Getting approval from Zoltan2 and Albany is likely to be more work than putting back the TriBITS support, and single-build really is a lot better for users trying to link to us.

create branch for pumi w/ ghosting

Ideally, we should be able to convert SVN commits to Git commits and somehow replay them into the central Git repo, on a branch that separates at the time when the code was copied to Redmine. Then git merge ought to be able to apply most of the big changes such as the CMake stuff.

Merge core-sim

Simmetrix, Inc. have given their approval for us to publish the portions of our code which call their APIs, those portions currently being in a separate repository. This issue will track progress towards merging all that code into this repository.

How to set field values over a mesh element?

Is there a way to set the values of a vector field using the same mapping of numbers to element nodes as returned by

apf::getElementNumbers(apf::Numbering*, apf::MeshEntity*, apf::NewArray<int> & )

I have created a vector field and zeroed it. Now I want to modify the field components for each node of the field's shape. The element-node mapping returned by getElementNumbers directly corresponds to my vector of displacements, which I now want to write to each node of the vector field. What is the best way to go from the global element numbering to the individual nodes of the vector field?

It looks like the inverse of what I am trying to do exists in:

int FieldDataOf<T>::getElementData(MeshEntity* entity, NewArray<T>& data)

That function aligns the nodes into the correct order and then retrieves the values from the field. Will running it in reverse give the same ordering that we get when retrieving the numberings?
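
For what it is worth, here is a minimal sketch of one way the write direction could look, assuming the displacement vectors are stored in the same element-node order that getElementNumbers reports (element vertices first, then edge nodes, and so on); the helper name setElementVectors is made up for illustration:

    #include <apf.h>
    #include <apfMesh2.h>
    #include <apfShape.h>

    /* hypothetical helper: write one Vector3 per element node of field 'f'
       on element 'e', taking values from 'vals' in element-node order */
    static void setElementVectors(apf::Mesh* m, apf::Field* f,
                                  apf::MeshEntity* e, apf::Vector3 const* vals)
    {
      apf::FieldShape* fs = apf::getShape(f);
      int dim = apf::getDimension(m, e);
      int i = 0;
      for (int d = 0; d <= dim; ++d) {
        if (!fs->hasNodesIn(d))
          continue;
        apf::Downward down;
        /* for d == dim the only entity to visit is the element itself */
        int nd = (d == dim) ? 1 : m->getDownward(e, d, down);
        if (d == dim)
          down[0] = e;
        for (int j = 0; j < nd; ++j) {
          int nn = fs->countNodesOn(m->getType(down[j]));
          for (int n = 0; n < nn; ++n)
            apf::setVector(f, down[j], n, vals[i++]);
        }
      }
    }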

fmemopen, open_memstream break OS X compatibility

Compiling at 532450e on OS X El Capitan results in errors of the type:

/Users/bng/core/phasta/phstream.cc:56:13: error: use of undeclared identifier 'fmemopen';
FILE* f = fmemopen(rs->restart, rs->rSz, "r");

/Users/bng/core/phasta/phstream.cc:63:13: error: use of undeclared identifier 'open_memstream';
FILE* f = open_memstream(&(rs->restart), &(rs->rSz));

These functions are probably essential to the phasta file-stream code, so they still need to exist. The best solution is probably to disable compilation of these functions when the target platform is OS X.
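
As a sketch of what such a guard might look like (the check on the predefined __APPLE__ macro and the placeholder function name are my assumptions, not the project's actual code):

    #include <stdio.h>

    #ifndef __APPLE__
    /* fmemopen/open_memstream are POSIX.1-2008 / glibc extensions that
       OS X El Capitan does not provide, so only compile this path elsewhere */
    static FILE* openReadBuffer(void* buf, size_t len) /* placeholder name */
    {
      return fmemopen(buf, len, "r");
    }
    #endif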

field not showing up until data is written

Creating a new field on a mesh and then attempting to write it to a file causes a crash.
Minimal working example:

apf::Field* node_f = apf::createField(m, "nodeField", apf::SCALAR, m->getShape());
apf::writeVtkFiles("batman_elm", m);

then

expected tag "nodeField_ver" on entity type 0
Aborted (core dumped)

But if you zero the field with apf::zeroField(node_f); there is no crash. I could not find a place in the documentation that indicates the need to write data to a field to make it appear, so this issue is more of a heads-up that the behavior of Field is not very clear if you are only working from the doxygen.
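
For anyone hitting the same thing, the working form of the snippet is roughly the following, assuming that zeroing (or otherwise writing) the field before the VTK call is indeed what attaches the underlying data:

    apf::Field* node_f = apf::createField(m, "nodeField", apf::SCALAR, m->getShape());
    apf::zeroField(node_f); /* write data so the field's tags exist before output */
    apf::writeVtkFiles("batman_elm", m);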

Verified Lagrange Quadratic Shape functions?

Have the second-order Lagrange shape functions and their gradients been checked in any unit tests?
I need to verify that they are actually correct, so if they have not been tested I will go ahead and write some tests for them.

Add DTD for external readers of the ParaView .vtu format

The .vtu files generated by writeVtkFiles seem to be XML. Would it make sense to include a DTD for these so that external parsers can load the meshes for human-readable editing? My project is looking for an easy way to edit tags on meshes manually after visual inspection in ParaView, and the .vtu format appears to offer a really easy way to do this. Most of the code I am writing is equivalent to a DTD in a validating parser, so I thought it would be nice to have this feature built into the .pvtu and .vtu files.

Importantly, ParaView seems to ignore any DTD, even one that is malformed or incorrect, so this would not impact any existing workflows.

Below is a simple example DTD I whipped up for a single .vtu file. It needs a little more work to cover all of the cases in apfVtk.cc, but it seems very doable based on inspection of the source.

<!DOCTYPE VTKFile [
    <!ELEMENT VTKFile (UnstructuredGrid?)>
    <!ATTLIST VTKFile
        type (UnstructuredGrid) #REQUIRED>
    <!ELEMENT UnstructuredGrid (Piece*)> <!-- can have more than one <Piece> -->
    <!ATTLIST Piece
        NumberOfPoints CDATA #IMPLIED
        NumberOfCells CDATA #IMPLIED>
    <!ELEMENT Piece ((Cells| Points|PointData|CellData)?, 
    (Points|Cells|PointData|CellData)?,
    (Points|Cells|PointData|CellData)?,
    (Points|Cells|PointData|CellData)?)> <!-- only one of each type in any order -->
    <!ELEMENT Points (DataArray)>
    <!ELEMENT PointData (DataArray*)>
    <!ELEMENT DataArray (#PCDATA)>
    <!ATTLIST DataArray 
        type (Float64|Int32|Int64|UInt8) #REQUIRED
        Name CDATA #IMPLIED
        NumberOfComponents CDATA #IMPLIED
        format (ascii) #REQUIRED>
    <!ELEMENT Cells (DataArray*)>
    <!ELEMENT CellData (DataArray*)> 
    ]>

update Doxyfile

Doxygen reports that several variables in the Doxyfile are deprecated/obsolete.

Segfault in apf::writeVtkFiles

I was running some old code on Travis and I am now getting segfaults when I call apf::writeVtkFiles.

I compiled PUMI at commit d4cecce

This is the error I get

$ mpirun ./bin/bug.out 
mesh verified in 0.000027 seconds
end of my code

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 32535 RUNNING AT localhost.localdomain
=   EXIT CODE: 139
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions

This is my version of mpirun

$ mpirun --version
HYDRA build details:
    Version:                                 3.1
    Release Date:                            Thu Feb 20 11:41:13 CST 2014
    CC:                              gcc  -m64 -O2 -fPIC -Wl,-z,noexecstack 
    CXX:                             g++  -m64 -O2 -fPIC -Wl,-z,noexecstack 
    F77:                             gfortran -m64 -O2 -fPIC -Wl,-z,noexecstack 
    F90:                             gfortran -m64 -O2 -fPIC -Wl,-z,noexecstack 
    Configure options:                       '--disable-option-checking' '--prefix=/usr' '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--disable-dependency-tracking' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--enable-sharedlibs=gcc' '--enable-shared' '--enable-lib-depend' '--disable-rpath' '--enable-fc' '--with-device=ch3:nemesis' '--with-pm=hydra:gforker' '--includedir=/usr/include/mpich-x86_64' '--bindir=/usr/lib64/mpich/bin' '--libdir=/usr/lib64/mpich/lib' '--datadir=/usr/share/mpich' '--mandir=/usr/share/man/mpich' '--docdir=/usr/share/mpich/doc' '--htmldir=/usr/share/mpich/doc' '--with-hwloc-prefix=system' 'FC=gfortran' 'F77=gfortran' 'CFLAGS=-m64 -O2 -fPIC -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -O2' 'CXXFLAGS=-m64 -O2 -fPIC -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' 'FCFLAGS=-m64 -O2 -fPIC -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' 'FFLAGS=-m64 -O2 -fPIC -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -O2' 'LDFLAGS=-Wl,-z,noexecstack ' 'MPICHLIB_CFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' 'MPICHLIB_CXXFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' 'MPICHLIB_FCFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' 'MPICHLIB_FFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' '--cache-file=/dev/null' '--srcdir=.' 'CC=gcc' 'LIBS=-lrt -lpthread ' 'CPPFLAGS= -I/builddir/build/BUILD/mpich-3.1/src/mpl/include -I/builddir/build/BUILD/mpich-3.1/src/mpl/include -I/builddir/build/BUILD/mpich-3.1/src/openpa/src -I/builddir/build/BUILD/mpich-3.1/src/openpa/src -I/builddir/build/BUILD/mpich-3.1/src/mpi/romio/include'
    Process Manager:                         pmi
    Launchers available:                     ssh rsh fork slurm ll lsf sge manual persist
    Topology libraries available:            hwloc
    Resource management kernels available:   user slurm ll lsf sge pbs cobalt
    Checkpointing libraries available:       
    Demux engines available:                 poll select

Here is a minimal working example that uses vanilla apf mesh functions to build a single quad and then attempts to serialize it to disk using apf::writeVtkFiles:

#include <stdint.h>
#include <stdio.h>
#include <iostream>
#include <iomanip>
#include <cmath>
#include <cassert>

#include <apf.h>
#include <apfMesh2.h>
#include <apfMesh.h>
#include <gmi_mesh.h>
#include <gmi_null.h>
#include <apfShape.h>
#include <gmi_mesh.h>
#include <gmi_null.h>
#include <apfMDS.h>
#include <PCU.h>
#include <apfNumbering.h>

#define SECRET_BUILDER_NUMBERING "SecretBuilderNumbering"

int main(int argc, char *argv[])
{
    MPI_Init(&argc,&argv);
    PCU_Comm_Init();

    apf::Mesh2* mesh = NULL;


    gmi_register_null();
    gmi_model* g = gmi_load(".null");
    mesh = apf::makeEmptyMdsMesh(g, 2, false);

    apf::Numbering* numbers = apf::createNumbering(mesh,SECRET_BUILDER_NUMBERING, 
                                                   mesh->getShape(), 1);
    apf::MeshEntity** vertices = new apf::MeshEntity*[4];
    for(uint32_t counter = 0; counter < 4; counter++) {
        vertices[counter] = mesh->createVert(0);
        apf::number(numbers,vertices[counter],0,0,counter);
    }


    /*this element is flipped upside down around x axis*/
    apf::Vector3 tmp_vec(1.35,2.36,0.0);
    mesh->setPoint(vertices[0], 0, tmp_vec);
    tmp_vec.x() = 2.57; tmp_vec.y() = 2.70;
    mesh->setPoint(vertices[1], 0, tmp_vec);
    tmp_vec.x() = 2.51; tmp_vec.y() = 1.75;
    mesh->setPoint(vertices[2], 0, tmp_vec);
    tmp_vec.x() = 1.64; tmp_vec.y() = 1.28;
    mesh->setPoint(vertices[3], 0, tmp_vec);

    apf::buildElement(mesh, NULL, apf::Mesh::QUAD, vertices);

    apf::deriveMdsModel(mesh);
    mesh->acceptChanges();
    mesh->verify();

    apf::changeMeshShape(mesh, apf::getLagrange(2));
    apf::MeshIterator *it;
    apf::MeshEntity* e;
    it = mesh->begin(2);
    e = mesh->iterate(it);

    apf::Downward down;
    uint32_t sz = mesh->getDownward(e, 1, down);
    assert(4 == sz);
    tmp_vec.x() = 2.09; tmp_vec.y() = 2.36;
    mesh->setPoint(down[0], 0, tmp_vec);
    tmp_vec.x() = 2.73; tmp_vec.y() = 2.25;
    mesh->setPoint(down[1], 0, tmp_vec);
    tmp_vec.x() = 2.01; tmp_vec.y() = 1.72;
    mesh->setPoint(down[2], 0, tmp_vec);
    tmp_vec.x() = 1.15; tmp_vec.y() = 1.77;
    mesh->setPoint(down[3], 0, tmp_vec);
    /*set the center node*/
    tmp_vec.x() = 1.90; tmp_vec.y() = 2.05;
    mesh->setPoint(e, 0, tmp_vec);
    delete[] vertices;

    std::cout << "end of my code" << std::endl;

    apf::writeVtkFiles("batman_elm", mesh);

    PCU_Comm_Free();
    MPI_Finalize();

    return 0;
}

error: identifier "Field_def" is undefined

Hi,
I'm reopening an issue, as I get the same error using core instead of core-sim:

I'm trying to compile the latest version, but it fails because of an undefined identifier.
I'm using the latest SimModeler library (10.0-161031), with the following components:

components=( gmcore-linux64.tgz aciskrnl-linux64.tgz discrete-linux64.tgz fdcore-linux64.tgz gmabstract-linux64.tgz gmadv-linux64.tgz msadapt-linux64.tgz msadv-linux64.tgz mscore-linux64.tgz msparalleladapt-linux64.tgz msparallelmesh-linux64.tgz pskrnl-linux64.tgz )

Here is the error message when compiling the most recent code (latest commit ab1d695):

di73yeq2@login22:/gpfs/work/pr63qo/di73yeq2/core/build> ../install_coresim.sh
-- The CXX compiler identification is Intel 15.0.4.20150805
-- The C compiler identification is Intel 15.0.4.20150805
-- Check for working CXX compiler: /lrz/sys/parallel/mpi.ibm/pecurrent/intel/bin/mpicc
-- Check for working CXX compiler: /lrz/sys/parallel/mpi.ibm/pecurrent/intel/bin/mpicc -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working C compiler: /lrz/sys/parallel/mpi.ibm/pecurrent/intel/bin/mpicc
-- Check for working C compiler: /lrz/sys/parallel/mpi.ibm/pecurrent/intel/bin/mpicc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- CMAKE_VERSION: 3.4.0
-- SCOREC_VERSION: 2.1.0
-- BUILD_TESTING: OFF
-- BUILD_SHARED_LIBS: OFF
-- CMAKE_INSTALL_PREFIX: /gpfs/work/pr63qo/di73yeq2/core/lib
-- CMAKE_CXX_FLAGS: -O2 -g -Werror -Wall
-- Try C99 C flag = [ ]
-- Performing Test C99_FLAG_DETECTED
-- Performing Test C99_FLAG_DETECTED - Failed
-- Try C99 C flag = [-std=gnu99]
-- Performing Test C99_FLAG_DETECTED
-- Performing Test C99_FLAG_DETECTED - Success
-- CMAKE_C_FLAGS = -std=gnu99 -O2 -g -Werror -Wall
-- IS_TESTING: OFF
-- BUILD_EXES: ON
-- MPIRUN: MPIRUN-NOTFOUND -np
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
-- ENABLE_SIMMETRIX: ON
-- ENABLE_OMEGA_H: OFF
-- SIM_ARCHOS x64_rhel5_gcc41
-- Found SIMMODSUITE: /gpfs/work/pr63qo/di73yeq2/SimModelerLib/10.0-161031/lib/x64_rhel5_gcc41/libSimPartitionedMesh-mpi.a;/gpfs/work/pr63qo/di73yeq2/SimModelerLib/10.0-161031/lib/x64_rhel5_gcc41/libSimField.a;/gpfs/work/pr63qo/di73yeq2/SimModelerLib/10.0-161031/lib/x64_rhel5_gcc41/libSimAdvMeshing.a;/gpfs/work/pr63qo/di73yeq2/SimModelerLib/10.0-161031/lib/x64_rhel5_gcc41/libSimPartitionedMesh-mpi.a;/gpfs/work/pr63qo/di73yeq2/SimModelerLib/10.0-161031/lib/x64_rhel5_gcc41/libSimMeshing.a;/gpfs/work/pr63qo/di73yeq2/SimModelerLib/10.0-161031/lib/x64_rhel5_gcc41/libSimMeshTools.a;/gpfs/work/pr63qo/di73yeq2/SimModelerLib/10.0-161031/lib/x64_rhel5_gcc41/libSimModel.a;/gpfs/work/pr63qo/di73yeq2/SimModelerLib/10.0-161031/lib/x64_rhel5_gcc41/libSimPartitionWrapper-mpich2.a
-- PCU_COMPRESS: OFF
-- LION_COMPRESS: OFF
-- MDS_SET_MAX: 256
-- MDS_ID_TYPE: long
-- ENABLE_ZOLTAN: OFF
-- ENABLE_STK: OFF
-- ENABLE_STK_MESH: OFF
-- ENABLE_DSP: OFF
-- Configuring done
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:

ENABLE_THREADS

-- Build files have been written to: /gpfs/work/pr63qo/di73yeq2/core/build
Scanning dependencies of target pcu
[ 0%] Building C object pcu/CMakeFiles/pcu.dir/pcu.c.o
[ 0%] Building C object pcu/CMakeFiles/pcu.dir/pcu_aa.c.o
[ 1%] Building C object pcu/CMakeFiles/pcu.dir/pcu_coll.c.o
[ 1%] Building C object pcu/CMakeFiles/pcu.dir/pcu_io.c.o
[ 1%] Building C object pcu/CMakeFiles/pcu.dir/pcu_buffer.c.o
[ 1%] Building C object pcu/CMakeFiles/pcu.dir/pcu_mpi.c.o
[ 2%] Building C object pcu/CMakeFiles/pcu.dir/pcu_msg.c.o
[ 2%] Building C object pcu/CMakeFiles/pcu.dir/pcu_order.c.o
[ 2%] Building C object pcu/CMakeFiles/pcu.dir/pcu_pmpi.c.o
[ 2%] Building C object pcu/CMakeFiles/pcu.dir/noto/noto_malloc.c.o
[ 2%] Building C object pcu/CMakeFiles/pcu.dir/reel/reel.c.o
[ 4%] Linking C static library libpcu.a
[ 4%] Built target pcu
Scanning dependencies of target gmi
[ 4%] Building C object gmi/CMakeFiles/gmi.dir/gmi.c.o
[ 4%] Building C object gmi/CMakeFiles/gmi.dir/agm.c.o
[ 5%] Building C object gmi/CMakeFiles/gmi.dir/gmi_base.c.o
[ 5%] Building C object gmi/CMakeFiles/gmi.dir/gmi_file.c.o
[ 5%] Building C object gmi/CMakeFiles/gmi.dir/gmi_lookup.c.o
[ 5%] Building C object gmi/CMakeFiles/gmi.dir/gmi_mesh.c.o
[ 6%] Building C object gmi/CMakeFiles/gmi.dir/gmi_null.c.o
[ 6%] Building C object gmi/CMakeFiles/gmi.dir/gmi_analytic.c.o
[ 6%] Linking C static library libgmi.a
[ 6%] Built target gmi
Scanning dependencies of target gmi_sim
[ 6%] Building CXX object gmi_sim/CMakeFiles/gmi_sim.dir/gmi_sim.cc.o
[ 8%] Linking CXX static library libgmi_sim.a
[ 8%] Built target gmi_sim
Scanning dependencies of target mth
[ 8%] Building CXX object mth/CMakeFiles/mth.dir/mthQR.cc.o
[ 9%] Linking CXX static library libmth.a
[ 9%] Built target mth
Scanning dependencies of target lion
[ 9%] Building CXX object lion/CMakeFiles/lion.dir/lionBase64.cc.o
[ 9%] Building CXX object lion/CMakeFiles/lion.dir/lionNoZLib.cc.o
[ 10%] Linking CXX static library liblion.a
[ 10%] Built target lion
Scanning dependencies of target apf
[ 10%] Building CXX object apf/CMakeFiles/apf.dir/apf.cc.o
[ 10%] Building CXX object apf/CMakeFiles/apf.dir/apfCavityOp.cc.o
[ 12%] Building CXX object apf/CMakeFiles/apf.dir/apfElement.cc.o
[ 12%] Building CXX object apf/CMakeFiles/apf.dir/apfField.cc.o
[ 12%] Building CXX object apf/CMakeFiles/apf.dir/apfFieldOf.cc.o
[ 12%] Building CXX object apf/CMakeFiles/apf.dir/apfGradientByVolume.cc.o
[ 13%] Building CXX object apf/CMakeFiles/apf.dir/apfIntegrate.cc.o
[ 13%] Building CXX object apf/CMakeFiles/apf.dir/apfMatrix.cc.o
[ 13%] Building CXX object apf/CMakeFiles/apf.dir/apfDynamicMatrix.cc.o
[ 13%] Building CXX object apf/CMakeFiles/apf.dir/apfDynamicVector.cc.o
[ 14%] Building CXX object apf/CMakeFiles/apf.dir/apfMatrixField.cc.o
[ 14%] Building CXX object apf/CMakeFiles/apf.dir/apfMesh.cc.o
[ 14%] Building CXX object apf/CMakeFiles/apf.dir/apfMesh2.cc.o
[ 14%] Building CXX object apf/CMakeFiles/apf.dir/apfMigrate.cc.o
[ 14%] Building CXX object apf/CMakeFiles/apf.dir/apfScalarElement.cc.o
[ 16%] Building CXX object apf/CMakeFiles/apf.dir/apfScalarField.cc.o
[ 16%] Building CXX object apf/CMakeFiles/apf.dir/apfShape.cc.o
[ 16%] Building CXX object apf/CMakeFiles/apf.dir/apfIPShape.cc.o
[ 16%] Building CXX object apf/CMakeFiles/apf.dir/apfHierarchic.cc.o
[ 17%] Building CXX object apf/CMakeFiles/apf.dir/apfVector.cc.o
[ 17%] Building CXX object apf/CMakeFiles/apf.dir/apfVectorElement.cc.o
[ 17%] Building CXX object apf/CMakeFiles/apf.dir/apfVectorField.cc.o
[ 17%] Building CXX object apf/CMakeFiles/apf.dir/apfPackedField.cc.o
[ 18%] Building CXX object apf/CMakeFiles/apf.dir/apfNumbering.cc.o
[ 18%] Building CXX object apf/CMakeFiles/apf.dir/apfMixedNumbering.cc.o
[ 18%] Building CXX object apf/CMakeFiles/apf.dir/apfAdjReorder.cc.o
[ 18%] Building CXX object apf/CMakeFiles/apf.dir/apfVtk.cc.o
[ 20%] Building CXX object apf/CMakeFiles/apf.dir/apfFieldData.cc.o
[ 20%] Building CXX object apf/CMakeFiles/apf.dir/apfTagData.cc.o
[ 20%] Building CXX object apf/CMakeFiles/apf.dir/apfCoordData.cc.o
[ 20%] Building CXX object apf/CMakeFiles/apf.dir/apfArrayData.cc.o
[ 21%] Building CXX object apf/CMakeFiles/apf.dir/apfUserData.cc.o
[ 21%] Building CXX object apf/CMakeFiles/apf.dir/apfPartition.cc.o
[ 21%] Building CXX object apf/CMakeFiles/apf.dir/apfConvert.cc.o
[ 21%] Building CXX object apf/CMakeFiles/apf.dir/apfConstruct.cc.o
[ 21%] Building CXX object apf/CMakeFiles/apf.dir/apfVerify.cc.o
[ 22%] Building CXX object apf/CMakeFiles/apf.dir/apfGeometry.cc.o
[ 22%] Building CXX object apf/CMakeFiles/apf.dir/apfBoundaryToElementXi.cc.o
[ 22%] Building CXX object apf/CMakeFiles/apf.dir/apfFile.cc.o
[ 22%] Linking CXX static library libapf.a
[ 22%] Built target apf
Scanning dependencies of target apf_sim
[ 24%] Building CXX object apf_sim/CMakeFiles/apf_sim.dir/apfSIM.cc.o
/gpfs/work/pr63qo/di73yeq2/core/apf_sim/apfSIM.cc(32): error: identifier "Field_def" is undefined
pPolyField pf = static_cast<pPolyField>(Field_def(fd));
^

/gpfs/work/pr63qo/di73yeq2/core/apf_sim/apfSIMDataOf.h(21): error: identifier "Field_def" is undefined
pf = static_cast<pPolyField>(Field_def(fd));
^

compilation aborted for /gpfs/work/pr63qo/di73yeq2/core/apf_sim/apfSIM.cc (code 2)
make[2]: *** [apf_sim/CMakeFiles/apf_sim.dir/apfSIM.cc.o] Error 2
make[1]: *** [apf_sim/CMakeFiles/apf_sim.dir/all] Error 2
make: *** [all] Error 2
[ 4%] Built target pcu
[ 6%] Built target gmi
[ 8%] Built target gmi_sim
[ 9%] Built target mth
[ 10%] Built target lion
[ 22%] Built target apf
[ 24%] Building CXX object apf_sim/CMakeFiles/apf_sim.dir/apfSIM.cc.o
/gpfs/work/pr63qo/di73yeq2/core/apf_sim/apfSIM.cc(32): error: identifier "Field_def" is undefined
pPolyField pf = static_cast<pPolyField>(Field_def(fd));
^

/gpfs/work/pr63qo/di73yeq2/core/apf_sim/apfSIMDataOf.h(21): error: identifier "Field_def" is undefined
pf = static_cast<pPolyField>(Field_def(fd));
^

compilation aborted for /gpfs/work/pr63qo/di73yeq2/core/apf_sim/apfSIM.cc (code 2)
make[2]: *** [apf_sim/CMakeFiles/apf_sim.dir/apfSIM.cc.o] Error 2
make[1]: *** [apf_sim/CMakeFiles/apf_sim.dir/all] Error 2
make: *** [all] Error 2

here is my install script:

di73yeq2@login22:/gpfs/work/pr63qo/di73yeq2/core/build> cat ../install_coresim.sh
cmake .. \
-DSIM_MPI=mpich2 \
-DCMAKE_C_COMPILER="which mpicc" \
-DCMAKE_CXX_COMPILER="which mpicc" \
-DCMAKE_C_FLAGS="-O2 -g -Wall" \
-DCMAKE_CXX_FLAGS="-O2 -g -Wall" \
-DMDS_ID_TYPE=long \
-DENABLE_THREADS:BOOL=OFF \
-DSIMMETRIX_LIB_DIR="/gpfs/work/pr63qo/di73yeq2/SimModelerLib/10.0-161031/lib/x64_rhel5_gcc41/" \
-DSIMMODSUITE_INCLUDE_DIR="/gpfs/work/pr63qo/di73yeq2/SimModelerLib/10.0-161031/include" \
-DENABLE_ZOLTAN=OFF \
-DCMAKE_INSTALL_PREFIX="/gpfs/work/pr63qo/di73yeq2/core/lib" \
-DENABLE_SIMMETRIX:BOOL=ON

make
make install

Thanks for helping,

Thomas Ulrich

ostream operator<< does not work for non-explicitly-instantiated template parameters to apf::Matrix

To reproduce, create empty matrices:

    apf::Matrix<3,3> three;
    apf::Matrix<2,3> twothree;
    apf::Matrix<3,2> threetwo;
    std::cout << three << std::endl;
    std::cout << threetwo << std::endl;
    std::cout << twothree << std::endl;

We fail to compile the first time we try to use the general templated ostream operator.
If we only use the explicitly overloaded parameters <1,1>, <2,2>, <3,3>, and <4,4> there is no error. The same error is given for <2,3> as for <3,2>. The templated ostream operator looks correct to me; I do not know enough C++ to understand why it is not working.

mpicxx -isystem ~/Documents/GitHub/googletest/include -isystem ~/Documents/GitHub/googletest "-cxx=clang++ -Wno-c++11-long-long" -Wall -g -I ./inc --pedantic-errors -I ./lib/pumi/include -c src/StiffnessContributor2D.cc -o obj/StiffnessContributor2D.o
src/StiffnessContributor2D.cc:52:12: error: invalid operands to binary expression ('ostream' (aka 'basic_ostream<char>') and
      'apf::Matrix<2, 3>')
        std::cout << twothree << std::endl;
        ~~~~~~~~~ ^  ~~~~~~~~
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:245:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'const void *' for 1st argument; take the address of the argument with &
      operator<<(const void* __p)
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:108:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to '__ostream_type &(*)(__ostream_type &)' for 1st argument
      operator<<(__ostream_type& (*__pf)(__ostream_type&))
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:117:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to '__ios_type &(*)(__ios_type &)' for 1st argument
      operator<<(__ios_type& (*__pf)(__ios_type&))
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:127:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'std::ios_base &(*)(std::ios_base &)' for 1st argument
      operator<<(ios_base& (*__pf) (ios_base&))
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:166:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'long' for 1st argument
      operator<<(long __n)
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:170:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'unsigned long' for 1st argument
      operator<<(unsigned long __n)
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:174:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'bool' for 1st argument
      operator<<(bool __n)
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:178:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'short' for 1st argument
      operator<<(short __n);
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:181:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'unsigned short' for 1st argument
      operator<<(unsigned short __n)
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:189:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'int' for 1st argument
      operator<<(int __n);
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:192:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'unsigned int' for 1st argument
      operator<<(unsigned int __n)
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:201:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'long long' for 1st argument
      operator<<(long long __n)
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:205:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'unsigned long long' for 1st argument
      operator<<(unsigned long long __n)
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:220:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'double' for 1st argument
      operator<<(double __f)
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:224:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'float' for 1st argument
      operator<<(float __f)
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:232:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to 'long double' for 1st argument
      operator<<(long double __f)
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:270:7: note: candidate function not viable:
      no known conversion from 'apf::Matrix<2, 3>' to '__streambuf_type *' (aka 'basic_streambuf<char, std::char_traits<char> >
      *') for 1st argument
      operator<<(__streambuf_type* __sb);
      ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:476:5: note: candidate function [with _CharT
      = char, _Traits = std::char_traits<char>] not viable: no known conversion from 'apf::Matrix<2, 3>' to 'char' for 2nd
      argument
    operator<<(basic_ostream<_CharT, _Traits>& __out, char __c)
    ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:482:5: note: candidate function
      [with _Traits = std::char_traits<char>] not viable: no known conversion from 'apf::Matrix<2, 3>' to 'char' for 2nd argument
    operator<<(basic_ostream<char, _Traits>& __out, char __c)
    ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:488:5: note: candidate function
      [with _Traits = std::char_traits<char>] not viable: no known conversion from 'apf::Matrix<2, 3>' to 'signed char' for 2nd
      argument
    operator<<(basic_ostream<char, _Traits>& __out, signed char __c)
    ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:493:5: note: candidate function
      [with _Traits = std::char_traits<char>] not viable: no known conversion from 'apf::Matrix<2, 3>' to 'unsigned char' for 2nd
      argument
    operator<<(basic_ostream<char, _Traits>& __out, unsigned char __c)
    ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:530:5: note: candidate function
      [with _Traits = std::char_traits<char>] not viable: no known conversion from 'apf::Matrix<2, 3>' to 'const char *' for 2nd
      argument
    operator<<(basic_ostream<char, _Traits>& __out, const char* __s)
    ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:543:5: note: candidate function
      [with _Traits = std::char_traits<char>] not viable: no known conversion from 'apf::Matrix<2, 3>' to 'const signed char *'
      for 2nd argument
    operator<<(basic_ostream<char, _Traits>& __out, const signed char* __s)
    ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/ostream:548:5: note: candidate function
      [with _Traits = std::char_traits<char>] not viable: no known conversion from 'apf::Matrix<2, 3>' to 'const unsigned char *'
      for 2nd argument
    operator<<(basic_ostream<char, _Traits>& __out, const unsigned char* __s)
    ^
/usr/bin/../lib/gcc/x86_64-redhat-linux/4.9.2/../../../../include/c++/4.9.2/bits/ostream.tcc:321:5: note: candidate function
      [with _CharT = char, _Traits = std::char_traits<char>] not viable: no known conversion from 'apf::Matrix<2, 3>' to
      'const char *' for 2nd argument
    operator<<(basic_ostream<_CharT, _Traits>& __out, const char* __s)
    ^
./lib/pumi/include/apfVector.h:206:15: note: candidate function not viable: no known conversion from 'apf::Matrix<2, 3>' to
      'const apf::Vector3' for 2nd argument
std::ostream& operator<<(std::ostream& s, apf::Vector3 const& v);
              ^
./lib/pumi/include/apfMatrix.h:247:15: note: candidate function not viable: no known conversion from 'Matrix<2, 3>' to
      'const Matrix<1, 1>' for 2nd argument
std::ostream& operator<<(std::ostream& s, apf::Matrix<1,1> const& A);
              ^
./lib/pumi/include/apfMatrix.h:249:15: note: candidate function not viable: no known conversion from 'Matrix<[...], 3>' to
      'const Matrix<[...], 2>' for 2nd argument
std::ostream& operator<<(std::ostream& s, apf::Matrix<2,2> const& A);
              ^
./lib/pumi/include/apfMatrix.h:251:15: note: candidate function not viable: no known conversion from 'Matrix<2, [...]>' to
      'const Matrix<3, [...]>' for 2nd argument
std::ostream& operator<<(std::ostream& s, apf::Matrix<3,3> const& A);
              ^
./lib/pumi/include/apfMatrix.h:253:15: note: candidate function not viable: no known conversion from 'Matrix<2, 3>' to
      'const Matrix<4, 4>' for 2nd argument
std::ostream& operator<<(std::ostream& s, apf::Matrix<4,4> const& A);
              ^
./lib/pumi/include/apfDynamicVector.h:111:15: note: candidate function not viable: no known conversion from 'apf::Matrix<2, 3>'
      to 'const apf::DynamicVector' for 2nd argument
std::ostream& operator<<(std::ostream& s, apf::DynamicVector const& x);
              ^
./lib/pumi/include/apfDynamicMatrix.h:201:15: note: candidate function not viable: no known conversion from 'apf::Matrix<2, 3>'
      to 'const apf::DynamicMatrix' for 2nd argument
std::ostream& operator<<(std::ostream& s, apf::DynamicMatrix const& A);
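
If it helps, a generic streaming operator defined in a header might look like the sketch below; this assumes apf::Matrix's template parameters are unsigned integral sizes and that elements can be reached as A[i][j], neither of which I have verified against apfMatrix.h:

    #include <cstddef>
    #include <ostream>
    #include <apfMatrix.h>

    /* sketch: stream any apf::Matrix<M,N>, not just the explicitly declared sizes */
    template <std::size_t M, std::size_t N>
    std::ostream& operator<<(std::ostream& s, apf::Matrix<M,N> const& A)
    {
      for (std::size_t i = 0; i < M; ++i) {
        for (std::size_t j = 0; j < N; ++j)
          s << A[i][j] << ' ';
        s << '\n';
      }
      return s;
    }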


Are Numberings on nodes or Entities?

When I try to number mesh edges on a linear mesh, I get a segmentation fault. Is the numbering on nodes, or is it on the field itself? If it is on nodes, then the error makes sense, because there are no nodes on edges in a linear mesh.

How do I attach an integer to mesh entities such as edges so that, when I change the shape functions, I can still look up the same tag from the actual entity obtained from the mesh iterator over a given dimension? Or do I have to do this outside of the mesh by using an externally maintained map over all of the entities?

I am still on the quest to attach boundary conditions to a mesh generated programmatically so I know where all of the edges are.
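
In case it is useful, here is a small sketch of how attaching integers to entities with mesh tags might look (the tag name "bc" and the value 42 are made up for illustration). Tags live on the entities themselves, so as far as I understand they survive a change of shape functions:

    /* create a tag holding one int per entity and attach it to every edge */
    apf::MeshTag* bcTag = m->createIntTag("bc", 1);
    apf::MeshIterator* it = m->begin(1); /* dimension 1 = edges */
    while (apf::MeshEntity* e = m->iterate(it)) {
      int value = 42; /* made-up boundary-condition id */
      m->setIntTag(e, bcTag, &value);
    }
    m->end(it);

    /* later, regardless of the current field shape, read it back per entity */
    it = m->begin(1);
    while (apf::MeshEntity* e = m->iterate(it)) {
      int out;
      m->getIntTag(e, bcTag, &out);
      (void)out;
    }
    m->end(it);

A Numbering created with a constant field shape (apf::getConstant(1), one node per edge) might be another route, but I am less sure about that.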

apf::getElementNumbers should return size

I believe apf::getElementNumbers should return the size as an integer. Right now the function is just a wrapper over numbering->getData()->getElementData, used through some template wizardry.

getElementData takes a NewArray, dynamically resizes it, and then returns the new size of the array. Since NewArray has no internal representation of its size, this return value is necessary to keep track of the array's size. Right now apf::getElementNumbers hides the new size, and there is no way to tell what it is if reallocation occurs.

This is the proposed change:

diff --git a/apf/apfNumbering.cc b/apf/apfNumbering.cc
index cd39870..538943c 100644
--- a/apf/apfNumbering.cc
+++ b/apf/apfNumbering.cc
@@ -180,9 +180,9 @@ int countComponents(Numbering* n)
   return n->countComponents();
 }

-void getElementNumbers(Numbering* n, MeshEntity* e, NewArray<int>& numbers)
+int getElementNumbers(Numbering* n, MeshEntity* e, NewArray<int>& numbers)
 {
-  n->getData()->getElementData(e,numbers);
+  return n->getData()->getElementData(e,numbers);
 }

 int countFixed(Numbering* n)
diff --git a/apf/apfNumbering.h b/apf/apfNumbering.h
index 9a9a016..909c417 100644
--- a/apf/apfNumbering.h
+++ b/apf/apfNumbering.h
@@ -81,7 +81,7 @@ int countComponents(Numbering* n);
 /** \brief returns the node numbers of an element
   \details numbers are returned in the standard
            element node ordering for its shape functions */
-void getElementNumbers(Numbering* n, MeshEntity* e, NewArray<int>& numbers);
+int getElementNumbers(Numbering* n, MeshEntity* e, NewArray<int>& numbers);

 /** \brief return the number of fixed degrees of freedom */ 
 int countFixed(Numbering* n);

ParMA handling of mixed meshes

ParMA does not fix fragmented boundary layer stacks and creates additional fragmentation. I suspect this is causing an increase in the total number of vertices, i.e., more vertices on the part boundary.

[attached image: splitting boundary layer stacks]

add tests for DynamicMatrix

Commit a1f17cc makes it pretty clear that this code isn't tested. This issue is something we can have an undergrad / work-study student do, so it is low priority, but keep it in mind.

Test Wiki Page ambiguous about Zoltan requirement for tests

When I run the tests, I encounter the errors below:

CMakeFiles/ptnParma.dir/ptnParma.cc.o: In function `getPlan':
/home/shivaebola/Documents/GitHub/core/test/ptnParma.cc:68: undefined reference to `apf::makeZoltanGlobalSplitter(apf::Mesh*, int, int, bool)'
collect2: error: ld returned 1 exit status
test/CMakeFiles/ptnParma.dir/build.make:157: recipe for target 'test/ptnParma' failed
make[2]: *** [test/ptnParma] Error 1
CMakeFiles/Makefile2:3006: recipe for target 'test/CMakeFiles/ptnParma.dir/all' failed
make[1]: *** [test/CMakeFiles/ptnParma.dir/all] Error 2
Makefile:147: recipe for target 'all' failed
make: *** [all] Error 2

My steps in the core directory:

$ git pull
$ cd build
$ source ../config.sh

config.sh is

cmake .. \
  -DCMAKE_C_COMPILER="mpicc" \
  -DCMAKE_CXX_COMPILER="mpicxx" \
  -DCMAKE_C_FLAGS="-O2 -g -Wall" \
  -DCMAKE_CXX_FLAGS="-O2 -g -Wall" \
  -DENABLE_THREADS=ON \
  -DENABLE_ZOLTAN=OFF \
  -DCMAKE_INSTALL_PREFIX="/home/shivaebola/Documents/GitHub/FEP/a4/lib/pumi" \
  -DIS_TESTING=True \
  -DMESHES="/home/shivaebola/Documents/GitHub/core/meshes"

then I build

$ make -j 4

and encounter the above-mentioned Zoltan-related error.
It seems to indicate that we need Zoltan to build the test suite, but the wiki does not mention this requirement. Am I doing it wrong, or do I need to link with Zoltan and/or ParMETIS for the tests to run?

Still inside core/build, where I ran make, running ctest -W gives me only 37% of tests passing.

Full results here

MeshAdapt: better metric quality

We should use a single metric to measure anisotropic quality, and isotropic quality should not involve the size field at all.

snapping to singularities

Moving this issue over from SCOREC/core-sim since that repository is no longer used.
@mortezah wrote:

snapping to poles (or singularities in general) is problematic

How to load gmsh files

Which file format do I need to save the mesh and geometry description in from Gmsh in order to read them in? I see in the source that you can load .dmg files and .tess files using PUMI, but I do not see either of these file types as an option in the Gmsh save dialog.

I need some way to load a geometry entity that has tags on it so I can iterate over the mesh entities, get their classifications, and look up what boundary conditions they have. If there is a file format in which I can edit the entity tags by hand, that would be ideal.

Any pointers to the relevant documentation or tutorials would be a great help.
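
As a sketch of the classification-lookup part (setting the file-loading question aside), something along these lines is what I have in mind; the choice of dimension 1 (edges) is just an example, and the boundary-condition lookup itself is left as a comment:

    /* walk the mesh edges and ask for their geometric classification */
    apf::MeshIterator* it = m->begin(1);
    while (apf::MeshEntity* e = m->iterate(it)) {
      apf::ModelEntity* ge = m->toModel(e); /* classification */
      int gdim = m->getModelType(ge);       /* model entity dimension */
      int gtag = m->getModelTag(ge);        /* model entity tag/id */
      /* look up boundary conditions keyed on (gdim, gtag) here */
      (void)gdim; (void)gtag;
    }
    m->end(it);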

malloc.h on OS X

Compiling at 5b5c0cb gives the following error on OS X El Capitan:

[ 34%] Building CXX object pumi/CMakeFiles/pumi.dir/pumi_ghost.cc.o
/Users/bng/core/pumi/pumi_ghost.cc:18:10: fatal error: 'malloc.h' file not found

#include <malloc.h>

This is because, on OS X, malloc.h is nested inside a directory named malloc, so the correct syntax becomes #include <malloc/malloc.h>.

Probably the best thing to do is to #include <cstdlib> instead of #include <malloc.h>.

I recall PCU did something to circumvent this issue long ago, but it looks like it's since been switched to #include <stdlib.h>.
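
For reference, the kind of include guard I have in mind is sketched below; the <cstdlib> route mentioned above is probably simpler:

    #ifdef __APPLE__
    #include <malloc/malloc.h> /* OS X keeps malloc.h under malloc/ */
    #else
    #include <malloc.h>
    #endif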

buildMapping broken

When buildMapping is on in adapt.inp, Chef exits from core/apf/apf.cc line 57 at the assertion "assert( ! m->findField(name));".

So far, the adopted rule was to reconstruct the mapping between the input and output mesh for each partitioning. Therefore Chef should not look for the mapping field in the phasta restart files.

Use bob.cmake to set some default cxx flags

It seems like it would prevent some manual intervention after commits if all developers used the compiler flags -Wall -Wextra -Werror at a minimum. This could easily be accomplished by adding the commands

bob_begin_cxx_flags()
bob_end_cxx_flags()

somewhere in the top-level CMakeLists.txt. The default flags for Clang compilers in the bob.cmake package are a bit too strict, though, and it would take a lot of unneeded effort to get SCOREC/core to compile under those flags. I propose switching the Clang compile flags to -Wall -Wextra -Werror locally in cmake/bob.cmake, as well.

Any objections? If not, I'll push this change to develop.

apf::receiveAllCopies(apf::Mesh *): Assertion `a == b' failed.

Hi, I am using PUMGEN, a code written by Sebastian Rettenberger, which uses SCOREC combined with the Simulation Modeling Suite (Release 10.0-160429) to produce 3D tetrahedral meshes, partition them, and write them in a custom netCDF format.
(here is the source https://github.com/TUM-I5/PUML)
I'm running into crashes that I think are related to SCOREC. Any idea why?

Thanks in advance,

Thomas Ulrich,
PhD student, Geophysics, LMU Munich.

Here is how I run the program:

pumgen -s simmodsuite -l SimModelerLib.lic --mesh "Mesh case 1" --analysis "Analysis case 1" --sim_log logtpv16.dat tpv16.smd 28 trash.28.nc

Here is the program output:

Wed May 11 11:53:53, Info: Using SimModSuite
Wed May 11 11:53:57, Info: Loading model
Wed May 11 11:53:58, Info: Extracting cases
Wed May 11 11:53:58, Info: Starting the surface mesher
Wed May 11 11:54:02, Info: Starting the volume mesher
Wed May 11 11:55:39, Info: Converting mesh to APF

Here is the error log:

pumgen: /gpfs/work/h019z/di73yeq/core-sim/apf/apfVerify.cc:256: void apf::receiveAllCopies(apf::Mesh *): Assertion `a == b' failed.
Abort(1) on node 18 (rank 18 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 18
Thread 11 (Thread 0x2b36fc7f6700 (LWP 35889)):
#0 0x00002b36f6ea9ce3 in epoll_wait () from /lib64/libc.so.6
#1 0x00002b36fc194436 in poe_exiting_thread () from /opt/ibmhpc/pe1402/base/intel/lib64/libpoe.so
#2 0x00002b36f5ec9806 in start_thread () from /lib64/libpthread.so.0
#3 0x00002b36f6ea965d in clone () from /lib64/libc.so.6
#4 0x0000000000000000 in ?? ()

Thread 10 (Thread 0x2b36fc9f7700 (LWP 35890)):
#0 0x00002b36f6ea9ce3 in epoll_wait () from /lib64/libc.so.6
#1 0x00002b36fc192bb3 in pm_child_sig_thread () from /opt/ibmhpc/pe1402/base/intel/lib64/libpoe.so
#2 0x00002b36f5ec9806 in start_thread () from /lib64/libpthread.so.0
#3 0x00002b36f6ea965d in clone () from /lib64/libc.so.6
#4 0x0000000000000000 in ?? ()

Thread 9 (Thread 0x2b36fcbf8700 (LWP 35891)):
#0 0x00002b36f5ed1527 in do_sigwait () from /lib64/libpthread.so.0
#1 0x00002b36f5ed15cd in sigwait () from /lib64/libpthread.so.0
#2 0x00002b36fc193ba2 in pm_async_thread () from /opt/ibmhpc/pe1402/base/intel/lib64/libpoe.so
#3 0x00002b36f5ec9806 in start_thread () from /lib64/libpthread.so.0
#4 0x00002b36f6ea965d in clone () from /lib64/libc.so.6
#5 0x0000000000000000 in ?? ()

Thread 8 (Thread 0x2b36ff042700 (LWP 35897)):
#0 0x00002b36f5ecd66c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x00002b36fda8e58b in hal_ibl_user_intr_hndlr () from /opt/ibmhpc/pe1402/base/intel/lib64/libhal64_ibm.so
#2 0x00002b36f5ec9806 in start_thread () from /lib64/libpthread.so.0
#3 0x00002b36f6ea965d in clone () from /lib64/libc.so.6
#4 0x0000000000000000 in ?? ()

Thread 7 (Thread 0x2b36ff243700 (LWP 35898)):
#0 0x00002b36f6ea9ce3 in epoll_wait () from /lib64/libc.so.6
#1 0x00002b36fda8e183 in hal_ibl_async_intr_hndlr () from /opt/ibmhpc/pe1402/base/intel/lib64/libhal64_ibm.so
#2 0x00002b36f5ec9806 in start_thread () from /lib64/libpthread.so.0
#3 0x00002b36f6ea965d in clone () from /lib64/libc.so.6
#4 0x0000000000000000 in ?? ()

Thread 6 (Thread 0x2b370bb49700 (LWP 35996)):
#0 0x00002b36f5ecd66c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x00002b36f9b3a3a3 in shm_dispatcher_thread (arg=0x2b36ffa561ac) at /project/sprelbrew/build/rbrews002a/src/ppe/lapi/lapi_shm.c:2282
#2 0x00002b36f5ec9806 in start_thread () from /lib64/libpthread.so.0
#3 0x00002b36f6ea965d in clone () from /lib64/libc.so.6
#4 0x0000000000000000 in ?? ()

Thread 5 (Thread 0x2b370bff6700 (LWP 35997)):
#0 0x00002b36f5ecd66c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x00002b36f9b5236f in rc_ibl_intr_hndlr (param=0x2b36fa16315c <intr_hndlr_info+60>) at /project/sprelbrew/build/rbrews002a/src/ppe/lapi/lapi_rc_rdma_verbs_wrappers.c:1180
#2 0x00002b36f5ec9806 in start_thread () from /lib64/libpthread.so.0
#3 0x00002b36f6ea965d in clone () from /lib64/libc.so.6
#4 0x0000000000000000 in ?? ()

Thread 4 (Thread 0x2b370c1f7700 (LWP 35998)):
#0 0x00002b36f6ea9ce3 in epoll_wait () from /lib64/libc.so.6
#1 0x00002b36f9b527b1 in rc_ibl_async_intr_hndlr (param=0x16) at /project/sprelbrew/build/rbrews002a/src/ppe/lapi/lapi_rc_rdma_verbs_wrappers.c:1374
#2 0x00002b36f5ec9806 in start_thread () from /lib64/libpthread.so.0
#3 0x00002b36f6ea965d in clone () from /lib64/libc.so.6
#4 0x0000000000000000 in ?? ()

Thread 3 (Thread 0x2b370c3f8700 (LWP 35999)):
#0 0x00002b36f5ecd9fc in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x00002b36f9b1c37f in _timer_arm (timer=0x1800c44) at /project/sprelbrew/build/rbrews002a/src/ppe/lapi/intrhndlrs.c:340
#2 0x00002b36f9b1c0e4 in _lapi_tmr_thrd (param=0x1800c44) at /project/sprelbrew/build/rbrews002a/src/ppe/lapi/intrhndlrs.c:541
#3 0x00002b36f5ec9806 in start_thread () from /lib64/libpthread.so.0
#4 0x00002b36f6ea965d in clone () from /lib64/libc.so.6
#5 0x0000000000000000 in ?? ()

Thread 2 (Thread 0x2b370c5f9700 (LWP 36113)):
#0 0x00002b36f6ea0186 in poll () from /lib64/libc.so.6
#1 0x00002b36fbf5e642 in Connection::Wait (this=0x18117f0) at /project/sprelbrew/build/rbrews002a/src/ppe/pnsd/connection.cpp:169
#2 0x00002b36fbf4a3ce in internal_pnsd_api_wait_for_updates (handle=, wakeup_event=0x2b370c5ea874, device_name=, adapter_type=, win_id=, cmd_string=0x2b370c5ea860, opt_length=0x2b370c5ea870, opt=0x2b370c5ea868) at /project/sprelbrew/build/rbrews002a/src/ppe/pnsd/pnsd_api.cpp:337
#3 0x00002b36fbf4a6ac in pnsd_api_wait_for_updates (handle=207529648, wakeup_event_OUT=0x2, cmd_string_OUT=, opt_length=, opt_OUT=) at /project/sprelbrew/build/rbrews002a/src/ppe/pnsd/pnsd_api.cpp:379
#4 0x00002b36f9b410ab in preempt_monitor_thread (param=0x2b370c5ea6b0) at /project/sprelbrew/build/rbrews002a/src/ppe/lapi/lapi_preempt.c:523
#5 0x00002b36f5ec9806 in start_thread () from /lib64/libpthread.so.0
#6 0x00002b36f6ea965d in clone () from /lib64/libc.so.6
#7 0x0000000000000000 in ?? ()

Thread 1 (Thread 0x2b36fbf381e0 (LWP 35843)):
#0 0x00002b36f6e75b8f in waitpid () from /lib64/libc.so.6
#1 0x00002b36f6e09cc1 in do_system () from /lib64/libc.so.6
#2 0x00002b36f6e0a06c in system () from /lib64/libc.so.6
#3 0x00002b36fc19473f in pm_linux_print_coredump () from /opt/ibmhpc/pe1402/base/intel/lib64/libpoe.so
#4 0x00002b36fc193fda in pm_lwcf_signal_handler () from /opt/ibmhpc/pe1402/base/intel/lib64/libpoe.so
#5  <signal handler called>
#6 0x00002b36f6dfd885 in raise () from /lib64/libc.so.6
#7 0x00002b36f6dfee61 in abort () from /lib64/libc.so.6
#8 0x00002b36f6df6740 in __assert_fail () from /lib64/libc.so.6
#9 0x0000000000543cdc in apf::receiveAllCopies (m=0x8c03) at /gpfs/work/h019z/di73yeq/core-sim/apf/apfVerify.cc:256
#10 0x0000000000544991 in apf::verify (m=0x8c03) at /gpfs/work/h019z/di73yeq/core-sim/apf/apfVerify.cc:273
#11 0x00000000004540e7 in main (argc=14, argv=0x7ffde79d4fd8) at src/tools/pumgen.cpp:223

Abort(1) on node 0 (rank 0 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
Abort(1) on node 1 (rank 1 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 1
Abort(1) on node 2 (rank 2 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 2
Abort(1) on node 3 (rank 3 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 3
Abort(1) on node 4 (rank 4 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 4
Abort(1) on node 5 (rank 5 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 5
Abort(1) on node 6 (rank 6 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 6
Abort(1) on node 7 (rank 7 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 7
Abort(1) on node 8 (rank 8 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 8
Abort(1) on node 9 (rank 9 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 9
Abort(1) on node 10 (rank 10 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 10
Abort(1) on node 11 (rank 11 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 11
Abort(1) on node 12 (rank 12 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 12
Abort(1) on node 13 (rank 13 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 13
Abort(1) on node 14 (rank 14 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 14
Abort(1) on node 15 (rank 15 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 15
Abort(1) on node 16 (rank 16 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 16
Abort(1) on node 17 (rank 17 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 17
Abort(1) on node 20 (rank 20 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 20
Abort(1) on node 21 (rank 21 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 21
Abort(1) on node 22 (rank 22 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 22
Abort(1) on node 23 (rank 23 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 23
Abort(1) on node 24 (rank 24 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 24
Abort(1) on node 25 (rank 25 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 25
Abort(1) on node 26 (rank 26 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 26
Abort(1) on node 27 (rank 27 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 27
Abort(1) on node 19 (rank 19 in comm 1140850688): application called MPI_Abort(MPI_COMM_WORLD, 1) - process 19

And here is the log from the SimModeler library:

-- MS_init()
--[[
10.0-160429 Platform x64_rhel5_gcc41
--]]
MS_init()
-- SimDiscrete_start(0)
--[[
SimDiscrete: 160429 Platform: x64_rhel5_gcc41
--]]
SimDiscrete_start(0)
-- SimParasolid_start(1)
--[[
SimParasolid: 160429 Platform: x64_rhel5_gcc41 Version: 27_0_142
--]]
SimParasolid_start(1)
-- GM_load("tpv16.smd", nil, nil)
r28804384 = GM_load("tpv16.smd", nil, nil)
-- GM_nativeModel(r28804384)
nil = GM_nativeModel(r28804384)
--[[
pl27460696 = PList_new()
GM_isValid(r28804384, 0, pl27460696)
PList_delete(pl27460696)
--]]
pl27460696 = PList_new()
GM_isValid(r28804384, 0, pl27460696)
PList_delete(pl27460696)
-- MS_newMeshCase(r28804384)
n27803872 = MS_newMeshCase(r28804384)
-- AttCase_setModel(AttNode_byTag(r29094432, 8), r28804384)
AttCase_setModel(AttNode_byTag(r29094432, 8), r28804384)
-- AttCase_loadedModel(AttNode_byTag(r29094432, 8))
r28804384 = AttCase_loadedModel(AttNode_byTag(r29094432, 8))
-- MS_addCubeRefinement(n27803872, 1000, {0, 0, -10000}, {20000, 0, 0}, {0, 10000, 0}, {0, 0, 10000})
MS_addCubeRefinement(n27803872, 1000, {0, 0, -10000}, {20000, 0, 0}, {0, 10000, 0}, {0, 0, 10000})
-- MS_setMeshSize(n27803872, GM_entityByTag(r28804384, 2, 2), 1, 0, "500")
MS_setMeshSize(n27803872, GM_entityByTag(r28804384, 2, 2), 1, 0, "500")
-- MS_setGlobalSizeGradationRate(n27803872, 0.25)
MS_setGlobalSizeGradationRate(n27803872, 0.25)
-- AttCase_loadedModel(AttNode_byTag(r29094432, 8))
r28804384 = AttCase_loadedModel(AttNode_byTag(r29094432, 8))
-- AttCase_setModel(AttNode_byTag(r29094432, 1), r28804384)
AttCase_setModel(AttNode_byTag(r29094432, 1), r28804384)
-- AttCase_setModel(AttNode_byTag(r29094432, 2), r28804384)
AttCase_setModel(AttNode_byTag(r29094432, 2), r28804384)
-- PM_new(0, r28804384, 1)
r29032288 = PM_new(0, r28804384, 1)
-- Progress_new()
p29219856 = Progress_new()
-- Progress_setCallback(p29219856, p4597016)
-- Progress_setCallback(p29219856, p4597016)
-- SurfaceMesher_new(n27803872, r29032288)
p29220736 = SurfaceMesher_new(n27803872, r29032288)
-- SurfaceMesher_execute(p29220736, p29219856)
SurfaceMesher_execute(p29220736, p29219856)
-- SurfaceMesher_delete(p29220736)
SurfaceMesher_delete(p29220736)
-- PM_setTotalNumParts(r29032288, 28)
PM_setTotalNumParts(r29032288, 28)
-- VolumeMesher_new(n27803872, r29032288)
p46461232 = VolumeMesher_new(n27803872, r29032288)
-- VolumeMesher_setEnforceSize(p46461232, 0)
VolumeMesher_setEnforceSize(p46461232, 0)
-- VolumeMesher_execute(p46461232, p29219856)
VolumeMesher_execute(p46461232, p29219856)
-- VolumeMesher_delete(p46461232)
VolumeMesher_delete(p46461232)
-- Progress_delete(p29219856)
Progress_delete(p29219856)
-- AttCase_associate(AttNode_byTag(r29094432, 1), nil)
AttCase_associate(AttNode_byTag(r29094432, 1), nil)
-- AttCase_unassociate(AttNode_byTag(r29094432, 1))
AttCase_unassociate(AttNode_byTag(r29094432, 1))
-- MS_deleteMeshCase(n27803872)
MS_deleteMeshCase(n27803872)
-- MS_deleteMeshCase(AttNode_byTag(r29094432, 1))
MS_deleteMeshCase(AttNode_byTag(r29094432, 1))
