
precice / precice


A coupling library for partitioned multi-physics simulations, including, but not restricted to fluid-structure interaction and conjugate heat transfer simulations.

Home Page: https://precice.org/

License: GNU Lesser General Public License v3.0

Languages: Python 1.32%, C++ 95.03%, C 0.30%, Shell 0.31%, CMake 2.90%, Dockerfile 0.05%, CUDA 0.09%
Topics: multi-physics, fluid-structure-interaction, high-performance-computing, conjugate-heat-transfer, coupling, co-simulation, openfoam, calculix, code-aster, su2

precice's Introduction

preCICE


preCICE stands for Precise Code Interaction Coupling Environment. Its main component is a library that can be used by simulation programs to be coupled together in a partitioned way, enabling multi-physics simulations, such as fluid-structure interaction.
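
To give an idea of what this looks like from a solver's perspective, here is a minimal sketch against the v2-style C++ API (the participant name, configuration file name, and rank/size values are placeholders; exact class and method names differ between preCICE versions, so check the documentation for your release):

#include <precice/SolverInterface.hpp>

int main()
{
  // Each coupled solver constructs one interface object pointing at the shared
  // XML configuration; rank and size identify this process within the solver.
  precice::SolverInterface interface("FluidSolver", "precice-config.xml", /*rank=*/0, /*size=*/1);

  double dt = interface.initialize();
  while (interface.isCouplingOngoing()) {
    // read coupling data, solve one time step of size at most dt, write coupling data ...
    dt = interface.advance(dt);
  }
  interface.finalize();
}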

If you are new to preCICE, please have a look at our documentation and at precice.org. You may also prefer to get and install a binary package for the latest release (main branch).

(Figure: preCICE overview)

preCICE is an academic project, developed at the Technical University of Munich and at the University of Stuttgart. If you use preCICE, please cite us.

precice's People

Contributors

ajaust, alexander-shukaev, ariguiba, atanasoa, atotoun, benjaminrodenberg, boris-martin, davidscn, dependabot[bot], dinimar, durganshu, fsimonis, fujikawas, ishaandesai, kanishkbh, kursatyurt, kyledavissa, luzpaz, makish, niklaskotarsky, oguzziya, petervollmer, pikotee, qwach, saumij, scheufks, shkodm, summerdave, timo-schrader, uekerman


precice's Issues

VTK export does not export to directories

      <export:vtk timestep-interval="1" directory="vtkA" normals="0"/>

vtkA exists as a directory. After a run, the directory is empty; instead, there are files

vtkAMeshA-A.dt1.vtk
vtkAMeshA-A.final.vtk
vtkAMeshA-A.init.vtk

Ok, it seems the directory attribute needs a trailing slash.

Please don't concatenate paths like in ExportVTK.cpp:42 using strings and +; it leads to surprising results. We have Boost.Filesystem as a dependency anyway; it can be used to build paths correctly.

Example:

#include <boost/filesystem.hpp>
#include <fstream>
#include <string>

namespace fs = boost::filesystem;
fs::path outfile(directory);
fs::create_directory(outfile);
outfile = outfile / fs::path(std::string("rank") + std::to_string(MPIrank));
std::ofstream ostream(outfile.string(), std::ios::trunc);

Serial PETSc RBF slightly off

Moved from Munich GitLab. Original creator: @uekerman

I ran the 2D bending tower with Alya and compared the tarch RBF with the PETSc version. I looked at a watchpoint in the top right corner of the tower at time step 20.

rbf-compact-tps-c2 support-radius="0.2"

Version RTOL DisplacementDeltas0 DisplacementDeltas1 Forces0 Forces1
t-arch - 0.0017963166900034 -0.0007245852873612 10.5089654079907504 18.6587433625000720
petsc 1e-7 0.0017842800852401 -0.0007212977734752 10.5226984549448588 18.4804981855719177
petsc 1e-10 0.0017842736683224 -0.0007212878932991 10.5244332006644008 18.4815742161999665
petsc 1e-15 0.0017840923151181 -0.0007214205686490 10.5199952865106408 18.4775388181988767

I set the chop values from 1e-9 to 1e-12, but that has no influence.

@lindner any idea what could go wrong?

Compilation of preCICE on OS X

I managed to build preCICE on OS X! I needed to add a couple of lines to the SConstruct file, though.

I've added the last three lines of the following excerpt of the SConstruct file:

if env["compiler"] == 'icc':
    env.AppendUnique(LIBPATH = ['/usr/lib/'])
    env.Append(LIBS = ['stdc++'])
    if env["build"] == 'debug':
        env.Append(CCFLAGS = ['-align'])
    elif env["build"] == 'release':
        env.Append(CCFLAGS = ['-w', '-fast', '-align', '-ansi-alias'])
elif env["compiler"] == 'g++':
    pass
elif env["compiler"] == "clang++":
    env['ENV']['TERM'] = os.environ['TERM'] # colored compile messages from clang
    env.Append(CCFLAGS= ['-Wsign-compare']) # sign-compare not enabled in Wall with clang.
elif env["compiler"] == "g++-mp-4.9":
    env.Append(LIBS = ['libstdc++.6'])
    env.AppendUnique(LIBPATH = ['/opt/local/lib/'])

I use the following compilation script:

#!/bin/bash

set -e

find . -name '*~' -exec rm -rf {} \;

# Define which versions of the different packages are used

export BOOST_VERSION=1_53_0
export BOOST_VERSION_DOT=1.53.0

# Download Boost

wget -O boost_${BOOST_VERSION}.tar.bz2 http://downloads.sourceforge.net/project/boost/boost/${BOOST_VERSION_DOT}/boost_${BOOST_VERSION}.tar.bz2

# Remove old build files

rm -rf boost_${BOOST_VERSION}
rm -rf precice

# Unpack third party packages

tar jxfv boost_${BOOST_VERSION}.tar.bz2

rm -f boost_${BOOST_VERSION}.tar.bz2

# Set environment variables necessary for building preCICE

export BOOST_ROOT=`pwd`/boost_${BOOST_VERSION}
export PRECICE_BOOST_ROOT=${BOOST_ROOT}

# Build preCICE

git clone [email protected]:gatzhamm/precice.git
cd precice
git checkout develop
scons -j 4 build=debug python=off petsc=off mpi=off compiler=g++-mp-4.9
scons -j 4 build=release python=off petsc=off mpi=off compiler=g++-mp-4.9

Is it possible to include this? It would be much appreciated.

@floli @uekerman

Implementation of manifold mapping as a post processing

Moved from Munich GitLab. Original creator: @uekerman

We want to implement the manifold mapping that David describes in his WCCM 2014 paper. The most complicated aspect is that we need one coupling scheme nested inside another. The coarse and the fine solver will be the same participant. Via an action, we can steer from preCICE what the participant should do next; we would also use the same communication for both coupling schemes.

On the other hand, we will need two meshes, each data field twice, and two coupling schemes, all defined in the config.xml.

On the implementation level, we will need a new post-processing (pp) that has the coarse coupling scheme as a member (it will be instantiated in the cplScheme configuration). How do we mark this in the config.xml? Does cplScheme become a tag inside pp?

To optimize on the coarse level (during the pp of the fine level), we then call advance of the coarse cplScheme until convergence (ImplicitCplScheme.measureConvergence() has to be public then); see the sketch below.

To solve the SVD of the manifold mapping scheme, we will use Eigen, a C++ library, whose source code we could copy directly (similar to Boost).

... altogether not that easy, but possible, I guess. At least I don't see a big problem coming.
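
A rough structural sketch of that nesting (all class and method names below are hypothetical stand-ins, not existing preCICE types): the manifold-mapping post-processing owns the coarse coupling scheme and drives it to convergence inside each fine-level iteration.

#include <memory>

struct CouplingScheme {                  // stand-in for the coarse cplScheme
  virtual ~CouplingScheme() = default;
  virtual void advance() = 0;            // one coarse iteration
  virtual bool measureConvergence() = 0; // would need to become public
};

struct PostProcessing {                  // stand-in for the pp base class
  virtual ~PostProcessing() = default;
  virtual void performPostProcessing() = 0;
};

class ManifoldMappingPP : public PostProcessing {
public:
  explicit ManifoldMappingPP(std::unique_ptr<CouplingScheme> coarse)
      : _coarse(std::move(coarse)) {}

  void performPostProcessing() override {
    // Optimize on the coarse level: iterate the nested coarse scheme until it converges.
    do {
      _coarse->advance();
    } while (!_coarse->measureConvergence());
    // ... then use the coarse result to update the fine-level manifold mapping ...
  }

private:
  std::unique_ptr<CouplingScheme> _coarse; // instantiated by the cplScheme configuration
};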

parallelize IQN-IMVJ PP method

Parallelize the IQN-IMVJ quasi-Newton update to make the multi-vector method available in master-slave mode. Drawback of the method: the Jacobian matrix has to be stored explicitly, and the update involves a couple of very time-consuming and expensive matrix-matrix multiplications. All attempts to avoid storing the explicit representation of the Jacobian matrix have failed so far, or at least have some remarkable drawbacks concerning storage and/or computational complexity. Hence, we try to parallelize the explicit version as a first attempt.


To that end, we have to parallelize the Jacobian update J_inv = J_inv_n + (W - J_inv_n*V) Z, where Z = (V^T V)^-1 V^T is computed from the updated QR decomposition (already parallelized). The working branch is the updatedQR branch. A sketch of the distributed update follows the steps below.

Steps:
  • divide and decompose J_inv in a suitable way (block column-wise in blocks of size n_global * n_local)
  • implement parallel version of multiplication J_inv_n*V
  • implement parallel version of multiplication W_til * Z, where W_til = (W-J_inv_n*V)
    • includes: implement send/receive operation for matrix blocks
    • later: for further enhancement, implement an all-reduce operation instead of looping over all slaves
  • implement parallel version of matrix-vector product J_inv *R

A .pdf document is attached, containing some ideas and concepts for this issue.
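
A minimal sketch of the block column-wise update, using Eigen and raw MPI as stand-ins for preCICE's internal matrix classes and master-slave communication (all names below are placeholders). Each rank owns a column block of the inverse Jacobian of size n_global x n_local, covering columns [colOffset, colOffset + n_local); W and V (n_global x k) and Z (k x n_global) are assumed to be replicated on every rank.

#include <Eigen/Dense>
#include <mpi.h>

void updateJacobianBlock(Eigen::MatrixXd &J_block,   // owned columns of J_inv_n
                         const Eigen::MatrixXd &W,
                         const Eigen::MatrixXd &V,
                         const Eigen::MatrixXd &Z,
                         int colOffset, MPI_Comm comm)
{
  const int nGlobal = static_cast<int>(J_block.rows());
  const int nLocal  = static_cast<int>(J_block.cols());
  const int k       = static_cast<int>(V.cols());

  // J_inv_n * V: each rank multiplies its column block with the matching rows
  // of V; summing the partial products over all ranks gives the full product.
  Eigen::MatrixXd localJV = J_block * V.middleRows(colOffset, nLocal); // n_global x k
  Eigen::MatrixXd JV(nGlobal, k);
  MPI_Allreduce(localJV.data(), JV.data(), nGlobal * k, MPI_DOUBLE, MPI_SUM, comm);

  // W_til = W - J_inv_n * V, then update only the owned column block:
  // J_block += W_til * Z(:, owned columns).
  Eigen::MatrixXd Wtil = W - JV;
  J_block += Wtil * Z.middleCols(colOffset, nLocal);
}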

Compilation with sockets=off mpi=on fails

scons -j 4 boost_inst=on python=on petsc=off mpi=on compiler=mpicxx build=debug sockets=off gives

src/m2n/tests/PointToPointCommunicationTest.cpp: In member function 'void precice::m2n::tests::PointToPointCommunicationTest::testSocketCommunication()':
src/m2n/tests/PointToPointCommunicationTest.cpp:83:16: error: 'SocketCommunicationFactory' in namespace 'precice::com' does not name a type
       new com::SocketCommunicationFactory);

Compilation with mpi=off sockets=off (and omitting the compiler setting) succeeds.

interface function hasReadData and hasWriteData necessary

If two solvers of the same type -- let's say OpenFOAM-OpenFOAM (fluid-fluid) -- are coupled, we need API functions such as hasReadData and hasWriteData to distinguish the two solvers. Imagine they use a Dirichlet-Neumann coupling, meaning they are supposed to do different things.
hasData alone is not sufficient in such a situation.
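
A small self-contained illustration of the intended use (SolverInterface here is a minimal stub, not the actual preCICE class; hasReadData/hasWriteData are the proposed functions, and the data names "Velocity" and "Pressure" are placeholders): both instances run identical adapter code and discover their role from the configuration.

#include <set>
#include <string>

struct SolverInterface {
  std::set<std::string> readData, writeData; // filled from the XML configuration
  bool hasReadData (const std::string &name) const { return readData.count(name) > 0; }
  bool hasWriteData(const std::string &name) const { return writeData.count(name) > 0; }
};

void runAdapter(const SolverInterface &interface)
{
  if (interface.hasWriteData("Velocity") && interface.hasReadData("Pressure")) {
    // this instance plays the Dirichlet role
  } else if (interface.hasWriteData("Pressure") && interface.hasReadData("Velocity")) {
    // this instance plays the Neumann role
  }
}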

Check timings for different mapping configurations

A long time ago, I ran an investigation of the load balancing by running the same test case with a fixed number of processes for one participant and then adapting the number of processes for the other participant. The timings were quite nice, and you could clearly see that for some configurations one participant was idling and for other configurations the other participant was idling. You could find the appropriate number of processes to minimize the overall runtime and maximize the efficiency.

[image: timings plot]

I reran the test case with the develop branch, changeset 617c435 (HEAD -> develop, origin/develop, origin/HEAD) "Add an explicit flush to the EventTimings" (11 days ago).
Now, the timings spent in advance (measured with EventTimings.log) are really bad and you do not see any difference between the configurations:

[image: timings plot]

I did not change the test case; I only reran the tests. The data mapping is done by the participant acoustic. I copied the test case to the shared folder: shared_exafsa_work/loadbalancing

Second observation from my side:

I ran a 2D test case with a rather small interface (2 * 1500 points, matching grid). When I run the test case with data mapping on each participant (both do a read consistent), it is quite fast, and for a specific number of processors for each domain, I get a restart file (0.5 s of simulation time) in almost 5 minutes. When I change the data mapping to only one participant (read consistent, write conservative), for the same number of processes, one restart file now takes more than 2 h! EventTimings.log also states that around 80% of the time is spent in advance.

# Run finished at Mon Jan 25 09:40:37 2016
# Eventname Count Total Max Min Avg T%
"GLOBAL" 1 217797 217797 217797 217797 100
"M2N::acceptMasterConnection" 1 16516 16516 16516 16516 7
"MasterSlave::broadcast" 2 0 0 0 0 0
"advance" 2000 176169 259 41 88 80
"broadcast mesh" 1 0 0 0 0 0
"feedback mesh" 1 21 21 21 21 0
"filter mesh" 1 0 0 0 0 0
"initialize" 1 17701 17701 17701 17701 8
"initializeData" 1 13 13 13 13 0
"receive global mesh" 1 12 12 12 12 0

# Run finished at Mon Jan 25 09:40:37 2016
# Eventname Count Total Max Min Avg T%
"GLOBAL" 1 215630 215630 215630 215630 100
"M2N::requestMasterConnection" 1 13 13 13 13 0
"M2N::requestMasterConnection/Publisher::read" 1 13 13 13 13 0
"MasterSlave::broadcast" 2 0 0 0 0 0
"advance" 2000 173611 112 44 86 80
"gather mesh" 1 67 67 67 67 0
"initialize" 1 1223 1223 1223 1223 0
"initialize/PointToPointCommunication::requestConnection/request/Publisher::read" 1 130 130 130 130 0
"initialize/PointToPointCommunication::requestConnection/synchronize/Publisher::read" 1 28 28 28 28 0
"initializeData" 1 74 74 74 74 0
"send global mesh" 1 0 0 0 0 0 

I think this is really weird. Can you test the timing and data mapping? Was there any big change?

No library routine should use MPI_COMM_WORLD directly

Moved from Munich GitLab. Original creator: Alexander Shukaev

Direct excerpt from MPI documentation:

[...] no library routine should use MPI_COMM_WORLD as the communicator; instead, a duplicate of a user-specified communicator should always be used. For more information, see Using MPI, 2nd edition.

NOTE: This applies to MPI_COMM_SELF and friends as well.

I would also confirm that, in general, any communication done by the library should be isolated from consumer (application) code by effectively privatizing the communication context (see MPI_Comm_dup).
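
A minimal sketch of that privatization (the function names are placeholders, not preCICE code): the library duplicates whatever communicator the application hands in and performs all internal communication on the private copy, so library messages can never match sends or receives posted by the application.

#include <mpi.h>

static MPI_Comm libraryComm = MPI_COMM_NULL;

void libraryInit(MPI_Comm userComm)
{
  // Duplicate once; all library traffic now lives in an isolated communication context.
  MPI_Comm_dup(userComm, &libraryComm);
}

void libraryFinalize()
{
  if (libraryComm != MPI_COMM_NULL)
    MPI_Comm_free(&libraryComm);
}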

RBF mapping error: argument out of range

commit 36390c9

setup: repository http://vmbungartz6.informatik.tu-muenchen.de/davidsblom/openfoam-fsi
branch dealii

I did not apply any custom patches to precice, just the commit as mentioned at the top. Maybe it's related to issue #33 but I'm not sure.

The following error is thrown when running the 3dTube tutorial with PETSc thin-plate-spline interpolation for the fluid solver:

 | 10:48:17     | precice::geometry::BroadcastFilterDecomposition::filter() | Filter mesh Structure_Nodes
 | 10:48:17     | precice::geometry::BroadcastFilterDecomposition::feedback() | Feedback mesh Structure_Nodes
 | 10:48:17     | precice::impl::SolverInterfaceImpl::initialize()        | Setting up slaves communication to coupling partner/s

 | 10:48:18     | precice::impl::SolverInterfaceImpl::initialize()        | Slaves are connected
 | 10:48:18     | precice::impl::SolverInterfaceImpl::initialize()        | it 1 of 200 | dt# 1 of 100 | t 0 | dt 0.0001 | max dt 0.0001 | ongoing yes | dt complete no | write-iteration-checkpoint | 
 | 10:48:18     | precice::impl::SolverInterfaceImpl::mapWrittenData()    | Compute write mapping from mesh "Fluid_CellCenters" to mesh "Structure_CellCenters".
[1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[1]PETSC ERROR: Argument out of range
[1]PETSC ERROR: Out of range index value 1603 maximum 1600
[1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[1]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015 
[1]PETSC ERROR: precice on a x86_64 named MacBookAir by davidblom Thu Dec 24 10:48:17 2015
[1]PETSC ERROR: Configure options --with-shared-libraries=1 --with-x=0 --with-mpi=1 --download-hypre=1 --download-mumps --download-scalapack --download-ptscotch --download-suitesparse
[1]PETSC ERROR: #1 VecSetValues_MPI() line 921 in /home/davidblom/repositories/openfoam/davidblom-3.1-ext/src/thirdParty/petsc-3.6.3/src/vec/vec/impls/mpi/pdvec.c
[1]PETSC ERROR: #2 VecSetValuesLocal() line 1059 in /home/davidblom/repositories/openfoam/davidblom-3.1-ext/src/thirdParty/petsc-3.6.3/src/vec/vec/interface/rvector.c

preCICE configuration file (using the PETSc-based RBF mapping):

<?xml version="1.0"?>

<precice-configuration>

    <log-filter component="" target="debug" switch="off" />
    <log-filter component="" target="info" switch="on" />
    <log-output column-separator=" | " log-time-stamp="no" log-time-stamp-human-readable="yes" log-machine-name="no" log-message-type="no" log-trace="yes" />

    <solver-interface dimensions="3">

        <data:vector name="Stresses" />
        <data:vector name="Displacements" />

        <mesh name="Fluid_Nodes">
            <use-data name="Displacements" />
        </mesh>

        <mesh name="Fluid_CellCenters">
            <use-data name="Stresses" />
        </mesh>

        <mesh name="Structure_Nodes">
            <use-data name="Displacements" />
        </mesh>

        <mesh name="Structure_CellCenters">
            <use-data name="Stresses" />
        </mesh>

        <participant name="Fluid_Solver">
            <use-mesh name="Fluid_Nodes" provide="yes" />
            <use-mesh name="Fluid_CellCenters" provide="yes" />
            <use-mesh name="Structure_Nodes" from="Structure_Solver" />
            <use-mesh name="Structure_CellCenters" from="Structure_Solver" />
            <write-data mesh="Fluid_CellCenters" name="Stresses" />
            <read-data mesh="Fluid_Nodes" name="Displacements" />
            <mapping:petrbf-thin-plate-splines direction="write" from="Fluid_CellCenters" to="Structure_CellCenters" constraint="conservative" timing="initial"/>
            <mapping:petrbf-thin-plate-splines direction="read" from="Structure_Nodes" to="Fluid_Nodes" constraint="consistent" timing="initial"/>
            <master:mpi-single />
        </participant>

        <participant name="Structure_Solver">
            <use-mesh name="Structure_Nodes" provide="yes"/>
            <use-mesh name="Structure_CellCenters" provide="yes"/>
            <write-data mesh="Structure_Nodes" name="Displacements" />
            <read-data mesh="Structure_CellCenters" name="Stresses" />
            <master:mpi-single />
        </participant>

        <m2n:sockets exchange-directory="../" from="Fluid_Solver" to="Structure_Solver" />
        <!-- OSX: <m2n:sockets network="lo0" exchange-directory="../" from="Fluid_Solver" to="Structure_Solver" />-->
        <!-- supermuc: <m2n:sockets network="ib0" exchange-directory="../" from="Fluid_Solver" to="Structure_Solver" />-->

        <coupling-scheme:serial-implicit>
            <timestep-length value="1.0e-4" />
            <max-timesteps value="100" />
            <participants first="Fluid_Solver" second="Structure_Solver" />
            <exchange data="Stresses" from="Fluid_Solver" mesh="Structure_CellCenters" to="Structure_Solver" />
            <exchange data="Displacements" from="Structure_Solver" mesh="Structure_Nodes" to="Fluid_Solver" />
            <relative-convergence-measure limit="1.0e-3" data="Displacements" mesh="Structure_Nodes" suffices="0" />
            <max-iterations value="200" />
            <extrapolation-order value="2" />

            <post-processing:IQN-ILS>
                <data mesh="Structure_Nodes" name="Displacements" />
                <initial-relaxation value="0.001" />
                <max-used-iterations value="200" />
                <timesteps-reused value="2" />
                <filter type="QR1" limit="1e-8" />
            </post-processing:IQN-ILS>

        </coupling-scheme:serial-implicit>

    </solver-interface>

</precice-configuration>

scons with python version != 2.7

The following code in SConstruct should be changed

# Creates a symlink that always points to the latest build
symlink = env.Command(
    target = "Symlink",
    source = None,
    action = "ln -fns {} {}".format(os.path.split(buildpath)[-1], os.path.join(os.path.split(buildpath)[0], "last"))
)

to

# Creates a symlink that always points to the latest build
symlink = env.Command(
    target = "Symlink",
    source = None,
    action = "ln -fns {0} {1}".format(os.path.split(buildpath)[-1], os.path.join(os.path.split(buildpath)[0], "last"))
)

This change should work with every Python version, including Python 3.0, since explicitly numbered replacement fields ({0}, {1}) do not rely on the automatic field numbering that was only added in Python 2.7 and 3.1.

Changeset contributed by @thijsgillebaart

Strange results from EventTimings

I got strange timings for BroadcastFilterDecomposition::feedback:

  if (utils::MasterSlave::_slaveMode) {
    utils::MasterSlave::_communication->send(numberOfVertices,0);
    if (numberOfVertices!=0) {
      utils::MasterSlave::_communication->send(filteredVertexPositions.data(),numberOfVertices,0);
    }
  }
  else { // Master
    seed.getVertexDistribution()[0] = filteredVertexPositions;
    for (int rankSlave = 1; rankSlave < utils::MasterSlave::_size; rankSlave++){
      int numberOfVertices = -1;
      utils::MasterSlave::_communication->receive(numberOfVertices,rankSlave);
      std::vector<int> slaveVertexIDs(numberOfVertices,-1);
      if (numberOfVertices!=0) {
        Event e("geo::feedback, receive data " + std::to_string(rankSlave));
        utils::MasterSlave::_communication->receive(slaveVertexIDs.data(),numberOfVertices,rankSlave);
        e.stop();
      }
      seed.getVertexDistribution()[rankSlave] = slaveVertexIDs;
    }
  }

I get: 32 * 100 ms and 96 * 0 ms.

If I change the line to:

    Event e("geo::feedback, receive data " + std::to_string(rankSlave) + " " + std::string(5));

I get 128 * 0 ms (which makes much more sense).

@floli any idea what happens?

Change MasterSlave com to MPIDirectCom

... without accept/request.
Advantage: we can do slave-slave communication (e.g., for the cyclic communication); also, it is much nicer.
Disadvantage: master-slave over sockets is no longer possible.
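
A rough sketch of the underlying idea in plain MPI (not preCICE's communication classes; function and parameter names are placeholders): all ranks of one participant share a communicator split off from a global one, so master-slave and slave-slave messages go over the same direct MPI communicator without any accept/request handshake.

#include <mpi.h>

MPI_Comm createParticipantComm(MPI_Comm world, int participantColor)
{
  MPI_Comm participantComm = MPI_COMM_NULL;
  // Ranks with the same color end up in one communicator; rank 0 of it can act as master,
  // and any two ranks of the participant can also communicate with each other directly.
  MPI_Comm_split(world, participantColor, /*key=*/0, &participantComm);
  return participantComm;
}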

assertion thrown for fluid-structure-acoustics interaction

When using the PETSc-based RBF interpolation, an assertion is thrown:

assertion in file src/utils/PointerVector.hpp, line 90 failed: index < _content.size()
ateles: src/utils/PointerVector.hpp:90: CONTENT_T& precice::utils::ptr_vector<CONTENT_T>::operator[](std::size_t) [with CONTENT_T = precice::mesh::Vertex; std::size_t = long unsigned int]: Assertion `false' failed.

This is the same problem I sent an email about last week. Not crucial for the papers, but I guess it needs to be fixed.

Improve the quality of the source code with cppcheck

So, I dared to run the cppcheck (http://cppcheck.sourceforge.net/) tool to perform static code analysis. The tool gives a large number of hints which can be used to improve the quality of the source code. Note that the output probably includes a number of false positives, but I think it is worth looking into them.

With the following call cppcheck raised some possible errors in the code:

cppcheck --force --language=c++ . 2> cppcheck_error.log

Contents of cppcheck_error.log:

[m2n/PointToPointCommunication.cpp:483]: (error) Instance of 'Event' object is destroyed immediately.
[m2n/PointToPointCommunication.cpp:656]: (error) Instance of 'Event' object is destroyed immediately.
[mapping/config/MappingConfiguration.cpp:617]: (error) Memory leak: arg
[tarch/plotter/griddata/regular/CartesianGridArrayWriter.h:49]: (style) Class 'DataSet' is unsafe, 'DataSet::_data' can leak by wrong usage.
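
To illustrate the first two errors above (the Event class here is a simplified stand-in for the EventTimings timer, not the preCICE implementation): constructing the scoped timer as an unnamed temporary destroys it at the end of the statement, so nothing is actually timed.

#include <chrono>
#include <iostream>
#include <string>

// Minimal stand-in: a scoped timer that reports its lifetime on destruction.
struct Event {
  explicit Event(std::string name)
      : _name(std::move(name)), _start(std::chrono::steady_clock::now()) {}
  ~Event() {
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - _start).count();
    std::cout << _name << ": " << ms << " ms\n";
  }
  std::string _name;
  std::chrono::steady_clock::time_point _start;
};

int main() {
  Event("send");   // cppcheck warning: temporary destroyed immediately, times nothing
  Event e("send"); // named object lives until the end of main and times the work below
  // ... work to be timed ...
}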

I used the following call in precice/src to do the analysis, including warnings:

cppcheck --force --enable=all --language=c++ . 2> cppcheck.log

These are the contents of cppcheck.log:

[utils/GeometryComputations.hpp:366]: (style) The scope of the variable 'diff' can be reduced.
[utils/GeometryComputations.hpp:364]: (warning) Return value of function abs() is not used.
[tarch/la/DynamicVector.h:80]: (performance) Function parameter 'stdvector' should be passed by reference.
[mesh/Mesh.hpp:212]: (performance) Function parameter 'subIDName' should be passed by reference.
[action/config/ActionConfiguration.cpp:168] -> [action/config/ActionConfiguration.cpp:169]: (performance) Variable 'doc' is reassigned a value before the old one has been used.
[action/config/ActionConfiguration.cpp:186] -> [action/config/ActionConfiguration.cpp:187]: (performance) Variable 'doc' is reassigned a value before the old one has been used.
[com/MPIDirectCommunication.hpp:138]: (style) Unused private function: 'MPIDirectCommunication::getGroupID'
[com/MPIPortsCommunication.cpp:25]: (warning) Member variable 'MPIPortsCommunication::_portName' is not initialized in the constructor.
[com/tests/CommunicateMeshTest.cpp:67]: (warning) Redundant code: Found a statement that begins with numeric constant.
[com/tests/FileCommunicationTest.hpp:49]: (style) Unused private function: 'FileCommunicationTest::testSimpleSendReceive'
[com/tests/FileCommunicationTest.hpp:54]: (style) Unused private function: 'FileCommunicationTest::testMultipleExchanges'
[com/tests/MPIDirectCommunicationTest.cpp:88]: (style) Variable 'nameProcess0' is assigned a value that is never used.
[com/tests/MPIDirectCommunicationTest.cpp:89]: (style) Variable 'nameProcess1' is assigned a value that is never used.
[com/tests/MPIDirectCommunicationTest.cpp:90]: (style) Variable 'nameProcess2' is assigned a value that is never used.
[com/tests/MPIDirectCommunicationTest.cpp:46]: (warning) Redundant code: Found a statement that begins with numeric constant.
[cplscheme/BaseCouplingScheme.cpp:74]: (warning) Member variable 'BaseCouplingScheme::_couplingMode' is not initialized in the constructor.
[cplscheme/CompositionalCouplingScheme.cpp:67]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[cplscheme/CompositionalCouplingScheme.cpp:80]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[cplscheme/CompositionalCouplingScheme.cpp:144]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[cplscheme/CompositionalCouplingScheme.cpp:158]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[cplscheme/CompositionalCouplingScheme.cpp:474]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[cplscheme/CompositionalCouplingScheme.cpp:484]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[cplscheme/CompositionalCouplingScheme.cpp:498]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[cplscheme/CompositionalCouplingScheme.cpp:539]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[cplscheme/MultiCouplingScheme.cpp:153]: (style) The scope of the variable 'convergence' can be reduced.
[cplscheme/ParallelCouplingScheme.cpp:225]: (style) The scope of the variable 'convergence' can be reduced.
[cplscheme/SerialCouplingScheme.cpp:198]: (style) The scope of the variable 'convergence' can be reduced.
[cplscheme/SerialCouplingScheme.cpp:165]: (style) Variable 'values' is assigned a value that is never used.
[cplscheme/impl/ParallelMatrixOperations.hpp:329] -> [cplscheme/impl/ParallelMatrixOperations.hpp:330]: (performance) Variable 'summarizedBlocks' is reassigned a value before the old one has been used.
[cplscheme/impl/ParallelMatrixOperations.hpp:295]: (style) The scope of the variable 'local_row' can be reduced.
[cplscheme/impl/BaseQNPostProcessing.cpp:155]: (style) Variable 'unknowns' is assigned a value that is never used.
[cplscheme/impl/BaseQNPostProcessing.cpp:638]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[cplscheme/impl/HierarchicalAitkenPostProcessing.cpp:206]: (style) Variable 'treatedEntries' is assigned a value that is never used.
[cplscheme/impl/ParallelMatrixOperations.cpp:136]: (style) The scope of the variable 'local_row' can be reduced.
[cplscheme/impl/ParallelMatrixOperations.cpp:284]: (style) The scope of the variable 'local_row' can be reduced.
[cplscheme/impl/QRFactorization.cpp:304] -> [cplscheme/impl/QRFactorization.cpp:312]: (performance) Variable 'rho0' is reassigned a value before the old one has been used.
[cplscheme/impl/QRFactorization.cpp:418]: (style) The scope of the variable 'local_k' can be reduced.
[cplscheme/impl/QRFactorization.cpp:419]: (style) The scope of the variable 'local_uk' can be reduced.
[cplscheme/impl/QRFactorization.cpp:420]: (style) The scope of the variable 'global_uk' can be reduced.
[cplscheme/impl/QRFactorization.cpp:244]: (style) Variable 'err' is assigned a value that is never used.
[cplscheme/tests/DummyCouplingScheme.hpp:92]: (style) Consecutive return, break, continue, goto or throw statements are unnecessary.
[cplscheme/tests/ExplicitCouplingSchemeTest.cpp:52]: (warning) Redundant code: Found a statement that begins with numeric constant.
[cplscheme/tests/ParallelImplicitCouplingSchemeTest.cpp:69]: (warning) Redundant code: Found a statement that begins with numeric constant.
[cplscheme/tests/SerialImplicitCouplingSchemeTest.cpp:550]: (style) The scope of the variable 'initialStepsizeData0' can be reduced.
[cplscheme/tests/SerialImplicitCouplingSchemeTest.cpp:551]: (style) The scope of the variable 'stepsizeData0' can be reduced.
[cplscheme/tests/SerialImplicitCouplingSchemeTest.cpp:904]: (style) The scope of the variable 'initialStepsizeData0' can be reduced.
[cplscheme/tests/SerialImplicitCouplingSchemeTest.cpp:905]: (style) The scope of the variable 'stepsizeData0' can be reduced.
[cplscheme/tests/SerialImplicitCouplingSchemeTest.cpp:982]: (style) Variable 'stepsizeData0' is assigned a value that is never used.
[cplscheme/tests/SerialImplicitCouplingSchemeTest.cpp:67]: (warning) Redundant code: Found a statement that begins with numeric constant.
[cplscheme/tests/SerialImplicitCouplingSchemeTest.cpp:608]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[cplscheme/tests/SerialImplicitCouplingSchemeTest.cpp:683]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[cplscheme/tests/SerialImplicitCouplingSchemeTest.cpp:961]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[cplscheme/tests/SerialImplicitCouplingSchemeTest.cpp:1050]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[geometry/Bubble.cpp:49]: (style) The scope of the variable 'x' can be reduced.
[geometry/Bubble.cpp:49]: (style) The scope of the variable 'y' can be reduced.
[geometry/Bubble.hpp:88]: (style) Unused private function: 'Bubble::getVertex'
[geometry/Bubble.hpp:94]: (style) Unused private function: 'Bubble::getEdge'
[geometry/Bubble.hpp:100]: (style) Unused private function: 'Bubble::getNumberLongitudinalElements'
[geometry/CommunicatedGeometry.cpp:58]: (performance) Possible inefficient checking for '_receivers' emptiness.
[io/impl/VRML10Parser.hpp:408]: (style) Variable 'valid' is assigned a value that is never used.
[geometry/Sphere.cpp:45]: (style) The scope of the variable 'latitude' can be reduced.
[geometry/Sphere.cpp:45]: (style) The scope of the variable 'currentRadius' can be reduced.
[geometry/Sphere.hpp:87]: (style) Unused private function: 'Sphere::getNumberLongitudinalElements'
[geometry/impl/BroadcastFilterDecomposition.cpp:36]: (style) Unused variable: boundingVertexDistribution
[io/ImportVRML.cpp:102]: (style) Unused variable: indices
[io/TXTTableWriter.cpp:79]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[io/TXTTableWriter.cpp:98]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[io/TXTTableWriter.cpp:119]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[io/TXTTableWriter.cpp:140]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[io/impl/VRML10Parser.hpp:395]: (style) Variable 'preciceMethodName' is assigned a value that is never used.
[io/tests/ExportAndReimportVRMLTest.hpp:49]: (style) Unused private function: 'ExportAndReimportVRMLTest::testReimportDriftRatchet'
[m2n/PointToPointCommunication.cpp:483]: (error) Instance of 'Event' object is destroyed immediately.
[m2n/PointToPointCommunication.cpp:656]: (error) Instance of 'Event' object is destroyed immediately.
[m2n/PointToPointCommunication.cpp:390]: (performance) Possible inefficient checking for 'communicationMap' emptiness.
[m2n/PointToPointCommunication.cpp:549]: (performance) Possible inefficient checking for 'communicationMap' emptiness.
[m2n/PointToPointCommunication.cpp:712]: (performance) Possible inefficient checking for 'communicationMap' emptiness.
[m2n/tests/PointToPointCommunicationTest.cpp:186]: (warning) Return value of function rand() is not used.
[m2n/tests/PointToPointCommunicationTest.cpp:201]: (warning) Return value of function rand() is not used.
[mapping/config/MappingConfiguration.cpp:617]: (error) Memory leak: arg
[mapping/petnum.cpp:134] -> [mapping/petnum.cpp:135]: (performance) Variable 'ierr' is reassigned a value before the old one has been used.
[mapping/petnum.cpp:192] -> [mapping/petnum.cpp:193]: (performance) Variable 'ierr' is reassigned a value before the old one has been used.
[mapping/tests/NearestNeighborMappingTest.cpp:53]: (warning) Redundant code: Found a statement that begins with numeric constant.
[mesh/Mesh.cpp:218]: (performance) Function parameter 'subIDName' should be passed by reference.
[mesh/tests/MeshTest.cpp:291] -> [mesh/tests/MeshTest.cpp:292]: (performance) Variable 'coords2' is reassigned a value before the old one has been used.
[mesh/tests/MeshTest.cpp:252]: (warning) Redundant code: Found a statement that begins with numeric constant.
[mesh/tests/MeshTest.cpp:253]: (warning) Redundant code: Found a statement that begins with numeric constant.
[mesh/tests/MeshTest.cpp:254]: (warning) Redundant code: Found a statement that begins with numeric constant.
[mesh/tests/MeshTest.cpp:345]: (warning) Redundant code: Found a statement that begins with numeric constant.
[mesh/tests/MeshTest.cpp:346]: (warning) Redundant code: Found a statement that begins with numeric constant.
[mesh/tests/MeshTest.cpp:347]: (warning) Redundant code: Found a statement that begins with numeric constant.
[precice/MeshHandle.hpp:96]: (style) 'class EdgeIterator' does not have a copy constructor which is recommended since the class contains a pointer to allocated memory.
[precice/MeshHandle.hpp:152]: (style) 'class TriangleIterator' does not have a copy constructor which is recommended since the class contains a pointer to allocated memory.
[precice/MeshHandle.cpp:81]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/MeshHandle.cpp:87]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/MeshHandle.cpp:165]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/MeshHandle.cpp:240]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/VoxelPosition.cpp:98]: (warning) 'operator=' should check for assignment to self to avoid problems with dynamic memory.
[precice/config/SolverInterfaceConfiguration.cpp:145]: (style) The scope of the variable 'participantFound' can be reduced.
[precice/config/SolverInterfaceConfiguration.cpp:161]: (style) Variable 'participantFound' is assigned a value that is never used.
[precice/impl/RequestManager.cpp:717]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/impl/RequestManager.cpp:718]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/impl/RequestManager.cpp:1128]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/impl/RequestManager.cpp:1129]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/impl/RequestManager.cpp:1151]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/impl/RequestManager.cpp:1152]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:283]: (style) The scope of the variable 'i' can be reduced.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:1742]: (style) The scope of the variable 'vertexID' can be reduced.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:582]: (style) Variable 'time' is assigned a value that is never used.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:693]: (style) Variable 'time' is assigned a value that is never used.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:46]: (warning) Redundant code: Found a statement that begins with numeric constant.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:294]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:300]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:307]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:329]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:337]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:343]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:449]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:456]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:475]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:513]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:524]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:532]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:629]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:631]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:633]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:635]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:672]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:674]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:676]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:678]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:719]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/couplingmode/SolverInterfaceTest.cpp:740]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1279] -> [precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1284]: (performance) Variable 'voxelPos' is reassigned a value before the old one has been used.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:966]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:970]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:974]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:978]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:984]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:987]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:990]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:993]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1038]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1042]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1046]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1050]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1056]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1059]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1062]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1065]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1068]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1075]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1079]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1118]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1122]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1126]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1130]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1136]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1139]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1142]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1145]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1148]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1155]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/geometrymode/SolverInterfaceTestGeometry.cpp:1159]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:44]: (warning) Redundant code: Found a statement that begins with numeric constant.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:138]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:139]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:140]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:150]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:151]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:164]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:165]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:166]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:167]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:249]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:250]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:251]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:261]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:262]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:275]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:276]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:277]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[precice/tests/servermode/SolverInterfaceTestRemote.cpp:278]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[query/FindVoxelContent.cpp:413]: (style) The scope of the variable 'triangleMin' can be reduced.
[query/FindVoxelContent.cpp:414]: (style) The scope of the variable 'triangleMax' can be reduced.
[query/FindVoxelContent.cpp:85]: (warning) Return value of function abs() is not used.
[query/FindVoxelContent.cpp:87]: (warning) Return value of function abs() is not used.
[query/FindVoxelContent.cpp:938]: (warning) Return value of function abs() is not used.
[query/FindVoxelContent.cpp:940]: (warning) Return value of function abs() is not used.
[query/FindVoxelContent.cpp:992]: (warning) Return value of function abs() is not used.
[query/tests/GeometryTestScenarios.cpp:80] -> [query/tests/GeometryTestScenarios.cpp:84]: (performance) Variable 'distance' is reassigned a value before the old one has been used.
[spacetree/impl/DynamicTraversal.hpp:486]: (warning) Return value of function abs() is not used.
[spacetree/impl/DynamicTraversal.hpp:504]: (warning) Return value of function abs() is not used.
[spacetree/impl/DynamicTraversal.hpp:522]: (warning) Return value of function abs() is not used.
[spacetree/impl/DynamicTraversal.hpp:157]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[spacetree/impl/DynamicTraversal.hpp:158]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[spacetree/impl/DynamicTraversal.hpp:257]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[spacetree/impl/DynamicTraversal.hpp:258]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[spacetree/impl/StaticTraversal.hpp:604]: (warning) Return value of function abs() is not used.
[spacetree/impl/StaticTraversal.hpp:622]: (warning) Return value of function abs() is not used.
[spacetree/impl/StaticTraversal.hpp:640]: (warning) Return value of function abs() is not used.
[spacetree/impl/StaticTraversal.hpp:327]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[spacetree/impl/StaticTraversal.hpp:328]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[spacetree/impl/StaticTraversal.hpp:329]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[spacetree/impl/StaticTraversal.hpp:330]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[spacetree/impl/OctreeCell.hpp:28]: (style) 'class OctreeCell' does not have a copy constructor which is recommended since the class contains a pointer to allocated memory.
[spacetree/impl/PeanotreeCell2D.hpp:33]: (style) 'class PeanotreeCell2D' does not have a copy constructor which is recommended since the class contains a pointer to allocated memory.
[spacetree/impl/PeanotreeCell3D.hpp:50]: (style) 'class PeanotreeCell3D' does not have a copy constructor which is recommended since the class contains a pointer to allocated memory.
[tarch/argument/ArgumentSetFabric.cpp:68]: (style) Unused variable: ss
[tarch/irr/String.h:347]: (style) Array index 'i' is used before limits check.
[tarch/irr/String.h:361]: (style) Array index 'i' is used before limits check.
[tarch/irr/CXMLReaderImpl.h:811]: (style) Array index 'i' is used before limits check.
[tarch/la/tests/VectorTest.cpp:211] -> [tarch/la/tests/VectorTest.cpp:222]: (performance) Variable 'a' is reassigned a value before the old one has been used.
[tarch/la/tests/VectorTest.cpp:212] -> [tarch/la/tests/VectorTest.cpp:223]: (performance) Variable 'b' is reassigned a value before the old one has been used.
[tarch/la/tests/VectorTest.cpp:213] -> [tarch/la/tests/VectorTest.cpp:224]: (performance) Variable 'c' is reassigned a value before the old one has been used.
[tarch/la/tests/VectorTest.cpp:214] -> [tarch/la/tests/VectorTest.cpp:225]: (performance) Variable 'd' is reassigned a value before the old one has been used.
[tarch/la/tests/VectorTest.cpp:285]: (warning) Return value of function abs() is not used.
[tarch/la/tests/VectorTest.cpp:289]: (warning) Return value of function abs() is not used.
[tarch/la/tests/VectorTest.cpp:730]: (style) Unused variable: vectors
[tarch/logging/CommandLineLogger.cpp:139]: (warning) Member variable 'CommandLineLogger::_logTimeStamp' is not assigned a value in 'CommandLineLogger::operator='.
[tarch/logging/CommandLineLogger.cpp:139]: (warning) Member variable 'CommandLineLogger::_logTimeStampHumanReadable' is not assigned a value in 'CommandLineLogger::operator='.
[tarch/logging/CommandLineLogger.cpp:139]: (warning) Member variable 'CommandLineLogger::_logMachineName' is not assigned a value in 'CommandLineLogger::operator='.
[tarch/logging/CommandLineLogger.cpp:139]: (warning) Member variable 'CommandLineLogger::_logMessageType' is not assigned a value in 'CommandLineLogger::operator='.
[tarch/logging/CommandLineLogger.cpp:139]: (warning) Member variable 'CommandLineLogger::_logTrace' is not assigned a value in 'CommandLineLogger::operator='.
[tarch/logging/CommandLineLogger.cpp:139]: (warning) Member variable 'CommandLineLogger::_outputStream' is not assigned a value in 'CommandLineLogger::operator='.
[tarch/logging/CommandLineLogger.cpp:544]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[tarch/logging/configurations/LogOutputFormatConfiguration.cpp:11]: (warning) Member variable 'LogOutputFormatConfiguration::_logTimeStamp' is not initialized in the constructor.
[tarch/logging/configurations/LogOutputFormatConfiguration.cpp:11]: (warning) Member variable 'LogOutputFormatConfiguration::_logTimeStampHumanReadable' is not initialized in the constructor.
[tarch/logging/configurations/LogOutputFormatConfiguration.cpp:11]: (warning) Member variable 'LogOutputFormatConfiguration::_logMachineName' is not initialized in the constructor.
[tarch/logging/configurations/LogOutputFormatConfiguration.cpp:11]: (warning) Member variable 'LogOutputFormatConfiguration::_logMessageType' is not initialized in the constructor.
[tarch/logging/configurations/LogOutputFormatConfiguration.cpp:11]: (warning) Member variable 'LogOutputFormatConfiguration::_logTrace' is not initialized in the constructor.
[tarch/plotter/griddata/regular/CartesianGridArrayWriter.h:46]: (style) 'struct DataSet' does not have a copy constructor which is recommended since the class contains a pointer to allocated memory.
[tarch/plotter/griddata/regular/CartesianGridArrayWriter.h:49]: (style) Class 'DataSet' is unsafe, 'DataSet::_data' can leak by wrong usage.
[tarch/plotter/griddata/regular/CartesianGridArrayWriter.cpp:83]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[tarch/plotter/griddata/regular/CartesianGridArrayWriter.cpp:91]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[tarch/tests/TestCase.cpp:9]: (warning) Member variable 'TestCase::_error' is not initialized in the constructor.
[tarch/tests/TestCase.cpp:27]: (warning) Member variable 'TestCase::_error' is not initialized in the constructor.
[utils/EventTimings.cpp:33]: (performance) Variable 'name' is assigned in constructor body. Consider performing initialization in initialization list.
[utils/EventTimings.cpp:180]: (style) Obsolete function 'asctime' called. It is recommended to use the function 'strftime' instead.
[utils/Parallel.cpp:69]: (style) The scope of the variable 'severalGroups' can be reduced.
[utils/Publisher.cpp:1]: (information) Skipping configuration 'Parallel' since the value of 'Parallel' is unknown. Use -D if you want to check it. You can use -U to skip it explicitly.
[utils/Tracer.cpp:20]: (style) Variable 'preciceMethodName' is assigned a value that is never used.
[utils/Tracer.cpp:26]: (style) Variable 'preciceMethodName' is assigned a value that is never used.
[utils/tests/HelpersTest.cpp:39]: (style) Unused variable: doubleVector
[utils/tests/HelpersTest.cpp:49]: (style) Unused variable: intVector
[utils/tests/HelpersTest.cpp:60]: (style) Unused variable: vectorVector
[utils/tests/HelpersTest.cpp:71]: (style) Unused variable: doubleList
[utils/tests/HelpersTest.cpp:85]: (style) Unused variable: intList
[utils/tests/HelpersTest.cpp:99]: (style) Unused variable: vectorList
[utils/tests/HelpersTest.cpp:114]: (style) Unused variable: aMap
[utils/tests/HelpersTest.cpp:78]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[utils/tests/HelpersTest.cpp:80]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[utils/tests/HelpersTest.cpp:82]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[utils/tests/HelpersTest.cpp:92]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[utils/tests/HelpersTest.cpp:94]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[utils/tests/HelpersTest.cpp:96]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[utils/tests/HelpersTest.cpp:106]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[utils/tests/HelpersTest.cpp:108]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[utils/tests/HelpersTest.cpp:110]: (performance) Prefer prefix ++/-- operators for non-primitive types.
[utils/xml/XMLTag.cpp:565]: (style) Unused variable: subtagIter
[utils/xml/XMLTag.cpp:280]: (style) Throwing a copy of the caught exception instead of rethrowing the original exception.
[utils/xml/XMLTag.cpp:276]: (style) Exception should be caught by reference.
[utils/xml/XMLTag.cpp:773]: (style) Exception should be caught by reference.
[tarch/argument/ArgumentSet.cpp:30]: (style) The function 'addArgument' is never used.
[tarch/argument/ArgumentSetFabric.cpp:123]: (style) The function 'addArgumentSet' is never used.
[tarch/xmlwriter/XMLWriter.cpp:168]: (style) The function 'addComment' is never used.
[geometry/config/GeometryConfiguration.cpp:487]: (style) The function 'addGeometry' is never used.
[precice/config/ParticipantConfiguration.cpp:576]: (style) The function 'addParticipant' is never used.
[utils/EventTimings.cpp:70]: (style) The function 'addProp' is never used.
[mapping/petnum.cpp:96]: (style) The function 'arange' is never used.
[tarch/xmlwriter/XMLWriter.cpp:111]: (style) The function 'closeAlltags' is never used.
[tarch/xmlwriter/XMLWriter.cpp:99]: (style) The function 'closeLasttag' is never used.
[query/FindVoxelContent.cpp:885]: (style) The function 'computeIntersection' is never used.
[precice/VoxelPosition.cpp:129]: (style) The function 'contentHandle' is never used.
[tarch/multicore/BooleanSemaphore.cpp:31]: (style) The function 'continueWithTask' is never used.
[tarch/plotter/griddata/regular/CartesianGridArrayWriter.cpp:110]: (style) The function 'createCellDataWriter' is never used.
[tarch/xmlwriter/XMLWriter.cpp:124]: (style) The function 'createChild' is never used.
[mesh/Mesh.cpp:173]: (style) The function 'createPropertyContainer' is never used.
[tarch/xmlwriter/XMLWriter.cpp:53]: (style) The function 'createTag' is never used.
[tarch/xmlwriter/XMLWriter.cpp:76]: (style) The function 'createTagWithInformation' is never used.
[precice/Constants.cpp:46]: (style) The function 'dataVelocities' is never used.
[mapping/petnum.cpp:112]: (style) The function 'fill_with_randoms' is never used.
[tarch/argument/ArgumentSet.cpp:66]: (style) The function 'getArgumentAsCharPointer' is never used.
[tarch/argument/ArgumentSetFabric.cpp:103]: (style) The function 'getArgumentSet' is never used.
[tarch/argument/ArgumentSet.cpp:70]: (style) The function 'getArgumentSetName' is never used.
[cplscheme/impl/BaseQNPostProcessing.cpp:574]: (style) The function 'getDeletedColumns' is never used.
[mesh/Vertex.cpp:121]: (style) The function 'getGlobalIndex' is never used.
[com/MPIDirectCommunication.cpp:107]: (style) The function 'getGroupID' is never used.
[tarch/services/ServiceRepository.cpp:41]: (style) The function 'getListOfRegisteredServices' is never used.
[utils/Parallel.cpp:186]: (style) The function 'getLocalProcessRank' is never used.
[tarch/plotter/griddata/regular/CartesianGridArrayWriter_CellDataWriter.cpp:54]: (style) The function 'getMaxValue' is never used.
[tarch/plotter/griddata/regular/CartesianGridArrayWriter_CellDataWriter.cpp:49]: (style) The function 'getMinValue' is never used.
[geometry/Bubble.cpp:121]: (style) The function 'getNumberLongitudinalElements' is never used.
[tarch/multicore/cobra/Core.cpp:42]: (style) The function 'getNumberOfThreads' is never used.
[mesh/config/DataConfiguration.cpp:103]: (style) The function 'getRecentlyConfiguredData' is never used.
[tarch/multicore/cobra/Core.cpp:53]: (style) The function 'getScheduler' is never used.
[tarch/tests/TestCase.cpp:38]: (style) The function 'getTestCaseName' is never used.
[action/config/ActionConfiguration.cpp:328]: (style) The function 'getUsedMeshID' is never used.
[cplscheme/BaseCouplingScheme.cpp:634]: (style) The function 'getValidDigits' is never used.
[cplscheme/BaseCouplingScheme.cpp:332]: (style) The function 'getVertexOffset' is never used.
[tarch/logging/configurations/LogOutputFormatConfiguration.cpp:115]: (style) The function 'hasParsed' is never used.
[tarch/logging/CommandLineLogger.cpp:308]: (style) The function 'indent' is never used.
[tarch/logging/Log.cpp:57]: (style) The function 'infoMasterOnly' is never used.
[tarch/plotter/griddata/regular/CartesianGridArrayWriter.cpp:74]: (style) The function 'isOpen' is never used.
[mesh/Vertex.cpp:129]: (style) The function 'isOwner' is never used.
[utils/GeometryComputations.cpp:8]: (style) The function 'lineIntersection' is never used.
[cplscheme/impl/BroydenPostProcessing.cpp:145]: (style) The function 'performPPSecondaryData' is never used.
[tarch/plotter/griddata/unstructured/vtk/VTKTextFileWriter_CellWriter.cpp:86]: (style) The function 'plotLine' is never used.
[tarch/plotter/griddata/unstructured/vtk/VTKTextFileWriter_CellWriter.cpp:25]: (style) The function 'plotPoint' is never used.
[tarch/plotter/griddata/unstructured/vtk/VTKTextFileWriter_CellWriter.cpp:104]: (style) The function 'plotTriangle' is never used.
[cplscheme/impl/QRFactorization.cpp:674]: (style) The function 'popFront' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:106]: (style) The function 'precice_fastest_action_required_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:76]: (style) The function 'precice_fastest_advance_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:21]: (style) The function 'precice_fastest_create_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:91]: (style) The function 'precice_fastest_finalize_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:136]: (style) The function 'precice_fastest_fulfilled_action_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:173]: (style) The function 'precice_fastest_get_data_id_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:154]: (style) The function 'precice_fastest_get_mesh_id_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:48]: (style) The function 'precice_fastest_initialize_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:63]: (style) The function 'precice_fastest_initialize_data_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:334]: (style) The function 'precice_fastest_read_bsdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:299]: (style) The function 'precice_fastest_read_bvdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:352]: (style) The function 'precice_fastest_read_sdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:317]: (style) The function 'precice_fastest_read_vdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:194]: (style) The function 'precice_fastest_set_vertex_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:211]: (style) The function 'precice_fastest_set_vertices_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:264]: (style) The function 'precice_fastest_write_bsdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:229]: (style) The function 'precice_fastest_write_bvdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:282]: (style) The function 'precice_fastest_write_sdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFASTEST.cpp:247]: (style) The function 'precice_fastest_write_vdata_' is never used.
[precice/adapters/c/Constants.cpp:20]: (style) The function 'precicec_actionReadIterationCheckpoint' is never used.
[precice/adapters/c/Constants.cpp:30]: (style) The function 'precicec_actionReadSimulationCheckpoint' is never used.
[precice/adapters/c/Constants.cpp:15]: (style) The function 'precicec_actionWriteIterationCheckpoint' is never used.
[precice/adapters/c/Constants.cpp:25]: (style) The function 'precicec_actionWriteSimulationCheckpoint' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:36]: (style) The function 'precicec_advance' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:16]: (style) The function 'precicec_createSolverInterface' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:296]: (style) The function 'precicec_exportMesh' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:42]: (style) The function 'precicec_finalize' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:100]: (style) The function 'precicec_fulfilledAction' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:121]: (style) The function 'precicec_getDataID' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:48]: (style) The function 'precicec_getDimensions' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:107]: (style) The function 'precicec_getMeshID' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:160]: (style) The function 'precicec_getMeshVertexSize' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:138]: (style) The function 'precicec_getMeshVertices' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:114]: (style) The function 'precicec_hasData' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:30]: (style) The function 'precicec_initialize' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:90]: (style) The function 'precicec_isActionRequired' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:54]: (style) The function 'precicec_isCouplingOngoing' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:63]: (style) The function 'precicec_isCouplingTimestepComplete' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:72]: (style) The function 'precicec_isReadDataAvailable' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:81]: (style) The function 'precicec_isWriteDataRequired' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:290]: (style) The function 'precicec_mapReadDataTo' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:284]: (style) The function 'precicec_mapWriteDataFrom' is never used.
[precice/adapters/c/Constants.cpp:10]: (style) The function 'precicec_nameConfiguration' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:263]: (style) The function 'precicec_readBlockScalarData' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:242]: (style) The function 'precicec_readBlockVectorData' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:274]: (style) The function 'precicec_readScalarData' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:253]: (style) The function 'precicec_readVectorData' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:168]: (style) The function 'precicec_setMeshEdge' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:178]: (style) The function 'precicec_setMeshTriangle' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:189]: (style) The function 'precicec_setMeshTriangleWithEdges' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:128]: (style) The function 'precicec_setMeshVertex' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:149]: (style) The function 'precicec_setMeshVertices' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:221]: (style) The function 'precicec_writeBlockScalarData' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:200]: (style) The function 'precicec_writeBlockVectorData' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:232]: (style) The function 'precicec_writeScalarData' is never used.
[precice/adapters/c/SolverInterfaceC.cpp:211]: (style) The function 'precicec_writeVectorData' is never used.
[precice/adapters/fortran/ConstantsFortran.cpp:46]: (style) The function 'precicef_action_read_iter_checkp_' is never used.
[precice/adapters/fortran/ConstantsFortran.cpp:71]: (style) The function 'precicef_action_read_sim_checkp_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:120]: (style) The function 'precicef_action_required_' is never used.
[precice/adapters/fortran/ConstantsFortran.cpp:35]: (style) The function 'precicef_action_write_initial_data_' is never used.
[precice/adapters/fortran/ConstantsFortran.cpp:23]: (style) The function 'precicef_action_write_iter_checkp_' is never used.
[precice/adapters/fortran/ConstantsFortran.cpp:59]: (style) The function 'precicef_action_write_sim_checkp_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:57]: (style) The function 'precicef_advance_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:19]: (style) The function 'precicef_create_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:355]: (style) The function 'precicef_export_mesh_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:65]: (style) The function 'precicef_finalize_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:146]: (style) The function 'precicef_fulfilled_action_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:187]: (style) The function 'precicef_get_data_id_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:72]: (style) The function 'precicef_get_dims_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:157]: (style) The function 'precicef_get_mesh_id_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:169]: (style) The function 'precicef_has_data_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:43]: (style) The function 'precicef_initialize_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:51]: (style) The function 'precicef_initialize_data_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:347]: (style) The function 'precicef_map_read_data_to_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:339]: (style) The function 'precicef_map_write_data_from_' is never used.
[precice/adapters/fortran/ConstantsFortran.cpp:11]: (style) The function 'precicef_name_config_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:80]: (style) The function 'precicef_ongoing_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:318]: (style) The function 'precicef_read_bsdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:297]: (style) The function 'precicef_read_bvdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:107]: (style) The function 'precicef_read_data_available_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:329]: (style) The function 'precicef_read_sdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:308]: (style) The function 'precicef_read_vdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:222]: (style) The function 'precicef_set_edge_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:233]: (style) The function 'precicef_set_triangle_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:244]: (style) The function 'precicef_set_triangle_we_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:201]: (style) The function 'precicef_set_vertex_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:211]: (style) The function 'precicef_set_vertices_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:276]: (style) The function 'precicef_write_bsdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:255]: (style) The function 'precicef_write_bvdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:93]: (style) The function 'precicef_write_data_required_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:287]: (style) The function 'precicef_write_sdata_' is never used.
[precice/adapters/fortran/SolverInterfaceFortran.cpp:266]: (style) The function 'precicef_write_vdata_' is never used.
[m2n/PointToPointCommunication.cpp:230]: (style) The function 'printCommunicationPartnerCountStats' is never used.
[tarch/logging/CommandLineLogger.cpp:538]: (style) The function 'printFilterListToWarningDevice' is never used.
[m2n/PointToPointCommunication.cpp:281]: (style) The function 'printLocalIndexCountStats' is never used.
[cplscheme/impl/QRFactorization.cpp:653]: (style) The function 'pushBack' is never used.
[tarch/configuration/ConfigurationRegistry.cpp:151]: (style) The function 'readString' is never used.
[query/ExportVTKVoxelQueries.cpp:93]: (style) The function 'resetQueries' is never used.
[tarch/multicore/BooleanSemaphore.cpp:27]: (style) The function 'sendCurrentTaskToBack' is never used.
[mesh/Vertex.cpp:125]: (style) The function 'setGlobalIndex' is never used.
[mesh/Vertex.cpp:133]: (style) The function 'setOwner' is never used.
[mapping/petnum.cpp:238]: (style) The function 'set_column' is never used.
[io/SimulationStateIO.cpp:27]: (style) The function 'standardFileExtension' is never used.
[precice/impl/SolverInterfaceImpl.cpp:2328]: (style) The function 'syncTimestep' is never used.
[com/tests/FileCommunicationTest.cpp:142]: (style) The function 'testMultipleExchanges' is never used.
[io/tests/ExportAndReimportVRMLTest.cpp:164]: (style) The function 'testReimportDriftRatchet' is never used.
[com/tests/FileCommunicationTest.cpp:39]: (style) The function 'testSimpleSendReceive' is never used.
[utils/GeometryComputations.cpp:89]: (style) The function 'tetraVolume' is never used.
[mapping/petnum.cpp:156]: (style) The function 'view' is never used.
[tarch/logging/Log.cpp:71]: (style) The function 'warningMasterOnly' is never used.
[tarch/configuration/ConfigurationRegistry.cpp:105]: (style) The function 'writeDummyConfigFile' is never used.
[tarch/plotter/griddata/regular/cca/CCAGridArrayWriter.cpp:68]: (style) The function 'writeToCellArray' is never used.
[tarch/plotter/griddata/regular/cca/CCAGridArrayWriter.cpp:47]: (style) The function 'writeToVertexArray' is never used.
(information) Cppcheck cannot find all the include files (use --check-config for details)
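Two of the most frequent warning classes above (prefix increment for non-primitive types, catching exceptions by reference) have straightforward fixes. A minimal, self-contained sketch of both patterns (the iterator and exception here are illustrative, not taken from the code base):

    #include <exception>
    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main() {
      std::vector<int> values{1, 2, 3};

      // "Prefer prefix ++/-- operators for non-primitive types":
      // prefix increment avoids the temporary copy that postfix returns.
      for (auto iter = values.begin(); iter != values.end(); ++iter) {
        std::cout << *iter << '\n';
      }

      // "Exception should be caught by reference":
      // catching by (const) reference avoids slicing and an extra copy.
      try {
        throw std::runtime_error("example");
      } catch (const std::exception& e) {
        std::cout << e.what() << '\n';
      }
      return 0;
    }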

Factor out numerical helper functions

This is a request for discussion.

In order to get rid of the tarch package, I propose to factor out numerical helper functions into a new numerics package.

This includes functions like:

  • tarch::la::norm2
  • tarch::la::dot
  • Maybe the updatedQR stuff. That needs to be discussed with @scheufks.
  • Functions like tarch::la::equals

Code in that package should concentrate purely on the numerics and be as independent as possible.
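A minimal sketch of what such a numerics package could look like, assuming free functions on Eigen types (names and signatures are illustrative only, not an existing preCICE API):

    #include <Eigen/Core>
    #include <cmath>

    namespace precice {
    namespace numerics {

    // Candidate replacement for tarch::la::norm2.
    inline double norm2(const Eigen::VectorXd& v) {
      return v.norm();
    }

    // Candidate replacement for tarch::la::dot.
    inline double dot(const Eigen::VectorXd& a, const Eigen::VectorXd& b) {
      return a.dot(b);
    }

    // Candidate replacement for tarch::la::equals (comparison with tolerance).
    inline bool equals(double a, double b, double eps = 1e-14) {
      return std::abs(a - b) <= eps;
    }

    } // namespace numerics
    } // namespace precice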

PETSc RBF interpolation: preallocation error

test case: beamInCrossFlow
https://github.com/davidsblom/FOAM-FSI/tree/master/tutorials/fsi/beamInCrossFlow

fluid log file:
http://pastebin.com/bFyJeLDR

fluid log file for each CPU core:
http://pastebin.com/eh7LZ90w
http://pastebin.com/haP615Mg
http://pastebin.com/7ncej8ba

precice commit:

commit 64a517cb12ad9da159e65070cba9f56af04c6c3a                                                                                                                                                                    
Merge: a041d7d 0998416                                                                                                                                                                                             
Author: Klaudius Scheufele <[email protected]>                                                                                                                                                        
Date:   Mon Apr 18 15:57:23 2016 +0200

    Merge branch 'develop' of https://github.com/precice/precice into develop

error:

[2]PETSC ERROR: MatSeqAIJSetPreallocation_SeqAIJ() line 3567 in /home/davidblom/repositories/openfoam/davidblom-3.1-ext/src/thirdParty/petsc-3.6.4/src/mat/impls/aij/seq/aij.c
    nnz cannot be greater than row length: local row 0 value 51 rowlength 16

precice config:

<?xml version="1.0"?>                                                                                                                                                                                                                                                                                                                                                                       

<precice-configuration>                                                                                                                                                                                                                                                                                                                                                                     

    <log-filter component="" target="debug" switch="on" />                                                                                                                                                                                                                                                                                                                                  
    <log-filter component="" target="info" switch="on" />                                                                                                                                                                                                                                                                                                                                   
    <log-output column-separator=" | " log-time-stamp="no" log-time-stamp-human-readable="yes" log-machine-name="no" log-message-type="no" log-trace="yes" />                                                                                                                                                                                                                               

    <solver-interface dimensions="3">                                                                                                                                                                                                                                                                                                                                                       

        <data:vector name="Stresses" />                                                                                                                                                                                                                                                                                                                                                     
        <data:vector name="Displacements" />

        <mesh name="Fluid_Nodes">
            <use-data name="Displacements" />
        </mesh>

        <mesh name="Fluid_CellCenters">
            <use-data name="Stresses" />
        </mesh>

        <mesh name="Structure_Nodes">
            <use-data name="Displacements" />
        </mesh>

        <mesh name="Structure_CellCenters">
            <use-data name="Stresses" />
        </mesh>

        <participant name="Fluid_Solver">
            <use-mesh name="Fluid_Nodes" provide="yes" />
            <use-mesh name="Fluid_CellCenters" provide="yes" />
            <use-mesh name="Structure_Nodes" from="Structure_Solver" />
            <use-mesh name="Structure_CellCenters" from="Structure_Solver" />
            <write-data mesh="Fluid_CellCenters" name="Stresses" />
            <read-data mesh="Fluid_Nodes" name="Displacements" />
            <mapping:petrbf-thin-plate-splines direction="write" from="Fluid_CellCenters" to="Structure_CellCenters" constraint="conservative" timing="initial"/>
            <mapping:petrbf-thin-plate-splines direction="read" from="Structure_Nodes" to="Fluid_Nodes" constraint="consistent" timing="initial"/>
            <master:mpi-single />
        </participant>

        <participant name="Structure_Solver">
            <use-mesh name="Structure_Nodes" provide="yes"/>
            <use-mesh name="Structure_CellCenters" provide="yes"/>
            <write-data mesh="Structure_Nodes" name="Displacements" />
            <read-data mesh="Structure_CellCenters" name="Stresses" />
            <master:mpi-single />
        </participant>

        <m2n:sockets exchange-directory="../" from="Fluid_Solver" to="Structure_Solver" />

        <coupling-scheme:serial-implicit>
            <timestep-length value="1.0e-1" />
            <max-timesteps value="100" />
            <participants first="Fluid_Solver" second="Structure_Solver" />
            <exchange data="Stresses" from="Fluid_Solver" mesh="Structure_CellCenters" to="Structure_Solver" />
            <exchange data="Displacements" from="Structure_Solver" mesh="Structure_Nodes" to="Fluid_Solver" />
            <relative-convergence-measure limit="1.0e-3" data="Displacements" mesh="Structure_Nodes" suffices="0" />
            <max-iterations value="200" />
            <extrapolation-order value="2" />

            <post-processing:IQN-ILS>
                <data mesh="Structure_Nodes" name="Displacements" />
                <initial-relaxation value="0.001" />
                <max-used-iterations value="200" />
                <timesteps-reused value="2" />
                <filter type="QR1" limit="1e-8" />
            </post-processing:IQN-ILS>

        </coupling-scheme:serial-implicit>

    </solver-interface>

</precice-configuration>
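For reference, the PETSc error above means that the number of nonzeros requested per row during preallocation (51) exceeds the length of the local row block (16). A minimal, standalone sketch of the constraint (this is not the preCICE mapping code; the sizes are made up):

    #include <petscmat.h>

    int main(int argc, char** argv) {
      PetscInitialize(&argc, &argv, NULL, NULL);

      Mat A;
      PetscInt localRows = 16, localCols = 16;  // hypothetical local block size
      MatCreate(PETSC_COMM_SELF, &A);
      MatSetSizes(A, localRows, localCols, PETSC_DECIDE, PETSC_DECIDE);
      MatSetType(A, MATSEQAIJ);

      // An estimate of 51 nonzeros per row (e.g. from a large RBF support)
      // must be clamped to the local row length, otherwise PETSc aborts with
      // "nnz cannot be greater than row length".
      PetscInt estimate  = 51;
      PetscInt nnzPerRow = estimate > localCols ? localCols : estimate;
      MatSeqAIJSetPreallocation(A, nnzPerRow, NULL);

      MatDestroy(&A);
      PetscFinalize();
      return 0;
    }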

clean up parallel initialization

The placement of MPI_Initialize and similar calls is a mess at the moment and results in more and more problems. This is an attempt to clean all that up.

EventTimings.log separate log-outputs

I would like to have a separate log file for each coupling participant. It would be easier for the user to have different log outputs for each domain.

Assertion shown to user terminal output

@uekerman

Running the elastic1dtube example with post-processing IQN-ILS fails. The assertion is shown in the user's terminal output as:

too many iterations in orthogonalize, termination failed assertion in file src/cplscheme/impl/QRFactorization.cpp, line 228 failed: u(_cols-1) == rho parameter u.tail(1): 0.00000000000000000000e+00 parameter rho: 9.98982626785207548416e-06 StructureSolver: src/cplscheme/impl/QRFactorization.cpp:228: void precice::cplscheme::impl::QRFactorization::insertColumn(int, precice::cplscheme::impl::QRFactorization::EigenVector&): Assertion 'false' failed.

Deadlock problems for compositional cplschemes

Moved from Munich GitLab. Original creator: @uekerman

The unit tests give a deadlock for the 4 tests in src/cplscheme/tests/CompositionalCouplingSchemeTest.cpp::run(). This happens, however, only on some machines, including Benjamin's laptop or helium in Stuttgart. On atsccs31 everything works fine.

3 field coupling with Ateles

I am trying to run a 3-field coupling with Ateles, consisting of a Navier-Stokes (ns), an Euler (ee), and a linearized Euler (le) domain. Each coupling is one-way and parallel explicit.

What should happen

Ateles_ns: writes rho_ns, v_ns, p_ns
Ateles_ee: reads rho_ns, v_ns, p_ns; writes rho_ee, v_ee, p_ee
Ateles_le: reads rho_ee, v_ee, p_ee

What happens (running all domains with 2 mpirank) :

Ateles_ns --> computes 2 iterations, non-physical state (failing is fine, can be due to Ateles)
Ateles_ee --> stops directly after the start of the simulation loop
Ateles_le --> starts computing, runs for more than 200 iterations, then I stopped it manually

I have no idea why the Euler domain does not read the NS values from preCICE and why the linear Euler domain is running at all.
I copied the test case into the shared folder shared_exafsa_work/ns_ee_le, including the output files ns.out, ee.out, and le.out, which already contain debug statements from Ateles (what is read from and written to preCICE) as well as preCICE debug statements.

Assertion thrown in nearest-neighbor mapping

Currently looking into the weak scalability of the FSAI setup. I've run into an assertion in the fluid solver.

FOAM-FSI build: 3.2-4afb6f0b77be

Selecting dynamicFvMesh dynamicMotionSolverFvMesh
Selecting motion solver: RBFMeshMotionSolver
Radial Basis Function interpolation: Selecting RBF function: TPS
RBF mesh deformation settings:
    interpolation function = TPS
    interpolation polynomial term = 0
    interpolation cpu formulation = 0
    coarsening = 1
        coarsening tolerance = 1.000000000000e-03
        coarsening reselection tolerance = 1.000000000000e-01
        coarsening two-point selection = 0
Selecting thermodynamics package hPsiThermo<pureMixture<constTransport<specieThermo<hConstThermo<perfectGas>>>>>
Selecting RAS turbulence model laminar
 | 10:13:47     | precice::impl::SolverInterfaceImpl::configure()         | [PRECICE] Run in coupling mode
 | 10:13:47     | precice::impl::SolverInterfaceImpl::initializeMasterSlaveCom.() | Setting up communication to slaves
 | 10:13:49     | precice::impl::SolverInterfaceImpl::initialize()        | Setting up master communication to coupling partner/s
 | 10:13:49     | precice::impl::SolverInterfaceImpl::initialize()        | Coupling partner/s are connected
 | 10:13:49     | precice::geometry::CommunicatedGeometry::sendMesh()     | Gather mesh Fluid_Acoustics
 | 10:13:49     | precice::geometry::CommunicatedGeometry::sendMesh()     | Send global mesh Fluid_Acoustics
 | 10:13:49     | precice::geometry::CommunicatedGeometry::receiveMesh()  | Receive global mesh Structure_CellCenters
 | 10:13:49     | precice::geometry::BroadcastFilterDecomposition::broadcast() | Broadcast mesh Structure_CellCenters
 | 10:13:49     | precice::geometry::BroadcastFilterDecomposition::filter() | Filter mesh Structure_CellCenters
 | 10:13:49     | precice::geometry::BroadcastFilterDecomposition::feedback() | Feedback mesh Structure_CellCenters
assertion in file src/mapping/NearestNeighborMapping.cpp, line 54 failed: find.hasFound()
fsiFluidFoam: src/mapping/NearestNeighborMapping.cpp:54: virtual void precice::mapping::NearestNeighborMapping::computeMapping(): Assertion `false' failed.

Error while setting up connection between solvers running parallel master-slave mode

I recently tried to run preCICE for the 3dTube scenario in parallel using master-slave mode.
Apparently, the connection between the solvers fails with the following error message:

 | 13:16:47     | precice::impl::SolverInterfaceImpl::configure()         | [PRECICE] Run in coupling mode
 | 13:16:47     | precice::impl::SolverInterfaceImpl::configure()         | [PRECICE] Run in coupling mode
 | 13:16:47     | precice::impl::SolverInterfaceImpl::configure()         | [PRECICE] Run in coupling mode
 | 13:16:47     | precice::impl::SolverInterfaceImpl::initialize()        | Setting up master communication to coupling partner/s 
 | 13:16:47     | precice::impl::SolverInterfaceImpl::initialize()        | Setting up master communication to coupling partner/s 
 | 13:16:47     | precice::impl::SolverInterfaceImpl::initialize()        | Setting up master communication to coupling partner/s 
terminate called after throwing an instance of 'boost::filesystem::filesystem_error'
  what():  boost::filesystem::remove: No such file or directory: "../.Fluid_Solver-Structure_Solver.address"
 | 13:16:47     | precice::com::SocketCommunication::acceptConnection()   | (0)  [PRECICE] ERROR: Accepting connection at port 37889 failed: boost::filesystem::rename: No such file or directory: "../.Fluid_Solver-Structure_Solver.address~", "../.Fluid_Solver-Structure_Solver.address"

Indeed, there are no *.address files in the scenario directory. Also, if *.address files are created manually before the simulation is executed, they appear to be deleted during the initialization process.
The relevant configuration in the preCICE.xml is given below:

   <participant name="Fluid_Solver">
            <master:mpi-single/>
            <use-mesh name="Fluid_Nodes" provide="yes" />
            <use-mesh name="Fluid_CellCenters" provide="yes" />
            <use-mesh name="Structure_Nodes" from="Structure_Solver" />
            <use-mesh name="Structure_CellCenters" from="Structure_Solver" />
            <write-data mesh="Fluid_CellCenters" name="Stresses" />
            <read-data mesh="Fluid_Nodes" name="Displacements" />
            <mapping:nearest-projection direction="write" from="Fluid_CellCenters" to="Structure_CellCenters" constraint="conservative" timing="initial"/>
            <mapping:nearest-projection direction="read" from="Structure_Nodes" to="Fluid_Nodes" constraint="consistent" timing="initial"/>
        </participant>

        <participant name="Structure_Solver">
            <use-mesh name="Structure_Nodes" provide="yes"/>
            <use-mesh name="Structure_CellCenters" provide="yes"/>
            <write-data mesh="Structure_Nodes" name="Displacements" />
            <read-data mesh="Structure_CellCenters" name="Stresses" />
        </participant>

        <m2n:sockets port="0" from="Fluid_Solver" to="Structure_Solver"  exchange-directory="../"/>

Let me know if you can reproduce this behaviour and whether this is a bug in preCICE or due to wrong configuration/execution.
@Haroogan @floli @uekerman

Assertion in FileCommunication

I'm running with config

    <m2n:files from="A" to="B" />

    <participant name="A">
      <use-mesh name="MeshA" provide="yes" />
      <use-mesh name="MeshB" provide="no" from="B" />
      <write-data name="Data" mesh="MeshA" />
      <mapping:rbf-gaussian shape-parameter="1" constraint="consistent" direction="write" from="MeshA" to="MeshB" x-dead="false" y-dead="true" z-dead="false" />
       </participant>

    <participant name="B">
      <use-mesh name="MeshB" provide="yes" />
      <read-data name="Data" mesh="MeshB" />
      <!-- <export:vtk timestep-interval="1" directory="output/"/> -->
      <!-- <server:mpi-single /> -->
    </participant>

mpi=on, petsc=on, but the client is not being run with mpirun.

% ./pmpi A 
MPI rank 0 of 1
[PRECICE] Run in coupling mode
[...]
Setting up master communication to coupling partner/s 
Coupling partner/s are connected 
Receive global mesh MeshB
assertion in file src/com/FileCommunication.cpp, line 384 failed: _receiveFile.is_open()
pmpi: src/com/FileCommunication.cpp:384: virtual void precice::com::FileCommunication::receive(int &, int): Assertion `false' failed.
Run finished at Fri Jul 17 10:42:41 2015
Global runtime = 16ms / 0s

non-deterministic behavior of IMVJ PP due to error in Preconditioner

The IMVJ method shows non-deterministic behavior, i.e., non-deterministic convergence histories in parallel master-slave mode (for both the serial and the vectorial scheme). This problem is not observed in the serial case, where the solvers run on a single processor, nor for the ILS method.
Some tests have shown that the non-deterministic behavior is somehow induced by an error in the preconditioner. Commenting out all statements corresponding to the preconditioner solves the problem.
Note: This problem even exists for the serial system where the preconditioner always ought to be a no-op, i.e., identity.
TODO: Some further testing to localize the error (most probably some initialization problem; however, it is only present for the parallel case).

@uekerman

boost asio::write, full buffer leads to deadlock

Verena reported the following problem: if 2 meshes need to be exchanged at initialization (e.g. 2 consistent mappings in master mode), we get a deadlock for too big a mesh size.
After some research, I found that this is due to the non-blocking behavior of boost asio::write as soon as the send buffer is full. For small meshes this does not happen (e.g. the FSI cases @davidsblom did on SuperMUC), but for a moderate size this is already a problem. Verena used 3e6 vertices (on the coupling interface).
A possible workaround would be to somehow increase the buffer size. A short investigation showed that this is typically only possible with admin rights. We could try this on the MAC Cluster (Roland offered to help) or we could ask SuperMUC support.
A real solution would be to separate the mesh communication from the mesh decomposition in the implementation and use a blocking variant. This is some work and might overlap with the local mesh communication @floli. I am not sure if it is worth the hassle at the moment, but we should keep an eye on this.
Please note that simply changing the MeshContext ordering in SolverInterfaceImpl::initialize is not enough, as we need all provided meshes first for the preliminary mappings.
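For illustration, a blocking variant would be the composed boost::asio::write, which only returns once the whole buffer has been handed over, and which in turn requires the peer to read concurrently. A minimal loopback sketch (not the preCICE implementation; sizes and setup are made up) showing why a large transfer without a concurrent reader fills the kernel buffers and stalls:

    #include <boost/asio.hpp>
    #include <iostream>
    #include <thread>
    #include <vector>

    int main() {
      namespace asio = boost::asio;
      asio::io_service io;

      // Loopback connection: acceptor and client socket on the same machine.
      asio::ip::tcp::acceptor acceptor(io, asio::ip::tcp::endpoint(asio::ip::tcp::v4(), 0));
      asio::ip::tcp::socket server(io), client(io);
      std::thread accepting([&] { acceptor.accept(server); });
      client.connect(asio::ip::tcp::endpoint(asio::ip::address_v4::loopback(),
                                             acceptor.local_endpoint().port()));
      accepting.join();

      // A "mesh" much larger than a typical socket send buffer.
      std::vector<double> mesh(3000000, 1.0);

      // Without this concurrent reader, the blocking write below would stall
      // as soon as the kernel send/receive buffers are full. If both
      // participants first write their mesh and only read afterwards, that
      // mutual stall is the deadlock described above.
      std::thread reader([&] {
        std::vector<double> received(mesh.size());
        asio::read(server, asio::buffer(received));
      });

      // Composed blocking write: returns only after all bytes are written.
      asio::write(client, asio::buffer(mesh));
      reader.join();

      std::cout << "transferred " << mesh.size() << " doubles\n";
      return 0;
    }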

Tests fail with tcp_peer_send_blocking

Tests fail on asaru (Florian's laptop) as of 9bf1aba

% mpirun -n 4  ../build/last/binprecice test ../.ci-test-config.xml   ../src
[...]
 | precice::cplscheme::BaseCouplingScheme::measureConvergence | All converged
 | precice::cplscheme::BaseCouplingScheme::timestepCompleted() | Timestep completed
 | precice::cplscheme::BaseCouplingScheme::timestepCompleted() | Timestep completed
[asaru:14196] [[48539,0],0] tcp_peer_send_blocking: send() to socket 39 failed: Broken pipe (32)
[asaru:14196] [[48539,0],0] tcp_peer_send_blocking: send() to socket 39 failed: Broken pipe (32)
[asaru:14196] [[48539,0],0] tcp_peer_send_blocking: send() to socket 39 failed: Broken pipe (32)
[...message repeated quite often ...]

First bad commit according to git bisect is cd79684 from @scheufks

% git bisect log
git bisect start
# bad: [9bf1aba81988a38b73b299039465fbd8ba9c20d2] Merge branch 'develop' into parallelRBF_i1
git bisect bad 9bf1aba81988a38b73b299039465fbd8ba9c20d2
# good: [4d5c886377bc9d44ce7709026257ab0f78f39f61] CI: Enable unit tests for MPI=on
git bisect good 4d5c886377bc9d44ce7709026257ab0f78f39f61
# skip: [7c7e3b506bc28579891f92f35b5e863fe3171a1a] fixed initialization of convergence measures isCoarse attribute in case no PP is defined
git bisect skip 7c7e3b506bc28579891f92f35b5e863fe3171a1a
# bad: [782c4b9cd63f66903736aa93c5f882c2089686ac] parallel matrix multiplication for IMVJ is encapsulated in class ParallelMatrixoperations. Implementation provided for Tarch and Eigen, all tests are added and IMVJ yields idendical results.
git bisect bad 782c4b9cd63f66903736aa93c5f882c2089686ac
# good: [ca6e93416b74458b4c6ce85646cde7cbb9ef5705] Merge branch 'PP_master-slave' into develop
git bisect good ca6e93416b74458b4c6ce85646cde7cbb9ef5705
# good: [5db0f87659c410fb8185d8958818b985e013e290] Bugfix in debug output.
git bisect good 5db0f87659c410fb8185d8958818b985e013e290
# bad: [cd796844e2cf7ab19d89de6f77a27f6c0515769b] fixed some bugs with iqn-imvj and empty procs
git bisect bad cd796844e2cf7ab19d89de6f77a27f6c0515769b
# skip: [6048b7ba73c9d7ddd9990e63537850d87ef6aa93] implemented cyclicComm (not ready). Previous to develop merge
git bisect skip 6048b7ba73c9d7ddd9990e63537850d87ef6aa93
# skip: [f31c69d6e4546650641fd6a2c1af985d8b23a976] cyclic communication, but only with MPIPorts
git bisect skip f31c69d6e4546650641fd6a2c1af985d8b23a976
# skip: [333d87d5b2c8fb82459e17ee8e38f7d32f06eb94] Merge branch 'develop' into updatedQR
git bisect skip 333d87d5b2c8fb82459e17ee8e38f7d32f06eb94
# good: [670aeeb0a848b068dbb9ea5c3546227b053a0883] IQN-IMVJ master-slave without double check computation. Cleaned, working impl with MPIPorts and cyclic comm.
git bisect good 670aeeb0a848b068dbb9ea5c3546227b053a0883
# good: [dd7c08b9713d78b7396acd0aa3c3d62c1b34e4ca] merged updatedQR, i. e., master-slave implementation of IQN-IMVJ into PP_master-slave
git bisect good dd7c08b9713d78b7396acd0aa3c3d62c1b34e4ca
# first bad commit: [cd796844e2cf7ab19d89de6f77a27f6c0515769b] fixed some bugs with iqn-imvj and empty procs

Uncommenting PostProcessingMasterSlaveTest::testVIQNIMVJpp fixes the problem.

VTK export should show warning if directory does not exist

      <export:vtk timestep-interval="1" directory="vtkA" normals="0"/>

vtkA does not exist, so no output is written.

A WARN should be issued if the target directory does not exist.

Alternatively the directory should be created.

Export was tested using a serial case.
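A minimal sketch of the requested check, assuming boost::filesystem is available (the function name is hypothetical and this is not the actual export code path):

    #include <boost/filesystem.hpp>
    #include <iostream>
    #include <string>

    // Warn if the configured export directory is missing; alternatively create it.
    void checkExportDirectory(const std::string& directory) {
      namespace fs = boost::filesystem;
      if (directory.empty() || fs::exists(directory)) {
        return;
      }
      std::cerr << "WARN: export directory \"" << directory
                << "\" does not exist; no VTK files will be written.\n";
      // Alternative behavior: create the directory (and parents) instead of warning.
      // fs::create_directories(directory);
    }

    int main() {
      checkExportDirectory("vtkA");
      return 0;
    }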

Hanging at initialize when both participants do mapping

When I use preCICE and both participants do the data mapping (read consistent), the simulation often hangs here:

Initialize preCICE
 | precice::impl::SolverInterfaceImpl::initialize()        | Setting up master communication to coupling partner/s
 | precice::impl::SolverInterfaceImpl::initialize()        | Coupling partner/s are connected
 | precice::geometry::CommunicatedGeometry::sendMesh()     | Gather mesh AcousticSurface_euler
 | precice::geometry::CommunicatedGeometry::sendMesh()     | Send global mesh AcousticSurface_euler
 | precice::geometry::CommunicatedGeometry::receiveMesh()  | Receive global mesh AcousticSurface_acoustic
 | precice::geometry::BroadcastFilterDecomposition::broadcast() | Broadcast mesh AcousticSurface_acoustic
 | precice::geometry::BroadcastFilterDecomposition::filter() | Filter mesh AcousticSurface_acoustic
 | precice::geometry::BroadcastFilterDecomposition::feedback() | Feedback mesh AcousticSurface_acoustic
 | precice::impl::SolverInterfaceImpl::initialize()        | Setting up slaves communication to coupling partner/s
 | precice::impl::SolverInterfaceImpl::initialize()        | Slaves are connected
 | precice::impl::SolverInterfaceImpl::initialize()        | it 1 of 1 | dt# 1 of 200000000 | t 0 | dt 1e-05 | max dt 1e-05 | ongoing yes | dt complete no | write-initial-data |

and the acoustic domain:

 Initialize preCICE
 | precice::impl::SolverInterfaceImpl::initialize()        | Setting up master communication to coupling partner/s
 | precice::impl::SolverInterfaceImpl::initialize()        | Coupling partner/s are connected
 | precice::geometry::CommunicatedGeometry::sendMesh()     | Gather mesh AcousticSurface_acoustic
 | precice::geometry::CommunicatedGeometry::sendMesh()     | Send global mesh AcousticSurface_acoustic
 | precice::geometry::CommunicatedGeometry::receiveMesh()  | Receive global mesh AcousticSurface_euler
 | precice::geometry::BroadcastFilterDecomposition::broadcast() | Broadcast mesh AcousticSurface_euler
 | precice::geometry::BroadcastFilterDecomposition::filter() | Filter mesh AcousticSurface_euler
 | precice::geometry::BroadcastFilterDecomposition::feedback() | Feedback mesh AcousticSurface_euler
 | precice::impl::SolverInterfaceImpl::initialize()        | Setting up slaves communication to coupling partner/s
 | precice::impl::SolverInterfaceImpl::initialize()        | Slaves are connected
 | precice::impl::SolverInterfaceImpl::initialize()        | it 1 of 1 | dt# 1 of 200000000 | t 0 | dt 1e-05 | max dt 1e-05 | ongoing yes | dt complete no | write-initial-data |

It is a rather small test case, 2D with 2*1500 points at the interfaces, matching grids.

Using debug flags, this is the output where it stops:

precice::com::SocketCommunication::acceptConnectionAsServer() | (72) Leaving (file:src/utils/Tracer.cpp,line:27)
 | precice::com::SocketCommunication::getRemoteCommunicatorSize() | (72) Entering  (file:src/utils/Tracer.cpp,line:21)
 | precice::com::SocketCommunication::getRemoteCommunicatorSize() | (72) Leaving (file:src/utils/Tracer.cpp,line:27)
 | precice::com::SocketCommunication::receive(int)         | (72) Entering rankSender=0 (file:src/utils/Tracer.cpp,line:21)
 | precice::com::SocketCommunication::receive(int)         | (72) Leaving (file:src/utils/Tracer.cpp,line:27)
 | precice::m2n::PointToPointCommunication::acceptConnection() | (72) Leaving (file:src/utils/Tracer.cpp,line:27)
 | precice::m2n::M2N::acceptSlavesConnection()             | (72) Leaving (file:src/utils/Tracer.cpp,line:27)
 | precice::cplscheme::ParallelCouplingScheme::initialize() | (72) Entering startTime=0, startTimestep=1 (file:src/utils/Tracer.cpp,line:21)
 | precice::cplscheme::ParallelCouplingScheme::initialize() | (72) Leaving (file:src/utils/Tracer.cpp,line:27)
 | precice::utils::Parallel::synchronizeProcesses()        | (72) Entering  (file:src/utils/Tracer.cpp,line:21)

I figured out that for jobs where both participants use fewer than 64 processes, it runs fine. The strange part is that with the same executable, same solver input files, same preCICE config, and same job script, it sometimes runs.

I worked a lot with Mohammed Shaheen (IBM support at LRZ) on this, but from the machine side he could not find any problem.

I am pretty sure that the problem is due to the data mapping on one participant. When I change the read-consistent mapping to a write-conservative mapping on the other participant, I never have a problem at that point.

This is quite urgent, since I want to run simulations :) and the data mapping on one participant is really slow (--> issue 43).

Coloring for Logging

It would be very helpful if the logging output were colored, e.g. the (info) message itself in a different color than the timestamp or the function name. There should be a config option to switch this on and off, because with simple editors (vim) the color codes destroy readability. The default, however, should be with color.
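A minimal sketch of ANSI-escape coloring with an on/off switch, assuming a plain std::ostream sink (the option name and colors are illustrative; the real switch would live in the log-output configuration):

    #include <iostream>
    #include <string>

    // Wrap a log field in an ANSI color code if coloring is enabled.
    std::string colorize(const std::string& text, const std::string& ansiCode, bool useColor) {
      if (!useColor) {
        return text;  // plain output, safe for simple editors like vim
      }
      return "\033[" + ansiCode + "m" + text + "\033[0m";
    }

    int main() {
      bool useColor = true;  // would come from a config option, default on

      // Timestamp dimmed, function name cyan, message in the default color.
      std::cout << colorize("10:13:49", "2", useColor) << " | "
                << colorize("precice::impl::SolverInterfaceImpl::initialize()", "36", useColor)
                << " | Coupling partner/s are connected\n";
      return 0;
    }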

floating point exception after `isCouplingTimestepComplete()`

test case: Gauss pulse, fluid-acoustics coupling from OpenFOAM to Ateles.

OpenFOAM uses 4 cores, Ateles uses 2 cores.

fluid debug output on pastebin:
http://pastebin.com/bSSzspkf

ateles debug output on pastebin:
http://pastebin.com/k3M664sC

fluid output without debug info:

/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | foam-extend: Open Source CFD                    |
|  \\    /   O peration     | Version:     3.2                                |
|   \\  /    A nd           | Web:         http://www.foam-extend.org         |
|    \\/     M anipulation  | For copyright notice see file Copyright         |
\*---------------------------------------------------------------------------*/
Build    : 3.2-77225f292d3c
Exec     : fsiFluidFoam -parallel
Date     : Apr 25 2016
Time     : 13:41:27
Host     : tud276993
PID      : 30260
CtrlDict : "/home/davidblom/Downloads/gaussPulse/fluid/system/controlDict"
Case     : /home/davidblom/Downloads/gaussPulse/fluid
nProcs   : 4
Slaves : 
3
(
tud276993.30261
tud276993.30262
tud276993.30265
)

Pstream initialized with:
    nProcsSimpleSum   : 16
    commsType         : blocking
SigFpe   : Enabling floating point exception trapping (FOAM_SIGFPE).

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

FOAM-FSI build: 3.2-3c209c7ce1cb

--> FOAM Warning : 
    From function dlLibraryTable::open(const fileName& functionLibName)
    in file db/dlLibraryTable/dlLibraryTable.C at line 124
    could not load libRBFMeshRigidMeshMotionSolver.so: cannot open shared object file: No such file or directory
Selecting dynamicFvMesh dynamicMotionSolverFvMesh
Selecting motion solver: RBFMeshMotionSolver
Radial Basis Function interpolation: Selecting RBF function: TPS
RBF mesh deformation settings:
    interpolation function = TPS
    interpolation polynomial term = 0
    interpolation cpu formulation = 0
    coarsening = 1
        coarsening tolerance = 1.00000000000000004792e-04
        coarsening reselection tolerance = 1.00000000000000005551e-01
        coarsening two-point selection = 0
Selecting thermodynamics package hPsiThermo<pureMixture<constTransport<specieThermo<hConstThermo<perfectGas>>>>>
Selecting RAS turbulence model laminar
 | precice::impl::SolverInterfaceImpl::configure()         | [PRECICE] Run in coupling mode
 | precice::impl::SolverInterfaceImpl::initializeMasterSlaveCom.() | Setting up communication to slaves
Fluid-Acoustics interface: 1600 points
 | precice::impl::SolverInterfaceImpl::initialize()        | Setting up master communication to coupling partner/s 
 | precice::impl::SolverInterfaceImpl::initialize()        | Coupling partner/s are connected 
 | precice::geometry::CommunicatedGeometry::sendMesh()     | Gather mesh Fluid_Acoustics
 | precice::geometry::CommunicatedGeometry::sendMesh()     | Send global mesh Fluid_Acoustics
 | precice::impl::SolverInterfaceImpl::initialize()        | Setting up slaves communication to coupling partner/s 
 | precice::impl::SolverInterfaceImpl::initialize()        | Slaves are connected
 | precice::impl::SolverInterfaceImpl::initialize()        | it 1 of 1 | dt# 1 of 900 | t 0 | dt 1e-05 | max dt 1e-05 | ongoing yes | dt complete no | write-initial-data | 
ExecutionTime = 1.90000000000000002220e-01 s  ClockTime = 0 s


Time = 1e-05

Time = 1e-05, iteration = 1
Solve fluid domain
RBF interpolation coarsening: selected 10/1722 points, 2-norm(error) = 6.81642023792279433468e-01, max(error) = 1.49780574190370299238e+00, tol = 1.00000000000000004792e-04
timing mesh deformation = 3.42098412999999990092e-01s
DILUPBiCG:  Solving for h, Initial residual = 2.78484356578988829517e-04, Final residual = 3.41663103722144800524e-24, No Iterations 1
BiCGStab:  Solving for Up, Initial residual = (2.82028011290960134677e-01 2.92624295930971645944e-01 2.92624295930971649820e-01 1.00282986918937358699e-06), Final residual = (1.67981650515470707463e-06 3.67328778472242319623e-06 3.67328778472235743889e-06 1.92239995113824926514e-08), No Iterations 1
DILUPBiCG:  Solving for h, Initial residual = 9.99999863477133412869e-01, Final residual = 3.02359734280827498409e-14, No Iterations 1
BiCGStab:  Solving for Up, Initial residual = (3.03624542003550974456e-06 6.64064704272517772459e-06 6.64064704270564503108e-06 8.98792612886358158101e-06), Final residual = (2.81932197831041266414e-10 2.45263280305393457464e-10 2.45262898387666687035e-10 2.55390807679839808111e-10), No Iterations 2
DILUPBiCG:  Solving for h, Initial residual = 1.22758473572417956445e-01, Final residual = 4.69395641933803856816e-16, No Iterations 1
BiCGStab:  Solving for Up, Initial residual = (2.88990266949013410995e-10 8.30741082498606213032e-09 8.30741210537268868938e-09 2.59031345986879262187e-06), Final residual = (4.69509057582642974806e-10 1.15855374631720114779e-09 1.15855374596005206405e-09 2.74332926827072201134e-10), No Iterations 1
DILUPBiCG:  Solving for h, Initial residual = 3.00557936044426224835e-02, Final residual = 1.09332009161956894981e-16, No Iterations 1
BiCGStab:  Solving for Up, Initial residual = (4.69533768812290084477e-10 8.79653215893829388882e-09 8.79653217679796757929e-09 7.39499238736618041057e-07), Final residual = (9.22223579726122582393e-14 7.51855986283240608543e-13 7.51855986843817350419e-13 6.26523923579625549748e-14), No Iterations 2
DILUPBiCG:  Solving for h, Initial residual = 8.25988257045241437594e-03, Final residual = 2.05521731676520882996e-17, No Iterations 1
BiCGStab:  Solving for Up, Initial residual = (2.19433609491772946467e-13 8.31886065546379431572e-09 8.31885975715776861359e-09 2.11596144908098709578e-07), Final residual = (3.22932656762629590569e-14 1.69516030597709069571e-12 1.69516030659791931589e-12 1.11036567489219202545e-14), No Iterations 2
time step continuity errors : sum local = 0.00000000000000000000e+00, global = 0.00000000000000000000e+00, cumulative = 0.00000000000000000000e+00
 | precice::impl::SolverInterfaceImpl::advance()           | Iteration #1
 | precice::impl::SolverInterfaceImpl::advance()           | it 1 of 1 | dt# 2 of 900 | t 1e-05 | dt 1e-05 | max dt 1e-05 | ongoing yes | dt complete yes | 
[tud276993:30260] *** Process received signal ***
[tud276993:30260] Signal: Floating point exception (8)
[tud276993:30260] Signal code:  (-6)
[tud276993:30260] Failing at address: 0x3e800007634
[tud276993:30260] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0xf8d0) [0x7f6c070db8d0]
[tud276993:30260] [ 1] /lib/x86_64-linux-gnu/libpthread.so.0(raise+0x2b) [0x7f6c070db79b]
[tud276993:30260] [ 2] /lib/x86_64-linux-gnu/libpthread.so.0(+0xf8d0) [0x7f6c070db8d0]
[tud276993:30260] [ 3] fsiFluidFoam() [0x40ac99]
[tud276993:30260] [ 4] fsiFluidFoam() [0x407cc1]
[tud276993:30260] [ 5] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f6c06d42b45]
[tud276993:30260] [ 6] fsiFluidFoam() [0x4085fe]
[tud276993:30260] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 30260 on node tud276993 exited on signal 8 (Floating point exception).
--------------------------------------------------------------------------

precice configuration:

<?xml version="1.0"?>

<precice-configuration>

   <log-filter target="info" component="" switch="on" />
   <log-filter target="debug" component="" switch="on" />

    <log-output column-separator=" | " log-time-stamp="no"
                log-time-stamp-human-readable="no" log-machine-name="no"
                log-message-type="no" log-trace="yes"/>

   <solver-interface dimensions="3" restart-mode="off" geometry-mode="off">

       <data:scalar name="Acoustics_Density"/>
       <data:scalar name="Acoustics_Velocity_X"/>
       <data:scalar name="Acoustics_Velocity_Y"/>
       <data:scalar name="Acoustics_Velocity_Z"/>
       <data:scalar name="Acoustics_Pressure"/>

       <mesh name="Fluid_Acoustics">
         <use-data name="Acoustics_Density"/>
         <use-data name="Acoustics_Velocity_X"/>
         <use-data name="Acoustics_Velocity_Y"/>
         <use-data name="Acoustics_Velocity_Z"/>
         <use-data name="Acoustics_Pressure"/>
       </mesh>

       <mesh name="AcousticSurface_Ateles">
         <use-data name="Acoustics_Density"/>
         <use-data name="Acoustics_Velocity_X"/>
         <use-data name="Acoustics_Velocity_Y"/>
         <use-data name="Acoustics_Velocity_Z"/>
         <use-data name="Acoustics_Pressure"/>
       </mesh>

       <participant name="Fluid_Solver">
         <use-mesh name="Fluid_Acoustics" provide="yes"/>
         <write-data mesh="Fluid_Acoustics" name="Acoustics_Density"/>
         <write-data mesh="Fluid_Acoustics" name="Acoustics_Velocity_X"/>
         <write-data mesh="Fluid_Acoustics" name="Acoustics_Velocity_Y"/>
         <write-data mesh="Fluid_Acoustics" name="Acoustics_Velocity_Z"/>
         <write-data mesh="Fluid_Acoustics" name="Acoustics_Pressure"/>
         <master:mpi-single />
       </participant>

       <participant name="Ateles_acoustic">
         <use-mesh name="AcousticSurface_Ateles" provide="yes"/>
         <use-mesh name="Fluid_Acoustics" from="Fluid_Solver"/>
         <read-data name="Acoustics_Density" mesh="AcousticSurface_Ateles"/>
         <read-data name="Acoustics_Velocity_X" mesh="AcousticSurface_Ateles"/>
         <read-data name="Acoustics_Velocity_Y" mesh="AcousticSurface_Ateles"/>
         <read-data name="Acoustics_Velocity_Z" mesh="AcousticSurface_Ateles"/>
         <read-data name="Acoustics_Pressure" mesh="AcousticSurface_Ateles"/>
         <mapping:nearest-neighbor direction="read" from="Fluid_Acoustics" to="AcousticSurface_Ateles" constraint="consistent" timing="initial"/>
         <!--<mapping:petrbf-thin-plate-splines direction="read" from="Fluid_Acoustics" to="AcousticSurface_Ateles" constraint="consistent" timing="initial"/>-->
         <master:mpi-single />
       </participant>

      <m2n:sockets exchange-directory="../" from="Fluid_Solver" to="Ateles_acoustic" distribution-type="gather-scatter"/>

      <coupling-scheme:parallel-explicit>
        <participants first="Fluid_Solver" second="Ateles_acoustic"/>
        <max-timesteps value="900"/>
        <timestep-length value="1e-5"/>
        <exchange data="Acoustics_Density" from="Fluid_Solver" to="Ateles_acoustic" mesh="Fluid_Acoustics" initialize="yes"/>
        <exchange data="Acoustics_Velocity_X" from="Fluid_Solver" to="Ateles_acoustic" mesh="Fluid_Acoustics" initialize="yes"/>
        <exchange data="Acoustics_Velocity_Y" from="Fluid_Solver" to="Ateles_acoustic" mesh="Fluid_Acoustics" initialize="yes"/>
        <exchange data="Acoustics_Velocity_Z" from="Fluid_Solver" to="Ateles_acoustic" mesh="Fluid_Acoustics" initialize="yes"/>
        <exchange data="Acoustics_Pressure" from="Fluid_Solver" to="Ateles_acoustic" mesh="Fluid_Acoustics" initialize="yes"/>
      </coupling-scheme:parallel-explicit>

   </solver-interface>

</precice-configuration>
