grinsfem / grins

Multiphysics Finite Element package built on libMesh

Home Page: http://grinsfem.github.io

License: Other

Languages: C++ 93.17%, M4 3.02%, Shell 1.88%, Makefile 1.73%, Python 0.19%, HTML 0.01%
Topics: libmesh, fem, multiphysics, amr, hpc, adjoints, sensitivity-analysis, quantities-of-interest

grins's Introduction

GRINS

General Reacting Incompressible Navier-Stokes (GRINS) was initiated to house common modeling work centered on the incompressible and variable-density (low-Mach) Navier-Stokes equations, built on the libMesh finite element library and supporting both MPI and MPI+threads parallelism as provided by libMesh. GRINS has since become a tool for rapidly developing formulations and algorithms for the solution of complex multiphysics applications. GRINS originated within the PECOS center at the Institute for Computational Engineering and Sciences (ICES) at The University of Texas at Austin.

We encourage pull requests for new features, bug fixes, etc. For questions regarding development, we have a grins-devel Google group set up. For user-related questions, please use the grins-users group.

Dependencies

Requirements

In addition to a C++17 compiler, GRINS requires an up-to-date installation of the libMesh finite element library.

libMesh

GRINS development both drives and is driven by libMesh development. Thus, the required minimum master hash of libMesh may change in GRINS master. The current required libMesh master hash is f27eba2 (PR #3270), as of GRINS PR #620. GRINS release 0.5.0 can use libMesh versions as old as 0.9.4; releases after 0.5.0 require at least libMesh 1.0.0.

Optional Packages

To enable the reacting low Mach Navier-Stokes physics class, GRINS must be compiled with an external chemistry library. Both Cantera and Antioch are fully supported.

The current required minimum hash for using Antioch is libantioch/antioch@e17822d (libantioch/antioch#265).

Building GRINS

GRINS uses an Autotools build system, so the typical GNU build commands apply. We support, and encourage, out-of-source builds (so-called VPATH builds), although in-source builds are also supported.

  1. cd grins-clone
  2. ./bootstrap (generates the configure script)
  3. cd ../ && mkdir build && cd build
  4. ../grins-clone/configure --prefix=/path/to/install --with-libmesh=/path/to/libMesh (for more options, run ../grins-clone/configure --help)
  5. make (note parallel builds are supported)
  6. make check (note parallel-tests are supported)
  7. make install

LD_LIBRARY_PATH

If you've compiled libMesh with PETSc or other external libraries, and have compiled GRINS with Antioch, Cantera, or other external libraries, you will need to add their library directories to your LD_LIBRARY_PATH, as we do not use -rpath when linking against them.
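
For example (the paths below are illustrative and depend on where PETSc, Antioch, etc. are installed):

export LD_LIBRARY_PATH=/path/to/petsc/lib:/path/to/antioch/lib:$LD_LIBRARY_PATH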

METHOD

By default, GRINS leverages the METHOD environment variable (described in the libMesh documentation) in order to retrieve the CXXFLAGS variable from the libMesh installation (if METHOD is not set, the default is "opt"). Note that, unlike libMesh, GRINS currently supports building only one METHOD at a time; hence we use METHOD and not METHODS. For example,


./configure METHOD=devel

is valid.

The user can define their own CXXFLAGS variable by passing


--disable-libmesh-flags CXXFLAGS="your flags here"

to configure.

Examples

Upon running make install, several examples are installed in the /path/to/install/examples directory. Each example can be run with its local run.sh script. You may set the environment variable GRINS_RUN to run with more than one processor, e.g. export GRINS_RUN="mpiexec -np 4". Additionally, you can set the environment variable GRINS_SOLVER_OPTIONS to pass solver options, e.g. to use MUMPS through PETSc (if you built libMesh with PETSc and built PETSc with MUMPS), export GRINS_SOLVER_OPTIONS="-ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps".
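
For example, to run one of the installed examples in parallel with direct-solver options (the example directory name here is hypothetical; the solver options assume libMesh was built with PETSc and PETSc with MUMPS):

cd /path/to/install/examples/cavity_benchmark
export GRINS_RUN="mpiexec -np 4"
export GRINS_SOLVER_OPTIONS="-ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps"
./run.sh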

grins's People

Contributors

abhishekmshra, bboutkov, cahaynes, gdmcbain, jwpeterson, klbudzin, koomie, nicholasmalaya, onkarsahni, pbauman, roystgnr, tradowsk, vikramvgarg


grins's Issues

Investigate cavity benchmark failure for different thermodynamic pressure formulation

The cavity benchmark is currently using the following equation to solve for the thermodynamic pressure:

$P_{th} = P_0 \left( \int_{\Omega} \frac{1}{T_0} \, dx \right) \left( \int_{\Omega} \frac{1}{T} \, dx \right)^{-1}$

This is what Becker and Braack use, and it works fine for us. However, if instead the "full" formulation is used (where mass flux across the boundary is allowed), then the cavity benchmark doesn't converge. Figure out why.

Note that an input option needs to be added to switch between the formulations; right now I'm relying on commenting/recompiling for this (if the thermodynamic pressure needs to be computed, that is).

Implement CompositeQoI

We have a need now for handling multiple QoIs at once.

  • CompositeQoI should derive from DifferentiableQoI and the CompositeQoI object is what we pass to FEMSystem::attach_qoi
  • CompositeQoI should stash an std::vector<QoIBase*>
  • Each of the CompositeQoI methods should, in turn, loop over the vector of QoIBase* and call the appropriate method.
  • Need to have something like an add_qoi method to supply a new QoIBase object.

This probably ought to go into libMesh; will run it by @roystgnr once done.
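
A minimal sketch of the delegation pattern described above, using simplified standalone types rather than the actual libMesh/GRINS signatures (in GRINS, CompositeQoI would derive from libMesh's DifferentiableQoI and the methods would take context arguments):

#include <vector>

// Stand-in for the GRINS QoI interface; the real interface has more
// methods (element_qoi_derivative, side_qoi, etc.) that delegate the
// same way.
struct QoIBase
{
  virtual ~QoIBase() {}
  virtual void element_qoi( double& qoi_value ) = 0;
};

class CompositeQoI // in GRINS: public libMesh::DifferentiableQoI
{
public:
  ~CompositeQoI()
  {
    for( unsigned int q = 0; q < _qois.size(); q++ )
      delete _qois[q];
  }

  // Supply a new QoIBase object; CompositeQoI takes ownership.
  void add_qoi( QoIBase* qoi )
  { _qois.push_back(qoi); }

  // Each DifferentiableQoI method loops over the stored QoIs and
  // calls the corresponding method on each one in turn.
  void element_qoi( std::vector<double>& qoi_values )
  {
    qoi_values.resize( _qois.size() );
    for( unsigned int q = 0; q < _qois.size(); q++ )
      _qois[q]->element_qoi( qoi_values[q] );
  }

private:
  std::vector<QoIBase*> _qois;
};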

Reacting low Mach stabilization

Title says it all. We need to implement a stabilization scheme. The existing literature that I have found so far does steady only; start with that.

Generalize CatalyticWall boundary condition to handle different number of reactants and products

We assume the same number of reactants and products and use that assumption in several places; we need to remove it. The main consequence is that we have to rely on the user supplying the gamma value for each of the terms. This becomes error prone in the single-reactant, single-product case, since the gamma there is specified in terms of one of the reactions and you use "minus" to get the value of the other corresponding guy. It would be good to have that handled automatically, since I already screwed up trying to convert the gamma by hand in initial testing and lost mass conservation...

Explore use of Boost::units

This actually might not be as nasty to use as I thought. Thinking we could use it in the core residual and Jacobian parts to enforce correct units.

Parsing could be something like getting mu = 0.5 and then mu_unit = 'kg/m-s', then building up a list of units that are understood and internally using SI in the code. Then we get compile-time conversion and can input whatever units we wish. I'm sure more sophisticated parsing could be done, but that would be an easy start.

Not sure how it will play with other types though. For example, mu*phi[i][qp]*dphi[j][qp]: would dphi have to be declared unitless, or will it do The Right Thing? Also, this was quoted on the Boost::odeint page: "Using Boost.Units works nicely but compilation can be very time and memory consuming. For example the unit test for the usage of Boost.Units in odeint take up to 4 GB of memory at compilation."
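
As a minimal sketch of the compile-time checking (standard Boost.Units usage, independent of any GRINS API; the mu value mirrors the parsing example above):

#include <boost/units/quantity.hpp>
#include <boost/units/systems/si.hpp>
#include <boost/units/systems/si/io.hpp>
#include <iostream>

using namespace boost::units;

int main()
{
  // mu = 0.5 parsed with mu_unit = 'kg/m-s', i.e. Pa*s in SI
  quantity<si::dynamic_viscosity> mu = 0.5 * si::pascal * si::second;
  quantity<si::velocity> u = 2.0 * si::meter / si::second;
  quantity<si::length> L = 0.1 * si::meter;

  // Dimensions are checked at compile time: mu*u/L has units of
  // pressure, and assigning it to, e.g., quantity<si::force> would
  // fail to compile.
  quantity<si::pressure> tau = mu * u / L;

  std::cout << tau << std::endl; // prints the value with SI units, e.g. "10 Pa"
  return 0;
}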

Stampede parallel restart failures

I can run a GRINS-sov run from initialization perfectly fine, but restarts result in the following error:

Assembling the System
  Nonlinear solver DIVERGED at step 0 with residual Not-a-Number


This is only for mpirun np>1. Serial jobs are restarting correctly. I've been playing around with this quite a bit, and I'm just hitting my head on the wall here: I do think this is a bug of some kind.

I've tried to pare this down to a very simple example that can be run from the development queue. My initial.in takes only a few seconds to run with a single MPI task:

# Options related to all Physics
[Physics]

enabled_physics = 'IncompressibleNavierStokes IncompressibleNavierStokesAdjointStabilization HeatTransfer HeatTransferAdjointStabilization BoussinesqBuoyancy BoussinesqBuoyancyAdjointStabilization VelocityPenalty'

# Options for Incompressible Navier-Stokes physics
[./IncompressibleNavierStokes]

V_FE_family = 'LAGRANGE'
P_FE_family = 'LAGRANGE'

V_order = 'FIRST'
P_order = 'FIRST'

rho = '1.77'
# 'gold standard'
#mu = '1.846e-5'

mu = '1.846e-1'

# Boundary ids:
# k = bottom -> 0
# k = top    -> 5
# j = bottom -> 1
# j = top    -> 3
# i = bottom -> 4
# i = top    -> 2

bc_ids = '0 5'
bc_types = 'no_slip parsed_dirichlet'
bc_variables = 'na w'
bc_values = 'na 0.8'

pin_pressure = true
pin_location = '0.0 0.0 0.0'
pin_value = '0.0'

ic_ids = '0'
ic_types = 'parsed'
ic_variables = 'w'
ic_values = '(abs(x)<=2)*(abs(y)<=2)*0.001'

[../HeatTransfer]

T_FE_family = 'LAGRANGE'
T_order = 'FIRST'

# Boundary ids:
# k = bottom -> 0
# k = top    -> 5
# j = bottom -> 1
# j = top    -> 3
# i = bottom -> 4
# i = top    -> 2

bc_ids = '0 1 2 3 4'
bc_types = 'parsed_dirichlet parsed_dirichlet parsed_dirichlet parsed_dirichlet parsed_dirichlet'
bc_variables = 'T T T T T'
bc_values = '{340.0+(abs(x)<=2)*(abs(y)<=2)*30} {300.0} {300.0} {300.0} {300.0}'

ic_ids = '0'
ic_types = 'constant'
ic_variables = 'T'
ic_values = '300.0'

rho = '1.77'
Cp = '1004.9'

# 'gold standard'
#k = '2.624e-2'

k = '2.624'

[../BoussinesqBuoyancy]

# Reference temperature
T_ref = '300' #[K]

rho_ref = '1.77'

beta_T = '0.003333333333'

# Gravity vector
g = '0.0 0.0 -9.81' #[m/s^2]

[../VelocityPenalty]

# alpha := pi/3 radians
# tanalpha = sqrt(3)
# p := .864 meters
# r := sqrt(x^2+y^2)
# theta := atan2(y,x)
# gamma := atan((p/r-1)*tanalpha)
# phi := 3*pi/2+theta-gamma-alpha

#
# hybrid vane
#
penalty_function = '{r := sqrt(x^2+y^2); theta := atan2(y,x); gamma60 := pi/2 + theta -asin(1.47*sin(pi/3)/r); gamma10 := pi/2 + theta -asin(1.47*sin(pi/18)/r) ; 0.5*(tanh(-9*z + 0.1145)+1.0)*(r<1)*(r>0.5) * 1e2 * cos(gamma10) + 0.5*(tanh(9*z - 0.1145)+1.0)*(r<1)*(r>0.5)*(z<0.85) * 1e2 * cos(gamma60)}{r := sqrt(x^2+y^2); theta := atan2(y,x); gamma60 := pi/2 + theta -asin(1.47*sin(pi/3)/r); gamma10 := pi/2 + theta -asin(1.47*sin(pi/18)/r); 0.5*(tanh(-9*z + 0.1145)+1.0)*(r<1)*(r>0.5) * 1e2 * sin(gamma10) + 0.5*(tanh(9*z - 0.1145)+1.0)*(r<1)*(r>0.5)*(z<0.85) * 1e2 * sin(gamma60)}{0}'

[]

[Stabilization]

tau_constant_vel = '1.0'
tau_factor_vel = '1.0'

tau_constant_T = '1.0'
tau_factor_T = '1.0'

[]

[restart-options]

[]

# Mesh related options
[mesh-options]
mesh_option = create_3D_mesh
element_type = HEX8 
redistribute = '{x}{y}{z*tanh(2*z)}'

domain_x1_min = -3.0
domain_x1_max = 3.0
domain_x2_min = -3.0
domain_x2_max = 3.0
domain_x3_min = 0.0
domain_x3_max = 2.0

mesh_nx1 = '15' 
mesh_nx2 = '15' 
mesh_nx3 = '15'

# Options for time solvers
[unsteady-solver]
transient = 'true' 
theta = 1.0
n_timesteps = '10'
deltat = '0.6'
backtrack_deltat=2

#Linear and nonlinear solver options
[linear-nonlinear-solver]
max_nonlinear_iterations =  30
max_linear_iterations = 5000
continue_after_backtrack_failure='true'
relative_residual_tolerance = '1.0e-10'
verify_analytic_jacobians = 0.0
initial_linear_tolerance = 1.0e-10
use_numerical_jacobians_only = 'false'
require_residual_reduction = 'true'

# Visualization options
[vis-options]
output_vis = true
timesteps_per_vis = 10
vis_output_file_prefix = 'convection_cell' 
output_residual = 'false' 
output_format = 'mesh_only ExodusII xda'

# Options for print info to the screen
[screen-options]

system_name = 'ConvectionCell'

print_equation_system_info = true
print_mesh_info = true
print_log_info = true
solver_verbose = true
solver_quiet = false

print_element_jacobians = 'false'

[../VariableNames]
Temperature = 'T'
u_velocity = 'u'
v_velocity = 'v'
w_velocity = 'w'
pressure = 'p'
[]

Then, I restart with:


# Options related to all Physics
[Physics]

enabled_physics = 'IncompressibleNavierStokes IncompressibleNavierStokesAdjointStabilization HeatTransfer HeatTransferAdjointStabilization BoussinesqBuoyancy BoussinesqBuoyancyAdjointStabilization VelocityPenalty'

# Options for Incompressible Navier-Stokes physics
[./IncompressibleNavierStokes]

V_FE_family = 'LAGRANGE'
P_FE_family = 'LAGRANGE'

V_order = 'FIRST'
P_order = 'FIRST'

rho = '1.77'
# 'gold standard'
#mu = '1.846e-5'

mu = '1.846e-1'

# Boundary ids:
# k = bottom -> 0
# k = top    -> 5
# j = bottom -> 1
# j = top    -> 3
# i = bottom -> 4
# i = top    -> 2

bc_ids = '0 5'
bc_types = 'no_slip parsed_dirichlet'
bc_variables = 'na w'
bc_values = 'na 0.8'

pin_pressure = true
pin_location = '0.0 0.0 0.0'
pin_value = '0.0'

ic_ids = '0'
ic_types = 'parsed'
ic_variables = 'w'
ic_values = '(abs(x)<=2)*(abs(y)<=2)*0.001'

[../HeatTransfer]

T_FE_family = 'LAGRANGE'
T_order = 'FIRST'

# Boundary ids:
# k = bottom -> 0
# k = top    -> 5
# j = bottom -> 1
# j = top    -> 3
# i = bottom -> 4
# i = top    -> 2


bc_ids = '0 1 2 3 4'
bc_types = 'parsed_dirichlet parsed_dirichlet parsed_dirichlet parsed_dirichlet parsed_dirichlet'
bc_variables = 'T T T T T'
bc_values = '{340.0+(abs(x)<=2)*(abs(y)<=2)*30} {300.0} {300.0} {300.0} {300.0}'

ic_ids = '0'
ic_types = 'constant'
ic_variables = 'T'
ic_values = '300.0'

rho = '1.77'
Cp = '1004.9'

# 'gold standard'
#k = '2.624e-2'

k = '2.624'

[../BoussinesqBuoyancy]

# Reference temperature
T_ref = '300' #[K]

rho_ref = '1.77'

beta_T = '0.003333333333'

# Gravity vector
g = '0.0 0.0 -9.81' #[m/s^2]

[../VelocityPenalty]

# alpha := pi/3 radians
# tanalpha = sqrt(3)
# p := .864 meters
# r := sqrt(x^2+y^2)
# theta := atan2(y,x)
# gamma := atan((p/r-1)*tanalpha)
# phi := 3*pi/2+theta-gamma-alpha

#
# hybrid vane
#
penalty_function = '{r := sqrt(x^2+y^2); theta := atan2(y,x); gamma60 := pi/2 + theta -asin(1.47*sin(pi/3)/r); gamma10 := pi/2 + theta -asin(1.47*sin(pi/18)/r) ; 0.5*(tanh(-9*z + 0.1145)+1.0)*(r<1)*(r>0.5) * 1e2 * cos(gamma10) + 0.5*(tanh(9*z - 0.1145)+1.0)*(r<1)*(r>0.5)*(z<0.85) * 1e2 * cos(gamma60)}{r := sqrt(x^2+y^2); theta := atan2(y,x); gamma60 := pi/2 + theta -asin(1.47*sin(pi/3)/r); gamma10 := pi/2 + theta -asin(1.47*sin(pi/18)/r); 0.5*(tanh(-9*z + 0.1145)+1.0)*(r<1)*(r>0.5) * 1e2 * sin(gamma10) + 0.5*(tanh(9*z - 0.1145)+1.0)*(r<1)*(r>0.5)*(z<0.85) * 1e2 * sin(gamma60)}{0}'

[]

[Stabilization]

tau_constant_vel = '1.0'
tau_factor_vel = '1.0'

tau_constant_T = '1.0'
tau_factor_T = '1.0'


[]

[restart-options]

#
# initial spin up
#
#restart_file = 'output/spim_up_256/convection_cell.99.xdr'
restart_file = '/scratch/00000/npm7/laboratory/hybrid/convection_cell.9.xda'

[]



# Mesh related options
[mesh-options]
mesh_option = 'read_mesh_from_file'

#
# initial spin up
#
#mesh_filename = 'output/spin_up_256/convection_cell_mesh.xda'
#mesh_filename = 'convection_cell_mesh.xda'
mesh_filename = '/scratch/00000/npm7/laboratory/hybrid/convection_cell.9.exo'

#
# second refinement
#
#locally_h_refine='(sqrt(x^2+y^2)<0.5)*1'

# Options for time solvers
[unsteady-solver]
transient = 'true' 
theta = 1.0
n_timesteps = '10'
deltat = '0.6'
backtrack_deltat=2

#Linear and nonlinear solver options
[linear-nonlinear-solver]
max_nonlinear_iterations =  30
max_linear_iterations = 5000
continue_after_backtrack_failure='true'

relative_residual_tolerance = '1.0e-10'

verify_analytic_jacobians = 0.0

initial_linear_tolerance = 1.0e-10

use_numerical_jacobians_only = 'false'

require_residual_reduction = 'true'

# Visualization options
[vis-options]
output_vis = true

timesteps_per_vis = 10

vis_output_file_prefix = 'convection_cell' 

output_residual = 'false' 

#output_format = 'ExodusII'
output_format = 'mesh_only ExodusII xda'

# Options for print info to the screen
[screen-options]

system_name = 'ConvectionCell'

print_equation_system_info = true
print_mesh_info = true
print_log_info = true
solver_verbose = true
solver_quiet = false

print_element_jacobians = 'false'

[../VariableNames]

Temperature = 'T'
u_velocity = 'u'
v_velocity = 'v'
w_velocity = 'w'
pressure = 'p'

[]

If you diff the two input files, you can see that nothing varies aside from defining the mesh and the restart files.

A few other, misc. comments:

  • xda and xdr output formats both fail
  • this fails regardless of the mesh reading: I've tried both using _mesh.xda as well as the output exodus formats
  • this is failing even without the local h refinement, as well as no global h refinement
  • This is made even stranger by the fact that I have restarted in parallel successfully previously on stampede, for a different problem (sov with the high resolution vanes, not the modeled version).

I'm hoping someone can point me towards the pertinent restart-handling code in GRINS, so I can start playing around with the source and trying to debug this guy. Any ideas are also appreciated.

building grins on lonestar

I'm having compilation errors with GRINS on lonestar.

I have the following modules loaded:

login2$ module list
Currently Loaded Modules:
  1) TACC    2) TACC-paths    3) Linux    4) cluster    5) cluster-paths    6) intel/11.1    7) mvapich2/1.6    8) gzip/1.3.12    9) tar/1.22    10) autotools/1.1    11) git/1.7.9.6    12) subversion/1.6.15    13) boost/1.49.0

The following is the error:

../src/bc_handling/src/heat_transfer_bc_handling.C(196): error: no instance of overloaded function "GRINS::BoundaryConditions::apply_neumann_axisymmetric" matches the argument list
            argument types are: (GRINS::AssemblyContext, GRINS::VariableIndex, double, const libMesh::Point)
            object type is: const GRINS::BoundaryConditions
                _bound_conds.apply_neumann_axisymmetric( context, _temp_vars.T_var(), -1.0,
                             ^

Now, looking at the source, I'm not finding a matching method. In 'heat_transfer_bc_handling.h' I see the following:

                                              
    virtual void user_apply_neumann_bcs( AssemblyContext& context,
                                         const GRINS::CachedValues& cache,
                                         const bool request_jacobian,
                                         const GRINS::BoundaryID bc_id,
                                         const GRINS::BCType bc_type ) const;

But I'm having trouble seeing what is causing this mismatch. I'm guessing that intel 11 on lonestar is unhappy with a conversion of the input we are providing, whereas intel 12+ (on stampede) is alright with it, but it is not clear which argument is even causing the problem.

The last element we are passing,

const typename libMesh::TensorTools::IncrementRank::type& value )

looks suspicious to me. Any ideas?

MASA integration

This is long overdue. The simplest thing is to make a MASA Physics class that can set up the right equations, etc., from input. Then element_time_derivative, etc., can add the right contributions to the residual.

Introduce extensible way to set initial guess

Right now, I'm having to write new programs and, although simple, it's a little annoying. We ought to have defaults set up for cases where "every variable equals zero to start" doesn't cut it, and then allow the user to override via a factory.

Consider using PETSc specific options to project out NULL space

This would be an alternative to pinning the pressure with a penalty term. We would, of course, still keep the existing penalty version around. It would only apply when using PETSc (which we use most of the time anyway) and only for Krylov solvers, but it may be worth it for conditioning at some point and wouldn't be too hard to add.

Refactor quantity caching

Thanks to @roystgnr and @dmcdougall for criticism.

Points that were discussed:

  • We can get rid of the global enum by using a factory-type mechanism (e.g. how libMesh handles adding variables) where we feed a string and get back an integer. Then, each physics can locally worry about the integer-to-quantity map.
  • We can redo the CachedQuantities class similarly to how Parameters is handled in EquationSystems, where we override a "get()" method and downcast to the right type for differentiating between vectors, gradients, etc.
  • We can refactor compute_quantities from being a massive switch statement to a smaller list for each particular quantity type, using a map between the quantity and quantity type to figure out the right method to call.
  • We should do a getter/setter-type API for storing values. Right now, the heinously stupid design by me is copying vectors and clocked in at about 5% of the runtime on my laptop...
  • I'd like to have this done for the 0.4.0 release.

Investigate VMS/Boussinesq weirdness

As first mentioned #78:

The VMS part of the Boussinesq term in the momentum equations does really weird things in the convective_cell example: it changes the time scale of the solution, produces different flow features, etc. So I've commented it out until we can better understand what's going on.

Fix locally_h_refine + restart case

See #120. The locally_h_refine option may generate multiple refinement passes, which break when restarting from a previous solution. Right now we emit a warning and force only one refinement pass. If we are restarting, then we need to reinit the dofs after each refinement pass. It seems heavy-handed to do EquationSystems::reinit; can we get away with a lower-level reinit call? If not, we need to update the API of the MeshBuilder::do_mesh_refinement_from_input function to take the EquationSystems object. However, that would also affect the mesh_builder call, which happens before there is an EquationSystems object.

@roystgnr, is there any benefit to doing the mesh refinement before setting up everything else? In other words, do we save significant time by doing the mesh refinement operations before the EquationSystems::init call (in the case where we don't have restart)? If not, then I think we just always do the MeshBuilder::do_mesh_refinement_from_input at the end and then update the API to take EquationSystems and reinit after every refinement.

I'm certainly open to other suggestions.

Get rid of axisymmetric-specific Physics classes

I've started with some of the Physics classes, but there are one or two still lingering. Instead of having a separate Physics class, just have the class check whether it's axisymmetric and then add in the correct terms. Ultimately, it would be nice to have libMesh do the right thing for quadrature and derivatives, but that's for another day.

Ensure header function declaration matches source definition

We saw a place where an argument was named i in the header declaration but component in the definition. Doxygen takes the header version, which made the source code confusing, so make a pass and ensure the header declarations match the source definitions.

Clean up code duplication in low Mach Navier-Stokes physics classes

In particular, the GRINS::LowMachNavierStokesBase and GRINS::ReactingLowMachNavierStokesBase classes. The problem is that I used a different templating scheme for the former. We need to update the former to use the templating scheme of the latter for the material properties; then we should be able to extract the commonality between the two into a base class.

Better encapsulate variable name/indices

There's starting to be a fair amount of duplicated code throughout the Physics classes just for caching variable names and initing the variable indices. I'm thinking of something like a class for each class of variables, e.g. velocity, and then having the relevant Physics classes use those classes internally. Maybe singletons. Need to think about it more, but I think it's worth pursuing at some point.

Investigate sign of PSPG term

As first mentioned in #78:

I'd swear that the current sign of the Boussinesq PSPG term is wrong (this applies to the adjoint stabilization as well). But if I flip it to the "correct" version, the convective_cell solution is just wrong (energy gets sapped out above the hot zone...). As is, it gives solutions that look reasonable. We really need to understand what's going on there. A good test might be to pull out the PSPG terms from all the different stabilizations, use Taylor-Hood elements, and see what happens. Note that if you pull out the Boussinesq stabilization altogether, you get different flow characteristics than with it (which isn't necessarily surprising, since you may lose consistency of the residual in the stabilization terms).

Correct use of fixed_ solutions

I caught a miscalculated tau in IncompressibleNavierStokesAdjointStabilization::element_time_derivative(); there may be others.

The code should work correctly as-is for backward Euler, but we need to go over it with a fine-tooth comb before trying to crank up the time integrator order.

Fix GRINS when link_all_deplibs=no

After much tearing out of hair with Vikram, I discovered that Debian (and thus Ubuntu) patches libtool so as to disable link_all_deplibs (apparently this makes it easier to swap out ABI-compatible libraries without relinking?).

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=702737

However, we're not going to try to explicitly include all our dependencies' dependencies, only to watch GRINS break again every time a dependency changes to call a new third-party library from an inline function. For now, using the libMesh contrib/ libtool is an adequate workaround, but we need to figure out how to reconfigure that setting in a system-supplied libtool when necessary.

Refactor HeatTransferSource to VolumeSource

Or some such name. It occurred to me that we can probably make this much more generic by reading the variable names we want to associate the source with, and then adding the source to the corresponding residuals.

Extend Vorticity QoI to handle 3D

Currently this only handles 2D solutions. We need to add some options for how this would be computed in 3D (l2 norm of the integrated average, L2 norm of the norm of the curl, inner product of the curl with a user-specified vector?). Low priority ATM.

How to handle screen/terminal output

Right now, there are several different screen-options for controlling what gets output/printed to the terminal. E.g. there are separate options for printing the enabled physics, the enabled QoIs, EquationSystems info, Mesh info, ... I think I've decided that this is stupid, because really I want either 1. complete silence (no output), 2. informative output, or maybe 3. OMFG-what's-going-on output. Also, the vast proliferation of options is a documentation nightmare.

What are thoughts on this? I feel like the majority of the print/verbose/quiet options could be replaced with [screen-options] output_verbosity = {0,1,2} or something similar (sketched below). @roystgnr, @nicholasmalaya, @coreymbryant, I know y'all are starting to/have been using GRINS, so feel free to register your opinion. Anyone else watching this is also welcome to register an opinion.
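
That is, something like the following hypothetical input block, in the style of the existing input files:

[screen-options]
output_verbosity = '1'   # 0 = silent, 1 = informative, 2 = everything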

Fix configuring against OpenMP-enabled libMesh

On stampede, I have to set "CXXFLAGS=-fopenmp" on my configure line; otherwise the attempt in libmesh_new.m4 to test linking against libMesh fails with undefined symbols. And although using this workaround is easy, figuring it out was hell I don't want anyone else to go through.
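
That is, the workaround amounts to something like (the libMesh path is illustrative):

./configure --with-libmesh=/path/to/libMesh CXXFLAGS=-fopenmp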

Ironically, that flag is set by default in the libMesh CXXFLAGS, and we use the libMesh CXXFLAGS by default, but we don't actually query those flags until outside the test for libMesh.

I think the right fix here is to use the libMesh CXXFLAGS in the test inside libmesh_new.m4; but other thoughts are welcome.

Optimization Pass on Stabilization Classes

OMW, I didn't even bother trying to optimize on the first coding pass. There are lots of redundantly computed quantities that are going to make this thing slow as hell. Fix it up by passing around the quantities computed at quadrature points (grad_T, etc.). This will require redoing the internal API, but no biggie.

Refactor SimulationBuilder

I like having the builder modules in SimulationBuilder because it keeps the Simulation API simple and makes it easy to extend. We should just provide (const) accessors to each of the builder modules instead of redoing every call to the builder module. Nothing should be cached in the builder classes anyway (and if anything is, we should fix it or have a damn good reason for it being there), so we can make everything const.

Refactor common BCHandling functionality into "helper" objects

Right now, I'm relying on inheritance for some common functionality, but there's also code duplication in places. It's getting a bit ugly, frankly. I think, perhaps, a better solution is to use helper objects. So, create a "fluids" helper objects, and "energy" helper object, etc. to put common functionality in one place, then each associated physics object can use these helper objects to encapsulate the common functionality.

So, in other words, prefer composition over inheritance.

Refactor QoI instantiation in GRINS::Simulation

Right now, GRINS::Simulation is caching a shared_ptr to a GRINS::QoIBase object, but when we attach the QoI to the GRINS::MultiphysicsSystem, ownership transfers. So, we shouldn't be storing a GRINS::QoIBase pointer in GRINS::Simulation, but rather just building the QoI object and handing it to the libMesh::System straight away. I've been bitten twice by this inconsistency now, so we need to clean it up.

Update input checking

In many places throughout the code, I check input values with some logic for the mandatory input options. However, I just learned about GetPot::have_variable, which would be a much better way to handle those checks. Update the code throughout to use this.

make distcheck failing

On my Linux box, when I build up the source code and then run make distcheck, it seems that the libMesh path I'd set in configure isn't propagating to make distcheck. I tried setting LIBMESH_DIR and hit the same problem: make distcheck not finding libMesh. Need to see whether this is an autotools problem, a Linux problem, or what. Note that manually copying the tarball, untarring, and running configure; make; make check; make install all works for me (as of #7).
