
gridap / gridap.jl


Grid-based approximation of partial differential equations in Julia

License: MIT License

Julia 100.00%
julia pdes partial-differential-equations finite-elements numerical-methods gridap

gridap.jl's Issues

How to compute the "change of surface" for facet integration in arbitrary dimensions?

Given the D-1 tangent vectors (i.e., the Jacobian of a geomap from D-1 to D dimensions), how do we compute the scalar value representing the corresponding "change of surface"?

In D=3, I am taking the 2-norm of the cross product of the D-1 tangent vectors

In D=2, I am taking the 2-norm of the tangent vector

I even have a formula for D=4....

But how do we do it in general? Is there a general way?
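
A general answer, valid for any D, is the square root of the Gram determinant of the Jacobian: if J is the D x (D-1) matrix whose columns are the tangent vectors, the change of surface is sqrt(det(J'J)). This reduces to the norm of the cross product in D=3 and to the norm of the tangent vector in D=2. A minimal sketch (plain Julia, not Gridap code):

using LinearAlgebra

# "Change of surface" for a geomap from D-1 to D dimensions:
# J is the D x (D-1) Jacobian whose columns are the tangent vectors.
# The value is the square root of the Gram determinant, sqrt(det(J'J)),
# which equals norm(t) in D=2 and norm(cross(t1,t2)) in D=3.
change_of_surface(J::AbstractMatrix) = sqrt(det(transpose(J)*J))

# D=3 example: a facet spanned by two tangent vectors.
t1 = [1.0, 0.0, 0.0]
t2 = [0.0, 2.0, 0.0]
J = hcat(t1, t2)
change_of_surface(J) ≈ norm(cross(t1, t2)) # true, both give 2.0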

Geometry

New API for geometry-related objects available

Problems with coords

I am having trouble with the coordinates extracted from a triangulation.

@fverdugo, can you take a look at FESpaceTests, line 42?

D=2
nparts1d = 2
nparts = nparts1d*ones(Int64,D)
nparts_t = tuple(nparts...)
grid = CartesianGrid(partition=nparts_t,domain=(0,1,0,1),order=1) # domain, and order are optional
trian = triangulation(grid)
coords = cellcoordinates(trian)
size(coords)
IndexStyle(coords)
coords.lid_to_gid
coords.gid_to_val
coords.cv
length(coords)
isa(coords, AbstractArray)

all provide reasonable values. On the other hand, iterating over coords is wrong: it only iterates up to cell 2 for a 2x2 triangulation. Weird behaviour: it is a subtype of AbstractArray, I can access all indices correctly, but when I iterate, it does not work... The iterate fallback for an AbstractArray is provided by Julia itself...
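
For reference, Julia's default iteration for an AbstractArray subtype is driven by eachindex, i.e., ultimately by size. A minimal sketch (plain Julia, not Gridap code) of how an inconsistent size produces exactly this symptom, with indexing reaching every entry while iteration stops early:

struct MyCells <: AbstractVector{Int}
  data::Vector{Int}
end
Base.size(a::MyCells) = (2,)                  # wrong on purpose: data has 4 entries
Base.getindex(a::MyCells, i::Int) = a.data[i]

a = MyCells([10, 20, 30, 40])
a[4]        # 40, direct indexing reaches every entry
collect(a)  # [10, 20], iteration stops at the reported size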

Field values should not be AbstractArrays

Some hacks were introduced in CellValues to circumvent the fact that VectorValue and TensorValue are in fact arrays. This would be simplified if those types were not arrays.

  • Provide a new implementation of the types in FieldValues so that they do not extend AbstractArray. This will be done by composing a static array instead of defining the types as aliases of static arrays (see the sketch below). This work can be isolated in a package that defines the types and the associated operations.
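
A minimal sketch of this design, assuming StaticArrays.jl: VectorValue composes an SVector instead of subtyping AbstractArray (illustrative only, not the final implementation):

using StaticArrays

# VectorValue wraps a static array by composition; it is NOT an AbstractArray.
struct VectorValue{D,T}
  array::SVector{D,T}
end
VectorValue(x::Number...) = VectorValue(SVector(x))

# The needed operations are forwarded explicitly to the wrapped static array.
Base.:+(a::VectorValue, b::VectorValue) = VectorValue(a.array + b.array)
Base.:*(α::Number, a::VectorValue) = VectorValue(α*a.array)

v = 2*(VectorValue(1.0, 2.0) + VectorValue(3.0, 4.0)) # VectorValue(SVector(8.0, 12.0))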

Problem with fun that provides Int used to create a Map

We have a problem if we define a function that returns an Int and then use it to create a MapFromBroadcastUnaryOp map.

fun(x::Point{2}) = 1
reffe = fesp.reffe       # fesp is a previously constructed FE space
dofb = reffe.dofbasis
trian = fesp.trian
phi = geomap(trian)
uphys = fun ∘ phi        # the composition returns Int values, which triggers the problem

We have to fix it.

Develop a new NodesArray for general polytopes

@santiagobadia

It seems that there is a problem in NodesArray for simplex polytopes:

using Gridap
const t = TET_AXIS
polytope = Polytope((t,t)) # This is a triangle
orders=[1,1]
na = NodesArray(polytope,orders)
length(na.coordinates) == 4 # Wrong? Should be 3, right?

The length of na.coordinates seems to be wrong. Could you confirm that this is a bug?

Future developments

Interpolator (Combine field, dofbasis, geomap)
FE Space (global numbering, Dirichlet boundary conditions, multiphysics)
Mesh (Structured mesh, Dirichlet boundary, queries, ... GMesh.jl)
FE Operator
Nonlinear operators with Automatic Differentiation
Assembler
VTK IO
Linear algebra (direct solver, CG, PETSc.jl)

Some notes by @santiagobadia to be discussed

CODE COMMENTS:

  • Can we do a trick similar to what is being done for gradient, for the derivative of an operator with respect to the variable, i.e.,

import Gridap: D
a(u,v) = ...
da(u,v,du) = ...
D(::typeof(a)) = da

Here we are assuming that D is in this case d(·)/du, where u is always the unknown of our FE problem. We should probably create a function that only depends on one unknown.

a(u,v,w,...) = ...

Given v*,w*,...

aa(u) = a(u,v*,w*,...)

and now

D(aa) is the well-defined total derivative.

Does it make sense in our framework? I have to think about it.
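
A minimal sketch of the registration trick above, with toy scalar forms (all names and bodies below are purely illustrative):

# Register a hand-written derivative via dispatch on the form itself.
a(u, v) = u^3*v                 # a toy nonlinear form
da(u, v, du) = 3*u^2*du*v       # its derivative w.r.t. u in the direction du

D(::typeof(a)) = da             # register the derivative of a

# Fixing all arguments except the unknown yields a one-argument functional,
# for which a total derivative is well defined.
v₀ = 2.0
aa(u) = a(u, v₀)

D(a)(1.0, v₀, 0.1)              # evaluates da(1.0, 2.0, 0.1) ≈ 0.6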

  • We could now replace the hand-written definition of D for a given a with its automatic differentiation.

  • What are the prerequisites (rigidity) in the nonlinear FE expressions? What should be respected when defining the nonlinear terms, etc.?

  • We should keep a consistent way to define derivatives. E.g.,

D(a(u)) = Da(u,du)

i.e., the direction is always the last argument, whereas the point at which the derivative is evaluated is the first one.

  • There are many ways to define FE terms, e.g., AffineFETerm. Is that really needed? Can we define a single, less noisy implementation, and probably unify linear and nonlinear terms using the previous trick? Is there a performance motivation?

  • Don't you think that the multivalue wrapper complicates things? E.g., when creating a cell array with one integer per cell, indexing the array gives not the integer but a multivalue. Is that correct? The user must then extract the integer from the multivalue wrapper.

  • What is the motivation for C and DFiniteElementSpace? What is the difference with respect to ConformingFESpace?

FURTHER DEVELOPMENTS:

  • Multi-field solvers, e.g., Navier-Stokes with Qp/Qp-1 elements
  • Interface coupling, e.g., fluid-structure interaction (FSI)
  • Time-dependent problems with time-stepping (all in the driver)
  • High-order tets
  • Non-nodal FEs (RT, Nedelec)
  • Hybridized FEs (HDG, ...)

DOCUMENTATION/TUTORIALS:

  • Create tutorials with Jupyter notebooks
  • First, high-level FE solvers, using the ones in the tests
  • Second, low-level tutorials about the rationale of the types in Gridap, e.g., cell-wise structs
  • We can also create folders with Markdown text that is automatically uploaded to GitHub
  • Last step: generate documentation with the built-in Julia tools, check that it compiles, etc.

PRESENTATIONS AND PAPERS:

A proceedings paper about FEMPAR and Gridap, focused on Gridap, for the Canberra meeting. The proceedings must be submitted by mid-August.

CLEANING:

Which parts of the code are dirty and require some maintenance?

  • Polytope and node array structures. Eliminate the old node array and always use the new one; the new one is still missing support for heterogeneous order across space dimensions. Clarify the APIs of Polytope, etc.

Homogenize node_and_comp_to_dof

The types LagrangianDOFBasis and CLagrangianFESpace use different strategies to represent the relation between nodes, components, and dofs (see the field node_and_comp_to_dof in both structs).

It would be nice to homogenize things. I would adopt the strategy of CLagrangianFESpace.

Performance improvements in generation of FESpaces

Some improvements in the performance of FESpaces are needed, in particular:

  • Improve the efficiency of the low-level routines that generate dofs, do the interpolation, etc. (i.e., introduce function barriers and avoid fine-grained allocations; see the sketch after this list). Fixed since PR #54.
  • Improve the efficiency of the interpolation on the Dirichlet boundary (i.e., only loop over boundary cells; sometimes several tags have the same function associated with them).
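
For reference, a minimal, Gridap-independent illustration of the function-barrier pattern mentioned in the first item: the outer function handles an abstractly typed container, while the inner kernel receives concretely typed arguments so that the hot loop stays type-stable and allocation-free.

function compute_dofs(cellvals::Vector{Any})    # abstractly typed input
  vals = Float64[v for v in cellvals]           # concretize once
  _dofs_kernel!(similar(vals), vals)            # barrier: the kernel sees concrete types
end

function _dofs_kernel!(out::Vector{Float64}, vals::Vector{Float64})
  @inbounds for i in eachindex(vals)
    out[i] = 2*vals[i]                          # placeholder for the real dof formula
  end
  out
end

compute_dofs(Any[1, 2.5, 3])                    # returns [2.0, 5.0, 6.0]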

UnstructuredGrids is an unregistered package

As discussed in JuliaLang/Pkg.jl#681, a package cannot depend on a package that depends on an unregistered package.

Now Numa depends on the unregistered package UnstructuredGrids.

Thus, we need to register UnstructuredGrids (either in the General registry or in our own registry; the first option is preferred).

Once this is done, delete the hack included in .travis.yml that allows the usage of the unregistered package UnstructuredGrids.

Add way to compute reaction forces

Issue motivated by discussion with @santiagobadia on 2019-06-27

In some applications (structural mechanics, surface-coupled problems) it will be needed to compute reaction forces. To this end, we can introduce a new factory function DirichletFESpace that returns an FE space where the Dirichlet dofs are free and the rest are constrained to zero (i.e., the opposite of TestFESpace). When using this FE space as the test space in a FEOperator, the resulting residual vector will be related to the reaction forces.
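
The underlying idea in a minimal, Gridap-independent sketch: the reactions are the entries of the residual associated with the constrained dofs.

using LinearAlgebra

# For a linear system K*u = f with some dofs fixed (Dirichlet), the reactions
# are the residual entries on the fixed dofs, r = (K*u - f) restricted to them.
K = [ 2.0 -1.0  0.0;
     -1.0  2.0 -1.0;
      0.0 -1.0  1.0]       # stiffness of a 3-dof 1D bar (toy example)
f = [0.0, 0.0, 1.0]        # unit load on the last dof
fixed = [1]                # dof 1 has a homogeneous Dirichlet condition
free  = [2, 3]

u = zeros(3)
u[free] = K[free, free] \ f[free]   # solve for the free dofs

reactions = (K*u - f)[fixed]        # ≈ [-1.0], balancing the applied load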

ConstantCellArray out of bounds

In the current implementation, a ConstantCellArray provides the same array even when indexing beyond its length. Is this the behavior we want?
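
A minimal sketch of the behavior in question (illustrative type, not the actual Gridap implementation):

struct ToyConstantCellArray{T}
  value::T
  length::Int
end
Base.getindex(a::ToyConstantCellArray, cell::Int) = a.value  # same value for any index, no bounds check
Base.length(a::ToyConstantCellArray) = a.length

a = ToyConstantCellArray([1.0, 2.0], 3)
a[10]  # returns [1.0, 2.0] even though length(a) == 3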

Extend FE assembly for multiple terms

This is needed to solve problems whose weak forms include terms that are integrated over different domains.

This is a subtask needed in issue #29

Work being done in branch extend_assembler_to_multiple_fields

Dof ordering for vector/tensor-valued fields

This issue is only relevant for vector/tensor-valued fields

Currently, we are using different orderings for the dofs in different places of the code:

  1. LagrangianDOFBasis uses "node major" ordering (i.e., first component for all nodes, second component for all nodes, etc.)
  2. CLagrangianFESpace and DLagrangianFESpace use "component major" ordering. (i.e., all components for first node, then all components for second node, etc).
  3. ConformingFESpace: I am not sure what ordering is being used. To check.

Perhaps it is a good idea to unify things and use the same criterion everywhere.

I would choose "component major" since it is quite standard, and some AMG solvers require that the DOFs are ordered in this way.

I am wondering why LagrangianDOFBasis uses "node major" ordering. Is there any reason?
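
For concreteness, a small sketch of the two numbering conventions for a vector-valued field with nnode nodes and ncomp components (the helper functions below are illustrative, not Gridap API):

# "node major": first component for all nodes, then the second component, etc.
dof_node_major(node, comp, nnode, ncomp) = (comp - 1)*nnode + node

# "component major": all components of node 1, then all components of node 2, etc.
dof_comp_major(node, comp, nnode, ncomp) = (node - 1)*ncomp + comp

nnode, ncomp = 3, 2
[dof_node_major(n, c, nnode, ncomp) for n in 1:nnode, c in 1:ncomp]
# component 1 gets dofs 1:3, component 2 gets dofs 4:6
[dof_comp_major(n, c, nnode, ncomp) for n in 1:nnode, c in 1:ncomp]
# node 1 gets dofs (1,2), node 2 gets (3,4), node 3 gets (5,6)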

What to do with `cellsize`?

We have three options:

  1. Remove the function cellsize from CellValue and CellArray, since the CachedArray is now resizable and therefore cellsize is no longer needed (i.e., we don't need to preallocate to a maximum size since we can resize on the fly).

  2. Keep it, with cellsize meaning the true maximum size. For some concrete implementations it would be difficult/expensive to answer this query. (This is the approach currently followed in the code.)

  3. Keep it, with cellsize meaning an upper bound of the maximum size. This is less restrictive, but one has to be careful when computing the cellsize of a CellArrayFromBinaryBroadCastOperation.

I am for option 1, but I am not sure whether cellsize could be useful in some context in the future.

To extend CellFieldFromComposeExtended to CellBases

The functionality provided by CellFieldFromComposeExtended has to be extended to support CellBasis objects.

For instance, this would be required to remove the commented part in line:

jac(u,v,du) = a(u,v,du) # + varinner(v,ν(du)*grad(u))

Essentially, one needs to extend CellFieldFromComposeExtended to CellMapFromComposeExtended, which in turn will require extending FieldFromComposeExtended to MapFromComposeExtended.

EDIT

As part of this refactoring, it would be useful to be able to create CellFields given more than one auxiliary cell field (useful for multi-field problems). E.g.,

νfun(x,u1,u2) = (u2*u1+1.0)*x[1]
ν(u1,u2) = CellField(trian,νfun,u1,u2)

Refactoring of CellValues and Fields

Place to keep miscellaneous comments about the ongoing re-factoring of CellValues and Co.

The re-factoring is taking place in the branch refactoring_cell_values in folders src/REFACTORED and test/REFACTORED

Misc. improvements for FESpaces, Assemblers, and FEOperators

Design improvements:

  • Improve the expressivity of the abstract interface of FEOperator so that an efficient non-linear FE solver can be implemented only in terms of this abstract interface.
  • Allow Neumann boundary conditions and terms that are integrated over surfaces
  • The abstract interface of Assembler assumes 1 field, 1 term (this is ok for the moment)
  • Extension to multi-fields

Efficiency improvements:

  • Improve the efficiency of the low-level routines that generate dofs, do the interpolation, etc. (i.e., introduce function barriers and avoid fine-grained allocations).
  • Improve the efficiency of the interpolation on the Dirichlet boundary (i.e., only loop over boundary cells; sometimes several tags have the same function associated with them).

CellField structs and FE interpolation

I am thinking about the implementation of FE interpolation of functions.

We already have a Field type, which is somehow a field in the physical space.

Thus, we need to create a CellFieldFromAnalyticalFieldWithGeomap that, given an AnalyticalField and a Geomap, is able to evaluate the field in a set of points in the reference cell.

@fverdugo Do we agree on this?

Next, we should use it in evaluate for DofBasis in RefFE.jl. I am not sure the current version is OK, since the restriction of the previous object to a cell should be a Field, otherwise it would not work. I guess it is acceptable.

As far as I can see, there is currently no struct that I could use for these purposes.

Polytope to Grid conversion working only for particular cases

The conversion currently assumes that all faces of the newly created grid have the same extrusion code. That is, it only works for hexes and tets at the moment, not for wedges, pyramids, etc. If one tries to perform the conversion for a case that violates this assumption, an error is raised via @notimplemented.

Wrong numerical integration

It seems that there is a problem with the numerical integration

D=2
nparts1d = 2
nparts = nparts1d*ones(Int64,D)
nparts_t = tuple(nparts...)
grid = CartesianGrid(partition=nparts_t,domain=(0,1,0,1),order=1) # domain, and order are optional
trian = triangulation(grid) # Generates the Triangulation associated with this grid
quad = quadrature(trian,order=2)
fun(x::Point{2}) = 1.0
ksca = integrate(fun, trian, quad)
# ksca = 0.25

whereas it should return ksca = 1.0, the area of the unit square (note that 0.25 is the area of a single cell of the 2x2 grid).

Duplicated Polytopes.jl and PolytopesTests.jl files

@santiagobadia I have just realized that we have duplicated files:
src/RefFEs/Polytopes.jl
src/Geometry/Polytopes.jl
on the one hand, and
test/RefFEsTests/PolytopesTests.jl
test/PolytopesTests.jl
on the other hand.

The ones that we want to keep are:
src/RefFEs/Polytopes.jl
test/RefFEsTests/PolytopesTests.jl
They are the ones being used and tested in the library.

The other two:
src/Geometry/Polytopes.jl
test/PolytopesTests.jl
seem to be there by accident, they are not being used and need to be deleted.

Please make sure that you are working on the right ones and that you do not have relevant changes in the wrong ones.

Thanks!

Speed up compilation time in complex lazy expression trees on CellMaps

Issue motivated by discussion with @santiagobadia on 2019-06-27

The definition of bilinear and linear forms in the drivers leads to objects that represent complex operation trees. At the moment, in order to allow efficient iteration/indexing of these objects, we use a lot of type parameters to achieve type stability. In practical cases, the names of such types can span on the order of 1000 characters, which makes the code difficult to understand and can lead to considerable compilation overhead (to be better quantified).

Luckily, iteration in the resulting CellMap object is not needed. Only a global call to evaluate with a given CellPoints object is needed. Thus we don't really need type stability. Allowing type instability in the concerned types will simplify things a lot.

We have two options:

  1. Remove type parameters from ALL the objects representing operations between CellMap objects.

  2. Remove type parameters only from the results of operations between FEBasis and FEFunction objects, since these are the ones involved in the evaluation of linear and bilinear forms.

To be discussed.
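
For reference, a tiny Gridap-independent illustration of how nested lazy wrappers blow up parametric type names:

# Each lazy operation wraps its arguments in a new parametric type, so the
# type name grows with the depth of the expression tree.
struct LazyOp{F,A,B}
  f::F
  a::A
  b::B
end
lazy(f, a, b) = LazyOp(f, a, b)

x = lazy(+, lazy(*, 1:3, 2:4), lazy(-, 5.0:7.0, 1:3))
length(string(typeof(x)))  # well over 100 characters already for this tiny tree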

Comments after 25th April meeting

  • Merge CellValue and CellArray, since CellArray = CellValue{T<:AbstractArray}; implement for CellArray only those methods in which efficiency is important, and replace CellData by CellValue.

  • Replace range_size by a return_size that comes from the Point{D} array in the evaluation method.

  • Create CellFieldFromExtended to represent functions in the reference space.

  • Create CellInterpolator <: CellVector that composes a CellFieldFromExtended and a CellValue{D<:DOFBasis} (constant at the beginning).

Non-continuous FEFunctions for high order FESpaces

@santiagobadia I have realized that high-order FE spaces are not working properly. E.g.:

model = CartesianDiscreteModel(domain=(0.0,1.0,0.0,1.0), partition=(2,2))
order = 2
tags = [1,2,3,4,6,5]
fespace = ConformingFESpace(Float64,model,order,tags)
fun(x) = x[1] + x[2]
uh = interpolate(fespace,fun)
trian = Triangulation(model) # triangulation of the model (missing in the original snippet)
writevtk(trian,"trian", nref=4, cellfields=["uh"=>uh])

leads to a non-continuous FEFunction (here the model has 2x2 cells but I am visualizing with a finer mesh)

[image: non-continuous interpolated field]

Everything OK for the same example with order = 1

[image: continuous interpolated field]

Perhaps the problem is in the generation of the cell_eqclass.

Further developments in Polytope and RefFE for high order FEM

  1. For the moment I do not have a NodesArray. It is quite simple to create one, but I am not sure it is needed. In fact, it was not used in RefFE. Adding it should be straightforward.

  2. For the moment, I keep the old one because the new version is not working yet for anisotropic order; we consider the same order in all dimensions. On the other hand, the generation of monomials only works for n-cubes and n-tets. We should probably think about changes in the polynomial machinery to include other cases.

As a result, there are two missing parts:

  • Create heterogeneous order nodes on n-cubes using the new machinery and eliminate the old NodesArray

  • Create a new NodesArray if needed. The only thing I have not implemented is the set of nodes in the closure of an n-face. Is that needed? It seems that it is not used in the code, but I guess it will be needed for facet integration in non-conforming DG methods.

  • Think about more general monomial generation for high order FEM

I still need to define:

  • Provide a general concept of orientation that is valid for any dimension or polytope. Since this is a topic in its own right, I (@fverdugo) have created a specific issue for it (see issue #51).
