dolfin-adjoint / pyadjoint

The algorithmic differentiation tool pyadjoint and add-ons.

License: GNU Lesser General Public License v3.0

Dockerfile 0.44% Shell 0.20% Python 99.08% TeX 0.18% GLSL 0.09% Makefile 0.01%

pyadjoint's Introduction

The algorithmic differentiation tool for DOLFIN/FEniCS

The full documentation is available here

Installation

First install FEniCS. Then install dolfin-adjoint with:

python3 -m pip install git+https://github.com/dolfin-adjoint/dolfin-adjoint.git@main

or install the latest stable release from the PyPI package:

python3 -m pip install dolfin-adjoint
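To quickly check that the installation works (optional; this simply verifies that the package imports):

python3 -c "import dolfin_adjoint"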

Reporting bugs

If you find a bug, please create an issue.

Contributing

We love pull requests from everyone.

Fork, then clone the repository:

git clone https://github.com/{your_username}/dolfin-adjoint.git

Make sure the tests pass:

python3 -m pytest tests

Make your change. Add tests for your change. Make the tests pass:

python3 -m pytest tests

Push to your fork and submit a pull request. At this point you're waiting on us. We may suggest some changes or improvements or alternatives.

Some things that will increase the chance that your pull request is accepted:

  • Write tests.
  • Add Python doc-strings that follow the Google Style (see the example below).
  • Write good commit and pull request messages.
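
For example, a Google-style doc-string looks like this (illustrative function only, not taken from the codebase):

def scale_vector(vec, factor):
    """Scale a vector in place.

    Args:
        vec: The vector to scale.
        factor (float): The scaling factor.

    Returns:
        The scaled vector.
    """
    vec *= factor
    return vec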

License

This software is licensed under the GNU LGPL v3.

pyadjoint's People

Contributors

angus-g, apaganini, colinjcotter, connorjward, danielskatz, dham, diffeoinvariant, finsberg, florianwechsung, funsim, gbruer15, ig-dolci, igorbaratta, jdbetteridge, jhale, johannesring, jorgensd, jrmaddison, jwallwork23, khhelland, mariuscausemann, medinaeder, minrk, nboulle, nbouziani, pefarrell, ptrunschke, sebastkm, stephankramer, wence-


pyadjoint's Issues

Optimization problem using IPOPTSolver: the range of initial guesses that converges is very close to the target values

Hi,

I am working on the optimization of material parameters. The objective is to minimize the displacement (or volume) difference between experimental measurements and the output of a finite element simulation, in order to recover hyperelastic material parameters close to my target values.

My problem is: my target parameter values are a = 1.291 and b = 9.726, but the optimization only converges when the initial guesses are very close to these targets (for example, values of a between roughly 0.5 and 1.5 are acceptable; anything larger, such as 3, makes the solver crash).
Everything works fine up to the optimization part, and I am not sure whether my forward problem is at fault or whether the optimization settings are wrong.
I will be very grateful for all your suggestions and any help!

Below is my small benchmark code:

import dolfin as df
import dolfin_adjoint as da
import json
try:
    from dolfin_adjoint import (
        Constant,
        DirichletBC,
        Expression,
        UnitCubeMesh,
        interpolate,
        Mesh,
    )
except ImportError:
    from dolfin import (
        UnitCubeMesh,
        Expression,
        Constant,
        DirichletBC,
        interpolate,
        Mesh,
    )

import pulse

from collections import deque
import time
pulse.iterate.logger.setLevel(10)

# Create mesh
N = 6
mesh = da.UnitCubeMesh(N, N, N)
# Create subdomains
class Free(df.SubDomain):
    def inside(self, x, on_boundary):
        return x[0] > (1.0 - df.DOLFIN_EPS) and on_boundary


class Fixed(df.SubDomain):
    def inside(self, x, on_boundary):
        return x[0] < df.DOLFIN_EPS and on_boundary

def foward(mesh,param_a,param_b):
    # Create a facet function in order to mark the subdomains
    ffun = df.MeshFunction("size_t", mesh, 2)
    ffun.set_all(0)

    # Mark the first subdomain with value 1
    fixed = Fixed()
    fixed_marker = 1
    fixed.mark(ffun, fixed_marker)

    # Mark the second subdomain with value 2
    free = Free()
    free_marker = 2
    free.mark(ffun, free_marker)

    # Create a cell function (but we are not using it)
    cfun = df.MeshFunction("size_t", mesh, 3)
    cfun.set_all(0)


    # Collect the functions containing the markers
    marker_functions = pulse.MarkerFunctions(ffun=ffun, cfun=cfun)
    # Create microstructure
    V_f = df.VectorFunctionSpace(mesh, "CG", 1)

    # Fibers
    f0 = interpolate(Expression(("1.0", "0.0", "0.0"), degree=1), V_f)
    # Sheets
    s0 = interpolate(Expression(("0.0", "1.0", "0.0"), degree=1), V_f)
    # Fiber-sheet normal
    n0 = interpolate(Expression(("0.0", "0.0", "1.0"), degree=1), V_f)


    # Collect the microstructure
    microstructure = pulse.Microstructure(f0=f0, s0=s0, n0=n0)
    # Create the geometry
    geometry = pulse.Geometry(
        mesh=mesh,
        marker_functions=marker_functions,
        microstructure=microstructure,
    )

    material_parameters = dict(
        a=param_a,
        a_f=5.0,
        b=param_b,
        b_f=5.0,
        a_s=0.0,
        b_s=0.0,
        a_fs=0.0,
        b_fs=0.0,
    )


    # Select model for active contraction
    active_model = pulse.ActiveModels.active_strain
    # active_model = "active_stress"

    # Set the activation
    activation = Constant(0.0)

    # Create material

    material = pulse.HolzapfelOgden(
        active_model=active_model,
        parameters=material_parameters,
        f0=geometry.f0,
        s0=geometry.s0,
        n0=geometry.n0,
        activation=activation,
        T_ref=1.0,
        )

    # Make Dirichlet boundary conditions
    def dirichlet_bc(W):
        V = W if W.sub(0).num_sub_spaces() == 0 else W.sub(0)
        return DirichletBC(V, Constant((0.0, 0.0, 0.0)), fixed)

    lvp = da.Constant(0.0)
    # Make Neumann boundary conditions
    neumann_bc = pulse.NeumannBC(traction=lvp, marker=free_marker)

    # Collect Boundary Conditions
    bcs = pulse.BoundaryConditions(dirichlet=(dirichlet_bc,), neumann=(neumann_bc,))

    # Create problem
    problem = pulse.MechanicsProblem(geometry, material, bcs)


    pulse.iterate.iterate(problem, lvp, da.Constant(2.0),max_nr_crash=100, max_iters=100)
    u, p = problem.state.split(deepcopy=True)
    return u

parameter_a = da.Constant(1.291)
parameter_b = da.Constant(9.726)
guessed_parameter_a = da.Constant(3.0)
guessed_parameter_b = da.Constant(15.0)
u_syn = foward(mesh,parameter_a,parameter_b)
V = df.VectorFunctionSpace(mesh, "CG", 2)
u_synthetic = da.project(u_syn, V)
u_crl = foward(mesh,guessed_parameter_a,guessed_parameter_b)
def cost_function(u_model, u_data):
    norm = lambda f: da.assemble(df.inner(f, f) * df.dx)
    return norm(u_model - u_data)

J = cost_function(u_crl,u_synthetic)
print(J)

controls = [da.Control(guessed_parameter_a), da.Control(guessed_parameter_b)]
cost_func_values = []
control_values = []
def eval_cb(j, m):
    """Callback function"""
    cost_func_values.append(j)
    control_values.append(m)

reduced_functional = da.ReducedFunctional(
    J,
    controls,
   eval_cb_post=eval_cb,
)


problem = da.MinimizationProblem(reduced_functional, bounds=[(0, 4.0),(0, 16.0)])
parameters = {
    #"tolerance": 1e-30,
    "limited_memory_initialization": "scalar2",
    "maximum_iterations": 20,
}
solver = da.IPOPTSolver(problem, parameters=parameters)

optimal_control = solver.solve()
for i in optimal_control:
    print(i.values())

Error message:

List of options:

                                Name   Value                # times used
               hessian_approximation = limited-memory            7
       limited_memory_initialization = scalar2                   1
                            max_iter = 20                        1
                         print_level = 6                         2

This is Ipopt version 3.14.4, running with linear solver MUMPS 5.2.1.

Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 0

Hessian approximation will be done in the space of all 2 x variables.

Scaling parameter for objective function = 1.000000e+00
objective scaling factor = 1
No x scaling provided
No c scaling provided
No d scaling provided
Initial values of x sufficiently inside the bounds.
Initial values of s sufficiently inside the bounds.
Total number of variables............................: 2
variables with only lower bounds: 0
variables with lower and upper bounds: 2
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0


*** Summary of Iteration: 0:


iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 0.0000000e+00 0.00e+00 6.55e-04 0.0 0.00e+00 - 0.00e+00 0.00e+00 0


*** Beginning Iteration 0 from the following point:


Current barrier parameter mu = 1.0000000000000000e+00
Current fraction-to-the-boundary parameter tau = 0.0000000000000000e+00

||curr_x||_inf = 1.5000000000000000e+01
||curr_s||_inf = 0.0000000000000000e+00
||curr_y_c||_inf = 0.0000000000000000e+00
||curr_y_d||_inf = 0.0000000000000000e+00
||curr_z_L||_inf = 1.0000000000000000e+00
||curr_z_U||_inf = 1.0000000000000000e+00
||curr_v_L||_inf = 0.0000000000000000e+00
||curr_v_U||_inf = 0.0000000000000000e+00

***Current NLP Values for Iteration 0:

                               (scaled)                 (unscaled)

Objective...............: 0.0000000000000000e+00 0.0000000000000000e+00
Dual infeasibility......: 6.5483061078119853e-04 6.5483061078119853e-04
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 1.5000000010000001e+01 1.5000000010000001e+01
Overall NLP error.......: 1.5000000010000001e+01 1.5000000010000001e+01

Number of Iterations....: 0

                               (scaled)                 (unscaled)

Objective...............: 0.0000000000000000e+00 0.0000000000000000e+00
Dual infeasibility......: 6.5483061078119853e-04 6.5483061078119853e-04
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Variable bound violation: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 1.5000000010000001e+01 1.5000000010000001e+01
Overall NLP error.......: 1.5000000010000001e+01 1.5000000010000001e+01

Number of objective function evaluations = 1
Number of objective gradient evaluations = 1
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 0
Total seconds in IPOPT = 350.550

EXIT: Stopping optimization at current point as requested by user.

RuntimeError Traceback (most recent call last)
Untitled-1.ipynb Cell 5 in <cell line: 24>()
17 parameters = {
18 #"tolerance": 1e-30,
19 "limited_memory_initialization": "scalar2",
20 "maximum_iterations": 20,
21 }
22 solver = da.IPOPTSolver(problem, parameters=parameters)
---> 24 optimal_control = solver.solve()
25 for i in optimal_control:
26 print(i.values())

File ~/.local/lib/python3.9/site-packages/pyadjoint/optimization/ipopt_solver.py:199, in IPOPTSolver.solve(self)
197 """Solve the optimization problem and return the optimized controls."""
198 guess = self.rfn.get_controls()
--> 199 results = self.ipopt_problem.solve(guess)
200 new_params = [control.copy_data() for control in self.rfn.controls]
201 self.rfn.set_local(new_params, results[0])

File ~/anaconda3/envs/fenicsjoint/lib/python3.9/site-packages/cyipopt/cython/ipopt_wrapper.pyx:642, in ipopt_wrapper.Problem.solve()

File ~/anaconda3/envs/fenicsjoint/lib/python3.9/site-packages/cyipopt/cython/ipopt_wrapper.pyx:676, in ipopt_wrapper.objective_cb()

File ~/.local/lib/python3.9/site-packages/pyadjoint/reduced_functional_numpy.py:36, in ReducedFunctionalNumPy.call(self, m_array)
...


RuntimeError:

*** -------------------------------------------------------------------------
*** Error:   Unable to solve nonlinear system with NewtonSolver.
*** Reason:  Newton solver did not converge because maximum number of iterations reached.
*** Where:   This error was encountered inside NewtonSolver.cpp.
*** Process: 0
***
*** DOLFIN version: 2019.1.0
*** Git changeset: 7f46aeb0b296da5bbb1fb0845822a72ab9b09c55
*** -------------------------------------------------------------------------

Unintuitive taping behaviour

I have always found it confusing that the enabling of taping is handled using an integer rather than a boolean. It means that one can have what I believe to be very unintuitive behaviour such as the following:

from pyadjoint.tape import continue_annotation, annotate_tape, stop_annotating

continue_annotation()
with stop_annotating():
    assert not annotate_tape()  # raises an error

What this MFE does is start annotation twice (once at import) and stop it only once. Since an integer counter is used, annotation is therefore still considered active inside the stop_annotating context manager.
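
For example (a sketch assuming the counter semantics described above; pyadjoint.tape also provides pause_annotation), balancing the extra continue_annotation() with a pause_annotation() gives the behaviour one would naively expect:

from pyadjoint.tape import continue_annotation, pause_annotation, annotate_tape, stop_annotating

continue_annotation()           # counter goes up by one ...
pause_annotation()              # ... and back down, so the net effect is zero
with stop_annotating():
    assert not annotate_tape()  # passes: annotation is genuinely off inside the context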

Is this a bug or a feature?

No graph is built for visualization

To debug my dolfin-adjoint code I want to visualize the compute graph.

I receive this error after calling tape.visualize():

File "/home/anaconda3/envs/masters_thesis/lib/python3.10/site-packages/tensorflow/python/ops/script_ops.py", line 632, in py_func_common
    result = func(*[np.array(x) for x in inp])
TypeError: Tape._tf_add_blocks.<locals>.<lambda>() takes 0 positional arguments but 2 were given

This is fixed by the following:

    try:
      result = func(*[np.array(x) for x in inp])
    except:
      result = None

inspired by the following lines of tape.py, where one sees that a lambda taking zero arguments is eventually called with more than zero inputs:

tf.py_func(lambda: None, in_tensors, [tf.float64],
                                            name=self._valid_tf_scope_name(str(block)))

However, TensorBoard now reports an empty graph, so nothing is visualized. How can this be solved?

Possible conflicts with UserExpression

import numpy as np
from dolfin import *
from dolfin_adjoint import *
from scipy.interpolate import RegularGridInterpolator



class InterpolatedParameter(UserExpression):

    def __init__(self,X,Y,image,**kwargs):
        self.X = X # A numpy array giving the X-spacing of the image
        self.Y = Y # Same for Y
        self.image = image # The image of measured material property
        super().__init__(**kwargs)
    def eval(self,values,x):
        interp_handle = RegularGridInterpolator((self.X,self.Y),self.image)
        values[0] = interp_handle(x)


def interp(mat, V):
    x,y = mat.shape[0], mat.shape[1]
    mat_interp = InterpolatedParameter(np.linspace(1,x,x),np.linspace(1,y,y),mat,degree=1)
    return interpolate(mat_interp,V)


mesh = RectangleMesh(       Point(-3,-2),
                            Point(3, 2),
                            100, 100, diagonal="crossed")

V    = FunctionSpace(mesh, 'CG', 1)

data = np.array([[0,0,0,0,0,0],
                [0,1,1,1,1,0],
                [0,1,1,1,0,0],
                [0,0,0,0,0,0]])

initial_u = interp(data, V)

The interp() function and the InterpolatedParameter class are set up to interpolate an initial condition from a 2-D numpy array. They worked fine without dolfin-adjoint; now that I am attempting to use them alongside optimization in dolfin-adjoint, I am receiving the following error:

Traceback (most recent call last):
  File "test.py", line 40, in <module>
    initial_p = interp(data, V)
  File "test.py", line 25, in interp
    mat_interp = InterpolatedParameter(np.linspace(1,x,x),np.linspace(1,y,y),mat,degree=1)
  File "test.py", line 14, in __init__
    self.X = X # A numpy array giving the X-spacing of the image
  File "/usr/local/anaconda3/envs/fe_adjoint/lib/python3.8/site-packages/fenics_adjoint/types/expression.py", line 101, in __setattr__
    if k == "_ad_initialized" or self._ad_initialized is False:
AttributeError: 'InterpolatedParameter' object has no attribute '_ad_initialized'

I am running both FEniCS and dolfin-adjoint version 2019.1.0 in an anaconda3 environment. Does anyone know how to modify the code so that this interpolation keeps working with dolfin-adjoint? Thank you!

Increasing memory consumption during optimization when using ROL

Hi,

I have recently started to use the ROL optimization package. When using limited-memory BFGS as the optimizer, I noticed that memory consumption increases steadily during the optimization. This happens even though the "Maximum Secant Storage" parameter is set. For larger problems, this leads to an out-of-memory error. I have tracked the memory usage of the example below for 500 iterations using scipy L-BFGS-B and ROL as optimizers. The results are plotted in the attached figure.

[Figure: memory usage over 500 iterations for scipy L-BFGS-B vs. ROL]

As can be seen, the memory usage increases stepwise for ROL, while it more or less saturates for scipy.

Any idea why this is happening?

Best regards,

Gregor

Example used:

import os
os.environ["OMP_NUM_THREADS"] = "1"
import time
from firedrake import *
from firedrake_adjoint import *

family = "CG"
degree = 1
mesh = Mesh("AirboxMesh.msh")

FS = FunctionSpace(mesh, family, degree)
VFS = VectorFunctionSpace(mesh, family, degree)

m = interpolate(Constant((0., 0., 0.)), VFS)

u = TrialFunction(FS)
v = TestFunction(FS)

a = inner(grad(u), grad(v)) * dx
L = inner(m, grad(v)) * dx(subdomain_id = 2)

bc = DirichletBC(FS, Constant(0), "on_boundary")

u = Function(FS)
solver = LinearVariationalSolver(LinearVariationalProblem(a, L, u, bc))

print("solve")
stt = time.time()
solver.solve()
print("solving took ", time.time()-stt, " seconds")

print("Define J")
htarget = interpolate(Constant((1.,0.,0.)), VFS)
J = assemble(inner(grad(u)-htarget, grad(u)-htarget)*dx(subdomain_id = 1))
mc = Control(m)
Jhat = ReducedFunctional(J, mc)

print("start Optimization")
stt = time.time()
m_opt = minimize(Jhat, options = {"maxiter": 500, "disp": True, "ftol": 1e-20, "gtol": 1e-20})
print("optimization took ", time.time()-stt, " seconds")

'''
params_dict = {
    'General': {
        'Print Verbosity': 10,
        'Secant': {
            'Type': 'Limited-Memory BFGS',
            'Maximum Secant Storage': 10
        }
    },
    'Step': {
        'Type': 'Line Search',
        'Line Search': {
            'Descent Method': {
                'Type': 'Quasi-Newton Method'
            },
            'Curvature Condition': {
                'Type': 'Wolfe Conditions'
            },
            'Line-Search Method': {
                'Type': 'Backtracking',
            }
        }
    },
    'Status Test': {
        'Gradient Tolerance': 1e-20,
        'Step Tolerance': 1e-20,
        'Relative Step Tolerance': 1e-20,
        'Iteration Limit': 500
    }
}

solver = ROLSolver(problem, params_dict, inner_product=inner_product)
solver.checkGradient()
print("start Optimization")
stt = time.time()
m_opt = solver.solve()
print("optimization took ", time.time()-stt, " seconds")
'''

AirboxMesh.zip

Markings for tape forward traversals

Currently, markings/implicit optimization of the tape are only applied during reverse traversals of the tape (i.e. adjoint and Hessian computations).
Forward (recompute) and TLM traversals are not optimized.

No recent release

Hi pyadjoint devs,

We (the CRIKit team) are preparing to release CRIKit, and we're running into an issue with the fact that there hasn't been a pyadjoint release since mid-2019. Specifically, PyPI does not allow so-called "direct dependencies", which are dependencies like git repositories. We need the latest pyadjoint development for CRIKit to work, so we've been installing via "dolfin_adjoint @ git+https://github.com/dolfin-adjoint/pyadjoint.git@master" (in the install_requires in our setup.py), but we can't do that for the PyPI upload.

Would it be possible to create another release of pyadjoint with the latest updates?

Derivative of solution with respect to parameter

Suppose I don’t want to get the derivative of a scalar functional with respect to a parameter but the derivative of the solution of (say) a time-dependent PDE at a certain time T with respect to the parameter. In my case, I need the derivative dudq of the solution of the heat equation with potential q at a fixed time T for external use (uncertainty quantification). How can I get this? I included a program to better convey my meaning.

from fenics import *
from fenics_adjoint import *

T = 100.0     # final time = time of evaluation
N_t = 500     # number of time steps
N_s = 64      # space mesh size
dt = T/N_t    # time step size (this definition was missing in the original snippet)

# Set initial value
u_0 = Expression('0', degree=1)
# Set boundary value
u_D = Expression('0', degree=1)

# Define a 1D, n=64, uniform mesh on an interval
mesh = UnitIntervalMesh(N_s)

# Define finite element space
V = FunctionSpace(mesh, 'P', 1)

def boundary(x, on_boundary):
    return on_boundary

# Dirichlet boundary conditions
bc = DirichletBC(V, u_D, boundary)

# Create function q (potential)
q = Expression('0.1-0.05*sin(2*pi*(x[0]-0.25))', degree=1)
q = interpolate(q,V)

# Define the variational problem
# The equation is dt*u - grad[ q*grad(u)] = r
u = interpolate(u_0,V)
u_next = TrialFunction(V)
v = TestFunction(V)
r = Expression('x[0] >= 0.215 && x[0] <= 0.315 ? (4*sin(4*pi*t)+0.001*pow(t,2)) : 0', \
               degree=1, t=0)

a = u_next*v*dx + dt*dot(q*grad(u_next), grad(v))*dx 
L = (u + dt*r)*v*dx  # test function v added; it was missing in the original snippet

# Start time stepping
t = 0
u_next = Function(V)
# Iterate (backward-Euler scheme)
for n in range(N_t):
    # Update current time
    t += dt
    # At time n, we want r = r((n+1)dt)
    r.t = t
    # Compute solution
    solve(a == L, u_next, bc)
    # Update previous solution
    u.assign(u_next)

# The following does _not_ work, but I would like something like that.
dudq = compute_gradient(u, q)

Non-deterministic error in Geometric multigrid with firedrake-adjoint

I am using the latest pyadjoint-firedrake docker image. The following code

from firedrake import *
from firedrake_adjoint import *

mesh = UnitSquareMesh(8, 8)
hierarchy = MeshHierarchy(mesh, 1)
mesh = hierarchy[-1]

V = FunctionSpace(mesh, "CG", 1)

u = TrialFunction(V)
v = TestFunction(V)

a = dot(grad(u), grad(v))*dx

bcs = DirichletBC(V, zero(), (1, 2, 3, 4))

x, y = SpatialCoordinate(mesh)

f = Constant(0.1)
L = f*v*dx
u = Function(V)

parameters = {
   "ksp_type": "cg",
   "ksp_converged_reason": None,
   "pc_type": "mg",
}
solve(a == L, u, bcs=bcs, solver_parameters=parameters)
J = assemble(u*u*dx)
m = Control(f)
Jhat = ReducedFunctional(J, m)
print("new cost function")
value = Jhat(f)
print("new derivative")
Jhat.derivative()

fails with

[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run 
[0]PETSC ERROR: to get more information on the crash.
application called MPI_Abort(MPI_COMM_WORLD, 50152059) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=50152059
:
system msg for write_line failure : Bad file descriptor

most of the time. Sometimes it seems to converge, which is really strange. Can somebody else reproduce it? I will try to dig deeper to provide more details.

Assigning AdjFloat to Constant leads to incompatible adj_value

Consider the following code:

# imports added for completeness; the issue reports the same behaviour with firedrake_adjoint
from fenics import *
from fenics_adjoint import *

af = AdjFloat(0.1)
uc = Constant(0.0)
uc.assign(af)
mesh = UnitSquareMesh(1, 1)
V = FunctionSpace(mesh, "CG", 1)
u = project(uc, V)
J = assemble(u*u*dx)
rf = ReducedFunctional(J, Control(af))
rf.derivative()

This fails in fenics_adjoint with:

TypeError: float() argument must be a string or a number, not 'dolfin.cpp.la.Vector'

and similar in firedrake_adjoint. The reason is that the AdjFloat control ends up with an adj_value in the form of a length-1 Vector. This is caused by the adjoint of the project creating such a Vector as the adjoint value of the Constant it depends on, and the assign subsequently copying this to be the adjoint value of the AdjFloat. What does work, however, is simply assigning the Constant to the Function with u.assign(uc); in that case the adjoint value created for the Constant is just a float (see the sketch below).
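
For reference, a minimal sketch of the working variant described above (same setup as the failing snippet, but assigning the Constant to the Function instead of projecting it):

from fenics import *
from fenics_adjoint import *

af = AdjFloat(0.1)
uc = Constant(0.0)
uc.assign(af)
mesh = UnitSquareMesh(1, 1)
V = FunctionSpace(mesh, "CG", 1)
u = Function(V)
u.assign(uc)                 # instead of u = project(uc, V)
J = assemble(u*u*dx)
rf = ReducedFunctional(J, Control(af))
rf.derivative()              # works: the adj_value created for the Constant is a plain float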

So the question is: should the adj_value of a Constant 1) be a Vector (obtained from the vector of its representation as a function in R functionspace) or should the adj_value simply 2) be a float?

In case 1) we should fix both the Function assign (when it assigns a Constant to a Function) so that it creates a Vector adj_value on the Constant, and the Constant assign (when it assigns an AdjFloat to a Constant) so that it converts that Vector back to a float for the adj_value of the AdjFloat.
In case 2) we need to fix SolveBlock and AssembleBlock, which currently create Vector adj_values for Constants.

Happy to fix myself - but I'd like to hear your opinion on the question above...

optimization without PDE constraint

Hello all,
I am trying to run an optimization problem that does not have any PDE constraint. Below is my code:

import numpy as np
import matplotlib.pyplot as plt
from dolfin import *
from dolfin_adjoint import *
import moola

n = 10
mesh = RectangleMesh(Point(-1,-1),Point(1,1), n, n)

V = FunctionSpace(mesh, "CG", 1)
u = Function(V, name = "Control")
v = TestFunction(V)
S0 = Constant(1)

bc = DirichletBC(V, 1, "on_boundary")

J = assemble((0.5*inner(grad(u), grad(u)) - u*S0)*dx)
J_hat = ReducedFunctional(J, Control(u))   #make J only function of u
m_opt = minimize(J_hat, method = "L-BFGS-B", options = {"gtol": 1e-16, "ftol": 1e-16})
                 
u.rename("u","temp")
fileY = File("temp.pvd");
fileY << u;

However, I don't know how should I apply the boundary condition of the control variable "u" to the minimization problem. Also, I am pretty sure that the above code is not giving me the correct answer because when I am trying to solve its Euler-Lagrange equation using the below code, the answers are different:

from dolfin import *
from fenics import *
import numpy as np

parameters["form_compiler"]["cpp_optimize"] = True
parameters["form_compiler"]["representation"] = "uflacs"

mesh = RectangleMesh(Point(-1,-1),Point(1,1), 10, 10)
V = FunctionSpace(mesh, "CG", 1)

du = TrialFunction(V)            # Incremental displacement
u  = Function(V) 
v  = TestFunction(V)             # Test function
S0 = Constant(1)

F =  (0.5*inner(grad(u), grad(v)) - v*S0)*dx
bc = DirichletBC(V, 1, "on_boundary")
solve(F == 0, u, bc)
 
u.rename("u","temp")
fileY = File("temp.pvd");
fileY << u;

Thanks in advance for your help and support

ipopt missing in singularity image

I am trying to run the software on an HPC Linux cluster. Singularity must be used in place of Docker for security reasons. The commands below illustrate the problem:

$ singularity pull docker://quay.io/dolfinadjoint/pyadjoint:2019.1.0
$ singularity exec pyadjoint_2019.1.0.sif python3 -c "import ipopt"
Traceback (most recent call last):
File "", line 1, in
ModuleNotFoundError: No module named 'ipopt'

Importing the other modules works fine:

from dolfin import *
from dolfin_adjoint import *

I tried using "singularity build" but that also didn't work. Have you seen this before?

Here is some information about our system:

$ singularity --version
singularity version 3.5.2-1.1.sdl7

$ uname -a
Linux della5.princeton.edu 3.10.0-1127.10.1.el7.x86_64 #1 SMP Thu Jun 4 09:27:22 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux

$ cat /etc/os-release
NAME="Springdale Linux"
VERSION="7.8 (Verona)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.8"
PRETTY_NAME="Springdale Linux 7.8 (Verona)"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:springdale:linux:7.8:GA"
HOME_URL="http://springdale.princeton.edu/"
BUG_REPORT_URL="https://springdale.math.ias.edu/"

REDHAT_BUGZILLA_PRODUCT="Springdale Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.8
REDHAT_SUPPORT_PRODUCT="Springdale Linux"
REDHAT_SUPPORT_PRODUCT_VERSION=7.8

Assignments from expressions with Constants not taped correctly

Consider the following example:

from firedrake import *
from firedrake_adjoint import *

mesh2d = UnitSquareMesh(2,2)

V = FunctionSpace(mesh2d, 'CG', 1)

average_size= Constant(1.5)
avg_fun = Function(V).assign(average_size)
settling_velocity = Function(V).assign(2*average_size)



J = assemble(settling_velocity*dx)

print(J)

rf = ReducedFunctional(J, Control(average_size))
print(rf(-0.1))
print(rf(0.1))
print(rf(1.1))

This currently prints the same unchanged value of 3.0 three times, i.e. it is not updating the Constant in the assignment. Changing the assignment to .assign(2*avg_fun) does give the correct result (see the sketch below). This is because in the __init__ of FunctionAssignBlock (dolfin_adjoint_common/blocks/function.py) only Functions are registered as dependencies. This seems to be based on the assumption that assign only takes either a single (overloaded) Function/Constant/AdjFloat, or a linear combination of Functions (see the lincom variable). This overlooks the case of a scalar UFL expression. Moreover, I'm not sure it's true that only linear combinations of Functions are allowed, and even those might contain Constants or AdjFloats that need updating.
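
For reference, a sketch of the workaround mentioned above, using a linear combination of Functions instead of an expression containing the Constant:

from firedrake import *
from firedrake_adjoint import *

mesh2d = UnitSquareMesh(2, 2)
V = FunctionSpace(mesh2d, 'CG', 1)

average_size = Constant(1.5)
avg_fun = Function(V).assign(average_size)
# workaround: assign 2*avg_fun (a Function) rather than 2*average_size (a Constant expression)
settling_velocity = Function(V).assign(2*avg_fun)

J = assemble(settling_velocity*dx)
rf = ReducedFunctional(J, Control(average_size))
print(rf(0.1))   # now changes when the control value changes
print(rf(1.1))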

Support for taping non-scalar outputs: ReducedFunction

There is stagnated work on a ReducedFunction class that is like a ReducedFunctional but works with non-scalars.

There is a stable implementation sitting in the CRIKit repo right now, but it would be better for it to be merged into the Pyadjoint repo. I'll start working on it in my pyadjoint fork, and I'd like to get it completed before the end of the year by working on it a little each week.

I'm making this issue to open up a discussion about this work. For example, is this feature desired and does it need to be implemented in a much different way than it is in CRIKit?

Main goal

Create a ReducedFunction class that represents a function $J(m)$.

  • adj_jac_action(v): computes $v \cdot \frac{dJ}{dm}$.
  • jac_action(h): computes $\frac{dJ}{dm} \cdot h$.
  • jac_matrix(): computes $\frac{dJ}{dm}$.
  • hess_action(h, v): computes $v \cdot \frac{d^2J}{dm^2} \cdot h$.
  • __call__(m): computes $J(m)$.

The existing ReducedFunctional is a special case of ReducedFunction where the output is an AdjFloat, so it can be implemented by inheriting from ReducedFunction and adding the special-case methods derivative() and hessian() (see the sketch after the list below).

  • ReducedFunctional.derivative() == ReducedFunction.adj_jac_action(AdjFloat(1))
  • ReducedFunctional.hessian() == ReducedFunction.hess_action(h, AdjFloat(1))
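
A hypothetical sketch of what the proposed interface could look like (method names taken from the list above; none of this is existing pyadjoint API):

from pyadjoint import AdjFloat


class ReducedFunction:
    """Sketch of a taped map J(m) with possibly non-scalar output."""

    def __init__(self, outputs, controls):
        self.outputs = outputs
        self.controls = controls

    def __call__(self, m):
        """Recompute J(m) by replaying the tape (omitted in this sketch)."""
        raise NotImplementedError

    def jac_action(self, h):
        """Compute dJ/dm . h via a TLM sweep (omitted in this sketch)."""
        raise NotImplementedError

    def adj_jac_action(self, v):
        """Compute v . dJ/dm via an adjoint sweep (omitted in this sketch)."""
        raise NotImplementedError

    def hess_action(self, h, v):
        """Compute v . d2J/dm2 . h (omitted in this sketch)."""
        raise NotImplementedError


class ScalarReducedFunction(ReducedFunction):
    """Scalar special case, mirroring the existing ReducedFunctional."""

    def derivative(self):
        return self.adj_jac_action(AdjFloat(1.0))

    def hessian(self, h):
        return self.hess_action(h, AdjFloat(1.0))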

Subfunctions do not work with FunctionAssigner

I have the following example which does work in pure dolfin, but which does not work with dolfin_adjoint

# subfunction_assign.py
import dolfin

# from dolfin import Function, FunctionAssigner, UnitSquareMesh
from dolfin_adjoint import Function, FunctionAssigner, UnitSquareMesh

mesh = UnitSquareMesh(3, 3)
V = dolfin.VectorFunctionSpace(mesh, "R", 0, dim=2)
v = Function(V)

W = V.sub(0).collapse()
w = Function(W)
fa = FunctionAssigner(W, V.sub(0))
print("v: ", hasattr(v, "_ad_will_add_as_dependency"))
print("v.sub(0): ", hasattr(v.sub(0), "_ad_will_add_as_dependency"))
fa.assign(w, v.sub(0))

The output from this is

$ python subfunction_assign.py
v:  True
v.sub(0):  False
Traceback (most recent call last):
  File "subfunction_assign.py", line 15, in <module>
    fa.assign(w, v.sub(0))
  File "/Users/henriknf/miniconda3/envs/pyadjoint/lib/python3.7/site-packages/fenics_adjoint/types/function_assigner.py", line 34, in assign
    block = FunctionAssignerBlock(self, inputs)
  File "/Users/henriknf/miniconda3/envs/pyadjoint/lib/python3.7/site-packages/fenics_adjoint/blocks/function_assigner.py", line 13, in __init__
    self.add_dependency(i)
  File "/Users/henriknf/miniconda3/envs/pyadjoint/lib/python3.7/site-packages/pyadjoint/block.py", line 52, in add_dependency
    dep._ad_will_add_as_dependency()
AttributeError: 'Function' object has no attribute '_ad_will_add_as_dependency'

Is this a bug, or should I do this differently?

Ipopt import error on pyadjoint:latest docker image

libipopt.so.1 cannot be found while importing ipopt.

Can be tested using poisson-topology.py, which throws an error:

Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/pyadjoint/ipopt.py", line 11, in <module>
import cyipopt # noqa: F401
File "/home/fenics/shared/cyipopt-master/cyipopt/__init__.py", line 12, in <module>
from ipopt_wrapper import *
ImportError: /usr/local/lib/libipopt.so.1: undefined symbol: MPI_Comm_rank

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python3.6/dist-packages/pyadjoint/ipopt.py", line 13, in <module>
raise ImportError("You need to install cyipopt. It is recommended to install IPOPT with HSL support!")
ImportError: You need to install cyipopt. It is recommended to install IPOPT with HSL support!

Creation of Constant from AdjFloat is not annotated

The following works as expected:

from fenics import *
from fenics_adjoint import *

mesh = UnitSquareMesh(1,1)
c = Constant(1.0)
A = assemble(c*dx(domain=mesh))
A_const = Constant(0.0)
A_const.assign(A)
J = assemble(A_const*dx(domain=mesh))
rf = ReducedFunctional(J, Control(c))
print(rf(Constant(2.)))

but this doesn't:

...
A = assemble(c*dx(domain=mesh))
A_const = Constant(A)
J = assemble(A_const*dx(domain=mesh))
...

TSFC-Interface Bug? IndexError: tuple index out of range

I am running into the following issue

Traceback (most recent call last):
  File "/home/e/Desktop/arclength/ac-second.py", line 169, in <module>
    dj = Jhat.derivative()
  File "/home/e/Software/firedrake/firedrake/src/pyadjoint/pyadjoint/reduced_functional.py", line 61, in derivative
    derivatives = compute_gradient(self.functional,
  File "/home/e/Software/firedrake/firedrake/src/pyadjoint/pyadjoint/drivers.py", line 29, in compute_gradient
    tape.evaluate_adj(markings=True)
  File "/home/e/Software/firedrake/firedrake/src/pyadjoint/pyadjoint/tape.py", line 140, in evaluate_adj
    self._blocks[i].evaluate_adj(markings=markings)
  File "/home/e/Software/firedrake/firedrake/src/pyadjoint/pyadjoint/tape.py", line 46, in wrapper
    return function(*args, **kwargs)
  File "/home/e/Software/firedrake/firedrake/src/pyadjoint/pyadjoint/block.py", line 130, in evaluate_adj
    adj_output = self.evaluate_adj_component(inputs,
  File "/home/e/Software/firedrake/firedrake/src/pyadjoint/dolfin_adjoint_common/blocks/assembly.py", line 58, in evaluate_adj_component
    output = self.compat.assemble_adjoint_value(dform)
  File "/home/e/Software/firedrake/firedrake/src/pyadjoint/dolfin_adjoint_common/compat.py", line 123, in assemble_adjoint_value
    result = backend.assemble(*args, **kwargs)
  File "PETSc/Log.pyx", line 115, in petsc4py.PETSc.Log.EventDecorator.decorator.wrapped_func
  File "PETSc/Log.pyx", line 116, in petsc4py.PETSc.Log.EventDecorator.decorator.wrapped_func
  File "/home/e/Software/firedrake/firedrake/src/firedrake/firedrake/adjoint/assembly.py", line 18, in wrapper
    output = assemble(*args, **kwargs)
  File "/home/e/Software/firedrake/firedrake/src/firedrake/firedrake/assemble.py", line 126, in assemble
    return assemble_form(expr, tensor, bcs, diagonal, assembly_type,
  File "PETSc/Log.pyx", line 115, in petsc4py.PETSc.Log.EventDecorator.decorator.wrapped_func
  File "PETSc/Log.pyx", line 116, in petsc4py.PETSc.Log.EventDecorator.decorator.wrapped_func
  File "/home/e/Software/firedrake/firedrake/src/firedrake/firedrake/assemble.py", line 168, in assemble_form
    return _assemble_vector(expr, tensor, bcs, opts)
  File "/home/e/Software/firedrake/firedrake/src/firedrake/firedrake/assemble.py", line 303, in _assemble_vector
    _assemble_expr(expr, vector, bcs, opts, _AssemblyRank.VECTOR)
  File "/home/e/Software/firedrake/firedrake/src/firedrake/firedrake/assemble.py", line 453, in _assemble_expr
    parloops = _make_parloops(*parloop_init_args)
  File "<decorator-gen-34>", line 2, in _make_parloops
  File "/home/e/Software/firedrake/firedrake/src/firedrake/firedrake/utils.py", line 73, in wrapper
    return f(*args, **kwargs)
  File "/home/e/Software/firedrake/firedrake/src/firedrake/firedrake/assemble.py", line 700, in _make_parloops
    kernels = tsfc_interface.compile_form(expr, "form", parameters=form_compiler_parameters, diagonal=diagonal)
  File "PETSc/Log.pyx", line 115, in petsc4py.PETSc.Log.EventDecorator.decorator.wrapped_func
  File "PETSc/Log.pyx", line 116, in petsc4py.PETSc.Log.EventDecorator.decorator.wrapped_func
  File "/home/e/Software/firedrake/firedrake/src/firedrake/firedrake/tsfc_interface.py", line 229, in compile_form
    kinfos = TSFCKernel(f, prefix, parameters,
  File "/home/e/Software/firedrake/firedrake/src/PyOP2/pyop2/caching.py", line 178, in __new__
    key = cls._cache_key(*args, **kwargs)
  File "/home/e/Software/firedrake/firedrake/src/firedrake/firedrake/tsfc_interface.py", line 121, in _cache_key
    + str(diagonal)).encode()).hexdigest(), form.ufl_domains()[0].comm
IndexError: tuple index out of range

@wence-
I have a feeling that the bug is similar to this issue that I previously closed because I didn't encounter it any more and I couldn't find the code that caused it.
firedrakeproject/tsfc#236

Is the compute_adjoint function not available in dolfin-adjoint?

Dear everyone,

I am starting to use dolfin-adjoint to perform a sensitivity analysis. I am currently using the latest docker image (quay.io/dolfinadjoint/pyadjoint:latest). However, I get an error when I try to compute the adjoint variable of the problem.
The error message is "name 'compute_adjoint' is not defined". Is this function no longer defined in dolfin-adjoint? If so, is there any alternative way to calculate the adjoint variable?

Sincerely yours
Phuris

Error in custom function example normalise.py (modifies input func)

Hi,

when I experimented with custom functions and overloading, I noticed what
I would consider a bug in pyadjoint/docs/source/_static/overloading/normalise.py:

The function

def normalise(func):
    vec = func.vector()
    vec /= vec.norm('l2')
    return Function(func.function_space(), vec)

modifies the input func, thus normalising it as well, since func.vector()
is modified without first making a deep copy.
This can have side effects:

If the input f to normalise(f) is reused after the call, e.g., if we modify
tutorial9_overloading.py as follows:

g = normalise(f)

J = assemble((g+f)*dx)

then the Taylor test fails:

Running Taylor test
Computed residuals: [0.002464839972388524, 0.0011515521177464082, 0.000555068990161312, 0.0002722974398862185]
Computed convergence rates: [1.097914276807522, 1.0528407051626965, 1.0274836784598766]

This problem is not visible in the original tutorial9_overloading.py since
there only g is used after calling normalise, but not f:

J = assemble(g*dx)

Thus, I think normalise() should be changed to either

def normalise(func):
    vec = func.vector().copy()
    vec /= vec.norm('l2')
    return Function(func.function_space(), vec)

or

def normalise(func):
    vec = func.vector()
    return Function(func.function_space(), vec/vec.norm('l2'))

Best,

Michael

Should the tlm_value setter use the current block-variable instead of the original?

Consider the following example:

from firedrake import *
from firedrake_adjoint import *
mesh = UnitSquareMesh(1,1)
V = FunctionSpace(mesh, "CG", 1)
u = Function(V)
h = Function(V)

h.assign(1.)

u.assign(2.)
u.tlm_value = h

J = assemble(u**2*dx)

tape = get_working_tape()
tape.evaluate_tlm()
print(J.tlm_value)

This doesn't work as expected because u.tlm_value = h sets the tlm_value of the original block-variable (the state before the assign), which does not contribute to the functional. When setting a tlm_value, I think it makes more sense to say that you want the derivative w.r.t. the current state. This is, by the way, also what the setter of adj_value on overloaded objects does.

I can vaguely see the rationale that when u gets reused (say in a timestepping procedure) and the user by mistake sets the tlm_value at the end of the model, it could be doing what's intended - however typically there is still an initialisation step of u that you don't want included in the replay. The user should just take care when to set the tlm_value - this is exactly the same as when choosing the control.

Re the getter for tlm_value and adj_value: with adj_value it currently uses original_block_variable.adj_value to get as opposed to block_variable.adj_value which is set in the setter. I also think this is confusing actually. Again I can see the rationale, but again I think you still end up with the wrong result if there's an initialisation step and you want to retrieve the adj_value of the state after that initialisation.

"Excesive" execution time in a demo file

I have been trying to run the following sample file: link.

I have "changed" the size of the problem by setting the number of FEM partitions to 64 (the default was 250).

I am running on 4 cores (as in the default execution call). It has been 11 hours and still nothing.

Does anybody get better performance?

Running Singularity image with MPI

I have created a Singularity image from the Docker image:

singularity pull docker://quay.io/dolfinadjoint/pyadjoint:2019.1.0

I now want to run an example in parallel using MPI and Slurm. Here is my Slurm script:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=01:00:00

export SINGULARITYENV_LD_LIBRARY_PATH=/.singularity.d/libs:/usr/local/lib
srun -n 4 singularity exec pyadjoint_2019.1.0.sif python3 stokes-topology.py

However, I find that the example script is run four times serially instead of once with the work divided among the processors. When I load an Open MPI module in the Slurm script, the code hangs, but the MPI.comm_world size is 4 instead of 1 (as in the previous case).

stokes-topology.py uses mpi4py. Can someone tell me which MPI implementation is used for building mpi4py? I've looked at the dockerfile but it is not clear to me. We do not have MPICH on our cluster.

Error when running heat-equation.py example

When running the fenics heat-equation.py example, I encounter the following error

Traceback (most recent call last):
  File "heat-equation.py", line 45, in <module>
    (times, computed_states, u) = main(guess_ic, annotate=True)
  File "heat-equation.py", line 29, in main
    solve(F == 0, u_next, J=derivative(F, u_next), annotate=annotate)
  File "/home/user/anaconda3/envs/fenics2019jaxadjoint/lib/python3.8/site-packages/fenics_adjoint/solving.py", line 47, in solve
    block = solve_block_type(*args, ad_block_tag=ad_block_tag, **sb_kwargs)
TypeError: __init__() missing 1 required positional argument: 'b'

See attached for the complete output and list of packages installed:

error.txt
condaenv.txt

Reduced functional based on FEniCS NewtonSolver cannot be re-evaluated due to Parameters.to_dict()

Parameters.to_dict() does not unravel the parameter values, and can thus not be reused in recomputations.
Running tests/migration/burgers_oo/burgers_oo.py with an additional line Jhat(ic)
produces the following error:

Traceback (most recent call last):
  File "burgers_oo.py", line 69, in <module>
    Jhat(ic)
  File "/home/fenics/shared/pyadjoint/tape.py", line 46, in wrapper
    return function(*args, **kwargs)
  File "/home/fenics/shared/pyadjoint/reduced_functional.py", line 135, in __call__
    blocks[i].recompute()
  File "/home/fenics/shared/pyadjoint/block.py", line 351, in recompute
    prepared)
  File "/home/fenics/shared/fenics_adjoint/solving.py", line 558, in recompute_component
    return self._forward_solve(lhs, rhs, func, bcs, **self.forward_kwargs)
  File "/home/fenics/shared/fenics_adjoint/solving.py", line 543, in _forward_solve
    backend.solve(lhs == rhs, func, bcs, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/dolfin/fem/solving.py", line 220, in solve
    _solve_varproblem(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/dolfin/fem/solving.py", line 265, in _solve_varproblem
    solver.parameters.update(solver_parameters)
  File "/usr/local/lib/python3.6/dist-packages/dolfin/parameter/__init__.py", line 38, in update
    self[key].update(params[key])
  File "/usr/local/lib/python3.6/dist-packages/dolfin/parameter/__init__.py", line 40, in update
    self[key] = params[key]
TypeError: __setitem__(): incompatible function arguments. The following argument types are supported:
    1. (self: dolfin.cpp.parameter.Parameters, arg0: str, arg1: none) -> None
    2. (self: dolfin.cpp.parameter.Parameters, arg0: str, arg1: bool) -> None
    3. (self: dolfin.cpp.parameter.Parameters, arg0: str, arg1: str) -> None
    4. (self: dolfin.cpp.parameter.Parameters, arg0: str, arg1: int) -> None
    5. (self: dolfin.cpp.parameter.Parameters, arg0: str, arg1: float) -> None
    6. (self: dolfin.cpp.parameter.Parameters, arg0: str, arg1: dolfin.cpp.parameter.Parameters) -> None

Invoked with: <dolfin.cpp.parameter.Parameters object at 0x7f1ab69a0c00>, 'absolute_tolerance', <dolfin.cpp.parameter.Parameter object at 0x7f1aba2335a8>

compute_gradient w.r.t flow-velocity for advection equation: Errors in taylor_test and compute_gradient

Hi,

I am currently facing two errors and truly have no idea what I am doing wrong 🤔 Just started to work with dolfin-adjoint and fenics a few weeks ago and maybe I am overlooking something obvious. Any help or ideas on how to debug or fix one of the errors would be very much appreciated 🙂

The code below is adapted from the undocumented advection-diffusion demos to be a time-dependent advection problem. With the initial condition u_0 = sin(2*pi*x), the exact solution for constant beta, u(x, t) = sin(2*pi*(x - beta_x*t)), is also computed correctly by the code.

Error 1:

I would like to add an optimization routine w.r.t to the advection velocity in x-direction beta = [beta_x, beta_y]^T = [0.5, 0]^T. (This is the simplest example just for testing purposes).

The derivative dJdbeta_x and the ReducedFunctional are computed without any error. However, when I then check the result with taylor_test I get the following error message:

~/anaconda3/envs/dolfin_master/lib/python3.8/site-packages/dolfin/fem/solving.py in _extract_args(*args, **kwargs)
    320     for kwarg in kwargs.keys():
    321         if kwarg not in valid_kwargs:
--> 322             raise RuntimeError("Solve variational problem. Illegal keyword argument \'{}\'.".format(kwarg))
    323 
    324     # Extract equation

RuntimeError: Solve variational problem. Illegal keyword argument 'function'.

Error 2:
Eventually, I would also like to define the velocity as an Expression so that I can have beta dependent on x[0] and/or x[1].

UPDATE: I found out that when using an Expression, one has to supply its user-defined derivatives and dependencies.

beta_x = Expression("0.5", degree=1) 
beta_y = Expression("0.0", degree=1)
beta = Expression(("beta_x","beta_y"), degree=2, beta_x = beta_x, beta_y=beta_y)
beta.dependencies = [beta_x, beta_y]
beta.user_defined_derivatives = {beta_x: Expression("0",degree=1), beta_y : Expression("0",degree=1)}

But even when I supply these, I still get the same NotImplementedError:

NotImplementedError                       Traceback (most recent call last)
<ipython-input-13-91f6fd6fb64c> in <module>
     78 
     79 J = assemble(inner(u_old, u_old)*dx)
---> 80 dJdbeta0 = compute_gradient(J, Control(beta_x))
....
~/anaconda3/envs/dolfin_master/lib/python3.8/site-packages/pyadjoint/overloaded_type.py in _ad_convert_type(self, value, options)
    133 
    134         """
--> 135         raise NotImplementedError
    136 
    137     def _ad_create_checkpoint(self):

Here is a shortened version of the code that reproduces the error message:

from dolfin import *
from fenics_adjoint import *
import matplotlib.pyplot as plt
import numpy as np

# adapted code from
# """ Steady state advection-diffusion equation,
# discontinuous formulation using full upwinding.
# Implemented in python from cpp demo by Johan Hake.
# """

m_size = 64
mesh = UnitSquareMesh(m_size, m_size)
parameters["ghost_mode"] = "shared_facet"

delta_t = 0.01
end_time = 1
time_steps = int(end_time/delta_t)

V_dg = FunctionSpace(mesh, "DG", 1)

beta_x, beta_y = Constant(0.5), Constant(0.0) # Causes Error in line taylor_test

# Eventually I would like beta not be constant but dependent on x/y 
# Causes not implemented error in compute_gradient
# beta_x = Expression("0.5",degree=1)
# beta_y = Expression("0.0", degree=1)
# beta = Expression(("beta_x","beta_y"), degree=2, beta_x = beta_x, beta_y=beta_y)
# beta.dependencies = [beta_x, beta_y]
# beta.user_defined_derivatives = {beta_x: Expression("0",degree=1), beta_y : Expression("0",degree=1)}

beta = as_vector([beta_x, beta_y])

walls   = 'near(x[1], 0) || near(x[1], 1) || near(x[0], 1) || near(x[0], 0)'
u_exact = Expression("sin(2*pi*(x[0] - beta_x*t))", degree=3, t=0, beta_x=beta_x)
bcs     = DirichletBC(V_dg, u_exact, walls, method='geometric') 

phi,v = TrialFunction(V_dg), TestFunction(V_dg)
u_new, u_old = Function(V_dg), Function(V_dg)

f = Constant(0.0)
u0 = Expression("sin(2*pi*x[0])", degree=1, t=0)

n = FacetNormal(mesh)
h = CellDiameter(mesh)
h_avg = (h('+') + h('-'))/2
un = (dot(beta, n) + abs(dot(beta, n)))/2.0

a_int = dot(grad(v), - beta*phi)*dx
a_vel = dot(jump(v), un('+')*phi('+') - un('-')*phi('-'))*dS
a_boundary = dot(v, u_exact*phi)*ds
a = a_vel + a_int + a_boundary + Constant(1/delta_t)*phi*v*dx
L = v*f*dx + Constant(1/delta_t)*u_old*v*dx

project(u0, V_dg, function=u_old)

plot(u_old)
plt.title("Initial condition at t=0")
plt.show()

t = 0

for n in range(time_steps):
        t += delta_t
        u_exact.t = t

        solve(a==L, u_new, bcs)
        u_old.assign(u_new)

plot(u_old)
plt.title("Solution at t=1")
plt.show()

# -----------------------------------
# Gradient computation 
# -----------------------------------
J = assemble(inner(u_old, u_old)*dx)
dJdbeta_x = compute_gradient(J, Control(beta_x))
Jhat = ReducedFunctional(J, Control(beta_x))
h = Constant(0.01)
conv_rate = taylor_test(Jhat, beta_x, h)

I use dolfin-adjoint 2019.1.0
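
A workaround worth trying for Error 1 (a guess based on the traceback, not a verified fix): avoid the function= keyword in project, which the replay appears to forward to solve, and assign the projected function instead:

# instead of: project(u0, V_dg, function=u_old)
u_old.assign(project(u0, V_dg))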

Modify test rtol

I'm trying to update the solver defaults in Firedrake to use LU if nothing else is specified. We also change the default SNES linesearch.

This changes the convergence of a taylor test in test_hessian.py::test_dirichlet. I can recover it with:

diff --git a/tests/firedrake_adjoint/test_hessian.py b/tests/firedrake_adjoint/test_hessian.py
index e771462..5a581a9 100644
--- a/tests/firedrake_adjoint/test_hessian.py
+++ b/tests/firedrake_adjoint/test_hessian.py
@@ -189,7 +189,7 @@ def test_dirichlet():
     bc = DirichletBC(V, c, "on_boundary")
 
     F = inner(grad(u), grad(v)) * dx + u**4*v*dx - f**2 * v * dx
-    solve(F == 0, u, bc)
+    solve(F == 0, u, bc, solver_parameters={"snes_rtol": 1e-10})
 
     J = assemble(u ** 4 * dx)
     Jhat = ReducedFunctional(J, Control(c))

Can someone apply this? (I would prepare a PR myself, but I don't have access since the migration to GitHub.)

UFL constraints not available for Firedrake

UFL constraints are only available in fenics_adjoint/ufl_constraints. The example examples/stokes-topology/stokes-topology-rol-firedrake.py does not work as is. It's probably a simple fix to move ufl_constraints.py to pyadjoint instead.

I got it working for Firedrake by removing the imports of fenics and of firedrake_adjoint.types (the latter does not seem to exist).

Use previous solutions with a functional involving time stepping

Hi,

I've been trying to reuse the previously computed solution of a field as the initial guess when the functional is re-evaluated. This works fine if the field involves only one solve, but I cannot make it work with multiple time steps.

Here is a MWE:


from dolfin import *
from dolfin_adjoint import *



# Sub domain for Dirichlet boundary condition
class DirichletBoundary(SubDomain):
    def inside(self, x, on_boundary):
        return abs(x[0] - 1.0) < DOLFIN_EPS and on_boundary

# Create mesh and define function space
mesh = UnitSquareMesh(50, 50)

V = FunctionSpace(mesh, "CG", 1)

# Define boundary condition (g will increase in the time stepping)
g = Constant(0.0)
bc = DirichletBC(V, g, DirichletBoundary())

# Define control
alpha = Function(V)
alpha.assign(Constant(1.))

# Define variational problem
u = Function(V)
v = TestFunction(V)

# u kept from the previous execution of forward function
u_stored = [Function(V) for i in range(4)]

def forward(alpha):
    # Dummy problem
    f = Expression("x[0]*sin(x[1])", degree=2)
    F = inner((alpha + alpha*u**2)*grad(u), grad(v))*dx - f*v*dx
    
    J = derivative(F, u)
    
    problem = NonlinearVariationalProblem(F, u, bcs=bc, J=J)
    solver = NonlinearVariationalSolver(problem)
    
    J = 0.
    
    # 4 time steps here
    for i in range(4):
        # change BC
        g.assign(Constant(i+1))
        # u is assigned the solution from the previous run
        u.assign(u_stored[i])
        # solve for new u
        solver.solve()
        # store new solution
        u_stored[i].assign(u)
        # dummy functional depending on the results of each time step
        J += assemble(u**2*dx)
    
    return J



tape = Tape()
set_working_tape(tape)

r = forward(alpha)

RF = ReducedFunctional(r, Control(alpha))

RF.optimize_tape()

# Execute once - at this stage, u_stored values should have changed - but they didn't
print("First execution")
RF(100.)

# Execute once more - this should use the previous u_stored as initial solutions, but it doesn't (the residuals are still high in the first iterations)
print("Second execution")
RF(100.)

Look at the first iterations of the Newton solver; during the first execution, they look like this:

  Newton iteration 0: r (abs) = 7.039e+02 (tol = 1.000e-10) r (rel) = 1.000e+00 (tol = 1.000e-09)
  Newton iteration 1: r (abs) = 5.899e-03 (tol = 1.000e-10) r (rel) = 8.381e-06 (tol = 1.000e-09)
  Newton iteration 2: r (abs) = 1.711e-06 (tol = 1.000e-10) r (rel) = 2.430e-09 (tol = 1.000e-09)
  Newton iteration 3: r (abs) = 2.859e-12 (tol = 1.000e-10) r (rel) = 4.061e-15 (tol = 1.000e-09)

And during the second execution, they look the same:

  Newton iteration 0: r (abs) = 7.039e+02 (tol = 1.000e-10) r (rel) = 1.000e+00 (tol = 1.000e-09)
  Newton iteration 1: r (abs) = 5.899e-03 (tol = 1.000e-10) r (rel) = 8.381e-06 (tol = 1.000e-09)
  Newton iteration 2: r (abs) = 1.711e-06 (tol = 1.000e-10) r (rel) = 2.430e-09 (tol = 1.000e-09)
  Newton iteration 3: r (abs) = 2.859e-12 (tol = 1.000e-10) r (rel) = 4.061e-15 (tol = 1.000e-09)

If the previous solution had been used, the residual at the first iteration would be 2.859e-12, which is not the case. The replay actually uses the u_stored values computed during the original forward pass, but these fields never change afterwards.

Things I've tried so far without success:

  • Using placeholders for each u_stored
  • Using controls and the update function on the u_stored fields as described in the example: Mathematical Programs with Equilibrium Constraints

Any help or pointers towards relevant documentation or examples would be appreciated!

Providing an MPI communicator to Mesh raises a floating point exception

When I run the following code

# mwe.py
import dolfin
import dolfin_adjoint

mesh = dolfin_adjoint.Mesh(dolfin.MPI.comm_world)

I get the following output

$ python mwe.py
[1]    20822 floating point exception  python mwe.py

OS: macOS Catalina

Here is my conda environment

channels:
  - conda-forge
  - defaults
dependencies:
  - appnope=0.1.0=py38h32f6830_1002
  - backcall=0.2.0=pyh9f0ad1d_0
  - backports=1.0=py_2
  - backports.functools_lru_cache=1.6.1=py_0
  - boost-cpp=1.72.0=he5d75e3_3
  - bzip2=1.0.8=haf1e3a3_3
  - c-ares=1.16.1=haf1e3a3_3
  - ca-certificates=2020.6.20=hecda079_0
  - certifi=2020.6.20=py38h5347e94_2
  - cmake=3.18.3=hfc1b5b8_0
  - decorator=4.4.2=py_0
  - eigen=3.3.8=h879752b_0
  - expat=2.2.9=hb1e8313_2
  - fastcache=1.1.0=py38h4d0b108_2
  - fenics=2019.1.0=py38h32f6830_10
  - fenics-dijitso=2019.1.0=py_10
  - fenics-dolfin=2019.1.0=py38h1933a79_10
  - fenics-ffc=2019.1.0=py_10
  - fenics-fiat=2019.1.0=py_10
  - fenics-libdolfin=2019.1.0=ha574790_10
  - fenics-ufl=2019.1.0=py_10
  - gmp=6.2.0=h2e338ed_4
  - gmpy2=2.1.0b1=py38h113ecfb_1
  - hdf5=1.10.6=mpi_mpich_hc67fd14_1010
  - hypre=2.18.2=hc9ba2bc_1
  - icu=67.1=hb1e8313_0
  - ipython=7.19.0=py38h9bb44b7_0
  - ipython_genutils=0.2.0=py_1
  - jedi=0.17.2=py38h32f6830_1
  - krb5=1.17.1=h75d18d8_3
  - libblas=3.9.0=2_openblas
  - libcblas=3.9.0=2_openblas
  - libcurl=7.71.1=h9bf37e3_8
  - libcxx=11.0.0=h439d374_0
  - libedit=3.1.20191231=h0678c8f_2
  - libev=4.33=haf1e3a3_1
  - libffi=3.2.1=hb1e8313_1007
  - libgfortran=4.0.0=h50e675f_13
  - libgfortran4=7.5.0=h50e675f_13
  - libiconv=1.16=haf1e3a3_0
  - liblapack=3.9.0=2_openblas
  - libnghttp2=1.41.0=h7580e61_2
  - libopenblas=0.3.12=openmp_h63d9170_1
  - libssh2=1.9.0=h8a08a2b_5
  - libuv=1.40.0=h22f3db7_0
  - llvm-openmp=11.0.0=h73239a0_1
  - lz4-c=1.9.2=hb1e8313_3
  - metis=5.1.0=hb1e8313_1006
  - mpc=1.1.0=ha57cd0f_1009
  - mpfr=4.0.2=h72d8aaf_1
  - mpi=1.0=mpich
  - mpi4py=3.0.3=py38hc3a995a_2
  - mpich=3.3.2=hc856adb_3
  - mpmath=1.1.0=py_0
  - mumps-include=5.2.1=6
  - mumps-mpi=5.2.1=hcf7f05f_6
  - ncurses=6.2=hb1e8313_2
  - numpy=1.19.2=py38ha98150c_1
  - openssl=1.1.1h=haf1e3a3_0
  - parmetis=4.0.3=hbc1d92b_1005
  - parso=0.7.1=pyh9f0ad1d_0
  - petsc=3.12.4=h6ceeb6d_0
  - petsc4py=3.12.0=py38h833b260_4
  - pexpect=4.8.0=pyh9f0ad1d_2
  - pickleshare=0.7.5=py_1003
  - pip=20.2.4=py_0
  - pkg-config=0.29.2=h31203cd_1008
  - pkgconfig=1.4.0=py38h32f6830_2
  - prompt-toolkit=3.0.8=py_0
  - ptscotch=6.0.8=h8c5ff5d_1
  - ptyprocess=0.6.0=py_1001
  - pybind11=2.5.0=py38h02bb52f_1
  - pygments=2.7.2=py_0
  - python=3.8.6=hcfdab8c_0_cpython
  - python_abi=3.8=1_cp38
  - readline=8.0=h0678c8f_2
  - rhash=1.3.6=haf1e3a3_1001
  - scalapack=2.0.2=hb7119d5_1009
  - scotch=6.0.8=h703c93e_1
  - setuptools=49.6.0=py38h5347e94_2
  - six=1.15.0=pyh9f0ad1d_0
  - slepc=3.12.2=hefb7033_0
  - slepc4py=3.12.0=py38h00bfe04_1
  - sqlite=3.33.0=h960bd1c_1
  - suitesparse=5.6.0=h0e59142_0
  - superlu=5.2.2=h8e61ea7_0
  - superlu_dist=6.2.0=h31c06ef_4
  - sympy=1.6.2=py38h32f6830_1
  - tbb=2019.9=ha1b3eb9_1
  - tk=8.6.10=hb0a8c7a_1
  - traitlets=5.0.5=py_0
  - wcwidth=0.2.5=pyh9f0ad1d_2
  - wheel=0.35.1=pyh9f0ad1d_0
  - xz=5.2.5=haf1e3a3_1
  - zlib=1.2.11=h7795811_1010
  - zstd=1.4.5=h289c70a_2
  - pip:
    - alabaster==0.7.12
    - apipkg==1.5
    - appdirs==1.4.4
    - attrs==20.2.0
    - babel==2.8.0
    - black==20.8b1
    - bump2version==1.0.1
    - cached-property==1.5.2
    - cfgv==3.2.0
    - chardet==3.0.4
    - click==7.1.2
    - coverage==5.3
    - distlib==0.3.1
    - docutils==0.16
    - dolfin-adjoint==2019.1.0
    - execnet==1.7.1
    - filelock==3.0.12
    - flake8==3.8.4
    - h5py==3.0.0
    - hypothesis==5.41.0
    - identify==1.5.6
    - idna==2.10
    - imagesize==1.2.0
    - iniconfig==1.1.1
    - ipython-genutils==0.2.0
    - isort==5.6.4
    - jinja2==2.11.2
    - markupsafe==1.1.1
    - mccabe==0.6.1
    - mypy==0.790
    - mypy-extensions==0.4.3
    - nodeenv==1.5.0
    - packaging==20.4
    - pathspec==0.8.0
    - pluggy==0.13.1
    - pre-commit==2.8.2
    - py==1.9.0
    - pycodestyle==2.6.0
    - pyflakes==2.2.0
    - pyparsing==2.4.7
    - pytest==6.1.2
    - pytest-cov==2.10.1
    - pytest-forked==1.3.0
    - pytest-runner==5.2
    - pytest-sugar==0.9.4
    - pytest-xdist==2.1.0
    - pytz==2020.1
    - pyyaml==5.3.1
    - regex==2020.10.28
    - requests==2.24.0
    - scipy==1.5.3
    - snowballstemmer==2.0.0
    - sortedcontainers==2.2.2
    - sphinx==3.2.1
    - sphinx-rtd-theme==0.5.0
    - sphinxcontrib-applehelp==1.0.2
    - sphinxcontrib-devhelp==1.0.2
    - sphinxcontrib-htmlhelp==1.0.3
    - sphinxcontrib-jsmath==1.0.1
    - sphinxcontrib-qthelp==1.0.3
    - sphinxcontrib-serializinghtml==1.1.4
    - termcolor==1.1.0
    - toml==0.10.2
    - tox==3.20.1
    - typed-ast==1.4.1
    - typing-extensions==3.7.4.3
    - urllib3==1.25.11
    - virtualenv==20.1.0

RuntimeError with ROL solver

With recent versions of pyadjoint and Firedrake, I'm getting error messages like the following when using the ROL optimization solver:

File ~/Documents/Python_Envs/firedrake/lib/python3.9/site-packages/pyadjoint/tape.py:47, in no_annotations.<locals>.wrapper(*args, **kwargs)
     44 @wraps(function)
     45 def wrapper(*args, **kwargs):
     46     with stop_annotating():
---> 47         return function(*args, **kwargs)
File ~/Documents/Python_Envs/firedrake/lib/python3.9/site-packages/pyadjoint/optimization/rol_solver.py:238, in ROLSolver.solve(self)
    236 params = ROL.ParameterList(self.params_dict, "Parameters")
    237 self.solver = ROL.OptimizationSolver(rolproblem, params)
--> 238 self.solver.solve()
    239 return self.problem.reduced_functional.controls.delist(x.dat)
RuntimeError: Unable to cast Python instance of type <class 'NoneType'> to C++ type 'double'

The problem occurs when running this demo from the icepack documentation. That notebook requires you to download a bunch of observational data from NASA which requires an EarthData account, so I can make a more minimal failing example if that would help debug this.

If it helps narrow things down, the error goes away when I roll back to 9dab49a and firedrakeproject/firedrake@68a94f7c.
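
As a starting point for a more minimal failing example, a bare-bones ROLSolver driver might look like the sketch below, assuming the ROL bindings are installed. The forward model and the ROL parameter choices are placeholders of mine, not taken from the icepack notebook.

from firedrake import *
from firedrake_adjoint import *

mesh = UnitSquareMesh(8, 8)
V = FunctionSpace(mesh, "CG", 1)

c = Function(V)  # control, initially zero
u = Function(V)
v = TestFunction(V)
bc = DirichletBC(V, 0, "on_boundary")
F = inner(grad(u), grad(v)) * dx - c * v * dx
solve(F == 0, u, bc)

# drive u towards the constant 1
J = assemble((u - 1) ** 2 * dx)
Jhat = ReducedFunctional(J, Control(c))

params = {
    "Step": {"Type": "Line Search"},
    "Status Test": {"Gradient Tolerance": 1e-7, "Iteration Limit": 20},
}
problem = MinimizationProblem(Jhat)
solver = ROLSolver(problem, params, inner_product="L2")
c_opt = solver.solve()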

C++ expression problem

Hi!
I'm trying to generate white noise by generating a random number for each coordinate (x, y) of my 2D mesh.
To do so, I'm trying to build an 8x8 matrix for each coordinate:

f = Expression((tpl1, tpl2, tpl3, tpl4, tpl5, tpl6, tpl7, tpl8), degree=0)

where, for each i = 1, ..., 8,

tpl = tuple([str(elt) for elt in lst])

with

lst = np.random.normal(0, 10, 8)

But when I do so, the C++ compilation of the expression fails.

How would you define f as a white-noise generator? Is there a C++ function that could do this?

My whole code is:

from __future__ import print_function
from fenics import *
import matplotlib.pyplot as plt
import numpy as np  # needed for np.random below

# Create mesh and define function space
nx = ny = 100
mesh = UnitSquareMesh(nx, ny)
V = FunctionSpace(mesh, 'P', 1)

# Define variational problem
h = TrialFunction(V)
v = TestFunction(V)

lst1 = np.random.normal(0,10,8)
tpl1 = tuple([str(elt) for elt in lst1])
lst2 = np.random.normal(0,10,8)
tpl2 = tuple([str(elt) for elt in lst2])
lst3 = np.random.normal(0,10,8)
tpl3 = tuple([str(elt) for elt in lst3])
lst4 = np.random.normal(0,10,8)
tpl4 = tuple([str(elt) for elt in lst4])
lst5 = np.random.normal(0,10,8)
tpl5 = tuple([str(elt) for elt in lst5])
lst6 = np.random.normal(0,10,8)
tpl6 = tuple([str(elt) for elt in lst6])
lst7 = np.random.normal(0,10,8)
tpl7 = tuple([str(elt) for elt in lst7])
lst8 = np.random.normal(0,10,8)
tpl8 = tuple([str(elt) for elt in lst8])
f = Expression((tpl1,tpl2,tpl3,tpl4,tpl5,tpl6,tpl7,tpl8), degree=0)


a = dot(grad(h), grad(v))*dx
L = f*v*dx


h = Function(V)
solve(a == L, h)

# Plot solution and mesh
h_f = plot(h)
plt.title('Solution numérique h')
plt.xlabel('x')
plt.ylabel('y')
plt.colorbar(h_f)

# Hold plot
plt.show()

and the error message I get is:

Can only integrate scalar expressions. The integrand is a tensor expression with value shape (8, 8) and free indices with labels ().
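
For reference, a common workaround (a suggestion on my part, not something from the issue) is to skip the C++ Expression entirely and write the random values directly into the nodal vector of a Function, which then enters the form as a plain scalar coefficient:

import numpy as np
from fenics import *

mesh = UnitSquareMesh(100, 100)
V = FunctionSpace(mesh, 'P', 1)

# one independent Gaussian sample per degree of freedom
f = Function(V)
f.vector().set_local(np.random.normal(0, 10, f.vector().local_size()))
f.vector().apply("insert")

h, v = TrialFunction(V), TestFunction(V)
a = dot(grad(h), grad(v))*dx
L = f*v*dx

h = Function(V)
solve(a == L, h)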

New solver parameters interface fails with LinearVariationalProblem

The following code fails because the solver parameters are never passed to the adjoint solve:

from firedrake import *
from firedrake_adjoint import *

def main():

    mesh = UnitSquareMesh(100, 100)

    parameters = {
        "mat_type" : "aij",
        "ksp_type" : "preonly",
        "ksp_monitor_true_residual": None,
        "pc_type" : "lu",
        "pc_factor_mat_solver_type" : "mumps"
        }

    P2 = VectorElement("CG", mesh.ufl_cell(), 2)
    P1 = FiniteElement("CG", mesh.ufl_cell(), 1)
    TH = P2*P1
    W = FunctionSpace(mesh, TH)

    U = TrialFunction(W)
    u, p = split(U)
    V = TestFunction(W)
    v, q = split(V)

    a = (inner(grad(u), grad(v)) - div(v)*p - q*div(u))*dx
    bcs = [DirichletBC(W.sub(0), Constant((1, 0)), (4,)),
       DirichletBC(W.sub(0), Constant((0, 0)), (1, 2, 3))]

    U1 = Function(W)
    f = Function(W).project(as_vector([1.0, 1.0, 0.0]), annotate=False)
    L = inner(f, V)*dx(domain=mesh)
    problem = LinearVariationalProblem(a, L, U1, bcs=bcs)
    nullspace = MixedVectorSpaceBasis(W, [W.sub(0), VectorSpaceBasis(constant=True)])
    solver_stokes1 = LinearVariationalSolver(problem, solver_parameters=parameters, nullspace=nullspace)
    solver_stokes1.solve()

    u1, p1 = split(U1)
    J = assemble(u1[0]*dx)
    Jhat = ReducedFunctional(J, Control(f))
    valor = Jhat(f)
    Jhat.derivative()

if __name__ == '__main__':
    main()

Looking at the changes in the code, it seems that the solver parameters need to be passed to LinearVariationalSolver.solve(), but Firedrake does not accept certain parameters in that function. Maybe this could be solved by passing the solver parameters through and adding them to the list of pop_kwargs_keys in GenericSolveBlock. I am not sure what the right implementation is, but I am willing to write it if given some pointers.

If I pass forward_kwargs to LinearVariationalSolver.solve() as in

solver_stokes1.solve(forward_kwargs={"solver_parameters" : parameters})

then mat_type is never passed, and the direct solver cannot solve the system because the matrix has type nest (the default in Firedrake).

Adjoint variables kept around longer than necessary

As a brief reminder of how the maths works, suppose we have the following sequence:
$$u_0 = m$$
$$u_1 = f_0(u_0)$$
$$u_2 = f_1(u_1)$$
$$u_3 = f_2(u_1)$$
$$J = f_3(u_2, u_3)$$
Then we get a block on the tape corresponding to each $f_0,\ldots,f_3$ and the adjoint calculation does the following:
$$u'_2, u'_3 = f'_3(u_2, u_3, J)$$
$$u'_1 = f'_2(u_1, u_3, u'_3)$$
$$u'_1 =u'_1+ f'_1(u_1, u_2, u'_2)$$
$$u'_0 = f'_0(u_0, u_1, u'_1)$$
$$m' = u'_0$$
where primes indicate adjoint operations and variables. Let's observe a few patterns:

  1. If $u$ is an input to $f$ then $f'$ adds to $u'$. Multiple adjoint operations can contribute to the same adjoint variable.
  2. $u'$ is an input to $f'$ if and only if the corresponding primal variable $u$ was an output of the primal operation $f$. Consequently, each adjoint variable is only the input to one adjoint operation.

This means that when we are evaluating the adjoint, and assuming we do not then intend to evaluate the Hessian, we can discard the adjoint values of block outputs as soon as the adjoint block has been evaluated. Because we do need to keep the adjoint values around if we plan to evaluate the Hessian, this would need to be controlled by an option, e.g. "keep_adjoint_variables=True".

This would happen at the end of block.evaluate_adj and would do something like:

if not keep_adjoint_variables:
    for output in outputs:
        output.reset_variables("adjoint")

Note that this should happen even if there are no relevant dependencies, so rather than returning early in that case, we should simply not call the preparation routine:

        if relevant_dependencies:
            prepared = self.prepare_evaluate_adj(inputs, adj_inputs, relevant_dependencies)
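
Combining the two fragments, the tail of Block.evaluate_adj would be a guarded preparation step followed by the cleanup. This is only a sketch: keep_adjoint_variables is the proposed option, not existing pyadjoint API, and the elided loop stands for the usual per-dependency adjoint evaluation.

        if relevant_dependencies:
            prepared = self.prepare_evaluate_adj(inputs, adj_inputs, relevant_dependencies)
            # ... per-dependency adjoint evaluation as before ...

        if not keep_adjoint_variables:
            for output in outputs:
                output.reset_variables("adjoint")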

Visualization not working - AttributeError: module 'tensorflow' has no attribute 'compat'

Hi, I was learning from the dolfin-adjoint documentation, and when I got to the manual's visualisation section (http://www.dolfin-adjoint.org/en/latest/documentation/debugging.html), this error occurred:

Traceback (most recent call last):
File "dolfin-adjoint.py", line 24, in
tape.visualise()
File "/Users/ruru/anaconda3/envs/fenics/lib/python3.7/site-packages/pyadjoint/tape.py", line 342, in visualise
tf.compat.v1.reset_default_graph()
AttributeError: module 'tensorflow' has no attribute 'compat'

Getting the adjoint variable

I apologize for this somewhat basic question. Is it possible to use dolfin-adjoint to get the adjoint variable itself? I understand how to get the derivatives of the functional, but I cannot find any reference to getting the adjoint variable's value.

Thanks for your help!
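
A minimal sketch of one way to inspect the adjoint solutions, assuming dolfin-adjoint 2019.1.x and relying on the internal adj_sol attribute of solve blocks (not a documented public API), so treat it as a starting point rather than an official answer:

from dolfin import *
from dolfin_adjoint import *

mesh = UnitSquareMesh(8, 8)
V = FunctionSpace(mesh, "CG", 1)

m = Function(V)
m.assign(Constant(1.0))

u, v = TrialFunction(V), TestFunction(V)
uh = Function(V)
bc = DirichletBC(V, Constant(0.0), "on_boundary")
solve(inner(grad(u), grad(v))*dx == m*v*dx, uh, bc)

J = assemble(uh*uh*dx)
compute_gradient(J, Control(m))  # runs the adjoint sweep and stores adjoint data on the tape

# collect the adjoint solution of every solve block on the tape
tape = get_working_tape()
adjoint_solutions = [block.adj_sol for block in tape.get_blocks()
                     if getattr(block, "adj_sol", None) is not None]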
