dynamicslab / hydrogym

An RL-Gym for Challenge Problems in Data-Driven Modeling and Control of Fluid Dynamics.

Home Page: https://hydrogym.readthedocs.io

License: MIT License

Languages: Python 8.02%, GLSL 1.51%, Jupyter Notebook 90.47%
Topics: computational-fluid-dynamics, differentiable-physics-engine, hydrodynamics, reinforcement-learning, reinforcement-learning-environments

hydrogym's Introduction

[HydroGym Logo]


About this Package

IMPORTANT NOTE: This package has not yet had an official public release, so consider anything here an early beta. In other words, we're not yet guaranteeing that any of this is working or correct. Use at your own risk.

HydroGym is an open-source library of challenge problems in data-driven modeling and control of fluid dynamics. It is roughly designed as an abstract interface for control of PDEs that is compatible with typical reinforcement learning APIs (in particular Ray/RLlib and OpenAI Gym), along with specific numerical solver implementations for some canonical flow control problems. Currently these "environments" are all implemented using the Firedrake finite element library.

Features

  • Hierarchical: Designed for analysis and controller design from a high-level black-box interface to low-level operator access
    • High-level: hydrogym.env.FlowEnv classes implement the OpenAI gym.Env interface
    • Intermediate: Typical CFD interface with hydrogym.FlowConfig and hydrogym.TransientSolver classes
    • Low-level: Access to linearized operators and sparse scipy or PETSc CSR matrices
  • Modeling and analysis tools: Global stability analysis (via SLEPc) and modal decompositions (via modred)
  • Scalable: Individual environments are parallelized with MPI, and reinforcement learning training scales via a Ray backend.

Installation

By design, the core components of Hydrogym are independent of the underlying solvers in order to avoid custom or complex third-party library installations. This means that the latest release of Hydrogym can be simply installed via PyPI:

pip install hydrogym

BEWARE: The pip package is currently behind the main repository, and we strongly urge users to build HydroGym directly from the source code. Once we've stabilized the package, we will update the pip package in turn.

However, the package assumes that a solver backend is available, so in order to run simulations locally you will need to install the solver backend separately (again, all of the environments are currently implemented with Firedrake). Alternatively (and this is important for large-scale RL training), the core Hydrogym package can (or will soon be able to) launch reinforcement learning training on a Ray cluster without an underlying Firedrake install. For more information and suggested approaches, see the Installation Docs.

Quickstart Guide

Having installed Hydrogym into a virtual environment, experimenting with it is as easy as starting the Python interpreter

python

and then setting up a Hydrogym environment instance

import hydrogym.firedrake as hgym

env = hgym.FlowEnv({"flow": hgym.Cylinder})  # Cylinder wake flow configuration

num_steps = 100  # However many steps you want to simulate
for i in range(num_steps):
    action = 0.0  # Put your control law here
    (lift, drag), reward, done, info = env.step(action)

To test that you can run individual environment instances in parallel with MPI, run the cylinder PD-control example with 4 processors:

cd /path/to/hydrogym/examples/cylinder
mpiexec -np 4 python pd-control.py

For more detail, check out:

  • A quick tour of features in notebooks/overview.ipynb
  • Example codes for various simulation, modeling, and control tasks in examples
  • The ReadTheDocs

Flow configurations

A number of flow configurations are currently implemented, the most prominent of which are:

  • Periodic cylinder wake at Re=100
  • Chaotic pinball at Re=130
  • Open cavity at Re=7500
  • Backwards-facing step at Re=600

with visualizations of the flow configurations available in the docs.

hydrogym's People

Contributors

cl126162, dependabot[bot], jcallaham, ludgerpaehler, samahnert


hydrogym's Issues

Update Docker image

The Docker image seems outdated and I cannot build it successfully. The requirements.txt file used in the Dockerfile to install dependencies, for example, does not exist anymore in the repository.

Cavity validation

Pretty close here - eigenvalues at Re=4000 have been confirmed to be near Sipp & Lebedev (2007). Before calling it done, we should also check the following:

  • Critical Reynolds number is near 4140
  • Flow at 5000 is periodic
  • Flow at 7500 is quasiperiodic
  • Stability analysis at 7500 matches Nek5000 results as well as Sipp et al (2010)

Test actuation for pinball

Should be the same as cylinder, but maybe a little trickier with MIMO?

  • Test inputs/outputs for individual cylinders and all three
  • Check differentiability with respect to each input and all inputs
  • Test feedback control
  • Add FlowEnv wrapper

Controller callback

For the FlowEnv I think it's safe to assume that the controller machinery will be external, but it would be nice to have callback interfaces for at least three common cases (a sketch of one possible interface follows the list):

  • Open-loop (time-varying) control
  • Closed-loop control with linear feedback
  • Kalman filter for linear state estimation
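
As a rough illustration, here's a minimal sketch of what the closed-loop case might look like. The class name and call signature are hypothetical, not part of the current API:

import numpy as np

class LinearFeedbackController:
    """Hypothetical callback for closed-loop linear feedback: u = -K @ y."""

    def __init__(self, K):
        self.K = np.atleast_2d(K)

    def __call__(self, t, y):
        # t is unused here, but an open-loop controller would depend on it
        return -self.K @ np.atleast_1d(y)

# Usage: evaluate the callback each timestep with the current measurements
controller = LinearFeedbackController(K=[1.0, 0.0])
action = controller(t=0.0, y=[0.02, -0.3])  # e.g. (lift, drag) measurements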

Implement thermosyphon

Implementing the chaotic thermosyphon from JC's paper with rotary control of the inner cylinder. Objectives could be either stabilization or (probably more interesting) maximizing heat transfer.

  • Create mesh
  • Add new BoussinesqFlowConfig for problems with heat transfer
  • Solve steady conduction problem with Thermosyphon flow
  • Add semi-implicit scalar solve to the timestepper for heat transfer
  • Validate results against JC's paper
  • Add and test control

Balanced POD

Once we have discrete-time adjoint capabilities (see also #16), we should be able to implement BPOD pretty easily, which would be a nice alternative to adjoint global stability analysis for model reduction. Also related to #13 for Modred with MPI

Differentiable solves broken on release branch

Slipped by me because compute_gradient throws a warning instead of an error, so I'm not totally sure when this broke. Also fails for steady-state solves, so probably rooted in the actuation implementation (could try rolling back to before the Actuator class).

Can reproduce with pytest test_cyl.py -k 'test_steady_grad'

Create cylinder meshes

Create meshes comparable to both Noack et al (2003) and Sipp & Lebedev (2007). The latter has a much larger domain, but may be relatively slow and have higher resolution than is necessary for the purposes of a modeling and control testbed.

Once the meshes are generated, implement a steady-state solver and compare drag coefficients at Re=40 and Re=100 (also compare to Nek5000 and IBPM implementations). The final decision between the two can wait until stability analysis is implemented... possibly it would be better to leave both options in place.

Time-stepping for cylinder

The basic time-stepping functionality can probably be more or less copied from the IPCS demos in FEniCS.

But it would be good to take a couple of extra steps:

  • Validate IPCS solver (Strouhal number, min/max aerodynamic coefficients)
  • Benchmark iterative solvers and preconditioners (though the ones in the tutorial are probably a good place to start)
  • Wrap the physics and simulation in some kind of simple high-level class
  • I/O functionality (checkpointing and restart)

Running Jupyter Notebook from within the VENV produces error

Running "jupyter notebook" to try and run the overview notebook after activating the venv gives me an error:

(firedrake) firedrake@bf0146359d08:/home/hydrogym/notebooks$ jupyter notebook
[I 2022-05-30 22:23:18.342 LabApp] JupyterLab extension loaded from /home/firedrake/firedrake/lib/python3.8/site-packages/jupyterlab
[I 2022-05-30 22:23:18.343 LabApp] JupyterLab application directory is /home/firedrake/firedrake/share/jupyter/lab
[I 22:23:18.362 NotebookApp] Serving notebooks from local directory: /home/hydrogym/notebooks
[I 22:23:18.362 NotebookApp] Jupyter Notebook 6.4.11 is running at:
[I 22:23:18.362 NotebookApp] http://localhost:8888/?token=1d0557ea0bb002ed0f3e6332ad9339742affd4ec77abb3e8
[I 22:23:18.362 NotebookApp] or http://127.0.0.1:8888/?token=1d0557ea0bb002ed0f3e6332ad9339742affd4ec77abb3e8
[I 22:23:18.363 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 22:23:18.378 NotebookApp] No web browser found: could not locate runnable browser.
[C 22:23:18.379 NotebookApp]

To access the notebook, open this file in a browser:
    file:///home/firedrake/.local/share/jupyter/runtime/nbserver-3742-open.html
Or copy and paste one of these URLs:
    http://localhost:8888/?token=1d0557ea0bb002ed0f3e6332ad9339742affd4ec77abb3e8
 or http://127.0.0.1:8888/?token=1d0557ea0bb002ed0f3e6332ad9339742affd4ec77abb3e8

then when trying to access the URL, I am "unable to connect" (not sure if this is just a Mac thing). I think this is to be expected when trying to run a Jupyter notebook on a Docker/virtual machine and then trying to access it locally, since the Docker container isn't exactly running locally (it probably has its own IP somewhere, although I'm really not familiar with this sort of thing).

Not a priority-type issue, but something I ran into as I try to get familiar enough to run an RL algorithm on here. I could try to work on it, but it's more of a "it would be nice to have this" if this were a polished product. For now it might just sidetrack from the main stuff, so I'll try to continue learning about the project another way (this thread may be promising for future reference: https://www.digitalocean.com/community/tutorials/how-to-install-run-connect-to-jupyter-notebook-on-remote-server).

Linear algebra module

When working with projection-based modeling and modal analysis, it would be nice to have some expanded linear algebra functionality. This would ideally look something like a combination of the VectorSpaceHandles in Modred and the LinearOperator in Scipy. So you would probably have three fundamental objects:

  1. A Vector-type class (possibly the same as the implementation of the Modred VectorHandle, currently called Snapshot, or maybe with a Function as the underlying object...). Should support basic vector algebra operations
  2. An Operator class which is composable, ideally just inheriting from the Scipy version, but without assuming that the underlying vectors are actually numpy arrays.
  3. A Basis or Subspace, which is a collection of Vectors (and optionally adjoints) that supports creation of projection and orthogonal projection Operators

With these, you could do things like (see the sketch after this list):

  • Apply an Operator to a Vector and get a new Vector (e.g. timestepping as a sequence of linear solves)
  • Project an Operator onto a Subspace to get a reduced-order approximation
  • Compose an Operator with projection onto the complement of a Subspace, for instance to do balanced POD in the stable subspace of a linearized Navier-Stokes operator
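
For concreteness, here's a minimal numpy-backed sketch of how these three objects might fit together. All names and signatures are hypothetical design sketches, not existing HydroGym code:

import numpy as np

class Vector:
    """Hypothetical state wrapper supporting basic vector algebra."""
    def __init__(self, data):
        self.data = np.asarray(data, dtype=float)
    def __add__(self, other):
        return Vector(self.data + other.data)
    def __mul__(self, scalar):
        return Vector(scalar * self.data)
    def inner(self, other):
        return float(self.data @ other.data)

class Operator:
    """Hypothetical composable linear operator, defined only by its action."""
    def __init__(self, matvec):
        self.matvec = matvec
    def __call__(self, v):
        return self.matvec(v)  # apply Operator to Vector, get a new Vector
    def __matmul__(self, other):
        return Operator(lambda v: self(other(v)))  # composition A @ B

class Subspace:
    """Hypothetical collection of basis Vectors; builds projection Operators."""
    def __init__(self, vectors):
        self.vectors = vectors
    def projector(self):
        # Orthogonal projection, assuming an orthonormal basis
        def matvec(v):
            out = Vector(np.zeros_like(v.data))
            for b in self.vectors:
                out = out + b * b.inner(v)
            return out
        return Operator(matvec)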

Other ideas and thoughts:

  • How would you handle parallel I/O from different sources? See #31 for more discussion
  • Replace explicit dependence on flow in modal analysis with a set of callbacks for the snapshots
  • Hide all references to Snapshot and only use mr.VectorHandle, then hide all references to this to avoid confusion with the new Vector objects

PySINDy integration

Add utilities for easy modeling with SINDy(+c). Should be pretty straightforward, maybe as easy as adding pysindy to the dependencies and a worked example.
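
The worked example might be as simple as the following sketch, with synthetic signals standing in for HydroGym measurements (the data here is made up; only the pysindy calls are real):

import numpy as np
import pysindy as ps

# Toy data standing in for HydroGym measurements: state x(t) and
# control input u(t), sampled at a fixed timestep.
dt = 0.01
t = np.arange(0, 10, dt)
u = np.sin(t)                                     # control input
x = np.column_stack([np.cos(t), np.sin(2 * t)])   # measurements

# SINDy with control (SINDYc): identify x' = f(x, u)
model = ps.SINDy()
model.fit(x, t=dt, u=u)
model.print()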

Discrete-time LTI

Realized that using the DirichletBC as a control matrix doesn't really make sense in continuous-time models, since it's applied at discrete time steps. What you currently get from FlowConfig.linearize_control() is actually something like dt*B, except that there's no dt specified... so in order to use it in a continuous-time model you have to multiply by the timestep you will be using... which is kind of a mess.

But I think you could do an equivalent (maybe cleaner) thing by moving the control to the timestepper. Since applying a zero DirichletBC in initial assembly will set the corresponding rows of the dynamics matrix to identity, and the corresponding rows of the RHS to zero, you could just precompute the discrete-time control vector (which is what we already have) and apply it as x_{k+1} = A*x_k + B*u_k, where A is quasilinear. I think this will work for the foreseeable future, since all planned controls are on Dirichlet BCs. This could actually be a cleaner implementation of the timestepping, too. Then you should just be able to get A as a LinearOperator (see the sketch after the checklist below).

  • Change control from time-varying DirichletBC to forcing vector in timestepping
  • Check this is differentiable
  • Wrap timestepping as LinearOperator
  • Check eigenvectors with eigs in scipy
  • Same for adjoint operator
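
Here's a minimal sketch of the LinearOperator wrapping and the scipy eigs check. The step function is a toy stand-in for the actual assembled timestep:

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

n = 100  # state dimension (toy size)
decay = np.linspace(0.5, 0.99, n)

def step(x):
    """Stand-in for one quasilinear timestep x_{k+1} = A @ x_k."""
    return decay * np.ravel(x)  # replace with the assembled solve

A = LinearOperator((n, n), matvec=step, dtype=float)

# Leading eigenvalues of the discrete-time propagator via ARPACK
vals, vecs = eigs(A, k=5)
print(np.abs(vals))  # |lambda| > 1 would indicate instability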

Global stability analysis

Stability analysis will be an important method of validation, as well as a way to benchmark against optimal control theory approaches. This should be straightforward using the SLEPc solver that's actually built into FEniCS.

One additional thing would be to implement adjoint stability analysis. I'm not sure if it's necessary to use dolfin-adjoint here, or if the Jacobian can just be transposed as a PETSc matrix and passed to the SLEPc solver.

Symmetrize pinball meshes

Pinball meshes are slightly asymmetric since there's no centerline on the downstream side. Just need to add that as a boundary in gmsh.

Also, should finish up the "fine" mesh with labels while I'm at it.

Writing up Documentation as we go along

While not an absolute necessity for the paper submission, it would probably be helpful to start writing up docs as we go along (while we still have a lot of the stuff fresh in our memory).

What am I proposing?

  1. Deploy an initial readthedocs bare-bones documentation
  2. Document new stuff as we go along, without worrying about the form or the finer formulation of the notes

I.e., we would basically end up with a bare-bones documentation with notes inside of it, which we can comb over after the paper submission to produce the actual documentation, and for which we can also adjust the style files etc.

Snapshot handling

Found some slightly confusing behavior when saving and loading functions on meshes from different scripts. Basically, I think that if you load checkpoint files from two different scripts which were run with different numbers of processors, Firedrake considers them to have come from different meshes, even if they were created from the same gmsh file originally.

The behavior is something like this (just a cartoon though):

Script A (serial):

# ... Do some analysis
save_checkpoint('chkA.h5', qA)

Script B (parallel):

qA = load_checkpoint('chkA.h5')
qB = qA.copy()
save_checkpoint('chkB.h5', qB)

Script C (serial):

qA = load_checkpoint('chkA.h5')
qB = load_checkpoint('chkB.h5')

# Now these are incompatible
inner(qA, qB)

One fix is to re-save qA from Script B so that the checkpoint files were created from the same script. But this would be a pain if you were doing some complicated analysis, say comparing projections onto global modes and POD modes with snapshots from two different simulations.

I think the better way to handle it would be to set it up to distinguish between "restart" checkpoints and "snapshot" checkpoints.
So the default behavior would be to use numpy binaries as the intermediate for working with snapshots, and CheckpointFile for restart files. The only catch is working in parallel... the CheckpointFile can be saved without a bottleneck, but converting to numpy arrays (currently in utils.snapshots_to_array) has to be done in serial. Currently (for POD, for instance), I'm using an intermediate to_arrays.py script to do this, but obviously this is pretty confusing and not ideal.

I think a better way would be to have the SnapshotCallback call out a subprocess from rank zero that will do the conversion as a postprocessing step, and then to retool some of the other analysis features so that saving and loading to numpy binaries is more easily supported.
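
A minimal sketch of that idea, assuming mpi4py is available (the script name just reuses the existing to_arrays.py as a placeholder):

import subprocess
from mpi4py import MPI

def convert_snapshots_async(script="to_arrays.py"):
    """Launch the serial numpy-conversion postprocessing from rank zero only."""
    if MPI.COMM_WORLD.rank == 0:
        # Non-blocking: the simulation keeps running while this converts
        subprocess.Popen(["python", script])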

It would still be ideal to be able to gather the PETSc.Vec to rank zero, but still no luck on that front...

[Baseline] Linear optimal control for cylinder

Goal is to have a working LQG (LQR + Kalman filter) for the cylinder wake as a baseline controller. So far I've gotten the following to work (see examples/cylinder/control and examples/cylinder/notebooks/controller-design.ipynb):

  • Convert linearized timestepper to a discrete-time LTI system with the matrix-free LinearOperator from scipy
  • Extract direct and adjoint global eigenmodes from SLEPc as numpy arrays
  • Derive a reduced-order model with Petrov-Galerkin projection onto the global modes
  • Controller design with the Python Control Systems Library (a minimal LQR sketch follows the checklist below)
  • Tested both the Kalman filter and the full LQG controller on the ROM

So far so good, except that it blows up when the controller is actually applied to the DNS. Here's my plan to isolate the breakdown:

  • Test Kalman filter and LQR on the ROM as a sanity check
  • Redo everything with the smaller mesh so things can be tested more efficiently
  • Kalman filter on full LTI system
  • LQR on full LTI system
  • Kalman filter on full linearized timestepper (not LinearOperator LTI)
  • LQR on full linearized timestepper
  • Kalman filter on nonlinear timestepper
  • LQR on nonlinear timestepper
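
For reference, the LQR piece of the controller design with the Python Control Systems Library looks roughly like this; the ROM matrices below are toy stand-ins for the Petrov-Galerkin model:

import numpy as np
import control

# Toy 2-state ROM standing in for the Petrov-Galerkin projection
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
B = np.array([[0.0], [1.0]])

Q = np.eye(2)  # state penalty
R = np.eye(1)  # control penalty

# Gain K, Riccati solution S, and closed-loop eigenvalues E
K, S, E = control.lqr(A, B, Q, R)
print(K, E)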

Apparent memory leak in PPO training

When running a very simple (serial) PPO training with the ppo_train.py script the training runs successfully for 3 iterations and then crashes (will post the error message later).

I'm not sure if this is an issue on the Firedrake or Ray side - I've run into memory-leak-type behavior with Firedrake before, but there are also a couple of documented instances of this kind of thing with Ray.

Debugging ideas:

  • Rebuild image with latest versions of OpenAI Gym (0.26 currently) and Ray (2.0.0)... may also require resolving #54
  • Try garbage collection during env.reset() (a minimal sketch follows this list)
  • Use ray.rllib.algorithms.callbacks.MemoryTrackingCallbacks to track memory in Tensorboard
  • Compare memory usage of the SpinningUp PPO implementation against RLlib's to see whether the problem is in our environment
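
A minimal sketch of the garbage-collection idea, written as a wrapper rather than a change to FlowEnv itself (the wrapper class is hypothetical):

import gc

class GCOnResetWrapper:
    """Hypothetical env wrapper: force a GC pass on every reset to test
    whether leaked solver objects are behind the memory growth."""

    def __init__(self, env):
        self.env = env

    def reset(self, **kwargs):
        gc.collect()
        return self.env.reset(**kwargs)

    def step(self, action):
        return self.env.step(action)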

Modred compatibility

  • Implement SnapshotCallback for saving Checkpoints (also see this)
  • Implement a modred.VectorHandle for loading the checkpoints into modred as PETSc.Vec objects
  • Inner product based on mass matrix
  • Test for POD on cylinder
  • Test in parallel

Cavity and step actuation

I think the most straightforward thing here for the cavity would be to add a Dirichlet BC on the leading-edge wall, either with pressure or velocity, then integrate some quantity (stress or pressure?) on the trailing-edge wall. The step should be similar on the upstream side, but maybe multiple downstream sensors?

  • Check literature for appropriate input/output implementations and objective function
  • Add leading-edge wall domain (and downstream measurement region) to meshes
  • Dirichlet BCs for control (test with harmonic forcing)
  • Implement downstream measurement functions
  • Check construction of discrete-time LTI dynamics, control, and measurement operators
  • Wrap both flows as FlowEnv classes

Update test suite

There is a basic test suite that checks things like steady solve for the cylinder and time-stepping, but it would be good to fill that out now that there are some more capabilities (LTI construction, stability analysis in complex mode, etc)

Little experience feedback after installation

Hi,

First thank you for this amazing work.

I tried to install hydrogym on my Dell Ubuntu 20.04 (Intel i9) laptop, and I encountered several issues:

  • First, I did not understand the following implementation detail on the readthedocs.io page:

In the technical approach we take to our distributed backend, we have the default assumption that the virtual environment has no access to a native Firedrake installation, the default simulation engine powering the reinforcement learning (RL) environments, and instead the distributed backend spawns environment instances with every RL-instance.

It's not very clear how to run this instruction: config.update("hydrogym_local_development", True). Where does the config object come from?

In git clone --recursively https://github.com/dynamicslab/hydrogym.git, the flag --recursively did not work and I had to replace it with --recursive.

  • Second, about Firedrake there are some installations difficulties.

Just importing hydrogym and running the quickstart code did not work. I had to manually install Firedrake and then install hydrogym inside Firedrake's venv.

I first installed Firedrake with the third-party revision firedrake @ 2209aae provided by the repository. The installation failed the Firedrake test suite (see https://www.firedrakeproject.org/download.html).
Consequently I installed the latest release from the official Firedrake repository, and the test suite passed. However, it failed again, this time because of a minor ufl checkpoint-loading bug (see firedrakeproject/firedrake#2645), and by installing a previous stable version of ufl inside the venv I was able to import hydrogym and run the overview notebook (with some code updates for newer versions).

I could provide updated introductory .py scripts for hydrogym if necessary :).

Thanks for your consideration,

Kind regards,

Gym API Compliance

The Gym API has just released its latest specification, which, interestingly for us, contains breaking changes. They commit to maintaining all features from this point onward, but it would make sense to check that we do not rely on any of the features affected by the breaking changes.

Here are the release notes:

https://github.com/openai/gym/releases/tag/0.26.0

The step() termination/truncation change in particular probably affects us directly.
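
For example, a minimal compatibility shim for the new step() signature might look like this (assuming a Gym >= 0.26 environment):

def compat_step(env, action):
    """Collapse Gym >= 0.26's separate terminated/truncated flags back into
    the single done flag that our current training code expects."""
    obs, reward, terminated, truncated, info = env.step(action)
    return obs, reward, terminated or truncated, info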

Stabilize steady solver

The Newton solver diverges with either the coarse or fine mesh above Re~80.

  1. What is the default linear solver in SNES? Direct or iterative? If iterative, we may need to add preconditioning (or at least try switching to MUMPS; see the options sketch after this list)
  2. Is it a mesh resolution issue? Could add an even higher-resolution mesh patterned after the sipp-lebedev mesh for Cylinder
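
For reference, switching the inner linear solve to a direct MUMPS factorization is typically done with PETSc options along these lines (a sketch; the exact option names can vary with the PETSc version):

# Hypothetical options dict for Firedrake's solve()/NonlinearVariationalSolver
solver_parameters = {
    "snes_monitor": None,                   # print Newton residuals
    "ksp_type": "preonly",                  # skip Krylov iteration...
    "pc_type": "lu",                        # ...use a direct factorization
    "pc_factor_mat_solver_type": "mumps",   # via MUMPS
}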

Training signal from environments in RLlib

Training a basic PPO agent using RLlib with the script here, this is what I'm getting for the "episode reward mean". Somehow it doesn't really seem to be learning anything, although I haven't actually run the learned model forward yet.

[Plot: episode reward mean over training iterations]

Aerodynamic coefficients

Figure out how to compute lift/drag in FEniCS... should just be some kind of boundary integral with ds
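
In UFL (shared by FEniCS and Firedrake) that boundary integral would look roughly like the following sketch; the boundary marker CYLINDER and the nondimensionalization are assumptions here:

from firedrake import FacetNormal, Identity, assemble, dot, ds, grad, sym

def aero_forces(u, p, mesh, Re, CYLINDER):
    """Integrate the traction over the body surface (sign convention
    depends on the orientation of the facet normal)."""
    sigma = 2 / Re * sym(grad(u)) - p * Identity(2)  # nondimensional Cauchy stress
    n = FacetNormal(mesh)
    traction = dot(sigma, n)
    drag = assemble(traction[0] * ds(CYLINDER))
    lift = assemble(traction[1] * ds(CYLINDER))
    return lift, drag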

Rotating cylinder control

Add cylinder rotation as a time-varying boundary condition. Also implement this as closed-loop control based on lift/drag measurements (first do #3 for aerodynamic coefficients).

Freestream boundary conditions

The vorticity doesn't currently vanish on the transverse boundaries because the DirichletBC forces the solution to match the freestream. You can eliminate the freestream conditions in the transient solver, but then the steady-state solve diverges. Some places impose a zero shear stress condition instead, i.e. $\partial u / \partial y = 0$ and $v = 0$ at the top and bottom.

The task here is to come up with a consistent set of boundary conditions that will work for the transient and steady-state solvers and not introduce unphysical vorticity at the boundaries.

Could also do a non-reflecting boundary condition on the outlet... see Bergmann, Cordier, Brancher (2005)

Multistep time integration

Add support for multistep schemes for explicit terms (e.g. second- and third-order Adams-Bashforth). I think this would mainly just require adding memory terms for the previous solutions.
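
A toy sketch of the second-order case, just to pin down the memory term (AB2 coefficients 3/2 and -1/2):

def ab2_step(u, f_curr, f_prev, dt):
    """One explicit Adams-Bashforth-2 step:
    u_{n+1} = u_n + dt * (3/2 * f_n - 1/2 * f_{n-1})."""
    return u + dt * (1.5 * f_curr - 0.5 * f_prev)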

First release planning

What do we think would be the key features to have in place before releasing to the public/writing an initial paper?

Some things that have already been brought up:

  • Fully validated cylinder, pinball, cavity, and backwards-facing step configurations.
  • Baselines from classical control (LQR and PID, for instance)
  • HPC workflow for RL with basic built-in optimizers (PPO)
  • Test suite
  • Documentation and examples

Implement backwards-facing step

So far all I have is the mesh for this, but it needs a little work to get up and running:

  • Parabolic inlet profile
  • Direct/adjoint stability analysis
  • Random upstream perturbations in DNS (Poiseuille eigenfunctions? in boundary layer?)
  • Validate against Boujo, Gallaire papers

Automatic differentiation

This will be an alternative to black-box training schemes like RL. There are (at least) three cases we should support:

  1. Parameter optimization within Firedrake (i.e. standard optimal control)
  2. hydrogym.Flow as a torch.nn.Module so that parameters can be optimized within PyTorch
  3. Embedding a torch.nn.Module as an Expression (or similar) within Firedrake

The last two cases are sort of opposites. Basically, you should be able to either put the CFD inside an ML model and differentiate with PyTorch, or have an ML model within the CFD code and differentiate with pyadjoint. This will probably be a matter of figuring out how to pass information back and forth between the gradient tapes. Compatibility with other libraries could be nice, but this would be a good place to start.
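
Case 2 would probably hinge on a custom torch.autograd.Function that hands the forward solve and the adjoint (vector-Jacobian product) off to the CFD side. Here's a self-contained toy sketch where a tanh map stands in for the Firedrake step and its pyadjoint VJP:

import numpy as np
import torch

def solver_step(x):
    """Stand-in for one Firedrake timestep (hypothetical)."""
    return np.tanh(x)

def solver_vjp(x, g):
    """Stand-in for the pyadjoint vector-Jacobian product of that step."""
    return g * (1.0 - np.tanh(x) ** 2)

class FlowStep(torch.autograd.Function):
    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return torch.as_tensor(solver_step(u.detach().numpy()), dtype=u.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        vjp = solver_vjp(u.detach().numpy(), grad_out.detach().numpy())
        return torch.as_tensor(vjp, dtype=u.dtype)

u = torch.randn(4, requires_grad=True)
FlowStep.apply(u).sum().backward()
print(u.grad)  # gradients flowed through the "CFD" step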

Goals:

  • Implement automatic differentiation with pyadjoint
  • Profile memory usage: is checkpointing necessary?
  • #14
  • #15

Fluctuation KE objective for cavity & step flows

@ludgerpaehler @SamAhnert these flows are almost good to go but I'm a little stuck trying to figure out the best way to evaluate the objective function for the step and cavity. What you're really trying to minimize (at least in the literature on these flows) is the fluctuation KE, but to calculate that you need a base state to subtract off. I can think of three ways to approach this:

  1. Precompute the steady state when the flow is initialized (this would add some fairly significant overhead)
  2. Save a steady solution in LFS for each mesh resolution that gets loaded on initialization
  3. Use the square of the fluctuating measurement as a proxy for KE, which is actually done sometimes in the literature (then we'd just have to store the measurement associated with the steady state).

I don't really like any of these to be honest, especially because the base flow will be both mesh- and Reynolds-dependent.
But I'm leaning towards option 3 (sketched below). Any thoughts?

For reference, here's the paper I'm looking at for the cavity: https://hal.archives-ouvertes.fr/hal-01021129
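
If we do go with option 3, the reward contribution could be as simple as this sketch (the function name is hypothetical; y_steady is the stored measurement associated with the steady state):

def fluctuation_penalty(y, y_steady):
    """Squared measurement fluctuation as a cheap proxy for fluctuation KE."""
    return (y - y_steady) ** 2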

Major refactor for initial release

The current package structure was not really designed for distribution or release. We would like to be able to do a pip install hydrogym that does not depend on complicated installation configurations (PETSc, Firedrake, PyTorch, TF, etc.). The target for this would be similar to OpenAI Gym, where the core dependencies are light but an API is provided for MuJoCo, etc. Ideally the "core" modules would be enough to specify an interface for time-dependent PDE control that is compatible with the distributed training framework with Ray, but would still be solver- and discretization-agnostic.

The overall package will probably end up looking something like this:

hydrogym/
├── core.py
├── distributed/
│   └── ...
└── firedrake/
    ├── flow.py
    ├── ts.py
    ├── envs/
    │   ├── cavity.py
    │   ├── cylinder.py
    │   ├── pinball.py
    │   └── step.py
    └── utils/
        ├── io.py
        ├── linalg.py
        └── utils.py
docker/
└── Dockerfile
docs/
└── ...
test/
└── ...
examples/
└── ...
setup.py
So possibly the whole thing could be installed with pip install hydrogym[firedrake]? But I assume that would likely only be done in a Docker container... @ludgerpaehler how far did you get with this on #53? Also, I know we talked about this but I lost my notes... what should we be doing with Docker as part of this? Should the new package be able to spin off Docker containers (using docker-py as part of the FlowEnv), or will the containers be handled by Ray? Either way, it seems like we should probably maintain a hydrogym-firedrake image, right?

Hardcoded analytic torque value in `test_cyl.py`

@SamAhnert one of the new tests for the torque-based actuation has a value in it that looks like something calculated based on a specific value of TAU = 0.0556. Here's the test:

import time

import numpy as np

import hydrogym.firedrake as hgym


def test_fixed_torque():
    print("")
    print("Fixed Torque Convergence")
    time_start = time.time()
    flow = hgym.Cylinder(mesh="coarse", control_method="indirect")
    dt = 1e-3
    solver = hgym.IPCS(flow, dt=dt)
    flow.actuators[0].implicit = True

    # Apply steady torque of 35.971223 Nm; should converge to ~2 rad/sec with k_damp = 1/TAU
    tf = 1e-2  # sec
    torque = 35.971223  # Nm <--- HERE'S THE VALUE

    # Run sim
    num_steps = int(tf / dt)
    for i in range(num_steps):
        flow = solver.step(i, control=torque)

        print(flow.control_state)

    assert np.isclose(flow.control_state, 2.0, atol=1e-3)

    print("finished @" + str(time.time() - time_start))

Could you make this calculation transparent so that we can change Cylinder.TAU and still have it calculate the right value? This snippet is from the release branch, but you could just commit the change to master and I'll merge it from there.

Testcases Completeness

Hey,

Are we now sort of close to test-case completeness, @jcallaham? I have a pretty clear view of the Ray layer and will add all of it in the coming week. Should be able to run a small Ray cluster across a few workstations as a proof-of-concept.

  • Ludger

Validate backwards-facing step

Compare to results from Boujo & Gallaire (2015)

  • Secondary upper recirculation zone appears near Re=272 (x_u ~ 8.2)
  • Separation and reattachment points at Re=600: (x_lr, x_us, x_ur) = (11.82, 9.34, 20.59)
  • 3D stability analysis with periodic spanwise direction: (Re_c, beta_c) = (715, 0.88)

Currently the separation and reattachment points are significantly off - for instance, x_lr = 4.12 and the upper recirculation zone appears above Re=500. Need to check definitions of Reynolds number, inlet velocity, etc.
