sjvrijn / mf2

Collection of Multi-Fidelity benchmark functions

Home Page: https://mf2.readthedocs.io/

License: GNU General Public License v3.0

Languages: Python 88.36%, TeX 11.64%
Topics: multi-fidelity, benchmark-functions, benchmark-suite, benchmarking-suite, python

mf2's Introduction

MF2: Multi-Fidelity-Functions

(Badges: PyPI version, Conda, supported Python versions, License: GPL v3, DOI, JOSS status, tests, coverage, docs, Codacy, CII Best Practices, Gitter, Project Status: Active)

Introduction

The mf2 package provides consistent, efficient, and tested Python implementations of a variety of multi-fidelity benchmark functions. The goal is to simplify life for numerical optimization researchers by saving the time otherwise spent reimplementing and debugging the same common functions, and by enabling direct comparisons with other work that uses the same definitions, improving reproducibility in general.

A multi-fidelity function usually represents an objective which should be optimized. The term 'multi-fidelity' refers to the fact that multiple versions of the objective function exist, which differ in how accurately they describe the real objective. A typical real-world example would be the aerodynamic efficiency of an airfoil, e.g., its drag value for a given lift value. The different fidelity levels correspond to the accuracy of the evaluation method used to estimate the efficiency. Lower-fidelity versions of the objective function are less accurate but simpler approximations of the objective, such as computational fluid dynamics simulations on rather coarse meshes, whereas higher fidelity levels refer to more accurate but also much more demanding evaluations such as prototype tests in wind tunnels. The hope of multi-fidelity optimization approaches is that many of the less accurate but cheap low-fidelity evaluations can be used to achieve improved results on the realistic high-fidelity version of the objective, where only very few evaluations can be performed.

The only dependency of the mf2 package is the numpy package.

Documentation is available at mf2.readthedocs.io

Installation

The recommended way to install mf2 in your (virtual) environment is with Python's pip:

pip install mf2

or alternatively using conda:

conda install -c conda-forge mf2

For the latest version, you can install directly from source:

pip install https://github.com/sjvrijn/mf2/archive/main.zip

To work on your own version locally, it is best to first clone the repository and then create an editable install that includes the dev requirements:

git clone https://github.com/sjvrijn/mf2.git
cd mf2
pip install -e ".[dev]"
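
With the dev requirements installed, the test suite can then be run from the repository root. The exact invocation is an assumption here, but since the project uses pytest (see the issues below), something along these lines should work:

python -m pytest tests/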

Example Usage

import mf2
import numpy as np

# set numpy random seed for reproducibility
np.random.seed(42)
# generate 5 random samples in 2D as matrix
X = np.random.random((5, 2))

# print high fidelity function values
print(mf2.branin.high(X))
# Out: array([36.78994906 34.3332972  50.48149005 43.0569396  35.5268224 ])

# print low fidelity function values
print(mf2.branin.low(X))
# Out: array([-5.8762639  -6.66852889  3.84944507 -1.56314141 -6.23242223])
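
The adjustable functions mentioned further down this page are created through factory calls such as mf2.adjustable.branin(0). As a hedged sketch (the valid parameter range and the exact interface of the returned object are assumptions, not documented here), usage looks roughly like this:

import mf2
import numpy as np

np.random.seed(42)
X = np.random.random((5, 2))

# factory call with adjustment parameter 0; the returned object is assumed to
# expose the same .high/.low interface as the fixed functions above
adjustable_branin = mf2.adjustable.branin(0)
print(adjustable_branin.name)     # 'Adjustable Branin 0'
print(adjustable_branin.high(X))  # output values omitted here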

For more usage examples, please refer to the full documentation on readthedocs.

Contributing

Contributions to this project such as bug reports or benchmark function suggestions are more than welcome! Please refer to CONTRIBUTING.md for more details.

Contact

The Gitter channel is the preferred way to get in touch for any other questions, comments or discussions about this package.

Citation

Was this package useful to you? Great! If this leads to a publication, we'd appreciate it if you would cite our JOSS paper:

@article{vanRijn2020,
  doi = {10.21105/joss.02049},
  url = {https://doi.org/10.21105/joss.02049},
  year = {2020},
  publisher = {The Open Journal},
  volume = {5},
  number = {52},
  pages = {2049},
  author = {Sander van Rijn and Sebastian Schmitt},
  title = {MF2: A Collection of Multi-Fidelity Benchmark Functions in Python},
  journal = {Journal of Open Source Software}
}

mf2's People

Contributors

arfon, sjvrijn, sourcery-ai-bot, zbeekman


mf2's Issues

JOSS Review. Repo Changes.

According to my review and @sjvrijn's response.

The following need work:

  • The pip installation doesn't automatically install the required dependencies (tested on a clean pyenv).
  • Also, when trying to run python3 tests/create_regression_data.py, I get ImportError: attempted relative import with no known parent package.
  • I see there's a fair amount of duplicated code, particularly in artificialMultifidelity.py and borehole.py (from Codacy).
  • Scalability plot for implementations.
  • (If possible) I think if the implementations are fairly well known in the field, a simple comparison could help convince people to use your package. It should be fairly easy if you have access to a Matlab license.

Originally posted by @torressa in openjournals/joss-reviews#2049 (comment)
and
Originally posted by @torressa in openjournals/joss-reviews#2049 (comment)

[Enhancement] Update installation to use pyproject.toml (PEP 660)

The issue
The modern way to make a package installable is by specifying a pyproject.toml file instead of setup.py. For future-proofing, this should be implemented for this package too.

Describe the solution you'd like
Replace setup.py with a pyproject.toml specifying all the same information

Additional context
For development, mf2 currently recommends using an editable install (pip install -e .). At the moment, setuptools does not yet support editable installs when using pyproject.toml, but this is a work in progress: see PR 3488 of setuptools: https://github.com/pypa/setuptools. To prevent potential issues, I would wait until setuptools is updated with this support.

[Suggestion] Artificial 6-fidelity function by Branke et al.

In which paper is this function introduced?
"Efficient Use of Partially Converged Simulations in Evolutionary Optimization", J. Branke, M. Asafuddoula, K. S. Bhattacharjee and T. Ray, doi: 10.1109/TEVC.2016.2569018.

What function(s) should be added from this paper
Artificial multi-fidelity function

What are characteristics of these functions?
A 1d 6-fidelity function with deliberately misleading optima in the lower fidelities

Additional information
Public PDF available through Semantic Scholar.

Reference [39]: multi-fidelity benchmark Matlab code.

[Feature request] 'upgrade' factory functions for adjustable benchmark functions to classes

Is your feature request related to a problem? Please describe.
The current factory functions do not support type-checking for 'AdjustableMultiFidelityFunction' objects when programmatically varying the adjustable parameter. Also, to access attributes such as name, the factory function currently has to be called to generate an instance, rather than simply accessing the attribute on the imported object as is possible for the non-adjustable functions.

>>> import mf2
>>> mf2.branin.name
... 'Branin'
>>> mf2.adjustable.branin.name
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-4-2ac5400b07a5> in <module>
----> 1 mf2.adjustable.branin.name

AttributeError: 'function' object has no attribute 'name'
>>> mf2.adjustable.branin(0).name
... 'Adjustable Branin 0'

Describe the solution you'd like
Each factory function should become a class in its own right that still behaves as a factory, either through classmethods or by implementing __call__.
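
As a rough, hedged sketch of this proposal (names and signatures below are illustrative, not the actual mf2 API), such a class could make attribute access available on the imported object while __call__ preserves the factory behaviour:

class AdjustableMultiFidelityFunction:
    """Illustrative sketch only; not the real mf2 implementation."""

    def __init__(self, name, factory):
        self.name = name          # accessible without creating an instance first
        self._factory = factory   # builds the fixed-parameter multi-fidelity function

    def __call__(self, a):
        # behaves like the old factory function: returns the concrete
        # multi-fidelity function for adjustment parameter `a`
        return self._factory(a)

With such a class, mf2.adjustable.branin.name would work directly, mf2.adjustable.branin(0) would keep its current meaning, and isinstance checks against AdjustableMultiFidelityFunction would become possible.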

[Feature request] Add warning if `x_opt` is not within bounds

There is currently no warning given if an x_opt is specified that lies outside of the given bounds. Such a warning would serve as a sanity check to prevent typos, etc.

This hints at a further warning that should be given if the upper bound is not higher than the lower bound.
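
A minimal sketch of what such a check could look like, assuming attribute names like x_opt, l_bound and u_bound (these names follow the terminology on this page and may not match the actual code):

import warnings
import numpy as np

def warn_on_inconsistent_bounds(name, x_opt, l_bound, u_bound):
    l_bound, u_bound = np.asarray(l_bound), np.asarray(u_bound)
    # further warning: the upper bound should be higher than the lower bound
    if np.any(u_bound <= l_bound):
        warnings.warn(f"{name}: upper bound is not higher than lower bound")
    # main check: x_opt must lie within the given bounds
    if x_opt is not None:
        x_opt = np.asarray(x_opt)
        if not np.all((l_bound <= x_opt) & (x_opt <= u_bound)):
            warnings.warn(f"{name}: x_opt lies outside the given bounds")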

[BUG] minus sign omitted in six_hump_camelback

Describe the bug
The problem is in this line:

term3 = 4*x2**2 + 4*x2**4

Compared to e.g. http://www.sfu.ca/~ssurjano/camel6.html, this line should be: term3 = -4*x2**2 + 4*x2**4
This results in the wrong f(x) value being calculated at e.g. the global optima (0.0898, -0.7126) and (-0.0898, 0.7126).

To Reproduce

In [1]: import mf2
        sixhump_optima = [
            [ 0.0898, -0.7126],
            [-0.0898,  0.7126],
        ]
        print(f'{mf2.six_hump_camelback.high(sixhump_optima)=}')
Out[1]: mf2.six_hump_camelback.high(sixhump_optima)=array([3.0307616570719187, 3.0307616570719187])

Expected behavior

In [2]: import mf2
        sixhump_optima = [
            [ 0.0898, -0.7126],
            [-0.0898,  0.7126],
        ]
        print(f'{mf2.six_hump_camelback.high(sixhump_optima)=}')
Out[2]: mf2.six_hump_camelback.high(sixhump_optima)=array([-1.0316284229280819, -1.0316284229280819])
                                                           ^^^^^^^^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^^^^

Version information:

  • MF2 version: 2021.2.0
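
For reference, the corrected term can be checked with a standalone re-implementation (independent of mf2, following the formula on the SFU page linked above):

import numpy as np

def six_hump_camelback(X):
    X = np.atleast_2d(X)
    x1, x2 = X[:, 0], X[:, 1]
    term1 = (4 - 2.1*x1**2 + x1**4/3) * x1**2
    term2 = x1 * x2
    term3 = -4*x2**2 + 4*x2**4   # corrected sign on the first part of the term
    return term1 + term2 + term3

print(six_hump_camelback([[0.0898, -0.7126], [-0.0898, 0.7126]]))
# approximately [-1.0316 -1.0316], matching the expected behaviour above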

[Refactor] Simplify explicit dimensionality tests in property_test

Currently, a different test function exists for each dimensionality, requiring explicit maintenance if any new functions of different dimensionalities are added.

For example:

@given(ndim_array(n=6))
@pytest.mark.parametrize("function", [
    mf2.hartmann6,
])
def test_6d_functions(function, x):
    _test_single_function(function, x)


@given(ndim_array(n=8))
@pytest.mark.parametrize("function", [
    mf2.borehole,
])
def test_8d_functions(function, x):
    _test_single_function(function, x)

This does the same thing twice, except for the parameter n to the hypothesis strategy ndim_array, which can be obtained from the function in question.

How to incorporate this such that the hypothesis strategy can be used within the pytest parametrization is currently the open question.
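
One possible direction, sketched below under the assumption that each function exposes an ndim attribute, is to draw the sample inside the test body via hypothesis' data() strategy instead of fixing n in the decorator (ndim_array and _test_single_function are the existing helpers from the snippet above):

from hypothesis import given, strategies as st
import pytest
import mf2

@given(data=st.data())
@pytest.mark.parametrize("function", [
    mf2.hartmann6,
    mf2.borehole,
])
def test_nd_functions(function, data):
    # the dimensionality comes from the function itself (assumed `ndim` attribute),
    # so no per-dimensionality test functions are needed
    x = data.draw(ndim_array(n=function.ndim))
    _test_single_function(function, x)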

[Feature request] Add easy inversion option

Currently, most functions are minimization problems, while e.g. Currin is a maximization problem. It would be helpful to have them all be of the same type to begin with, but changing them would create confusion with respect to the original definitions.

The next best thing would be an invert() option that turns a maximization problem into a minimization problem and vice versa.
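
As a hedged sketch of what such an option could amount to (a simple wrapper, not an existing mf2 feature; the attribute name mf2.currin in the usage comment is an assumption):

class InvertedFunction:
    """Wrap a multi-fidelity function and negate its fidelity levels."""

    def __init__(self, func):
        self._func = func
        self.name = f"Inverted {func.name}"

    def high(self, X):
        return -self._func.high(X)

    def low(self, X):
        return -self._func.low(X)

# usage sketch: treat the (maximizable) Currin function as a minimization problem
# currin_min = InvertedFunction(mf2.currin)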

[Feature request] Add xopt for adjustable functions (if possible)

Currently, the non-adjustable functions have had an x_opt attribute added, but the adjustable functions have not.

If it is possible to (easily) determine the x_opt for an adjustable function, it should be added.

Current adjustable functions:

  • Branin
  • Paciorek
  • Hartmann
  • Trid

[Docs] Add landscape images for 2d functions in documentation

The documentation per function is currently rather bare. Including 2D heatmap plots and/or 3D landscape plots of the functions would make it easier for users to inspect them for landscape properties they may be looking for.

The plots should be made using a single 'default' function that can simply be given a MultiFidelityFunction object.
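
A hedged sketch of such a 'default' plotting helper, assuming matplotlib is available and that the given function exposes name, l_bound and u_bound attributes (these attribute names follow the terminology used elsewhere on this page and may differ from the real code):

import numpy as np
import matplotlib.pyplot as plt

def plot_2d_landscape(func, steps=101):
    """Plot high- and low-fidelity heatmaps of a 2D multi-fidelity function."""
    x = np.linspace(func.l_bound[0], func.u_bound[0], steps)
    y = np.linspace(func.l_bound[1], func.u_bound[1], steps)
    xx, yy = np.meshgrid(x, y)
    X = np.stack([xx.ravel(), yy.ravel()], axis=1)

    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    for ax, fidelity in zip(axes, ('high', 'low')):
        zz = getattr(func, fidelity)(X).reshape(xx.shape)
        im = ax.pcolormesh(xx, yy, zz, shading='auto')
        ax.set_title(f'{func.name} ({fidelity})')
        fig.colorbar(im, ax=ax)
    return fig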

[Docs] Update docs for v2022.6

The current documentation needs an update covering the new AdjustableMultiFidelityFunction class, and any old references to numbered adjustable parameters (such as a2 for Paciorek) should be revised.
