basnijholt / adaptive

This project forked from python-adaptive/adaptive

📈 Tools for adaptive and parallel sampling of mathematical functions

Home Page: http://adaptive.readthedocs.io/

License: BSD 3-Clause "New" or "Revised" License

Python 85.81% Shell 0.10% Jupyter Notebook 13.96% Dockerfile 0.14%

adaptive's Introduction

Bas Nijholt 👋

  • 👷🏻‍♂️ Currently at IonQ, doing my bit to build a quantum computer; before that I was at Microsoft Quantum.
  • 🌟 A deep dive into computational topological quantum mechanics earned me my PhD.
  • 🎨 I've crafted a few libraries for Home Assistant, making home automation a bit more fun.
  • ⚒️ Made other tools to speed up and massively parallelize numerical simulations.
  • 🏅 Very passionate about open-source, software quality, user experience, and smooth performance.
  • 🐍 Python is my go-to language in most of my projects.
  • Some of my favorite creations:
    • 📈 python-adaptive/adaptive: Parallel active learning of mathematical functions? Check! (See the minimal sketch below this list.)
    • 🧬 unidep: Unifying pip and conda requirements, with a single command to set up a full dev environment.
    • 💡 adaptive-lighting: A custom component for Home Assistant to keep your lighting in sync with the sun.
    • 📝 markdown-code-runner: Run (hidden) code blocks right within your Markdown files - keep simple README.md files in sync!
    • 🕒 rsync-time-machine.py: Time Machine-style backups with rsync for the minimalists.
    • 🏠 home-assistant-config: Over 100 documented automations in my Home Assistant config.
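
For a taste of what adaptive does, here is a minimal sketch based on its documented API (the function f and the stopping goal are illustrative, not taken from this repository):

from math import exp, sin

import adaptive

def f(x):
    # A cheap stand-in for an expensive simulation.
    return sin(x) + exp(-x**2)

# Points are chosen where the function is hardest to interpolate,
# rather than on a uniform grid.
learner = adaptive.Learner1D(f, bounds=(-2, 2))
adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.01)
print(learner.npoints, "points sampled")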

Below are some (automatically generated) statistics about my activity on GitHub. For more info check out my website www.nijho.lt or talk to me on Mastodon.

Last updated at 2024-06-14 12:13:38.

GitHub statistics — my top 20

number of GitHub stars ⭐️

  1. basnijholt/adaptive-lighting, 1690 ⭐️s
  2. basnijholt/home-assistant-config, 1663 ⭐️s
  3. python-kasa/python-kasa, 1126 ⭐️s
  4. python-adaptive/adaptive, 1121 ⭐️s
  5. basnijholt/lovelace-ios-themes, 584 ⭐️s
  6. basnijholt/lovelace-ios-dark-mode-theme, 447 ⭐️s
  7. basnijholt/rsync-time-machine.py, 370 ⭐️s
  8. basnijholt/miflora, 362 ⭐️s
  9. topocm/topocm_content, 268 ⭐️s
  10. basnijholt/unidep, 215 ⭐️s
  11. basnijholt/home-assistant-streamdeck-yaml, 208 ⭐️s
  12. basnijholt/home-assistant-macbook-touch-bar, 94 ⭐️s
  13. kwant-project/kwant, 85 ⭐️s
  14. basnijholt/markdown-code-runner, 83 ⭐️s
  15. basnijholt/home-assistant-streamdeck-yaml-addon, 63 ⭐️s
  16. basnijholt/aiokef, 37 ⭐️s
  17. basnijholt/thesis-cover, 34 ⭐️s
  18. basnijholt/adaptive-scheduler, 26 ⭐️s
  19. basnijholt/instacron, 20 ⭐️s
  20. kwant-project/kwant-tutorial-2016, 19 ⭐️s

number of commits

  1. basnijholt/home-assistant-config, 1769 commits
  2. python-adaptive/adaptive, 1430 commits
  3. basnijholt/adaptive-scheduler, 758 commits
  4. basnijholt/adaptive-lighting, 558 commits
  5. basnijholt/thesis, 452 commits
  6. basnijholt/unidep, 439 commits
  7. basnijholt/zigzag-majoranas, 413 commits
  8. basnijholt/home-assistant-streamdeck-yaml, 314 commits
  9. topocm/topocm_content, 304 commits
  10. basnijholt/aiokef, 288 commits
  11. basnijholt/nijho.lt, 286 commits
  12. basnijholt/supercurrent-majorana-nanowire, 282 commits
  13. conda-forge/staged-recipes, 279 commits
  14. basnijholt/net-worth-tracker, 228 commits
  15. python-adaptive/paper, 198 commits
  16. home-assistant/core, 192 commits
  17. basnijholt/spin-orbit-nanowires, 191 commits
  18. ohld/igbot, 191 commits
  19. basnijholt/lovelace-ios-themes, 161 commits
  20. basnijholt/media_player.kef, 157 commits

These plots and stats are generated by this Jupyter notebook using this GitHub Action.

adaptive's People

Contributors

akhmerov, basnijholt, jbweston, jhoofwijk

adaptive's Issues

Issues that can potentially be closed

(original issue on GitLab)

opened by Bas Nijholt (@basnijholt) at 2018-12-07T19:57:55.908Z

I closed a bunch of issues that I think could be closed; however, there are some more that I don't know what to do with:

(LearnerND) add advanced usage example

(original issue on GitLab)

opened by Bas Nijholt (@basnijholt) at 2018-10-20T12:51:05.772Z

We should add the following "real world usage" code to the tutorial as an "Advanced example" once we merge !127 and !124.

This goes in a downloadable file, kwant_functions.py:

from functools import lru_cache
import numpy as np
import scipy.linalg
import scipy.spatial
import kwant


@lru_cache()
def create_syst(unit_cell):
    lat = kwant.lattice.Polyatomic(unit_cell, [(0, 0, 0)])
    syst = kwant.Builder(kwant.TranslationalSymmetry(*lat.prim_vecs))
    syst[lat.shape(lambda _: True, (0, 0, 0))] = 6
    syst[lat.neighbors()] = -1
    return kwant.wraparound.wraparound(syst).finalized()


def get_brillouin_zone(unit_cell):
    """Return the first Brillouin zone of the lattice as a ConvexHull."""
    syst = create_syst(unit_cell)
    A = get_A(syst)
    neighbours = kwant.linalg.lll.voronoi(A)
    lattice_points = np.concatenate(([[0, 0, 0]], neighbours))
    lattice_points = 2 * np.pi * (lattice_points @ A.T)
    vor = scipy.spatial.Voronoi(lattice_points)
    # The Voronoi cell around the origin of the reciprocal lattice
    # is the first Brillouin zone.
    brillouin_zone = vor.vertices[vor.regions[vor.point_region[0]]]
    return scipy.spatial.ConvexHull(brillouin_zone)


def momentum_to_lattice(k, syst):
    A = get_A(syst)
    k, residuals = scipy.linalg.lstsq(A, k)[:2]
    if np.any(abs(residuals) > 1e-7):
        raise RuntimeError("Requested momentum doesn't correspond"
                           " to any lattice momentum.")
    return k


def get_A(syst):
    B = np.asarray(syst._wrapped_symmetry.periods).T
    return np.linalg.pinv(B).T


def energies(k, unit_cell):
    syst = create_syst(unit_cell)
    k_x, k_y, k_z = momentum_to_lattice(k, syst)
    params = {'k_x': k_x, 'k_y': k_y, 'k_z': k_z}
    H = syst.hamiltonian_submatrix(params=params)
    energies = np.linalg.eigvalsh(H)
    return min(energies)

And this goes in the tutorial:

from functools import partial

from ipywidgets import interact_manual
import numpy as np

import adaptive
from kwant_functions import get_brillouin_zone, energies

adaptive.notebook_extension()

# Define the lattice vectors of some common unit cells
lattices = dict(
    hexagonal=(
        (0, 1, 0),
        (np.cos(-np.pi / 6), np.sin(-np.pi / 6), 0),
        (0, 0, 1)
    ),
    simple_cubic=(
        (1, 0, 0),
        (0, 1, 0),
        (0, 0, 1)
    ),
    fcc=(
        (0, .5, .5),
        (.5, .5, 0),
        (.5, 0, .5)
    ),
    bcc=(
        (-.5, .5, .5),
        (.5, -.5, .5),
        (.5, .5, -.5)
    )
)


learners = []
for name, unit_cell in lattices.items():
    hull = get_brillouin_zone(unit_cell)
    learner = adaptive.LearnerND(partial(energies, unit_cell=unit_cell), hull)
    learner.fname = name
    learners.append(learner)

learner = adaptive.BalancingLearner(learners, strategy='npoints')
adaptive.runner.simple(learner, goal=lambda l: l.learners[0].npoints > 20)

# XXX: maybe this could even be a `BalancingLearner` method.
def select(name, learner=learner):
    return next(l for l in learner.learners if l.fname == name)

def iso(unit_cell, level=8.5):
    l = select(unit_cell)
    return l.plot_isosurface(level=level)

def plot_tri(unit_cell):
    l = select(unit_cell)
    return l.plot_3D()

interact_manual(plot_tri, unit_cell=lattices.keys())  # this won't work, but something along these lines
interact_manual(iso, level=(-6, 9, 0.1), unit_cell=lattices.keys())

divide by zero warnings in LearnerND

(original issue on GitLab)

opened by Joseph Weston (@jbweston) at 2018-12-12T10:57:01.525Z

jbw@broadway adaptive-evaluation ((HEAD detached at v0.7.0)) $ python
Python 3.6.5 | packaged by conda-forge | (default, Apr  6 2018, 13:39:56) 
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import adaptive
>>> import numpy as np
>>> adaptive.__version__
'0.7.0'
>>> np.__version__
'1.15.2'
>>> def f(xy):
...     return 1
... 
>>> learner = adaptive.LearnerND(f, ((-1, 1), (-1, 1)))
>>> adaptive.runner.simple(learner, lambda l: l.npoints >= 100)
/home/jbw/work/code/2017/adaptive-evaluation/adaptive/learner/learnerND.py:524: RuntimeWarning: divide by zero encountered in long_scalars
  scale_multiplier = 1 / self._scale
/home/jbw/work/code/2017/adaptive-evaluation/adaptive/learner/learnerND.py:543: RuntimeWarning: invalid value encountered in long_scalars
  scale_factor = np.max(np.nan_to_num(self._scale / self._old_scale))

I guess we should explicitly set the places where _scale is zero to have an infinite multiplier?
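
A sketch of that fix (illustrative, not the actual learnerND.py code): cast the scale to float, since the "long_scalars" in the warning suggests integer arithmetic, and substitute an infinite multiplier wherever the scale is zero:

import numpy as np

def safe_scale_multiplier(scale):
    # Sketch only: elementwise 1 / scale, with zero entries mapped to
    # np.inf instead of triggering a divide-by-zero RuntimeWarning.
    scale = np.asarray(scale, dtype=float)  # float avoids "long_scalars"
    out = np.full_like(scale, np.inf)
    return np.divide(1.0, scale, out=out, where=scale != 0)

safe_scale_multiplier(np.array([2, 0, 4]))  # -> array([0.5, inf, 0.25])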

Document and test loss function signatures

(original issue on GitLab)

opened by Anton Akhmerov (@anton-akhmerov) at 2018-07-23T19:06:55.212Z

A loss function is a significant part of the interface of each learner. It provides the users with nearly infinite ways to customize the learner's behavior, and it is also the main way for the users to do so.

As a consequence I believe we need to do the following:

  • Each learner that allows a custom loss function must specify the detailed call signature of this function in the docstring.
  • We should test whether a learner provides the correct input to the loss function. For example, if we say that Learner2D passes an interpolation instance to the loss, we should run Learner2D with a loss that verifies its input is indeed an interpolation instance. We did not realize this before, but the loss is part of the learner's public API.
  • All loss functions that we provide should instead be factory functions that return a loss function whose call signature conforms to the spec. For example, learner2D.resolution_loss(ip, min_distance=0, max_distance=1) does not conform to the spec and is not directly reusable. Instead this should have been a functools.partial(learner2D.resolution_loss, min_distance=0, max_distance=1).
  • We should convert all our loss functions that have arbitrary hard-coded parameters into such factory functions, and test their conformance to the spec (a sketch follows below).
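
For concreteness, a sketch of such a factory, modeled on the resolution_loss example from adaptive's tutorial; the areas and default_loss helpers are assumed to live in adaptive.learner.learner2D:

import numpy as np
from adaptive.learner.learner2D import areas, default_loss

def resolution_loss_function(min_distance=0, max_distance=1):
    # Factory: returns a loss whose call signature conforms to the spec,
    # loss(ip), with the parameters baked in.
    def resolution_loss(ip):
        loss = default_loss(ip)
        A = areas(ip)
        loss[A < min_distance**2] = 0       # fine enough here, stop refining
        loss[A > max_distance**2] = np.inf  # too coarse, always refine
        return loss
    return resolution_loss

# The learner only ever calls loss(ip), e.g.:
# learner = adaptive.Learner2D(f, bounds=[(-1, 1), (-1, 1)],
#                              loss_per_triangle=resolution_loss_function(max_distance=0.2))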

revisit learner tests

(original issue on GitLab)

opened by Joseph Weston (@jbweston) at 2018-07-09T14:37:53.477Z

Currently most of the learner tests are property-based tests.

Currently most of the learner tests fail.

We should revisit what is actually being tested and see whether these are properties that we want learners to have.
If they are, we should aim to fix adaptive ASAP; if they are not, we should remove the tests.
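
As an illustration, a property-based test of the kind at stake here, written with hypothesis (the property itself is an example, assuming the current tell/npoints API, not one of the failing tests):

from hypothesis import given, strategies as st

import adaptive

def f(x):
    return x**2

@given(st.lists(st.floats(-1, 1, allow_nan=False, allow_infinity=False),
                min_size=1, max_size=20, unique=True))
def test_npoints_matches_data_told(xs):
    # Candidate property: a learner accounts for exactly the points it was told.
    learner = adaptive.Learner1D(f, bounds=(-1, 1))
    for x in xs:
        learner.tell(x, f(x))
    assert learner.npoints == len(xs)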

make triangulation tests stronger with more randomness

(original issue on GitLab)

opened by Joseph Weston (@jbweston) at 2018-07-10T15:35:28.517Z

Currently we test against the standard simplex.

We could improve matters by applying a random affine transform to the standard simplex, and checking that the tests still pass.

We would also need to have functions for generating points around simplices (inside, outside, on face). This should not be too hard.

For example, we can generate points on a face by choosing ndim positive random numbers from successively smaller intervals and then choosing a final number so that the sum is 1. These are the coordinates of a point in a simplex in the basis of the vertex vectors (a sketch follows below).
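
A sketch of that scheme (names are illustrative): draw the barycentric coordinates one by one from successively smaller intervals so they stay positive and sum to 1, then map them through the vertex vectors:

import numpy as np

def random_point_on_face(vertices, rng=None):
    """Sample a point on the face spanned by ``vertices`` (shape (k, ndim)),
    following the scheme described above (not necessarily uniform)."""
    rng = rng or np.random.default_rng()
    k = len(vertices)
    coords, remaining = [], 1.0
    for _ in range(k - 1):
        c = rng.uniform(0, remaining)  # successively smaller intervals
        coords.append(c)
        remaining -= c
    coords.append(remaining)  # final coordinate makes the sum exactly 1
    return np.asarray(coords) @ np.asarray(vertices, dtype=float)

# E.g. a random point on the face spanned by the standard simplex's vertices:
random_point_on_face(np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))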

learner tests fail

(original issue on GitLab)

opened by Joseph Weston (@jbweston) at 2018-07-09T15:41:24.516Z

Currently several integrator learner tests fail.

To reproduce, check out current master (f268c8d as of now).

Running the test suite with the following random seeds will illustrate all of the failure modes:

pytest --randomly-dont-reorganize --randomly-seed=1531149508 adaptive/tests/test_cquad.py  # 2 failures in integrator learner
pytest --randomly-dont-reorganize --randomly-seed=1531150331 adaptive/tests/test_cquad.py  # 1 same failure as above, and 1 different

I will mark all these tests as xfailing for now (as sketched below), but IMO at least one of these failures does indicate a bug in the interval logic.
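
Marking them would look like this (test name and reason are illustrative):

import pytest

@pytest.mark.xfail(reason="known failure; possibly a bug in the interval logic")
def test_some_cquad_property():
    ...  # body elided; one of the failing test cases would go here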

Speed up LearnerND

(original issue on GitLab)

opened by Jorn Hoofwijk (@Jorn) at 2018-07-08T17:31:01.760Z

To make the code run faster:

  • cython
  • numba

To implement a new algorithm for better performance:

  • the _ask function makes a new heap every time it is called, which is wasted time (resolved in !88); a sketch of a persistent heap follows below
  • the _ask function recomputes the loss in subtriangulated simplices every time, which should not be needed (resolved in !88)
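
For the first bullet, a sketch (illustrative, not LearnerND's actual implementation) of a max-heap that persists across calls, invalidating stale loss entries lazily instead of being rebuilt:

import heapq
from itertools import count

class SimplexLossHeap:
    """Keep simplices ordered by loss across ask() calls."""

    def __init__(self):
        self._heap = []           # entries: (-loss, tiebreaker, simplex)
        self._latest = {}         # simplex -> most recently pushed loss
        self._tiebreak = count()  # so simplices themselves are never compared

    def push(self, simplex, loss):
        self._latest[simplex] = loss
        heapq.heappush(self._heap, (-loss, next(self._tiebreak), simplex))

    def pop_worst(self):
        # Discard entries that were superseded by a later push.
        while self._heap:
            neg_loss, _, simplex = heapq.heappop(self._heap)
            if self._latest.get(simplex) == -neg_loss:
                del self._latest[simplex]
                return simplex, -neg_loss
        raise KeyError("pop from an empty heap")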
