huji-deep / flowket

A framework based on TensorFlow for running variational Monte Carlo simulations of quantum many-body systems.

License: MIT License

Language: Python 100.00%
Topics: variational-monte-carlo, tensorflow, keras, autoregressive, pixelcnn, python3

flowket's People

Contributors: noamwies, orsharir

flowket's Issues

Observable design

A couple of issues with the current design of the Observable class:

  1. Observable shouldn't reference VMC directly, both because it isn't really needed and because VMC also holds an observable, which creates a cyclic reference.
  2. Objects should hold as little state as possible; specifically, update_batch_local_energy (or its variants) shouldn't save the results to a class variable, but should return them to the caller.
  3. Observables are not strictly about energies, so the relevant methods should have a more general, descriptive name, e.g. def estimate(self, wave_function_model, configurations) (see the sketch after this list).
  4. SigmaZ should also be a type of observable (preferably one that inherits from a so-called LambdaObservable, which applies a general Python function to a spin configuration).
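
A minimal sketch of the direction these points suggest (names and signatures are illustrative, not the current flowket API):

import numpy as np


class Observable:
    """Stateless base class: estimators take their inputs explicitly."""

    def estimate(self, wave_function_model, configurations):
        """Return per-configuration estimates instead of caching them."""
        raise NotImplementedError


class LambdaObservable(Observable):
    """Wraps a plain Python function of a spin configuration."""

    def __init__(self, func):
        self.func = func

    def estimate(self, wave_function_model, configurations):
        return np.array([self.func(c) for c in configurations])


# SigmaZ as a LambdaObservable: the mean magnetization of a configuration.
sigma_z = LambdaObservable(lambda config: np.mean(config))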

Running Flowket under CPU parallelization with MPI

Hi,

Walking through the examples such as this code, I noticed that we can speed up the sampling by using GPUs.
I am not an expert in TensorFlow or Horovod, but from the Horovod website I understand that it is capable of running computations over MPI.

May I ask if it is possible to run flowket code without a GPU, using CPU parallelization over MPI?
If so, how should I modify the code? For example, I imagine something like the sketch below, but I have not verified it.
(I do not have GPU machines right now, so I wanted to know if my current environment on supercomputers is fine.)
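
A minimal, self-contained sketch (my assumption: Horovod built with MPI support also works on CPU-only nodes; I have not tested this with flowket's custom optimizers):

import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one Horovod rank per MPI process; no GPUs required

# Toy model standing in for the flowket model from the examples.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
optimizer = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01))  # MPI all-reduce of gradients
model.compile(optimizer=optimizer, loss='mse')

# Broadcast initial weights from rank 0 so all ranks start identically.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
model.fit(np.random.rand(64, 4), np.random.rand(64, 1),
          callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)

# Launch with: mpirun -np 4 python this_script.py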

Thank you very much for your help in advance.

Best regards,
Nobu

Tuple / List normalization issue

When using the Heisenberg Hamiltonian, if the state shape is a list (instead of a tuple), the following error is raised. Wherever it is reasonable, we should convert an input list/tuple to a single consistent type, or raise an appropriate error (a sketch follows the traceback).

Traceback (most recent call last):
  File "basic_heisenberg_2d.py", line 43, in <module>
    max_queue_size=0, workers=0)
  File "/cs/labs/shashua/ors07/env/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 2177, in fit_generator
    initial_epoch=initial_epoch)
  File "/cs/labs/shashua/ors07/env/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_generator.py", line 147, in fit_generator
    generator_output = next(output_generator)
  File "/cs/labs/shashua/ors07/Pyket/src/pyket/optimization/variational_monte_carlo.py", line 101, in to_generator
    yield next(self)
  File "/cs/labs/shashua/ors07/Pyket/src/pyket/optimization/variational_monte_carlo.py", line 97, in __next__
    return self.next()
  File "/cs/labs/shashua/ors07/Pyket/src/pyket/optimization/variational_monte_carlo.py", line 88, in next
    self._update_batch_local_energy()
  File "/cs/labs/shashua/ors07/Pyket/src/pyket/optimization/variational_monte_carlo.py", line 80, in _update_batch_local_energy
    self.energy_observable.update_batch_local_energy()
  File "/cs/labs/shashua/ors07/Pyket/src/pyket/optimization/variational_monte_carlo.py", line 51, in update_batch_local_energy
    local_connections, hamiltonian_values, all_use_conn = self.operator.find_conn(self.variational_monte_carlo.current_batch)
  File "/cs/labs/shashua/ors07/Pyket/src/pyket/operators/heisenberg.py", line 42, in find_conn
    calculator = HeisenbergFindConn(self, batch_size, pbc=self.pbc)
  File "/cs/labs/shashua/ors07/Pyket/src/pyket/operators/heisenberg.py", line 60, in __init__
    self.all_conn = numpy.zeros((num_of_conn, batch_size) + self.ham.hilbert_state_shape)
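
A minimal sketch of the proposed normalization (the helper name and its placement are my suggestion, not existing flowket code):

def normalize_state_shape(hilbert_state_shape):
    """Accept a list or tuple and return a canonical tuple, or raise."""
    if isinstance(hilbert_state_shape, (list, tuple)):
        return tuple(int(d) for d in hilbert_state_shape)
    raise TypeError('hilbert_state_shape must be a list or tuple, got %r'
                    % type(hilbert_state_shape))

# With this applied in the operator constructors, the expression
# (num_of_conn, batch_size) + self.ham.hilbert_state_shape in heisenberg.py
# would always concatenate two tuples.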

Add a simple "wrapper"-like function or general class for easily defining autoregressive models

While working with the framework, it seems that the current method for defining a new autoregressive model is to define a new class that inherits from AutoNormalizedAutoregressiveMachine. Though it's definitely good to be able to define general architectures this way, it would be nice to have a simple wrapper that accepts a Keras model that outputs N unnormalized conditional probabilities (or N unnormalized complex conditional wave functions) and produces two models: one outputting the normalized conditional probabilities and one outputting the normalized wave function (after summing over the chosen indices). It's also fine to do it as a wrapper/factory class that accepts a Keras model as input and returns a class inheriting from AutoNormalizedAutoregressiveMachine that uses this model. A sketch of the idea follows below.

You could optionally use the logic of the fast autoregressive sampler to check that the model is actually autoregressive (i.e., induces a valid ordering on the input), though this check shouldn't be strict: if the model uses unsupported layers, simply output a warning that the correctness of the model cannot be determined.
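
A hypothetical sketch of such a wrapper (names are illustrative, not flowket API); it covers the probability half, and the wave-function model would analogously sum over the chosen indices:

import tensorflow as tf
from tensorflow.keras.layers import Lambda
from tensorflow.keras.models import Model


def wrap_autoregressive(unnormalized_model):
    """Given a Keras model outputting N unnormalized conditional
    log-probabilities, return a model with them normalized per step."""
    normalized = Lambda(lambda x: tf.nn.log_softmax(x, axis=-1))(
        unnormalized_model.output)
    return Model(inputs=unnormalized_model.input, outputs=normalized)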

Tensorboard reporting samples instead of iterations

I've noticed that in TensorBoard, instead of having measurements (loss, energy, time, variance, etc.) correspond to the number of iterations, they currently correspond to the number of samples, which is confusing and makes it difficult to compare the effect of different batch sizes.

As a workaround I can use relative time, but that makes sense only when I use the exact same machine.

Tensorboard callback breaks on TF=1.14 (and probably TF2)

When used on TF 1.14, there is no longer a model.targets attribute, so the TensorBoard callback breaks. It appears that the callback was based on the public Keras TensorBoard callback instead of the tf.keras version, which correctly handles this without using the targets member (a minimal illustration follows).
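
A sketch of the suggested direction (untested; the class name is hypothetical):

import tensorflow as tf


class FlowketTensorBoard(tf.keras.callbacks.TensorBoard):
    """Sketch: subclass tf.keras's TensorBoard, which per the report
    above does not rely on model.targets, and merge flowket-specific
    scalars into `logs` before delegating."""

    def on_epoch_end(self, epoch, logs=None):
        logs = dict(logs or {})
        # e.g. logs['energy'] = ... (flowket-specific values)
        super().on_epoch_end(epoch, logs)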

ModuleNotFoundError: No module named 'flowket.exact'

Traceback (most recent call last):
  File "basic_autoregressive_2d.py", line 10, in <module>
    from flowket.optimization import VariationalMonteCarlo, loss_for_energy_minimization
  File "/home/---/.conda/envs/pytorch-gpu/lib/python3.7/site-packages/flowket/optimization/__init__.py", line 3, in <module>
    from .exact_variational import ExactVariational
  File "/home/---/.conda/envs/pytorch-gpu/lib/python3.7/site-packages/flowket/optimization/exact_variational.py", line 3, in <module>
    from ..exact.utils import binary_array_to_decimal_array, decimal_array_to_binary_array, fsum, complex_norm_log_fsum_exp
ModuleNotFoundError: No module named 'flowket.exact'

When I run the files in the experiment folder, I always encounter the above error. I am sure the package is properly installed.

J1J2 exact example leads to NaN values

The simple J1-J2 example with exact optimization (j1j2_2d_exact_4.py) seems to lead to NaN / overflow values after a few iterations:

/cs/labs/shashua/ors07/Pyket/src/pyket/optimization/exact_variational.py:92: RuntimeWarning: overflow encountered in multiply
  np.multiply(np.real(self.naive_local_energy_minus_energy), np.real(self.naive_local_energy_minus_energy), out=self.naive_local_energy_minus_energy_squared)
/cs/labs/shashua/ors07/Pyket/src/pyket/optimization/exact_variational.py:93: RuntimeWarning: invalid value encountered in multiply
  np.multiply(self.naive_local_energy_minus_energy_squared, self.exact_variational.probs, out=self.probs_mult_local_energy_variance)
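
For reference, a sketch of a possibly safer accumulation (my guess at a mitigation, not a confirmed fix; flowket preallocates output buffers, which this sketch ignores):

import numpy as np


def weighted_energy_variance(local_energy, energy, probs):
    """Accumulate the variance terms in float64 without in-place buffers."""
    centered = np.real(local_energy - energy).astype(np.float64)
    return float(np.sum(probs * centered ** 2))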

Compatibility with TF2

Now that TF 2.0 has reached beta (and API freeze), we can start testing whether our library is compatible with it. The idea isn't to leverage any of the new capabilities of TF 2.0 (for instance, there is going to be new and better support for multi-GPU upon the TF 2.0 release), only to check that our current code can be run as-is using the TF2 backend.

This is a low-priority task that can be postponed until the other issues are resolved.

Break validation into steps instead of one large batch

To make it possible to compute good estimates over the validation set, the code should support aggregating mini-batches when estimating the energy (and other observables). Moreover, there should be an easy way to take an Observable instance and estimate its value outside of fit() or TensorBoard (see the sketch below).
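
A sketch of the kind of helper this asks for (the estimate signature follows the proposal in the Observable-design issue above; all names are illustrative):

import numpy as np


def estimate_in_batches(observable, wave_function_model, configurations,
                        batch_size=128):
    """Average an observable over a validation set in mini-batches."""
    estimates, weights = [], []
    for start in range(0, len(configurations), batch_size):
        batch = configurations[start:start + batch_size]
        estimates.append(np.mean(observable.estimate(wave_function_model, batch)))
        weights.append(len(batch))
    return float(np.average(estimates, weights=weights))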

wrong max_number_of_local_connections fixing and continue recursively

While trying to train J1J2 (simply taking the basic Ising example and replacing the operator), the message "wrong max_number_of_local_connections fixing and continue recursively" is repeatedly printed during training.

If this is the normal behavior, should we show this to the user? If not, what am I doing wrong?

Stochastic reconfiguration for autoregressive models?

Hi,

Thank you very much for making this great library public!
I was walking through the example code and noticed that stochastic reconfiguration is available, e.g., for the RBM machine.
However, when I tried to apply it to the convnet autoregressive model, I encountered an error.

Concretely, I executed code looking like the following, but the optimizer could not be defined.
Is SR simply not available so far, or is there a workaround?

Thanks very much in advance!

Nobu

from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model

from flowket.optimizers import ComplexValuesStochasticReconfiguration
from flowket.layers import LogSpaceComplexNumberHistograms
from flowket.machines import ConvNetAutoregressive2D
from flowket.operators import Ising
from flowket.optimization import VariationalMonteCarlo, loss_for_energy_minimization
from flowket.samplers import AutoregressiveSampler

# Lattice size
Nx = 3
Ny = 3

# Network hyperparameters
depth = 2
nchannel = 8

# Define the autoregressive machine
hilbert_state_shape = [Nx, Ny]
inputs = Input(shape=hilbert_state_shape, dtype='int8')
convnet = ConvNetAutoregressive2D(inputs, depth=depth, num_of_channels=nchannel, weights_normalization=False)
predictions, conditional_log_probs = convnet.predictions, convnet.conditional_log_probs
predictions = LogSpaceComplexNumberHistograms(name='psi')(predictions)

# Wave-function model and the conditional log-probabilities model used for sampling
model = Model(inputs=inputs, outputs=predictions)
conditional_log_probs_model = Model(inputs=inputs, outputs=conditional_log_probs)

# This is where the failure occurs: defining the SR optimizer
optimizer = ComplexValuesStochasticReconfiguration(model, convnet.predictions_jacobian, lr=0.05, diag_shift=0.1,
                                                   iterative_solver=False)

Cannot pip install flowket!

When I use the command pip install flowket, the terminal always reports HTTP errors. I believe my network is working well. I suspect the package may not have been properly released.
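
A possible workaround (my suggestion, assuming the PyPI release is at fault): installing directly from the GitHub repository bypasses PyPI entirely, e.g. pip install git+https://github.com/huji-deep/flowket.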
