
turinglang / advancedhmc.jl

Robust, modular and efficient implementation of advanced Hamiltonian Monte Carlo algorithms

Home Page: https://turinglang.org/AdvancedHMC.jl/

License: MIT License

Languages: Julia 15.81%, Jupyter Notebook 84.19%
Topics: hmc, nuts, hamiltonian-monte-carlo, mcmc

advancedhmc.jl's People

Contributors

andreasnoack, chriselrod, chrisrackauckas, cpfiffer, cscherrer, devmotion, ebb-earl-co, github-actions[bot], ivanyashchuk, jaimerzp, juliatagbot, kaandocal, mohamed82008, rhaps0dy, saranjeetkaur, scheidan, sethaxen, simeonschaub, theogf, torfjelde, trappmartin, treigerm, vaibhavdixit02, willtebbutt, xukai92, yebai


advancedhmc.jl's Issues

Update repo description

@yebai It seems that I don't have access to edit the description of the repo after the transfer. Maybe we want something like "Efficient building blocks for modern HMC samplers"?

Related papers

Neal, R. M. (2011). MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2(11), 2.

Betancourt, M. (2017). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv preprint arXiv:1701.02434.

Girolami, M., & Calderhead, B. (2011). Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2), 123-214.

Betancourt, M. J., Byrne, S., & Girolami, M. (2014). Optimizing the integrator step size for Hamiltonian Monte Carlo. arXiv preprint arXiv:1411.6669.

Betancourt, M. (2016). Identifying the optimal integration time in Hamiltonian Monte Carlo. arXiv preprint arXiv:1601.00225.

Hoffman, M. D., & Gelman, A. (2014). The No-U-Turn Sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1), 1593-1623.

Correctly testing show functions

Currently, verbosity is disabled during tests to keep the output clean, so the show() functions are not tested at all. We may want to include tests for them explicitly.

Instantiation of new `AbstractMetric`s

We currently have a method for each AbstractMetric that lets us create another instance of the same metric type, of the following form:

metric = ...
new_metric_of_same_type = metric(inverse_mass_matrix)
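
For reference, a hedged sketch of how such a method might be implemented for the diagonal case (field layout as quoted from src/metric.jl later on this page):

    function (m::DiagEuclideanMetric)(M⁻¹::AbstractVector)
        return DiagEuclideanMetric(m.dim, M⁻¹, sqrt.(M⁻¹), similar(m._temp))
    end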

Why do we need such a method?

License?

Hi @xukai92, I'd like to start a repository heavily based on your code, but with some modifications that lead to incompatibility. I don't see a license file. Is there one you prefer? I'd like to use MIT or BSD if possible. Thanks!

Plan for v0.2

  • Consider introducing the PhasePoint type #17 #50
  • Update gradient function to return both value and gradient #56
  • RFC find_good_eps #27
  • More robust and modular way of detecting divergence #16
  • Return more information for each step #59
  • Implement Base.show for all user side types #36
  • RFC Adapter types #28
  • Instantiation of new AbstractMetrics #45
  • Simplifying Metric types by using AbstractPDMat #34
  • Adding noise to step size? #10

RFC `find_good_eps`

Copied from @yebai's comment in #23: move

    r = rand_momentum(rng, h)
    H = hamiltonian_energy(h, θ, r)

out of find_good_eps, and wrap

        θ′, r′, _is_valid = step(Leapfrog(ϵ′), h, θ′, r′)
        H_new = _is_valid ? hamiltonian_energy(h, θ′, r′) : Inf

into a function

function A(ϵ, h, θ, r)
    θ′, r′, _is_valid = step(Leapfrog(ϵ), h, θ, r)
    return _is_valid ? hamiltonian_energy(h, θ′, r′) : Inf
end

Then, we can drop the dependency on AdvancedHMC from this function: find_good_eps(h::Hamiltonian, θ::AbstractVector{T}, A::Function; max_n_iters::Int=100), and move it into adaption/stepsize.jl.
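
For illustration, a simplified sketch of the decoupled search (the signature here is reduced even further, so the loop depends only on the closure A and the starting energy H0; the doubling/halving rule follows Hoffman & Gelman, 2014):

    function find_good_eps(A::Function, H0::Real; ϵ::Real=0.1, max_n_iters::Int=100)
        # Grow ϵ while the acceptance probability is still above 0.5, else shrink it.
        d = (H0 - A(ϵ) > log(0.5)) ? 1 : -1
        for _ in 1:max_n_iters
            ϵ′ = ϵ * 2.0^d
            # Stop once the acceptance probability crosses 0.5.
            ((H0 - A(ϵ′) > log(0.5)) != (d == 1)) && break
            ϵ = ϵ′
        end
        return ϵ
    end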

Add more informative error messages.

I just came across the error message at:

@warn "The current proposal will be rejected (due to numerical error(s))..."

This error message should be more informative, so that users get an idea of why they see it; in my case, I suspect the model I'm looking at is misspecified. It would be good to know which quantity is Inf.

Samples during adaptation phase

Maybe the sample function should not return the samples drawn during the adaptation phase by default (and support an optional keyword to include them)?

Support mutable and immutable array types

Discussions in #5

@mohamed82008 I reverted some functions around pre-allocating memory so that only intermediate variables are pre-allocated and all functions remain immutable. This is to avoid unexpected behaviour due to mutability. We probably want to implement mutable versions of these functions later, which pre-allocate memory for the return variables explicitly. Does that sound good to you?

Sorry, I missed this question. I think it would be nice to support both mutable and immutable array types, e.g. StaticArrays.SArray and StaticArrays.MArray or Vector. I saw Chris mention somewhere that DiffEq does this by defining two versions of each function, one for mutable arrays and one for immutable ones: the mutating version takes the pre-allocated output vector as its first argument, whereas the non-mutating version doesn't. We might want to explore something like this in a future PR. Supporting StaticArrays would be an interesting direction to pursue for speedups on parameter vectors of size <1000 or so; small arrays are the sweet spot of StaticArrays, which puts ordinary arrays to shame in many cases. (See the sketch below.)
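
A minimal sketch of the two-method pattern described above (the helper name is hypothetical, not part of the package):

    using StaticArrays

    # Mutating version: writes into a pre-allocated output vector.
    function leapfrog_position!(θ_out::AbstractVector, θ, r, ϵ)
        θ_out .= θ .+ ϵ .* r
        return θ_out
    end

    # Non-mutating version: allocates, and also works for immutable arrays.
    leapfrog_position(θ, r, ϵ) = θ .+ ϵ .* r

    θ = @SVector [1.0, 2.0]
    r = @SVector [0.1, -0.2]
    leapfrog_position(θ, r, 0.01)  # returns a new SVector; no mutation needed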

[RFC]: notations used in this package

At the moment, we are using the following notations:

  • logπ - log target distribution
  • ∂logπ∂θ - gradient
  • prop - transitioning kernel
  • θ_init - initial value for sampling states / parameters

I think this can be improved by

  • ∂logπ∂θ ==> ∇logπ
  • prop ==> κ (kappa) or τ (tau) - MH transitioning kernel
  • θ - parameters, r - momentum
  • θ_init ==> θ₀ (theta_{0}), r_init ==> r₀ (r_{0})
  • z := (θ, r) - (position, momentum) pair

Any other notations missing?

Identify divergent transition

I was speaking with Kai about this, and we agreed it would be good to be able to identify divergence of a particular transition (Sec. 6.2 here), allowing us to identify pathologies which may bias the posterior sample.

More robust and modular way of detecting divergence

Approximation error of leapfrog integration (i.e. accumulated Hamiltonian energy error) can sometimes explode, for example when the curvature of the current region is very high. This type of approximation error is sometimes called divergence [1] since it shifts a leapfrog simulation away from the correct solution.

In Turing, this type of error is currently caught by a relatively ad-hoc function called is_valid:

function is_valid(v::AbstractVector{<:Real})
    return all(isfinite, v)  # false if any element is NaN or ±Inf, per the description below
end

is_valid catches cases where one or more elements of the parameter vector are NaN or Inf. This has several drawbacks:

  • it's not general enough to catch all leapfrog approximation errors, e.g. when the parameter vector is valid but the Hamiltonian energy is invalid;
  • it may be delayed, because numerical errors can appear in the Hamiltonian energy earlier than in the parameter vector;
  • it's hard to propagate the exception around, i.e. at the moment we use a heuristic to find the last valid point before the approximation/numerical error happened.
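
For comparison, a check on the Hamiltonian energy itself would catch both failure modes above; a hedged sketch (the name isdivergent and the Δ_max default are hypothetical):

    isdivergent(H0::Real, H′::Real; Δ_max::Real=1000.0) =
        !isfinite(H′) || (H′ - H0 > Δ_max)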

Therefore, we might want to refactor the current code a bit for a more robust mechanism for handling leapfrog approximation errors. Perhaps we can learn from the DynamicHMC implementation:

https://github.com/tpapp/DynamicHMC.jl/blob/master/src/buildingblocks.jl#L168

[1]: Betancourt, M. (2017). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv preprint arXiv:1701.02434.

Possible (algorithmic) speedups

  1. Avoid running unnecessary passes of the pdf function of the target distribution: the current design separates the pdf and its gradient, so there are some duplicated evaluations of the pdf (see the sketch below).
  2. update performs an unnecessary matrix decomposition (#24): we need a flag indicating whether M⁻¹ has been updated by the adaptor.
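
A hedged sketch for point 1: compute the log density and its gradient in a single pass (the helper name is hypothetical; ForwardDiff/DiffResults shown only as one possible backend):

    using ForwardDiff, DiffResults

    function value_and_gradient(logπ, θ::AbstractVector)
        res = DiffResults.GradientResult(θ)
        ForwardDiff.gradient!(res, logπ, θ)
        return DiffResults.value(res), DiffResults.gradient(res)
    end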

sample in this package and Distributions

Both this package and Distributions.jl define a sample function, as opposed to this package extending Distributions.jl's sample with new methods. This causes a name clash which, given that these packages will likely be used together a lot, is annoying. Should we maybe extend Distributions.jl's sample rather than define our own?
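
A minimal sketch of the extension approach (the method signature is hypothetical):

    import StatsBase  # Distributions.jl re-exports StatsBase.sample

    function StatsBase.sample(h::Hamiltonian, prop::AbstractProposal,
                              θ::AbstractVector, n_samples::Int)
        # ... run the HMC chain; no clash with `using Distributions` ...
    end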

Hong's abstraction draft for unifying trajectories from PR #69

Moved from #69

#################################
### Hong's abstraction starts ###
#################################

###
### Create a `Termination` type for each `Trajectory` type, e.g. HMC, NUTS etc.
### Merge all `Trajectory` types, and make `transition` dispatch on `Termination`,
### such that we can overload `transition` for different HMC samplers.
### NOTE: stopping criteria: max_depth::Int, Δ_max::AbstractFloat, n_steps, λ
###

"""
Abstract type for termination.
"""
abstract type AbstractTermination end

# Termination type for HMC and HMCDA
struct StaticTermination{D<:AbstractFloat} <: AbstractTermination
    n_steps :: Int
    Δ_max :: D
end

# NoUTurnTermination
struct NoUTurnTermination{D<:AbstractFloat} <: AbstractTermination
    max_depth :: Int
    Δ_max :: D
    # TODO: add other necessary fields for the No-U-Turn stopping criteria.
end

struct Trajectory{I<:AbstractIntegrator} <: AbstractTrajectory{I}
    integrator :: I
    n_steps :: Int # Counter for total leapfrog steps already applied.
    Δ :: AbstractFloat # Current Hamiltonian energy minus starting Hamiltonian energy
    # TODO: replace all `*Trajectory` types with `Trajectory`.
    # TODO: add turn statistic, divergent statistic, proposal statistic
end

isterminated(
    x::StaticTermination,
    τ::Trajectory
) = τ.n_steps >= x.n_steps || τ.Δ >= x.Δ_max

# Combine trajectories, e.g. those created by the build_tree algorithm.
#  NOTE: combine proposal (via slice/multinomial sampling), combine turn statistic,
#       and combine divergent statistic.
combine_trajectory(τ′::Trajectory, τ′′::Trajectory) = nothing # To-be-implemented.

## TODO: move slice variable `logu` into `Trajectory`?
combine_proposal(τ′::Trajectory, τ′′::Trajectory) = nothing # To-be-implemented.
combine_turn(τ′::Trajectory, τ′′::Trajectory) = nothing # To-be-implemented.
combine_divergence(τ′::Trajectory, τ′′::Trajectory) = nothing # To-be-implemented.

transition(
    τ::Trajectory{I},
    h::Hamiltonian,
    z::PhasePoint,
    t::T
) where {I<:AbstractIntegrator,T<:AbstractTermination} = nothing

###############################
### Hong's abstraction ends ###
###############################

DynamicHMC is (~4 times) faster than Turing's HMC implementation

Time taken to collect 2 million samples:

  • Turing.NUTS: 3318.293540 seconds (33.35 G allocations: 1.299 TiB, 29.35% gc time)
  • Turing.DynamicNUTS: 848.741643 seconds (7.86 G allocations: 251.076 GiB, 32.13% gc time)

using the following code:

using DynamicHMC, Turing, Test

@model gdemo(x, y) = begin
  s ~ InverseGamma(2,3)
  m ~ Normal(0,sqrt(s))
  x ~ Normal(m, sqrt(s))
  y ~ Normal(m, sqrt(s))
  return s, m
end

mf = gdemo(1.5, 2.0)

@time chn1 = sample(mf, DynamicNUTS(2000000));

@time chn2 = sample(mf, Turing.NUTS(2000000, 0.65));

Simplifying `Metric` types by using `AbstractPDMat`

The following 3 types in src/metric.jl:

  • struct DiagEuclideanMetric{T<:Real,A<:AbstractVector{T}} <: AbstractMetric{T}
    dim :: Int
    # Diagonal of the inverse of the mass matrix
    M⁻¹ :: A
    # Square root of the inverse of the mass matrix
    sqrtM⁻¹ :: A
    # Pre-allocation for intermediate variables
    _temp :: A
    end

  • struct DenseEuclideanMetric{T<:Real,AV<:AbstractVector{T},AM<:AbstractMatrix{T}} <: AbstractMetric{T}
    dim :: Int
    # Inverse of the mass matrix
    M⁻¹ :: AM
    # U of the Cholesky decomposition of the mass matrix
    cholM⁻¹ :: UpperTriangular{T,AM}
    # Pre-allocation for intermediate variables
    _temp :: AV
    end

  • struct UnitEuclideanMetric{T<:Real} <: AbstractMetric{T}
    dim :: Int
    end

can be merged into one type using AbstractPDMat

struct EuclideanMetric{T<:Real,A<:AbstractPDMat{T}} <: AbstractMetric{T}
    M⁻¹     ::  A # contains dim, M⁻¹ and its Cholesky decomposition.
    # Pre-allocation for intermediate variables (a plain vector, not a PDMat)
    _temp   ::  Vector{T}
end

Reference: https://github.com/JuliaStats/PDMats.jl
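
A hedged usage sketch (the one-argument convenience constructor is hypothetical); PDMats covers all three existing cases behind AbstractPDMat:

    using PDMats, LinearAlgebra

    # Convenience constructor that allocates `_temp` itself.
    EuclideanMetric(M⁻¹::AbstractPDMat) = EuclideanMetric(M⁻¹, zeros(M⁻¹.dim))

    m_unit  = EuclideanMetric(ScalMat(5, 1.0))            # replaces UnitEuclideanMetric
    m_diag  = EuclideanMetric(PDiagMat(ones(5)))          # replaces DiagEuclideanMetric
    m_dense = EuclideanMetric(PDMat(Matrix(1.0I, 5, 5)))  # replaces DenseEuclideanMetric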

RFC `Adapter` types

Copied from @yebai's comments in #23

can we

  • move this file's content into adaptation/Adaptation.jl
  • remove the file src/adaptation.jl
  • and include adaptation/Adaptation.jl from AdvancedHMC directly?

Copied from @yebai's comments in #23

consider merging the following three functions

  • getss(fss::FixedStepSize)
  • getss(da::DualAveraging)
  • getss(mssa::ManualSSAdapter)

into

  • getϵ(a::StepSizeAdapter) = a.ϵ, or more generally
  • getϵ(a::AbstractAdapter) = a.ϵ

Copied from @yebai's comments in #23

consider removing ManualSSAdapter and related code, since we can always use FixedStepSize

Copied from @yebai's comments in #23

can we make DualAveraging an immutable type, and let

  • adapt_stepsize!, adapt!
  • finalize!

always return a new DualAveraging instance. This can improve both readability and performance, since immutable types are allocated on the stack. If so, we can also remove

  • reset!

and merge

  • adapt!(da::DualAveraging, θ::AbstractVector{<:AbstractFloat}, α::AbstractFloat)
  • adapt_stepsize!(da::DualAveraging, α::AbstractFloat)

into

  • adapt(d::AbstractAdaptor, α::AbstractFloat=1.0, θ::AbstractVector{<:AbstractFloat}=zeros(0))

We can probably do similar things for Metric adaptors.
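
A minimal sketch of the immutable style (field names are illustrative; the constants follow the dual-averaging update of Hoffman & Gelman, 2014):

    struct DualAveraging{T<:AbstractFloat}
        μ :: T    # shrinkage target
        δ :: T    # target acceptance rate
        ϵ :: T    # current step size
        ϵ̄ :: T    # averaged step size
        H̄ :: T    # running average of the acceptance error
        m :: Int  # iteration counter
    end

    function adapt(da::DualAveraging{T}, α::T) where {T}
        γ, t₀, κ = 0.05, 10, 0.75
        m = da.m + 1
        H̄ = (1 - 1 / (m + t₀)) * da.H̄ + (da.δ - α) / (m + t₀)
        logϵ = da.μ - sqrt(m) / γ * H̄
        logϵ̄ = m^(-κ) * logϵ + (1 - m^(-κ)) * log(da.ϵ̄)
        return DualAveraging(da.μ, da.δ, exp(logϵ), exp(logϵ̄), H̄, m)  # fresh instance, no reset! needed
    end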

Copied from @yebai's comments in #23

Consider

  • removing AbstractCompositeAdapter, and
  • making NaiveCompAdaptor and ThreePhaseAdaptor direct subtypes of AbstractAdapter

Since AbstractCompositeAdapter is only used to define

  • getM⁻¹(ca::AbstractCompositeAdapter)
  • getss(ca::AbstractCompositeAdapter)
  • update(h::Hamiltonian, p::AbstractProposal, a::Adaptation.AbstractCompositeAdapter)

These functions can be defined for AbstractAdaptor directly:

  • getM⁻¹(a::AbstractAdapter) = a.var
  • getss(a::AbstractAdapter) = a.ϵ
  • update(h::Hamiltonian, p::AbstractProposal, a::Adaptation.AbstractCompositeAdapter)

Then we can overload these functions for subtypes of AbstractAdaptor

  • getM⁻¹(a::ThreePhaseAdapter) = getM⁻¹(a.pc)
  • getss(a::ThreePhaseAdapter) = getss(a.ssa)
  • getM⁻¹(dpc::UnitPreConditioner) = nothing (perhaps return 1.0 instead of nothing)
  • getM⁻¹(dpc::DensePreConditioner) = dpc.covar
