Comments (3)
Initial tests seem to indicate that the variation in the KL divergence (not necessarily the mean) between the "distribution" formed by a "sampled" run and a "baseline" run behaves as I had hoped. This makes some sense in retrospect: we expect there to be some typical divergence, so it's the stability of this divergence that actually matters. I might add some more notes here for (tentative) public posterity depending on how further tests turn out.
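To make the diagnostic concrete, here is a minimal numpy sketch of what "variation, not mean" looks like in practice; the values in kld_estimates are placeholders rather than output from a real run:

```python
import numpy as np

# Placeholder values standing in for KL divergences between several
# simulated "sampled" realizations and the "baseline" run.
kld_estimates = np.array([0.51, 0.47, 0.53, 0.49, 0.50])

# There is some typical (nonzero) divergence between any realization and
# the baseline, so the mean by itself is not very informative ...
kld_mean = np.mean(kld_estimates)

# ... what matters is how stable that divergence is across realizations,
# i.e. its fractional scatter about the mean.
kld_frac_var = np.std(kld_estimates) / kld_mean
print(f"mean KLD = {kld_mean:.3f}, fractional variation = {kld_frac_var:.3f}")
```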
Although resampling complicates the procedure algorithmically, I managed to implement something that works. Essentially, we compute the KL divergence from the perspective of the "first" run (we're computing the KL divergence to this run), which may or may not include resampled positions, and then evaluate the terms of the KL divergence one by one by position. I split this by unique particle IDs to speed up the position checks. In the case where multiple positions in the "second" run (we're computing the KL divergence from this run) match (leading to possible ambiguity), I choose the term that is closest to zero, which correctly gives zero overall when we compare the same "resampled" run against itself.
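As a rough sketch of how the matching could look (the function and variable names, and the handling of unmatched positions, are illustrative rather than the actual implementation), assuming each run exposes its sample positions, normalized posterior log-weights, and particle IDs:

```python
import numpy as np

def kld_terms_with_resampling(samps1, logwt1, ids1, samps2, logwt2, ids2):
    """Per-position terms of D_KL(run1 || run2) when either run may contain
    resampled (duplicated) positions. All names here are illustrative."""
    terms = np.zeros(len(samps1))
    for i, (x, lw1, pid) in enumerate(zip(samps1, logwt1, ids1)):
        # Restrict the search to samples in run 2 sharing this particle ID;
        # this is what speeds up the position checks.
        candidates = np.where(ids2 == pid)[0]
        matches = [j for j in candidates if np.array_equal(samps2[j], x)]
        if not matches:
            continue  # no matching position; simply skipped in this sketch
        # Each candidate match gives one KL term, p1 * (ln p1 - ln p2).
        cand_terms = [np.exp(lw1) * (lw1 - logwt2[j]) for j in matches]
        # If several resampled copies match, keep the term closest to zero,
        # so comparing a resampled run against itself sums to exactly zero.
        terms[i] = min(cand_terms, key=abs)
    return terms
```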
After some testing using a 3-D correlated normal, I ended up setting the default stopping values to 2% fractional variation in the KL divergence and 0.1 standard deviation in lnz (~10%), which gave similar sample sizes to a standard nested sampling run with K=1000 live points under posterior-optimized and evidence-optimized estimation, respectively. I might change this later depending on more testing.
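For reference, a minimal sketch of what those defaults amount to as a stopping check; the function and argument names are mine, not dynesty's API, and how the two criteria are actually combined or weighted isn't specified here, so this simply requires both:

```python
import numpy as np

def should_stop(kld_estimates, lnz_estimates,
                kld_frac_tol=0.02, lnz_std_tol=0.1):
    """Stop once the KL divergence estimates vary by <= 2% (fractionally)
    and the ln(evidence) estimates scatter by <= 0.1 (~10% in Z).
    Both inputs are arrays of estimates from repeated realizations."""
    kld_frac_var = np.std(kld_estimates) / np.mean(kld_estimates)
    lnz_std = np.std(lnz_estimates)
    return (kld_frac_var <= kld_frac_tol) and (lnz_std <= lnz_std_tol)
```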