Comments (9)
hmm .... I guess why not, that should be possible.
We would have to discuss how we handle restarts, i.e. whether the user can change this setting when the MCMC is restarted, or whether we save it so that, once it is set for the first time, it cannot be changed any more.
I'm putting this up as a possible enhancement, not sure when we'll find the time to implement this.
In the meantime, I guess you can solve your problem with a small wrapper as in
runMyMCMC <- function(alpha, beta, ...) {
  likelihood <- function(x) {
    # ... compute the log-likelihood from x, using the fixed alpha and beta
  }
  runMCMC(likelihood, ...)
}
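To make the closure idea above concrete, here is a minimal, self-contained sketch with a toy Gaussian likelihood (makeLikelihood and its alpha/beta interpretation are hypothetical names, not BayesianTools API):

```r
# Factory that fixes alpha and beta inside the likelihood's environment,
# so the returned function only takes the sampled parameters x
makeLikelihood <- function(alpha, beta) {
  function(x) {
    # toy example: alpha plays the role of a mean, beta of a sd
    sum(dnorm(x, mean = alpha, sd = beta, log = TRUE))
  }
}

ll <- makeLikelihood(alpha = 0, beta = 1)
ll(c(0, 0))  # evaluates the likelihood with the fixed alpha and beta
```

The returned function could then be handed to createBayesianSetup / runMCMC like any other likelihood, since alpha and beta travel with it in its enclosing environment.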
from bayesiantools.
Hi Dominik,
glad you find BT helpful!
About the ... -> what would you like to pass to the likelihood?
We discussed whether it would be sensible to let the user pass the data, but then the MCMC needs to store the data somewhere, plus this would break compatibility with optim and sensitivity, so we decided against it.
Thanks for your quick response!
In my case, the likelihood function calls an external model and basically follows this structure (in magrittr jargon):
LL <- function(x) {
  run_external_model(param = x) %>%
    evaluate_model(.) %>%
    do_post_processing(., alpha = 1.234, beta = 5.678) %>%
    calculate_loglikelihood(.)
}
Here, the dummy parameters alpha and beta are individually set according to the calibration setup and are fixed during the MCMC sampling. So in my case, the following would be ideal:
LL <- function(x, ...) {
  run_external_model(param = x) %>%
    evaluate_model(.) %>%
    do_post_processing(., list(...)) %>%
    calculate_loglikelihood(.)
}
With this approach, I could easily create multiple MCMC setups for further analysis. stats::optim offers a comparable ... argument, "which passes further arguments to fn and gr".
I see. And then you would want to provide the ... to the runMCMC function I guess?
exactly!
Cool! Thanks! Let me know if I can support you with this (e.g., testing)!
OK, guys, I think we should make a decision about this ... I don't want to drag this into the next release.
I'm kind of leaning against implementing this, as it doesn't seem to be a huge problem for the user to simply change the likelihood, or, alternatively (a bit dangerous though), let the likelihood access a global variable that is then changed.
Providing additional arguments in the runMCMC command would mean passing them from runMCMC to the samplers and on to the likelihood calls ... it's not a huge effort, but it's more code that we have to maintain.
Additionally, if we do stuff like the WAIC calculations or other things, the parameters need to be available, so they would need to be stored somewhere.
Once the stuff is stored, the issue arises that a user might re-use a setup with stored additional parameters in a different context and modify the parameters again, in which case we would have to throw a warning or whatever.
It all seems to me as if we create a pretty fragile structure for a small improvement in convenience.
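For completeness, the (admittedly fragile) global-variable workaround mentioned above might look like the following toy sketch (fixedPars and the Gaussian likelihood are hypothetical names for illustration):

```r
# Fixed parameters stored outside the likelihood; the likelihood reads
# them from the enclosing (here: global) environment at call time
fixedPars <- list(alpha = 0, beta = 1)

LL <- function(x) {
  sum(dnorm(x, mean = fixedPars$alpha, sd = fixedPars$beta, log = TRUE))
}

LL(0)                 # uses alpha = 0, beta = 1
fixedPars$alpha <- 5  # changing the global silently changes LL's behaviour
LL(5)                 # now the same value as LL(0) gave before
```

The danger is exactly this last line: any code anywhere can mutate fixedPars, so results depend on hidden state rather than on the function's arguments, which is why the closure approach from the first comment is usually the safer choice.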
That's fine for me, although I think being in line with other parameter estimators and samplers (stats::optim, DEoptim::DEoptim, mco::nsga2, adaptMCMC::MCMC) helps users feel comfortable at first sight.
However, I can't assess the additional effort required to support this, as I am not that deep into the code base.
OK, this is closed