oliviergimenez / banana-book

Repo for a book on Bayesian capture-recapture w/ HMMs

Home Page: https://oliviergimenez.github.io/banana-book/

TeX 66.84% CSS 1.53% R 31.63%
capture-recapture bayesian-inference nimble hidden-markov-models rstats

banana-book's Introduction

Welcome 👋 :bowtie: My name is Olivier Gimenez and I am a 🇫🇷 researcher working at the interface of animal ecology 🐺 🐬 🐘, statistical modeling 📉 and social sciences 📚.

On my GitHub, you will find repos with research and teaching material you can use for your own purposes. Also check out these Gists (short bits of code and data).

For more info, check out oliviergimenez.github.io or reach out on Twitter.

banana-book's People

Contributors

oliviergimenez

banana-book's Issues

Compilation banana-book

The build tool to compile the book doesn't work properly for me. This is what I get

(Two screenshots of the build error, taken 2021-09-23, were attached.)

Also, running the index.Rmd file on its own worked, without including the other chapters.

MCMC performance

Explain how MCMC performance should be measured w/ effective sample size / computation time instead of just computation time.
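A minimal sketch of that recommendation (the chain below is simulated and the runtime is hypothetical, not output from the book's models): report effective samples per unit of time, so a slower sampler that mixes better can still come out ahead.

```r
# sketch: measure MCMC performance as effective sample size per second,
# not raw runtime alone
library(coda)

set.seed(2022)
# a correlated chain standing in for real MCMC output (AR(1), 5000 draws)
draws <- as.numeric(arima.sim(model = list(ar = 0.8), n = 5000))
samples <- mcmc(draws)

runtime <- 2.5                 # seconds, hypothetical
ess <- effectiveSize(samples)  # effective sample size, well below 5000 here
efficiency <- ess / runtime    # effective samples per second
```

With strong autocorrelation, the effective sample size is much smaller than the number of iterations, which is exactly why runtime alone is misleading.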

Typo on likelihood

Fitsum found that:

You used different notation for the likelihood (Pr(data|theta) and L(data|theta)). See Pr(data).

See also comment by Sarah on P and Pr in A/B example.

few comments on intronimble

First, I wanted to say that this is very nicely written, and I will recommend this chapter to anyone who starts using nimble. Below are some comments that I hope will be useful:

2.3

  • It might be worth pointing readers to the nimble-users mailing list, because the NIMBLE team is very helpful and usually responds very quickly! 😊
  • “is distributed as (that’s the ~) as”; remove one “as”.
  • “assigns a uniform between 0 and 1 as a prior distribution to the survival probability” = “assigns a uniform prior distribution between 0 and 1 to the survival probability”.
  • NIMBLE also provides a summary of the MCMC using summary(myNimbleOutput).
  • “This flexibility often comes with faster convergence.” And faster runtime (Turek et al. 2020).
  • It might be interesting to touch on dimensions. NIMBLE requires the dimensions of an object to be given if it is not a scalar; they can be provided either in the model code using square brackets or via the “dimensions” argument. After reading everything, I see that this comes up in Section 2.7. Maybe refer to Section 2.7 from Section 2.3?

2.5

  • “Give example? Provide negative initial value for theta, or released in data < survived.” I think it is a good idea: calculate() will return -Inf. Then survival$logProb_theta will also return -Inf, which indicates that the issue comes from this node.
    It is also possible to reproduce the issue in R to better understand the problem: log(dunif(survival$theta, 0, 1)). This can be useful when dealing with more complex distributions.

Csurvival$calculate() and survival$calculate() should return the same value. This is a good check when writing custom distributions, as sometimes a distribution works in R but not after being compiled. If there is an issue with the custom distribution after compiling, R may even crash at this step.

2.7.3

  • The downside of skipping the calculate() call is that you might not catch issues that could save you time in the long run.

2.7.4

  • For a model that takes a long time to run, it can be useful to run the MCMC in “bites”. This allows checking the results before the model is done running. For example, we can save 80 bites, each containing 250 iterations.
bite.size <- 250 # number of iterations in each bite to be exported and cleared
n.bites <- 80    # we run 80 bites of 250 iterations each
for(nb in 1:n.bites){
   print(nb)
   if(nb == 1){ # first bite
      Cmcmc$run(bite.size)
   } else {     # second bite or more
      Cmcmc$run(bite.size, reset = FALSE)
   }
   this.sample <- as.matrix(Cmcmc$mvSamples) # save the 250 iterations
   save(this.sample, file = paste0("bite_", nb, ".RData"))
}

Now imagine the MCMC is still running but the first 25 bites are finished. We can check the results of these 25 bites (6,250 iterations):

library(coda)
n.done <- 25
MCMC <- list()
for(nb in 1:n.done){
  load(paste0("bite_", nb, ".RData"))
  MCMC[[nb]] <- this.sample
}
MCMCsamples <- as.mcmc(do.call(rbind, MCMC))

2.7.9 (suggesting a new section)

  • Sometimes it is useful to restart a model that was run some time ago (for example, to add more iterations). Following this link, you can save and restart the model exactly where you left off.

Feedback by Rémi Fraysse on NIMBLE chapter


title: "Nimble Book"
output: html_document

knitr::opts_chunk$set(echo = TRUE, 
                      eval = FALSE)

2.3

- histogram

mcmc.output %>%
  as_tibble() %>%
  ggplot() +
  geom_histogram(aes(x = chain1[,"theta"]), color = "white") +
  labs(x = "survival probability")

I don't really know the subtleties of tibbles, but I was surprised that you can put a list in one. I quickly tried it: as soon as you monitor more than one variable it still works, but the tibble behaves strangely:

mcmc.output<-list(chain1 = matrix(ncol = 2, data = rnorm(8000)),
                  chain2 = matrix(ncol = 2, data = rnorm(8000)))

colnames(mcmc.output$chain1) <- c("theta","theta2")
colnames(mcmc.output$chain2) <- c("theta","theta2")

tst <- as_tibble(mcmc.output)

dim(tst)          # [1] 4000    2
dim(tst$chain1)   # [1] 4000    2
dim(tst$chain2)   # [1] 4000    2

For the dimension of tst, I was not expecting 4000 × 2, and View(tst) also behaves oddly...

In short, simply writing:

mcmc.output$chain1 %>%
  as_tibble() %>%
  ggplot() +
  geom_histogram(aes(x = theta), color = "white") +
  labs(x = "survival probability")

seems cleaner to me, and if the book is aimed at people who do not know the subtleties of R (and of tibbles, like me), it could be confusing...

- "ggmcmc and basicMCMCplots. Shall I demonstrate these other options?"

Not really; MCMCvis already does a good job, and giving the package names seems sufficient. At most, add a link to the vignettes for the curious.

2.5

- "Give example? Provide negative initial value for theta, or released in data < survived."

Not sure this is very useful. Maybe just a sentence to say that when the model gives a positive log-likelihood or NA (for a particular node or for the whole model), there is indeed a problem.

- "Note that models and nimbleFunctions need to be compiled before they can be used to specify a project."

Maybe add a sentence to say that the "project" is the survival model, and that this project name was not chosen at random: we have no choice.

- "From here, you can obtain numerical summaries with samplesSummary()"

Or with MCMCvis (MCMCsummary), as in Section 2.3.

2.6

- "Say something on how default samplers are chosen by NIMBLE?"

I admit I do not know NIMBLE's default options well, since I rewrote all my samplers. If there is more to say than just "by default, NIMBLE uses random walk samplers", it could be interesting.

- "several constraints need to be respected [...]"

Another constraint: the run function of a sampler cannot take arguments; everything it uses must be contained in the model or passed via the control list.

2.7

2.7.2 Indexing

Wouldn't this point be better placed where the model code is written?
Even if the example is very simple and does not use indexing, it is quite an important detail.

Debugging a custom sampler

To debug homemade code, you can run the chain as R code (instead of compiled code); in case of an error, you get an R error message instead of an RStudio crash (or an unreadable NIMBLE message). The drawback is that NIMBLE may interpret code slightly differently from R, so the uncompiled chain can do its job perfectly while the compiled one fails...

survivalMCMC$run(500)
samples <- as.matrix(survivalMCMC$mvSamples)

And to debug one particular sampler, you can enter that sampler's run function in debug mode (samplers are numbered; the numbers are given by the printSamplers method of the survivalMCMC chain):

debug(survivalMCMC$samplerFunctions[[1]]$run) # debug sampler 1
survivalMCMC$run(10)

Maud Quéroué's feedback on NIMBLE chapter

The ability in NIMBLE to access the nodes of your model and to evaluate the model likelihood can help you in identifying bugs in your code. For example, if we provide a negative initial value for theta, survival$calculate() returns NA:

survival <- nimbleModel(code = model,
                        data = my.data,
                        inits = list(theta = -0.5))
survival$calculate()

As another example, if we convey in the data the information that more animals survived than were released, we'll get an infinite value for the log-likelihood:

my.data <- list(survived = 61, released = 57)
initial.values <- list(theta = 0.5)

survival <- nimbleModel(code = model,
                        data = my.data,
                        inits = initial.values)
survival$calculate()

As a check that the model is correctly initialized and that your code is without bugs, the call to model$calculate() should return a number and not NA or -Inf.

my.data <- list(survived = 19, released = 57)
initial.values <- list(theta = 0.5)
survival <- nimbleModel(code = model,
                        data = my.data,
                        inits = initial.values)

New chapter on extensions of the CJS/AS models

Consider a new chapter, say Chapter 6, in the Transitions part, e.g. "Extensions" or something like that, covering hidden semi-Markov models, continuous-time HMMs, and the memory model expressed as an HMM.

feedback on chapters 2 & 3 by Matt Silk

Figured I would share a few of the things I noticed while I was working
through the nimble book to help with refining it! This will be far from
comprehensive, but hopefully it is helpful...

In the contents bar on the left: "Fundations" should be "Foundations" right?

The n= argument in your rmybinom function currently doesn't do anything.
It didn't impact your use of it, but it was noticeable when I played around
with the function a bit.

In the my_metropolis function, the annotation is the same for the if and
the else at the end of the function, even though they are doing different
things?

In Section 2.5 I would say "speed up convergence" rather than "fasten
convergence"

In Section 3.6.4 there is an "unknows" that should be "unknowns"

Section 3.10 starts with some french! And 3.10.1 also ends with some
french. Presumably this managed to dodge being translated ;-)

Just some very small things as you've done a great job but figured they
were worth sharing!

Introduce distributions

The first time a new distribution is used, make sure I introduce it. OK for beta, gamma, Dirichlet. What about uniform, binomial, Bernoulli, etc.?

comments on NIMBLE chapter by Fitsum

  1. Thinning: In practice, people still use thinning as one of the strategies to reduce autocorrelation. It may be worth mentioning the pros and cons of thinning. The paper you cited is very useful.
  2. Histograms for theta and lifespan: It would be good if you could mention the package(s) needed for producing these figures.
  3. Regarding "Shall I demonstrate these other options?", I don't think you need to show the other options. Instead, it would be nice if you could add the autocorrelation plot.
  4. Section 2.4.3: Can you provide the binomial distribution formula here? That would help to understand better the "dmybinom" function.
  5. Section 2.6. Can you provide references for further reading about different types of samplers?
  6. Section 2.7.8. It would be great if you could provide some examples here.

Initial values

Would be great to come up with a function to generate initial values, including for latent states. And/or reiterate the advice that if you don't need the latent states, then a marginalized likelihood is fine (nimbleEcology) and you don't have the latent states anymore, so no initial values to pick for these. See https://r-nimble.org/html_manual/cha-mcmc.html#sec:initMCMC

Transition/Observation matrices can be split in several steps

In chapter 5, show how to split the observation matrix into capture, then assignment conditional on capture? Matrix product and all. Also, when first introducing the AS model, the transition matrix can be split into survival, then movement conditional on survival. Say it's a matrix product and remind how it is done (?). Say that computing the product is inefficient in general, because it is evaluated at each MCMC iteration, but that it can be useful when modelling.
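The split can be sketched in R (a minimal illustration, not the book's code; parameter names and values are made up). For the AS model with states "alive in A", "alive in B", "dead", the transition matrix is the product of a survival step and a movement step conditional on survival:

```r
# hypothetical AS-model parameters
phiA <- 0.8; phiB <- 0.7   # survival in sites A and B
psiAB <- 0.3; psiBA <- 0.2 # movement probabilities, conditional on survival

# survival step: rows/columns are (alive in A, alive in B, dead)
S <- matrix(c(phiA, 0,    1 - phiA,
              0,    phiB, 1 - phiB,
              0,    0,    1),
            nrow = 3, byrow = TRUE)

# movement step, conditional on survival (dead stays dead)
M <- matrix(c(1 - psiAB, psiAB,     0,
              psiBA,     1 - psiBA, 0,
              0,         0,         1),
            nrow = 3, byrow = TRUE)

# full transition matrix as a matrix product
Gamma <- S %*% M
```

Each row of Gamma sums to 1, and e.g. Gamma[1, 1] = phiA * (1 - psiAB): survive in A, then stay in A.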

Get formal authorization for data sharing

It's already done, I think, but double-check with the data owners that they're OK with me sharing their work through the book. Also mention there will be a data package. They're acknowledged the first time the data are used, and in the Preface as well.

In passing, ask Gilbert whether he has an individual covariate to replace wing length (which is fake; I simulated these data).

tweak chapter names

Rename the covariate chapter into fixed/random effects.
Split model selection/validation into two chapters?

Give easy access to full code (and data)

Put all code in a single Rmd with plain R code. Add a link in the Preface and elsewhere (in Case studies at least).

Should it have its own GitHub repo?

I need to finish up data package.

How/why do multievent models work?

In workshops, attendees always struggle to understand how/why multievent models work (breeding/disease state examples). How to explain this clearly? With simulations? How to give the intuition of how/why it works?

Cleaning

Remove leftover.R and speed.Rmd once I'm not using these files anymore.

Feedback on NIMBLE chapter by Mahdieh Tourani

From Mahdieh:

Awesome tutorial on Nimble! It's really easy to follow and it covers many of the common beginner pitfalls. A couple of minor comments: I like the posterior predictive sampling in nimble and simulating data from a compiled model for calculating a Bayesian p-value (e.g., https://r-nimble.org/posterior-predictive-sampling-and-other-post-mcmc-use-of-samples-in-nimble).

Olivier: I'm planning to talk about Bayesian p-values in a specific chapter, but I could forward reference simulations in the NIMBLE chapter, thanks!

feedback from Matthieu Paquet on NIMBLE chapter

Well, I couldn't hold back for very long: I read the chapter on NIMBLE and found it excellent!

Honestly, I don't have much else to say. Here are a few points I noted, just in case.

In 2.4.3 User-defined distributions:
"You need to write functions for density (d) and simulation (r) for your distribution."
This gives the impression that you always have to write both. But in the example below, only the density function is used in the NIMBLE code, right? Maybe also give an example using the "rmybinom" function in NIMBLE? Or maybe I simply misunderstood!

A detail I had already noticed in the NIMBLE documentation but never understood (it is surely logical, but I never dared to ask...):

When we compile a model, we create a new object. Here we create "Csurvival", which is the compiled version of "survival" (Csurvival <- compileNimble(survival)).

But afterwards we only refer to "survival":
survivalConf <- configureMCMC(survival)
survivalMCMC <- buildMCMC(survivalConf)
CsurvivalMCMC <- compileNimble(survivalMCMC, project = survival)

Does NIMBLE automatically make the link with "Csurvival"? Sorry if my question is not clear, and I don't think this needs to be explained in the book!

Otherwise, just a typo, you forgot a "v":
"We use two "v"alues 2022 and 666 to set the seed in workflow()"

Regarding my "tips and tricks" (thanks, I learned how to reduce compilation times thanks to you!), it is surely too specific for the book, but here are two more things I like:

  1. Being able to use the model both to simulate data and then to fit the model to those data (all without having to rewrite the model!)
    Example:
    nodesToSim <- model$getDependencies(c("parameter_name1", "parameter_name2"),
                                        self = FALSE, downstream = TRUE)
    # compile the model
    Cmodel <- compileNimble(model)
    # simulate
    Cmodel$simulate(nodesToSim)

  2. Otherwise, I sometimes use if/else in the model code to define alternative models, for example with/without certain covariates, to avoid having to rewrite almost the entire model code twice. It also makes it easier to see what changes between the two versions of the model, and it avoids errors.

Example:
if(densitydependence){
  log(fledg.rate.p[t]) <- mu.fledg.rate.p + dd.fledg.rate.p * N.rec.v[t]
} else {
  log(fledg.rate.p[t]) <- mu.fledg.rate.p
}

and afterwards we can specify densitydependence = TRUE/FALSE.

(there is an example of both cases in the code I shared with you on GitHub, if needed)

There you go, thanks again for sharing, and see you soon for more,

Add trick on using known states as data

When latent states are known, they can be passed as data; this avoids assigning a sampler to them and running MCMC on them. Perry mentioned this during a workshop. I think it is mentioned in one of Marc's books as a trick given by Andy.
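A minimal sketch of the trick (variable names y, z.known, nind, nocc are placeholders, not the book's code), assuming an alive/dead state matrix where some states are known with certainty: pass the known values in data and leave NA for the states that should be sampled.

```r
# hypothetical capture histories: 1 = detected, 0 = not detected
y <- matrix(c(1, 1, 0, 1,
              1, 0, 0, 0), nrow = 2, byrow = TRUE)
nind <- nrow(y); nocc <- ncol(y)

# latent alive/dead states: an individual is known to be alive (z = 1)
# between its first and last detections; elsewhere the state is unknown (NA)
z.known <- matrix(NA, nind, nocc)
for(i in 1:nind){
  detected <- which(y[i, ] == 1)
  z.known[i, min(detected):max(detected)] <- 1
}

# pass z.known in the data list; samplers are then assigned
# only to the z's that are NA, e.g.:
# my.data <- list(y = y, z = z.known)
```

Individual 1 (detected on occasions 1, 2, and 4) is known alive throughout, so no sampler is needed for its states at all.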

covariates chapter

  • Include explanations/examples on how to deal with one and several categorical covariates, and combination of categorical and continuous covariates. See chapter 6 of Marc's book « Introduction to WinBUGS for Ecologists », and https://mbjoseph.github.io/posts/2018-12-27-the-five-elements-ninjas-approach-to-teaching-design-matrices/. See also #6

  • RJMCMC, Lasso, wAIC covariate selection, here or in a specific section on model selection/validation

  • Splines (by hand, or w/ jagam)

  • Path analysis, SEM

  • Missing data (e.g. in time-varying individual covariates) ; multistate models, imputation (Simon's paper)

Feedbacks on Bayes/MCMC chapter by Patrícia Rodrigues

I read the chapter and overall I think that:

  • there is a good balance of text, images, formulas, and code. I found this really nice.
  • the text flows very nicely, and you make the tone quite relatable (e.g. "I don't know about you, but I need to think twice to not mess the letters around."), logical and accessible. You keep the reader's attention throughout.
  • the purple boxes with key point/summary work well and are helpful.
  • likewise, legends of figures are also quite informative. Especially Figures 1.14 and 1.15, one looks at the plots and with your help knows exactly what to look for and how to interpret it.
  • one small comment on the Metropolis algorithm section and the use of the word “locations” on point 3: this got me a bit lost. Before moving to point 4, I scrolled back to see if I had missed something and if you had mentioned something about recapture locations. I now read the whole thing and it makes sense (my bad here), and I see locations related to values (candidate and current) in the chain (you even use “where” and “move”). Perhaps introducing the “location” word as well from point 1. Something like, “this is a starting value, or a starting location”.
  • there are small typos but I guess you are not after these at this stage, but let me know if I should send them along.
  • content wise I cannot really comment :D  - except that maybe ask if you have planned for a small introduction or brief overview on the capture-recapture methodology, types of data, assumptions and so on? Something for the introduction section under foundations?

Feedback by Cyrus Kavwele

From Cyrus Kavwele

First of all, huge congratulations on the new book (nimble) you are working on. I am an ecologist currently learning Bayesian stats; however, I find it difficult, especially when I have to write the model. I would like to suggest you include an example with several covariates, where some are factors and others numerical, and provide a clear interpretation of the results. Once more, thanks a lot. ^Cyrus.

Olivier: Hi Cyrus, thank you so much for your feedback! You're making a very good point. I'm planning to talk about covariates in a dedicated chapter, and I'll make sure to include discrete and continuous covariates. Thanks again! See also #11

Covariates stuff

There a few things I could improve when I first introduce covariates:

  • I could show how to back-transform covariate values to the original scale, which is useful to have the original units in figures.

  • I should mention the interaction between sex and wing length, w/ logit(phi[i]) <- beta[1] + beta[2] * sex[i] + beta[3] * winglength[i] + beta[4] * sex[i] * winglength[i], and explain it.
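A minimal sketch of what that interaction looks like as a design matrix (the sex and winglength vectors below are made-up illustration data, not the book's dataset): R's model.matrix() builds the four columns matching beta[1] through beta[4].

```r
# made-up covariates for 4 individuals
sex <- c(0, 0, 1, 1)                 # 0/1 coding for the two sexes
winglength <- c(1.2, 0.8, 1.1, 0.9)  # standardized wing length

# design matrix with intercept, sex, winglength, and their interaction
X <- model.matrix(~ sex * winglength)
colnames(X)  # "(Intercept)" "sex" "winglength" "sex:winglength"
```

The last column is just the element-wise product sex * winglength, which is why beta[4] measures how the wing-length effect differs between the sexes.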

More about WAIC

I find my section on WAIC weak, and I definitely need to do better. Among other things:

Purple boxes

The purple boxes are helpful, judging from some feedback I received. They are OK in the PDF but no longer appear in the HTML version; I need to fix that. Also, I should add boxes to chapters other than 1, 2, and 3.

Feedback on Bayes/MCMC chapter by S. Bauduin

  • What are "marginal probabilities" (Pr(A|B) using marginal probabilities Pr(A) and Pr(B) and Pr(B|A))? I imagine the marginal probabilities are Pr(A) and Pr(B) (and not Pr(B|A)), but it is not obvious.
  • Sometimes you use P and sometimes Pr for probabilities. I don't know whether it matters, but it would be better to always use the same one.
  • The example with data and hypothesis doesn't help me, lol; it confuses me even more. I found the concrete example with the die clearer.
  • Why specify "male" statisticians? It's not as if there were an opposition with female statisticians.
  • The equation with the colors is great.
  • I would not talk about sensitivity analyses here.
  • I still don't understand what Pr(Data) is, but oh well... :)
  • Footnote 9 would be much appreciated with a concrete example, because I'm lost there :)
  • I have trouble understanding what a "joint posterior distribution" is when there are several parameters.
  • I don't understand what it means for a Markov chain to be "irreducible and aperiodic".
  • For convergence in the weather example, I don't understand why there is convergence to those particular numbers. What other data or rules make it converge to those particular numbers?

I admit I was a bit lost from 1.4 onwards (I followed the general reasoning, but without understanding the details and the code), and I completely dropped off at 1.6.2.

Concrete examples (e.g., the die, the weather) help a lot.

Use nimbleEcology and/or my own dHMM likelihood?

There are several tricks that we need to know to use nimbleEcology::d(D)HMM(o)(), including i) it can't accommodate individuals with first == K, and ii) it doesn't condition on first capture. I wonder whether I should use my own likelihood w/ the forward algo. See extracts of discussions from the NIMBLE users mailing list below.

What nimble is trying to tell you is that the dHMM distribution expects a dimension=1 node for the "value", which means a vector. Generally, "y[i,(first[i]+1):K]" will be a vector, except when first[i] = K for some value(s) of i, in which case that reduces to a single value, or a scalar, or a dimension=0 quantity. Unfortunately, at this point, nimble cannot distinguish between dimension=0 scalars, and vectors of length 1, which is causing the error you're seeing.

What you'll have to do to get around this, since capture histories of length 1 generally do not contribute anything to inference, is to remove the capture histories for which first[i] = K from the dataset: remove the individuals that were first sighted on the final sampling occasion K.

This generally won't affect inference, unless you were also doing inference on the initial state probabilities "init[1:9]", which it appears you are not, since those appear to be hard-coded as c(1,0,0,0,....) in your model, meaning you condition on the initial state of capture being state 1, and also individuals being observed in that first time period. So removing these individuals won't affect the inferences from your model, it will just require a little bit of data manipulation, and changing the value of N.

I think Daniel's response is exactly correct. dHMM doesn't condition on first capture, meaning that we expect you to have at least two observations of each individual. Since a one-observation individual has a simpler likelihood you could probably handle those separately; that's another workaround that could work for you. Or, you could include the first instance of each individual and provide y[i, first[i] : K] as Daniel suggested. If you want to condition on first capture I think you could do that pretty easily by hard-coding your detection matrix for the first event for each individual. Let me know if this is unclear and if you want any support going in one of these directions!

See solution for the CJS model by Jay Rotella here https://groups.google.com/g/nimble-users/c/_anpyNTx1_I/m/z2JMHAgmAAAJ
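A minimal sketch of the suggested workaround (y, first, K, N are the names used in the thread above; the capture histories here are made up): drop the individuals first captured on the last occasion before handing the data to dHMM.

```r
# made-up capture histories over K = 4 occasions
y <- matrix(c(1, 0, 1, 0,
              0, 1, 1, 0,
              0, 0, 0, 1),  # first capture on the last occasion
            nrow = 3, byrow = TRUE)
K <- ncol(y)
first <- apply(y, 1, function(h) min(which(h == 1)))

# remove individuals with first[i] == K: their length-1 histories carry
# no information here, and dHMM cannot handle a scalar "vector"
keep <- first < K
y <- y[keep, , drop = FALSE]
first <- first[keep]
N <- nrow(y)
```

As noted above, this leaves inference unchanged as long as the initial state probabilities are fixed rather than estimated.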


Stuff to do on Nimble chapter

  • Mention nimbleEcology, or illustrate w/ an example.
  • Explain how MCMC performance should be measured w/ effective sample size / computation time instead of just computation time.

Other books

Existing books that cover capture-recapture models with Bayesian statistics by Kéry and Schaub/Royle, McCrea & Morgan, King et al, McCarthy, etc. should be cited somewhere (see my book proposal for a full list).

Too many bird examples?

In the chapters on the CJS/AS models, I use exclusively bird examples (I re-use the same examples as in the original papers). Is that a problem? Make sure that I illustrate the methods with other groups in the case studies (mammals, insects, amphibians, reptiles; I should be able to use plants as well with the orchid dataset?).

feedback from Fitsum on Bayesian stats & MCMC chapter

  1. It would be great if you could add a section on commonly used probability distributions and highlight the importance of these distributions in Bayesian Statistics. I have seen many students using Bayesian mark-recapture/occupancy models with little or no understanding of probability distributions.
  2. Can you cover Gibbs sampling with an example in Section 1.5?
  3. You used different notation for the likelihood (Pr(data|theta) and L(data|theta)). See Pr(data).
  4. You may mention the conjugate prior when you present the classical binomial-beta example.

Random effect

Explain/illustrate how to choose/elicit a prior on the SD of a random effect.

Index

To be done, at the end, once the book is stabilized.
