**Specifying a new sampling distribution**

Suppose we wish to use a sampling distribution that is not included in the list of standard distributions, in which an observation x[i] contributes a likelihood term L[i]. We may use the "zeros trick": a Poisson(phi) observation of zero has likelihood exp(-phi), so if our observed data are a set of 0's, and phi[i] is set to -log(L[i]), we will obtain the correct likelihood contribution. (Note that phi[i] must always be > 0, as it is a Poisson mean, so we may need to add a suitable constant to ensure that it is positive.) This trick is illustrated in the example new-sampling, in which a normal likelihood is constructed using the zeros trick and compared to the standard analysis.

    C <- 10000    # this just has to be large enough to ensure all phi[i]'s > 0
    for (i in 1:N) {
        zeros[i] <- 0
        phi[i] <- -log(L[i]) + C
        zeros[i] ~ dpois(phi[i])
    }
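
As a concrete sketch of this construction (not the new-sampling example itself), suppose the data x[1:N] and the sample size N are supplied in the data file; the Normal(mu, tau) density can then be written out explicitly and fed through the zeros trick, with the vague priors below chosen purely for illustration:

    model {
        C <- 10000                        # keeps every phi[i] positive
        for (i in 1:N) {
            zeros[i] <- 0
            # explicit Normal(mu, tau) density evaluated at x[i]
            L[i] <- sqrt(tau / (2 * 3.141593)) * exp(-0.5 * tau * pow(x[i] - mu, 2))
            phi[i] <- -log(L[i]) + C
            zeros[i] ~ dpois(phi[i])
        }
        mu ~ dnorm(0, 1.0E-6)             # illustrative vague priors
        tau ~ dgamma(0.001, 0.001)
        sigma <- 1 / sqrt(tau)
    }

Monitoring mu and sigma should then reproduce the results of the equivalent model written directly as x[i] ~ dnorm(mu, tau).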

This trick allows arbitrary sampling distributions to be used, and is particularly suitable when, say, dealing with truncated distributions.
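
For instance, here is a minimal sketch of a normal likelihood truncated to a known interval (a, b), handled with the same zeros trick; a, b and the priors are assumptions made for the example, phi() denotes the standard normal cumulative distribution function, and the Poisson mean is named lambda[i] to avoid clashing with that function:

    model {
        C <- 10000
        # probability the untruncated normal assigns to (a, b)
        norm.const <- phi((b - mu) * sqrt(tau)) - phi((a - mu) * sqrt(tau))
        for (i in 1:N) {
            zeros[i] <- 0
            # normal density renormalised over the truncation interval
            L[i] <- sqrt(tau / (2 * 3.141593)) * exp(-0.5 * tau * pow(x[i] - mu, 2)) / norm.const
            lambda[i] <- -log(L[i]) + C
            zeros[i] ~ dpois(lambda[i])
        }
        mu ~ dnorm(0, 1.0E-6)             # illustrative vague priors
        tau ~ dgamma(0.001, 0.001)
    }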

A new observation x.pred can be predicted by specifying it as missing in the data-file and assigning it a uniform prior, e.g.

    x.pred ~ dflat()    # improper uniform prior on new x

However, our example shows that this method can be very inefficient and can give a very high MC error.
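
A minimal sketch of how such a prediction could be attached to the zeros-trick model sketched earlier (the .pred node names are illustrative): x.pred is given a flat prior and contributes one extra zero observation whose likelihood term depends on it.

    # added inside the model block of the zeros-trick sketch above
    x.pred ~ dflat()                      # improper uniform prior on the new x
    zero.pred <- 0
    L.pred <- sqrt(tau / (2 * 3.141593)) * exp(-0.5 * tau * pow(x.pred - mu, 2))
    phi.pred <- -log(L.pred) + C
    zero.pred ~ dpois(phi.pred)

Because x.pred is sampled against an improper flat prior rather than drawn directly from the predictive distribution, it can mix poorly, which is consistent with the high MC error noted above.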

An alternative to the zeros trick is the "ones trick", where the data are a set of 1's assumed to be the results of Bernoulli trials with probabilities p[i]. By making each p[i] proportional to L[i] (i.e. by specifying a scaling constant C large enough to ensure that all p[i]'s are < 1), the required likelihood term is provided.

    C <- 10000    # this just has to be large enough to ensure all p[i]'s < 1
    for (i in 1:N) {
        ones[i] <- 1
        p[i] <- L[i] / C
        ones[i] ~ dbern(p[i])
    }
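
For comparison, a minimal sketch of the same illustrative normal likelihood written with the ones trick, under the same assumptions about the data and priors as before:

    model {
        C <- 10000                        # keeps every p[i] below 1
        for (i in 1:N) {
            ones[i] <- 1
            # explicit Normal(mu, tau) density evaluated at x[i]
            L[i] <- sqrt(tau / (2 * 3.141593)) * exp(-0.5 * tau * pow(x[i] - mu, 2))
            p[i] <- L[i] / C
            ones[i] ~ dbern(p[i])
        }
        mu ~ dnorm(0, 1.0E-6)             # illustrative vague priors
        tau ~ dgamma(0.001, 0.001)
    }

Both tricks yield the same posterior: the ones trick rescales L[i] by a constant, so C only needs to exceed the largest possible L[i], whereas the zeros trick shifts -log(L[i]) by a constant.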