> benchmark(
+ for (t in 1:100){
+   x=sort(rnorm(N));fx=dnorm(x)
+   imp1=dnorm(x,sd=.5)/fx})
  replications elapsed user.self sys.self
1          100   7.948      7.94    0.012

> benchmark(
+ for (t in 1:100){
+   x=sort(rnorm(N));hatf=density(x)
+   hatfx=approx(hatf$x,hatf$y,x)$y
+   imp2=dnorm(x,sd=.5)/hatfx})
  replications elapsed user.self sys.self
1          100  19.272    18.473     0.94

> benchmark(
+ for (t in 1:100){
+   x=sort(rnorm(N));hatf=density(x)
+   hatfx=approx(hatf$x,hatf$y,x)$y
+   bw=hatf$bw
+   for (i in 1:N) Kx[i]=1-sum((dnorm(x[i],
+     mean=x[-i],sd=bw)-hatfx[i])^2)/NmoNmt/hatfx[i]^2
+   imp3=dnorm(x,sd=.5)*Kx/hatfx})
  replications  elapsed user.self sys.self
1          100 11378.38  7610.037   17.239
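The snippets above are not self-contained: the sample size N, the container Kx, and the constant NmoNmt are defined earlier in the session, and benchmark() presumably comes from the rbenchmark package. A minimal runnable sketch of the third, most expensive experiment, assuming N=10^3 and guessing NmoNmt=(N-1)*(N-2) (a hypothetical reading of the variable name; the actual value used may differ):

library(rbenchmark)       # provides benchmark()

N <- 10^3                 # sample size (assumed; not given in the excerpt)
NmoNmt <- (N-1)*(N-2)     # hypothetical guess at the author's constant
Kx <- numeric(N)          # container for the leave-one-out correction factors

benchmark(
  for (t in 1:100){
    x <- sort(rnorm(N)); hatf <- density(x)
    hatfx <- approx(hatf$x, hatf$y, x)$y   # kernel estimate at the sample points
    bw <- hatf$bw
    for (i in 1:N)                          # O(N) work for each of the N points
      Kx[i] <- 1 - sum((dnorm(x[i], mean=x[-i], sd=bw)
                        - hatfx[i])^2)/NmoNmt/hatfx[i]^2
    imp3 <- dnorm(x, sd=.5)*Kx/hatfx        # corrected importance weights
  },
  replications=1)   # one replication suffices: the inner loop repeats 100 times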
The explosion in the third timing follows from the O(N) cost of evaluating the kernel estimate at each of the N observations, hence an O(N²) total for the correction step (and I did not even use the leave-one-out option…). The R computation of the variance is certainly not optimal, far from it (a vectorised alternative is sketched below), but those enormous values give an indication of the added cost of the step, which does not even seem productive in terms of variance reduction… [Warning: the comparison is only done over one model and one target integrand, and thus does not pretend to generality!]
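For what it is worth, the inner loop over i can be vectorised by computing all pairwise kernel evaluations at once with outer(). This leaves the O(N²) operation count (and adds O(N²) memory for the pairwise matrix) but removes the interpreted double loop; a sketch, under the same assumed setup as above:

# N x N matrix of kernel evaluations: Kmat[i,j] = dnorm(x[i]-x[j], sd=bw)
Kmat <- dnorm(outer(x, x, "-"), sd=bw)
# Row-wise sums of squared deviations from hatfx[i]; R recycles hatfx down
# the columns, so Kmat - hatfx subtracts hatfx[i] from row i as intended.
# The j=i term, (dnorm(0,sd=bw)-hatfx[i])^2, is dropped to match the x[-i] loop.
ss <- rowSums((Kmat - hatfx)^2) - (dnorm(0, sd=bw) - hatfx)^2
Kx <- 1 - ss/NmoNmt/hatfx^2
imp3 <- dnorm(x, sd=.5)*Kx/hatfx

For large N the N×N matrix itself becomes the bottleneck and the computation would have to be blocked, but at moderate sample sizes this loop-free version should already be substantially faster than the double loop.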
Filed under: Books, R, Statistics. Tagged: Bernoulli, importance sampling, leave-one-out calibration, non-parametric kernel estimation, R, unbiased estimation, variance correction.