[This article was first published on Wiekvoet, and kindly contributed to R-bloggers].
This is the third post on LaplacesDemon tuning: same problem, different algorithms. For the introduction and other code, see this post. The current post covers the algorithms from Independence Metropolis to Reflective Slice Sampler.
Independence Metropolis
Independence Metropolis expects a preceding run of, e.g., LaplaceApproximation(), and to be fed its results. It should be noted that LaplaceApproximation() did not think it had converged, but I continued anyway.
LA <- LaplaceApproximation(Model,
parm=Initial.Values,
Data=MyData)
LA$Summary1[,1] # mode
LA$Covar # covariance
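A quick way to check that (a minimal sketch; Converged is a logical component of the object LaplaceApproximation() returns):
if (!LA$Converged) {
  message('LaplaceApproximation did not converge; continuing anyway')
}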
Fit <- LaplacesDemon(Model,
Data=MyData,
Covar=LA$Covar,
Algorithm='IM',
Specs=list(mu=LA$Summary1[,1]),
Initial.Values = Initial.Values
)
Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
Covar = LA$Covar, Algorithm = "IM", Specs = list(mu = LA$Summary1[,
1]))
Acceptance Rate: 0.4799
Algorithm: Independence Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
beta[1] beta[2]
1.3472722602 0.0009681572
Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
All Stationary
Dbar 43.136 43.136
pD 0.244 0.244
DIC 43.380 43.380
Initial Values:
[1] -10 0
Iterations: 10000
Log(Marginal Likelihood): -22.97697
Minutes of run-time: 0.05
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 10
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1000
Thinning: 10
Summary of All Samples
Mean SD MCSE ESS LB Median
beta[1] -11.2837801 1.16059157 0.035012520 1000 -13.56153 -11.3195620
beta[2] 0.2797468 0.02984741 0.000896778 1000 0.21760 0.2803396
Deviance 43.1362705 0.69853281 0.023083591 1000 42.45458 42.9472450
LP -30.3781418 0.34871038 0.011551050 1000 -31.29227 -30.2797709
UB
beta[1] -8.8565459
beta[2] 0.3395894
Deviance 44.9102538
LP -30.0383777
Summary of Stationary Samples
Mean SD MCSE ESS LB Median
beta[1] -11.2837801 1.16059157 0.035012520 1000 -13.56153 -11.3195620
beta[2] 0.2797468 0.02984741 0.000896778 1000 0.21760 0.2803396
Deviance 43.1362705 0.69853281 0.023083591 1000 42.45458 42.9472450
LP -30.3781418 0.34871038 0.011551050 1000 -31.29227 -30.2797709
UB
beta[1] -8.8565459
beta[2] 0.3395894
Deviance 44.9102538
LP -30.0383777
Interchain Adaptation
This works on cluster computers only, and hence is skipped.
Metropolis-Adjusted Langevin Algorithm
It has several specs; the defaults are used. Somehow it ends up recommending a thinning of 1000, so that is taken (a reconstruction of the full call follows the output below).
Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
Iterations = 60000, Status = 2000, Thinning = 1000, Algorithm = "MALA",
Specs = list(A = 1e+07, alpha.star = 0.574, gamma = 1, delta = 1,
epsilon = c(1e-06, 1e-07)))
Acceptance Rate: 0.6699
Algorithm: Metropolis-Adjusted Langevin Algorithm
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
beta[1] beta[2]
3.974723987 0.002746137
Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
All Stationary
Dbar 44.871 45.323
pD 3.009 3.719
DIC 47.880 49.043
Initial Values:
[1] -10 0
Iterations: 60000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.51
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 30
Recommended Burn-In of Un-thinned Samples: 30000
Recommended Thinning: 1000
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 60
Thinning: 1000
Summary of All Samples
Mean SD MCSE ESS LB Median
beta[1] -11.5549438 2.12680454 0.262494275 60 -15.4489211 -11.6879298
beta[2] 0.2862209 0.05387208 0.006838854 60 0.1775704 0.2869203
Deviance 44.8713764 2.45311900 0.298991760 60 42.5123135 44.0150320
LP -31.2503453 1.22704758 0.149839992 60 -34.4462764 -30.8362411
UB
beta[1] -7.2809707
beta[2] 0.3806284
Deviance 51.2537115
LP -30.0625250
Summary of Stationary Samples
Mean SD MCSE ESS LB Median
beta[1] -11.4682552 2.28578268 0.340770357 30 -14.6449856 -11.7783155
beta[2] 0.2821535 0.05793802 0.008335686 30 0.1672378 0.2887207
Deviance 45.3234960 2.72744685 0.429600835 30 42.4870991 44.3327542
LP -31.4757075 1.36111721 0.149839992 30 -34.8777641 -30.9992113
UB
beta[1] -6.9659465
beta[2] 0.3579202
Deviance 52.1258699
LP -30.0519847
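For reference, the run above corresponds to a call along these lines (reconstructed from the Call shown; the Specs values are the MALA defaults the post says were used):
Fit <- LaplacesDemon(Model,
Data=MyData,
Iterations=60000,
Status=2000,
Thinning=1000,
Algorithm='MALA',
Specs=list(A=1e7, alpha.star=0.574, gamma=1, delta=1,
epsilon=c(1e-6, 1e-7)),
Initial.Values = Initial.Values
)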
Metropolis-Coupled Markov Chain Monte Carlo
This algorithm is suitable for multi-modal distributions, thus not for this specific problem. It also requires at least two cores, which brings it just within my current computing capability. The two cores are used to keep two chains between which some swapping is done. From examination of my processor occupancy it did not tax either core to the max; as a consequence the algorithm does not run very fast. From what I read, the approach is to make a first run, take the covariance of that run as the base for the second run, and so on. The first run can be started with a small diagonal covariance matrix. There was still quite some change from run one to two, so I did a third run as the final one. I did not do a fourth run to apply the recommended thinning (a hypothetical sketch of such a run follows the output below).
Fit1 <- LaplacesDemon(Model,
Data=MyData,
Algorithm='MCMCMC',
Covar=diag(.001,nrow=2),
Specs=list(lambda=1,CPUs=2,Packages=NULL,
Dyn.libs=NULL),
Initial.Values = Initial.Values
)
Fit2 <- LaplacesDemon(Model,
Data=MyData,
Algorithm='MCMCMC',
Covar=var(Fit1$Posterior2),
Specs=list(lambda=1,CPUs=2,Packages=NULL,
Dyn.libs=NULL),
Initial.Values = apply(Fit1$Posterior2,2,median)
)
Fit3 <- LaplacesDemon(Model,
Data=MyData,
Algorithm='MCMCMC',
Covar=var(Fit2$Posterior2),
Specs=list(lambda=1,CPUs=2,Packages=NULL,
Dyn.libs=NULL),
Initial.Values = apply(Fit2$Posterior2,2,median)
)
Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = apply(Fit2$Posterior2,
2, median), Covar = var(Fit2$Posterior2), Algorithm = "MCMCMC",
Specs = list(lambda = 1, CPUs = 2, Packages = NULL, Dyn.libs = NULL))
Acceptance Rate: 0.5628
Algorithm: Metropolis-Coupled Markov Chain Monte Carlo
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
beta[1] beta[2]
3.857317095 0.002593814
Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
All Stationary
Dbar 44.441 44.441
pD 1.975 1.975
DIC 46.416 46.416
Initial Values:
beta[1] beta[2]
-11.5172818 0.2863507
Iterations: 10000
Log(Marginal Likelihood): -22.56206
Minutes of run-time: 7.65
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 20
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1000
Thinning: 10
Summary of All Samples
Mean SD MCSE ESS LB Median
beta[1] -11.7429958 2.08203881 0.083365310 767.1824 -16.6468867 -11.5906172
beta[2] 0.2913961 0.05394533 0.002179448 771.8377 0.2007124 0.2880483
Deviance 44.4411762 1.98744454 0.080077425 764.5512 42.4935429 43.8303185
LP -31.0373786 1.00385816 0.040582713 761.9922 -33.7860907 -30.7307093
UB
beta[1] -8.1852016
beta[2] 0.4208138
Deviance 49.7875520
LP -30.0545128
Summary of Stationary Samples
Mean SD MCSE ESS LB Median
beta[1] -11.7429958 2.08203881 0.083365310 767.1824 -16.6468867 -11.5906172
beta[2] 0.2913961 0.05394533 0.002179448 771.8377 0.2007124 0.2880483
Deviance 44.4411762 1.98744454 0.080077425 764.5512 42.4935429 43.8303185
LP -31.0373786 1.00385816 0.040582713 761.9922 -33.7860907 -30.7307093
UB
beta[1] -8.1852016
beta[2] 0.4208138
Deviance 49.7875520
LP -30.0545128
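The fourth run applying the recommended Thinning of 20 was not made; it would have looked roughly like this (hypothetical, not executed in the post):
Fit4 <- LaplacesDemon(Model,
Data=MyData,
Algorithm='MCMCMC',
Covar=var(Fit3$Posterior2),
Thinning=20,
Specs=list(lambda=1,CPUs=2,Packages=NULL,
Dyn.libs=NULL),
Initial.Values = apply(Fit3$Posterior2,2,median)
)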
Multiple-Try Metropolis
The first run took half an hour. During the run I could see the LP was way off from the target distribution. On a whim, I did a second run, with a covariance matrix from LaplaceApproximation() as added information. This run was better, but took another half hour (a reconstruction of the call follows the output below).
Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
Covar = LA$Covar, Algorithm = "MTM", Specs = list(K = 4,
CPUs = 2, Packages = NULL, Dyn.libs = NULL))
Acceptance Rate: 0.61155
Algorithm: Multiple-Try Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
beta[1] beta[2]
21.36403719 0.01435576
Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
All Stationary
Dbar 52.923 51.254
pD 372.901 50.142
DIC 425.824 101.396
Initial Values:
[1] -10 0
Iterations: 10000
Log(Marginal Likelihood): -48.98845
Minutes of run-time: 29.71
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 600
Recommended Burn-In of Un-thinned Samples: 6000
Recommended Thinning: 250
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1000
Thinning: 10
Summary of All Samples
Mean SD MCSE ESS LB Median
beta[1] -12.3004483 4.6238653 1.0357315 25.37966 -22.0066143 -11.4795448
beta[2] 0.3046245 0.1194878 0.0265865 26.73186 0.1186699 0.2820595
Deviance 52.9226543 27.3093850 1.3945959 616.53888 42.6646332 48.6409842
LP -35.2933429 13.6579453 0.6986543 615.73612 -51.0643998 -33.1211349
UB
beta[1] -5.1607724
beta[2] 0.5596043
Deviance 84.2719610
LP -30.1367784
Summary of Stationary Samples
Mean SD MCSE ESS LB Median
beta[1] -12.4106362 4.446175 1.5145833 11.35760 -23.4916627 -11.4983716
beta[2] 0.3073741 0.114014 0.0387409 13.37635 0.1412665 0.2842726
Deviance 51.2540481 10.014169 0.9717966 64.31246 42.5915491 48.1501360
LP -34.4595816 5.031687 0.6986543 61.84508 -50.7860991 -32.8935736
UB
beta[1] -5.8450580
beta[2] 0.5940932
Deviance 83.6594896
LP -30.1021917
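The second, improved run corresponds to a call along these lines (reconstructed from the Call shown above; presumably the first run was the same call without Covar=LA$Covar):
Fit <- LaplacesDemon(Model,
Data=MyData,
Covar=LA$Covar,
Algorithm='MTM',
Specs=list(K=4,CPUs=2,Packages=NULL,Dyn.libs=NULL),
Initial.Values = Initial.Values
)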
NUTS
It gave an error, hence no results.
pCN
To get close to the target acceptance rate, a beta of 0.015 was used. Even with these settings it did not seem able to find the target distribution (a hypothetical tuning loop follows the output below).
Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
Iterations = 80000, Status = 2000, Thinning = 30, Algorithm = "pCN",
Specs = list(beta = 0.015))
Acceptance Rate: 0.24766
Algorithm: Preconditioned Crank-Nicolson
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
beta[1] beta[2]
2.835066 2.835066
Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
All Stationary
Dbar 54.321 NA
pD 17.889 NA
DIC 72.209 NA
Initial Values:
[1] -10 0
Iterations: 80000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.19
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 2666
Recommended Burn-In of Un-thinned Samples: 79980
Recommended Thinning: 34
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 2666
Thinning: 30
Summary of All Samples
Mean SD MCSE ESS LB Median
beta[1] -5.9954631 1.51357515 0.333997311 3.200112 -9.78692631 -5.660027
beta[2] 0.1446406 0.03873508 0.008466866 3.168661 0.09195439 0.135685
Deviance 54.3206874 5.98143077 1.127801083 3.341341 43.55137352 54.443521
LP -35.9251051 2.98165148 0.562093182 3.349492 -41.03715268 -35.982949
UB
beta[1] -4.021785
beta[2] 0.239560
Deviance 64.568537
LP -30.570165
Summary of Stationary Samples
Mean SD MCSE ESS LB Median UB
beta[1] NA NA NA NA NA NA NA
beta[2] NA NA NA NA NA NA NA
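Finding that beta takes some trial and error; a hypothetical tuning loop (not from the original post; Acceptance.Rate is a component of the returned object) could look like:
for (b in c(0.1, 0.05, 0.02, 0.015)) {
  f <- LaplacesDemon(Model,
    Data=MyData,
    Initial.Values=Initial.Values,
    Algorithm='pCN',
    Specs=list(beta=b))
  cat('beta =', b, 'acceptance rate =', f$Acceptance.Rate, '\n')
}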
Oblique Hyperrectangle Slice Sampler
I tried to run long enough to get close to, or past, the number of samples discarded before the run is considered stationary. As before, that point varied. I did not feel like running with a thinning of 390, so I stopped here. To be precise, the spec variable A refers to the number of thinned samples, which with A=400 and a thinning of 30 translates to 400 × 30 = 12000 un-thinned samples.
Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
Iterations = 20000, Status = 2000, Thinning = 30, Algorithm = "OHSS",
Specs = list(A = 400, n = 0))
Acceptance Rate: 1
Algorithm: Oblique Hyperrectangle Slice Sampler
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
beta[1] beta[2]
2.141405086 0.001421053
Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
All Stationary
Dbar 44.670 44.670
pD 4.873 4.873
DIC 49.543 49.543
Initial Values:
[1] -10 0
Iterations: 20000
Log(Marginal Likelihood): -22.7557
Minutes of run-time: 0.17
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 390
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 666
Thinning: 30
Summary of All Samples
Mean SD MCSE ESS LB Median
beta[1] -11.723257 2.12131635 0.364333658 63.12568 -15.6882712 -11.7426108
beta[2] 0.291164 0.05530033 0.009836046 53.03589 0.1892168 0.2903171
Deviance 44.669607 3.12190842 0.485345818 68.11717 42.4967868 43.7647781
LP -31.151444 1.55954405 0.242282520 68.32591 -34.0145598 -30.6944097
UB
beta[1] -7.6953473
beta[2] 0.3978456
Deviance 50.3649578
LP -30.0590489
Summary of Stationary Samples
Mean SD MCSE ESS LB Median
beta[1] -11.723257 2.12131635 0.364333658 63.12568 -15.6882712 -11.7426108
beta[2] 0.291164 0.05530033 0.009836046 53.03589 0.1892168 0.2903171
Deviance 44.669607 3.12190842 0.485345818 68.11717 42.4967868 43.7647781
LP -31.151444 1.55954405 0.242282520 68.32591 -34.0145598 -30.6944097
UB
beta[1] -7.6953473
beta[2] 0.3978456
Deviance 50.3649578
LP -30.0590489
Random Dive Metropolis-Hastings
The manual states 'RDMH fails in the obscure case when the origin has positive probability'. That obscure case included my initial values: beta[2] started at exactly 0, so I changed it to -0.1 (a toy illustration of why a zero is sticky follows the output below). Apart from that, the algorithm could not determine a point where sampling was stationary.
Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = c(-10,
-0.1), Iterations = 60000, Status = 2000, Thinning = 30,
Algorithm = "RDMH")
Acceptance Rate: 0.10038
Algorithm: Random Dive Metropolis-Hastings
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
beta[1] beta[2]
2.706896237 0.001829059
Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
All Stationary
Dbar 44.946 NA
pD 4.953 NA
DIC 49.899 NA
Initial Values:
[1] -10.0 -0.1
Iterations: 60000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.28
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 2000
Recommended Burn-In of Un-thinned Samples: 60000
Recommended Thinning: 33
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 2000
Thinning: 30
Summary of All Samples
Mean SD MCSE ESS LB Median
beta[1] -9.95876 1.64567600 0.376396772 13.94920 -12.6734418 -10.1840709
beta[2] 0.24500 0.04207685 0.009640869 14.62719 0.1510536 0.2512075
Deviance 44.94647 3.14729140 0.488713442 61.09106 42.4926465 43.9340089
LP -31.26984 1.56429369 0.241990158 62.00287 -35.1596359 -30.7595949
UB
beta[1] -6.3057001
beta[2] 0.3146071
Deviance 52.7751852
LP -30.0571998
Summary of Stationary Samples
Mean SD MCSE ESS LB Median UB
beta[1] NA NA NA NA NA NA NA
beta[2] NA NA NA NA NA NA NA
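To see why an exact zero is a problem here (a toy illustration of RDMH's multiplicative proposal, not package code):
theta <- 0                 # a parameter starting exactly at the origin
epsilon <- runif(1, -1, 1) # stand-in for the random dive factor
theta * epsilon            # still 0: a multiplicative proposal never leaves the origin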
Random-Walk Metropolis
This algorithm requires tuning of the covariance matrix, which can be done from previous runs. Hence I started with a diagonal matrix and did three subsequent runs. The second run had no stationary samples, so I took all of them (Posterior1 rather than Posterior2). I did not think setting Thinning to 20 was worth the effort of a fourth run for this exercise, but I would have done so if the result were important rather than training myself on these algorithms (a hypothetical sketch follows the output below).
Fit1 <- LaplacesDemon(Model,
Data=MyData,
Algorithm='RWM',
Covar=diag(.001,nrow=2),
Initial.Values = Initial.Values
)
Fit2 <- LaplacesDemon(Model,
Data=MyData,
Algorithm='RWM',
Covar=var(Fit1$Posterior2),
Initial.Values = apply(Fit1$Posterior2,2,median)
)
Fit3 <- LaplacesDemon(Model,
Data=MyData,
Algorithm='RWM',
Covar=var(Fit2$Posterior1),
Initial.Values = apply(Fit2$Posterior1,2,median)
)
Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = apply(Fit2$Posterior1,
2, median), Covar = var(Fit2$Posterior1), Algorithm = "RWM")
Acceptance Rate: 0.5369
Algorithm: Random-Walk Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
beta[1] beta[2]
4.852137624 0.003251339
Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
All Stationary
Dbar 44.466 44.466
pD 1.974 1.974
DIC 46.440 46.440
Initial Values:
beta[1] beta[2]
-11.7220763 0.2901719
Iterations: 10000
Log(Marginal Likelihood): -23.33119
Minutes of run-time: 0.03
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 20
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1000
Thinning: 10
Summary of All Samples
Mean SD MCSE ESS LB Median
beta[1] -11.7487675 2.08295510 0.090017378 801.7285 -16.0709597 -11.634496
beta[2] 0.2916762 0.05365258 0.002304645 808.2413 0.1890284 0.289995
Deviance 44.4658467 1.98718601 0.088248062 692.4438 42.5001258 43.836932
LP -31.0497836 0.99971429 0.044403544 690.7914 -33.6598820 -30.735747
UB
beta[1] -7.737687
beta[2] 0.398553
Deviance 49.737799
LP -30.058261
Summary of Stationary Samples
Mean SD MCSE ESS LB Median
beta[1] -11.7487675 2.08295510 0.090017378 801.7285 -16.0709597 -11.634496
beta[2] 0.2916762 0.05365258 0.002304645 808.2413 0.1890284 0.289995
Deviance 44.4658467 1.98718601 0.088248062 692.4438 42.5001258 43.836932
LP -31.0497836 0.99971429 0.044403544 690.7914 -33.6598820 -30.735747
UB
beta[1] -7.737687
beta[2] 0.398553
Deviance 49.737799
LP -30.058261
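Had the result mattered, the fourth run with the recommended Thinning of 20 would have been along these lines (hypothetical, not executed in the post):
Fit4 <- LaplacesDemon(Model,
Data=MyData,
Algorithm='RWM',
Covar=var(Fit3$Posterior2),
Thinning=20,
Initial.Values = apply(Fit3$Posterior2,2,median)
)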
Reflective Slice Sampler
The manual describes this as a difficult algorithm to tune, and indeed it seems my runs did not give the desired result.
Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
Iterations = 80000, Status = 2000, Thinning = 30, Algorithm = "RSS",
Specs = list(m = 5, w = 0.05 * c(0.1, 0.002)))
Acceptance Rate: 1
Algorithm: Reflective Slice Sampler
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
beta[1] beta[2]
2.275190007 0.002086958
Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
All Stationary
Dbar 49.146 43.048
pD 1498.707 0.109
DIC 1547.853 43.158
Initial Values:
[1] -10 0
Iterations: 80000
Log(Marginal Likelihood): -20.72254
Minutes of run-time: 1.2
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 2128
Recommended Burn-In of Un-thinned Samples: 63840
Recommended Thinning: 34
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 2666
Thinning: 30
Summary of All Samples
Mean SD MCSE ESS LB Median
beta[1] -12.0360018 1.50814086 0.326429373 5.612834 -14.1006967 -12.4484258
beta[2] 0.2963282 0.04532995 0.009314235 13.964938 0.2059285 0.3089917
Deviance 49.1455311 54.74864402 8.132535667 63.021710 42.4742009 43.1081636
LP -33.3920123 27.37147853 4.065587914 63.029321 -31.5206568 -30.3749779
UB
beta[1] -8.702122
beta[2] 0.350354
Deviance 45.428853
LP -30.045841
Summary of Stationary Samples
Mean SD MCSE ESS LB Median
beta[1] -12.5560430 0.63515155 0.228104908 8.833540 -13.8216246 -12.5461040
beta[2] 0.3114146 0.01605936 0.005788805 6.079825 0.2801353 0.3112473
Deviance 43.0483178 0.46778514 0.105615710 21.341014 42.4904018 42.9445137
LP -30.3488683 0.24012061 4.065587914 20.065718 -30.9052213 -30.2957432
UB
beta[1] -11.2869843
beta[2] 0.3434128
Deviance 44.1323841
LP -30.0562172