There is actually a deeper entry on Technology Review (which may have been the source for the paper in the technology tribune of La Recherche). The article mentions that the generator passed all of the “randomness” benchmarks maintained by NIST. Those statistical tests sound much more reassuring than the entropy check mentioned by La Recherche, as they essentially reproduce Marsaglia’s DieHard battery… I remain rather skeptical about physical devices, as compared with mathematical functions, because of (a) their non-reproducibility, which is a negative feature despite what the paper claims, and (b) the instability of the device, which means that uniformity established at time t does not guarantee uniformity at time t+1. Nonetheless, if the gains in execution speed are gigantic, the approximation may be worth it for most applications. But please stop using “true” in conjunction with randomness!!!
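For intuition, here is a minimal sketch in R of the kind of statistical check such batteries apply to a generator's output stream; this is not the actual NIST or DieHard code, both of which run far more extensive test suites, and the stream here is simulated with runif as a stand-in for a device's output.

# stand-in stream; with a hardware device this would be the sampled output
set.seed(42)   # reproducibility, only available for pseudo-random generators
u <- runif(1e4)

# marginal uniformity: Kolmogorov-Smirnov test against U(0,1)
ks.test(u, "punif")

# crude independence check: serial correlation of successive draws should be near 0
cor.test(u[-length(u)], u[-1])

Passing such tests at one point in time is of course exactly the issue raised above for a physical device: they certify the sample at hand, not the stability of the generator over time.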