I have yet another Meetup talk to announce:
On Wednesday, October 26th, I’ll be talking about ‘Decoding The Black Box’ at the Frankfurt Data Science Meetup.
Particularly cool about this Meetup is that the event will be livestreamed at www.youtube.com/c/FrankfurtDataScience!
TALK#2: DECODING THE BLACK BOX
And finally we will have with us Dr. Shirin Glander, whom we have been wanting to invite for a long time. Shirin lives in Münster and works as a Data Scientist at codecentric, where she has gathered lots of practical experience. Besides crunching data, she trains her creativity by sketching information. Visit her blog and you will find plenty of interesting material, such as experiments with Keras, TensorFlow, LIME and caret, lots of R and also her beautiful sketches. We recommend: www.shirin-glander.de. Besides all that, she is an organiser of the MünsteR R User Group: www.meetup.com/Munster-R-Users-Group
Traditional ML workflows focus heavily on model training and optimization; the best model is usually chosen via performance measures such as accuracy or error, and we tend to assume that a model is good enough for deployment if it passes certain thresholds of these performance criteria. Why a model makes the predictions it makes, however, is generally neglected. But being able to understand and interpret such models can be immensely important for improving model quality, increasing trust and transparency, and reducing bias. Because complex ML models are essentially black boxes that are too complicated to understand directly, we need to use approximations, like LIME.
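To give a flavour of what this looks like in practice, here is a minimal sketch (not taken from the talk) of the LIME workflow in R. It assumes the lime and caret packages and uses the iris data purely for illustration:

```r
library(caret)
library(lime)

# hold out a few observations that we want explanations for
test_idx  <- c(1, 51, 101)
train_set <- iris[-test_idx, ]
test_set  <- iris[test_idx, 1:4]

# train a "black box" model (here a random forest via caret)
model <- train(Species ~ ., data = train_set, method = "rf")

# build a LIME explainer from the training data and the model
explainer <- lime(train_set[, 1:4], model)

# fit local, interpretable surrogate models around the held-out observations
explanation <- explain(test_set, explainer, n_labels = 1, n_features = 4)

# inspect which features drove each individual prediction
plot_features(explanation)
```

The idea is that even though the random forest itself is hard to interpret, the local surrogate models fitted by LIME show, per observation, which features pushed the prediction towards or away from the predicted class.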