Highlights from Shiny in Production (2024)


Hot on the heels of Shiny in Production 2022 & 2023, we were excited to dive back into all things Shiny for a third consecutive year. In this post we recap the highlights from the two days of talks and workshops.

Workshops

As with previous iterations of the conference, we began on Day 1 with an afternoon of insightful workshops:

  • Level up your plots: Tips, tricks and resources for crafting compelling visualisations with R and ggplot2: Following her stand-out talk at Shiny in Production 2023, we were delighted to welcome back Cara Thompson for both a talk AND a workshop this year! Cara’s hands-on workshop offered attendees a chance to craft appealing and informative visualisations of their data without compromising on accessibility.

  • Building Responsive Shiny Applications: Our very own Shiny expert, Pedro Silva, shared some responsive design principles and best practices for Shiny developers to build fluid web pages that run on various screen sizes from desktops to mobile devices.

  • Asynchronous Shiny: Our data scientist and trainer, Russ Hyde, introduced the idea of asynchronous programming, providing attendees with the basics needed to tackle between-session and within-session blocking in a Shiny app (a minimal sketch of the pattern follows this list).

  • Building Apps for Humans: Osheen MacOscar (another JR data scientist!) explored the basics of human-computer interaction and outlined how layout, colour, size and motion in a Shiny interface can be used to enhance the user experience.
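To give a flavour of the asynchronous pattern Russ covered, here is a minimal sketch using {future} and {promises}. It is illustrative only and not taken from the workshop materials.

```r
# Minimal sketch of non-blocking work in Shiny with {future} and {promises}
# (illustrative only -- not from the workshop materials).
library(shiny)
library(future)
library(promises)

plan(multisession)  # run expensive work in a separate R process

ui <- fluidPage(
  actionButton("go", "Run slow job"),
  tableOutput("result")
)

server <- function(input, output, session) {
  output$result <- renderTable({
    req(input$go)
    # future_promise() returns a promise, so other user sessions can still be
    # served while the slow job runs in the background process.
    future_promise({
      Sys.sleep(5)   # stand-in for a slow query or model fit
      head(mtcars)
    })
  })
}

shinyApp(ui, server)
```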

Noticing a trend here? At Jumping Rivers we offer training and upskilling in all things Shiny! If you’re keen to learn more about Shiny (or data science more generally) check out our full list of available training courses here.


[Photo: the main speakers] [Photo: the lightning talk speakers]

Talks

On Day 2 we enjoyed talks from some fabulous speakers across a range of industries!

Keynote: Cara Thompson (Data visualisation consultant)

Data-To-Wow: Leveraging Shiny as a no-code solution for high-end parameterised visualisations

The vast majority of data visualisations start from the data, and while you may not know exactly how the final image will look at the start, you can tweak and refine your way to a result that looks good. But Cara had a slightly different challenge: take an existing data visualisation the client has designed, and recreate it in {ggplot2} so the plots can be quickly generated from any future data.

Cara guided us through how she tackled some of the challenges encountered along the way, such as creating your own {ggplot2} geom in order to draw straight lines between points when the plot uses polar coordinates. There were also lessons in why we shouldn’t always rely on a single numerical summary like the mean in a plot, when the raw data has the potential to show us patterns we’d ordinarily lose.

But to be useful for the client, all this hard work needs to be easy to use. When the client has no prior experience of running R code or using an IDE like RStudio, Shiny becomes a valuable tool: it lets anyone run R code without needing to know R. To make launching the application as easy as possible, Cara provided her client with a desktop shortcut; clicking it starts the Shiny application in a background R process and opens the app in a web browser as normal. The net result is that the client can run the Shiny app locally on their computer, just like any other software application.
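The exact launcher wasn't shown, but the shortcut approach could be as simple as a small script run with Rscript; the path and port below are hypothetical.

```r
# launch_app.R -- a hypothetical launcher that a desktop shortcut could point
# at (e.g. via "Rscript launch_app.R"); the real paths and port weren't shown.
shiny::runApp(
  appDir = "C:/apps/client-visualisation",  # made-up app directory
  port = 4321,
  launch.browser = TRUE                     # open the app in the default browser
)
```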

Pedro Silva (Jumping Rivers)

Convincing IT that R packages are safe

Since IT departments are responsible for ensuring the security and integrity of the systems they manage, it's understandable that IT managers are reluctant to install software they can avoid. Combine this with open-source software often being maintained by thousands of volunteers, some operating entirely under online pseudonyms, and it's easy to see why they view some software with great scepticism. The issue becomes even more serious when you work in a heavily regulated industry (such as banking, pharmaceuticals or critical national infrastructure), where systems may be scrutinised in an audit or the consequences of a compromise can be severe.

Pedro provided an insight into the need to validate R packages, and the solutions Jumping Rivers is currently working on with organisations in industry. The aim is to provide information that summarises the risk of using any R package on CRAN based on the quality of its development. Users can specify additional, stricter criteria for what should be checked, and apply weights so that certain criteria have a greater influence on the final risk summary score.

With this information, organisations will be able to determine whether a package is safe enough to use. Where a package is identified as carrying too much risk, they can invest time in fixing its weakest areas, such as increasing test coverage of the package's functions.
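As a rough illustration of the weighting idea (not the actual scoring model described in the talk), criteria scores might be rolled up into a single risk number like this:

```r
# Toy example of combining weighted criteria into a risk summary score --
# the criteria, scores and weights below are invented for illustration.
library(dplyr)
library(tibble)

criteria <- tribble(
  ~criterion,             ~score, ~weight,   # score: 0 (poor) to 1 (good)
  "test_coverage",          0.45,       3,
  "has_news_file",          1.00,       1,
  "maintainer_active",      0.90,       2,
  "reverse_dependencies",   0.80,       2
)

criteria |>
  summarise(risk = 1 - weighted.mean(score, weight))
# A value near 0 suggests low risk; a value near 1, high risk.
```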

Pedro rounded off the talk by demonstrating the use of Quarto to generate summary reports and a Shiny dashboard that allows users to explore breakdowns of validation scores from different packages.

Vikki Richardson (Audit Scotland)

Faster than a Speeding Arrow – R Shiny Optimisation In Practice

When your data is large enough to cause considerable loading times, it's time to start optimising how it is handled. Vikki talked us through how her team went about cutting their application's loading time.

There are many strategies for making an application more performant. The most obvious is to throw more compute resources at the problem: run more instances of the application so that concurrent users each get a faster experience. But that doesn't necessarily solve the underlying issue, it inflates your compute costs, and as a solution it lacks a certain elegance.

Caching with the {memoise} package provided a great speed-up, but with Apache Arrow there was more to be found. Vikki explained how the team could query Arrow datasets using familiar {dplyr} syntax, and highlighted some drawbacks of this approach, such as certain {dplyr} verbs not being supported by Arrow. The end result was certainly impressive: the combination of caching and Apache Arrow slashed data processing times from several minutes to under 2 seconds.
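We don't have Audit Scotland's code, but the general pattern of combining {arrow}, {dplyr} and {memoise} might look something like this (file paths and columns are made up):

```r
# Sketch of the Arrow + memoise pattern (file paths and columns are invented).
library(arrow)
library(dplyr)
library(memoise)

# Query a partitioned Parquet dataset lazily with dplyr verbs; nothing is
# pulled into memory until collect() is called.
load_summary <- function(year) {
  open_dataset("data/audit-parquet") |>
    filter(audit_year == year) |>
    group_by(body) |>
    summarise(total_spend = sum(spend)) |>
    collect()
}

# Cache results so repeat requests for the same year return almost instantly.
load_summary <- memoise(load_summary)
```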

Gareth Burns (Exploristics)

Shiny in Secondary Education: Supplementing traditional learning resources to allow students to explore statistical concepts

Gareth describes this project as “a passion project that wouldn’t have been successful without Shiny”. It all started with a call to action from Steve Mallett, which led to Gareth volunteering his time to develop a Shiny application for science communication in secondary schools, one that makes learning fun, engaging and interactive.

The app addressed a number of issues with how the existing workshops were run, such as removing the need to send most physical materials to different locations. There were valuable lessons along the way too: making your application robust to the mischievous minds of secondary-school students, who will find creative ways to break your work; ensuring your data visualisations are understandable to your target audience; and minimising potential human data-input errors by capturing the data within the application itself.

The live demonstration of the Shiny application showcased some well-designed custom widgets and modules, crafted with hand-written HTML, CSS and JavaScript, along with ideas for gamifying the activity to make it more engaging for young students. The slides can be found here and the (messy) code is on GitHub.

Tan Ho (Zelus Analytics)

A minimum viable Shiny infrastructure for serving 95,000 monthly users

How do you support many (many!) users of a Shiny app? Tan took us through the lifetime of the DynastyProcess Fantasy Football app, originally built by Tan and his friend Joe Sydlowski. Neither of them had built a Shiny app before (and Tan had never written any R), yet within two years of launch they had 200,000 unique users per month. Tan presented a series of top tips to ensure that your app keeps running, grows with its audience, and gets you that data science job you dreamed of: 1) try running your own Shiny server, a cheap option that can help you scale the app when you need to; 2) don't do too much inside your app, and push as much data processing outside of it as possible; 3) log everything; and 4) listen to your users. But most importantly, start from where you are, because “there’s always much to learn”.
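A rough sketch of tips 2 and 3, precomputing outside the app and logging what happens inside it, might look like this (not the DynastyProcess code; file names and columns are invented):

```r
## precompute.R -- run on a schedule outside the app (e.g. cron or GitHub
## Actions), so the app itself never does the heavy lifting
rankings <- data.frame(player = paste("Player", 1:100), value = rnorm(100))
saveRDS(rankings, "rankings.rds")   # stand-in for slow scraping/modelling

## app.R -- the app just reads the small, app-ready file and logs usage
library(shiny)

rankings <- readRDS("rankings.rds")

ui <- fluidPage(tableOutput("table"))

server <- function(input, output, session) {
  output$table <- renderTable(head(rankings))
  session$onSessionEnded(function() {
    message(Sys.time(), " session ended: ", session$token)  # tip 3: log everything
  })
}

shinyApp(ui, server)
```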

Talk materials available here

Katy Morgan (Government Internal Audit Agency)

More than just a chat bot: Tailoring the use of Generative AI within Government Internal Audit Agency with user-friendly R shiny applications

Katy presented an insight into three Shiny apps used during government audits. They are used at different stages of the audit process and make use of ChatGPT. For example, when thinking about the risks within a project, what are the possible causes of, events associated with, and consequences of those risks? A series of predefined ChatGPT prompts suggests text that expert auditors can then use within their workflow. The apps are deployed on Azure App Service and use the {golem} framework and Docker to simplify development, deployment and authentication.
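The talk didn't show the code behind the prompts, but the predefined-prompt idea could be sketched roughly as below; the endpoint, model and prompt wording are assumptions, not what the agency actually uses.

```r
# Hedged sketch of a predefined-prompt helper -- the agency's real endpoint,
# model, prompt wording and authentication were not shown in the talk.
library(httr2)

suggest_risk_text <- function(risk, aspect = c("causes", "events", "consequences")) {
  aspect <- match.arg(aspect)
  prompt <- sprintf(
    "You are assisting an internal auditor. List plausible %s of this project risk: %s",
    aspect, risk
  )
  resp <- request("https://api.openai.com/v1/chat/completions") |>  # assumed endpoint
    req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) |>
    req_body_json(list(
      model = "gpt-4o-mini",                                        # assumed model
      messages = list(list(role = "user", content = prompt))
    )) |>
    req_perform() |>
    resp_body_json()
  resp$choices[[1]]$message$content
}
```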

Lightning Talks

This year also featured a segment for lightning talks with a twist: all speakers would be presenting from auto-scrolling slides which would change every 20 seconds! This turned out to be much less dramatic than anticipated, with our lightning speakers all staying perfectly on time…

Here’s a brief synopsis of each talk.

The SK8 Project: A scalable institutional architecture for managing and hosting Shiny applications

David Carayon (INRAE) started his talk by noting that, while Shiny is a great tool for building web apps, it's not always easy to share these with colleagues. In particular, paid solutions such as Posit and AWS are not always feasible for Shiny users. Enter SK8, which offers a cost-effective solution for deploying Shiny apps to the web using Kubernetes. The deployment process involves an automated CI/CD pipeline that resolves the app's dependencies and creates a Dockerfile, from which an image is built and deployed to the web by Kubernetes. In the space of just a few years, the service has grown to 100 deployed applications! The talk materials can be found here.

Monitoring and improving Posit Workbench usage behaviour at Public Health Scotland

At PHS there are over 450 active users of Posit Workbench. Alasdair Morgan showed us how he has been reporting on Workbench usage habits, with the aim of keeping costs down by avoiding wasted allocated resources. User activity is tracked via Azure logs and reported using R Markdown. The reports include hard-hitting visualisations of the proportion of allocated memory and CPUs that is actually being used. Remembering that this was a Shiny conference, Alasdair also showed us a dashboard highlighting some of the worst offenders (anonymised, of course…).
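The real reports are built from Azure logs we haven't seen, but a toy version of the utilisation summary might look like this:

```r
# Toy utilisation summary -- column names and numbers are invented.
library(dplyr)

sessions <- tibble(
  user      = c("alice", "alice", "bob", "carol"),
  mem_alloc = c(16, 16, 64, 8),   # GB allocated to the session
  mem_used  = c(4, 6, 3, 7)       # peak GB actually used
)

sessions |>
  group_by(user) |>
  summarise(mem_utilisation = sum(mem_used) / sum(mem_alloc)) |>
  arrange(mem_utilisation)        # the "worst offenders" float to the top
```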

Alasdair’s light-hearted examination of the user habits at PHS went down very well with our audience and went on to take the prize for best lightning talk!

Using Google Lighthouse to analyse Shiny Applications

Fresh from his workshop the day before, Osheen MacOscar introduced us to Google Lighthouse, an open source tool for assessing various metrics of web-based apps including load speed, interactivity and accessibility. Selecting 134 Shiny apps from Appsilon, Osheen showed that only 40 apps were listed as having “good” performance. Osheen went on to show that as complexity is added (such as interactive plots and widgets) performance can decrease due to slower load times. However, this is not always a bad thing since widgets can also improve the user experience. In summary, Lighthouse is a great tool for assessing apps but we shouldn’t let it stop us from adding (useful) complexity to our apps.
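If you want to try this on your own app, one option (assuming Node and the Lighthouse CLI are installed; the URL below is a placeholder) is to call the CLI from R and read the performance score back in:

```r
# Run the Lighthouse CLI against a deployed app and pull out the performance
# score (the URL is a placeholder; adjust the flags to suit your setup).
system2("lighthouse", c(
  "https://example.shinyapps.io/my-app/",
  "--output=json",
  "--output-path=report.json",
  "--chrome-flags=--headless"
))

report <- jsonlite::fromJSON("report.json")
report$categories$performance$score   # 0-1; 0.9 and above is rated "good"
```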

Risk Assessment as a Service (Project Roll-out)

Another of our data scientists, Astrid Radermacher, discussed the importance of risk assessment in various industries and our efforts at JR to roll out package validation as an automated service. The process involves assessing a package (checking whether it is secure and well maintained) and generating a report. If the package passes our checks, it can be included in the client's regulated environment; otherwise we can offer manual remediation. Having largely focused on package assessment so far, our next steps will focus on improving package remediation and scaling the automated service across different operating systems. We look forward to onboarding additional clients in early 2025 and releasing the service as open source in the longer term.

Chagas Diagnostic Algorithms: an online application to estimate cost and effectiveness of diagnostic algorithms for Chagas disease

Juan Vallarta (FIND) began by outlining the challenges of diagnosing Chagas disease (a global disease that is particularly prevalent in Latin America). Diagnosis is often financially and logistically challenging, and can either be conducted in a lab setting or more rapidly onsite. He then presented an online tool for estimating the cost and effectiveness of different diagnostic algorithms, taking into account the sensitivity and specificity of each test. Results can be explored in the user interface and downloaded as an HTML report. The app has been deployed globally, not just for Chagas but for other diseases including COVID-19 and mpox.
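As a very rough illustration (the app's actual model is richer than this), the kind of calculation involved could be sketched as:

```r
# Back-of-the-envelope cost/effectiveness comparison -- all numbers invented.
diagnostic_summary <- function(prevalence, sensitivity, specificity, cost_per_test) {
  prop_correct <- prevalence * sensitivity + (1 - prevalence) * specificity
  data.frame(
    cost_per_person  = cost_per_test,
    prop_correct     = prop_correct,
    cost_per_correct = cost_per_test / prop_correct
  )
}

# Compare a lab-based algorithm with a cheaper but less sensitive rapid test
rbind(
  lab   = diagnostic_summary(0.05, 0.98, 0.99, cost_per_test = 25),
  rapid = diagnostic_summary(0.05, 0.90, 0.97, cost_per_test = 4)
)
```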

rainbowR

Our final lightning talk was given by Ella Kaye (University of Warwick) who spoke about rainbowR, which aims to connect, support and promote LGBTQ+ users within the R community. Since 2017 the rainbowR community has grown to over 130 members and runs monthly meetups. The community also manages a social data project (tidyRainbow) which collates LGBTQ+ datasets. To find out how to join and contribute, check out rainbowr.org/.

What happens next?

We want to say a big thank you to the sponsors of the event for their support in making it possible!

Thanks also to our speakers and attendees who travelled from near and far to make it another memorable conference! Check out our YouTube channel where the talk recordings will be released in the coming weeks!

We’re already planning on running Shiny in Production again! The 2025 edition will run on the 8th & 9th of October and you can already grab your super early bird tickets here. We can’t wait to share more details with you soon!

Sponsors

NICD · R Consortium · RSS · CRC Press
