
Needs database and async coverage #597

Open
abeburnett opened this issue Dec 8, 2022 · 0 comments

Comments

@abeburnett

Very good book, Hadley! I'm running into issues using Shiny to display very large data (data sets running to hundreds of thousands of rows), and I've been looking for resources that address strategies for handling big data, particularly when querying data stores like Snowflake. The obvious strategies are filtering, limiting, and sampling, but what about after you've done all of that and you're still trying to build the most performant Shiny app possible? Async/futures might help (and if the same large data set is used in other areas of the app, e.g. on other tabs, maybe lazy load it?), as might chunking the data loading from Snowflake (at least where you don't need all the data at once), or caching a small default data set (which loads quickly) and then only loading live data from the database when a different option is selected in a dropdown.
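
To make the ask a bit more concrete, here's a rough sketch of the kind of pattern I'm imagining: DBI/odbc + dbplyr for lazy queries that run inside Snowflake, `bindCache()` for a small default view, and promises/future for the async part. The DSN, table, and column names (`snowflake_dsn`, `SALES`, `REGION`, `PRODUCT`, `AMOUNT`) are all made up, and I haven't benchmarked any of this:

```r
library(shiny)
library(dplyr)
library(dbplyr)
library(promises)
library(future)

plan(multisession)

# Placeholder Snowflake connection; DSN and table names are made up.
con <- DBI::dbConnect(odbc::odbc(), dsn = "snowflake_dsn")

# Lazy reference: nothing is pulled until collect().
sales_tbl <- tbl(con, "SALES")

ui <- fluidPage(
  selectInput("region", "Region", choices = c("Default", "EMEA", "APAC", "AMER")),
  tableOutput("summary")
)

server <- function(input, output, session) {

  # Small pre-aggregated default view: computed once, then served from
  # Shiny's app-level cache so the app starts fast.
  default_summary <- reactive({
    sales_tbl |>
      group_by(REGION) |>
      summarise(total = sum(AMOUNT, na.rm = TRUE)) |>
      collect()
  }) |> bindCache("default_summary", cache = "app")

  output$summary <- renderTable({
    if (input$region == "Default") {
      default_summary()
    } else {
      region <- input$region
      # Heavier, user-specific query: push the filter down to Snowflake and
      # run it off the main process so other sessions stay responsive.
      future_promise({
        # Fresh connection inside the worker; DBI connections generally
        # can't be shared across processes.
        worker_con <- DBI::dbConnect(odbc::odbc(), dsn = "snowflake_dsn")
        on.exit(DBI::dbDisconnect(worker_con))
        dplyr::tbl(worker_con, "SALES") |>
          dplyr::filter(REGION == !!region) |>
          dplyr::group_by(PRODUCT) |>
          dplyr::summarise(total = sum(AMOUNT, na.rm = TRUE)) |>
          dplyr::collect()
      })
    }
  })
}

shinyApp(ui, server)
```

The idea (as I understand it) is that dbplyr translates the `filter()`/`summarise()` into SQL that runs inside Snowflake, so only the already-reduced result crosses the wire; the default view is computed once and served from cache; and `future_promise()` keeps the session from blocking while the bigger query runs. But I'd love to see authoritative coverage of when each of these actually pays off.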

Anyway, I have yet to see any robust coverage of how to handle large-to-huge data with R/Shiny, or of how to make performant Shiny apps in general. I'd love it if you'd add more on this to your book. Thanks for your consideration!
