How to Deploy Shiny App That Uses Local Data


You may want to add a subdirectory in your shiny folder called "Data" and put proj.csv there.

Then, in your server.R, put:

data <- read.csv("./Data/proj.csv")

That will make it clear where the data is when the app is deployed to the ShinyApps service.

Deploying shiny apps with local dataset

Put your dataset in a subdirectory of your Shiny app directory (and change your code accordingly). Be sure to make the path to the data a relative path (not an absolute path - this generates a warning). This has worked well for me.
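
For example, a minimal sketch (the file name data/mydata.csv is just a placeholder):

# Relative path: resolved against the app directory, so it still works after deployment
dat <- read.csv("data/mydata.csv")

# Absolute path: only exists on your own machine, so it breaks on shinyapps.io
# dat <- read.csv("C:/Users/me/Documents/mydata.csv")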

Deploying an app on a shiny server; load data once and make readily available to multiple users/sessions

You can retrieve the data inside the global.R file. All the objects created inside global.R will be available for all the sessions of your app.

Here is an explanation of the Scoping rules for Shiny apps.

Another solution, since your data seems to be the same all the time, is to manually retrieve the data and save it in a local file. Then just load the data inside the server from the local file; this will be faster than accessing the DB.
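
A rough sketch of that pattern, assuming you have already saved a snapshot of the data as cached_data.rds (the file name is just a placeholder):

global.R:

# global.R is sourced once per R process, not once per session,
# so objects created here are visible to every session of the app.
# "cached_data.rds" is a placeholder for a snapshot saved earlier
# instead of querying the database on every session start.
big_data <- readRDS("cached_data.rds")

The server function can then use big_data directly, without hitting the database again.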

write.csv and read.csv in Shiny App shared on shinyapps.io

You can't rely on writing to local directories on shinyapps.io; files saved there don't persist reliably across restarts. An easy fix is to place an upload button inside the app, perform all the manipulations you need, and finally download the result with a download button. Getting the data from a remote server is also a good option.
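
A minimal sketch of that upload/manipulate/download pattern (the input and output IDs are arbitrary):

library(shiny)

ui <- fluidPage(
  fileInput("upload", "Upload a CSV"),
  downloadButton("download", "Download result")
)

server <- function(input, output) {

  processed <- reactive({
    req(input$upload)
    df <- read.csv(input$upload$datapath)
    # ... perform whatever manipulations you need here ...
    df
  })

  output$download <- downloadHandler(
    filename = "result.csv",
    content = function(file) write.csv(processed(), file, row.names = FALSE)
  )
}

shinyApp(ui = ui, server = server)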

As shown in this article:

"Local vs remote storage

Before diving into the different storage methods, one important distinction to understand is local storage vs remote storage.

Local storage means saving a file on the same machine that is running the Shiny application. Functions like write.csv(), write.table(), and saveRDS() implement local storage because they will save a file on the machine running the app. Local storage is generally faster than remote storage, but it should only be used if you always have access to the machine that saves the files.

Remote storage means saving data on another server, usually a reliable hosted server such as Dropbox, Amazon, or a hosted database. One big advantage of using hosted remote storage solutions is that they are much more reliable and can generally be more trusted to keep your data alive and not corrupted.

When going through the different storage type options below, keep in mind that if your Shiny app is hosted on shinyapps.io, you will have to use a remote storage method for the time being. In the meantime, using local storage is only an option if you’re hosting your own Shiny Server. If you want to host your own server, here is a guide that describes in detail how to set up your own Shiny Server."

Shinyapps.io to read a local file that updates its content every 5 minutes

I don't have extensive experience with Shiny deployments on shinyapps.io, but I'll try to keep this as general as possible. The main limitation is that you can't schedule a cron job on shinyapps.io to grab data from your machine. Hence I would consider the following:

  • Push your data to a storage provider (Dropbox will be used as an example) every 5 minutes using a cron job.
  • Grab the data in your Shiny dashboard.

Below you can find a couple of examples around Dropbox, but you can easily apply pretty much the same concepts to Google Drive, AWS, and GCP (although you'll have to fiddle with passing secrets or encrypting your auth tokens).

Dropbox example

rdrop2 offers an easy-to-use wrapper around the Dropbox API. Below you can find a simple example of how to push and retrieve a file from an account (from the rdrop2 README file).

library(rdrop2)

# Authenticate and save the token for later use
token <- drop_auth()
saveRDS(token, "~/dropbox_token.rds")

# Create a folder
drop_create('upload_test')
# You can also create a public folder if data is not sensitive
# drop_create('public/upload_test')

# Upload the file in the freshly created folder
drop_upload("~/mtcars.csv", path = "upload_test")

## Retrieving your file is as simple as
drop_download("upload_test/mtcars.csv", local_path = "~/new_file.csv")

Implementing it in Shiny

The cleanest way to apply the example above in Shiny would be to place the data acquisition in a global.R file that is sourced by your Shiny application before it runs:

global.R:

library(rdrop2)

# Authenticate using the token saved earlier
token <- drop_auth(rdstoken = "dropbox_token.rds")

# Retrieving your file is as simple as
drop_download("upload_test/mtcars.csv", local_path = "data.csv",
              overwrite = TRUE)
drop_df <- read.csv("data.csv", sep = ",")

print("Downloaded and imported data!")

Your app.R file will look something like this:

library(shiny)
source("global.R")

ui <- fluidPage(

  # Application title
  titlePanel("Pulling data from Dropbox"),

  mainPanel(
    tableOutput("df_output")
  )
)

server <- function(input, output) {

  output$df_output <- renderTable({
    drop_df
  })
}

shinyApp(ui = ui, server = server)

Deploy to shinyapps

You can then deploy your app as usual (including the auth token).
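For reference, a sketch of a deployment call using rsconnect; the app name is arbitrary, and the key point is that dropbox_token.rds has to sit inside the app directory so it gets bundled:

library(rsconnect)

# Deploy the current app directory; dropbox_token.rds is uploaded with it
# because it lives inside appDir (the appName below is just an example)
deployApp(appDir = ".", appName = "dropbox-shiny-demo")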

Scheduling data upload

Since your data gets refreshed every 5 mintues on your local machine, it'll be needed
to have an upload schedule with that cadence. Here I'll be using the cronR package but
using crontab on Linux will work just fine.

library(cronR)

# Wrap the upload script in an Rscript command, then schedule it every 5 minutes
cmd <- cron_rscript("data_upload.R")
cron_add(command = cmd, frequency = "*/5 * * * *",
         description = "Push data to Dropbox")
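
The scheduled script itself isn't shown above; a minimal sketch of what data_upload.R could contain, reusing the token and paths from the Dropbox example:

data_upload.R:

library(rdrop2)

# Re-use the previously saved token and push the latest local file to Dropbox
token <- drop_auth(rdstoken = "~/dropbox_token.rds")
drop_upload("~/mtcars.csv", path = "upload_test", dtoken = token)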

plumber API

As @Chris mentioned, calling an API might be an option, especially if the data will be needed outside of R scripts and Shiny dashboards. Below you can find a short endpoint one could call to retrieve the data in CSV format. shinyapps.io doesn't support hosting plumber APIs, so you'd have to host it on your favorite cloud provider.

library(plumber)
library(rdrop2)

#* @apiTitle Plumber Example API

#* Echo dropbox .csv file
#* @get /get-data
function(req, res) {
  auth_token <- drop_auth(rdstoken = "token.rds")

  drop_download('upload_test/mtcars.csv', dtoken = auth_token,
                local_path = "mtcars.csv", overwrite = TRUE)

  include_file("mtcars.csv", res, 'text/csv')
}

Building and starting the service with:

r <- plumb("plumber.R")  
r$run()
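
Your Shiny app (or any other client) can then pull the data from the endpoint, for example (the host below is a placeholder for wherever you run the API):

# The URL is a placeholder for your own plumber host
api_df <- read.csv("http://your-api-host:8000/get-data")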

Deploy Shiny App on shinyapps.io with blpconnect() function to fetch real time data from Bloomberg

There is no way to fetch this data on the shinyapps.io server. The rblpapi package connects to your local Bloomberg instance, which is not running on the shinyapps.io server, so the deployed app cannot access the data there; this is also why your local app version is able to fetch it.
Furthermore, check the Bloomberg license agreement. They are very strict and precise about where you are allowed to access and use the provided data. I highly doubt that you are allowed to use their real-time data in an external cloud-based app.


