Read Gzipped CSV Directly from a Url in R

Read gzipped csv directly from a url in R

I am almost certain I answered this question once before. The upshot is that while R's connections API (file(), url(), pipe(), ...) can do decompression on the fly, I do not think it can do so for remote http objects.

So do the two-step you have described: use download.file() with a tempfile() result as the second argument to fetch the compressed file, and then read from it. As a tempfile() object, it will get cleaned up automatically at the end of your R session, so the one minor fix I can suggest is to skip the unlink() (but then I like explicit cleanups, so you may as well keep it).
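For concreteness, a minimal sketch of that two-step approach, using the same Wikimedia URL as in the edit below:

url <- paste("http://dumps.wikimedia.org/other/articlefeedback/",
             "aa_combined-20110321.csv.gz", sep="")
tmp <- tempfile(fileext=".csv.gz")
## step one: fetch the compressed file to a temporary location
download.file(url, tmp)
## step two: gzfile() decompresses the local copy as read.csv() reads it
dat <- read.csv(gzfile(tmp))
unlink(tmp)   ## optional, as noted above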

Edit: Got it:

## gzcon() wraps a decompression layer around the url() connection
con <- gzcon(url(paste("http://dumps.wikimedia.org/other/articlefeedback/",
                       "aa_combined-20110321.csv.gz", sep="")))
## read.csv() needs to seek in its input, so slurp the lines first
txt <- readLines(con)
dat <- read.csv(textConnection(txt))

dim(dat)
# [1] 1490 19

summary(dat[,1:3])
#    aa_page_id       page_namespace                 page_title
#  Min.   :     324   Min.   :0      United_States        :  79
#  1st Qu.:   88568   1st Qu.:0      2011_NBA_Playoffs    :  52
#  Median : 2445733   Median :0      IPad_2               :  43
#  Mean   : 8279600   Mean   :0      IPod_Touch           :  38
#  3rd Qu.:16179920   3rd Qu.:0      True_Grit_(2010_film):  38
#  Max.   :31230028   Max.   :0      IPhone_4             :  26
#                                    (Other)              :1214

The key was the hint in the gzcon help that it can put decompression around an existing stream. We then need the slight detour of readLines() and reading via textConnection() from that, as read.csv() wants to go back and forth in the data (to validate column widths, I presume).

R: readr: How to read a file that is provided via URL and gzipped

Version 0.2.2 of readr can handle both nicely; the error occurred with an older version.
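For example, something along these lines should work with a sufficiently recent readr (a sketch, reusing the Wikimedia URL from the first answer):

library(readr)

## readr (>= 0.2.2 per the above) downloads the remote file and
## detects the gzip compression on its own
dat <- read_csv(paste0("http://dumps.wikimedia.org/other/articlefeedback/",
                       "aa_combined-20110321.csv.gz"))
dim(dat)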

Downloading and extracting .gz data file using R

Some additional options, with base R:

url <- "http://cbio.mskcc.org/microrna_data/human_predictions_S_C_aug2010.txt.gz"
tmp <- tempfile()
##
download.file(url, tmp)
##
data <- read.csv(
  gzfile(tmp),             ## gzfile() decompresses the downloaded copy on the fly
  sep="\t",
  header=TRUE,
  stringsAsFactors=FALSE)
## strip the "X." prefix that make.names() put on the first (non-syntactic) header field
names(data)[1] <- sub("X\\.","",names(data)[1])
##
R> head(data)
   mirbase_acc mirna_name gene_id gene_symbol transcript_id ext_transcript_id           mirna_alignment
1 MIMAT0000062 hsa-let-7a    5270    SERPINE2    uc002vnu.2         NM_006216   uuGAUAUGUUGGAUGAU-GGAGu
2 MIMAT0000062 hsa-let-7a  494188      FBXO47    uc002hrc.2      NM_001008777 uugaUA-UGUU--GGAUGAUGGAGu
3 MIMAT0000062 hsa-let-7a   80025       PANK2    uc002wkc.2         NM_153638   uugauaUGUUGG-AUGAUGGAgu
4 MIMAT0000062 hsa-let-7a   26036      ZNF451    uc003pdp.2          AK027074    uuGAUAUGUUGGAUGAUGGAGu
5 MIMAT0000062 hsa-let-7a     586       BCAT1    uc001rgd.3         NM_005504    uugaUAUGUUGGAUGAUGGAGu
6 MIMAT0000062 hsa-let-7a   22903       BTBD3    uc002wnz.2         NM_014962  uuGAUAUGUUGGAU-GAUGG-AGu
                alignment            gene_alignment mirna_start mirna_end gene_start gene_end
1    | :|: ||:|| ||| ||||   aaCGGUGAAAUCU-CUAGCCUCu           2        21        495      516
2     || |||: ::||||||||: acaaAUCACAGUUUUUACUACCUUc           2        19        459      483
3         |::||: ||||||||   aauuucAUGACUGUACUACCUga           3        17         77       99
4       || || | | |||||||    ccCUCUAGA---UUCUACCUCa           2        21       1282     1300
5         :|| |: ||||||||    guagGUAAAGGAAACUACCUCa           2        19       6410     6431
6   || || ||| || ||||| ||  uaCUUUAAAACAUAUCUACCAUCu           2        21       2265     2288
              genome_coordinates conservation align_score seed_cat energy mirsvr_score
1 [hg19:2:224840068-224840089:-]       0.5684         122        0 -14.73      -0.7269
2  [hg19:17:37092945-37092969:-]       0.6464         140        0 -16.38      -0.1156
3    [hg19:20:3904018-3904040:+]       0.6522         139        0 -16.04      -0.2066
4   [hg19:6:56966300-56966318:+]       0.7627         144        7 -14.51      -0.8609
5  [hg19:12:24964511-24964532:-]       0.6775         150        7 -15.09      -0.2735
6  [hg19:20:11906579-11906602:+]       0.5740         131        0 -12.59      -0.2540

Or if you are on a Unix-like system, you could obtain the .txt file (either outside of R, or using system or system2 from within R, as sketched below) like this:

[nathan@nrussell tmp]$ url="http://cbio.mskcc.org/microrna_data/human_predictions_S_C_aug2010.txt.gz"
[nathan@nrussell tmp]$ wget "$url" && gunzip human_predictions_S_C_aug2010.txt.gz
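The same thing from within R might look roughly like this (a sketch; it assumes wget and gunzip are available on your PATH and writes into the current working directory):

url <- "http://cbio.mskcc.org/microrna_data/human_predictions_S_C_aug2010.txt.gz"
## call wget and gunzip via system2() instead of a shell session
system2("wget", shQuote(url))
system2("gunzip", "human_predictions_S_C_aug2010.txt.gz")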

and then proceed as above, where you are reading in human_predictions_S_C_aug2010.txt from wherever wget and gunzip were executed,

data <- read.csv(
  "~/tmp/human_predictions_S_C_aug2010.txt",
  stringsAsFactors=FALSE,
  header=TRUE,
  sep="\t")

in my case.


