How to Find Useful R Tutorials with Various Implementations

R For Loop unable to store the data

Either you work with it as a matrix:

holder <- matrix(0, nrow = 3, ncol = 3)
for (i in 1:3) {
  apple <- c(i + 1, i * 2, i^3)
  holder[, i] <- apple  # columnwise, that's how sapply does it too
}

Or you use lists:

holder <- vector('list', 3)
for (i in 1:3) {
  apple <- c(i + 1, i * 2, i^3)
  holder[[i]] <- apple
}

Or you just do it the R way:

holder <- sapply(1:3, function(i) c(i + 1, i * 2, i^3))
holder.list <- sapply(1:3, function(i) c(i + 1, i * 2, i^3), simplify = FALSE)
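For reference, here is what the matrix version evaluates to (one column per i, columnwise just like the loop above):

holder
##      [,1] [,2] [,3]
## [1,]    2    3    4
## [2,]    2    4    6
## [3,]    1    8   27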

On a side note: if you struggle with this very basic problem in R, I strongly recommend browsing through any of the introductions you can find on the web. You can find a list of them at:

Where can I find useful R tutorials with various implementations?

How to implement Q-learning in R?

This post is by no means a complete implementation of Q-learning in R. It is an attempt to answer the OP with regard to the description of the algorithm in the website linked in the post and in Wikipedia.

The assumption here is that the reward matrix R is as described in the linked website: it encodes reward values for possible actions as non-negative numbers, and the -1's in the matrix represent null values (i.e., where there is no possible action to transition to that state).

With this setup, an R implementation of the Q update is:

Q[cs,ns] <- Q[cs,ns] + alpha*(R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)]) - Q[cs,ns])

where

  1. cs is the current state at the current point in the path.
  2. ns is the new state based on a (randomly) chosen action at the current state. This action is chosen from the collection of possible actions at the current state (i.e., for which R[cs,] > -1). Since the state transition itself is deterministic here, the action is the transition to the new state.
  3. For this action resulting in ns, we want to add its maximum (future) value over all possible actions that can be taken at ns. This is the so-called Max[Q(next state, all actions)] term in the linked website and the "estimate of optimal future value" in Wikipedia. To compute this, we want to maximize over the ns-th row of Q but consider only columns of Q for which columns of R at the corresponding ns-th row are valid actions (i.e., for which R[ns,] > -1). Therefore, this is:

    max(Q[ns, which(R[ns,] > -1)])

    This value can be interpreted as a one-step look-ahead value, or as an estimate of the cost-to-go in dynamic programming.

  4. The equation in the linked website is the special case in which alpha, the learning rate, is 1. We can view the equation in Wikipedia as:

    Q[cs,ns] <- (1-alpha)*Q[cs,ns] + alpha*(R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)]))

    where alpha "interpolates" between the old value Q[cs,ns] and the learned value R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)]). As noted in Wikipedia,

    In fully deterministic environments, a learning rate of alpha=1 is optimal
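
In other words, setting alpha = 1 collapses the update to the simpler form used on the linked website, with the old value discarded entirely:

Q[cs,ns] <- R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)])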

Putting it all together into a function:

q.learn <- function(R, N, alpha, gamma, tgt.state) {
  ## initialize Q to be a zero matrix, same size as R
  Q <- matrix(rep(0, length(R)), nrow = nrow(R))
  ## loop over episodes
  for (i in 1:N) {
    ## for each episode, choose an initial state at random
    cs <- sample(1:nrow(R), 1)
    ## iterate until we get to the tgt.state
    repeat {
      ## choose next state from possible actions at current state
      ## Note: if only one possible action, then choose it;
      ## otherwise, choose one at random
      next.states <- which(R[cs,] > -1)
      if (length(next.states) == 1)
        ns <- next.states
      else
        ns <- sample(next.states, 1)
      ## this is the update
      Q[cs,ns] <- Q[cs,ns] + alpha*(R[cs,ns] + gamma*max(Q[ns, which(R[ns,] > -1)]) - Q[cs,ns])
      ## break out of the loop if the target state is reached;
      ## otherwise, make the new state the current state and repeat
      if (ns == tgt.state) break
      cs <- ns
    }
  }
  ## return the resulting Q normalized by its max value
  return(100*Q/max(Q))
}

where the input parameters are:

  1. R is the rewards matrix as defined in the linked website.
  2. N is the number of episodes to iterate.
  3. alpha is the learning rate.
  4. gamma is the discount factor.
  5. tgt.state is the target state of the problem.

Using the example in the linked website as a test:

N <- 1000
alpha <- 1
gamma <- 0.8
tgt.state <- 6
## entries are listed column-wise
R <- matrix(c(-1,  -1, -1, -1,   0,  -1,
              -1,  -1, -1,  0,  -1,   0,
              -1,  -1, -1,  0,  -1,  -1,
              -1,   0,  0, -1,   0,  -1,
               0,  -1, -1,  0,  -1,   0,
              -1, 100, -1, -1, 100, 100), nrow = 6)
print(R)
##      [,1] [,2] [,3] [,4] [,5] [,6]
## [1,]   -1   -1   -1   -1    0   -1
## [2,]   -1   -1   -1    0   -1  100
## [3,]   -1   -1   -1    0   -1   -1
## [4,]   -1    0    0   -1    0   -1
## [5,]    0   -1   -1    0   -1  100
## [6,]   -1    0   -1   -1    0  100
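
As a quick sanity check of this encoding, the valid actions at a state are the columns with non-negative entries in that state's row; for state 1 the only option is the transition to state 5:

which(R[1,] > -1)
## [1] 5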

Q <- q.learn(R, N, alpha, gamma, tgt.state)
print(Q)
##      [,1] [,2] [,3] [,4] [,5]      [,6]
## [1,]    0    0  0.0    0   80   0.00000
## [2,]    0    0  0.0   64    0 100.00000
## [3,]    0    0  0.0   64    0   0.00000
## [4,]    0   80 51.2    0   80   0.00000
## [5,]   64    0  0.0   64    0 100.00000
## [6,]    0   80  0.0    0   80  99.99994
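
Not part of the original answer, but one way to read a greedy policy off the learned Q is to take the best-scoring action per state (note that which.max breaks ties by picking the first maximum):

best.action <- apply(Q, 1, which.max)  # per state, the action with the largest Q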

How to get help in R?

Getting help on a function that you know the name of

Use ? or, equivalently, help.

?mean
help(mean) # same

For non-standard names use quotes or backquotes; see An Introduction to R: Getting help with functions and features:

For a feature specified by special characters, the argument must be enclosed in double or single quotes, making it a “character string”: this is also necessary for a few words with syntactic meaning including if, for and function.

?`if`
?"if" # same
help("if") # same

There are also help pages for datasets, general topics and some packages.

?iris
?Syntax
?lubridate

Use the example function to see examples of how to use a function.

example(paste)
example(`for`)

The demo function gives longer demonstrations of how to use a function.

demo()                           # all demos in loaded pkgs
demo(package = .packages(all.available = TRUE)) # all demos
demo(plotmath)
demo(graphics)


Finding a function that you don't know the name of

Use ?? or, equivalently, help.search.

??regression
help.search("regression")

Again, non-standard names and phrases need to be quoted.

??"logistic regression"

apropos finds functions and variables in the current session-space (but not in installed but not-loaded packages) that match a regular expression.

apropos("z$") # all fns ending with "z"

rseek.org is an R search engine with a Firefox plugin.

RSiteSearch searches several sites directly from R.

findFn in sos wraps RSiteSearch, returning the results as an HTML table.

RSiteSearch("logistic regression")

library(sos)
findFn("logistic regression")


Finding packages

available.packages tells you all the packages that are available in the repositories that you set via setRepositories. installed.packages tells you all the packages that you have installed in all the libraries specified in .libPaths. library (without any arguments) is similar, returning the names and tag-line of installed packages.

View(available.packages())
View(installed.packages())
library()
.libPaths()

Similarly, data with no arguments tells you which datasets are available on your machine.

data()

search tells you which packages have been loaded.

search()

packageDescription shows you the contents of a package's DESCRIPTION file. Likewise, news reads the NEWS file.

packageDescription("utils")    
news(package = "ggplot2")


Getting help on variables

ls lists the variables in an environment.

ls()                 # global environment
ls(all.names = TRUE) # including names beginning with '.'
ls("package:sp") # everything for the sp package

Most variables can be inspected using str or summary.

str(sleep)
summary(sleep)

ls.str is like a combination of ls and str.

ls.str()
ls.str("package:grDevices")
lsf.str("package:grDevices") # only functions

For large variables (particularly data frames), the head function is useful for displaying the first few rows.

head(sleep)

args shows you the arguments for a function.

args(read.csv)


General learning about R

The Info page is a very comprehensive set of links to free R resources.

Many topics in R are documented via vignettes, listed with browseVignettes.

browseVignettes()
vignette("intro_sp", package = "sp")

By combining vignette with edit, you can get its code chunks in an editor.

edit(vignette("intro_sp",package="sp"))    

Convert c('a', 'c', 'b') to c('a', 'a', 'c', 'c', 'b', 'b') in R?

Use ?rep

vec <- c('a', 'c', 'b')
rep(vec, each = 2)
## [1] "a" "a" "c" "c" "b" "b"
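
For contrast, the times argument repeats the whole vector rather than each element:

rep(vec, times = 2)
## [1] "a" "c" "b" "a" "c" "b"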

Where can I find good tutorials on writing audio DSP filters (lowpass, etc)?

The link you really want from MusicDSP is http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt

I also recommend getting Lyons' Understanding Digital Signal Processing. I am a bit biased, though, since I was a reviewer for the second edition (but I think a third edition came out recently).

Also check out Digital Audio Signal Processing and DAFX: Digital Audio Effects, both by Udo Zölzer.

Implementation of metaheuristic algorithms in R

I'm not familiar with metaheuristics as a field, but the pseudocode as you've given it actually translates fairly easily into R syntax:

# I never metaheuristic I didn't like
metah <- function(S, quality, tweak, n, outer.limit, threshold)
{
  outer.n <- 0
  repeat {
    outer.n <- outer.n + 1
    R <- tweak(S)
    for (i in seq_len(n - 1))
    {
      W <- tweak(S)
      if (quality(W) > quality(R))
        R <- W
    }
    if (quality(R) > quality(S))
      S <- R
    if (quality(S) >= threshold || outer.n >= outer.limit)
      break
  }
  S
}

Now all you have to do is provide suitable functions for quality and tweak.

For example, suppose we want to fit a linear regression. In this case, we have a vector of responses y and a matrix of predictors X. The solution S would be the vector of candidate coefficients at each step, and the "quality" is the squared error loss: sum((y - yhat)^2). Note that here, the lower the quality value, the better.

For tweak, we might use a normal distribution of perturbations from the current solution S, with a user-specified covariance matrix.

This can then be coded up as

require(MASS) # for mvrnorm

quality <- function(S, y, X)
  sum((y - X %*% S)^2)

## perturb S by one multivariate normal draw; sigma defaults to the identity
tweak <- function(S, sigma = diag(length(S)))
  S + mvrnorm(1, mu = rep(0, length(S)), Sigma = sigma)

metah <- function(y, X, quality, tweak, n, outer.limit, threshold)
{
  outer.n <- 0
  S <- rep(1, ncol(X))
  repeat {
    outer.n <- outer.n + 1
    R <- tweak(S)
    for (i in seq_len(n - 1))
    {
      W <- tweak(S)
      if (quality(W, y, X) < quality(R, y, X)) # note reversed comparison!
        R <- W
    }
    if (quality(R, y, X) < quality(S, y, X))
      S <- R
    if (quality(S, y, X) <= threshold || outer.n >= outer.limit)
      break
  }
  S
}

Further improvements might be:

  1. Replace the inner loop for(i in ...) with vectorised code using *apply (see the sketch after this list)

  2. let the distribution of tweaks vary depending on the characteristics of the solution, instead of hard-coding it as above (in particular, sigma should vary based on the scale of your X variables)

  3. express threshold in terms of your progress toward a minimum, for example how far each candidate solution has moved from the previous iteration.
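
For the first point, a minimal sketch of the vectorised inner loop (assuming the same y, X, S, n, quality and tweak as above):

candidates <- replicate(n, tweak(S), simplify = FALSE)  # all candidates up front
scores <- vapply(candidates, function(W) quality(W, y, X), numeric(1))
R <- candidates[[which.min(scores)]]  # lowest squared error wins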

Plotting in R software, how to magnify axis values in a large PNG file

I suspect that you need to change it when you plot the graph, for example:

par(mar = c(4, 5, 1, 1))
plot(rnorm(30), xlab = "Big font", ylab = "Big font", cex.lab = 2, cex.axis = 1.5)

EDIT 1: To change the title size:

par(mar = c(5, 5, 4, 1))
plot(rnorm(30), xlab = "Big font", ylab = "Big font",
     cex.lab = 2, cex.axis = 1.5, cex.main = 3, main = "Big Font")


EDIT 2: Shaded plot area. This is not necessarily the best way to do it; there may be simpler and more elegant ways to shade the plot area.

a <- rnorm(30)
par(mar = c(5, 5, 4, 1))
plot(a, xlab = "Big font", ylab = "Big font", type = "n",
     cex.lab = 2, cex.axis = 1.5, cex.main = 3, main = "Big Font")
x <- par("usr")
rect(x[1], x[3], x[2], x[4], col = "grey")
points(a, pch = 19)
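
A possibly simpler alternative (a sketch, not from the original answer): plot has a panel.first argument that is evaluated after the axes are set up but before the data are drawn, so the background rectangle can be painted in a single call:

plot(a, xlab = "Big font", ylab = "Big font", pch = 19,
     cex.lab = 2, cex.axis = 1.5, cex.main = 3, main = "Big Font",
     panel.first = rect(par("usr")[1], par("usr")[3],
                        par("usr")[2], par("usr")[4], col = "grey"))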



