Simulate an AR(1) Process with Uniform Innovations

We can use the base R function arima.sim for this task; no extra libraries are required.

By default, arima.sim generates an ARIMA process with innovations drawn from N(0, 1). To change this, we control the rand.gen or innov argument. For example, if we want innovations from the uniform distribution U[-0.5, 0.5], we can do either of the following:

arima.sim(model=list(ar=.75), n=100, rand.gen = runif, min = -0.5, max = 0.5)

arima.sim(model=list(ar=.75), n = 100, innov = runif(100, -0.5, 0.5))
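
One subtlety worth noting: when only innov is supplied, arima.sim still draws its burn-in innovations (the start.innov argument) from rand.gen, which defaults to rnorm. If the burn-in should be uniform as well, start.innov can be supplied too; a sketch (the burn-in length of 14 is an arbitrary choice, not something arima.sim requires):

arima.sim(model = list(ar = .75), n = 100,
          innov = runif(100, -0.5, 0.5),
          start.innov = runif(14, -0.5, 0.5), n.start = 14)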

Example

set.seed(0)
y <- arima.sim(model=list(ar=.75), n = 100, innov = runif(100, -0.5, 0.5))
ts.plot(y)

[plot of the simulated series y]


If we want explicit control over the starting value y[0], we can simply shift the series above so that it starts at the desired value. Suppose y0 is the desired starting value; then we can do

y <- y - y[1] + y0

For example, starting from y0 = 1:

y <- y - y[1] + 1
ts.plot(y)

[plot of the shifted series starting at y0 = 1]

Recursive calculation of a simulated variable

This is a typical autoregressive process, which can be generated using filter with the "recursive" method.

e <- rnorm(156, 0, 0.001)
filter(x = c(0, e), filter = 0.5, method = "recursive")[-1]
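
If a non-zero starting value is needed, stats::filter also has an init argument for recursive filters (init gives the value of the series just before the start; the 1 below is only an illustration):

e <- rnorm(156, 0, 0.001)
filter(x = e, filter = 0.5, method = "recursive", init = 1)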

Let's consider a small example of length 5:

set.seed(0)

e <- rnorm(5, 0, 0.1)
# [1] 0.12629543 -0.03262334 0.13297993 0.12724293 0.04146414

x <- filter(x = c(0, e), filter = 0.5, method = "recursive")

x[-1]
# [1] 0.12629543 0.03052438 0.14824212 0.20136399 0.14214614
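
To make explicit what method = "recursive" computes, here is the same recursion written as a plain loop (a verification sketch only, not how filter is implemented internally):

x_loop <- numeric(5)
x_prev <- 0                          # starting value, matching the leading 0 above
for (t in 1:5) {
  x_loop[t] <- 0.5 * x_prev + e[t]   # x[t] = 0.5 * x[t-1] + e[t]
  x_prev    <- x_loop[t]
}
x_loop
# same values as x[-1] above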

filter is the workhorse behind arima.sim; however, it is simply a computational routine backed by compiled C code and does not require the process to be stationary. Readers interested in arima.sim may continue with:

  • Simulate a time series
  • Simulate an AR(1) process with uniform innovations

AR(1) simulation with non-zero mean

Your last option is fine for getting the desired mean, "mu". It generates data from the model:

(y[t] - mu) = phi * (y[t-1] - mu) + epsilon[t],   epsilon[t] ~ N(0, sigma = 21),
t = 1, 2, ..., n.
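
One standard way to simulate from this mean form is to simulate a zero-mean AR(1) with arima.sim and add mu; a minimal sketch, using the values that appear in this discussion (phi = 0.8, mu = 300, sd = 21):

phi <- 0.8; mu <- 300
y <- mu + arima.sim(model = list(ar = phi), n = 200, sd = 21)
mean(y)   # fluctuates around 300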

Your first approach sets an intercept, "alpha", rather than a mean:

y[t] = alpha + phi * y[t-1] + epsilon[t].
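
For completeness: the intercept form has stationary mean alpha / (1 - phi), so the intercept that targets a mean mu is alpha = mu * (1 - phi). A sketch using filter with the values from this discussion (init = mu starts the series at its mean, so no burn-in drift is visible):

phi   <- 0.8; mu <- 300
alpha <- mu * (1 - phi)              # intercept implied by the target mean
eps   <- rnorm(500, 0, 21)
y     <- filter(alpha + eps, filter = phi, method = "recursive", init = mu)
mean(y)                              # close to 300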

Your second option sets the starting value y[0] equal to 300. As long as |phi| < 1, the influence of this initial value vanishes after a few periods and has no lasting effect on the level of the series.

Edit

The standard deviation that you observe in the simulated data is correct. Be aware that the variance of the AR(1) process, y[t], is not equal to the variance of the innovations, epsilon[t]. The variance of the AR(1) process, sigma^2_y, can be obtained as follows:

Var(y[t]) = Var(alpha) + phi^2 * Var(y[t-1]) + Var(epsilon[t])

As the process is stationary, Var(y[t]) = Var(y[t-1]), which we call sigma^2_y. Thus, we get:

sigma^2_y = 0 + phi^2 * sigma^2_y + sigma^2_epsilon
sigma^2_y = sigma^2_epsilon / (1 - phi^2)

For the parameter values that you are using, this gives:

sigma_y = sqrt(21^2 / (1 - 0.8^2)) = 35.
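
A quick empirical check of this value (a sketch; the large sample size is only there to make the sample standard deviation stable):

set.seed(1)
y <- arima.sim(model = list(ar = 0.8), n = 1e5, sd = 21)
sd(y)   # close to 35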

Why Rcpp OpenMP-parallelized code sometimes gives: Segmentation fault (core dumped)

Here is a possible use of RMatrix that should help with the segmentation faults:

// [[Rcpp::depends(RcppParallel)]]
// [[Rcpp::plugins(openmp)]]
#include <RcppParallel.h>
#include <Rcpp.h>
#include <omp.h>

using namespace Rcpp;
// [[Rcpp::export]]
float fsvalue(NumericMatrix x1, NumericMatrix x2, int n_cpu = 2) {
  // thread-safe, non-allocating accessors for use inside the OpenMP region
  RcppParallel::RMatrix<double> X1(x1);
  RcppParallel::RMatrix<double> X2(x2);
  // ... [your code as above, reading from X1/X2 rather than x1/x2] ...
}
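
To compile and call this from R, Rcpp::sourceCpp can be used; a usage sketch, where the file name fsvalue.cpp and the test matrices are illustrative assumptions:

library(Rcpp)
sourceCpp("fsvalue.cpp")   # hypothetical file containing the C++ code above
m1 <- matrix(rnorm(1e4), 100, 100)
m2 <- matrix(rnorm(1e4), 100, 100)
fsvalue(m1, m2, n_cpu = 2)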

