Are Eigenvectors Returned by R Function eigen() Wrong?

Are eigenvectors returned by R function eigen() wrong?

Working the problem out on paper tells you that

  • the eigenvector for eigenvalue 3 is (-s, s) for any non-zero real value s;
  • the eigenvector for eigenvalue 1 is (t, t) for any non-zero real value t.

Scaling the eigenvectors to unit length gives

s = ± sqrt(0.5) = ±0.7071068
t = ± sqrt(0.5) = ±0.7071068
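
Doing that normalization in R reproduces the numbers above:

c(-1, 1) / sqrt(sum(c(-1, 1)^2))  ## eigenvector for eigenvalue 3
#[1] -0.7071068  0.7071068
c(1, 1) / sqrt(sum(c(1, 1)^2))    ## eigenvector for eigenvalue 1
#[1] 0.7071068 0.7071068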

Scaling is useful because if the matrix is real symmetric, the matrix of eigenvectors is orthonormal, so its inverse is its transpose. Take your real symmetric matrix a as an example:

a <- matrix(c(2, -1, -1, 2), 2)
#     [,1] [,2]
#[1,]    2   -1
#[2,]   -1    2

E <- eigen(a)

d <- E[[1]]
#[1] 3 1

u <- E[[2]]
#           [,1]       [,2]
#[1,] -0.7071068 -0.7071068
#[2,]  0.7071068 -0.7071068

u %*% diag(d) %*% solve(u) ## don't do this stupid computation in practice
#     [,1] [,2]
#[1,]    2   -1
#[2,]   -1    2

u %*% diag(d) %*% t(u) ## don't do this stupid computation in practice
#     [,1] [,2]
#[1,]    2   -1
#[2,]   -1    2

crossprod(u)
#     [,1] [,2]
#[1,]    1    0
#[2,]    0    1

tcrossprod(u)
#     [,1] [,2]
#[1,]    1    0
#[2,]    0    1

How to find eigenvectors using the textbook method

The textbook method is to solve the homogeneous system (A - λI)x = 0 for a basis of its null space. A NullSpace() function like the one from my other answer, sketched below, is helpful here.
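
The exact NullSpace() implementation from that answer is not reproduced in this excerpt; the following is a minimal QR-based sketch of such a function (variable names and rank handling are illustrative, not the original code). It returns basis vectors in "textbook", non-normalized form, setting each free variable to 1 in turn.

NullSpace <- function(A) {
  n <- ncol(A)
  QR <- qr(A)                          ## pivoted QR factorization
  r <- QR$rank                         ## numerical rank of A
  if (r == n) return(matrix(0, n, 1))  ## only the trivial solution x = 0
  R <- QR$qr[1:r, , drop = FALSE]      ## upper-trapezoidal factor R
  Fr <- R[, (r + 1):n, drop = FALSE]   ## columns belonging to the free variables
  B <- -backsolve(R, Fr, r)            ## back-solve for the pivot variables
  Y <- rbind(B, diag(1, n - r))        ## set each free variable to 1 in turn
  Y[order(QR$pivot), , drop = FALSE]   ## undo the column pivoting
}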

## your matrix
a <- matrix(c(2, -1, -1, 2), 2)

## knowing that eigenvalues are 3 and 1

## eigenvector for eigenvalue 3
NullSpace(a - diag(3, nrow(a)))
#     [,1]
#[1,]   -1
#[2,]    1

## eigenvector for eigenvalue 1
NullSpace(a - diag(1, nrow(a)))
#     [,1]
#[1,]    1
#[2,]    1

As you can see, they are not "normalized". By contrast, pracma::nullspace gives "normalized" eigenvectors, so you get something consistent with the output of eigen (up to possible sign flipping):

library(pracma)

nullspace(a - diag(3, nrow(a)))
#           [,1]
#[1,] -0.7071068
#[2,]  0.7071068

nullspace(a - diag(1, nrow(a)))
#          [,1]
#[1,] 0.7071068
#[2,] 0.7071068

eigen() and the correct eigenvectors

The eigenvectors returned by R are normalized to unit length (Euclidean norm). If V is an eigenvector, then s * V is an eigenvector as well for any non-zero scalar s. If you want the stationary distribution as in your link, divide by the sum:

V / sum(V)

and you will get (1/4, 1/2, 1/4).
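
The matrix C from the linked question is not shown in this excerpt, so here is a hypothetical 3-state transition matrix, chosen so that its stationary distribution is exactly (1/4, 1/2, 1/4), just to make the recipe concrete:

C <- matrix(c(0.50, 0.50, 0.00,
              0.25, 0.50, 0.25,
              0.00, 0.50, 0.50), 3, byrow = TRUE)

## stationary distribution = left eigenvector of C for eigenvalue 1,
## i.e. the first eigenvector of t(C), rescaled to sum to 1
V <- eigen(t(C))$vectors[, 1]
V / sum(V)
#[1] 0.25 0.50 0.25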

So:

ev <- eigen(t(C))$vectors
sweep(ev, 2, colSums(ev), "/")  ## divide each column by its own sum

to get all the solutions in one shot.

Signs of eigenvectors change depending on the specification of the symmetric argument for symmetric matrices

Linear algebra libraries like LAPACK contain multiple subroutines for carrying out operations like eigendecompositions. The particular subroutine used in any given case may depend on the type of matrix being decomposed, and the pieces of that decomposition needed by the user.

As you can see in this snippet from eigen's code, it dispatches to different LAPACK subroutines depending on whether symmetric=TRUE or symmetric=FALSE (and also on whether the matrix is real or complex).

if (symmetric) {
    z <- if (!complex.x)
        .Internal(La_rs(x, only.values))
    else .Internal(La_rs_cmplx(x, only.values))
    ord <- rev(seq_along(z$values))
}
else {
    z <- if (!complex.x)
        .Internal(La_rg(x, only.values))
    else .Internal(La_rg_cmplx(x, only.values))
    ord <- sort.list(Mod(z$values), decreasing = TRUE)
}

Based on pointers in ?eigen, La_rs() (used when symmetric=TRUE) appears to refer to dsyevr while La_rg() refers to dgeev.

To learn exactly why those two algorithms flip some of the signs of the eigenvectors of the matrix you've handed to eigen(), you'd have to dig into the FORTRAN code used to implement them. (Since, as others have noted, the sign is irrelevant, I'm guessing you won't want to dig quite that deep.)
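
You can, however, see the effect without reading any Fortran. Using the symmetric 2 x 2 matrix from the first section (the exact signs you get may depend on your BLAS/LAPACK build, which is rather the point):

a <- matrix(c(2, -1, -1, 2), 2)
eigen(a, symmetric = TRUE)$vectors   ## dsyevr code path (La_rs)
eigen(a, symmetric = FALSE)$vectors  ## dgeev code path (La_rg)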

A function for calculating the eigenvalues of a matrix in R

Here is one solution using uniroot.all:

library(rootSolve)

myeig <- function(mat) {
  ## characteristic polynomial det(mat - lambda * I) for a single lambda
  myeig1 <- function(lambda) {
    y <- mat
    diag(y) <- diag(mat) - lambda
    det(y)
  }
  ## vectorized wrapper, as required by uniroot.all
  myeig2 <- function(lambda) sapply(lambda, myeig1)
  ## find all real roots of the characteristic polynomial in the interval
  uniroot.all(myeig2, c(-10, 10))
}

R > x <- matrix(rnorm(9), 3)
R > eigen(x)$values
[1] -1.77461906 -1.21589769 -0.01010515
R > myeig(x)
[1] -1.77462211 -1.21589767 -0.01009019
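
As a sanity check on a matrix with a known spectrum, running myeig() on the symmetric matrix from the first section should return values close to 1 and 3 (in increasing order). As the comparison above shows, the roots carry a small numerical error, and only real eigenvalues lying inside the search interval c(-10, 10) can be found.

## sanity check: expect roots close to 1 and 3
a <- matrix(c(2, -1, -1, 2), 2)
myeig(a)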

Find eigenvector for a given eigenvalue in R

Doesn't the eigen() function give you what you are looking for?

> B <- matrix(1:9, 3)
> eigen(B)
$values
[1] 1.611684e+01 -1.116844e+00 -4.054214e-16

$vectors
           [,1]       [,2]       [,3]
[1,] -0.4645473 -0.8829060  0.4082483
[2,] -0.5707955 -0.2395204 -0.8164966
[3,] -0.6770438  0.4038651  0.4082483
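
If you are after the eigenvector belonging to one particular eigenvalue, pick the column whose eigenvalue is numerically closest to your target (the value 16.116844 below is just an illustrative target, matching the first eigenvalue printed above):

> ee <- eigen(B)
> lambda <- 16.116844                  # hypothetical target eigenvalue
> ee$vectors[, which.min(abs(ee$values - lambda))]
[1] -0.4645473 -0.5707955 -0.6770438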

