Extract Sub- and Superdiagonal of a Matrix in R

Use diag. For the superdiagonal, drop the last row and the first column; for the subdiagonal, drop the first row and the last column:

m <- matrix(1:9,nrow=3)

> m
     [,1] [,2] [,3]
[1,]    1    4    7
[2,]    2    5    8
[3,]    3    6    9
> diag(m)
[1] 1 5 9
> diag(m[-nrow(m),-1])
[1] 4 8
> diag(m[-1,-ncol(m)])
[1] 2 6
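The same trimming idea extends to the k-th diagonal: drop the last k rows and the first k columns. This is a hedged generalization of the answer above, not part of it; `k` is my own variable:

```r
m <- matrix(1:9, nrow = 3)
k <- 2  # second superdiagonal

# drop = FALSE guards the case where only one element remains
diag(m[seq_len(nrow(m) - k), -seq_len(k), drop = FALSE])
# [1] 7
```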

Add new rows to a matrix subdiagonal in R using vector values

Here's an attempt all in one go:

head(rbind(x, diag(y)),-1)

This creates a square matrix with the values of y on the diagonal, pushes it down by one row by rbind-ing x on top, then drops the extra row from the bottom of the newly created matrix.

Or a bit of creative indexing using row and col can do it too:

m <- matrix(0,nrow=7,ncol=7)
m[row(m)-col(m)==1] <- head(y,-1)
m[1,] <- x

Both returning:

#      [,1] [,2] [,3] [,4] [,5] [,6] [,7]
# [1,] 0.00  0.0  0.0  0.0  0.0  0.5  0.5
# [2,] 0.75  0.0  0.0  0.0  0.0  0.0  0.0
# [3,] 0.00  0.8  0.0  0.0  0.0  0.0  0.0
# [4,] 0.00  0.0  0.8  0.0  0.0  0.0  0.0
# [5,] 0.00  0.0  0.0  0.9  0.0  0.0  0.0
# [6,] 0.00  0.0  0.0  0.0  0.6  0.0  0.0
# [7,] 0.00  0.0  0.0  0.0  0.0  0.8  0.0
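The snippets above assume pre-existing vectors x and y. With hypothetical inputs chosen to reproduce the printed result, both constructions agree:

```r
# hypothetical inputs consistent with the output shown above
x <- c(0, 0, 0, 0, 0, 0.5, 0.5)            # becomes the first row
y <- c(0.75, 0.8, 0.8, 0.9, 0.6, 0.8, 0)   # subdiagonal values; the last entry is dropped

m1 <- head(rbind(x, diag(y)), -1)

m2 <- matrix(0, nrow = 7, ncol = 7)
m2[row(m2) - col(m2) == 1] <- head(y, -1)
m2[1, ] <- x

all.equal(unname(m1), m2)  # TRUE
```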

Looping through diagonal+1 of a matrix

You can index using a two-column matrix of (row, column) pairs, e.g.

m <- matrix(1:25, ncol = 5)

The off-diagonal elements can then be accessed using

offd <- cbind(1:4,2:5)

m[offd]

## [1] 6 12 18 24

You could create a function that does this:

offdiag <- function(m, offset) {
  i <- seq_len(nrow(m) - offset)
  j <- i + offset
  m[cbind(i, j)]
}

offdiag(m, 1)
## [1] 6 12 18 24
offdiag(m, 2)
## [1] 11 17 23
offdiag(m, 3)
## [1] 16 22
offdiag(m, 4)
## [1] 21
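A hedged variant (my own extension, not part of the answer above) also accepts negative offsets for subdiagonals by swapping the index roles:

```r
offdiag2 <- function(m, offset) {
  if (offset >= 0) {
    i <- seq_len(nrow(m) - offset)
    m[cbind(i, i + offset)]       # superdiagonals (and the main diagonal)
  } else {
    j <- seq_len(ncol(m) + offset)
    m[cbind(j - offset, j)]       # subdiagonals
  }
}

m <- matrix(1:25, ncol = 5)
offdiag2(m, 1)   # 6 12 18 24
offdiag2(m, -1)  # 2 8 14 20
```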

Best way of storing 100 matrices in R?

Writing from my phone, but you can try this:

as.data.frame(lapply(matrixList, function(M) diag(M[, -1]) ))

Alternatively, if they are all exactly 10x10, you can substitute this for the `function(M)` above:

    M[(1:9)*11] 
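As a self-contained check (matrixList is hypothetical here, with three matrices instead of the question's 100), both forms extract the same superdiagonals:

```r
set.seed(1)
matrixList <- setNames(
  replicate(3, matrix(rnorm(100), 10, 10), simplify = FALSE),
  paste0("M", 1:3)
)

d1 <- as.data.frame(lapply(matrixList, function(M) diag(M[, -1])))  # superdiagonals
d2 <- as.data.frame(lapply(matrixList, function(M) M[(1:9) * 11]))  # same, via linear indices
all.equal(d1, d2)  # TRUE
```

The linear-index form works because element (i, i+1) of a 10x10 matrix sits at position i + 10*i = 11*i in column-major order.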

Sum of antidiagonal of a matrix

Using

m <- matrix(c(2, 3, 1, 4, 2, 5, 1, 3, 7), 3)

1) Reverse the rows as shown (or the columns - not shown), take the diagonal and sum:

sum(diag(m[nrow(m):1, ]))
## [1] 4

2) or use row and col like this:

sum(m[c(row(m) + col(m) - nrow(m) == 1)])
## [1] 4

This generalizes to other anti-diagonals since row(m) + col(m) - nrow(m) is constant along all anti-diagonals. For such a generalization it might be more convenient to write the part within c(...) as row(m) + col(m) - nrow(m) - 1 == 0 since then replacing 0 with -1 uses the superdiagonal and with +1 uses the subdiagonal. -2 and 2 use the second superdiagonal and subdiagonal respectively and so on.
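To illustrate the offset convention just described on the matrix above:

```r
m <- matrix(c(2, 3, 1, 4, 2, 5, 1, 3, 7), 3)
k <- row(m) + col(m) - nrow(m) - 1   # constant along each anti-diagonal

sum(m[k == 0])   # [1] 4   anti-diagonal
sum(m[k == -1])  # [1] 7   super anti-diagonal
sum(m[k == 1])   # [1] 8   sub anti-diagonal
```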

3) or use this sequence of indexes:

n <- nrow(m)
sum(m[seq(n, by = n-1, length = n)])
## [1] 4

4) or use outer like this:

n <- nrow(m)
sum(m[!c(outer(1:n, n:1, "-"))])
## [1] 4

This one generalizes nicely to other anti-diagonals too, as outer(1:n, n:1, "-") is constant along anti-diagonals. We can write m[outer(1:n, n:1, "-") == 0], and if we replace 0 with -1 we get the super anti-diagonal and with +1 the sub anti-diagonal; -2 and 2 give the second super and sub anti-diagonals, and so on. For example, sum(m[c(outer(1:n, n:1, "-") == 1)]) is the sum of the sub anti-diagonal.

Selecting specific elements from a matrix all at once

Indexing can be done with two-column matrices. After converting those row and column numbers to a valid R object (rather than Matlab-style expressions):

> idxs <- gsub("\\]",")", gsub("\\[", "c(",  "[1,2], [2,3], [3,4], [4,5] ,[5,6]") )
# I edited the string value that idxs returned:
> midx <- rbind( c(1,2), c(2,3), c(3,4), c(4,5) ,c(5,6) )
> mat <- matrix(scan(), nrow=6)
1: 0.000000 3.772139 6.367721 8.978718 12.197210 13.401126
7: 3.772139 0.000000 3.755554 5.935946 9.592700 11.664533
13: 6.367721 3.755554 0.000000 5.999409 9.324764 11.991269
19: 8.978718 5.935946 5.999409 0.000000 3.810169 6.762802
25: 12.197210 9.592700 9.324764 3.810169 0.000000 3.796884
31: 13.401126 11.664533 11.991269 6.762802 3.796884 0.000000
37:
Read 36 items
> mat[midx]
[1] 3.772139 3.755554 5.999409 3.810169 3.796884

If your goal were to index the super-diagonal, that could be accomplished more generally:

> mat[col(mat)==row(mat)+1]
[1] 3.772139 3.755554 5.999409 3.810169 3.796884

mean diagonal minus one

Here's a possibility

mean(m[col(m) == (row(m) - 1)])
## [1] 3.5

The idea here is to get the column and row indices and then select only the values where column == row - 1 (just below the diagonal, as the diagonal itself is col == row).


Data

m <- structure(c(0L, 2L, 1L, 1L, 1L, 1L, 0L, 3L, 1L, 1L, 1L, 1L, 0L, 
4L, 1L, 1L, 1L, 1L, 0L, 5L, 1L, 1L, 1L, 1L, 0L), .Dim = c(5L,
5L), .Dimnames = list(c("a1", "a2", "a3", "a4", "a5"), c("a1",
"a2", "a3", "a4", "a5")))

Square Root of a Singular Matrix in R

Your question relates to two distinct problems:

  1. Inverse of a matrix
  2. Square root of a matrix

Inverse

The inverse does not exist for singular matrices. In some applications, the Moore-Penrose or some other generalised inverse may be taken as a suitable substitute for the inverse. However, note that computer numerics will incur rounding errors in most cases; and these errors may make a singular matrix appear regular to the computer or vice versa.

If A always exhibits the block structure of the matrix you give, I suggest considering only its non-diagonal block

A3 = A[ c( 1, 3, 4 ), c( 1, 3, 4 ) ]

A3
             [,1]         [,2]         [,3]
[1,] 6.041358e+11 5.850849e+13 6.767194e+13
[2,] 5.850849e+13 1.066390e+16 1.087463e+16
[3,] 6.767194e+13 1.087463e+16 1.131599e+16

instead of all of A for better efficiency and less rounding issues. The remaining 1-diagonal entries would remain 1 in the inverse of the square root, so no need to clutter the calculation with them. To get an impression of the impact of this simplification, note that R can calculate

A3inv = solve(A3)

while it could not calculate

Ainv = solve(A)

But we will not need A3inv, as will become evident below.

Square root

As a general rule, the square root of a matrix A will only exist if the matrix has a diagonal Jordan normal form (https://en.wikipedia.org/wiki/Jordan_normal_form). Hence, there is no truly general solution of the problem as you require.

Fortunately, just as “most” (real or complex) matrices are invertible, “most” (real or complex) matrices have a diagonal complex Jordan normal form. In this case, the Jordan normal form

A3 = T·J·T⁻¹

can be calculated in R as such:

library(Matrix)  # for Diagonal()

X = eigen(A3)
T = X$vectors
J = Diagonal( x=X$values )

To test this recipe, compare

Tinv = solve(T)
T %*% J %*% Tinv

with A3. They should match (up to rounding errors) if A3 has a diagonal Jordan normal form.

Since J is diagonal, its square root is simply the diagonal matrix of the square roots of its entries:

Jsqrt = Diagonal( x=sqrt( X$values ) )

so that Jsqrt·Jsqrt = J. Moreover, this implies

(T·Jsqrt·T⁻¹)² = T·Jsqrt·T⁻¹·T·Jsqrt·T⁻¹ = T·Jsqrt·Jsqrt·T⁻¹ = T·J·T⁻¹ = A3

so that in fact we obtain

√A3 = T·Jsqrt·T⁻¹

or in R code

A3sqrt = T %*% Jsqrt %*% Tinv

To test this, calculate

A3sqrt %*% A3sqrt

and compare with A3.
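As a self-contained sketch of the recipe (my own small test matrix, using base diag() in place of Matrix::Diagonal, which works the same way here):

```r
A <- matrix(c(4, 1, 0,
              1, 3, 1,
              0, 1, 2), 3, 3)    # symmetric positive definite: real eigenvalues > 0

X     <- eigen(A)
T     <- X$vectors
Tinv  <- solve(T)
Asqrt <- T %*% diag(sqrt(X$values)) %*% Tinv

max(abs(Asqrt %*% Asqrt - A))    # ~0, up to rounding
```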

Square root of the inverse

The square root of the inverse (or, equally, the inverse of the square root) can be calculated easily once a diagonal Jordan normal form has been calculated. Instead of J use

Jinvsqrt = Diagonal( x=1/sqrt( X$values ) )

and calculate, analogously to above,

A3invsqrt = T %*% Jinvsqrt %*% Tinv

and observe

A3·A3invsqrt·A3invsqrt = T·J·T⁻¹ · T·Jinvsqrt·T⁻¹ · T·Jinvsqrt·T⁻¹ = T·J·Jinvsqrt·Jinvsqrt·T⁻¹ = 1

the unit matrix, so that A3invsqrt is the desired result.
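Continuing the same eigen-based sketch for the inverse square root (again with a hypothetical small matrix and base diag()):

```r
A <- matrix(c(4, 1, 0, 1, 3, 1, 0, 1, 2), 3, 3)  # symmetric positive definite

X <- eigen(A)
T <- X$vectors
Ainvsqrt <- T %*% diag(1 / sqrt(X$values)) %*% solve(T)

max(abs(A %*% Ainvsqrt %*% Ainvsqrt - diag(3)))  # ~0, up to rounding
```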

In case A3 is not invertible, a generalised inverse (not necessarily the Moore-Penrose one) can be calculated by replacing all undefined entries in Jinvsqrt by 0, but as I said above, this should be done with suitable care in the light of the overall application and its stability against rounding errors.

In case A3 does not have a diagonal Jordan normal form, there is no square root, so the above formulas will yield some other result. To avoid running into this case at times of bad luck, it is best to implement a test of whether

A3invsqrt %*% A3 %*% A3invsqrt

is close enough to what you would consider a unit matrix (this only applies if A3 was invertible in the first place).

PS: Note that you can choose a sign ± for each diagonal entry of Jinvsqrt to your liking.


