Large-Scale Regression in R with a Sparse Feature Matrix

I don't know about SparseM, but the MatrixModels package has an unexported lm.fit.sparse function that you can use; see ?MatrixModels:::lm.fit.sparse. Here is an example:

Create the data:

library(Matrix)  # provides the coercion from factor to sparseMatrix

y <- rnorm(30)
x <- factor(sample(letters, 30, replace = TRUE))
X <- as(x, "sparseMatrix")  # sparse indicator matrix: one row per level, one column per observation
class(X)
# [1] "dgCMatrix"
# attr(,"package")
# [1] "Matrix"
dim(X)
# [1] 18 30

Run the regression:

MatrixModels:::lm.fit.sparse(t(X), y)
# [1] -0.17499968 -0.89293312 -0.43585172 0.17233007 -0.11899582 0.56610302
# [7] 1.19654666 -1.66783581 -0.28511569 -0.11859264 -0.04037503 0.04826549
# [13] -0.06039113 -0.46127034 -1.22106064 -0.48729092 -0.28524498 1.81681527
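
The coefficient vector comes back unnamed; assuming the factor levels are kept as the row names of X (which the coercion above produces), you can attach them to make the comparison with lm() below easier to read:

coefs <- MatrixModels:::lm.fit.sparse(t(X), y)
setNames(coefs, rownames(X))  # label each coefficient with its factor level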

For comparison:

lm(y ~ x - 1)

# Call:
# lm(formula = y ~ x - 1)
#
# Coefficients:
#       xa       xb       xd       xe       xf       xg       xh       xj
# -0.17500 -0.89293 -0.43585  0.17233 -0.11900  0.56610  1.19655 -1.66784
#       xm       xq       xr       xt       xu       xv       xw       xx
# -0.28512 -0.11859 -0.04038  0.04827 -0.06039 -0.46127 -1.22106 -0.48729
#       xy       xz
# -0.28524  1.81682

How to consistently standardize sparse feature matrix in scikit-learn?

Digging through the scikit-learn code, it looks like when fit_intercept=True and normalize=True, the coefficients estimated on the normalized data are projected back to the original scale of the data, similar to the way glmnet in R handles standardization. The relevant code is the _set_intercept method of LinearModel; see https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/base.py#L158. So predictions on unseen data use coefficients in the original scale; in other words, normalize=True is safe to use.
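
For intuition, here is a minimal R sketch (plain OLS on made-up data, not the scikit-learn code itself) of that back-projection: fit on centered and scaled features, then divide the coefficients by the scale factors and recompute the intercept, which reproduces the fit on the raw data:

set.seed(1)
X <- matrix(rnorm(100 * 3), 100, 3)
y <- drop(X %*% c(1, -2, 0.5) + rnorm(100))

x_mean <- colMeans(X)
x_sd   <- apply(X, 2, sd)
Xs     <- scale(X, center = x_mean, scale = x_sd)  # standardized features
yc     <- y - mean(y)                              # centered response

beta_std  <- coef(lm(yc ~ Xs - 1))   # coefficients on the standardized scale
beta_orig <- beta_std / x_sd         # project back to the original scale
intercept <- mean(y) - sum(x_mean * beta_orig)

c(intercept, beta_orig)   # matches coef(lm(y ~ X))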

Ordinary least squares with glmnet and lm

If you check out these two posts, you will get a sense of why you are not getting the same results.

In essence, glmnet estimates the model by penalized maximum likelihood along a regularization path, while lm solves the least squares problem via QR decomposition, so the estimates will never be exactly the same.

However, note in the manual for ?glmnet under "lambda":

WARNING: use with care. Do not supply a single value for lambda (for
predictions after CV use predict() instead). Supply instead a
decreasing sequence of lambda values. glmnet relies on its warm
starts for speed, and it's often faster to fit a whole path than
compute a single fit.

You can do (at least) three things to get the coefficients close enough that the difference is trivial: (1) supply a range of values for lambda, (2) decrease the convergence threshold thresh, and (3) increase the maximum number of iterations.

library(glmnet)
options(scipen = 999)

X = model.matrix(mpg ~ 0 + ., data = mtcars)
y = as.matrix(mtcars["mpg"])
lfit <- glmnet(X, y, lambda = rev(0:99), thresh = 1E-10)  # decreasing lambda path ending at 0, tight threshold
lmfit <- lm(y ~ X)
coef(lfit, s = 0) - coef(lmfit)  # difference from the OLS coefficients at lambda = 0
11 x 1 Matrix of class "dgeMatrix"
                          1
(Intercept)  0.004293053125
cyl         -0.000361655351
disp        -0.000002631747
hp           0.000006447138
drat        -0.000065394578
wt           0.000180943607
qsec        -0.000079480187
vs          -0.000462099248
am          -0.000248796353
gear        -0.000222035415
carb        -0.000071164178

X = model.matrix(mpg ~ 0 + . + . * disp, data = mtcars)
y = as.matrix(mtcars["mpg"])
lfit <- glmnet(X, y, lambda = rev(0:99), thresh = 1E-12, maxit = 10^7)  # tighter threshold and more iterations
lmfit <- glm(y ~ X)
coef(lfit, s = 0) - coef(lmfit)
20 x 1 Matrix of class "dgeMatrix"
                           1
(Intercept) -0.3174019115228
cyl          0.0414909318817
disp         0.0020032493403
hp           0.0001834076765
drat         0.0188376047769
wt          -0.0120601219002
qsec         0.0019991131315
vs           0.0636756040430
am           0.0439343002375
gear        -0.0161102501755
carb        -0.0088921918062
cyl:disp    -0.0002714213271
disp:hp     -0.0000001211365
disp:drat   -0.0000859742667
disp:wt      0.0000462418947
disp:qsec   -0.0000175276420
disp:vs     -0.0004657059892
disp:am     -0.0003517289096
disp:gear    0.0001629963377
disp:carb    0.0000085312911

Some of the differences for the interacted model are probably still non-trivial, but they are closer.

Large-scale pseudoinverse

You might be better off using a block iterative algorithm that converges directly to the least squares solution, rather than computing that solution through the pseudoinverse. See "Applied Iterative Methods" by Charlie Byrne. These algorithms are closely related to Krylov subspace methods but are tuned for easy computation. You can get an introduction by looking at chapter 3 of this preprint of another of his books.
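
To give a flavor of what such methods look like, here is a minimal R sketch (not from Byrne's book) of the Landweber iteration, one of the simplest iterative schemes that converges to the least squares solution without ever forming a pseudoinverse, provided the step size gamma satisfies 0 < gamma < 2 / ||A||^2:

set.seed(1)
A <- matrix(rnorm(200 * 10), 200, 10)
b <- rnorm(200)

gamma <- 1 / norm(A, "2")^2   # conservative step size based on the spectral norm
x <- rep(0, ncol(A))
for (k in 1:1000) {
  x <- x + gamma * drop(crossprod(A, b - A %*% x))  # x + gamma * A'(b - A x)
}

cbind(iterative = x, qr = qr.coef(qr(A), b))  # agrees with the QR least squares solution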


