Linear Regression with a known fixed intercept in R
You could subtract the explicit intercept from the regressand and then fit the intercept-free model:
> intercept <- 1.0
> fit <- lm(I(x - intercept) ~ 0 + y, lin)
> summary(fit)
The 0 + suppresses the fitting of the intercept by lm.
Edit: To plot the fit, use
> abline(intercept, coef(fit))
P.S. The variables in your model look the wrong way round: it's usually y ~ x, not x ~ y (i.e. the regressand should go on the left and the regressor(s) on the right).
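For concreteness, here is a minimal self-contained sketch of the technique on simulated data (the question's lin data frame isn't shown, so the values below are made up), using the y ~ x orientation from the P.S.:

set.seed(1)
# Stand-in for the questioner's `lin` data frame:
# true intercept 1.0, true slope 2.5
lin <- data.frame(x = runif(50, 0, 10))
lin$y <- 1.0 + 2.5 * lin$x + rnorm(50, sd = 0.5)

intercept <- 1.0
# Subtract the known intercept from the response and suppress the
# free intercept with `0 +`
fit <- lm(I(y - intercept) ~ 0 + x, data = lin)
coef(fit)    # slope estimate, close to the true 2.5

plot(y ~ x, data = lin)
abline(intercept, coef(fit))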
How to fit a known linear equation to my data in R?
Let's be clear what you are asking here. You have an existing model, which is "the modelled values are the expected value of the measured values", or in other words, measured = modelled + e, where e are the normally distributed residuals.
You say that the "optimal fit" should be a straight line with intercept 0 and slope 1, which is another way of saying the same thing.
The thing is, this "optimal fit" is not the optimal fit for your actual data, as we can easily see by doing:
summary(lm(measured ~ modelled))
#>
#> Call:
#> lm(formula = measured ~ modelled)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -103.328 -39.130 -4.881 40.428 114.829
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 23.09461 13.11026 1.762 0.083 .
#> modelled 0.91143 0.07052 12.924 <2e-16 ***
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> Residual standard error: 55.13 on 63 degrees of freedom
#> Multiple R-squared: 0.7261, Adjusted R-squared: 0.7218
#> F-statistic: 167 on 1 and 63 DF, p-value: < 2.2e-16
This shows us the line that would produce the optimal fit to your data in terms of reducing the sum of the squared residuals.
But I guess what you are asking is "How well do my data fit the model measured = modelled + e?"
Trying to coerce lm into giving you a fixed intercept and slope probably isn't the best way to answer this question. Remember, the p value for the slope only tells you whether the actual slope is significantly different from 0. The above model already confirms that. If you want to know the r-squared of measured = modelled + e, you just need to know the proportion of the variance of measured that is explained by modelled. In other words:
1 - var(measured - modelled) / var(measured)
#> [1] 0.7192672
This is pretty close to the r-squared from the lm call.
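If you want to make that comparison programmatically, a minimal sketch (assuming the same measured and modelled vectors as above):

fit <- lm(measured ~ modelled)
summary(fit)$r.squared                        # ~0.7261, the "Multiple R-squared" above
1 - var(measured - modelled) / var(measured)  # ~0.7193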
I think you have sufficient evidence to say that your data are consistent with the model measured = modelled, in that the 95% confidence interval for the slope in the lm model includes the value 1, and the 95% confidence interval for the intercept includes the value 0.
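You can check those confidence intervals directly with confint (again assuming the measured and modelled vectors from above):

confint(lm(measured ~ modelled))
# Consistency with measured = modelled + e requires the (Intercept) row
# to contain 0 and the modelled row to contain 1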
Impose Constraint on Intercept in Linear Regression Using R
A linear model.
m0 <- lm(wt ~ qsec + hp + disp, data = mtcars)
m0
#
# Call:
# lm(formula = wt ~ qsec + hp + disp, data = mtcars)
#
# Coefficients:
# (Intercept) qsec hp disp
# -2.450047 0.201713 0.003466 0.006755
Force the intercept to be zero.
m1 <- lm(wt ~ qsec + hp + disp - 1, data = mtcars)
m1
#
# Call:
# lm(formula = wt ~ qsec + hp + disp - 1, data = mtcars)
#
# Coefficients:
# qsec hp disp
# 0.0842281 0.0002622 0.0072967
You can use nls to apply limits to the parameters (in this case a lower limit on the intercept).
m1n <- nls(wt ~ a + b1 * qsec + b2 * hp + b3 * disp,
data = mtcars,
start = list(a = 1, b1 = 1, b2 = 1, b3 = 1),
lower = c(0, -Inf, -Inf, -Inf), algorithm = "port")
m1n
# Nonlinear regression model
# model: wt ~ a + b1 * qsec + b2 * hp + b3 * disp
# data: mtcars
# a b1 b2 b3
# 0.0000000 0.0842281 0.0002622 0.0072967
# residual sum-of-squares: 4.926
#
# Algorithm "port", convergence message: relative convergence (4)
See here for other example solutions.
Fitting a polynomial with a known intercept
lm(y~-1+x+I(x^2)+offset(k))
should do it.
-1 suppresses the otherwise automatically added intercept term.
x adds a linear term.
I(x^2) adds a quadratic term; the I() is required so that R interprets ^2 as squaring, rather than as taking an interaction between x and itself (which by formula rules would be equivalent to x alone).
offset(k) adds the known constant intercept.
I don't know whether poly(x,2)-1 would work to eliminate the intercept; you can try it and see. Subtracting the offset from your data should work fine, but offset(k) might be slightly more explicit. You might have to make k a vector (i.e. replicate it to the length of the data set, or better, include it as a column in the data set and pass the data with data=...).
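A minimal runnable sketch of the offset approach on simulated data (the k, x, and y values here are made up for illustration), with k included as a column as suggested:

set.seed(42)
k <- 5                                  # known intercept
x <- runif(100, -2, 2)
y <- k + 1.5 * x - 0.8 * x^2 + rnorm(100, sd = 0.3)
dat <- data.frame(x = x, y = y, k = k)  # scalar k is recycled to a full column
fit <- lm(y ~ -1 + x + I(x^2) + offset(k), data = dat)
coef(fit)   # linear and quadratic terms only; no intercept is estimated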
Linear regression with specified slope
I suppose one approach would be to subtract 1.5*x from y and then fit the result using only an intercept term:
mod2 <- lm(I(y-1.5*x)~1)
plot(x, y)
abline(mod2$coefficients, 1.5)
This represents the best linear fit with fixed slope 1.5. Of course, this fit is not very visually appealing because the simulated slope is 1 while the fixed slope is 1.5.
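A self-contained version of the same idea (the x and y here are simulated stand-ins, with true slope 1 as described):

set.seed(1)
x <- runif(100, 0, 10)
y <- 2 + x + rnorm(100)          # true intercept 2, true slope 1

mod2 <- lm(I(y - 1.5 * x) ~ 1)   # intercept-only fit after removing the fixed slope
coef(mod2)                       # best intercept given the slope is fixed at 1.5
plot(x, y)
abline(coef(mod2), 1.5)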
Linear Regression with a known fixed intercept in Accord.NET
Set ols.UseIntercept = False, subtract the desired intercept value from each opt value, and run ols.Learn(). The resulting Slope will be calculated for the intercept you want.