I recently blogged about doing linear least squares (LLS) in GNU Octave, getting not only the best-fit parameters but also the standard errors associated with those estimates (assuming the error in the data is normally distributed).
Octave has a built-in function for ordinary least squares: ols. While it does not directly report the standard errors of the regressed parameters, it is straightforward to compute them from the variables it returns.
Here is how you would solve the same example as before:
Nobs = 10;                               % number of observations
x = linspace(0, 1, Nobs)';
y = 1 + 2 * x + 0.05 * randn(size(x));   % true intercept 1, slope 2, plus noise
X = [ones(Nobs, 1) x];                   % design matrix with intercept column
[beta, sigma] = ols(y, X)                % beta: estimates; sigma: residual variance
yest = X * beta;                         % fitted values
p = length(beta);                        % number of parameters
varb = sigma * ((X'*X)\eye(p));          % covariance matrix of beta
se = sqrt(diag(varb))                    % standard errors
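If you want to sanity-check the result outside Octave, the same computation can be sketched in NumPy. This is my equivalent, not part of the original example: it estimates the residual variance as RSS/(Nobs - rank), which matches how ols scales sigma, and then forms the parameter covariance the same way.

```python
import numpy as np

rng = np.random.default_rng(0)           # seeded so the run is reproducible
Nobs = 10
x = np.linspace(0, 1, Nobs)
y = 1 + 2 * x + 0.05 * rng.standard_normal(Nobs)
X = np.column_stack([np.ones(Nobs), x])  # design matrix with intercept column

beta, rss, rank, _ = np.linalg.lstsq(X, y, rcond=None)
p = len(beta)
sigma = rss[0] / (Nobs - rank)           # residual variance estimate
varb = sigma * np.linalg.inv(X.T @ X)    # covariance matrix of beta
se = np.sqrt(np.diag(varb))              # standard errors
print(beta, se)
```

The estimates should land close to the true intercept (1) and slope (2), with small standard errors given the 0.05 noise level.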