CURM Thirtieth Meeting, 4/30/2009

Agenda

Announcements

Grayson: I need a signature for meal reimbursement.

New Business

For this time:

  • Katie agreed to look at the regressions of wing size versus mass for the wasps, to see whether at some point the wasps' wings stop growing with mass. She'll do this by performing progressive regressions, checking whether the slope is effectively 3 early on but grows as the larger wasps are included in the analysis (a sketch of this approach appears after this list). Here's an animation of the results...
  • Grayson has carried out some regression analysis on the data provided by Dow.
  • I've prepared the next phase of the standard error discussion: into the non-linear realm!
  • Jon will be meeting with us at 4:30, for about a half-hour.
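
A minimal sketch of the progressive-regression idea described above, assuming a log-log (allometric) fit of wing size against mass; the arrays mass and wing are hypothetical placeholders for the actual wasp measurements:

  import numpy as np
  from scipy import stats

  def progressive_slopes(mass, wing, min_points=5):
      """Slope of log(wing) vs log(mass), refit as successively larger
      wasps (sorted by mass) are added to the regression."""
      mass = np.asarray(mass, dtype=float)
      wing = np.asarray(wing, dtype=float)
      order = np.argsort(mass)
      log_m, log_w = np.log(mass[order]), np.log(wing[order])
      slopes = []
      for k in range(min_points, len(mass) + 1):
          fit = stats.linregress(log_m[:k], log_w[:k])
          slopes.append(fit.slope)
      return np.array(slopes)

  # Usage, once the real wasp data are substituted in:
  #   slopes = progressive_slopes(mass, wing)
  # A slope near 3 for the small wasps that drifts as the larger wasps
  # enter the fit would be the pattern described above.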

Non-linear Regression

  • Today let's re-examine an example of how to do non-linear regression, and consider how to handle the standard errors.
    • First of all, how are standard errors handled in the linear case?
    • We haven't talked enough about standard errors. Whenever we're doing non-linear regression, we end up with parameter estimates, but in many cases we don't have error estimates. There are approximations available, however.
    • Non-linear Regression Primer (includes the best and most useful description of the estimation standard errors, under Hessian). Excerpts:
      • "Hessian Matrix and Standard Errors. The matrix of second-order (partial) derivatives is also called the Hessian matrix. It turns out that the inverse of the Hessian matrix approximates the variance/covariance matrix of parameter estimates. Intuitively, there should be an inverse relationship between the second-order derivative for a parameter and its standard error: If the change of the slope around the minimum of the function is very sharp, then the second-order derivative will be large; however, the parameter estimate will be quite stable in the sense that the minimum with respect to the parameter is clearly identifiable. If the second-order derivative is nearly zero, then the change in the slope around the minimum is zero, meaning that we can practically move the parameter in any direction without greatly affecting the loss function. Thus, the standard error of the parameter will be very large."
      • "The Hessian matrix (and asymptotic standard errors for the parameters) can be computed via finite difference approximation. This procedure yields very precise asymptotic standard errors for all estimation methods."
    • Wikipedia introduces the Hessian in the linear regression case.
    • This site provided some helpful info, including the formula for the approximate standard errors (the following is a vector equation):

\underline{SE_0} = \sqrt{2\,\frac{S(\Theta_0)}{N-p}\,\mathrm{diagonal}\left(H^{-1}\right)}

where

  • S(\Theta_0) is the sum of squared errors (which is what we minimize) for the estimated parameter values \Theta_0;
  • N is the length of the data vector x;
  • p is the length of the parameter vector \Theta_0;
  • diagonal extracts the diagonal of a matrix as a vector; and
  • H is the Hessian matrix.
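
A rough sketch of how the formula above can be evaluated, independent of the code linked below. The model function, data arrays, and parameter names here are placeholders; the Hessian of S is approximated by central finite differences, as the primer suggests:

  import numpy as np

  def sse(theta, x, y, model):
      """S(theta): the sum of squared errors we minimize."""
      return np.sum((y - model(x, theta)) ** 2)

  def hessian_fd(f, theta0, h=1e-5):
      """Central finite-difference approximation to the Hessian of f at theta0."""
      p = len(theta0)
      H = np.zeros((p, p))
      for i in range(p):
          for j in range(p):
              ei, ej = np.zeros(p), np.zeros(p)
              ei[i], ej[j] = h, h
              H[i, j] = (f(theta0 + ei + ej) - f(theta0 + ei - ej)
                         - f(theta0 - ei + ej) + f(theta0 - ei - ej)) / (4 * h**2)
      return H

  def approx_standard_errors(theta0, x, y, model):
      """SE_0 = sqrt( 2 * S(Theta_0)/(N - p) * diagonal(H^{-1}) )."""
      theta0 = np.asarray(theta0, dtype=float)
      f = lambda th: sse(th, x, y, model)
      H = hessian_fd(f, theta0)
      N, p = len(x), len(theta0)
      return np.sqrt(2 * sse(theta0, x, y, model) / (N - p)
                     * np.diag(np.linalg.inv(H)))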

This code, dumped into this site, illustrates that the approximation works (or follow the link in this file).

A quote from that site: "The standard errors are the square roots of the diagonal elements of the covariance." Thomas Lumley.
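
One way to check that remark against the formula above (using an assumed exponential model and synthetic data, not the group's example): scipy's curve_fit returns a covariance matrix for the fitted parameters, and the square roots of its diagonal should roughly match the Hessian-based standard errors from the sketch above.

  import numpy as np
  from scipy.optimize import curve_fit

  def model(x, a, b):
      return a * np.exp(b * x)          # hypothetical non-linear model

  rng = np.random.default_rng(0)
  x = np.linspace(0, 2, 50)
  y = model(x, 2.0, 1.5) + rng.normal(scale=0.2, size=x.size)

  theta0, cov = curve_fit(model, x, y, p0=[1.0, 1.0])
  se_from_cov = np.sqrt(np.diag(cov))   # square roots of the diagonal of the covariance

  # Compare with the finite-difference Hessian approximation sketched earlier:
  se_from_hessian = approx_standard_errors(theta0, x, y,
                                           lambda x, th: model(x, *th))
  # The two sets of standard errors should agree closely.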

http://www.orbitals.com/self/least/least.htm has some useful information.

Old Business

Links
