CURM Twenty-ninth Meeting, 4/23/2009


Agenda

For next time:

  • Katie has agreed to look at the regressions of wing size versus mass for the wasps, to see whether at some point the wasps' wings stop growing with mass. She'll do this by running progressive regressions, checking whether the slope is effectively 3 early on but grows as the larger wasps are included in the analysis (see the sketch after this list).
  • Grayson is going to do some regression on the data provided by Dow.
  • I'm going to prepare the next phase of the standard error discussion: into the non-linear realm!
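
As a starting point for the progressive-regression idea, here is a minimal Python sketch. The array names size and mass are hypothetical stand-ins for the wasp measurements, and regressing log(size) on log(mass) over an expanding window is an assumption about the intended analysis.

  import numpy as np

  def progressive_slopes(size, mass, min_pts=5):
      """Expanding-window log-log regressions: sort the wasps by mass,
      fit the first k of them, and record the slope for each k.
      size, mass: hypothetical arrays of wing sizes and body masses."""
      order = np.argsort(mass)
      log_mass = np.log(mass[order])
      log_size = np.log(size[order])
      slopes = []
      for k in range(min_pts, len(mass) + 1):
          # np.polyfit returns [slope, intercept] for a degree-1 fit
          slope, _ = np.polyfit(log_mass[:k], log_size[:k], 1)
          slopes.append((k, slope))
      return slopes  # watch whether the early slopes differ from the later ones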

Announcements

I don't know if you caught the email, but Suzanne is looking into the meal reimbursement.

New Business

So we've received several papers from Dr. Hastings, and some idea of the direction he wants to go. I think that he's going to want to incorporate more than he initially indicated (which was only the opportunistic versus selective predation data and analysis).

  • Katie has taken on the task of summarizing the papers he sent, looking for any references we might use, or other material that we might incorporate.
  • Grayson is still looking at the "Happy Meal" picture.

Non-linear Regression

  • Today let's re-examine an example of how to do non-linear regression, and consider how to handle the standard errors.
    • First of all, how are standard errors handled in the linear case?
    • We haven't talked enough about standard errors. Whenever we're doing non-linear regression, we end up with parameter estimates, but in many cases we don't have error estimates. There are approximations available, however.
    • Non-linear Regression Primer (includes the best and most useful description of the estimation standard errors, under Hessian). Excerpts:
      • "Hessian Matrix and Standard Errors. The matrix of second-order (partial) derivatives is also called the Hessian matrix. It turns out that the inverse of the Hessian matrix approximates the variance/covariance matrix of parameter estimates. Intuitively, there should be an inverse relationship between the second-order derivative for a parameter and its standard error: If the change of the slope around the minimum of the function is very sharp, then the second-order derivative will be large; however, the parameter estimate will be quite stable in the sense that the minimum with respect to the parameter is clearly identifiable. If the second-order derivative is nearly zero, then the change in the slope around the minimum is zero, meaning that we can practically move the parameter in any direction without greatly affecting the loss function. Thus, the standard error of the parameter will be very large."
      • "The Hessian matrix (and asymptotic standard errors for the parameters) can be computed via finite difference approximation. This procedure yields very precise asymptotic standard errors for all estimation methods."
    • Wikipedia introduces the Hessian in the linear regression case
    • This site provided some helpful info, including the formula for the approximate standard errors (the following is a vector equation):

\underline{SE_0} = \sqrt{\frac{2\,S(\Theta_0)}{N-p}\,\operatorname{diagonal}\!\left(H^{-1}\right)}

where

  • S(\Theta_0) is the sum of squared errors (which is what we minimize) at the estimated parameter values \Theta_0;
  • N is the length of the data vector x;
  • p is the length of the parameter vector \Theta_0;
  • diagonal extracts the diagonal of a matrix as a vector; and
  • H is the Hessian matrix.
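
To make the formula concrete, here is a minimal Python sketch that implements it with a finite-difference Hessian, which is the approach the primer describes. The names fd_hessian and approx_standard_errors, and the exponential test model, are illustrative assumptions; this is not the code linked below.

  import numpy as np
  from scipy.optimize import minimize

  def fd_hessian(f, theta, h=1e-4):
      """Finite-difference approximation of the Hessian of f at theta."""
      p = len(theta)
      H = np.zeros((p, p))
      for i in range(p):
          for j in range(p):
              ei = np.zeros(p)
              ej = np.zeros(p)
              ei[i] = h
              ej[j] = h
              H[i, j] = (f(theta + ei + ej) - f(theta + ei)
                         - f(theta + ej) + f(theta)) / h**2
      return H

  def approx_standard_errors(sse, theta0, N):
      """SE_0 = sqrt( 2 * S(Theta_0) / (N - p) * diagonal(H^{-1}) )."""
      p = len(theta0)
      H_inv = np.linalg.inv(fd_hessian(sse, theta0))
      return np.sqrt(2 * sse(theta0) / (N - p) * np.diag(H_inv))

  # Hypothetical test problem: fit y = a * exp(-b * x) to noisy data.
  rng = np.random.default_rng(0)
  x = np.linspace(0, 5, 50)
  y = 2.0 * np.exp(-1.3 * x) + rng.normal(0, 0.05, x.size)

  sse = lambda th: np.sum((y - th[0] * np.exp(-th[1] * x)) ** 2)
  theta0 = minimize(sse, x0=[1.0, 1.0]).x   # minimize S(Theta)
  print(theta0, approx_standard_errors(sse, theta0, N=x.size))

The factor of 2 in the formula comes from the least-squares structure: near the minimum, the Hessian of the sum of squares is approximately 2 JᵀJ (J the Jacobian of the residuals), so 2 S(Θ₀)/(N−p) · H⁻¹ reproduces the usual covariance estimate σ̂²(JᵀJ)⁻¹.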

This code, when dumped into this site, illustrates that the approximation works (or follow the link in this file).

A quote from that site (from Thomas Lumley): "The standard errors are the square roots of the diagonal elements of the covariance."
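
One way to run that check yourself, reusing the assumed model and data from the sketch above, is to compare against the covariance matrix that scipy.optimize.curve_fit returns; the square roots of its diagonal should roughly match the Hessian-based standard errors.

  from scipy.optimize import curve_fit

  def model(x, a, b):
      return a * np.exp(-b * x)

  # pcov is curve_fit's estimated covariance of the parameters
  popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])
  print(np.sqrt(np.diag(pcov)))  # square roots of the covariance diagonal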

http://www.orbitals.com/self/least/least.htm has some useful information.

Old Business

Links
