CURM Twenty-eighth Meeting, 4/16/2009

Agenda

Announcements

Good job at the Celebration; sorry you didn't have more traffic.

So Katie: will you consider Mathfest? Grayson says he's willing. More details in the April/May MAA Focus (there will be a great presentation by Persi Diaconis).

New Business

Today we want to talk about our joint paper with Dr. Hastings. We need to decide on a direction and the extent of our coverage:

Original proposal:

We start with a narrowly focused paper presenting the evidence that, at these two locations, female cicada-killer hunting is not opportunistic but selective. The discussion will contain some predictions based on this conclusion, which we could test this summer. By the way, we now have permission to return to both Florida study sites in June/July. Additionally, we have located another site where the tiny Neocicada predominate. We will be able to excavate a few nests this time.

Target for the first paper: Florida Entomologist. Target date for submission: June 10. Chuck and I will do most of the writing, but I will need lots of help from you with the analysis section of the methods and with the results section.

My suggestion:

I think that you might consider adding our argument against the "one-if-by" dogma, as well as the mass (or volume) provisioning model that attempts to explain our modification (maybe "filled if by female, half if by male"). So we might consider working on a second companion paper, as an alternative to embellishing the "narrowly focused" paper. I assume you feel that it's important to make this point concerning selectivity, and to emphasize it as much as possible, rather than allowing it to get lost in a sea of other results.


Future Business

Non-linear Regression

  • Today let's re-examine an example of how to do non-linear regression, and consider how to handle the standard errors.
  • "The standard errors are the square roots of the diagonal elements of the covariance." Thomas Lumley.
    • We haven't talked enough about standard errors. Whenever we're doing non-linear regression, we end up with parameter estimates, but in many cases we don't have error estimates. There are approximations available, however.
    • Non-linear Regression Primer (includes the best and most useful description of the standard errors of the estimates, under Hessian). Excerpts:
      • "Hessian Matrix and Standard Errors. The matrix of second-order (partial) derivatives is also called the Hessian matrix. It turns out that the inverse of the Hessian matrix approximates the variance/covariance matrix of parameter estimates. Intuitively, there should be an inverse relationship between the second-order derivative for a parameter and its standard error: If the change of the slope around the minimum of the function is very sharp, then the second-order derivative will be large; however, the parameter estimate will be quite stable in the sense that the minimum with respect to the parameter is clearly identifiable. If the second-order derivative is nearly zero, then the change in the slope around the minimum is zero, meaning that we can practically move the parameter in any direction without greatly affecting the loss function. Thus, the standard error of the parameter will be very large."
      • "The Hessian matrix (and asymptotic standard errors for the parameters) can be computed via finite difference approximation. This procedure yields very precise asymptotic standard errors for all estimation methods."
    • Wikipedia introduces the Hessian in the linear regression case.
    • This site provides some helpful information, including the following formula for the approximate standard errors (a vector equation):

\underline{SE_0} = \sqrt{\frac{2\,S(\Theta_0)}{N-p}\,\operatorname{diagonal}\left(H^{-1}\right)}

where

  • S(\Theta_0) is the sum of squared errors (the quantity we minimize) at the estimated parameter values \Theta_0;
  • N is the length of the data vector x;
  • p is the length of the parameter vector \Theta_0;
  • diagonal extracts the diagonal of a matrix as a vector; and
  • H is the Hessian matrix of S(\Theta), evaluated at \Theta_0.
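
Where does the factor of 2 come from? It's standard Gauss-Newton bookkeeping: with the model Jacobian J_{ij} = \partial f(x_i, \Theta)/\partial \Theta_j, near the minimum

H = \nabla^2 S(\Theta_0) \approx 2\, J^\top J,

while the usual least-squares covariance estimate is

Cov(\Theta_0) \approx \hat{\sigma}^2 \left(J^\top J\right)^{-1} = 2\, \hat{\sigma}^2 H^{-1}, \qquad \hat{\sigma}^2 = \frac{S(\Theta_0)}{N-p}.

Taking square roots of the diagonal entries recovers the vector formula above, factor of 2 and all.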

This code, dumped into this site, illustrates that the approximation works.
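
In the same spirit, here is a minimal self-contained sketch in Python (my own illustration, using numpy/scipy and a made-up exponential-decay model, not the code linked above): it minimizes S(\Theta), builds the Hessian by central finite differences, applies the formula above, and cross-checks against the covariance that scipy.optimize.curve_fit reports.

import numpy as np
from scipy.optimize import curve_fit, minimize

# Hypothetical model, for illustration only: y = a * exp(-b x)
def model(x, a, b):
    return a * np.exp(-b * x)

def S(theta, x, y):
    # S(Theta): the sum of squared errors that the fit minimizes
    return np.sum((y - model(x, *theta)) ** 2)

# Simulated data (purely illustrative)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 50)
y = model(x, 2.5, 1.3) + 0.05 * rng.standard_normal(x.size)

# Minimize S(Theta) to get the parameter estimates Theta_0
theta0 = minimize(S, x0=[1.0, 1.0], args=(x, y)).x
N, p = x.size, theta0.size

def fd_hessian(f, t, h=1e-4):
    # Central finite-difference approximation of the Hessian of f at t
    n = t.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            tpp = t.copy(); tpp[i] += h; tpp[j] += h
            tpm = t.copy(); tpm[i] += h; tpm[j] -= h
            tmp = t.copy(); tmp[i] -= h; tmp[j] += h
            tmm = t.copy(); tmm[i] -= h; tmm[j] -= h
            H[i, j] = (f(tpp) - f(tpm) - f(tmp) + f(tmm)) / (4 * h * h)
    return H

# SE_0 = sqrt( 2 S(Theta_0) / (N - p) * diagonal(H^{-1}) )
H = fd_hessian(lambda t: S(t, x, y), theta0)
SE = np.sqrt(2 * S(theta0, x, y) / (N - p) * np.diag(np.linalg.inv(H)))

# Cross-check: curve_fit's covariance matrix should give nearly the same SEs
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])
print("Hessian-based SE:", SE)
print("curve_fit SE:    ", np.sqrt(np.diag(pcov)))

The two sets of standard errors should agree to several digits. The step size h trades truncation error against round-off; for double precision and a smooth S, something near 1e-4 is a reasonable default.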

http://www.orbitals.com/self/least/least.htm has some useful information.

Old Business

Links
