# Agenda

## Announcements

Good job in Frankfort! I hope that you appreciated the praise given by Betty Mayfield, of Hood College. She is First Vice President of the MAA, and she has served as chair of the Committee on Undergraduate Student Activities and Chapters. So when she says that you should present in Portland at MathFest, she means it (she said to me "They might even win a prize!").

I hope that you'll consider MathFest. I got these details from the Feb/March MAA Focus:

Call for Student Papers

The deadline for receipt of applications for student papers is Friday, June 12, 2009. Students may not apply for funding from both MAA and PME. Every student paper session room will be equipped with a standard overhead projector, a computer projector (presenters must provide their own laptops or have access to one), and a screen. Each student talk is 15 minutes in length.

MAA Sessions

Students who wish to present at the MAA Student Paper Sessions at MathFest 2009 in Portland, Oregon, must be sponsored by a faculty advisor familiar with the work to be presented. Some funding to cover costs (up to $600) for student presenters is available. At most one student from each institution or REU can receive full funding; additional such students may be funded at a lower rate. All presenters are expected to take full part in the meeting and attend indicated activities sponsored for students on all three days of the conference. Nomination forms and more detailed information for the MAA Student Paper Sessions will be available at http://www.maa.org/students/undergrad/ by mid-February, 2009.

Pi Mu Epsilon Sessions

Pi Mu Epsilon student speakers must be nominated by their chapter advisors. Application forms for PME student speakers will be available by March 1, 2009 on the PME web site http://www.pme-math.org or can be obtained from PME Secretary-Treasurer Leo Schneider, who can be reached by email at leo@jcu.edu. A PME student speaker who attends all the Pi Mu Epsilon activities is eligible for transportation reimbursement up to $600, and up to five speakers per Chapter may be eligible for full or partial reimbursement.

The latter looks like the way to go. Check out http://www.pme-math.org/conferences/national/2009/call2009.html for the details.

For next Wednesday, 4/8, the Celebration of Student Research: we need to turn our slides into a poster.

### Non-linear Regression

• Today let's re-examine an example of how to do non-linear regression, and consider how to handle the standard errors.
• We haven't talked enough about standard errors. Whenever we're doing non-linear regression, we end up with parameter estimates, but in many cases we don't have error estimates. There are approximations available, however.
• Non-linear Regression Primer (includes the best and most useful description of the estimation standard errors, under Hessian). Excerpts:
• "Hessian Matrix and Standard Errors. The matrix of second-order (partial) derivatives is also called the Hessian matrix. It turns out that the inverse of the Hessian matrix approximates the variance/covariance matrix of parameter estimates. Intuitively, there should be an inverse relationship between the second-order derivative for a parameter and its standard error: If the change of the slope around the minimum of the function is very sharp, then the second-order derivative will be large; however, the parameter estimate will be quite stable in the sense that the minimum with respect to the parameter is clearly identifiable. If the second-order derivative is nearly zero, then the change in the slope around the minimum is zero, meaning that we can practically move the parameter in any direction without greatly affecting the loss function. Thus, the standard error of the parameter will be very large."
• "The Hessian matrix (and asymptotic standard errors for the parameters) can be computed via finite difference approximation. This procedure yields very precise asymptotic standard errors for all estimation methods."
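The finite-difference approximation mentioned in the excerpt can be sketched in a few lines. This is a generic central-difference Hessian, not the primer's own code; the quadratic `loss` below is a hypothetical stand-in for a real sum-of-squares function, chosen because its true Hessian ([[2, 0], [0, 20]]) is known exactly:

```python
# A minimal sketch of a central-difference Hessian, as described in the
# primer excerpt above. The quadratic loss is a hypothetical stand-in
# for a real sum-of-squares function; its true Hessian is [[2,0],[0,20]].

def hessian(f, theta, h=1e-4):
    """Approximate the Hessian of f at theta by central differences."""
    n = len(theta)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Evaluate f at the four corners (theta_i +/- h, theta_j +/- h).
            t = list(theta); t[i] += h; t[j] += h; fpp = f(t)
            t = list(theta); t[i] += h; t[j] -= h; fpm = f(t)
            t = list(theta); t[i] -= h; t[j] += h; fmp = f(t)
            t = list(theta); t[i] -= h; t[j] -= h; fmm = f(t)
            H[i][j] = (fpp - fpm - fmp + fmm) / (4 * h * h)
    return H

def loss(theta):
    a, b = theta
    return a**2 + 10 * b**2  # true Hessian: [[2, 0], [0, 20]]

H = hessian(loss, [0.0, 0.0])
print(H)  # entries close to [[2, 0], [0, 20]]
```

Note the intuition from the excerpt in the numbers: the larger second derivative in the `b` direction (20 vs. 2) is exactly what would make `b`'s standard error smaller.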
• Wikipedia introduces the Hessian in the linear regression case
• This site provided some helpful info, including the formula for the approximate standard errors (the following is a vector equation):

$\underline{SE_0}=\sqrt{2\,\frac{S(\Theta_0)}{N-p}\,\operatorname{diagonal}(H^{-1})}$

where

• $S(\Theta_0)$ is the sum of squared errors (which is what we minimize) for the estimated parameter values $\Theta_0$;
• $N$ is the length of the data vector $x$;
• $p$ is the length of the parameter vector $\Theta_0$;
• $\operatorname{diagonal}$ extracts the diagonal of a matrix as a vector; and
• $H$ is the Hessian matrix.

This code, dumped into this site, illustrates that the approximation works.
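Since the linked code isn't reproduced here, the following is my own check of the same claim (a sketch, not the original): for ordinary linear regression the exact standard errors are known in closed form, so we can verify that $SE_0 = \sqrt{2\,\frac{S(\Theta_0)}{N-p}\,\operatorname{diagonal}(H^{-1})}$, with $H$ the finite-difference Hessian of the sum of squared errors, reproduces them. The data values are made up for illustration.

```python
# Check the SE formula on a case with a known answer: ordinary linear
# regression y = b0 + b1*x. The x, y data below are made up.
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.1, 1.9, 3.2, 3.9, 5.1, 5.8]
N, p = len(x), 2

def sse(theta):
    """Sum of squared errors S(theta) -- the quantity we minimize."""
    b0, b1 = theta
    return sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))

# Closed-form least-squares fit.
xbar = sum(x) / N
ybar = sum(y) / N
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar
theta0 = [b0, b1]

# Finite-difference Hessian of the SSE at the minimum.
h = 1e-5
H = [[0.0, 0.0], [0.0, 0.0]]
for i in range(2):
    for j in range(2):
        t = [list(theta0) for _ in range(4)]
        t[0][i] += h; t[0][j] += h
        t[1][i] += h; t[1][j] -= h
        t[2][i] -= h; t[2][j] += h
        t[3][i] -= h; t[3][j] -= h
        H[i][j] = (sse(t[0]) - sse(t[1]) - sse(t[2]) + sse(t[3])) / (4 * h * h)

# Invert the 2x2 Hessian and apply the SE formula from the notes.
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
Hinv_diag = [H[1][1] / det, H[0][0] / det]
s2 = sse(theta0) / (N - p)
SE = [math.sqrt(2 * s2 * d) for d in Hinv_diag]

# Classical textbook standard errors for the intercept and slope.
SE_exact = [math.sqrt(s2 * (1 / N + xbar ** 2 / sxx)),
            math.sqrt(s2 / sxx)]
print(SE, SE_exact)  # the two pairs agree closely
```

The agreement is no accident: for a linear model the SSE is exactly quadratic with Hessian $2X^TX$, so the factor of 2 in the formula cancels and it reduces to the familiar $s^2\,\operatorname{diagonal}((X^TX)^{-1})$.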

### Our Paper

• Think about what aspects of the talk and poster will translate to the paper
• What do we need to add?

### Ugly numerical simulation

I continue to work on simulate_both_pop.lsp: a simulation including all three populations of cicadas.

• Upshot: we can get some very interesting dynamics out of incorporating all three populations.

### Analytical Approach to the Simulations Already Undertaken

• I'd like to look at a couple of questions related to this:
• Do you see how to incorporate the non-opportunistic features?
• How do we incorporate the mass-provisioning?
• We're more interested in the combo meals.