# Taylor Series as Approximations

### From www.norsemathology.org


## Taylor Series Generalize Tangent Lines as Approximation

Rather than stop at a linear function as an approximation, we let the degree of our approximation increase (provided the necessary derivatives exist), until we have an approximation of the form

$$P_n(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n.$$

This polynomial will achieve perfect agreement with the function at $x = a$ in all derivatives up to the $n^{th}$ (including the $0^{th}$ -- that is, the function itself):

$$P_n^{(k)}(a) = f^{(k)}(a), \qquad k = 0, 1, \ldots, n.$$
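As a quick numerical check (a Python sketch; the helper `taylor_poly` and the choice of $\sin$ as the example are my own, not from the text), such a polynomial can be evaluated directly from a list of derivative values at $a$:

```python
import math

def taylor_poly(derivs, a, x):
    """Evaluate P_n(x) = sum_{k=0}^{n} f^(k)(a)/k! * (x - a)**k,
    given derivs = [f(a), f'(a), ..., f^(n)(a)]."""
    return sum(d * (x - a) ** k / math.factorial(k)
               for k, d in enumerate(derivs))

# Example: sin about a = 0, whose derivatives at 0 cycle 0, 1, 0, -1, ...
derivs = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]   # f(0), f'(0), ..., f^(5)(0)
approx = taylor_poly(derivs, 0.0, 0.5)
print(approx, math.sin(0.5))               # the two values agree closely
```

Near $a$ the degree-5 polynomial already tracks $\sin$ very well, which is exactly the agreement-in-derivatives property above at work.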

## Taylor Series Error

### The form of the Taylor Series Error

By stopping the process, we introduce an error:

$$f(x) = P_n(x) + R_n(x).$$

Taylor's Theorem asserts that

$$R_n(x) = \frac{1}{n!}\int_a^x f^{(n+1)}(t)\,(x-t)^n\,dt.$$

We can show this using the first principle of mathematical induction. First of all, we need to assume that the function is differentiable a sufficient number of times. We'll start by showing that the result holds for $n = 0$:

**Base Case**: By the Fundamental Theorem of Calculus,

$$f(x) = f(a) + \int_a^x f'(t)\,dt = P_0(x) + \frac{1}{0!}\int_a^x f^{(1)}(t)\,(x-t)^0\,dt.$$

Hence the result holds in the base case.

**Implication**: Assume that the result holds for $n = k$, and show that it holds for $n = k+1$. **Note**: we need to assume that the function is $k+2$ times differentiable on the interval of interest, $[a, x]$ or $[x, a]$ (depending on the relative positions of $a$ and $x$).

Consider

$$\frac{1}{(k+1)!}\int_a^x f^{(k+2)}(t)\,(x-t)^{k+1}\,dt.$$

We evaluate the integral via integration by parts. We want to step down the power on the polynomial term by differentiation, and decrease the derivative term by integration (i.e., from $f^{(k+2)}$ to $f^{(k+1)}$), which dictates our choices of the functions *u* and *v* in the integration by parts:

$$u = \frac{(x-t)^{k+1}}{(k+1)!}, \qquad dv = f^{(k+2)}(t)\,dt,$$

so that

$$\frac{1}{(k+1)!}\int_a^x f^{(k+2)}(t)\,(x-t)^{k+1}\,dt = \left[\frac{(x-t)^{k+1}}{(k+1)!}\,f^{(k+1)}(t)\right]_{t=a}^{t=x} + \frac{1}{k!}\int_a^x f^{(k+1)}(t)\,(x-t)^k\,dt$$

$$= -\frac{f^{(k+1)}(a)}{(k+1)!}(x-a)^{k+1} + \frac{1}{k!}\int_a^x f^{(k+1)}(t)\,(x-t)^k\,dt.$$

But we notice that, by the induction hypothesis,

$$\frac{1}{k!}\int_a^x f^{(k+1)}(t)\,(x-t)^k\,dt = f(x) - P_k(x),$$

so the integral we started with equals

$$f(x) - P_k(x) - \frac{f^{(k+1)}(a)}{(k+1)!}(x-a)^{k+1} = f(x) - P_{k+1}(x).$$

So the result holds for $n = k+1$.

**Conclusion**: The first principle of induction assures us that this result holds for every $n$ (provided the necessary derivatives exist).

### Alternate Taylor Series Error

The error is related to the $(n+1)^{st}$ derivative:

$$R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1},$$

where $c$ is between $a$ and $x$.

We can pass from the integral to the form above by invoking the First mean value theorem for integration (see [1], from which the following is borrowed):

The **first mean value theorem for integration** states:

- If *G* : [*a*,*b*] → **R** is a continuous function and φ : [*a*,*b*] → **R** is an integrable positive function, then there exists a number *x* in (*a*,*b*) such that

  $$\int_a^b G(t)\,\varphi(t)\,dt = G(x)\int_a^b \varphi(t)\,dt.$$

In this case, choose as function *G* the derivative term $f^{(n+1)}$; the function φ is the polynomial term $\dfrac{(x-t)^n}{n!}$ (which is easy to integrate). With $c$ playing the role of the theorem's intermediate point,

$$R_n(x) = \frac{1}{n!}\int_a^x f^{(n+1)}(t)\,(x-t)^n\,dt = f^{(n+1)}(c)\int_a^x \frac{(x-t)^n}{n!}\,dt = \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1}.$$

By the way, the polynomial term may be positive or negative: what is important is that it holds its sign fixed (we can just factor out a negative sign, then, if necessary).

### Bounding the Taylor Series Error

In particular, if we can bound the derivative term, $\left|f^{(n+1)}(c)\right| \le M$ for all $c$ between $a$ and $x$, then we can bound the entire expression on the interval of interest:

$$\left|R_n(x)\right| \le \frac{M}{(n+1)!}\,|x-a|^{n+1}.$$

This provides us with an **error bound**, telling us how bad the approximation will be **in the worst case**.
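To see the worst-case bound in action, here is a Python sketch (the helper `sin_taylor` and the choice $f = \sin$, for which $M = 1$ works for every $n$, are my own illustration, not from the text):

```python
import math

# Bound |R_n(x)| <= M |x - a|**(n+1) / (n+1)! for f = sin about a = 0,
# where M = 1 works for every n, since each derivative of sin is one of
# +/- sin or +/- cos, all bounded by 1.
def sin_taylor(n, x):
    """Maclaurin polynomial of sin, of degree at most n."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n // 2 + 1) if 2 * k + 1 <= n)

x = 1.2
for n in range(1, 10):
    actual = abs(math.sin(x) - sin_taylor(n, x))
    bound = abs(x) ** (n + 1) / math.factorial(n + 1)
    assert actual <= bound   # the true error never exceeds the worst-case bound
```

Running the loop confirms that the actual error sits below the bound at every degree, as the theorem guarantees.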

## Examples

Consider the function $f(x) = e^x$. This is a marvelous function, because it's equal to all of its derivatives. Hence the Taylor series expansion about $a = 0$ is very simply computed:

$$e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}.$$

Hence, for example,

$$e = e^1 = \sum_{k=0}^{\infty} \frac{1}{k!}.$$

Thus the degree $n$ Taylor polynomial that agrees best with $e^x$ expanded about zero is

$$P_n(x) = \sum_{k=0}^{n} \frac{x^k}{k!},$$

and the error of approximating $e$ using this polynomial is going to be bounded by

$$|R_n(1)| \le \frac{3}{(n+1)!},$$

since $\left|f^{(n+1)}(c)\right| = e^c < 3$ for $c$ in $[0,1]$.

Hence, if we should desire an approximation of $e$ to within a certain tolerance $\varepsilon$ of its true value, then we should choose the degree $n$ of the polynomial such that

$$\frac{3}{(n+1)!} < \varepsilon.$$

In other words, find the first value of *n* that makes the inequality above true.
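This search for the first qualifying *n* is easy to automate; here is a Python sketch (the helper name `degree_needed` is my own):

```python
import math

def degree_needed(eps):
    """Smallest n for which 3 / (n+1)! < eps, so that P_n(1)
    approximates e to within eps."""
    n = 0
    while 3 / math.factorial(n + 1) >= eps:
        n += 1
    return n

n = degree_needed(1e-6)
print(n)                          # n = 9, since 10! = 3628800 > 3e6 but 9! is not
approx_e = sum(1 / math.factorial(k) for k in range(n + 1))
print(abs(math.e - approx_e))     # comfortably below the 1e-6 tolerance
```

For a tolerance of $10^{-6}$, degree 9 suffices; note that the factorial in the denominator makes the required degree grow very slowly as the tolerance shrinks.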

**Problem #31, p. 499, Rogawski:**

We're trying to approximate the function with a Taylor polynomial about 0 (i.e., a Maclaurin polynomial). We know that

so that

The claim is that, for a given value of $n$, we can bound the derivative term.

In this problem, , and . Find *n*.

## Application: Muller's Method

One of the most important applications of the linearization is in root-finding: that is, finding zeros of a non-linear function via Newton's method. Taylor series allow us to generalize this technique. Since we can easily find the roots of a quadratic, why not introduce the quadratic approximation, rather than the linearization? Why not use the "quadraticization"?

Given a function $f$ and a guess $x_0$ for a root, we would first check to see if $f(x_0) = 0$. If it is, we're done; otherwise, we might try to **improve** the guess. How so?

Use the quadraticization

$$q(x) = f(x_0) + f'(x_0)(x-x_0) + \frac{f''(x_0)}{2}(x-x_0)^2,$$

and find a zero of it:

$$x_1 = x_0 + \frac{-f'(x_0) \pm \sqrt{f'(x_0)^2 - 2\,f(x_0)\,f''(x_0)}}{f''(x_0)}.$$

This is Muller's method, an iterative scheme for improving the approximation of a root. Notice that there are two roots: how would you choose the root you want?

(By the way: the form of the root given above is not necessarily the best formula to use to **compute** the root. It is, however, formally the correct value of the root.)
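One numerically friendlier rearrangement is the rationalized form of the quadratic formula, sketched below in Python (the helper name `quadratic_step` and the demo cubic are my own, assumed for illustration):

```python
import cmath

def quadratic_step(f, fp, fpp, x0):
    """One step of the quadratic iteration: solve
    f(x0) + f'(x0) h + (f''(x0)/2) h**2 = 0 for the step h using the
    rationalized quadratic formula h = -2c / (b +/- sqrt(b**2 - 4ac))."""
    a, b, c = fpp(x0) / 2, fp(x0), f(x0)
    disc = cmath.sqrt(b * b - 4 * a * c)
    # Picking the larger-magnitude denominator avoids cancellation and
    # selects the smaller-magnitude step h, i.e. the root nearest x0.
    denom = b + disc if abs(b + disc) >= abs(b - disc) else b - disc
    return x0 - 2 * c / denom

# Demo: the cubic x**3 - 2x - 5 has a real root near 2.0945515.
f, fp, fpp = (lambda x: x**3 - 2*x - 5), (lambda x: 3*x**2 - 2), (lambda x: 6*x)
x = 2.0
for _ in range(6):
    x = quadratic_step(f, fp, fpp, x)
print(x)   # converges to the root near 2.0945515
```

Choosing the larger-magnitude denominator answers the root-selection question for well-behaved real iterations: it yields the root of the quadratic closest to the current guess.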

### For example:

- Let ; then and .
- Guess: (pretty bad guess!)
- Improved guess:
- Now do it again! After another go,
- Once more:
- Once more:


It's like a miracle... but it's just mathematics!

Muller's method generally converges faster than Newton's method, and can produce complex zeros (which Newton's method can never do when started from a real guess on a real-valued function).
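A small Python illustration of that last point (my own example, not from the text): for $f(x) = x^2 + 1$, which has no real roots, the discriminant under the square root is negative, so a single quadratic step from a real guess lands on a complex root.

```python
import cmath

# For f(x) = x**2 + 1, a single quadratic step from the real guess x0 = 1
# produces a complex value, because the discriminant
# f'(x0)**2 - 2 f(x0) f''(x0) = 4 - 8 = -4 is negative.
f   = lambda x: x * x + 1
fp  = lambda x: 2 * x
fpp = lambda x: 2.0

x0 = 1.0
disc = cmath.sqrt(fp(x0) ** 2 - 2 * f(x0) * fpp(x0))   # sqrt(-4) = 2i
x1 = x0 + (-fp(x0) + disc) / fpp(x0)                   # the "+" root
print(x1)   # 1j -- exactly the root i
```

Here the step is exact: since $f$ is itself a quadratic, its quadraticization is $f$, so the iteration jumps straight to the root $i$ (the "$-$" branch gives $-i$). Newton's method from the same real start would bounce along the real line forever.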