# Taylor Series as Approximations

## Taylor Series Generalize the Tangent Line Approximation

Rather than stop at a linear function as an approximation, we let the degree of our approximation increase (provided the necessary derivatives exist), until we have an approximation of the form

${\displaystyle \left.T_{n}(x)=f(a)+f^{\prime }(a)(x-a)+{\frac {f^{(2)}(a)}{2!}}(x-a)^{2}+\ldots +{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}\right.}$

This polynomial will achieve perfect agreement with the function ${\displaystyle \left.f\right.}$ at ${\displaystyle \left.x=a\right.}$ in all derivatives up to the ${\displaystyle \left.n^{th}\right.}$ (including the ${\displaystyle \left.0^{th}\right.}$ -- that is, the function itself):

${\displaystyle \left.{\begin{cases}T_{n}(a)=f(a)\\T_{n}^{\prime }(a)=f^{\prime }(a)\\\vdots \\T_{n}^{(n)}(a)=f^{(n)}(a)\end{cases}}\right.}$
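This derivative-matching recipe translates directly into code. Here is a minimal sketch (the helper name `taylor_poly` and the choice of $f(x)=\cos x$ are illustrative, not from the text): given the derivative values of $f$ at $a$, evaluate $T_n(x)$ and compare with $f(x)$.

```python
import math

def taylor_poly(derivs_at_a, a, x):
    """Evaluate T_n(x) = sum_k f^(k)(a) / k! * (x - a)^k, given the
    list of derivative values [f(a), f'(a), ..., f^(n)(a)]."""
    return sum(d / math.factorial(k) * (x - a) ** k
               for k, d in enumerate(derivs_at_a))

# Illustrative choice: f(x) = cos(x) about a = 0, whose derivatives
# at 0 cycle through 1, 0, -1, 0, ...
cycle = [1.0, 0.0, -1.0, 0.0]
derivs = [cycle[k % 4] for k in range(9)]  # f(0) through f^(8)(0)
print(taylor_poly(derivs, 0.0, 0.5), math.cos(0.5))
```

With nine matched derivatives, $T_8(0.5)$ already agrees with $\cos(0.5)$ to many decimal places.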

## Taylor Series Error

### The form of the Taylor Series Error

By stopping the process, we introduce an error:

${\displaystyle \left.\left.f(x)=T_{n}(x)+R_{n}(x)\right.\right.}$

Taylor's Theorem asserts that

${\displaystyle \left.R_{n}(x)={\frac {1}{n!}}\int _{a}^{x}(x-u)^{n}f^{(n+1)}(u)du\right.}$

We can show this using the first principle of mathematical induction. First of all, we need to assume that the function ${\displaystyle \left.\left.f\right.\right.}$ is differentiable a sufficient number of times. We'll start by showing that the result holds for ${\displaystyle \left.n=0\right.}$:

Base Case: ${\displaystyle \left.n=0\right.}$

${\displaystyle \left.R_{0}(x)={\frac {1}{0!}}\int _{a}^{x}(x-u)^{0}f^{(0+1)}(u)du=\int _{a}^{x}f^{\prime }(u)du=f(x)-f(a)\equiv f(x)-T_{0}(x)\right.}$

Hence the result holds in the base case.

Implication: Assume that the result holds for ${\displaystyle \left.n=k\right.}$, and show that it holds for ${\displaystyle \left.n=k+1\right.}$. Note: we need to assume that the function is ${\displaystyle \left.k+2\right.}$ times differentiable on the interval of interest, ${\displaystyle \left.(a,x)\right.}$ or ${\displaystyle \left.(x,a)\right.}$ (depending on the relative positions of ${\displaystyle \left.a\right.}$ and ${\displaystyle \left.x\right.}$).

Consider ${\displaystyle \left.R_{k+1}(x)={\frac {1}{(k+1)!}}\int _{a}^{x}(x-u)^{k+1}f^{(k+2)}(u)du\right.}$

We evaluate the integral via integration by parts. We want to step down the power on the polynomial term by differentiating it, and reduce the order of the derivative term by integrating it (i.e., from ${\displaystyle \left.k+2\right.}$ to ${\displaystyle \left.k+1\right.}$), which dictates our choice of parts (note that the usual letters u and v are unavailable here, since ${\displaystyle \left.u\right.}$ is already the variable of integration):

${\displaystyle \left.R_{k+1}(x)={\frac {1}{(k+1)!}}\left[(x-u)^{k+1}f^{(k+1)}(u)|_{a}^{x}+(k+1)\int _{a}^{x}(x-u)^{k}f^{(k+1)}(u)du\right]\right.}$

The boundary term vanishes at ${\displaystyle \left.u=x\right.}$, leaving ${\displaystyle \left.-(x-a)^{k+1}f^{(k+1)}(a)\right.}$, while the remaining integral is ${\displaystyle \left.(k+1)\,k!\,R_{k}(x)\right.}$. Applying the induction hypothesis ${\displaystyle \left.R_{k}(x)=f(x)-T_{k}(x)\right.}$, we find

${\displaystyle \left.R_{k+1}(x)=-{\frac {1}{(k+1)!}}(x-a)^{k+1}f^{(k+1)}(a)+\left[f(x)-T_{k}(x)\right]\equiv f(x)-T_{k+1}(x)\right.}$

So the result holds for ${\displaystyle \left.k+1\right.}$.

Conclusion: The first principle of induction assures us that this result will hold (provided the necessary derivatives exist).
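As a numerical sanity check of the integral form, here is a sketch (the quadrature routine and the choice of ${\displaystyle \left.f(x)=e^{x}\right.}$, ${\displaystyle \left.a=0\right.}$, ${\displaystyle \left.x=1\right.}$, ${\displaystyle \left.n=2\right.}$ are illustrative) comparing ${\displaystyle \left.f(x)-T_{n}(x)\right.}$ against the integral for ${\displaystyle \left.R_{n}(x)\right.}$ directly:

```python
import math

def simpson(g, a, b, m=1000):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = g(a) + g(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# Illustrative choice: f(x) = e^x, a = 0, x = 1, n = 2
f, a, x, n = math.exp, 0.0, 1.0, 2
T_n = sum(x ** k / math.factorial(k) for k in range(n + 1))
remainder = f(x) - T_n                     # f(x) - T_n(x) directly
integral = simpson(lambda u: (x - u) ** n * f(u), a, x) / math.factorial(n)
print(remainder, integral)                 # the two agree
```

Both quantities come out to about 0.21828, as Taylor's Theorem promises.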

### Alternate Taylor Series Error

The error is related to the ${\displaystyle \left.(n+1)^{th}\right.}$ derivative:

${\displaystyle \left.R_{n}(x)={\frac {1}{n!}}\int _{a}^{x}(x-u)^{n}f^{(n+1)}(u)du={\frac {f^{(n+1)}(\xi (x))(x-a)^{n+1}}{(n+1)!}}\right.}$

where ${\displaystyle \left.\xi (x)\right.}$ lies between ${\displaystyle \left.a\right.}$ and ${\displaystyle \left.x\right.}$.

We can pass from the integral to the form above by invoking the First mean value theorem for integration (see [1], from which the following is borrowed):

The first mean value theorem for integration states

If G : [a, b] → R is a continuous function and φ : [a, b] → R is an integrable positive function, then there exists a number ξ in (a, b) such that
${\displaystyle \left.\int _{a}^{b}G(t)\varphi (t)\,dt=G(\xi )\int _{a}^{b}\varphi (t)\,dt.\right.}$

In this case, choose the derivative term as the function G, and the polynomial term (which is easy to integrate) as the function φ:

${\displaystyle \left.R_{n}(x)=f^{(n+1)}(\xi (x)){\frac {1}{n!}}\int _{a}^{x}(x-u)^{n}du=f^{(n+1)}(\xi (x)){\frac {(x-a)^{n+1}}{(n+1)!}}\right.}$

By the way, the polynomial term may be positive or negative: what is important is that it holds its sign fixed (we can just factor out a negative sign, then, if necessary).
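For a concrete illustration (again with the illustrative choice ${\displaystyle \left.f(x)=e^{x}\right.}$, ${\displaystyle \left.a=0\right.}$, ${\displaystyle \left.x=1\right.}$, ${\displaystyle \left.n=2\right.}$), we can solve the Lagrange form for ${\displaystyle \left.\xi (x)\right.}$ and check that it indeed lands between ${\displaystyle \left.a\right.}$ and ${\displaystyle \left.x\right.}$:

```python
import math

# Illustrative choice: f(x) = e^x about a = 0, evaluated at x = 1, n = 2
a, x, n = 0.0, 1.0, 2
T_n = sum(x ** k / math.factorial(k) for k in range(n + 1))
R_n = math.exp(x) - T_n                         # actual remainder
# Lagrange form: R_n = e^xi * (x - a)^(n+1) / (n + 1)!; solve for xi
xi = math.log(R_n * math.factorial(n + 1) / (x - a) ** (n + 1))
print(xi)  # lands strictly between a and x
```

The recovered value is roughly 0.27, comfortably inside the interval [0, 1].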

### Bounding the Taylor Series Error

In particular, if we can bound the error in the derivative term, ${\displaystyle \left.f^{(n+1)}(u)\right.}$, then we can bound the entire expression on the interval of interest:

${\displaystyle \left.|R_{n}(x)|\leq K{\frac {|x-a|^{n+1}}{(n+1)!}}\right.}$, where ${\displaystyle \left.|f^{(n+1)}(u)|\leq K\right.}$ ${\displaystyle \left.\forall u\right.}$ between a and x.

This provides us with an error bound, telling us how bad the approximation will be in the worst case.
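To see the bound in action, here is a sketch for the illustrative case ${\displaystyle \left.f(x)=e^{x}\right.}$ on [0, 1], where ${\displaystyle \left.K=e\right.}$ works since ${\displaystyle \left.|f^{(n+1)}(u)|=e^{u}\leq e\right.}$ there:

```python
import math

# Illustrative setup: f(x) = e^x about a = 0, evaluated at x = 1,
# where |f^(n+1)(u)| = e^u <= K = e for all u in [0, 1].
K, x = math.e, 1.0
for n in range(1, 6):
    T_n = sum(x ** k / math.factorial(k) for k in range(n + 1))
    actual = abs(math.exp(x) - T_n)
    bound = K * abs(x) ** (n + 1) / math.factorial(n + 1)
    print(n, actual, bound)  # the actual error never exceeds the bound
```

For each degree the actual error sits below the worst-case bound, and both shrink factorially fast.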

## Examples

Consider the function ${\displaystyle \left.\left.f(x)=e^{x}\right.\right.}$. This is a marvelous function, because it's equal to all of its derivatives. Hence the Taylor series expansion about ${\displaystyle \left.x=0\right.}$ is very simply computed:

${\displaystyle \left.e^{x}=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\ldots \right.}$

Hence, for example

${\displaystyle \left.e^{-1}=1-1+{\frac {1}{2!}}-{\frac {1}{3!}}+{\frac {1}{4!}}-\ldots \right.}$

Thus the ${\displaystyle \left.n^{th}\right.}$ degree Taylor polynomial that agrees best with ${\displaystyle \left.f(x)\right.}$ expanded about zero is

${\displaystyle \left.T_{n}(x)=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\ldots +{\frac {x^{n}}{n!}}\right.}$

and the error of approximating ${\displaystyle \left.\left.e^{-1}\right.\right.}$ using this polynomial is bounded by

${\displaystyle \left.|R_{n}(-1)|\leq {\frac {1}{(n+1)!}}\right.}$

since ${\displaystyle \left.|f^{(n+1)}(u)|=e^{u}\leq 1\right.}$ for all ${\displaystyle \left.u\right.}$ between ${\displaystyle \left.-1\right.}$ and ${\displaystyle \left.0\right.}$.

Hence, if we should desire an approximation of ${\displaystyle \left.\left.e^{-1}\right.\right.}$ to within a certain ${\displaystyle \left.\left.\epsilon \right.\right.}$ of its true value, then we should choose the ${\displaystyle \left.n^{th}\right.}$ degree polynomial such that

${\displaystyle \left.|R_{n}(-1)|\leq {\frac {1}{(n+1)!}}<\epsilon \right.}$

In other words, find the first value of n that makes the inequality above true.
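That search is one short loop (the helper name `degree_needed` is mine, not the text's):

```python
import math

def degree_needed(eps):
    """Smallest n with 1/(n+1)! < eps, the error bound for e^{-1}."""
    n = 0
    while 1.0 / math.factorial(n + 1) >= eps:
        n += 1
    return n

print(degree_needed(1e-5))  # n = 8, since 9! = 362880 > 10^5 but 8! is not
```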

Problem #31, p. 499, Rogawski:

We're trying to approximate the function ${\displaystyle \left.\left.f(x)=e^{x}\right.\right.}$ with a Taylor polynomial about 0 (i.e., a Maclaurin polynomial). We know that

${\displaystyle \left.e^{x}=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\ldots +{\frac {x^{n}}{n!}}+{\frac {e^{\xi (x)}x^{n+1}}{{(n+1)}!}}\right.}$

so that

${\displaystyle \left.|e^{x}-\left(1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\ldots +{\frac {x^{n}}{n!}}\right)|=|{\frac {e^{\xi (x)}x^{n+1}}{{(n+1)}!}}|\right.}$

The claim is that for a given value of ${\displaystyle \left.x=c\right.}$ (with ${\displaystyle \left.c>0\right.}$) the derivative term satisfies ${\displaystyle \left.e^{\xi (c)}\leq {e^{c}}\right.}$, since ${\displaystyle \left.\xi (c)\in [0,c]\right.}$ and the exponential is increasing.

In this problem, ${\displaystyle \left.\left.c=0.1\right.\right.}$, and ${\displaystyle \left.\left.\epsilon =10^{-5}\right.\right.}$. Find n.
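A small search along the same lines settles the problem (the values of c and ε come from the statement above; the loop structure is my sketch):

```python
import math

c, eps = 0.1, 1e-5        # values from the problem statement
n = 0
# bound the remainder using e^{xi(c)} <= e^c for xi(c) in [0, c]
while math.exp(c) * c ** (n + 1) / math.factorial(n + 1) >= eps:
    n += 1
print(n)  # the first degree whose bound drops below eps
```

The loop stops at n = 3, since the bound for n = 3 is about 4.6 × 10⁻⁶ while the bound for n = 2 is still about 1.8 × 10⁻⁴.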

## Application: Muller's Method

One of the most important applications of the linearization ${\displaystyle \left.L(x)\right.}$ is in root-finding: that is, finding zeros of a non-linear function via Newton's method. Taylor series allow us to generalize this technique. Since we can easily find the roots of a quadratic, why not introduce the quadratic approximation, rather than the linearization? Why not use the "quadraticization"?

Suppose we are given a function ${\displaystyle \left.f(x)\right.}$ and a guess ${\displaystyle \left.x_{0}\right.}$ for a root. We would first check to see if ${\displaystyle \left.f(x_{0})=0\right.}$. If it is, we're done; otherwise, we might try to improve the guess. How so?

Use the quadraticization ${\displaystyle \left.Q(x)=f(x_{0})+f^{\prime }(x_{0})(x-x_{0})+{\frac {f^{\prime \prime }(x_{0})}{2}}(x-x_{0})^{2}\right.}$, and find a zero of it:

${\displaystyle \left.x=x_{0}+{\frac {-f^{\prime }(x_{0})\pm {\sqrt {(f^{\prime }(x_{0}))^{2}-2f(x_{0})f^{\prime \prime }(x_{0})}}}{f^{\prime \prime }(x_{0})}}\right.}$

This is Muller's method, an iterative scheme for improving the approximation of a root. Notice that there are two roots: how would you choose the root you want?

(By the way: the form of the root given above is not necessarily the best formula to use to compute the root. It is, however, formally the correct value of the root.)

### For example:

- Let ${\displaystyle \left.f(x)={\sqrt {x}}-x\right.}$; then ${\displaystyle \left.f^{\prime }(x)={\frac {1}{2}}x^{-1/2}-1\right.}$ and ${\displaystyle \left.f^{\prime \prime }(x)={\frac {-1}{4}}x^{-3/2}\right.}$.
- Guess: ${\displaystyle \left.x_{0}=4\right.}$ (pretty bad guess! ${\displaystyle \left.f(4)=-2\right.}$)
- Improved guess: ${\displaystyle \left.x_{1}\approx 1.1660104885167257\right.}$
- Now do it again! After another go, ${\displaystyle \left.x_{2}\approx 1.0004246508122367\right.}$
- Once more: ${\displaystyle \left.x_{3}\approx 1.0000000000095643\right.}$
- Once more: ${\displaystyle \left.x_{4}\approx 1.0\right.}$
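The iterates above can be reproduced with a short sketch (the function name `quadratic_step` is mine; it picks whichever root of the quadratic lies nearer the current guess, one reasonable answer to the ± question posed earlier):

```python
import math

def quadratic_step(f, fp, fpp, x0):
    """One step of the quadraticization iteration: solve Q(x) = 0 and
    return the root of the quadratic nearer the current guess x0."""
    a, b, c = fpp(x0) / 2.0, fp(x0), f(x0)
    disc = math.sqrt(b * b - 4.0 * a * c)   # assumed real here
    d1 = (-b + disc) / (2.0 * a)            # the two roots of Q, as
    d2 = (-b - disc) / (2.0 * a)            # offsets d = x - x0
    return x0 + (d1 if abs(d1) < abs(d2) else d2)

f = lambda x: math.sqrt(x) - x
fp = lambda x: 0.5 * x ** -0.5 - 1.0
fpp = lambda x: -0.25 * x ** -1.5

x = 4.0
for _ in range(4):
    x = quadratic_step(f, fp, fpp, x)
    print(x)  # reproduces the iterates listed above
```

Starting from the bad guess ${\displaystyle \left.x_{0}=4\right.}$, four steps land essentially on the root at 1.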

It's like a miracle... but it's just mathematics!

Muller's method generally converges faster than Newton's method, and can produce complex zeros (which Newton's method, started from a real guess on a real-valued function, never will).