Lesson Video: Lagrange Error Bound | Nagwa

Lesson Video: Lagrange Error Bound Mathematics • Higher Education

In this video, we will learn how to use the Lagrange error bound (Taylor’s theorem with remainder) to find the maximum error when using Taylor polynomial approximations.


Video Transcript

In this video, we’re going to learn how to use the Lagrange error bound to find the maximum error when approximating using Taylor polynomials. We’ll not only learn how we can use the formula to find these error bounds, but also how to find the least degree of a Taylor polynomial required to give a certain degree of accuracy.

The whole point in developing Taylor series is that they replace more complicated functions with polynomial-like expressions. And the properties of Taylor series make them especially useful when doing calculus. Now remember, a Taylor series for a function 𝑓 about 𝑎 is given by the sum from 𝑛 equals zero to ∞ of the 𝑛th derivative of 𝑓 evaluated at 𝑎 over 𝑛 factorial times 𝑥 minus 𝑎 to the 𝑛th power. So the first few terms are 𝑓 of 𝑎 plus 𝑓 prime of 𝑎 over one factorial times 𝑥 minus 𝑎 plus 𝑓 double prime of 𝑎 over two factorial times 𝑥 minus 𝑎 squared and so on.
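As a quick illustrative sketch (not from the video; the function name and the sample derivative values are our own), the partial sum described above can be evaluated numerically once we know the derivative values of 𝑓 at 𝑎:

```python
from math import factorial

def taylor_polynomial(derivs_at_a, a, x):
    # Evaluate the sum of f^(n)(a) / n! * (x - a)^n over the supplied
    # derivative values f(a), f'(a), f''(a), ...
    return sum(d / factorial(n) * (x - a) ** n
               for n, d in enumerate(derivs_at_a))

# Derivatives of sin at a = 0 cycle through 0, 1, 0, -1, ...
approx = taylor_polynomial([0, 1, 0, -1, 0, 1, 0, -1], 0, 0.3)
# approx is close to sin(0.3) ≈ 0.29552
```

Truncating the list of derivatives is exactly the truncation of the series discussed next, which is where the error comes from.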

The biggest issue we have is that these series involve infinitely many terms. And in reality, we usually only use the first few. And so of course, this means we lose some accuracy. We get a very good estimate for a function, but we can’t model it exactly. Let’s try to visualise this. Here is the graph of a function in the variable 𝑥, 𝑓 of 𝑥. We can add the graph of its Taylor approximation 𝑇 sub 𝑛 of 𝑥 about 𝑎. Now, we can approximate the value of 𝑓 of 𝑏 by finding the value of 𝑇 sub 𝑛 of 𝑏. But of course, we can see that there is some error in this. We call this error 𝑅 sub 𝑛 of 𝑥. It’s the remainder term; it’s the difference between our estimation and the actual value of the function at that point.

And so we define an error bound, called the Lagrange error bound, which tells us how far off our estimate can possibly be. We begin by letting 𝑇 sub 𝑛 of 𝑥 be the 𝑛th order Taylor polynomial for our function 𝑓 about 𝑎, such that 𝑓 of 𝑥 is equal to the 𝑛th order Taylor polynomial plus 𝑅 sub 𝑛 of 𝑥. The difference between 𝑓 of 𝑥 and 𝑇 sub 𝑛 of 𝑥 then is 𝑅 sub 𝑛 of 𝑥, called the remainder term or the Lagrange error.

𝑅 sub 𝑛 of 𝑥 satisfies the following criteria. If the absolute value of the 𝑛 plus oneth derivative of 𝑓 of 𝑥 is less than or equal to some 𝑚, then the absolute value of 𝑅 sub 𝑛 of 𝑥 is less than or equal to the absolute value of 𝑚 over 𝑛 plus one factorial times 𝑥 minus 𝑎 to the power of 𝑛 plus one. Note that we can choose any value of 𝑚 as long as it’s greater than or equal to the maximum absolute value of the 𝑛 plus oneth derivative of 𝑓 of 𝑥 when 𝑥 is on the interval from 𝑎 to 𝑏.

In practice, we choose 𝑚 to be the maximum of the absolute value of the 𝑛 plus oneth derivative on this interval because it constrains our error bound to be as small as possible. We could, however, choose 𝑚 to be larger if we wanted. And this would mean our error bound is larger than it has to be, but still allowed. Now, whilst this might sound complicated, there’s usually a nice way to decide what 𝑚 should be. And at this stage, it’s probably best just to see what this looks like.
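The bound itself is a one-line calculation. Here is a minimal sketch of it as a function (the name lagrange_error_bound is our own, not from the video):

```python
from math import factorial

def lagrange_error_bound(m, n, x, a):
    # |R_n(x)| <= |m / (n + 1)! * (x - a)^(n + 1)|, where m bounds the
    # absolute value of the (n + 1)th derivative of f between a and x.
    return abs(m / factorial(n + 1) * (x - a) ** (n + 1))

# For example, with m = 3/256, n = 2, x = 5, a = 4:
lagrange_error_bound(3 / 256, 2, 5, 4)  # → 0.001953125 (= 1/512)
```

The only genuinely hard step in practice is finding a suitable 𝑚, which is what the worked examples below focus on.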

Find the Lagrange error bound when using the second Taylor polynomial of the function 𝑓 of 𝑥 equals the square root of 𝑥 at 𝑥 equals four to approximate the value square root of five. Round your answer to five decimal places.

We begin by recalling that the Lagrange error bound satisfies the criteria that if the absolute value of the 𝑛 plus oneth derivative of 𝑓 of 𝑥 is less than or equal to some 𝑚, then the absolute value of the remainder term 𝑅 sub 𝑛 of 𝑥 is less than or equal to the absolute value of 𝑚 over 𝑛 plus one factorial times 𝑥 minus 𝑎 to the power of 𝑛 plus one. We’re going to begin by defining each part of our question. We’re going to be using the second Taylor polynomial; that’s 𝑇 sub two of 𝑥. And that means the remainder term we’re interested in is 𝑅 sub two of 𝑥. We’re finding the polynomial for 𝑓 of 𝑥 at 𝑥 equals four. So we’re going to let 𝑎 be equal to four. And we’re using this to estimate the value of the square root of five.

Since 𝑓 of 𝑥 is equal to the square root of 𝑥, we’re going to let 𝑥 be equal to five. Now, let’s go back to our definition. Notice that to find 𝑚, we need to find the 𝑛 plus oneth derivative of 𝑓. We’ll write 𝑓 of 𝑥 as 𝑥 to the power of one-half. And then we’re going to differentiate our function two plus one, which is equal to three times. We recall that to differentiate a term of this form, we multiply the entire term by the exponent and then reduce that exponent by one. So the first derivative 𝑓 prime of 𝑥 is equal to a half times 𝑥 to the power of negative one-half. Then the second derivative is negative a half times a half 𝑥 to the power of negative three over two, which is negative one-quarter 𝑥 to the power of negative three over two.

Then the third derivative — remember, this is the one we’re trying to find — is negative three over two times negative one-quarter 𝑥 to the power of negative five over two. This simplifies to three-eighths times 𝑥 to the power of negative five over two, which we can rewrite as three over eight 𝑥 to the power of five over two. And then we see that we’re looking to maximise the absolute value of three over eight 𝑥 to the power of five over two on the interval between 𝑎 and 𝑥. That’s between four and five. Now, we should be able to see that we can maximise the absolute value of three over eight 𝑥 to the power of five over two by making the denominator as small as possible.

So we’ll set 𝑥 equal to four to achieve this. And we see that we need to find the absolute value of three over eight times four to the power of five over two. Well, this is already positive, so the absolute value signs change nothing. We simply have three over 256. And we now have 𝑚. We’re going to substitute it into our error bound formula, with the value of 𝑛 being equal to two. It’s the absolute value of three over 256 over two plus one factorial times 𝑥 minus 𝑎, which is five minus four, to the power of two plus one. Well, this is equal to one over 512, which is approximately equal to 0.00195. And so the Lagrange error bound when using the second Taylor polynomial of the function 𝑓 of 𝑥 equals the square root of 𝑥 at 𝑥 equals four to approximate the value of the square root of five is 0.00195 correct to five decimal places.
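As a sanity check (our own, not part of the video), we can compare this bound with the true error. The second Taylor polynomial about 𝑎 equals four works out to two plus a quarter of 𝑥 minus four minus one sixty-fourth of 𝑥 minus four squared, computed from 𝑓, 𝑓 prime, and 𝑓 double prime at four:

```python
from math import factorial, sqrt

# m = max |f'''(x)| on [4, 5]: f'''(x) = 3 / (8 x^(5/2)) is largest at x = 4
m = 3 / (8 * 4 ** 2.5)                        # = 3/256
bound = abs(m / factorial(3) * (5 - 4) ** 3)  # = 1/512 ≈ 0.00195

# Second Taylor polynomial about a = 4, evaluated at x = 5
t2 = 2 + (5 - 4) / 4 - (5 - 4) ** 2 / 64
true_error = abs(sqrt(5) - t2)                # ≈ 0.00169, within the bound
```

The actual error is about 0.00169, safely below the Lagrange bound of about 0.00195, as the theory guarantees.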

Now, because the error bound we have calculated is so small, we can be reasonably sure that the second Taylor polynomial of our function approximates the value of the square root of five pretty well for inputs in the closed interval four to five. We’ll now consider how we can use the error bound formula to ensure a given level of accuracy from a Maclaurin series.

Determine the least degree 𝑛 of the Maclaurin polynomial needed to approximate the value of sin of 0.3 with an error less than 0.001 using the Maclaurin series of 𝑓 of 𝑥 equals sin of 𝑥.

We begin by recalling that the Lagrange error bound tells us that if the absolute value of the 𝑛 plus oneth derivative of 𝑓 is less than or equal to some 𝑚, then the absolute value of the remainder term 𝑅 sub 𝑛 of 𝑥 is less than or equal to the absolute value of 𝑚 over 𝑛 plus one factorial times 𝑥 minus 𝑎 to the power of 𝑛 plus one. To answer this question then, we’ll begin by defining each part. We’re not told the degree of our Maclaurin polynomial, so we’ll leave 𝑛 unknown; this is what we’re trying to find. We’re told though that 𝑓 of 𝑥 is equal to sin of 𝑥. And we’re finding the Maclaurin series for 𝑓 of 𝑥. That’s the Taylor series when 𝑎 is equal to zero.

We’re then going to use this to estimate the value of sin of 0.3. Now, since our function 𝑓 of 𝑥 is equal to sin of 𝑥, we can say that we’re going to let 𝑥 be equal to 0.3. We want to ensure that our error bound is less than 0.001. So we’ll start with this formula. We’re going to replace 𝑥 with 0.3 and 𝑎 with zero. And so we have the absolute value of 𝑚 over 𝑛 plus one factorial times 0.3 minus zero to the power of 𝑛 plus one. And of course, we want this error to be less than 0.001. Let’s replace 0.3 minus zero with 0.3. And then we know that 𝑚 is found by maximising the absolute value of the 𝑛 plus oneth derivative on the interval between 𝑎 and 𝑥.

So let’s find an expression for 𝑚 by considering the 𝑛 plus oneth derivative of our function. We know that 𝑓 of 𝑥 is equal to sin of 𝑥. And when we differentiate sin of 𝑥, we get cos of 𝑥. So the first derivative 𝑓 prime of 𝑥 is cos of 𝑥. Differentiating once more, we get negative sin of 𝑥. And then the third derivative 𝑓 triple prime of 𝑥 is negative cos of 𝑥. We differentiate one more time. And we find that the fourth derivative is sin of 𝑥. And so we see we have a cycle. We want to generalise it. So we’re going to say that, in fact, cos of 𝑥 is equal to sin of 𝑥 plus 𝜋 by two. Remember, that’s simply because the sin and cos graphs are horizontal translations of one another.

We say that negative sin 𝑥 is equal to sin of 𝑥 plus 𝜋, negative cos 𝑥 is sin of 𝑥 plus three 𝜋 by two, and sin of 𝑥 is equal to sin of 𝑥 plus two 𝜋. Now, in fact, if we write 𝜋 as two 𝜋 over two and two 𝜋 as four 𝜋 over two, we see that the 𝑛th derivative of 𝑓 is sin of 𝑥 plus 𝑛𝜋 over two. And so the 𝑛 plus oneth derivative, which is what we’re looking for, is sin of 𝑥 plus 𝑛 plus one 𝜋 over two. We want to maximise the absolute value of sin of 𝑥 plus 𝑛 plus one 𝜋 over two on the closed interval zero to 0.3. Well, the largest value of sin of 𝑥 plus 𝑛 plus one 𝜋 over two is one. So we’re going to let 𝑚 be equal to one.
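This closed form for the 𝑛th derivative can be checked numerically against the cycle of derivatives listed above (a quick sketch of our own, not from the video):

```python
from math import sin, cos, pi, isclose

# The nth derivative of sin(x) is sin(x + n*pi/2); compare against the
# explicit cycle cos, -sin, -cos, sin at an arbitrary sample point.
x = 0.7
cycle = [cos(x), -sin(x), -cos(x), sin(x)]   # 1st through 4th derivatives
checks = [isclose(sin(x + n * pi / 2), cycle[(n - 1) % 4], abs_tol=1e-12)
          for n in range(1, 9)]               # all entries come out True
```

Every shift by 𝜋 over two steps one place along the cycle, which is exactly the differentiation pattern.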

Now, note that the maximum value of the sine function on the interval between zero and 0.3 might not actually be one. But we do know it must be less than or equal to one. By letting 𝑚 be equal to one, we know that our error bound might be larger than it has to be, but that’s still allowed. Then our earlier inequality becomes the absolute value of one over 𝑛 plus one factorial times 0.3 to the power of 𝑛 plus one. And this must be less than 0.001. Now, in fact, this is always positive. So we no longer need the absolute value signs. And unfortunately, there’s no nice way at this stage to solve this inequality. So instead, we’re going to try some values of 𝑛, knowing, of course, that it can take integer values only.

We’ll begin by letting 𝑛 be equal to one. Then we have one over one plus one factorial times 0.3 to the power of one plus one. That gives us 0.045. Now, that’s not less than 0.001, but we’re close. Next, let’s try 𝑛 equals two. We get one over two plus one factorial times 0.3 to the power of two plus one. That’s 0.0045, still not less than 0.001, but getting closer. Let’s try 𝑛 equals three. We get one over three plus one factorial times 0.3 to the power of three plus one. That’s 0.0003375, which is indeed less than 0.001. And we can, therefore, say that the value of 𝑛 which ensures that the Maclaurin series approximates sin of 0.3 with an error less than 0.001 is 𝑛 equals three.
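This trial-and-error search is easy to automate. Here is a minimal sketch (the function name and loop are our own); it takes 𝑚 equals one because every derivative of sine is bounded by one:

```python
from math import factorial

def least_degree(x, tolerance, m=1.0):
    # Smallest n with m / (n + 1)! * |x|^(n + 1) < tolerance, found by
    # trying n = 1, 2, 3, ... just as in the worked example.
    n = 1
    while m / factorial(n + 1) * abs(x) ** (n + 1) >= tolerance:
        n += 1
    return n

least_degree(0.3, 0.001)  # → 3
```

The loop terminates because the factorial in the denominator eventually dominates any fixed power of 𝑥.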

In this video, we’ve learned that the Lagrange error bound tells us that if the absolute value of the 𝑛 plus oneth derivative of 𝑓 is less than or equal to some 𝑚, then the absolute value of 𝑅 sub 𝑛 of 𝑥 — that’s the remainder term — is less than or equal to the absolute value of 𝑚 over 𝑛 plus one factorial times 𝑥 minus 𝑎 to the power of 𝑛 plus one. We saw that we generally choose 𝑚 to be the maximum of the absolute value of the 𝑛 plus oneth derivative on our interval, but that we can choose 𝑚 to be larger if we wanted. And this just means our error bound is larger than it needs to be, but still allowed.
