NEWTON, Ask A Scientist!
Natural Logarithms Calculus
Name: Scott
Status: other
Age: 50s
Location: N/A
Country: N/A
Date: N/A 


Question:
I have forgotten how to differentiate and integrate natural logarithms (if I ever knew) and my current statistics class requires it.

In particular, I am supposed to know how to integrate these functions:

a * e^(-b*x) = -a/b*e^(-bx)
a * x * e^(-b*x) = (-a/b^2 - a*x/b) * e^(-b*x)
a * x^2 * e^(-b*x) = a * (-2/b^3 - 2*x/b^2 - x^2/b) * e^(-b*x)
a * x * e^(-b*x^2) = -a/(2*b) * e^(-b * x^2)

The fact that I have the answers in no way implies I understand how the integration proceeds.



Replies:
What you have is not the integral per se but the antiderivative. The difference is key. Doing the integral is well-defined algorithmically, something you can easily program a computer to do, but finding the antiderivative is black magic, an art, impossible to program.

The integral of a function can be found by drawing a graph of the function and determining the area enclosed by the curve, the x-axis, and vertical lines at the upper and lower limits of integration. You can see how you might program a computer to do this: divide the area into tiny vertical strips. Approximate each strip as a rectangle of fixed (tiny) width dx and height f(x) = the value of the function at the point x. Sum up the areas dA(x) = f(x) dx of all these little strips,

A(x1,x2) = Sum_{x from x1 to x2} dA(x) = Sum_{x1,x2} f(x) dx (1)

In the limit that the width of each strip becomes infinitesimal, dx -> 0, this sum is defined to be the integral, and indeed the symbol for an integral is an elongated, stylized ``S'' for ``sum.''
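If you happen to have Python handy, here is a rough sketch of that rectangle-sum idea; the function and the limits in the example are arbitrary choices, just to have something concrete to add up:

import math

def riemann_sum(f, x1, x2, n=100000):
    # Chop [x1, x2] into n thin strips of width dx and add up f(x) * dx for each one.
    dx = (x2 - x1) / n
    total = 0.0
    for i in range(n):
        x = x1 + i * dx           # left edge of the i-th strip
        total += f(x) * dx        # area of one thin rectangle
    return total

# Example: area under e^(-2x) between 0 and 1.
# The antiderivative -e^(-2x)/2 gives the exact value (1 - e^(-2))/2.
print(riemann_sum(lambda x: math.exp(-2 * x), 0.0, 1.0))
print((1.0 - math.exp(-2.0)) / 2.0)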

Well and good. But you recognize immediately that this has nothing to do with what you are after, with your equations above. So let us consider a more complicated problem: is there, in fact, ANOTHER function F(x), such that

A(x1,x2) = F(x2) - F(x1) ? (2)

That is, is there a function which when evaluated between the limits specified tells us the result of doing the integral? Common sense suggests that in general the answer should be yes. After all, presumably there is a unique answer to the question of what the area under the function is between 0 and x, and that is really all you need to define a function: a one-to-one mapping between x coordinate and the value of the function F(x).

This function would be called the antiderivative, because it ``undoes'' the action of taking the derivative. That is, it is clear (and is the fundamental theorem of integral calculus) that

dF/dx = f(x)    (3)

The derivative of F is f itself. Why so? Consider increasing the upper limit of the integral ever so slightly, by some tiny amount dx'. Clearly the change in area is given by


dA = (dF/dx)(x2) dx'    (4)

and from our algorithm above it must also be given by the area of the extra tiny strip we have tacked onto the end of the area defined by the curve and the original limits,

d A = f(x2) dx' (5)

Comparing (4) and (5) establishes (3).

Note, however, that as soon as you proceed to two- and three-dimensional functions, like those in physics that describe motion in three dimensions, then all bets are off, and indeed in this more general case it is typical that there is most definitely NOT an antiderivative for the generic multidimensional function f(x,y,z). Indeed, the entirety of thermodynamics can be said, in a sense, to proceed from the mere assertion that the equations of state (mathematical relations describing observed correlations between pressure, volume, and temperature) must have an antiderivative, called the free energy. But I digress.

How do you find an antiderivative for the generic function f(x)? No one knows! There is no recipe for constructing it, only a check once you have a candidate. Given a function f(x), we do not know how to construct the antiderivative F(x), only how to verify, using (3), whether F is or is not the antiderivative of f.
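That verification step is easy to mechanize. A minimal sketch, assuming Python and using a centered finite difference in place of the exact derivative, might look like this (the sample f and F are simply the first pair from your list):

import math

def is_antiderivative(F, f, points, h=1.0e-6, tol=1.0e-4):
    # Test equation (3) numerically: the slope of F at x, estimated by the
    # centered difference (F(x+h) - F(x-h)) / (2h), should match f(x).
    return all(abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < tol for x in points)

a, b = 3.0, 2.0
f = lambda x: a * math.exp(-b * x)          # the first function from your list
F = lambda x: -(a / b) * math.exp(-b * x)   # the claimed antiderivative
print(is_antiderivative(F, f, [0.0, 0.5, 1.0, 2.0]))    # True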

We do not navigate blindly, however. Experience teaches that most functions are combinations of simpler functions, and a few rules will let you construct the antiderivative of the complex function out of some combination of the antiderivatives of the simpler functions, which can be worked out by trial and error.

For example, inspection alone serves to prove that if

f(x) = x^n

that is, x raised to the power n (for any n other than -1; the n = -1 case is where the natural logarithm comes in), then a natural candidate for the antiderivative is

F(x) = (n+1)^(-1) x^(n+1).

Is this the only possible candidate? This gets into questions of uniqueness which I am not qualified to address, so let's just assume the answer is yes. (Weasel, weasel, weasel! Too true, alas. But we press on nonetheless. . .)

Now a polynomial is a complex function, but composed entirely of powers, e.g.:

f(x) = a x^2 + b x + c (6)

One rule of integral calculus is that you may integrate the terms of a sum independently and sum the results; that is, the antiderivative of a sum of functions is the sum of the antiderivatives of each function, viz.:

F(x) = a 3^(-1) x^3 + b 2^(-1) x^2 + c x    (7)

You can take the derivative of (7) and of each term therein and verify not only that (7) is the antiderivative of (6), but that EACH TERM of (7) is the antiderivative of the corresponding term in (6).
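If you have Python with the SymPy library available, it will happily do this check for you; a small sketch, using the symbols a, b, c from (6):

import sympy as sp

x, a, b, c = sp.symbols('x a b c')

f = a*x**2 + b*x + c                  # equation (6)
F = a*x**3/3 + b*x**2/2 + c*x         # equation (7)

print(sp.expand(sp.diff(F, x) - f))   # 0: differentiating (7) gives back (6)
print(sp.diff(a*x**3/3, x))           # a*x**2: each term works on its own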

There are many rules like this. Constructing an antiderivative is a question of (1) application of the rules, (2) manipulation of the original function, and (3) recognition of the known antiderivatives of simple functions. For example, let's look at your four functions:

dF/dx = f(x) = a e^(-b x)    (8)

First we manipulate (8) by making the substitution y = - b x,

(dF/dy)(dy/dx) = a e^y    (9)

I've used the chain rule from differential calculus on the left-hand side. We need to insert dy/dx = -b, like so,

dF/dy = (-a/b) e^y    (10)

Now we must recognize that the antiderivative of f(x) = e^x is F(x) = e^x,

F(y) = (-a/b) e^y (11)

and undo our manipulation,

F(x) = (-a/b) e^(- b x) (12)
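If you want an independent check of (12), a couple of lines of SymPy (assuming you have it installed) will differentiate it back and also produce the antiderivative on their own:

import sympy as sp

x = sp.symbols('x')
a, b = sp.symbols('a b', positive=True)

F = -(a/b) * sp.exp(-b*x)                # equation (12)
print(sp.simplify(sp.diff(F, x)))        # a*exp(-b*x), which is (8) again
print(sp.integrate(a*sp.exp(-b*x), x))   # SymPy's own antiderivative: -a*exp(-b*x)/b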

All done. This is your first equation. Now let us look at the second,

dF/dx = f(x) = a x e^(-b x)    (13)

We see here a function we can handle, multiplied by another. It's time to use one of the rules, the product rule in particular, which says,

d/dx [ f(x) g(x) ] = f (dg/dx) + (df/dx) g    (14)

If we let df/dx on the RHS be the function of which we already know how to take the antiderivative, e^(-b x), then we have:

d/dx [ (-a/b) e^(-b x) g(x) ] = (-a/b) e^(-b x) (dg/dx) + a e^(-b x) g(x)    (15)

This is progress??? Well, yes it is. Let us now set g(x) equal to the damned newcomer, g(x) = x. That will make the last term on the RHS equal to our problem child, the RHS of (13). We also need dg/dx to stick into the first term on the RHS of (15), but dg/dx = 1, so we have

d/dx [ (-a/b) x e^(-b x) ] = (-a/b) e^(-b x) + a x e^(-b x)    (16)

Rearranging:

a x e^(-b x) = dF/dx = d/dx [ (-a/b) x e^(-b x) ] + (a/b) e^(-b x)    (17)

Can you see it coming? Now look at the RHS of (17). The antiderivative of the first term is obvious -- it's what's inside the square brackets. And the antiderivative of the second term we already know, because it's almost exactly our old friend from (8) above. That is, using (12):

dF/dx = d/dx [ (-a/b) x e^(-b x) ] + (1/b) d/dx [ (-a/b) e^(-b x) ]    (18)

or, using the rule that the sum of antiderivatives is the antiderivative of a sum,

dF/dx = d/dx [ (-a/b) x e^(-b x) - (a/b^2) e^(-b x) ]    (19)

Well, we don't need to be a genius to realize that F must be just exactly what is inside the square brackets. And if you factor out the exponentials you get your second equation.
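A quick symbolic check of (19), assuming SymPy again, is to differentiate the bracketed expression and watch the difference from a x e^(-b x) collapse to zero:

import sympy as sp

x = sp.symbols('x')
a, b = sp.symbols('a b', positive=True)

F = -(a/b)*x*sp.exp(-b*x) - (a/b**2)*sp.exp(-b*x)     # the bracket in (19)
print(sp.simplify(sp.diff(F, x) - a*x*sp.exp(-b*x)))  # 0, so dF/dx really is a*x*exp(-b*x)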

The third of your equations can be solved similarly, but you need to ``integrate by parts'' (use the product rule) twice.
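Rather than grind through the two integrations by parts here, a SymPy sketch shows the shape of the answer and confirms it differentiates back correctly:

import sympy as sp

x = sp.symbols('x')
a, b = sp.symbols('a b', positive=True)

F3 = sp.integrate(a*x**2*sp.exp(-b*x), x)
print(F3)                                                 # equivalent to a*(-x^2/b - 2x/b^2 - 2/b^3)*e^(-b x)
print(sp.simplify(sp.diff(F3, x) - a*x**2*sp.exp(-b*x)))  # 0: it differentiates back correctly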

The fourth of your equations is more troublesome, because the antiderivative of e^(-x^2) is called the error function Erf(x), and cannot be written down in terms of simpler functions (like x raised to powers, or logs, sines and cosines, that kind of thing). So in general integrals of functions containing e^(-x^2) can't be easily done. This one in particular is easy, however, by good luck:

dF/dx = a x e^(-b x^2)    (20)

Let y = - b x^2. Then

(dF/dy)(dy/dx) = a x e^y    (21)

I need dy/dx = - 2 b x, which gives:

dF/dy = - 0.5 (a/b) e^y    (22)

and I use the known antiderivative of e^y,

F(y) = - 0.5 (a/b) e^y (23)

and undo the manipulation,

F(x) = - 0.5 (a/b) e^(- b x^2) (24)

That's your fourth equation. This was lucky, however. If the x were not in front, it would not have canceled on both sides in the transition from (21) to (22), and we would have found the problem insoluble. If it had been an x^2 instead of an x, we would also have been stuck.
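SymPy makes the luck visible, if you care to see it: asked for the antiderivative of e^(-x^2) it must reach for the error function, while the x out front in your fourth equation keeps everything elementary:

import sympy as sp

x = sp.symbols('x')
a, b = sp.symbols('a b', positive=True)

print(sp.integrate(sp.exp(-x**2), x))           # sqrt(pi)*erf(x)/2 -- the error function, not elementary
print(sp.integrate(a*x*sp.exp(-b*x**2), x))     # -a*exp(-b*x**2)/(2*b), your fourth equation
print(sp.integrate(a*x**2*sp.exp(-b*x**2), x))  # with x^2 in front, erf shows up again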

You will find a standard college calculus textbook, like Thomas and Finney, chock-a-block full of fun stuff like this. It's worth studying, because while statistics is good fun and on occasion useful, calculus underlies *all* of modern physics, and a great deal else besides -- modern economics, derivative securities -- who knows? perhaps sex, death, and the meaning of life itself. . .

Or maybe not. Good luck.

Dr. Grayce


