A subject I am particularly interested in is everything to do with numerical and mathematical algorithms, so I have decided to start a series on numerics. A second series will be on algorithms and discrete maths. The topics will not necessarily be published in any logical order, as I will write about whatever comes to mind, but I will provide a page here which will act as a kind of index. In this way I hope that, after some time, I will be able to provide a tutorial on the subjects, together with the odd little gem strewn in between.

### Trapezium Method of Numerical Integration

When evaluating an integral using the trapezium rule, the function *f* to be integrated is evaluated at discrete points *x*_{i} between the integration boundaries *a* and *b*,

*x*_{i} = *a* + *hi*,

where the interval size *h* is given by *h* = (*b* − *a*)/*N*, *N* is the number of intervals, and *i* = 0, 1, …, *N*. I will also refer to *N* as the resolution. In practice *N* is often chosen to be a power of 2. There is no particular reason for this other than the fact that one commonly increases the resolution by a factor of 2 when improving the accuracy.

Let's abbreviate the notation by defining the discrete function values

*f*_{i}^{N} = *f*(*x*_{i}) = *f*(*a* + *hi*).

Here I used the upper index *N* to clarify that the discrete function values depend on the resolution. With this definition we can now write the trapezium approximation *T*_{N} of resolution *N* to the integral as

*T*_{N} = *h*(*f*_{0}^{N}/2 + *f*_{1}^{N} + *f*_{2}^{N} + … + *f*_{N − 1}^{N} + *f*_{N}^{N}/2).
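The sum above translates directly into code. Here is a minimal Python sketch (the function and variable names are my own):

```python
import math

def trapezium(f, a, b, n):
    """Trapezium approximation T_N of the integral of f over [a, b] with n intervals."""
    h = (b - a) / n
    # End points enter with weight 1/2, interior points with weight 1.
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# Integrate sin(pi*x) over [0, 1]; the exact value is 2/pi.
approx = trapezium(lambda x: math.sin(math.pi * x), 0.0, 1.0, 64)
```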

What this means geometrically is explained in the following graph. The integral of a function is the area below the graph of that function. The trapezium rule approximates this area by adding up the areas of the trapezoids created by the x-axis and the function values at the discrete points. Each trapezoid has the area

*A*_{i} = *h*(*f*_{i}^{N} + *f*_{i + 1}^{N})/2.

Summing these areas up, one obtains the above formula. Let's do an example. We want to integrate the function sin(π*x*) between the limits 0 and 1. Using *N* = 4 we get

*T*_{4} = (1/4)(0/2 + √2/2 + 1 + √2/2 + 0/2) ≈ 0.6036.

Comparing this with the exact solution

∫_{0}^{1}sin(π*x*) d*x* = 2/π ≈ 0.6366,

we see that, with this crude approximation, the error is about 0.033, or roughly 5%.
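These numbers are easy to verify with a few lines of Python:

```python
import math

h = 0.25
f = [math.sin(math.pi * i * h) for i in range(5)]   # f_0, ..., f_4
t4 = h * (f[0] / 2 + f[1] + f[2] + f[3] + f[4] / 2)
exact = 2 / math.pi

print(round(t4, 4), round(exact - t4, 4))  # 0.6036 0.0331
```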

Of course one expects this approximation to get better as *N* increases and thus *h* decreases. In fact, one can show that, for sufficiently smooth functions,

|*T*_{N} − ∫_{a}^{b}*f*(*x*) d*x*| = *O*(*N*^{ − 2}).

The above equation means that the error in the trapezium approximation decreases quadratically with the number of steps.
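One can watch this happen numerically: each doubling of *N* should divide the error by roughly four. A small sketch, reusing the sin(π*x*) example (the helper function is my own):

```python
import math

def trapezium(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

exact = 2 / math.pi
errors = [abs(trapezium(lambda x: math.sin(math.pi * x), 0.0, 1.0, 2**k) - exact)
          for k in range(2, 8)]  # N = 4, 8, ..., 128
# Successive error ratios should approach 4, the signature of O(N^-2) convergence.
ratios = [e1 / e2 for e1, e2 in zip(errors, errors[1:])]
```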

Let's see this in our example. In the diagram I plotted the error against the resolution on a double logarithmic plot. Up to *N* = 10^{6} the curve is almost a straight line with a slope of −2. To show this, the green line is a function proportional to *N*^{ − 2}. This means that, in order to reduce the error by a factor of four, one has to double the resolution.

One interesting feature of the trapezium rule is the fact that it makes doubling the resolution easy for us. When going from *N* to 2*N*, one does not have to evaluate the function at all the 2*N* + 1 points. Instead one can reuse the result *T*_{N} of the previous approximation. The rule is

*T*_{2N} = *T*_{N}/2 + *h*_{2N}(*f*_{1}^{2N} + *f*_{3}^{2N} + … + *f*_{2N − 1}^{2N}),

where *h*_{2N} = (*b* − *a*)/(2*N*) and the sum runs over the new, odd-indexed points only. This formula makes it easy to successively increase the resolution until the required accuracy is achieved.
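In code, the update rule could be used like this (a sketch; the tolerance-based stopping criterion and the names are my own choices, not part of the rule itself):

```python
import math

def trapezium_doubling(f, a, b, tol=1e-8, max_doublings=25):
    """Refine the trapezium approximation T_N -> T_2N until it stabilises."""
    n = 1
    h = b - a
    t = h * (f(a) + f(b)) / 2           # T_1
    for _ in range(max_doublings):
        h /= 2                          # h_2N = h_N / 2
        # Only the new, odd-indexed points require fresh function evaluations.
        t_new = t / 2 + h * sum(f(a + (2 * i + 1) * h) for i in range(n))
        if abs(t_new - t) < tol:
            return t_new
        t, n = t_new, 2 * n
    return t

result = trapezium_doubling(lambda x: math.sin(math.pi * x), 0.0, 1.0)
```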

Finally, I'd like to comment on the behaviour of the error when *N* gets really large. In the figure above you can see that the error levels off at about 2 × 10^{ − 7} when *N* increases above 2000. This error is above the numerical accuracy, which is about 10^{ − 8} in my calculations. When *N* increases further, the error starts to increase again until, in the end, it is even larger than before. The reason for this large error is that the individual terms in the sum become very small compared to the running total. Adding up many small values results in a loss of accuracy due to rounding errors.
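This loss of accuracy is easy to reproduce. The snippet below is my own illustration (using NumPy), not the calculation behind the figure: it accumulates ten million small terms strictly left to right in single precision and compares the result with a reference sum computed in double precision.

```python
import numpy as np

terms = np.full(10**7, 0.1, dtype=np.float32)

# Strict left-to-right accumulation in float32, as a naive summation loop would do it.
sequential = float(np.add.accumulate(terms)[-1])

# Reference sum, accumulated in double precision.
accurate = float(np.sum(terms, dtype=np.float64))

# Once the running total dwarfs the individual terms, each addition loses
# low-order bits, and the sequential float32 result drifts far from the true sum.
```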