Project 1 for Foundations of Computational Mathematics I at Florida State University.
We compare three addition algorithms and demonstrate their theoretical error bounds. Using a variety of test sums, we verify that the computed results are as accurate as the theory guarantees, and that the algorithms are stable, as assured by the mathematics. We also verify that addition is a well conditioned problem.
We numerically verify (1) the conditioning upper bound on addition problems, and (2) the stability of three addition algorithms. The three algorithms in question are the recursive cumulative sum in single precision IEEE arithmetic, the binary fan-in tree, and the cumulative sum again in double precision (algorithms 1, 2, and 3 hereafter). Before the stability analysis, we show that the three algorithms produce correct results (as correct as they can be, given their limitations). The shortcomings of floating point arithmetic relative to exact arithmetic are reproduced analogously by the shortcomings of single precision floats relative to double precision floats.
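For concreteness, the three algorithms can be sketched in Python with NumPy. The function names and the random test vector below are our own illustration, not part of the project specification:

```python
import numpy as np

def cumulative_sum(x, dtype):
    """Left-to-right recursive cumulative sum: s_k = s_{k-1} + x_k.
    Algorithm 1 with dtype=np.float32, algorithm 3 with dtype=np.float64."""
    s = dtype(0.0)
    for xi in x:
        s = dtype(s + dtype(xi))  # each addition rounds in the working precision
    return s

def fanin_sum(x, dtype=np.float32):
    """Algorithm 2: binary fan-in tree; odd-length levels are padded with zero."""
    vals = [dtype(xi) for xi in x]
    while len(vals) > 1:
        if len(vals) % 2 == 1:
            vals.append(dtype(0.0))
        vals = [dtype(vals[i] + vals[i + 1]) for i in range(0, len(vals), 2)]
    return vals[0]

x = np.random.default_rng(0).uniform(0.0, 1.0, 10**5)
s1 = cumulative_sum(x, np.float32)  # algorithm 1: single precision cumulative sum
s2 = fanin_sum(x, np.float32)       # algorithm 2: single precision fan-in tree
s3 = cumulative_sum(x, np.float64)  # algorithm 3: double precision cumulative sum
```

On well conditioned data such as this, all three results agree to several significant digits, with the single precision cumulative sum the least accurate.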
We pose the problem in the following way: find $s$ that satisfies
$$ s = \sum_{i=1}^{n} x_i $$
given fixed $x \in \mathbb{R}^n$ [6]. Under this constraint, we may view $s$ as a function of $x$. The problem is well conditioned because
$$ \kappa_{\mathrm{rel}} = \frac{\sum_{i=1}^{n} |x_i|}{\left| \sum_{i=1}^{n} x_i \right|}, $$
which is moderate whenever the sum does not suffer severe cancellation, and equals $1$ when all of the $x_i$ share a sign.
One may bound the absolute error in a similar way [2]. Here, $\kappa_{\mathrm{rel}}$ and $\kappa_{\mathrm{abs}}$ denote the relative and absolute condition numbers.
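The relative condition number can be evaluated directly with the formula above. The two data vectors below are arbitrary examples of ours, chosen to contrast the well and ill conditioned cases:

```python
import numpy as np

def kappa_rel(x):
    """Relative condition number of summation: sum(|x_i|) / |sum(x_i)|."""
    return np.sum(np.abs(x)) / abs(np.sum(x))

well = np.array([1.0, 2.0, 3.0])   # all one sign: kappa = 1, well conditioned
ill = np.array([1.0, 1e8, -1e8])   # severe cancellation: kappa is about 2e8
print(kappa_rel(well))  # 1.0
print(kappa_rel(ill))
```

Cancellation, not the algorithm, is what makes the second sum hard: every summation method inherits the amplification factor $\kappa_{\mathrm{rel}}$.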
The three algorithms merely approximate $s$. In this investigation, we consider two sources of error that prevent us from attaining $s$ exactly [1]:
- finite precision representation of real numbers. These representations not only perturb the data $x$, but also restrict the range of values that the computed sum may take.
- accumulation of errors due to floating point arithmetic.
If $\hat{s}$ is an approximation of $s$, then two quantities of interest are $|\hat{s} - s|$ and $|\hat{s} - s| / |s|$. We call the former the absolute forward error and the latter the relative forward error. We derive upper bounds for each.
In fact, we show that all three algorithms are backwards stable. That is, the forward errors are no greater than a modest (dimension dependent) multiple of $u \sum_{i=1}^{n} |x_i|$, and for each approximate solution $\hat{s}$, there exists a vector $\hat{x}$ (close to $x$ relative to $\|x\|$) that satisfies $\hat{s} = \sum_{i=1}^{n} \hat{x}_i$. Here, $u$ denotes unit round off.
It has been shown [2] that if $x$ is a vector of floating point numbers, then the errors have the approximate bounds
$$ |\hat{s} - s| \lesssim (n-1)\, u \sum_{i=1}^{n} |x_i|, \qquad \frac{|\hat{s} - s|}{|s|} \lesssim (n-1)\, u \, \frac{\sum_{i=1}^{n} |x_i|}{\left| \sum_{i=1}^{n} x_i \right|}. $$
If instead we require the data to be real valued, then this does not change the bounds (provided that the data are within range, i.e. not too large). Indeed, for each real $x_i$ within range, there exist a floating point number $\mathrm{fl}(x_i)$ and a $\delta_i$ with $|\delta_i| \le u$ for which $\mathrm{fl}(x_i) = x_i (1 + \delta_i)$. Thus substituting $\mathrm{fl}(x_i)$ for $x_i$ in the sum does not change the order of the bounds.
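The first bound can be checked numerically. This sketch is our own illustration: it uses $u = 2^{-24}$ for IEEE single precision, an arbitrary random vector, and a reference sum that accumulates the same single precision data in double precision:

```python
import numpy as np

u = 2.0 ** -24                       # unit round off, IEEE single precision
rng = np.random.default_rng(1)
x32 = rng.uniform(-1.0, 1.0, 10**4).astype(np.float32)  # data already floats

s_hat = np.float32(0.0)
for xi in x32:                       # algorithm 1: recursive cumulative sum
    s_hat = np.float32(s_hat + xi)

s_ref = np.sum(x32.astype(np.float64))               # near-exact reference sum
bound = (len(x32) - 1) * u * np.sum(np.abs(x32.astype(np.float64)))
assert abs(float(s_hat) - s_ref) <= bound            # forward error within bound
```

In practice the observed error falls far below the worst case bound, since the individual rounding errors rarely align.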
To show that algorithm 1 is backwards stable, recall [3] that we may express $\hat{s}$ in the form
$$ \hat{s} = \sum_{i=1}^{n} x_i (1 + \eta_i), $$
where $|\eta_i| \lesssim (n-1) u$. Hence, $\hat{s}$ actually solves the problem with data
$$ \hat{x}_i = x_i (1 + \eta_i). $$
Our backward error (using the $1$-norm) is
$$ \frac{\| \hat{x} - x \|_1}{\| x \|_1} = \frac{\sum_{i=1}^{n} |x_i| |\eta_i|}{\sum_{i=1}^{n} |x_i|} \le \max_i |\eta_i|, $$
so the perturbation $\hat{x} - x$ is small relative to $x$ because each $\eta_i$ is small.
Assume that $n = 2^k$ for some $k \in \mathbb{N}$. If this is not the case, then take $k = \lceil \log_2 n \rceil$ and let $x_i = 0$ for $n < i \le 2^k$. For floating point $x$, we claim that
$$ \hat{s} = \sum_{i=1}^{n} x_i (1 + \eta_i), \qquad |\eta_i| \lesssim u \log_2 n, $$
for the binary fan-in tree. For proof, see 1.3.c in the written homework solutions. As was the case before, real valued data does not change the order of the bound; we may take $|\eta_i| \lesssim u (\log_2 n + 1)$ if we wish to be conservative.
Just like algorithm 1, algorithm 2 solves the problem with data $\hat{x}_i = x_i (1 + \eta_i)$. We attain a similar expression for the backwards error (see homework problem 1.3.b.), with the $\eta_i$ being different, but still small ($|\eta_i| \lesssim u \log_2 n$).
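The contrast between the error growth of algorithm 1, proportional to $n$, and that of algorithm 2, proportional to $\log_2 n$, can be observed numerically. This sketch (again our own illustration) sums $2^{17}$ positive single precision numbers both ways and compares against a double precision reference:

```python
import numpy as np

def fanin_sum32(vals):
    """Binary fan-in sum in single precision; odd levels padded with zero."""
    vals = list(vals)
    while len(vals) > 1:
        if len(vals) % 2 == 1:
            vals.append(np.float32(0.0))
        vals = [np.float32(vals[i] + vals[i + 1]) for i in range(0, len(vals), 2)]
    return vals[0]

x32 = np.random.default_rng(2).uniform(0.0, 1.0, 2**17).astype(np.float32)
ref = np.sum(x32.astype(np.float64))       # trusted double precision reference

seq = np.float32(0.0)
for xi in x32:                             # algorithm 1: cumulative sum
    seq = np.float32(seq + xi)

err_seq = abs(float(seq) - ref) / ref      # bounded by roughly (n-1)u
err_fanin = abs(float(fanin_sum32(x32)) - ref) / ref  # bounded by roughly u*log2(n)
```

With $n = 2^{17}$ and $u = 2^{-24}$, the worst case relative errors on positive data are roughly $n u \approx 8 \times 10^{-3}$ for algorithm 1 versus $17 u \approx 10^{-6}$ for algorithm 2.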
Unit round off for double precision floats ($u_d = 2^{-53} \approx 1.1 \times 10^{-16}$) is roughly $2^{29} \approx 5 \times 10^{8}$ times smaller than that of single precision ($u = 2^{-24} \approx 6.0 \times 10^{-8}$). We therefore neglect the accumulation of rounding error due to double precision addition. This may be problematic when $n$ is large enough that $n u_d$ is no longer negligible, but we do not consider such sums here. The algorithm in question is
$$ \hat{s} = \mathrm{fl}_d\!\left( \cdots \mathrm{fl}_d\big( \mathrm{fl}_d(x_1 + x_2) + x_3 \big) \cdots + x_n \right), $$
where $\mathrm{fl}_d$ denotes rounding to the nearest double precision float. It follows, exactly as for algorithm 1, that
$$ \hat{s} = \sum_{i=1}^{n} x_i (1 + \eta_i), \qquad |\eta_i| \lesssim (n-1) u_d. $$
In particular, the $\eta_i$ are negligible for the problem sizes considered here. As before, $\hat{s}$ solves the problem with data $\hat{x}_i = x_i (1 + \eta_i)$, and the backward error is small: $\| \hat{x} - x \|_1 / \| x \|_1 \lesssim (n-1) u_d$.
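Because each $|\eta_i|$ sits at the $u_d$ level, the double precision cumulative sum is, for our purposes, exact. This sketch (our own, with arbitrary data) checks the $(n-1) u_d \sum |x_i|$ bound against Python's `math.fsum`, which returns the exactly rounded sum:

```python
import math
import numpy as np

u_d = 2.0 ** -53                     # unit round off, IEEE double precision
x = np.random.default_rng(3).uniform(-1.0, 1.0, 10**5)

s_hat = 0.0
for xi in x:                         # algorithm 3: cumulative sum in double
    s_hat += float(xi)

s_exact = math.fsum(x)               # exactly rounded reference sum
bound = (len(x) - 1) * u_d * float(np.sum(np.abs(x)))
assert abs(s_hat - s_exact) <= bound # error at the u_d level
```

The absolute error here is orders of magnitude below the single precision round off, which justifies using algorithm 3 as the reference for algorithms 1 and 2.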
[1] Gallivan, K. A. Set 2: Representation, Conditioning and Error, Lecture Notes. Foundations of Computational Math I, Florida State University.
[2] Gallivan, K. A. Set 3: Finite Precision Arithmetic and Numerical Stability, Lecture Notes. Foundations of Computational Math I, Florida State University.
[3] Gallivan, K. A. Set 4: Numerical Stability Examples, Lecture Notes. Foundations of Computational Math I, Florida State University.
[4] Hunter, J. D. (2007). Matplotlib: A 2D Graphics Environment. Computing in Science & Engineering, 9(3), 90-95.
[5] Kincaid, D. & Cheney, W. (2002) Numerical Analysis: Mathematics of Scientific Computing. Brooks/Cole.
[6] Quarteroni, A., Sacco, R., & Saleri, F. (2000). Numerical Mathematics. Springer.