PROJECT TITLE:
Arithmetic Algorithms for Extended Precision Using Floating-Point Expansions - 2016
Many numerical problems require a higher computing precision than the one offered by standard floating-point (FP) formats. One common way of extending the precision is to represent numbers in a multiple-component format. With so-called floating-point expansions, real numbers are represented as the unevaluated sum of standard machine-precision FP numbers. This representation offers the simplicity of using directly available, hardware-implemented and highly optimized FP operations. It is used by multiple-precision libraries such as Bailey's QD or its Graphics Processing Unit (GPU) tuned counterpart, GQD. In this article we briefly revisit algorithms for adding and multiplying FP expansions, then we introduce and prove new algorithms for normalizing, dividing and square rooting FP expansions. The new method used for computing the reciprocal a^(-1) and the square root √a of an FP expansion a is based on an adapted Newton-Raphson iteration in which the intermediate calculations are done using "truncated" operations (additions, multiplications) involving FP expansions. We give a thorough error analysis showing that this allows very accurate computations. More precisely, after q iterations, the computed FP expansion x = x_0 + … + x_(2^q - 1) satisfies, for the reciprocal algorithm, the relative error bound |(x - a^(-1))/a^(-1)| ≤ 2^(-2^q(p-3)-1) and, for the square root one, |x - 1/√a| ≤ 2^(-2^q(p-3)-1)/√a, where p > 2 is the precision of the FP representation used (p = 24 for single precision and p = 53 for double precision).
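To make the expansion idea concrete, here is a small illustrative sketch (our own simplified example, not the algorithms proved in the paper): the classic error-free transformations (Knuth's TwoSum and Dekker's TwoProd) that produce exact two-term expansions, plus one Newton-Raphson step that refines a double-precision reciprocal into a two-term expansion instead of discarding the correction. All function names here are hypothetical.

```python
def two_sum(a, b):
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    bb = s - a                      # the part of b that made it into s
    e = (a - (s - bb)) + (b - bb)   # exact rounding error of the addition
    return s, e

def split(a):
    """Dekker's splitting of a double into two 26-bit halves (hi + lo == a)."""
    c = 134217729.0 * a             # multiplier is 2**27 + 1
    hi = c - (c - a)
    return hi, a - hi

def two_prod(a, b):
    """Dekker's TwoProd: returns (p, e) with p = fl(a * b) and a * b = p + e exactly."""
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

def reciprocal_expansion(a):
    """One Newton-Raphson step x*(2 - a*x) for 1/a, keeping the
    correction as a second expansion term instead of rounding it away."""
    x = 1.0 / a               # double-precision starting approximation
    p, e = two_prod(a, x)     # a*x = p + e, exactly
    r = (1.0 - p) - e         # residual 1 - a*x, computed accurately
    return two_sum(x, x * r)  # (hi, lo): two-term expansion of 1/a
```

For a = 3.0 the unevaluated sum hi + lo agrees with 1/3 to roughly twice double precision, which is the kind of behavior the paper's error bounds quantify rigorously for longer expansions and more iterations.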