Denormal number
In computer science, denormal numbers or denormalized numbers (now often called subnormal numbers) fill the underflow gap around zero in floating-point arithmetic: any non-zero number whose magnitude is smaller than that of the smallest normal number is subnormal.
For example, if the smallest positive normal number is 1 × β^(−n) (where β is the base of the floating-point system, usually 2 or 10), then any smaller positive numbers that can be represented are denormal.
The significand (or mantissa) of an IEEE number is the part of a floating-point number that represents its significant digits. For a positive normalized number it can be written as m0.m1m2m3...m(p−2)m(p−1), where each mi is a significant digit, p is the precision, and m0 is non-zero. Note that for a binary radix, the leading digit of a normal number is always 1. In a denormal number, the exponent is already the least it can be, so the leading significand digit is instead zero (0.m1m2m3...m(p−2)m(p−1)), which allows numbers closer to zero than the smallest normal number to be represented.
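The boundary between normal and subnormal values can be observed directly. The following Python sketch (using binary64, Python's native float) shows that halving the smallest normal number yields a non-zero subnormal rather than zero:

```python
import sys

# Smallest positive normal double: 2**-1022
smallest_normal = sys.float_info.min
assert smallest_normal == 2.0 ** -1022

# Halving it does not underflow to zero: the result is a
# subnormal number (with a leading significand digit of 0).
subnormal = smallest_normal / 2
assert subnormal != 0.0
assert subnormal == 2.0 ** -1023

# The smallest positive subnormal double is 2**-1074
# (all significand bits zero except the last one).
smallest_subnormal = 2.0 ** -1074
assert smallest_subnormal > 0.0
assert smallest_subnormal / 2 == 0.0  # finally underflows to zero
```

Note that 2**-1074 / 2 rounds to zero because there is no representable value between zero and the smallest subnormal.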
By filling the underflow gap like this, significant digits are lost, but not to the same extent as flushing to zero on underflow (which loses all significant digits throughout the underflow gap). Hence the production of a denormal number is sometimes called gradual underflow, because it allows a calculation to lose precision slowly when the result is small.
In IEEE 754-2008, denormal numbers are renamed subnormal numbers, and are supported in both binary and decimal formats. In binary interchange formats, subnormal numbers are encoded with a biased exponent of 0, but are interpreted with the value of the smallest allowed exponent, which is one greater (i.e., as if it were encoded as a 1). In decimal interchange formats they require no special encoding because the format supports unnormalized numbers directly.
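As an illustration of the binary encoding rule, here is a hand-written decoder sketch for binary32 (the helper name decode_binary32 is ours, not part of any standard library). Note how a biased exponent of 0 is interpreted with the smallest normal exponent, 1 − 127 = −126, and without the implicit leading 1:

```python
import struct

def decode_binary32(bits: int) -> float:
    """Decode a 32-bit IEEE 754 pattern by hand (Inf/NaN omitted)."""
    sign = -1.0 if bits >> 31 else 1.0
    biased_exp = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF
    if biased_exp == 0:
        # Subnormal: no implicit leading 1; the exponent is read as
        # if the biased field held 1, i.e. 1 - 127 = -126.
        return sign * (fraction / 2**23) * 2.0 ** -126
    # Normal: implicit leading 1, exponent = biased_exp - 127.
    return sign * (1 + fraction / 2**23) * 2.0 ** (biased_exp - 127)

# Smallest positive subnormal binary32: bit pattern 0x00000001
tiny = decode_binary32(0x00000001)
assert tiny == 2.0 ** -149
# Matches the hardware interpretation of the same bits:
assert tiny == struct.unpack('<f', (1).to_bytes(4, 'little'))[0]
```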
Background
Denormal numbers provide the guarantee that addition and subtraction of floating-point numbers never underflow: two nearby floating-point numbers always have a representable non-zero difference. Without gradual underflow, the subtraction a − b can underflow and produce zero even though the values are not equal. This can, in turn, lead to division-by-zero errors that cannot occur when gradual underflow is used.
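The guarantee can be checked numerically. In this Python sketch, two nearby normal binary64 values just above the underflow boundary have an exact, non-zero difference, although the difference itself is subnormal:

```python
import sys

a = 2.0 ** -1022 * 1.5   # two nearby normal numbers just above
b = 2.0 ** -1022         # the normal/subnormal boundary
diff = a - b

# With gradual underflow the difference is exact and non-zero,
# even though it is smaller than the smallest normal number:
assert diff == 2.0 ** -1023
assert diff < sys.float_info.min   # i.e. diff is subnormal

# So a guard such as "if a != b: x / (a - b)" can never
# divide by zero; with flush-to-zero semantics it could.
```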
Denormal numbers were implemented in the Intel 8087 while the IEEE 754 standard was being written. They were by far the most controversial feature in the K-C-S format proposal that was eventually adopted,[1] but this implementation demonstrated that denormals could be supported in a practical implementation. Some floating-point units do not directly support denormal numbers in hardware, but instead trap to some kind of software support. While this may be transparent to the user, it can make calculations that produce or consume denormal numbers much slower than similar calculations on normal numbers.
Performance issues
Some systems handle denormal values in hardware, in the same way as normal values. Others leave the handling of denormal values to system software, only handling normal values and zero in hardware. Handling denormal values in software always leads to a significant decrease in performance. But even when denormal values are entirely computed in hardware, the speed of computation is significantly reduced on most modern processors; in extreme cases, instructions involving denormal operands may run as much as 100 times slower.[2][3]
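A rough way to observe this on a particular machine is a micro-benchmark like the following Python sketch. The loop and repetition counts are arbitrary illustrative choices; note that in CPython, interpreter overhead masks much of the hardware effect, so compiled code exhibits the penalty far more clearly:

```python
import timeit

normal = 1e-300      # well inside the binary64 normal range
subnormal = 1e-310   # below 2**-1022, so subnormal in binary64

def scale(x, n=100_000):
    # Repeatedly multiply without accumulating, so the operand
    # stays normal (or subnormal) on every iteration.
    for _ in range(n):
        x * 0.5

# On CPUs that handle subnormals via microcode assists, the second
# measurement can be noticeably larger; with FTZ/DAZ enabled, or on
# hardware with full subnormal support, the two are similar.
t_normal = timeit.timeit(lambda: scale(normal), number=10)
t_subnormal = timeit.timeit(lambda: scale(subnormal), number=10)
print(f"normal: {t_normal:.4f}s  subnormal: {t_subnormal:.4f}s")
```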
Some applications need to contain code to avoid denormal numbers, either to maintain accuracy or to avoid the performance penalty on some processors. For instance, in audio processing applications, denormal values usually represent a signal so quiet that it is below the threshold of human hearing. Because of this, a common measure to avoid denormals on processors where there would be a performance penalty is to cut the signal to zero once it reaches denormal levels, or to mix in an extremely quiet noise signal.[4] Since the SSE2 processor extension, Intel has provided such functionality in CPU hardware, which can round denormalized numbers to zero.[5]
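The cut-to-zero workaround can be sketched as follows. The guard threshold 1e-20 and the helper name flush_tiny_samples are illustrative choices, not taken from any particular audio library; the threshold sits far below audibility but far above the smallest normal binary32 value (about 1.18e-38):

```python
# Illustrative software workaround: flush samples below a threshold
# to zero before they can decay into the denormal range.
DENORMAL_GUARD = 1e-20

def flush_tiny_samples(samples):
    """Zero out samples too quiet to hear, avoiding denormal operands."""
    return [0.0 if abs(s) < DENORMAL_GUARD else s for s in samples]

block = [0.25, -3e-25, 1e-19, 0.0]
assert flush_tiny_samples(block) == [0.25, 0.0, 1e-19, 0.0]
```

The alternative mentioned above, mixing in very quiet noise (dither), keeps feedback paths such as filter delay lines from decaying into the denormal range in the first place.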
References
- ^ An Interview with the Old Man of Floating-Point: Reminiscences elicited from William Kahan by Charles Severance.
- ^ Dooley, Isaac; Kale, Laxmikant (2006-09-12). "Quantifying the Interference Caused by Subnormal Floating-Point Values". http://charm.cs.uiuc.edu/papers/SubnormalOSIHPA06.pdf. Retrieved 2010-11-30.
- ^ Fog, Agner. "Instruction tables: Lists of instruction latencies, throughputs and microoperation breakdowns for Intel, AMD and VIA CPUs". http://www.agner.org/optimize/instruction_tables.pdf. Retrieved 2011-01-25.
- ^ Serris, John (2002-04-16). "Pentium 4 denormalization: CPU spikes in audio applications". http://phonophunk.com/articles/pentium4-denormalization.php. Retrieved 2010-09-03.
- ^ Casey, Shawn (2008-10-16). "x87 and SSE Floating Point Assists in IA-32: Flush-To-Zero (FTZ) and Denormals-Are-Zero (DAZ)". http://software.intel.com/en-us/articles/x87-and-sse-floating-point-assists-in-ia-32-flush-to-zero-ftz-and-denormals-are-zero-daz/. Retrieved 2010-09-03.
Further reading
- Eric Schwarz, Martin Schmookler and Son Dao Trong (June 2003). "Hardware Implementations of Denormalized Numbers". Proceedings 16th IEEE Symposium on Computer Arithmetic (Arith16). 16th IEEE Symposium on Computer Arithmetic. IEEE Computer Society. pp. 104–111. ISBN 0-7695-1894-X. http://www.ece.ucdavis.edu/acsel/arithmetic/arith16/papers/ARITH16_Schwarz.pdf.
See also various papers on William Kahan's web site [1] for examples of where denormal numbers help improve the results of calculations.