Normalization

Normalization is an arithmetic operation used in working with floating point ('real') numbers.

In computers, most implementations of floating point separate the numbers into an exponent and a mantissa (fractional part). A numeric operation may result in the creation of a number with zeros in the most significant bits of the mantissa; further operations on a number of this form may result in unnecessary loss of accuracy, since those bits are not being used to hold useful information.

The usual response is to shift the mantissa left, to remove the leading zeros, and adjust the exponent accordingly; this process is called 'normalization'.
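
A minimal sketch of the process in C, assuming a made-up format with a 16-bit mantissa (treated as a pure fraction, binary point to the left of the top bit) and a separate signed exponent; the field widths are illustrative only, not those of any particular machine:

 #include <stdint.h>
 #include <stdio.h>
 
 /* Toy floating point value: 16-bit fractional mantissa plus a
    signed exponent.  Field widths are illustrative only. */
 struct toyfloat {
     int      exponent;
     uint16_t mantissa;
 };
 
 /* Shift leading zeros out of the mantissa, decrementing the
    exponent once per shift so the represented value is unchanged. */
 void normalize(struct toyfloat *f)
 {
     if (f->mantissa == 0)
         return;                          /* zero has no leading one */
     while ((f->mantissa & 0x8000) == 0) {
         f->mantissa <<= 1;
         f->exponent -= 1;
     }
 }
 
 int main(void)
 {
     struct toyfloat f = { 0, 0x0300 };   /* unnormalized: six leading zeros */
     normalize(&f);
     printf("mantissa %#06x, exponent %d\n", (unsigned)f.mantissa, f.exponent);
     return 0;
 }

Running this shifts the example mantissa from 0x0300 to 0xc000 and lowers the exponent by six, leaving the value unchanged but with every mantissa bit carrying useful information.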

In some implementations, since the most significant bit of a normalized mantissa is always a one, that bit is not stored, allowing one more bit of accuracy; this is sometimes referred to as a 'hidden' bit.
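
As a concrete example of a hidden bit (IEEE 754 single precision, chosen here only because it is the most widely known format that uses one), only 23 fraction bits are stored, and decoding re-inserts the implied leading one:

 #include <stdint.h>
 #include <stdio.h>
 #include <string.h>
 
 int main(void)
 {
     float x = 6.5f;                    /* 1.101 binary * 2^2 */
     uint32_t bits;
     memcpy(&bits, &x, sizeof bits);    /* reinterpret the stored bit pattern */
 
     uint32_t fraction = bits & 0x7FFFFFu;          /* 23 stored fraction bits */
     int      exponent = (int)((bits >> 23) & 0xFFu) - 127;
     uint32_t mantissa = fraction | 0x800000u;      /* re-insert the hidden bit */
 
     printf("stored fraction %#08x, exponent %d\n", (unsigned)fraction, exponent);
     printf("with hidden bit %#08x (24 significant bits)\n", (unsigned)mantissa);
     return 0;
 }

For 6.5 the stored fraction is 0x500000 with an exponent of 2; restoring the hidden bit gives the full 24-bit mantissa 0xd00000, i.e. 1.101 in binary.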