Normalization


Normalization is an arithmetic operation used when working with floating-point ('real') numbers.

In computers, most implementations of floating point separate a number into an exponent and a mantissa (fractional part). A numeric operation may produce a number with zeros in the most significant bits of the mantissa; further operations on a number in this form may cause an unnecessary loss of accuracy, since those bits are not being used to hold useful information.

The usual response is to shift the mantissa to the left, removing the leading zeros, and to adjust the exponent accordingly; this process is 'normalization'.
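
A minimal sketch of this shift-and-adjust loop in C, using a made-up unpacked format (the toyfloat struct, its 16-bit mantissa, and the example value are purely illustrative, not any particular machine's representation):

 #include <stdint.h>
 #include <stdio.h>
 
 /* A toy unpacked float: value = mantissa * 2^exponent, with the
    binary point at the left of a 16-bit mantissa.  The field widths
    here are illustrative, not any specific hardware format. */
 struct toyfloat {
     uint16_t mantissa;   /* fraction bits, MSB first */
     int      exponent;
 };
 
 /* Normalize: shift left until the most significant mantissa bit is 1,
    decrementing the exponent once per shift so the value is unchanged. */
 void normalize(struct toyfloat *f)
 {
     if (f->mantissa == 0)
         return;                      /* zero cannot be normalized */
     while ((f->mantissa & 0x8000) == 0) {
         f->mantissa <<= 1;
         f->exponent -= 1;
     }
 }
 
 int main(void)
 {
     /* 0x0B00 has four leading zeros in the mantissa */
     struct toyfloat f = { 0x0B00, 5 };
     normalize(&f);
     printf("mantissa=%04X exponent=%d\n", f.mantissa, f.exponent);
     /* prints: mantissa=B000 exponent=1 */
     return 0;
 }

Each left shift doubles the mantissa, so the exponent is decremented once per shift to leave the represented value unchanged.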

In some implementations, since the most significant bit of a normalized mantissa is always a one, that bit is not stored at all, allowing one more bit of accuracy to be retained; this is sometimes referred to as a 'hidden' bit.
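
IEEE 754 single precision is one well-known format that uses a hidden bit: it stores a 23-bit fraction, and for normalized numbers the leading one is implied, giving 24 significant bits. A sketch of unpacking such a number by hand and restoring the hidden bit (the variable names are just for illustration):

 #include <stdint.h>
 #include <stdio.h>
 #include <string.h>
 
 int main(void)
 {
     float x = 6.5f;                  /* 110.1 binary = 1.101 * 2^2 */
     uint32_t bits;
     memcpy(&bits, &x, sizeof bits);  /* reinterpret the bit pattern */
 
     uint32_t sign     = bits >> 31;
     int32_t  exponent = (int32_t)((bits >> 23) & 0xFF) - 127;  /* unbias */
     uint32_t fraction = bits & 0x7FFFFF;        /* 23 stored bits */
 
     /* Restore the hidden bit: significand = 1.fraction */
     uint32_t significand = fraction | (1u << 23);
 
     printf("sign=%u exponent=%d significand=0x%06X\n",
            sign, exponent, significand);
     /* prints: sign=0 exponent=2 significand=0xD00000 */
     return 0;
 }

The top bit of the printed significand is the restored hidden one; only the 23 bits below it were actually stored in the number.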