
Normalization is an arithmetic operation used in working with floating point ('real') numbers.

In computers, most implementations of floating point separate the number into an exponent and a mantissa (fractional part). A numeric operation may produce a result with zeros in the most significant bits of the mantissa; further operations on a number of this form may cause unnecessary loss of accuracy, since those bits are not being used to hold useful information.
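For example, with a hypothetical 4-bit mantissa, subtracting two nearly equal values such as 0.1101 × 2^3 and 0.1100 × 2^3 gives 0.0001 × 2^3: three of the four mantissa bits are now leading zeros, and any subsequent rounding discards precision the format could otherwise have kept.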

The usual response is to shift the mantissa to the left, removing the leading zeros, and to adjust the exponent accordingly; this process is 'normalization'.
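A minimal sketch of the operation in C is given below, assuming a toy format in which the value is (mantissa / 2^16) × 2^exponent; the names toy_float and normalize are purely illustrative and are not taken from any particular machine.

 #include <stdint.h>
 #include <stdio.h>
 
 /* Toy floating point format: value = (mantissa / 2^16) * 2^exponent.
    Illustrative only; not the layout of any particular machine. */
 struct toy_float {
     uint16_t mantissa;   /* fraction bits, radix point to the left of bit 15 */
     int      exponent;   /* power-of-two scale factor */
 };
 
 /* Shift the mantissa left until its most significant bit is one,
    decrementing the exponent once per shift so the value is unchanged. */
 void normalize(struct toy_float *f)
 {
     if (f->mantissa == 0)
         return;                       /* zero has no normalized form */
     while ((f->mantissa & 0x8000) == 0) {
         f->mantissa <<= 1;
         f->exponent -= 1;
     }
 }
 
 int main(void)
 {
     struct toy_float f = { 0x0C80, 5 };   /* four wasted leading zeros */
     normalize(&f);
     printf("mantissa = %04X, exponent = %d\n", f.mantissa, f.exponent);
     /* prints: mantissa = C800, exponent = 1 -- the same value, 1.5625 */
     return 0;
 }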

In some implementations, since the most-significant bit would always be a one under this system, that bit is not stored, allowing one more bit of accuracy to be retained; this is sometimes referred to as a 'hidden' bit.
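For instance, the IEEE 754 single-precision format stores only 23 fraction bits, but normalized values have 24 bits of effective precision thanks to the hidden bit. The sketch below unpacks a float and restores the implied leading one (the variable names are only for illustration):

 #include <stdint.h>
 #include <stdio.h>
 #include <string.h>
 
 int main(void)
 {
     float x = 6.5f;                      /* 1.101 binary x 2^2 */
     uint32_t bits;
     memcpy(&bits, &x, sizeof bits);      /* reinterpret the float's bit pattern */
 
     uint32_t sign     = bits >> 31;
     uint32_t exponent = (bits >> 23) & 0xFF;   /* biased by 127 */
     uint32_t fraction = bits & 0x7FFFFF;       /* the 23 stored bits */
 
     /* Restore the hidden bit to get the full 24-bit significand. */
     uint32_t significand = (1u << 23) | fraction;
 
     printf("sign=%u exponent=%d significand=0x%06X\n",
            (unsigned)sign, (int)exponent - 127, (unsigned)significand);
     /* prints: sign=0 exponent=2 significand=0xD00000 */
     return 0;
 }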