Normalization

'''Normalization''' is an arithmetic operation used in working with [[floating point]] ('real') numbers.
  
In computers, most implementations of floating point separate the numbers into an exponent and a mantissa (fractional part). A numeric operation may result in the creation of a number with zeros in the most significant bits of the mantissa; further operations with a number of this form may result in unnecessary loss of accuracy (since those bits are not being used to hold useful information).
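
For example (with a hypothetical 8-bit mantissa, chosen purely for illustration), subtracting two nearly equal numbers can leave most of the mantissa empty:

   0.11010110 x 2^3
 - 0.11010010 x 2^3
 -------------------
   0.00000100 x 2^3

The leading zeros of the result carry no information, so most of the mantissa is being wasted.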
  
The usual response is to shift the mantissa to the left, removing the zeros, and adjusting the exponent accordingly; this process is 'normalization'.
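
A minimal sketch of this step in C, assuming a simplified software floating-point format in which the mantissa is an explicit fraction held in a 32-bit word (real hardware and libraries differ in detail):

 #include <stdint.h>
 
 /* Simplified software float: value = mantissa * 2^(exponent - 32),
    i.e. the binary point sits just to the left of the top bit.      */
 struct soft_float {
     uint32_t mantissa;   /* fraction bits; bit 31 has weight 1/2 */
     int      exponent;
 };
 
 /* Shift the mantissa left until its most significant bit is set,
    decrementing the exponent once per shift.  A zero mantissa is
    left alone, since it has no leading one to bring to the top.   */
 void normalize(struct soft_float *f)
 {
     if (f->mantissa == 0)
         return;
     while ((f->mantissa & 0x80000000u) == 0) {
         f->mantissa <<= 1;
         f->exponent -= 1;
     }
 }

A real implementation would also have to deal with rounding and exponent underflow, which are omitted here.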
  
In some implementations, since the most-significant bit would ''always'' be a one under this system, that bit is not stored, allowing one more bit of accuracy to be retained; this is sometimes referred to as a 'hidden' bit.
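
IEEE 754 single precision is a familiar example of this: only a 23-bit fraction is stored, and the leading one is re-attached when the value is decoded. A small C sketch (assuming ''float'' is IEEE 754 single precision on the host):

 #include <math.h>
 #include <stdio.h>
 #include <stdint.h>
 #include <string.h>
 
 int main(void)
 {
     float f = 6.5f;                    /* 1.101 binary x 2^2          */
     uint32_t bits;
     memcpy(&bits, &f, sizeof bits);    /* reinterpret the bit pattern */
 
     uint32_t fraction = bits & 0x7FFFFFu;                  /* stored 23 fraction bits  */
     int      exponent = (int)((bits >> 23) & 0xFFu) - 127; /* remove the exponent bias */
 
     /* The leading one is not stored; put it back to get the full mantissa. */
     uint32_t mantissa = (1u << 23) | fraction;
 
     printf("fraction = 0x%06X, exponent = %d\n", (unsigned)fraction, exponent);
     printf("value    = %g\n", ldexp((double)mantissa, exponent - 23));
     return 0;
 }

This is why single precision is usually described as having 24 bits of precision, even though only 23 fraction bits are actually stored.
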
[[Category: Theory]]
