Core memory

From Computer History Wiki
'''Core memory''' is a now-obsolete, but once ubiquitous, form of [[main memory]] which uses tiny rings of magnetic material ('cores', hence the name) to store data; magnetized in one direction, such a core represents a '1' [[bit]], and in the other, a '0'.
  
At the very start, it was somewhat expensive to fabricate, since early core memory was assembled by hand, but various companies (principally [[International Business Machines|IBM]]) produced machinery to mass-produce core memory, radically lowering the price.
  
==Importance==
  
Core memory was one of a handful of the most important breakthroughs in the development of computers.
  
Prior to its invention, all forms of main memory had been physically bulky, unreliable, clumsy, and expensive (per bit): [[cathode ray tube|CRT]] storage [[vacuum tube|tubes]], [[acoustic delay line]]s, etc.; it was difficult to make large memories out of them, and most had additional issues (e.g. delay lines and [[drum]]s could have slow [[access time]]s, since they were not [[random access]]).
Core memory solved all these problems. It was reliable, relatively cheap, fast, random access, and compact (by the standards of the day) - everything that one could want of a main memory.
It is difficult to convey just how much better core memory was than its predecessor main memory technologies. There's a reason they were all quickly dropped in favour of core - which, looked at from today's [[dynamic RAM|DRAM]]-era perspective, seems quaintly dinosaurian. Individual pieces of hardware one can actually ''see'' with the naked eye, for ''each'' bit?
But that should give some idea of how much worse everything before it was: even with its issues (which eventually allowed DRAM to supplant it), core completely replaced them all, and quickly.
It is no exaggeration to say that core memory made the computer age possible. Before the invention of core memory, there were only a few computers, with extremely small main memories. By the time core finally became obsolete, two decades after its introduction, computers (albeit not [[personal computer]]s) had become ubiquitous. Without core, computers would not have advanced (in use, societal importance, technical depth, etc.) at the speed they did.
It was one of the most consequential steps in the development of computers to what they are today: up there with [[transistor]]s, [[integrated circuit]]s, DRAM and [[microprocessor]]s.
  
 
==Technical details==
 
  
The ability of a ring of magnetic material to store a bit by being magnetized had long been known, from work with them in transformers.
A wire around (or through, as was eventually used) the core could be used to magnetize it in one direction or another, by running a sufficiently large pulse of [[current]] through the wire. The problem was how to create large memories using this basic mechanism.
The answer was the invention of the so-called 'coincident current' concept, in which a two-dimensional [[array]] of cores (a '''plane''') has a wire strung through the cores along each horizontal axis, and also along each vertical axis; thus, each core has two wires through it, one 'X', and one 'Y'.
  
A current pulse of '''1/2''' the required size to switch the magnetization (the so-called 'half-select current') is sent through a selected X axis wire, and also through a selected Y axis wire; only at the one core where ''both'' those X and Y wires pass through it do the two pulses add up to a large enough current to 'flip' the magnetization of that core.
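The selection rule can be sketched in a few lines of purely illustrative Python; the names and current values here are invented for the example, not taken from any real design:

```python
# Toy model of coincident-current selection in one core plane.
FULL_SELECT = 1.0   # current needed to flip a core's magnetization
HALF_SELECT = 0.5   # current driven on one X or one Y wire

def select_currents(rows, cols, x, y):
    """Total current seen by each core when half-select pulses are
    driven on X wire `x` and Y wire `y`."""
    return [[(HALF_SELECT if i == x else 0.0) +
             (HALF_SELECT if j == y else 0.0)
             for j in range(cols)] for i in range(rows)]

currents = select_currents(4, 4, 2, 1)
flipped = [(i, j) for i in range(4) for j in range(4)
           if currents[i][j] >= FULL_SELECT]
# only the core at the crossing of X=2 and Y=1 sees full-select current,
# so flipped == [(2, 1)]
```

Every other core on the selected X or Y wire sees only a half-select pulse, which (by design of the core material) is not enough to disturb it.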
  
A third wire through the core is also needed, the so-called 'sense' wire, which allows the memory circuitry to detect what the prior value stored in a particular core was.
  
Core memory was 'destructive readout', in that to read the value stored in a particular core, it is written to 0; in so doing, a different current will be induced in the sense wire, depending on whether the previous value stored in the core was 0 or 1.
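A toy model of destructive readout (the class and names are invented, purely to illustrate the idea):

```python
# Sketch of destructive readout on a single core plane.
class CorePlane:
    def __init__(self, rows, cols):
        self.bits = [[0] * cols for _ in range(rows)]

    def read(self, x, y):
        """Reading drives the core to 0; the sense wire only sees a
        pulse if the core actually flipped, i.e. it held a 1."""
        sense_pulse = (self.bits[x][y] == 1)
        self.bits[x][y] = 0          # destructive: core is now 0
        return 1 if sense_pulse else 0

plane = CorePlane(4, 4)
plane.bits[1][2] = 1
first = plane.read(1, 2)    # 1: the core flipped, inducing a sense pulse
second = plane.read(1, 2)   # 0: the value was destroyed by the first read
```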
  
The destructive nature of the readout does mean the data has to be written back afterwards, but clever designers soon learned to actually make use of this; for instance, if an instruction wants to increment a particular location in main memory, the location can be read out, with the [[Central Processing Unit|CPU]] telling the memory to hold off on the write-back until the CPU has supplied the new value to write.
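The read-modify-write trick can be sketched as follows (a hypothetical memory model, not any real machine's interface):

```python
# Sketch of a read-pause-write cycle exploiting destructive readout.
class Memory:
    def __init__(self, size):
        self.cells = [0] * size

    def read_destructive(self, addr):
        value = self.cells[addr]
        self.cells[addr] = 0      # readout clears the location
        return value

    def write_back(self, addr, value):
        self.cells[addr] = value

def increment(mem, addr):
    # The CPU uses the mandatory write-back phase to store the new
    # value, instead of merely restoring the old one.
    old = mem.read_destructive(addr)
    mem.write_back(addr, old + 1)
    return old

mem = Memory(8)
mem.cells[3] = 41
increment(mem, 3)   # location 3 now holds 42
```

The point is that the write-back was going to happen anyway, so the increment costs no extra memory cycle.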
  
Since with the coincident current setup, only the selected core will flip to 0, if a plane contains the same bit from all the [[word]]s in the memory (i.e. there are as many planes as there are bits in the machine's word), a single sense wire can be run through ''all'' the cores in a plane, thereby significantly reducing the wiring/circuit complexity.
  
The core array of X*Y cores notionally needs X + Y drivers (in reality, various switching tricks allow use of far less than that many) - but note that adding ''two'' drivers, one X and one Y, allows use of ''four'' times as many cores; like the later DRAM which replaced core, as the memories became larger, they became more efficient in support circuitry.
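The arithmetic is easy to check (illustrative only):

```python
# The square-array economy: driver count grows as X + Y,
# but capacity grows as X * Y.
def drivers_and_cores(x, y):
    return x + y, x * y

small = drivers_and_cores(32, 32)   # (64, 1024): 64 drivers, 1 Kbit
big   = drivers_and_cores(64, 64)   # (128, 4096): 2x drivers, 4x cores
```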
  
 
Finally, to produce Z-bit wide words, one provided Z core arrays in parallel. One additional piece of complexity appears when this is done. Typically, the same X and Y lines are sent in parallel to each core plane. This is fine for reading, when all the bits in a particular word are read; but for writing, when some bits need to be set to 0, and others to be 1, it is problematic.
 
Finally, to produce Z-bit wide words, one provided Z core arrays in parallel. One additional piece of complexity appears when this is done. Typically, the same X and Y lines are sent in parallel to each core plane. This is fine for reading, when all the bits in a particular word are read; but for writing, when some bits need to be set to 0, and others to be 1, it is problematic.
  
The solution is the so-called 'inhibit' line, another line through the core; when re-writing the contents, for bits which are to be left as 0 (remember that reading will set the cores to 0), a 1/2 current is sent through the inhibit line in the reverse direction, which produces an 'anti-pulse' that prevents the core from being written to 1. Like the sense line, a single inhibit line can be run through all the cores in a particular plane.
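A sketch of the write-back using inhibit lines, with one plane per bit of the word (all names invented for the example):

```python
# Sketch of writing a word back after a destructive read: the selected
# core in every plane is driven toward 1, but planes whose bit should
# stay 0 assert their inhibit line, cancelling one half-select pulse.
def write_word(planes, x, y, word_bits):
    for plane, bit in zip(planes, word_bits):
        inhibit = (bit == 0)       # reverse half-current on this plane
        if not inhibit:
            plane[x][y] = 1        # full-select current flips the core
        # inhibited cores stay at 0, as left by the read

planes = [[[0] * 4 for _ in range(4)] for _ in range(3)]  # 3-bit words
write_word(planes, 2, 1, [1, 0, 1])
# bits stored at (2,1): plane0=1, plane1=0, plane2=1
```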
Eventually, core designers realized that the sense line was only used during reading, and the inhibit during writing, so later core memories combined both functions into a single wire, reducing the number of wires needed to run through each core from four to three. With the smaller cores which came into use later on (see below), this was a big help.
==Evolution==
Early cores were fairly large (several millimeters in diameter), and thus required a lot of current, and time, to switch. Over time cores became much smaller, thereby increasing their speed, and decreasing their energy usage; so small that it was almost impossible to see the holes without magnification.
The magnetic characteristics of the material used to make the cores were sensitive to temperature, especially in early cores. Switching cores released energy in the cores, which tended to heat them up, so keeping them in a constant-temperature machine room did not solve the problem.
Early IBM core memory was therefore often submerged in an oil bath, to keep it at the design operating point. The oil was circulated by a pump, and a thermostat routed the oil through a radiator, or heater, depending on the temperature.
[[Digital Equipment Corporation|DEC]] design practise was to include temperature sensors in the core planes, and adjust the driving electronics as needed. The drive circuitry was therefore more complex, but it avoided the considerable engineering associated with the oil system, which was perhaps acceptable in a [[mainframe]], but infeasible for a [[minicomputer]].
==Further reading==
* Emerson W. Pugh, ''Memories That Shaped an Industry: Decisions Leading to IBM System/360'', MIT Press, Cambridge, 1984
==External links==
  
* [http://bitsavers.org/pdf/mit/lincolnLaboratory/MC665/M-SERIES/M-0975_A_COINCIDENT-CURRENT_MAGNETIC_MEMORY_UNIT.pdf A Coincident-Current Magnetic Memory Unit] - a Master's thesis proposal to build the first one!
  
[[Category: Memory Basics]]

Latest revision as of 01:33, 4 March 2024
