Core memory
Revision as of 14:11, 19 June 2018
Core memory is a now-obsolete, but once ubiquitous, form of main memory which uses tiny rings of magnetic material ('cores', hence the name) to store data; magnetized in one direction, such a core represents a '1', and in the other, a '0'.
At the very start, it was somewhat expensive to fabricate, since early core memory was assembled by hand, but various companies (principally IBM) produced machinery to mass-produce core memory, radically lowering the price.
Importance
Core memory was one of the key breakthroughs in the development of computers.
Prior to its invention, all forms of main memory had been physically bulky, unreliable, clumsy, and expensive (per bit): CRT storage tubes, acoustic delay lines, etc.; it was difficult to make large memories out of them, and most had additional issues (e.g. delay lines could have slow access times).
Core memory solved all these problems. It was reliable, relatively cheap, fast, and compact (by the standards of the day) - everything that one could want of a main memory.
It is difficult to convey just how much better core memory was than its predecessor main memory technologies; there is a reason they were all quickly dropped in favour of core. Looked at from today's DRAM-era perspective, core itself seems quaintly dinosaurian - an individual piece of hardware one can actually see with the naked eye, for each bit - but that should give some idea of how much worse everything before it was, that it killed them all off so quickly.
It is no exaggeration to say that core memory made the computer age possible. Before the invention of core memory, there were only a few computers, with extremely small main memories. By the time core finally became obsolete, two decades after its introduction, computers (albeit not personal computers) had become ubiquitous. Without core, computers would not have advanced (in use, societal importance, technical depth, etc.) at the speed they did.
It was one of the most consequential steps in the development of computers to what they are today: up there with transistors, integrated circuits, DRAM and microprocessors.
Technical details
The ability of a ring of magnetic material to store a bit by being magnetized had long been known, from work with them in transformers.
A wire around (or through, as was eventually used) the core could be used to magnetize it in one direction or another, by running a sufficiently large pulse of current through the wire. The problem was how to create large memories using this basic mechanism.
The answer was the invention of the so-called 'coincident core' concept, in which a two-dimensional array of cores (a plane) had a wire strung through the cores along each horizontal, and also along each vertical, axis; thus, each core has two wires through it, one 'X', and one 'Y'.
A current pulse of 1/2 the required size to switch the magnetization (the so-called 'half-select current') was sent through a selected X axis wire, and also through a selected Y axis wire; only at the one core where both those X and Y wires pass through it do the two pulses add up to a large enough current to 'flip' the magnetization of that core.
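The half-select scheme can be sketched as a toy model. This is purely illustrative (real cores are analog magnetic devices, and the names and current values here are invented for the example); it just shows why only the one core at the crossing of the driven X and Y lines receives a full switching current.

```python
# Toy model of half-select addressing in one core plane.
# Values and names are illustrative, not electrical reality.

SWITCH_THRESHOLD = 1.0   # full current needed to flip a core
HALF_SELECT = 0.5        # half-select current on each of X and Y

def drive(plane_size, sel_x, sel_y):
    """Return the current seen by every core when the selected X
    line and the selected Y line each carry a half-select pulse."""
    currents = {}
    for x in range(plane_size):
        for y in range(plane_size):
            i = 0.0
            if x == sel_x:
                i += HALF_SELECT   # this core lies on the driven X line
            if y == sel_y:
                i += HALF_SELECT   # this core lies on the driven Y line
            currents[(x, y)] = i
    return currents

currents = drive(4, sel_x=1, sel_y=2)
flipped = [core for core, i in currents.items() if i >= SWITCH_THRESHOLD]
print(flipped)  # only the core at (1, 2) sees a full switching current
```

Cores lying on just one of the two driven lines see only half the switching current, which (by design of the core material) is not enough to disturb them.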
A third wire through the core is also needed, the so-called 'sense' wire, which allows the memory circuitry to detect what the prior value stored in a particular core was.
Core memory was 'destructive readout', in that to read the value stored in a particular core, it is written to 0; in so doing, a different current will be induced in the sense wire, depending on whether the previous value stored in the core was 0 or 1.
The destructive nature of the readout does mean the data has to be written back afterwards, but clever designers soon learned to actually make use of this; for instance, if an instruction wanted to increment a particular location in main memory, the location can be read out, with the CPU telling the memory to wait before the write-back, so the CPU can tell it what value to write back.
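The read-modify-write trick above can be sketched as follows. The helper names are hypothetical, and a word of core is modelled as a plain integer; the point is only the sequencing: the read clears the location, and the CPU substitutes a new value in the write-back phase instead of restoring the old one.

```python
# Sketch of destructive readout with write-back. Hypothetical
# helper names; 'plane' here stands in for a whole core memory.

def read_word(memory, addr):
    """Destructive read: the cores are forced to 0, and the old
    value is recovered from the sense-line pulses."""
    old = memory[addr]
    memory[addr] = 0         # readout leaves the location cleared
    return old

def write_word(memory, addr, value):
    memory[addr] = value     # the write-back phase

def increment(memory, addr):
    # Read-modify-write: the CPU holds off the write-back and
    # supplies the incremented value instead of the original.
    value = read_word(memory, addr)
    write_word(memory, addr, value + 1)

memory = [0, 41, 7]
increment(memory, 1)
print(memory[1])  # 42
```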
Since with the coincident core setup, only the selected core will flip to 0, if a plane contains the same bit from all the words in the memory (i.e. there are as many planes as there are bits in the machine's word), a single sense wire can be run through all the cores in a plane, thereby significantly reducing the wiring/circuit complexity.
The core array of X*Y cores notionally needs X + Y drivers (in reality, various switching tricks allow use of far fewer than that many) - but note that doubling the number of drivers, both X and Y, allows use of four times as many cores; like the later DRAM which replaced core, as the memories became larger, they became more efficient in support circuitry.
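The driver economics are worth making concrete. For a square n x n plane (sizes chosen arbitrarily for the example), the core count grows as the square of the line count, so the cost of drive circuitry per bit falls as planes get bigger:

```python
# Driver economics of a square coincident-current plane: an n x n
# plane stores n*n cores but needs only n + n line drivers.
for n in (16, 32, 64):
    cores = n * n
    drivers = 2 * n
    print(f"{n}x{n}: {cores} cores, {drivers} drivers, "
          f"{cores / drivers:.0f} cores per driver")
# doubling the drivers (2n -> 4n) quadruples the cores (n^2 -> 4n^2)
```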
Finally, to produce Z-bit wide words, one provided Z core arrays in parallel. One additional piece of complexity appears when this is done. Typically, the same X and Y lines are sent in parallel to each core plane. This is fine for reading, when all the bits in a particular word are read; but for writing, when some bits need to be set to 0, and others to be 1, it is problematic.
The solution is the so-called 'inhibit' line, another line through the core; for bits which are to be left as 0 (remember that reading will set the cores to 0) when re-writing the contents, a 1/2 current will be sent through the inhibit line in the reverse direction, which produces an 'anti-pulse' which prevents the core from being written to 1. Like the sense line, a single inhibit line can be run through all the cores in a particular plane.
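The inhibit mechanism can be sketched in the same toy-current terms as before (again, the threshold and current values are invented for illustration): the X and Y half-select pulses add up to a full write current, and the opposing inhibit pulse cancels one of them when the bit should stay 0.

```python
# Toy write-back of one bit position: after the read leaves the
# core at 0, the X/Y half-select pulses try to flip it to 1; a
# half-current inhibit pulse in the opposite direction cancels one
# half-select, so the core stays at 0. Values are illustrative.

HALF = 0.5
THRESHOLD = 1.0

def write_back_bit(desired_bit):
    """Resulting value of the selected core during write-back."""
    current = HALF + HALF            # X and Y half-select pulses
    if desired_bit == 0:
        current -= HALF              # inhibit pulse opposes them
    return 1 if current >= THRESHOLD else 0

print(write_back_bit(1))  # 1: full current, the core flips to 1
print(write_back_bit(0))  # 0: inhibit keeps the core at 0
```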
Eventually, core designers realized that the sense line was only used during reading, and the inhibit during writing, so later core memories combined both functions into a single wire, reducing the number of wires needed to run through each core from four to three. With the smaller cores which came into use later on (see below), this was a big help.
Evolution
Early cores were fairly large (several millimeters in diameter), and thus required a lot of current, and time, to switch. Over time cores became much smaller, thereby increasing their speed, and decreasing their energy usage; so small that it was almost impossible to see the holes without magnification.
The magnetic characteristics of the material used to make the cores were sensitive to temperature, especially in early cores. Switching cores released energy in the cores, which tended to heat them up, so keeping them in a constant-temperature machine room did not solve the problem.
Early IBM core memory was therefore often submerged in an oil bath, to keep it at the design operating point. The oil was circulated by a pump, and a thermostat routed the oil through a radiator, or heater, depending on the temperature.
DEC design practice was to include temperature sensors in the core planes, and adjust the driving electronics as needed. The drive circuitry was therefore more complex, but it avoided the considerable engineering associated with the oil system: acceptable perhaps in a mainframe, but infeasible for a minicomputer.