Talk:PDP-8

Clock speed

From the "PDP-8 Maintenance Manual" (F-87, February 1966), "Major States", "Time States" (pg. 1-9, 1-10):

The computer enters one major state during each 1.5 usec computer cycle
Two major time states, .. T1 and T2, occur during each computer cycle (or major state)

and "Clock and TG Flip-Flop" (pg 3-4):

produces standard positive 100-nsec pulses at a repetition rate of 1,333,333 pps. Each positive pulse complements the TG flip-flop

This is a bit confusing, but it seems that the basic clock is 1.333 MHz, with a pair of those ticks (T1 and T2) occurring during each major state; that agrees with the timing given for major states, which works out to roughly 0.667 MHz (the 1.5 usec cycle).
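
For reference, here is a quick arithmetic check using only the figures quoted above; a sketch in Python, with nothing PDP-8-specific beyond those numbers:

  # Clock pulses per second, from the maintenance manual quote above
  pulse_rate = 1_333_333
  # Two time states (T1 and T2) per major state
  ticks_per_major_state = 2

  major_state_rate = pulse_rate / ticks_per_major_state   # about 0.667 MHz
  major_state_time_usec = 1e6 / major_state_rate          # about 1.5 usec

  print(f"major state rate: {major_state_rate:,.0f}/s")
  print(f"major state time: {major_state_time_usec:.2f} usec")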

Given that, and since (as before) "clock speed" should really refer to "CPU cycle time", I'd say we should use the 'major state time' as the "cycle time".

And maybe we should change the caption in the info-box to say "cycle time", to better inform readers of what the field is? Jnc (talk) 17:35, 20 August 2019 (CEST)

Oh, the major state types which may be traversed during an instruction are:

  • Fetch - this may be the only one for simple instructions
  • Defer - during indirect addressing
  • Execute - all memory reference instructions except JMP
  • Word Count - 1st cycle of a three-cycle data break
  • Current Address - 2nd cycle
  • Break - 3rd cycle; or a single-cycle data break

So really most instructions will take one to three major states. Jnc (talk) 18:32, 20 August 2019 (CEST)
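
To put numbers on that, here is a sketch using the 1.5 usec major state time from above; the particular instruction groupings are just illustrative examples, not an exhaustive timing table:

  MAJOR_STATE_USEC = 1.5   # from the maintenance manual figures above

  # Example paths through the major states (illustrative only)
  examples = {
      "Fetch only (simple instruction)": 1,
      "Fetch + Execute (direct memory reference)": 2,
      "Fetch + Defer + Execute (indirect memory reference)": 3,
  }
  for path, n_states in examples.items():
      print(f"{path}: {n_states * MAJOR_STATE_USEC:.1f} usec")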

OK, I've changed the template; will do the article 'soon' (to give people a chance to comment). Jnc (talk) 14:25, 22 August 2019 (CEST)
Not sure I follow the reasoning behind all this. The clock speed is simply the frequency of the clock, which I'm fairly convinced we've deduced to be 1.33 MHz. It is "externally visible", and it is the basis for such measures as "cycles per instruction". Why are we getting rid of it? --Nczempin (talk) 18:19, 23 August 2019 (CEST)
I gave the reasoning in Template talk:Infobox Machine; I guess you didn't have a chance to look?
The problem is that 'clock speed' doesn't really give a good sense of how fast a machine is (it all depends on how many clock ticks per instruction, which differs from design to design), whereas 'cycle time' is further up the architectural ladder, and so more likely to be something one can compare across machines to give a relative sense (although admittedly it has issues of its own).
And the low-level internal clock is not necessarily "externally visible" - e.g. the multi-phase clocks in the LSI-11 chip set, the KB11, etc. are completely invisible even at the instruction level.
But we could easily have both 'clock speed' and 'cycle time', if you see some utility to having the low-level clock speed documented. I wouldn't want to have just clock speed, though. Jnc (talk) 19:40, 23 August 2019 (CEST)
The -8 and -8/S are a perfect example of how misleading 'clock speed' is, in terms of machine performance! They both have a 1.3333 MHz clock, but... :-) Jnc (talk) 23:46, 23 August 2019 (CEST)
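To illustrate the general point with made-up numbers (these are not measured -8 or -8/S figures, just a sketch of why clocks-per-instruction matters):

  # Two hypothetical machines sharing the same clock frequency
  clock_hz = 1_333_333

  # Assumed average clock ticks per instruction -- purely illustrative values
  ticks_per_instruction = {
      "machine A (parallel data path)": 2,
      "machine B (bit-serial data path)": 24,
  }
  for name, ticks in ticks_per_instruction.items():
      ips = clock_hz / ticks
      print(f"{name}: ~{ips:,.0f} instructions/second")
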
I don't feel strongly either way about "cycle time", but for me clock frequency is a starting point that seems ubiquitous. Of course clock frequency does not directly equal performance. I don't think I understand the [need for a] distinction between "cycle time" and "memory speed". --Nczempin (talk) 11:09, 26 August 2019 (CEST)
The relationship between the two can vary a lot in systems with caches, depending on the hit rate (i.e. application), of course. Also, for systems which have asynchronous memory (many older machines), or variable-speed memory (like later Pentium systems, where the CPU clock and memory bus speed are independent), the two are not inextricably linked. Jnc (talk) 17:33, 26 August 2019 (CEST)
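
The cache part of that is often quantified as a hit-rate-weighted average; a sketch with made-up access times, just to show how much the effective figure moves with the hit rate:

  def effective_access_time(hit_rate, cache_ns, memory_ns):
      """Average access time = hit_rate*cache + (1 - hit_rate)*memory."""
      return hit_rate * cache_ns + (1 - hit_rate) * memory_ns

  # Same cache and memory speeds, different (application-dependent) hit rates
  for hit_rate in (0.99, 0.90, 0.70):
      t = effective_access_time(hit_rate, cache_ns=10, memory_ns=100)
      print(f"hit rate {hit_rate:.0%}: {t:.1f} ns average")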