adrian_b 13 hours ago

As noted by Ken, not only are the "special registers" weird in that dictionary definition, but so is the mention of "main storage".

The term CPU was introduced by IBM in 1954, in the manual of operation of the IBM 704, where it was defined thus: “The central processing unit accomplishes all arithmetic and control functions”.

This definition was clearly meant to say that the CPU is the aggregate of two of the five computer parts listed by John von Neumann, i.e. the CPU is the "central arithmetical part" plus the "central control part" from von Neumann's list. The other three parts were the main memory, the input peripherals and the output peripherals.

The IBM definition has remained valid to this day, even if the package that contains the CPU now also contains cache memories and possibly even part or all of the DRAM memory, as in Apple CPUs or Intel Lunar Lake.

Because the CPU is the most important part of a CPU package, the whole package is referred to as the CPU ("pars prō tōtō"), even when the package also contains some part of the memory.

I assume that this same way of using the term "CPU" has caused the confusion in the dictionary entry.

In the IBM 704, the CPU filled an entire cabinet and the main memory was in a separate cabinet. In later computers, by the time of the Honeywell 800, it became possible to put both the CPU and the main memory in a single cabinet, which was referred to as the CPU cabinet, since the CPU was its most important part.

Whoever copied that CPU definition, presumably from a Honeywell 800 manual, confused the description of the contents of the "CPU cabinet" of that particular computer with the definition of the term "CPU" as the name of a block used in the description of computer architectures.

Liftyee a day ago

Interesting detail I didn't know about, and different to my initial interpretation too.

It doesn't seem too unreasonable to call the program counter, memory registers, etc. "special registers" since they have specific functions beyond simply storing data. Therefore I could imagine calling the aggregate of such registers in a CPU a "special register group". Perhaps those still using the term are following this line of thought.

  • kazinator 21 hours ago

    The definition doesn't mention any kinds of registers other than special register groups: it asserts either that there exist groups of nothing but special registers, or else that there exist nothing but special groups of registers.

    "special" is meaningless without a the presence of contrasting subjects that are "ordinary" or "regular".

    E.g, "The human hands have two special finger groups".

  • tialaramex a day ago

    In that they're not the GPRs (General Purpose Registers) this kinda sorta makes sense. Mostly the problem is that this terminology isn't actually used this way. Notice how the CPU (which we're thinking of as one small component of e.g. a laptop or phone, but the originators were imagining a furniture sized piece of equipment which is responsible for the actual computing) is supposed to be comprised of these particular elements but not GPRs, not an FPU, or an MMU, or any other elements which are in fact typical today.

    So this sort of "quiz" is bogus because it's basically checking whether you've rote memorized some particular words. People who know how a CPU works are confused and likely don't pass, those who've memorized the words but lack all understanding pass. This is not a useful education.

    • sidewndr46 a day ago

      This was my thought as well. Any sufficiently complex modern CPU contains some register in which it expects some bit to be set to enable something like a power-saving mode, with an interrupt mask in the lower bits to turn it off. Or something equally esoteric, but purposeful once you consider the application.

      My recollection is that even the Atmega 8-bit microcontrollers have tons of special register groups around timers and interrupts.

      Given the relative scarcity of "general purpose" registers on x86 32-bit CPUs, you could actually argue those are the special purpose registers.

      • duskwuff a day ago

        > My recollection is that even the Atmega 8-bit microcontrollers have tons of special register groups around timers and interrupts.

        Not precisely. AVR, like most embedded architectures, has a bunch of I/O registers which control CPU and peripheral behavior - but those are distinct from the CPU's general-purpose registers. The I/O registers exist in the same address space as main memory, and can only be accessed as memory. You can write an instruction like "increment general-purpose register R3", but you can't use that same syntax for e.g. "increment the UART baud rate register"; you have to load a value from that register to a GPR, increment it there, and store it back to the I/O register.
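
        Roughly, in avr-gcc C terms (a sketch; UBRR0L is used here as a typical avr-libc name for a UART baud-rate register on common ATmega parts, but treat the specific register as illustrative):

          #include <avr/io.h>   /* memory-mapped peripheral register definitions */
          #include <stdint.h>

          uint8_t bump(uint8_t count)
          {
              count++;              /* kept in a GPR: a single register instruction */

              /* No "increment the baud-rate register" instruction exists:
                 the compiler loads the I/O register into a GPR, increments
                 it there, and stores it back (IN/LDS, INC, OUT/STS). */
              UBRR0L = UBRR0L + 1;

              return count;
          }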

        AVR is a bit weird in that the CPU's general-purpose registers are also mapped to main memory and can be accessed as if they were memory - but that functionality is rarely used in practice.

        Getting back to your original point, x86 does have special-purpose registers - lots of them, in fact. They're accessed using the privileged rdmsr/wrmsr instructions.
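
        For example, a kernel-mode sketch of reading one (the MSR index is just an example; user space can't execute rdmsr directly and normally goes through something like Linux's /dev/cpu/N/msr interface instead):

          #include <stdint.h>

          /* rdmsr/wrmsr are privileged, so this only works in ring 0
             (a kernel module or similar). */
          static inline uint64_t read_msr(uint32_t msr)
          {
              uint32_t lo, hi;
              __asm__ volatile ("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
              return ((uint64_t)hi << 32) | lo;
          }

          /* Example: IA32_APIC_BASE lives at MSR index 0x1B on Intel CPUs. */
          uint64_t apic_base(void) { return read_msr(0x1B); }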

userbinator 19 hours ago

> The Honeywell 800 allowed eight programs to run on a single processor, switching between programs after every instruction[...]its own separate group of 32 registers (program counter, general-purpose registers, index registers, etc.)

I'm surprised that the analogy to hyperthreading wasn't made here.

  • adrian_b 14 hours ago

    Hyperthreading is a word used by Intel instead of the standard term SMT (Simultaneous Multithreading), in order to be able to register it as a trademark.

    The first CPUs with SMT appeared only in the nineties.

    That Honeywell CPU was the first commercial CPU with FGMT (Fine-Grained Multithreading).

    SMT can exist only in a CPU able to decode and execute multiple instructions simultaneously, which is why it appeared much later.

    While FGMT went out of fashion for CPUs, many GPUs have used FGMT and many still do, because it can provide better performance per die area than a more complex superscalar/SMT architecture.

    • fuzzfactor 6 hours ago

      Perkin-Elmer turned out to offer the first "desktop" (benchtop) multi-user multi-tasking (threaded) data system, designed in the 1970's and finally released in 1979:

      https://www.bidspotter.com/en-gb/auction-catalogues/quaker-c...

      I would estimate it was fine-grained.

      No microprocessor in the CPU, just a PCB a foot square covered in discrete logic chips.

      The memory PCB was the same size, connected by short ribbon cables to the CPU, bypassing the backplane. Fully upgraded, it had 16K of 32-bit memory.

      Each user could run more than one Basic program at a time, and the same Basic code in memory could be run by more than one user, or even more than once simultaneously by the same user. This was simultaneous with the collection of live data from more than one chemical analyzer; the firmware could handle most of the mainstream chemical needs without the Basic option. User programming was for the exotic stuff that actually requires custom code.

      But that made it completely programmable using only the documentation supplied. It was straightforward to accomplish things which the PC software that eventually came along has never been able to do without settling for additional commercial offerings that are much more user-friendly (mouse!) but not as appropriate as my own code was.

  • fulafel 14 hours ago

    It's more like a barrel processor than SMT. Actually, the Honeywell 800 is mentioned as an example in https://en.wikipedia.org/wiki/Barrel_processor

    • adrian_b 14 hours ago

      FGMT as implemented in that Honeywell CPU is a step above a barrel processor, like the one used by the later CDC 6600 in its I/O CPU, because it can switch to any of the other threads, not only in round-robin order as in a barrel CPU.

      This avoids wasting clock cycles in threads that are waiting for some event.
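
      To make the difference concrete, here is a toy C sketch of the two thread-selection policies (purely illustrative, not the Honeywell logic):

        #include <stdbool.h>

        #define NTHREADS 8

        /* One register set per hardware thread, plus a ready flag that
           clears while the thread waits for an event (e.g. I/O). */
        struct hw_thread { unsigned pc; bool ready; };

        /* Barrel processor: strict round-robin, so a waiting thread
           still consumes its slot and the cycle is wasted. */
        int next_barrel(int current)
        {
            return (current + 1) % NTHREADS;
        }

        /* FGMT as described above: take the next thread that is
           actually ready, so stalled threads give up their cycles. */
        int next_fgmt(int current, const struct hw_thread t[NTHREADS])
        {
            for (int i = 1; i <= NTHREADS; i++) {
                int candidate = (current + i) % NTHREADS;
                if (t[candidate].ready)
                    return candidate;
            }
            return current;   /* nothing ready: stay put */
        }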

kens a day ago

Author here if anyone has questions. This was discussed on HN a while ago: https://news.ycombinator.com/item?id=21333245

  • vlovich123 a day ago

    Did register groups evolve into what would become register files, register windows, and register banks?

    • kens a day ago

      The Honeywell's "special register groups" were rather unusual, since the processor switched tasks and registers after every instruction. (This is more commonly known as a barrel processor.) I don't see an evolutionary path to register files, windows, banks, etc, although the concept is a bit similar. Rather, I think the concept of having separate sets of registers developed independently several times.

      • Animats a day ago

        That hardware multiprogramming concept showed up a few times. The CDC 6600 pretended to have 10 peripheral processors for I/O using that approach. National Semiconductor (I think) once made a part which had two BASIC interpreters on one CPU with two sets of registers.

        TI had some IC where the registers were in main memory, with a hardware register to point to them. Context switching didn't require a register save, just pointing the register pointer somewhere else. There were stack machines such as the B5500 where the top items on the stack were in hardware registers but the rest of the stack was in memory. Floating point in the early years of floating point co-processor chips had lots of rather special registers. Getting data in and out of them was sometimes quite slow, so it was desirable to get data into the math co-processor and work on it there without bringing it back into memory if at all possible.

        Lots of variations on this theme. There's a lot going on with special-purpose registers today inside superscalar processors, but the programmer can't see most of it.
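
        The register-pointer trick mentioned above (likely the TI TMS9900's workspace pointer) looks conceptually like this; a sketch, not the actual programming model:

          #include <stdint.h>

          /* Each task's "registers" are just a block of main memory;
             the CPU itself holds only a pointer to the active block. */
          struct workspace { uint16_t r[16]; };

          static struct workspace task_ws[8];   /* one workspace per task */
          static struct workspace *active_ws;   /* the hardware register pointer */

          /* Context switch: no register save/restore loop, just repoint. */
          void switch_to(int task)
          {
              active_ws = &task_ws[task];
          }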

      • chihuahua 17 hours ago

        Tera/Cray used to make the MTA (multi-threaded architecture) that had 128 register sets so it could run 128 threads in a round-robin fashion. The purpose was to cover the latency of main-memory access.

bbanyc 17 hours ago

For all the complaints about our corpus being contaminated by AI slop, there's been plenty of human-generated slop being regurgitated over the years as well. Lazy textbook writers copy and paste from older books, lazy test makers quiz on arbitrary phrases from the textbooks, nobody ever does any fact checking to see if any of it makes sense.

My favorite example is the "tongue map" - decades of schoolchildren have been taught that different parts of the tongue are responsible for tasting sweetness and saltiness and so on. This is contrary to everyone's experience, and it turns out to have been a mistranslation of some random foreign journal article on taste buds, but it's stuck around in the primary school curriculum because it's easy to make it into a "fill in the map" activity. As long as the kids can regurgitate what they're told, who cares if anything they're learning is true?

dlcarrier a day ago

I love weird jargon. There's plenty of it in the terms we normally use: for example, despite ROM being a type of RAM, everyone knows that when you say RAM you are excluding ROM.

One of my favorites is BLDC, which stands for BrushLess Direct Current and is used to describe motors that are driven by an alternating current at a variable frequency. It is not to be confused with VFD, which in this case stands for Variable Frequency Drive and describes how a BLDC actually functions. The two are on rare occasions used interchangeably, but VFD is most often used for analog or open-loop controllers, while BLDC is used for VFDs that are digitally controlled. BLDCs are Brushless, not Brush Less, and AC, not DC. I like to think the acronym stands for Bitwise Logic Driven Current, which follows the acronym, correctly describes how it works, and differentiates it from non-BLDC VFDs. Also, while VFD can mean Variable Frequency Drive, it is also used to mean Vacuum Fluorescent Display, and for Lemony Snicket fans it can mean a whole lot more. (https://snicket.fandom.com/wiki/V.F.D._%28disambiguation%29)

Uncommon jargon is even better, especially when it comes out of marketing departments and describes something ordinary. I have a rice cooker that has terms like "micom" and "fuzzy logic" on the labels. They describe microcontrollers and variables, respectively, which are found in all but the simplest electromechanical appliances.

It's normal for companies to trademark their names for common technology, like Nvidia and AMD trademarking their implementations of variable framerate as G-Sync and FreeSync, respectively, and some companies are more aggressive with their trademarks than others, such as Intel trademarking their symmetric multithreading as Hyper-Threading, while AMD just calls it symmetric multithreading.

Cessna used to have a bunch of random trademarked jargon in their ads in the 70's, from trademarking flaps to back seats.

  • kens a day ago

    Your comment reminds me of a couple things. On the topic of RAM, I was recently researching the vintage IBM 650 computer (1953). You could get it with Random Access Memory (RAM), but this RAM was actually a hard disk. I guess IBM considered the disk to be more random access than the rotating drum that provided the 650's main storage. One strange thing is that the disk had three independent read/write arms. Unlike modern disks, the arm could only access one platter at a time and had to physically retract to move to a different platter. Having three independent arms let you overlap reads and seeks.

    As far as "fuzzy logic", there's a whole lot of history behind how that ended up in your rice cooker. In the 1990s, fuzzy logic was a big academic research area that was supposed to revolutionize control systems. The basic idea was to extend Boolean logic to support in-between values, and there was a lot of mathematics behind this, with books, journals, conferences, and everything. Fuzzy logic didn't live up to the hype, and mostly disappeared. But fuzzy logic did end up being used in rice cookers, adjusting the temperature based on how cooked the rice was, instead of just being on or off.

    [1] https://bitsavers.org/pdf/ibm/650/22-6270-1_RAM.pdf

    [2] https://en.wikipedia.org/wiki/Fuzzy_logic
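
    For a flavor of how that works, here is a toy fuzzy controller in the rice-cooker spirit; the membership functions and numbers are all made up for illustration:

      #include <stdio.h>

      /* Triangular membership: 1 at the center, falling to 0 at the edges. */
      static double membership(double x, double lo, double center, double hi)
      {
          if (x <= lo || x >= hi) return 0.0;
          return x < center ? (x - lo) / (center - lo) : (hi - x) / (hi - center);
      }

      /* Map temperature (deg C) to heater power in [0, 1]: instead of a
         hard on/off threshold, blend the rule outputs by their degrees
         of truth (weighted-average defuzzification). */
      static double heater_power(double temp)
      {
          double too_cool   = membership(temp,  60.0,  80.0, 100.0);
          double just_right = membership(temp,  90.0, 100.0, 110.0);
          double too_hot    = membership(temp, 100.0, 120.0, 140.0);

          double num = too_cool * 1.0 + just_right * 0.2 + too_hot * 0.0;
          double den = too_cool + just_right + too_hot;
          return den > 0.0 ? num / den : 0.0;
      }

      int main(void)
      {
          for (double t = 70.0; t <= 130.0; t += 10.0)
              printf("%.0f C -> heater power %.2f\n", t, heater_power(t));
          return 0;
      }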

    • Animats a day ago

      > On the topic of RAM, I was recently researching the vintage IBM 650 computer (1953) ... but this RAM was actually a hard disk.

      Actually it was a drum. No moving arms at all, but some tracks had multiple read/write heads to reduce access time.

      Knuth's high school science fair project was to write an angular optimizing assembler for the IBM 650, called SOAP III.

      • kens a day ago

        Actually, no :-) The 650 had a drum for main storage. The "355 Random Access Memory" was something different, an optional hard drive peripheral, which IBM literally called "RAM" (see the link above). This became the RAMAC system.

        • Animats a day ago

          Right, you could add a RAMAC disk, but that was a very expensive option. Many universities had IBM 650s, but usually without the RAMAC.

      • janzer a day ago

        The 650 main memory was a drum; but what IBM called Random Access Memory (and RAM) for this machine was a hard drive. As described in the Manual of Operation linked above. Here are a few quotes:

        "Records in the IBM Random Access Memory Unit are stored on the faces of magnetic disks."

        "The stored data in the Random Access Memory Unit are read and written by access arms."

        "The IBM 355 RAM units provide extemely large storage capacity for data... Up to four RAM units can be attached to the 650 to provide 24,000,000 digits of RAM storage."

        The main memory, on the other hand: "The 20,000 digits of storage, arranged as 2000 words of memory on the magnetic drum..."

    • dlcarrier 20 hours ago

      The rice cooker audibly turns the heating element completely on or off with a relay or thermostat or similar. It does have some kind of PWM, but so does a purely electromechanical rice cooker. There are different settings for brown and white rice, as well as different keep-warm levels, so I presume it can electronically modify the set point for the temperature.

      I had looked up the history of fuzzy logic when I originally saw the term. Everything I came across was from the mid 60's, during the minicomputer era (e.g. the DEC PDP and IBM System/3). I just did a Google ngram search for it, and it does look like it peaked in the mid 90's. (https://books.google.com/ngrams/graph?content=fuzzy+logic&ye...) It was especially redundant then, because by that point 32-bit floating-point processors were already extremely common. It made sense during the 60's to have two-bit values in ladder logic control systems, because minicomputers were prohibitively expensive, but by the 90's control systems were already using personal computer components to emulate old relay logic, while newer control systems were being written in languages that are still common today.

      You just sent me down a rabbit hole of mid-90's buzzwords and jargon, and I found something especially great: neuro-fuzzy logic (https://link.springer.com/article/10.1007/s44196-024-00709-z)

      There's also quantum logic (https://en.wikipedia.org/wiki/Quantum_logic) and someone wrote a paper relating it with fuzzy logic: (https://link.springer.com/chapter/10.1007/978-3-540-93802-6_...)

      Once academia gets involved, you get fractal (https://xkcd.com/1095/) sub-niches of sub-niches, and it goes further off the rails than marketing jargon.

      I swear there's a computing buzzword cycle of embedded systems (e.g. fuzzy logic, smart, IoT, etc.), artificial intelligence (e.g. genetic algorithms, machine vision, neural networks, etc.), and quantum (just quantum; it's never been practical enough to have sub-niches). Right now we're on the trailing end of AI and working our way to quantum. There's a tech company called C3 that has changed its name from the original C3 Energy to C3 IoT and now C3 AI. When they change it to C3 Quantum, we'll know we're in the next phase of the buzzword cycle.

  • Animats a day ago

    Right. For a few years, 3-phase motors used as controlled servos were called "brushless DC" below about 1 HP, and "variable frequency drive" above 1 HP. Now everybody admits they're really all 3-phase synchronous motors.

    • contingencies 21 hours ago

      Can of worms. My understanding is that VFD refers to any control system capable of varying speed and torque, usually through varying the supply frequency and voltage to the coils of an asynchronous AC induction motor. However, it is important to note that a VFD can also control synchronous motors such as BLDC and Permanent Magnet Synchronous Motors (PMSM), although in practice the term is usually applied to control systems for high power industrial AC asynchronous induction motors. It would therefore be incorrect to state "they're really all 3-phase synchronous motors", although some VFD control systems could be seen to emulate synchronous motors with asynchronous motors.

      • dlcarrier 20 hours ago

        I think it really boils down to being jargon from two different groups, so which term gets used isn't a matter of how the device works as much as it is what the person talking about it does for a living.

        • Animats 14 hours ago

          Pretty much.

          At the windings, all motors are driven by some alternating waveform. A classic "DC motor" has a mechanical commutator which turns DC into square wave AC, with the phase leading the motor so the motor turns. Classic AC motors are driven from sinusoidal waveforms. There's a whole theory of DC motors, and an elegant theory of AC motors that goes back to Tesla. Here's the motor family tree.[1]

          Then came power MOSFETs. Today you can make pretty much whatever waveforms you want. It took a while for motor designers to learn how to exploit that properly, and for MOSFETs to get small, cheap, and heat-tolerant. Then drone and electric vehicle motors got really good, at the cost of needing a CPU to manage the motor.

          [1] https://www.allaboutcircuits.com/textbook/alternating-curren...
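
          As a rough sketch of the kind of thing that motor-managing CPU computes (illustrative numbers only; real drives add field-oriented control, dead time, and current feedback):

            #include <math.h>
            #include <stdio.h>

            int main(void)
            {
                const double pi = 3.14159265358979323846;
                const double freq_hz = 50.0;    /* commanded electrical frequency */
                const double depth = 0.9;       /* modulation depth, 0..1 */
                const double dt = 0.001;        /* update period for the demo */

                /* Three sinusoidal PWM duty cycles, 120 degrees apart,
                   one per MOSFET half-bridge. */
                for (int step = 0; step < 20; step++) {
                    double a = 2.0 * pi * freq_hz * step * dt;
                    double du = 0.5 + 0.5 * depth * sin(a);
                    double dv = 0.5 + 0.5 * depth * sin(a - 2.0 * pi / 3.0);
                    double dw = 0.5 + 0.5 * depth * sin(a + 2.0 * pi / 3.0);
                    printf("%6.1f deg: %.2f %.2f %.2f\n",
                           fmod(a * 180.0 / pi, 360.0), du, dv, dw);
                }
                return 0;
            }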

  • userbinator 18 hours ago

    > I have a rice cooker that has terms like "micom"

    I'm going to guess it's a Japanese rice cooker, because "micom" is a Japanese shortening of "microcomputer", which is what they've been calling microcontrollers. Using it for marketing most certainly dates from the time when computerised control was considered a desirable novel feature, much like when "solid state" was used in English.

    • dlcarrier 18 hours ago

      When a language gets a word from another language, linguists call it either a loanword or a borrowed word. They're odd terms, because loaning and borrowing imply it'll be given back, which isn't expected when a language adopts a word from a different language.

      Japan loves to give words back to English though, with cosplay, anime, emoticon, and now micom all originally being from English, then used in Japanese, and brought back to English from their Japanese form.

      • JdeBP 12 hours ago

        It isn't expected until one then learns enough comparative linguistics and etymology to discover that loanwords coming back has happened throughout history. And not just with European languages, although for obvious historical reasons the borrowings and weird routes that some words have travelled across multiple languages is far better documented for European languages.

        It's a thing that happens with linguistic contact, and the deeper learning is that it's odd to expect it not to happen to the same word twice between languages with long-standing contact on occasion, given how often it happens and for how many millennia the process has been active.

    • thaumasiotes 14 hours ago

      > I'm going to guess it's a Japanese rice cooker, because "micom" is a Japanese shortening of "microcomputer"

      How do you produce a final "m" in Japanese without a following M, B, or P?

      • kens 5 hours ago

        It looks like "マイコン" is transliterated to "micom" in English, even though the Japanese word ends in "n", not "m". For example, Micom BASIC, Micom Car Rally, Micom Games.

  • partdavid a day ago

    Since the jargon we've invented in technology has derived from natural language, it's often repurposing common terms as terms of art. In my opinion this leads to ambiguity and I sometimes pine for the abstruse but more precise jargon from classical languages you can use in medicine (for example).

    For example, how many things does "link" mean? "Process"? "Type"? "Local"? It makes people (e.g., non-technical people) think that they understand what I mean when I talk about these things but sometimes they do and sometimes they don't. Sometimes we use it in a colloquial sense, but sometimes we'd like to use it in a strict technical sense. Sometimes we can invent a new, precise term like "hyperlink" or "codec" but as often as not it fails to gain traction ("hyperlink" is outdated).

    That's one reason we get a lot of acronyms, too. They're unconversational, but they can at least signal that we're talking about something specific and rigorous rather than loose.

    • dlcarrier 20 hours ago

      Medical jargon (or at least biology jargon) can still conflict with common language. For example: thorn, spine, and prickle all have different meanings in biology, and the term thorn doesn't cover anything native to England, where the word derives from and was used in Shakespeare's plays.

  • unnah 14 hours ago

    Although SMP is an abbreviation for Symmetric Multi-Processing (multiple processors or processor cores with shared memory), SMT is not symmetric but Simultaneous Multi-Threading. To get back on topic, SMT is often confused with barrel processors that switch threads between clock cycles, like the Honeywell 800 with special register groups. The "simultaneous" in SMT means that a single processor core runs instructions from multiple threads on the same clock cycle.