What is meant by computer architecture?

What are different types of computer architectures?


I am going through the book "The Elements of Computing Systems". This book shows you how to build an entire computer from scratch. While flipping through the chapters on computer architecture, I noticed that everything focused on the von Neumann architecture. I am just curious what the other architectures are, and when and where they are used.

I only know two: one is von Neumann and the other is Harvard. I also know of RISC, which is used in AVR microcontrollers.



Reply:


There are many different types of computer architectures.

One way of categorizing computer architectures is by the number of instructions executed per clock. Many computing machines read and execute one instruction at a time (or they try hard to pretend they do, even if internally they do fancy superscalar and out-of-order things). I call such machines "von Neumann" machines, because all of them have a von Neumann bottleneck. Such machines include CISC, RISC, MISC, TTA, and DSP architectures, and they include accumulator machines, register machines, and stack machines. Other machines read and execute several instructions at a time (VLIW, superscalar), breaking the one-instruction-per-clock limit, but still hitting the von Neumann bottleneck at some slightly larger number of instructions per clock. Still other machines are not limited by the von Neumann bottleneck, because they load all their operations once at power-up and then process data with no further stream of instructions. Such non-von-Neumann machines include dataflow architectures.
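
To make the "one instruction at a time" idea concrete, here is a minimal sketch in C of the fetch-decode-execute loop that every von Neumann bottleneck machine conceptually runs. The three-instruction accumulator machine, its opcodes, and its encoding are all invented for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 3-instruction accumulator machine; opcodes are invented. */
enum { OP_LOAD = 0, OP_ADD = 1, OP_HALT = 2 };

int main(void) {
    /* One instruction per word: high byte = opcode, low byte = operand address. */
    uint16_t mem[256] = {
        [0]  = (OP_LOAD << 8) | 10,   /* acc  = mem[10] */
        [1]  = (OP_ADD  << 8) | 11,   /* acc += mem[11] */
        [2]  = (OP_HALT << 8),
        [10] = 2, [11] = 3,           /* data lives in the same memory */
    };
    uint16_t pc = 0, acc = 0;

    for (;;) {
        uint16_t insn   = mem[pc++];          /* FETCH: one instruction per cycle */
        uint8_t  opcode = insn >> 8;          /* DECODE */
        uint8_t  addr   = insn & 0xFF;
        switch (opcode) {                     /* EXECUTE */
        case OP_LOAD: acc = mem[addr];  break;
        case OP_ADD:  acc += mem[addr]; break;
        case OP_HALT: printf("acc = %u\n", (unsigned)acc); return 0;
        }
    }
}
```

Note that instructions and data share the single `mem[]` array here, so every cycle is gated by that one memory access path; that shared path is the bottleneck the answer is talking about.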

Another way to categorize computer architectures is by the connection(s) between the CPU and memory. Some machines have a unified memory, so that a single address corresponds to a single place in memory. If that memory is RAM, you can use such an address to read and write data, or load that address into the program counter to execute code. I call these machines Princeton machines. Other machines have several separate address spaces, so that the program counter always refers to "program memory" no matter what address is loaded into it, and normal reads and writes always go to "data memory", a separate place that usually contains different information even when the bits of a data address happen to be identical to the bits of a program-memory address. Those machines are "pure Harvard" or "modified Harvard" machines.
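
A toy way to see the difference in C (just a sketch; the array types and sizes are invented): a Princeton machine is backed by one array, while a pure Harvard machine has two independent arrays, so the same numeric address names two different locations:

```c
#include <stdint.h>

/* Princeton: one address space. Address 0x0100 names exactly one location,
 * whether the CPU fetches an instruction from it or loads data from it. */
struct princeton {
    uint8_t mem[65536];
};

/* Pure Harvard: two independent address spaces. Address 0x0100 in prog[]
 * and address 0x0100 in data[] are different locations, usually holding
 * different contents; the program counter can only ever index prog[]. */
struct harvard {
    uint16_t prog[32768];   /* reached only via the program counter  */
    uint8_t  data[65536];   /* reached only via loads and stores     */
};
```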

Some people use a narrow definition of "von Neumann machine" that does not include Harvard machines. If you are one of those people, what term would you use for the more general concept of "a machine with a von Neumann bottleneck", which includes both Harvard and Princeton machines and excludes NON-VON?

Most embedded systems use the Harvard architecture. A few CPUs are "pure Harvard", which is perhaps the simplest arrangement to build in hardware: the address bus to the read-only program memory is connected exclusively to the program counter, as in many early Microchip PICmicros. Some modified Harvard machines additionally put constants in program memory, which can be read with a special "read constant data from program memory" instruction (distinct from the "read from data memory" instruction). Software running on the above kinds of Harvard machines cannot alter program memory, which is effectively ROM to that software. Some embedded systems are "self-programmable", typically with program memory in flash: they have a special "erase block of flash memory" instruction and a special "write block of flash memory" instruction (distinct from the normal "write to data memory" instruction), in addition to the "read data from program memory" instruction. Several more recent Microchip PICmicros and Atmel AVRs are self-programmable modified Harvard machines.
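
For instance, on an AVR (a modified Harvard machine) a constant placed in program memory cannot be read with a normal data load; avr-libc exposes the special "read from program memory" instruction (LPM) through `pgm_read_byte`. A minimal sketch (the table contents are made-up example data):

```c
#include <avr/pgmspace.h>
#include <stdint.h>

/* PROGMEM puts the table in flash (program memory), not in data RAM. */
static const uint8_t sine_table[4] PROGMEM = { 0, 49, 90, 117 };

uint8_t lookup(uint8_t i) {
    /* i must be < 4. A plain sine_table[i] would read from the *data*
     * address space at the same numeric address and return garbage;
     * pgm_read_byte emits the special LPM instruction, which reads
     * from the *program* address space instead. */
    return pgm_read_byte(&sine_table[i]);
}
```

Self-programming works the same way: the erase/write-flash operations are exposed as special routines (e.g. via avr-libc's `<avr/boot.h>`), separate from ordinary stores to data memory.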

Yet another way to categorize CPUs is by their clock. Most computers are synchronous: they have a single global clock. Some CPUs are asynchronous: they have no clock at all. These include the ILLIAC I and ILLIAC II, which at one time were the fastest supercomputers on Earth.

Please help improve the description of all types of computer architecture at http://en.wikibooks.org/wiki/Microprocessor_Design/Computer_Architecture.






CISC is the "opposite" of RISC. Where RISC prefers simple instructions that are easy for the compiler to optimize and are often all the same size, CISC likes complex instructions of varying sizes.

For example, a pop instruction on a CISC will both change the stack pointer and place the data from the stack into a register. A RISC processor would instead read the data with one instruction and then modify the stack pointer with a second instruction. (Generally speaking, there are exceptions, such as the PowerPC, which can update the stack pointer and push data in a single instruction, but that is an exception.)
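
To spell out the pop example, here is a sketch of the semantics in C (the register file and word-addressed memory model are invented for illustration). Both functions compute the same result; the architectural difference is how many instructions encode it:

```c
#include <stdint.h>

struct cpu {
    uint32_t reg[16];
    uint32_t sp;            /* stack pointer (word index)     */
    uint32_t mem[1024];     /* word-addressed, for simplicity */
};

/* CISC: one "pop r" instruction performs both steps as a unit. */
void cisc_pop(struct cpu *c, int r) {
    c->reg[r] = c->mem[c->sp];   /* load from the top of stack   */
    c->sp += 1;                  /* and adjust the stack pointer */
}

/* RISC: the same effect takes two separate instructions. */
void risc_pop(struct cpu *c, int r) {
    c->reg[r] = c->mem[c->sp];   /* instruction 1: load r, [sp]   */
    c->sp += 1;                  /* instruction 2: add sp, sp, #1 */
}
```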

Because RISC instructions are all the same size, disassemblers are easier to write. Designing the processor is also easier, because the pipeline does not have to accommodate varying instruction sizes. However, CISC code density tends to be better, both because the complex instructions take fewer bytes to represent the same number of operations, and because the variable instruction length allows for a degree of "compression".
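
The disassembler point is easy to see in code: with fixed 4-byte instructions you can decode at any multiple of 4 without tracking lengths, whereas a variable-length ISA forces you to decode each instruction just to find where the next one starts. A sketch in C (the opcode-to-length table is invented):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Fixed-width (RISC-style): every instruction is 4 bytes, so the loop
 * never needs to ask "how long was that one?" */
void disasm_fixed(const uint32_t *code, size_t n_words) {
    for (size_t i = 0; i < n_words; i++)
        printf("%04zx: %08x\n", i * 4, (unsigned)code[i]);
}

/* Variable-length (CISC-style): the length depends on the opcode, so a
 * single wrong guess desynchronizes everything that follows. The length
 * table here is hypothetical, and a real decoder would also have to
 * check that len does not run past the end of the buffer. */
void disasm_variable(const uint8_t *code, size_t n_bytes) {
    static const size_t length_of[4] = { 1, 2, 4, 6 };
    size_t i = 0;
    while (i < n_bytes) {
        size_t len = length_of[code[i] & 3];  /* must decode to advance */
        printf("%04zx: opcode %02x, %zu bytes\n", i, (unsigned)code[i], len);
        i += len;
    }
}
```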

There are also other exotic architectures such as VLIW/EPIC. These architectures were designed with parallel processing in mind. However, they have not fared very well, because they place a very heavy burden on the compiler to optimize, whereas other architectures have fancy instruction windows that relieve the compiler of some of the optimization burden.



Well, there is such a thing as the ENIAC, where you have essentially individual ALUs and you "program" them by wiring the output of one ALU to the input of another ALU that is going to perform the next operation on that intermediate result. Its "registers" and memory are the wires connecting the ALUs.

I recently purchased the book "The First Computers: History and Architectures (History of Computing)", which focuses partly on exactly this topic. I don't recommend buying it, though: it is just a collection of academic papers that are hard to read, and I suspect they have been published elsewhere (for free). (I gave up before finishing the introduction.)

Once memory was invented and became practical, we settled on the two popular architectures, von Neumann and Harvard. Rewiring, punch cards, paper tape, and the like became less practical. There are also stack-based machines (the ZPU, for example), which I suspect fall under the Harvard category rather than forming one of their own.

What about von Neumann platforms that boot from (normally write-protected) flash on one memory interface and have read/write data RAM on another (which can sometimes operate on both in parallel), but which, from the program's point of view, lie in a single address space? Or platforms with several internal and external memories/interfaces all operating in parallel, but which are von Neumann by virtue of being in the same address space?

And what good is a Harvard platform whose processor cannot access the instruction memory as data, to change/update the bootloader or to load the next program to be executed? And when it can, why is that not a von Neumann architecture? The processor executing from, and operating on, the same memory over the same interface, just serially (instruction fetches and memory writes not happening at the same time)?

In current implementations, IMO, the two popular memory-based architectures are closer to each other than they are different.

Where do GPUs fall? Or the business I work in, network processors (NPUs)? There you have these relatively small, special-purpose micro-engines (processors) that execute from a Harvard-like program RAM (addressable, but you just don't want to, for performance reasons), operate on various data RAMs, each with its own separate address space (separate processor instructions for each space, with the memory spaces operating in parallel), and hand off intermediate data through those RAMs so that the next calculation is carried out by the next micro-engine, in a wired-up-ALU (ENIAC-like) fashion. What would you call that? Are NPUs and GPUs just modified Harvard architectures?





Both the von Neumann and Harvard architectures can be used with RISC processors such as the AVR and ARM. The AVR uses Harvard, while some ARM chips use von Neumann and some use Harvard.

