Computer Hardware

Key Terms

  • Address space: The number of memory cells that can be addressed individually for a given address size; an N-bit address can refer to 2^N distinct cells (see the worked example after this list).
  • Arithmetic/logic unit (ALU): The subsystem of a Von Neumann architecture that performs arithmetic operations, comparisons, and logical operations.
  • Bus: A path for electrical signals (think of this as a wire).
  • Byte: A standard cell size of 8 bits.
  • Cache memory: A small, special-purpose, very fast memory used to temporarily store data values the computer is currently using in order to improve the computer’s operation speed. Values are replaced in the cache as new values are used by the system (see the cache sketch after this list).
  • Cell: The smallest addressable collection of bits in a computer; these days, cells are almost always eight bits, or one byte, in size.
  • Cell size (memory width): The number of bits per cell, usually denoted as W.
  • CISC (complex instruction set computer) machines: Processors designed to directly provide a wide range of powerful features so that finished programs for these processors are shorter.
  • Computer organization: The study of ways in which circuits and other functional units can be combined to create a fully functional computer.
  • Control unit: The subsystem of a Von Neumann architecture that manages the execution of a program in the computer’s memory by communicating with the other subsystems to fetch, decode, and execute the program’s instructions.
  • Destructive store: Store the specified value into the memory cell specified by address. The previous contents of the cell are lost.
  • Direct access storage device: An archival storage device, such as a hard drive or a DVD, in which individual data values are stored and retrieved by address; the device can move directly to any requested location without reading through the data stored before it, although retrieval times still vary with the data’s physical position.
  • Fetch: The process of retrieving a value from a memory location in order to put it in a register or pass it to a processing unit.
  • Fetch/store controller: This unit determines whether the contents of a memory cell are placed into the MDR (a fetch operation) or the contents of the MDR are placed into a memory cell (a store operation); a sketch of both operations appears after this list.
  • Functional units: The different subsystems of a computer, each of which performs some specific portion of the computer’s overall task. Functional units may include memory, computational, and control circuits to carry out their tasks.
  • Grid computing: A form of distributed computing that enables researchers to easily and transparently access computer facilities without regard for their location.
  • I/O controller: A special-purpose processor that mediates between the main computer processor and a specific input or output device. The controller records the request made by the main processor, leaving the main processor free to continue other tasks, and interrupts it when the input or output task is complete.
  • Input/Output: The subsystem responsible for managing communications with external devices, including external memory storage devices, other processors, and human interaction devices.
  • Instruction set: The set of codes that describe all of the legal operations that a particular computer can execute.
  • Interconnection network: A communications system that allows processors to exchange messages and data.
  • Interrupt signal: The signal sent by an I/O controller to the main processor when its I/O task is complete.
  • Latency: The time needed for the beginning of the desired sector to rotate under the read/write head.
  • Machine language: The instruction set and rules for each instruction’s operands that form the fundamental language for describing algorithms at the hardware/architecture level.
  • Mass storage systems: Storage systems, such as floppy disks, flash memory, hard disks, CDs, DVDs, and streaming tapes, that hold large volumes of data outside of main memory.
  • Memory: The subsystem responsible for storing data values and moving values to and from memory locations.
  • Memory access time: The time required to fetch a value from memory or store a value into it; for modern RAM, typically about 5–10 nsec.
  • Memory address: An unsigned binary integer of some standard length that refers to a particular memory cell.
  • Memory Address Register (MAR): Holds the address of the cell to be fetched or stored.
  • Memory Data Register (MDR): Contains the data value being fetched or stored.
  • MIMD (multiple instruction stream, multiple data stream): A parallel computer architecture in which multiple processors operate on their own sets of data with their own sets of instructions to perform multiple tasks simultaneously.
  • Nondestructive fetch: Fetch a copy of the contents of the memory cell with the specified address and return those contents as the result of the operation. The original contents of the memory cell that was accessed are unchanged.
  • Op code: An unsigned binary integer that is assigned to a specific task the hardware can perform (a decoding sketch appears after this list).
  • Parallel algorithms: The study of techniques that make efficient use of parallel architectures.
  • Parallel processing: An area of study of non-Von Neumann architectures that focuses on systems able to execute multiple instructions at the same time.
  • Principle of Locality: The observation that when the computer uses a value, it will probably use that value again very soon, and it will probably use the “neighbors” of that value very soon.
  • Random access memory (RAM): The typical computer memory where each cell is addressed individually and may be updated or accessed in an amount of time that is independent of the cell’s location.
  • Read-only memory (ROM): A memory that is usually very similar to RAM, except it may only be accessed, not updated.
  • Register: A special kind of very fast memory that is referred to by name rather than by a memory address number. Registers often hold data values that are currently in use by the ALU or other subsystems of the architecture.
  • Scalability: The property that, at least theoretically, the number of processors can be matched to the size of the problem.
  • Seek time: The time needed to position the read/write head over the correct track.
  • Sequential access storage device: An archival device for storing data, such as a tape, in which the data is stored sequentially on the tape, and access times depend on the position of the data on the tape.
  • Sequential execution of instructions: A key component of a Von Neumann architecture, it is the idea that instructions are listed in some sequential order, and the computer executes one instruction at a time.
  • SIMD (single instruction stream, multiple data stream): A parallel computer architecture in which multiple processors execute the same sequence of instructions on different pieces of data (a brief sketch appears after this list).
  • Store: The process of writing a value to a memory cell.
  • Stored program concept: A key component of a Von Neumann architecture in which the algorithm that the computer executes is stored inside of the computer in its memory, just like the data upon which the algorithm operates.
  • Transfer time: The time for the entire sector to pass under the read/write head and have its contents read into or written from memory (a worked example combining seek time, latency, and transfer time appears after this list).
  • Von Neumann architecture: The theoretical organization of a computer, developed by John Von Neumann, which has formed the basic architecture for most modern computers during the past 50 years.
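
Worked Examples

The address-space arithmetic can be made concrete with a short Python sketch. The values below are hypothetical, chosen only for illustration: an N-bit address selects one of 2^N cells, and total capacity follows from the cell size W.

    address_bits = 16        # hypothetical address size (N)
    cell_size_w = 8          # bits per cell (W); one byte

    num_cells = 2 ** address_bits         # 2^16 = 65,536 addressable cells
    total_bits = num_cells * cell_size_w  # 65,536 cells * 8 bits = 524,288 bits

    print(num_cells)    # 65536
    print(total_bits)   # 524288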
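
The fetch and store operations, and the roles of the MAR and MDR, can be sketched as a toy memory unit. This is a simplification for illustration only; the class and its sizes are invented, not taken from any real hardware. A fetch copies a cell into the MDR and leaves the cell unchanged (nondestructive); a store overwrites the cell with the MDR’s contents, losing the previous value (destructive).

    class ToyMemory:
        def __init__(self, num_cells=16):
            self.cells = [0] * num_cells  # one value per addressable cell
            self.mar = 0                  # Memory Address Register
            self.mdr = 0                  # Memory Data Register

        def fetch(self, address):
            # Nondestructive fetch: copy the cell into the MDR;
            # the cell itself is unchanged.
            self.mar = address
            self.mdr = self.cells[self.mar]
            return self.mdr

        def store(self, address, value):
            # Destructive store: the previous contents of the cell are lost.
            self.mar = address
            self.mdr = value
            self.cells[self.mar] = self.mdr

    mem = ToyMemory()
    mem.store(3, 42)
    print(mem.fetch(3))  # prints 42; cell 3 still holds 42 after the fetch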
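
The Principle of Locality is the reason a small cache pays off: a value the computer just used, or a neighbor of that value, is likely to be needed again soon, so keeping it in fast memory avoids a slower trip to RAM. Below is a minimal sketch, not a real cache design; it uses a fixed-capacity table that replaces the least recently used value, one common replacement policy.

    from collections import OrderedDict

    class ToyCache:
        def __init__(self, capacity=4):
            self.capacity = capacity
            self.table = OrderedDict()  # address -> value, oldest first

        def lookup(self, address, slow_memory):
            if address in self.table:         # cache hit: fast path
                self.table.move_to_end(address)
                return self.table[address]
            value = slow_memory[address]      # cache miss: go to slower RAM
            self.table[address] = value
            if len(self.table) > self.capacity:
                self.table.popitem(last=False)  # evict least recently used
            return value

    ram = {addr: addr * 10 for addr in range(100)}
    cache = ToyCache()
    for addr in [5, 6, 5, 7, 5, 6]:   # locality: repeated, neighboring addresses
        cache.lookup(addr, ram)       # only 3 of these 6 lookups miss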
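
For a rotating mass storage device, the total time to read one sector is roughly the sum of seek time, latency, and transfer time. A worked example with hypothetical numbers (a 7,200-rpm drive, 64 sectors per track, 9-msec average seek):

    rpm = 7200
    rotation_msec = 60_000 / rpm      # ~8.33 msec per full rotation

    seek_msec = 9.0                   # hypothetical average seek time
    latency_msec = rotation_msec / 2  # on average, half a rotation: ~4.17 msec
    sectors_per_track = 64
    transfer_msec = rotation_msec / sectors_per_track  # ~0.13 msec per sector

    total_msec = seek_msec + latency_msec + transfer_msec
    print(round(total_msec, 2))       # ~13.3 msec: roughly a million times
                                      # slower than a 5-10 nsec RAM access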
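
An instruction set pairs each op code with one operation the hardware can perform, and the control unit decodes each fetched instruction into its op code and operands. The sketch below invents a tiny three-instruction machine with a 4-bit op code and a 12-bit address field; the layout and mnemonics are hypothetical, not from any real processor.

    # Hypothetical instruction set: op code -> operation
    INSTRUCTION_SET = {
        0b0001: "LOAD",   # fetch a memory cell into a register
        0b0010: "STORE",  # store a register into a memory cell
        0b0011: "ADD",    # ALU addition
    }

    def decode(instruction):
        # Upper 4 bits: op code; lower 12 bits: address operand.
        op_code = (instruction >> 12) & 0xF
        address = instruction & 0xFFF
        return INSTRUCTION_SET[op_code], address

    print(decode(0b0001_0000_0010_1010))  # ('LOAD', 42)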
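
Finally, the SIMD idea: one instruction stream drives many processors, each holding its own piece of data. Ordinary sequential Python can only stand in for the parallel hardware, but the sketch shows the shape of the model, with the same instruction applied across multiple data values.

    # One instruction ("multiply by 2") applied to every element of the
    # data stream; on SIMD hardware, each element sits on its own processor
    # and all of them execute the instruction simultaneously.
    data = [10, 20, 30, 40]
    result = [x * 2 for x in data]
    print(result)  # [20, 40, 60, 80]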