Subject : Computer Architecture & Organization Subject Code : 25BTPCL303 UNIT - I Presented By: Prof Dr. Roshni Golhar (NEP 2020 Pattern) (With effect from June 2025) Version – 01
Computer Architecture & Organization
Contents: Computer Architecture and Organization Computer Components CPU Memory Input-Output Subsystems Control Unit
Unit I: Functional Blocks of a Computer & Data Representation
Interconnection Structures & Bus Interconnection Signed Number Representation Fixed and Floating Point Representations IEEE 754 Format Character Representation Number Conversion Self Study: Top level view of computer system
Computer Architecture & Organization Computer Organization and Architecture, Structure and Function, Evolution (a brief history) of computers. A top-level view of Computer function and interconnection- Computer Components, Computer Function, Interconnection structure, bus interconnection, Computer Arithmetic- The Arithmetic and Logic Unit, addition and subtraction of signed numbers, design of adder and fast adder, carry-lookahead addition, multiplication of positive numbers, signed operand multiplication, Booth's algorithm, fast multiplication, integer division. Floating point representation and operations – IEEE standard, arithmetic operations, guard bits and truncation.
Architecture is those attributes visible to the programmer Instruction set, number of bits used for data representation, I/O mechanisms, addressing techniques. e.g. Is there a multiply instruction? Organization is how features are implemented Control signals, interfaces, memory technology. e.g. Is there a hardware multiply unit or is it done by repeated addition?
All Intel x86 family share the same basic architecture The IBM System/370 family share the same basic architecture This gives code compatibility At least backwards Organization differs between different versions Structure is the way in which components relate to each other Function is the operation of individual components as part of the structure
Functions All computer functions include: Data processing Data storage Data movement Control
Functional view
Operations (1) Data movement
Operations (2) Storage
Operation (3) Processing from/to storage
Operation (4) Processing from storage to I/O
Computer Generation First Generation The period 1940 to 1956 is roughly considered the First Generation of computers. The first-generation computers were developed using vacuum tube or thermionic valve technology. The input of these systems was based on punched cards and paper tape; the output was displayed on printouts. The first-generation computers worked on the binary-coded concept (i.e., the language of 0s and 1s). Examples: ENIAC (Electronic Numerical Integrator and Computer), EDVAC (Electronic Discrete Variable Automatic Computer), etc.
Second Generation The period 1956 to 1963 is roughly considered as the period of Second Generation of Computers. The second-generation computers were developed by using transistor technology. In comparison to the first generation, the size of second generation was smaller. In comparison to computers of the first generation, the computing time taken by the computers of the second generation was lesser.
Third Generation: The period 1963 to 1971 is roughly considered as the period of Third Generation of computers. The third-generation computers were developed by using the Integrated Circuit (IC) technology. In comparison to the computers of the second generation, the size of the computers of the third generation was smaller, and the computing time taken was lesser. The third-generation computer consumed less power and also generated less heat. The maintenance cost of the computers in the third generation was also low. Third-generation computers were also better suited for commercial use.
Fourth Generation: The period 1972 to 2010 is roughly considered as the fourth generation of computers. The fourth-generation computers were developed by using microprocessor technology. With the fourth generation, computers became very small in size and portable. Fourth-generation machines generated very little heat, were much faster, and were more accurate and reliable. The production cost dropped very low in comparison to the previous generation, and computers became available to the common people as well.
Fifth Generation: The period from 2010 to the present (and beyond) is roughly considered as the period of the fifth generation of computers. Until then, computer generations had been categorized on the basis of hardware only, but the fifth generation also includes software. The computers of the fifth generation have high capability and large memory capacity. Working with computers of this generation is fast, and multiple tasks can be performed simultaneously. Some of the popular advanced technologies of the fifth generation include Artificial Intelligence, quantum computation, nanotechnology, parallel processing, etc.
Computer Components Virtually all contemporary computer designs are based on concepts developed by John von Neumann at the Institute for Advanced Study. The design is based on three key concepts:
Data and instructions are stored in a single read–write memory. The contents of this memory are addressable by location, without regard to the type of data contained there. Execution occurs in a sequential fashion (unless explicitly modified) from one instruction to the next.
There is a small set of basic logic components that can be combined in various ways to store binary data and to perform arithmetic and logical operations on that data. If there is a particular computation to be performed, a configuration of logic components designed specifically for that computation could be constructed. The resulting “program” is in the form of hardware and is termed a hardwired program.
Hardwired Program Suppose we construct a general-purpose configuration of arithmetic and logic functions. This set of hardware will perform various functions on data depending on control signals applied to the hardware. With general-purpose hardware, the system accepts data and control signals and produces results.
Full Adder Circuit
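The full adder circuit above can be sketched in code. A minimal Python model of the standard one-bit full adder equations (sum = a XOR b XOR carry-in; carry-out = majority of the three inputs), chained into a ripple-carry adder for illustration:

```python
def full_adder(a, b, cin):
    """One-bit full adder: sum = a XOR b XOR cin, carry-out = majority(a, b, cin)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add(x, y, width=4):
    """Add two `width`-bit numbers by chaining full adders, carry rippling upward."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry  # (width-bit sum, final carry-out)

print(ripple_add(0b0101, 0b0011))  # 5 + 3 -> (8, 0)
```

The serial carry chain here is exactly why the syllabus later covers carry-lookahead addition: each bit must wait for the carry from the bit below it.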
Software Program: A stored program computer is controlled by instructions. The set of instructions a computer supports is also called its instruction set architecture (ISA). Instructions are elementary operations such as adding two numbers, loading data from memory, or jumping to another location in the program code.
A program consists of a sequential stream of instructions loaded from memory that control the hardware. The primary resource (or work) of a computer is therefore instruction execution. The generic nature of stored program computers is also their biggest problem when it comes to speeding up execution of program code.
Python Interpreter
Indicates two major components of the system: an instruction interpreter and a module of general-purpose arithmetic and logic functions. These two constitute the CPU. Data and instructions must be put into the system. This module contains basic components for accepting data and instructions in some form and converting them into an internal form of signals usable by the system.
Computer Components: Top Level View
The CPU exchanges data with memory. For this purpose, it typically makes use of two internal (to the CPU) registers: a memory address register (MAR), which specifies the address in memory for the next read or write, and a memory buffer register (MBR), which contains the data to be written into memory or receives the data read from memory.
Similarly, an I/O address register (I/OAR) specifies a particular I/O device. An I/O buffer (I/OBR) register is used for the exchange of data between an I/O module and the CPU. A memory module consists of a set of locations, defined by sequentially numbered addresses. Each location contains a binary number that can be interpreted as either an instruction or data.
An I/O module transfers data from external devices to CPU and memory, and vice versa. It contains internal buffers for temporarily holding these data until they can be sent on. We now turn to an overview of how these components function together to execute programs.
The basic function performed by a computer is execution of a program, which consists of a set of instructions stored in memory. The processor does the actual work by executing instructions specified in the program. Instruction processing consists of two steps: The processor reads (fetches) instructions from memory one at a time and executes each instruction. Program execution consists of repeating the process of instruction fetch and instruction execution .
The processing required for a single instruction is called an instruction cycle. The two steps are referred to as the fetch cycle and the execute cycle. Program execution halts only if the machine is turned off, some sort of unrecoverable error occurs, or a program instruction that halts the computer is encountered .
At the beginning of each instruction cycle, the processor fetches an instruction from memory. In a typical processor, a register called the program counter (PC) holds the address of the instruction to be fetched next.
Assume that the program counter is set to location 300. The processor will next fetch the instruction at location 300. On succeeding instruction cycles, it will fetch instructions from locations 301, 302, 303, and so on. This sequence may be altered. The fetched instruction is loaded into a register in the processor known as the instruction register (IR).
In general, these actions fall into four categories: Processor-memory: Data may be transferred from processor to memory or from memory to processor. Processor-I/O: Data may be transferred to or from a peripheral device by transferring between the processor and an I/O module. Data processing: The processor may perform some arithmetic or logic operation on data .
Control: An instruction may specify that the sequence of execution be altered. For example, the processor may fetch an instruction from location 149, which specifies that the next instruction be from location 182. The processor will remember this fact by setting the program counter to 182. Thus, on the next fetch cycle, the instruction will be fetched from location 182 rather than 150.
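The fetch-execute cycle described above can be sketched as a short Python loop. The machine below is hypothetical (the opcodes, the single accumulator, and the instruction format are invented for illustration), but the loop structure — fetch at PC, advance PC, then execute, with a control instruction overwriting PC — follows the cycle just described:

```python
# Made-up opcodes for a toy accumulator machine (illustration only).
LOAD, ADD, JUMP, HALT = 0, 1, 2, 3

def run(memory, pc=0):
    """Repeat the fetch cycle and execute cycle until a HALT is executed."""
    acc = 0  # accumulator register
    while True:
        opcode, operand = memory[pc]   # fetch cycle: read the instruction at PC
        pc += 1                        # PC now points at the next sequential instruction
        if opcode == LOAD:             # execute cycle: decode and act
            acc = operand
        elif opcode == ADD:
            acc += operand
        elif opcode == JUMP:
            pc = operand               # control: alter the sequence of execution
        elif opcode == HALT:
            return acc

program = {0: (LOAD, 5), 1: (ADD, 3), 2: (HALT, 0)}
print(run(program))  # 8
```

Note how JUMP simply replaces the program counter, exactly as in the location-149/182 example above.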
CPU The CPU is the heart and brain of the computer. It interprets and executes machine-level instructions, controls data transfer between main memory (MM) and the CPU, and detects errors. CPU operation is determined by the instructions it executes. The collection of instructions that a CPU can execute forms its instruction set.
An instruction is represented as a sequence of bits. The opcode indicates the operation to be performed; for example, an opcode of 92 in the example above indicates a copy operation, which needs two operands: one source and one destination. The opcode also determines the nature of the operands — whether each is data or an address, and its mode (register or memory); for instance, operand 1 may be a memory address while operand 2 is immediate data.
Memory A memory unit is the collection of storage units or devices together. The memory unit stores the binary information in the form of bits. Volatile Memory: This loses its data, when power is switched off. Non-Volatile Memory: This is a permanent storage and does not lose any data when power is switched off.
Memory Hierarchy:
The memory hierarchy system consists of all storage devices contained in a computer system from the slow Auxiliary Memory to fast Main Memory and to smaller Cache memory. Auxiliary memory access time is generally 1000 times that of the main memory, hence it is at the bottom of the hierarchy. The main memory occupies the central position because it is equipped to communicate directly with the CPU and with auxiliary memory devices through Input/output processor (I/O).
The cache memory is used to store program data which is currently being executed in the CPU. Approximate access time ratio between cache memory and main memory is about 1 to 7~10 . Memory Access Methods: Each memory type is a collection of numerous memory locations. To access data from any memory, first it must be located and then the data is read from the memory location.
Random Access: Main memories are random access memories, in which each memory location has a unique address. Using this unique address, any memory location can be reached in the same amount of time, in any order. Sequential Access: This method allows memory access in a sequence or in order. Direct Access: In this mode, information is stored in tracks, with each track having a separate read/write head.
Main Memory: The memory unit that communicates directly with the CPU, auxiliary memory, and cache memory is called main memory. It is the central storage unit of the computer system. Main memory is made up of RAM and ROM, with RAM integrated circuit chips holding the major share.
RAM: Random Access Memory DRAM: Dynamic RAM is made of capacitors and transistors and must be refreshed every 10~100 ms. It is slower and cheaper than SRAM. SRAM: Static RAM has a six-transistor circuit in each cell and retains data as long as power is supplied. NVRAM: Non-Volatile RAM retains its data even when turned off. Example: Flash memory.
ROM: Read Only Memory It is non-volatile and is more like a permanent storage for information. It also stores the bootstrap loader program, used to load and start the operating system when the computer is turned on. PROM (Programmable ROM), EPROM (Erasable PROM), and EEPROM (Electrically Erasable PROM) are some commonly used ROMs.
Auxiliary Memory: Devices that provide backup storage are called auxiliary memory. For example: Magnetic disks and tapes are commonly used auxiliary devices. Other devices used as auxiliary memory are magnetic drums, magnetic bubble memory and optical disks. It is not directly accessible to the CPU, and is accessed using the Input/output channels.
Input-Output Subsystems The I/O subsystem of a computer provides an efficient mode of communication between the central system and the outside environment. It handles all the input-output operations of the computer system.
Peripheral Devices: Input or output devices that are connected to computer are called peripheral devices. These devices are designed to read information into or out of the memory unit upon command from the CPU and are considered to be the part of computer system. These devices are also called peripherals. For example: Keyboards, display units and printers are common peripheral devices.
There are three types of peripherals: Input peripherals: Allow user input from the outside world to the computer. Example: Keyboard, mouse, etc. Output peripherals: Allow information output from the computer to the outside world. Example: Printer, monitor, etc. Input-output peripherals: Allow both input (from the outside world to the computer) and output (from the computer to the outside world). Example: Touch screen, etc.
Modes of I/O Data Transfer : Data transfer between the central unit and I/O devices can be handled in generally three types of modes which are given below: Programmed I/O Interrupt Initiated I/O Direct Memory Access
Programmed I/O: Programmed I/O data transfers are the result of I/O instructions written in the computer program. Each data item transfer is initiated by an instruction in the program. Usually, the program controls data transfer to and from the CPU and the peripheral. Transferring data under programmed I/O requires constant monitoring of the peripherals by the CPU.
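The busy-wait behaviour of programmed I/O can be sketched in a few lines of Python. The `Device` class below is a toy stand-in (its status flag and tick counter are invented for illustration); the point is the polling loop, in which the CPU does nothing useful until the device reports ready:

```python
class Device:
    """Toy peripheral that becomes ready only after being polled a few times."""
    def __init__(self, data):
        self.data, self.polls = data, 0
    def ready(self):
        self.polls += 1          # each status check costs a CPU access
        return self.polls >= 3   # device signals ready after 3 polls
    def read(self):
        return self.data

def programmed_io_read(dev):
    while not dev.ready():       # constant monitoring of the peripheral by the CPU
        pass                     # the CPU is kept busy needlessly while waiting
    return dev.read()

print(programmed_io_read(Device(0x41)))  # 65
```

This wasted polling time is exactly the problem that interrupt-initiated I/O, described next, removes: the device signals the CPU instead of the CPU repeatedly asking the device.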
Interrupt Initiated I/O: In the programmed I/O method the CPU stays in the program loop until the I/O unit indicates that it is ready for data transfer. This is time consuming process because it keeps the processor busy needlessly. This problem can be overcome by using interrupt initiated I/O. In this when the interface determines that the peripheral is ready for data transfer, it generates an interrupt.
Direct Memory Access: Removing the CPU from the path and letting the peripheral device manage the memory buses directly would improve the speed of transfer. This technique is known as DMA. Many hardware systems use DMA such as disk drive controllers, graphic cards, network cards and sound cards etc. It is also used for intra chip data transfer in multi-core processors.
Fixed and Floating-Point Representations For decimal numbers, we get around this limitation by using scientific notation. Thus, 976,000,000,000,000 can be represented as 9.76 × 10^14 and 0.0000000000000976 can be represented as 9.76 × 10^-14.
This same approach can be taken with binary numbers. We can represent a number in the form ±S × 2^E. This number can be stored in a binary word with three fields: Sign: plus or minus Significand S Exponent E
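The ±S × 2^E decomposition can be demonstrated directly in Python. The standard-library function `math.frexp` splits a float into a significand and a power-of-two exponent (using the convention 0.5 ≤ |S| < 1, one of several equivalent normalizations):

```python
import math

# Decompose each number as significand * 2**exponent and verify the round trip.
for x in (976e12, 9.76e-14, -12.5):
    s, e = math.frexp(x)         # x == s * 2**e with 0.5 <= |s| < 1
    print(f"{x:g} = {s} * 2**{e}")
    assert s * 2**e == x
```

For example, -12.5 decomposes as -0.78125 × 2^4, since 0.78125 × 16 = 12.5.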
Typical 32-Bit Floating-Point Format
Shows a typical 32-bit floating-point format. The leftmost bit stores the sign of the number (0 = positive, 1 = negative). The exponent value is stored in the next 8 bits. The representation used is known as a biased representation.
Typically, the bias equals 2^(k-1) − 1, where k is the number of bits in the binary exponent. In this case, the 8-bit field yields the numbers 0 through 255; with a bias of 127, the true exponents range from −127 to +128. The final portion of the word (23 bits in this case) is the significand.
Any floating-point number can be expressed in many ways; to standardize the representation, numbers are typically stored in normalized form, in which the most significant bit of the significand is 1.
Shows the biased representation for 4-bit integers. Note that when the bits of a biased representation are treated as unsigned integers, the relative ordering of the values is preserved, which makes comparison simple. For example, in both biased and unsigned representations, the largest number is 1111 and the smallest number is 0000.
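The 4-bit biased representation can be sketched as follows, assuming the bias 2^(k-1) − 1 = 7 for k = 4 (the same convention IEEE 754 uses with 127 for an 8-bit exponent):

```python
BIAS = 7  # 2**(k-1) - 1 for k = 4 bits

def encode(true_value):
    """Biased encoding: stored = true + bias, so true values -7..+8 map to 0..15."""
    return true_value + BIAS

def decode(stored):
    return stored - BIAS

# Unsigned ordering of the stored bits matches the ordering of the true values.
for v in (-7, -1, 0, 8):
    print(f"true {v:+d} -> stored {encode(v):04b}")
```

Because the stored patterns increase monotonically with the true values, comparing two biased exponents needs only an unsigned integer compare.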
The basic IEEE format is a 32-bit representation, shown in the figure above. The leftmost bit represents the sign, S, of the number. The next 8 bits, E, represent the signed exponent of the scale factor (with an implied base of 2), and the remaining 23 bits, M, are the fractional part of the significand. The full 24-bit string, B, of significant bits (the 23 stored bits plus an implied leading 1) is called the mantissa.
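The three IEEE 754 single-precision fields can be extracted with Python's standard `struct` module, which packs a float into its 32-bit pattern:

```python
import struct

def decompose_float32(x):
    """Split a float into IEEE 754 single-precision fields (S, biased E, M)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # 32-bit pattern
    sign = bits >> 31                 # 1 bit
    exponent = (bits >> 23) & 0xFF    # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF        # 23 stored fraction bits (implied leading 1)
    return sign, exponent, mantissa

s, e, m = decompose_float32(-6.5)
print(s, e - 127, hex(m))  # sign 1, true exponent 2, fraction 0x500000
```

Here -6.5 = -1.625 × 2^2, so the sign bit is 1, the stored exponent is 2 + 127 = 129, and the fraction 0.625 = 0.101₂ fills the top mantissa bits (0x500000).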
Character Representation The most common encoding scheme for characters is ASCII (American Standard Code for Information Interchange). Alphanumeric characters, operators, punctuation symbols, and control characters are represented by 7-bit codes. It is convenient to use an 8-bit byte to represent and store a character.
Note that the codes for the alphabetic and numeric characters are in increasing sequential order when interpreted as unsigned binary numbers. The low-order four bits of the ASCII codes for the decimal digits 0 to 9 are the first ten values of the binary number system.
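Both properties of the ASCII codes mentioned above — sequential ordering and digit values in the low four bits — can be checked directly in Python:

```python
# The low four bits of ASCII '0'..'9' are the binary values 0..9,
# so masking with 0x0F converts a digit character to its numeric value.
for ch in "0937":
    print(ch, bin(ord(ch)), ord(ch) & 0x0F)

# Alphabetic codes are sequential, so unsigned comparison sorts letters correctly.
assert all(ord(d) & 0x0F == int(d) for d in "0123456789")
assert ord('B') - ord('A') == 1 and 'A' < 'M' < 'Z'
```

For example, '0' is code 48 (0b0110000); its low-order four bits are 0000, the binary value 0.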