Microinstructions, Microprogrammed control unit, Micro instruction sequencing – Design considerations, sequencing techniques, Address generation
MODULE 2 CENTRAL PROCESSING UNIT
Arithmetic & Logic Unit (ALU) ▶ Part of the computer that actually performs arithmetic and logical operations on data ▶ All of the other elements of the computer system are there mainly to bring data into the ALU for it to process and then to take the results back out ▶ Based on simple digital logic devices that can store binary digits and perform simple Boolean logic operations
Arithmetic Operations: Addition ▶ Follows the same rules as decimal addition, except that a sum of 2 indicates a carry (not a 10) ▶ Carry rules: ▶ 0+0 = sum 0, carry 0 ▶ 0+1 = 1+0 = sum 1, carry 0 ▶ 1+1 = sum 0, carry 1 ▶ 1+1+1 = sum 1, carry 1 ▶ Example: 0111 + 0101 = 1100, with a carry generated out of each of the three lowest bit positions
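A minimal Python sketch of the carry rules above, applied bit by bit from LSB to MSB (the function name and the string-based operands are illustrative, not from the slides):

def add_binary(a: str, b: str) -> str:
    # Pad both operands to the same width.
    n = max(len(a), len(b))
    a, b = a.zfill(n), b.zfill(n)
    carry = 0
    result = []
    for i in range(n - 1, -1, -1):          # scan from LSB to MSB
        s = int(a[i]) + int(b[i]) + carry   # 0+0, 0+1, 1+0, 1+1 or 1+1+1
        result.append(str(s % 2))           # sum bit
        carry = s // 2                      # carry bit
    if carry:
        result.append('1')
    return ''.join(reversed(result))

print(add_binary('0111', '0101'))  # -> 1100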
Subtraction ▶ Borrow rules: ▶ 0−0 = 1−1 = 0, no borrow ▶ 1−0 = 1, no borrow ▶ 0−1 = 1, borrow 1 ▶ The rules of the decimal base apply to binary as well: to compute 0−1, we have to “borrow one” from the next digit to the left
Binary Subtraction ▶ 1's Complement Method ▶ 2's Complement Method
1's Complement Method: if a carry is produced, the result is positive and the carry is added to the partial result; if no carry is produced, the result is negative and its magnitude is the 1's complement of the partial result.
Example: 1010100 − 1000100. The 1's complement of 1000100 is 0111011; 1010100 + 0111011 = 1 0001111. Carry, so the result is positive: 0001111 + 1 = 0010000.
Example: 1000100 − 1010100. The 1's complement of 1010100 is 0101011; 1000100 + 0101011 = 1101111. No carry, so the result is negative: magnitude = 1's complement of 1101111 = 0010000, i.e. the result is −0010000.
Binary Subtraction: 2's Complement Method
2's Complement Method: if a carry is produced, the result is positive and the carry is discarded; if no carry is produced, the result is negative and its magnitude is the 2's complement of the partial result.
Example: 1010100 − 1000100. The 2's complement of 1000100 is 0111100; 1010100 + 0111100 = 1 0010000. Carry, so the result is positive: discard the carry, giving 0010000.
Example: 1000100 − 1010100. The 2's complement of 1010100 is 0101100; 1000100 + 0101100 = 1110000. No carry, so the result is negative: magnitude = 2's complement of 1110000 = 0010000, i.e. the result is −0010000.
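A hedged Python sketch of both complement methods for the 7-bit examples above (the bit width, function names and output formatting are assumptions made for illustration):

# Subtraction by complement addition for fixed-width unsigned operands,
# following the carry rules stated on the slides.
WIDTH = 7

def ones_complement_sub(x: int, y: int) -> str:
    comp = (~y) & ((1 << WIDTH) - 1)           # 1's complement of the subtrahend
    carry, partial = divmod(x + comp, 1 << WIDTH)
    if carry:                                  # carry -> positive: add end-around carry
        return format(partial + 1, f'0{WIDTH}b')
    # no carry -> negative: magnitude is the 1's complement of the partial result
    return '-' + format((~partial) & ((1 << WIDTH) - 1), f'0{WIDTH}b')

def twos_complement_sub(x: int, y: int) -> str:
    comp = (-y) & ((1 << WIDTH) - 1)           # 2's complement of the subtrahend
    carry, partial = divmod(x + comp, 1 << WIDTH)
    if carry:                                  # carry -> positive: discard the carry
        return format(partial, f'0{WIDTH}b')
    # no carry -> negative: magnitude is the 2's complement of the result
    return '-' + format((-partial) & ((1 << WIDTH) - 1), f'0{WIDTH}b')

print(ones_complement_sub(0b1010100, 0b1000100))  # 0010000  (+16)
print(twos_complement_sub(0b1000100, 0b1010100))  # -0010000 (-16)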
Signed Binary Numbers ▶ When a signed binary number is positive, the MSB is ‘0’ (the sign bit) and the remaining bits represent the magnitude ▶ When a signed binary number is negative, the MSB is ‘1’ (the sign bit) and the remaining bits may be represented in three different ways: Signed magnitude representation, Signed 1's complement representation, Signed 2's complement representation
Signed Binary Numbers
−9: signed magnitude 1 1001, signed 1's complement 1 0110, signed 2's complement 1 0111
−0: signed magnitude 1 0000, signed 1's complement 1 1111, signed 2's complement: none (2's complement has no −0)
+9: 0 1001 in all three representations
+0: 0 0000 in all three representations
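A small Python sketch that reproduces the table above for a 5-bit word (the bit width and function names are assumptions made for illustration):

BITS = 5
MASK = (1 << BITS) - 1

def signed_magnitude(n: int) -> str:
    sign = 1 << (BITS - 1) if n < 0 else 0
    return format(sign | abs(n), f'0{BITS}b')

def ones_complement(n: int) -> str:
    return format(n if n >= 0 else (~abs(n)) & MASK, f'0{BITS}b')

def twos_complement(n: int) -> str:
    return format(n & MASK, f'0{BITS}b')

for n in (9, -9):
    print(n, signed_magnitude(n), ones_complement(n), twos_complement(n))
# 9  -> 01001 01001 01001
# -9 -> 11001 10110 10111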
Floating Point Number ▶ A floating point number can be represented as m × r^e ▶ m is the mantissa, e is the exponent and r is the radix ▶ The decimal number 6132.789 can be represented as 0.6132789 × 10^4 ▶ The binary number 1001.110 can be represented as 0.1001110 × 2^4, or as 1.001110 × 2^3
Floating Point Arithmetic ▶ Addition/Subtraction: Align the radix points first to make the exponents equal before adding or subtracting. Add or subtract the mantissas. Normalize the result by adjusting the exponent. (A × r^n) ± (B × r^n) = (A ± B) × r^n ▶ Multiplication: (A × r^m) × (B × r^n) = (A × B) × r^(m+n) ▶ Division: (A × r^m) ÷ (B × r^n) = (A ÷ B) × r^(m−n)
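A toy Python sketch of the align-then-operate rule for addition, using a decimal radix for readability (the function name and the normalization range 0.1 ≤ |m| < 1 are illustrative assumptions, not from the slides):

def fp_add(m1, e1, m2, e2, radix=10):
    # Align: shift the mantissa with the smaller exponent right until exponents match.
    while e1 < e2:
        m1, e1 = m1 / radix, e1 + 1
    while e2 < e1:
        m2, e2 = m2 / radix, e2 + 1
    m, e = m1 + m2, e1          # add the mantissas
    while abs(m) >= 1.0:        # normalize the result by adjusting the exponent
        m, e = m / radix, e + 1
    return m, e

print(fp_add(0.5, 3, 0.75, 2))  # approximately (0.575, 3): 0.5e3 + 0.075e3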
Floating Point Standard: The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point representation defined in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). It was developed in response to divergence of representations and portability issues for scientific code, and is now almost universally adopted. Two representations: Single precision (32-bit) and Double precision (64-bit).
IEEE 754 formats: Single precision uses 1 sign bit, 8 exponent bits and 23 fraction bits (32 bits total); Double precision uses 1 sign bit, 11 exponent bits and 52 fraction bits (64 bits total).
Normalize significand: 1.0 ≤ |significand| < 2.0. The significand always has a leading pre-binary-point 1 bit, so there is no need to represent it explicitly (hidden bit); the significand is the fraction with the “1.” restored. Exponent: excess (biased) representation: stored exponent = actual exponent + Bias, which ensures the stored exponent is unsigned. Single precision: Bias = 127. Double precision: Bias = 1023.
▶ In the CPU, a 32-bit floating point number is represented using the IEEE single precision standard format as follows: ▶ S | EXPONENT | MANTISSA ▶ where S is one bit, the EXPONENT is 8 bits, and the MANTISSA is 23 bits. The mantissa represents the leading significant bits in the number. The exponent is used to adjust the position of the binary point (like the "decimal" point) ▶ The mantissa is said to be normalized when it is expressed as a value between 1 and 2, i.e., the mantissa is of the form 1.xxxx.
▶ The leading integer of the binary representation is not stored. Since it is always a 1, it can be easily restored ▶ The "S" bit is used as a sign bit and indicates whether the value represented is positive or negative ▶ 0 for positive, 1 for negative ▶ If a number is smaller than 1, normalizing the mantissa will produce a negative exponent ▶ But 127 is added to all exponents in the floating point representation, allowing all exponents to be represented by a positive number
Single Precision ▶ Example 1. Represent the decimal value 2.5 in 32-bit floating point format. 2.5 = 10.1b ▶ In normalized form, this is 1.01 × 2^1 ▶ The mantissa: M = 01000000000000000000000 (23 bits, without the leading 1) ▶ The exponent: E = 1 + 127 = 128 = 10000000b ▶ The sign: S = 0 (the value stored is positive) ▶ So, 2.5 = 0 | 10000000 | 01000000000000000000000 (sign | exponent | mantissa)
▶ Example 2: Represent the number −0.00010011b in floating point form. ▶ 0.00010011b = 1.0011 × 2^−4 in normalized form ▶ Mantissa: M = 00110000000000000000000 ▶ Exponent: E = −4 + 127 = 123 = 01111011b ▶ S = 1 (as the number is negative) ▶ Result: 1 | 01111011 | 00110000000000000000000 (sign | exponent | mantissa)
Double Precision ▶ Example 3. Represent the decimal value 85.125 in double precision floating point format. ▶ 85.125 = 1010101.001 ▶ In normalized form this is 1.010101001 × 2^6 ▶ The sign bit is 0 as the value is positive ▶ For double precision, biased exponent = 1023 + 6 = 1029 = 10000000101 ▶ Normalized mantissa = 010101001; 0's are appended to complete the 52 bits ▶ The IEEE 754 double precision representation is: 0 | 10000000101 | 0101010010000000000000000000000000000000000000000000 (sign | exponent | mantissa)
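The three examples can be cross-checked in Python with the struct module, which exposes the IEEE 754 bit patterns the host CPU produces (a verification sketch; the helper names are illustrative):

import struct

def float32_bits(x: float) -> str:
    (u,) = struct.unpack('>I', struct.pack('>f', x))
    s = format(u, '032b')
    return f'{s[0]} {s[1:9]} {s[9:]}'      # sign | exponent | mantissa

def float64_bits(x: float) -> str:
    (u,) = struct.unpack('>Q', struct.pack('>d', x))
    s = format(u, '064b')
    return f'{s[0]} {s[1:12]} {s[12:]}'    # sign | exponent | mantissa

print(float32_bits(2.5))          # 0 10000000 01000000000000000000000
print(float32_bits(-0.07421875))  # -0.00010011b: 1 01111011 00110000000000000000000
print(float64_bits(85.125))       # 0 10000000101 0101010010000...0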
Multiplication
Hardware Diagram
Hardware Diagram ▶ Ms, Qs and As hold the signs of the multiplicand, the multiplier and the product (accumulator). The sign of the product is determined from the signs of the multiplicand and multiplier: if they are alike, the sign of the product is positive; if they are unlike, it is negative. So As is set to Ms Ex-OR Qs.
General Multiplication
Booth Multiplication ▶ Booth's multiplication algorithm multiplies two signed binary numbers in 2's complement notation. ▶ The algorithm was invented by Andrew Donald Booth in 1950. ▶ It is used to speed up the multiplication process and is very efficient. ▶ A string of 0's or a string of 1's in the multiplier requires no operation, only a shift. ▶ Consider a general multiplier consisting of a block of 1s surrounded by 0s, for example 00111110. The product is given by: M × 00111110 = M × (2^5 + 2^4 + 2^3 + 2^2 + 2^1) = M × 62, where M is the multiplicand ▶ The number of operations can be reduced to two by rewriting this as M × 00111110 = M × (2^6 − 2^1) = M × 62 ▶ This is Booth multiplication.
Example ▶ Let the multiplication be M × (+14). In signed 2's complement representation, +14 = 0000 1110, so M × 0000 1110 = M × (2^4 − 2^1) = M × (16 − 2) = M × (+14) ▶ Let the multiplication be M × (−14). In signed 2's complement representation, −14 = 1111 0010, so M × 1111 0010 = M × (−2^4 + 2^2 − 2^1) = M × (−16 + 4 − 2) = M × (−14)
Algorithm ▶ As in all multiplication schemes, the Booth algorithm requires examination of the multiplier bits from LSB to MSB and shifting of the partial product. ▶ Prior to the shifting, the multiplicand may be added to the partial product, subtracted from the partial product, or left unchanged according to the following rules: The multiplicand is subtracted from the partial product upon encountering the first least significant 1 in a string of 1's in the multiplier. The multiplicand is added to the partial product upon encountering the first 0 (provided that there is a previous '1') in a string of 0's in the multiplier. The partial product does not change when the multiplier bit is identical to the previous multiplier bit, i.e. within a string of 0s or a string of 1s.
Arithmetic Shift Right ▶ In the Booth multiplication algorithm, shift right means arithmetic shift right (the sign bit is replicated). Example: ▶ Let the number be 1001 ▶ A logical shift right gives 0100 ▶ But an arithmetic shift right gives 1100 ▶ Let the number be 0101 ▶ An arithmetic shift right of this number is 0010
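A compact Python sketch of the Booth algorithm with arithmetic shift right (the register names AC, QR, Q_1 follow the usual textbook convention; the function name and default width are assumptions):

def booth_multiply(multiplicand: int, multiplier: int, bits: int = 8) -> int:
    mask = (1 << bits) - 1
    M = multiplicand & mask
    AC, QR, Q_1 = 0, multiplier & mask, 0
    for _ in range(bits):
        pair = (QR & 1, Q_1)
        if pair == (1, 0):        # first 1 of a string of 1s: AC <- AC - M
            AC = (AC - M) & mask
        elif pair == (0, 1):      # first 0 after a string of 1s: AC <- AC + M
            AC = (AC + M) & mask
        # Arithmetic shift right of the combined AC, QR, Q_1.
        Q_1 = QR & 1
        QR = ((QR >> 1) | ((AC & 1) << (bits - 1))) & mask
        AC = (AC >> 1) | (AC & (1 << (bits - 1)))   # replicate the sign bit
    product = (AC << bits) | QR
    if product & (1 << (2 * bits - 1)):             # interpret as signed 2's complement
        product -= 1 << (2 * bits)
    return product

print(booth_multiply(7, -14))   # -98
print(booth_multiply(-7, -14))  # 98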
Processor organization: Classification of Processors. Categorized by memory organization: Von Neumann architecture, Harvard architecture. Categorized by instruction type: CISC, RISC, VLIW.
Von Neumann Model: In 1946, John von Neumann and his colleagues began the design of a new stored-program computer referred to as the IAS (Institute for Advanced Study) computer, which stores program and data in the same memory. It was designed to overcome the limitation of the earlier ENIAC computer: the task of entering and altering programs for the ENIAC was extremely tedious.
Structure of IAS computer
Structure of IAS computer: The IAS consists of a main memory, which stores both data and instructions; an ALU capable of operating on binary data; a control unit, which interprets the instructions in memory and causes them to be executed; and I/O equipment operated by the control unit.
IAS Memory Formats: The memory of the IAS consists of 1000 storage locations, called words, of 40 binary digits (bits) each. Both data and instructions are stored there. A number is represented by a sign bit followed by a 39-bit value. A word may also contain two 20-bit instructions (a left instruction and a right instruction), each consisting of an 8-bit operation code (opcode) specifying the operation to be performed and a 12-bit address designating one of the words in memory.
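A short Python sketch of unpacking one 40-bit IAS word into its two 20-bit instructions, using the field widths from the slide (the function name and the sample word are invented for illustration):

def decode_ias_word(word: int):
    left, right = (word >> 20) & 0xFFFFF, word & 0xFFFFF
    def split(instr):
        return (instr >> 12) & 0xFF, instr & 0xFFF   # 8-bit opcode, 12-bit address
    return split(left), split(right)

# Made-up word: left = opcode 0x01, address 0x123; right = opcode 0x02, address 0x456.
word = (0x01 << 32) | (0x123 << 20) | (0x02 << 12) | 0x456
print(decode_ias_word(word))   # ((1, 291), (2, 1110))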
Harvard Architecture: Physically separate storage and signal pathways for instructions and data. Originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters. In some systems, instructions can be stored in read-only memory while data memory generally requires read-write memory. In some systems, there is much more instruction memory than data memory. Used in MCS-51, MIPS, etc.
Harvard Architecture
Register Organization: The CPU must have some working space (temporary storage) called registers. A computer system employs a memory hierarchy; at the highest level of the hierarchy, memory is faster, smaller and more expensive. Within the CPU there is a set of registers, which can be treated as the memory at the highest level of the hierarchy.
Register Organization: The registers in the CPU can be categorized into two groups. User-visible registers: these enable the machine- or assembly-language programmer to minimize main memory references by optimizing the use of registers. Control and status registers: these are used by the control unit to control the operation of the CPU. Operating system programs may also use these in privileged mode to control the execution of programs.
User-visible registers: General Purpose, Data, Address, Condition Codes
1. General Purpose Registers: Used for a variety of functions by the programmer. Sometimes used for holding operands (data) of an instruction. Sometimes used for addressing functions (e.g., indirect, displacement). 2. Data registers: Used to hold only data; cannot be employed in the calculation of an operand address.
3. Address registers: Used exclusively for the purpose of addressing. Examples include the following: Segment pointer: in a machine with segmented addressing, a segment register holds the address of the base of the segment; there may be multiple such registers, e.g. one for the code segment and one for the data segment. Index register: used for indexed addressing and may be auto-indexed. Stack pointer: a dedicated register that points to the top of the stack; it is auto-incremented or auto-decremented by PUSH and POP operations.
4. Condition Code Registers: Sets of individual bits, e.g. a bit recording that the result of the last operation was zero. Can be read (implicitly) by programs, e.g. jump if zero. Cannot (usually) be set by programs.
Control and status registers: Four registers are essential to instruction execution: Program Counter (PC): Contains the address of an instruction to be fetched. Instruction Register (IR): Contains the instruction most recently fetched. Memory Address Register (MAR): Contains the address of a location in main memory from which information is to be fetched or to which information is to be stored. Memory Buffer Register (MBR): Contains a word of data to be written to memory or the word most recently read.
Control and status registers Program Status Word (PSW) Condition code bits are collected into one or more registers, known as the program status word (PSW), that contains status information. Common fields or flags include the following: Sign: Contains the sign bit of the result of the last arithmetic operation. Zero: Set when the result is zero. Carry: Set if an operation resulted in a carry (addition) into or borrow (subtraction) out of a high order bit. Equal: Set if a logical compare result is equal. Overflow: Used to indicate arithmetic overflow. Interrupt enable/disable: Used to enable or disable interrupts.
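A hedged Python sketch of how the Sign, Zero, Carry and Overflow flags of a PSW could be updated after an 8-bit addition (the function is illustrative and does not describe any particular CPU):

def add_and_set_flags(a: int, b: int, bits: int = 8):
    mask = (1 << bits) - 1
    result = (a + b) & mask
    flags = {
        'Sign': bool(result & (1 << (bits - 1))),
        'Zero': result == 0,
        'Carry': (a & mask) + (b & mask) > mask,   # carry out of the high-order bit
        # Signed overflow: both operands share a sign and the result's sign differs.
        'Overflow': ((a ^ result) & (b ^ result) & (1 << (bits - 1))) != 0,
    }
    return result, flags

print(add_and_set_flags(100, 30))  # 130: Sign and Overflow set (100 + 30 > 127)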
INSTRUCTION FORMAT: The operations of the computer system are determined by the instructions executed by the central processing unit. These instructions are known as machine instructions and are in the form of binary codes. Each instruction of the CPU has specific information fields that are required to execute it. These information fields of instructions are called the elements of an instruction.
Elements of Instruction Operation Code: Binary code that specifies which operation to be performed. Source operand address: Specifies one or more source operands Destination operand address: The operation executed by the CPU may produce result which is stored in the destination address. Next instruction address: Tells the CPU from where to fetch the next instruction after completion of execution of current instruction.
Representation of Instruction: Opcode | Operand address 1 | Operand address 2
Instruction Types According to Number of Addresses
Three Address Instruction
Two Address Instruction
One Address Instruction
Zero Address Instruction: The locations of the operands are defined implicitly. For implicit reference, a processor register is used, termed the accumulator (AC). E.g. CMA // complements the content of the accumulator, i.e. AC ← complement of AC
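Zero-address arithmetic instructions are commonly associated with a stack machine. A tiny Python sketch (the PUSH/ADD/MUL/POP mini instruction set is invented for illustration) evaluates X = (A + B) * (C + D) with zero-address arithmetic only:

def run_stack_machine(program, memory):
    stack = []
    for op, *arg in program:
        if op == 'PUSH':
            stack.append(memory[arg[0]])       # push operand from memory
        elif op == 'POP':
            memory[arg[0]] = stack.pop()       # store top of stack to memory
        elif op == 'ADD':
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == 'MUL':
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
    return memory

memory = {'A': 2, 'B': 3, 'C': 4, 'D': 5, 'X': 0}
program = [('PUSH', 'A'), ('PUSH', 'B'), ('ADD',),
           ('PUSH', 'C'), ('PUSH', 'D'), ('ADD',),
           ('MUL',), ('POP', 'X')]
print(run_stack_machine(program, memory)['X'])  # (2+3)*(4+5) = 45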
Instruction Format Design Issues: An instruction consists of an opcode and one or more operands, specified implicitly or explicitly. Each explicit operand is referenced using one of the addressing modes available on that machine. An instruction format defines the layout of the bits allocated to these elements of the instruction. Some of the issues affecting instruction design are: Instruction length; Allocation of bits to the different fields of an instruction.
1. Instruction Length: A longer instruction means more time spent fetching the instruction. For example, an instruction of length 32 bits on a machine with a word size of 16 bits will need two memory fetches to bring in the instruction. The programmer desires: more opcodes and operands in an instruction, as this reduces program length; more addressing modes, for greater flexibility in accessing various types of data.
Factors for deciding the instruction length: Memory size: more bits are required in the address field to access a larger memory range. Memory organization: if the system supports virtual memory, the addressable range is larger than the physical memory, hence more addressing bits are required. Bus structure: the instruction length should be equal to the data bus width or a multiple of it. Processor speed: the data transfer rate from memory should match the processor speed.
2. Allocation of Bits: More opcodes obviously mean more bits in the opcode field. Factors considered when allocating addressing bits are: Number of addressing modes: more addressing modes need more bits. Number of operands: more operands need more bits. Register versus memory: if more registers can be used for operand references, fewer bits are needed, since the number of registers is far smaller than the memory size.
Number of register sets: Assume a machine has 16 general-purpose registers; a register address then requires 4 bits. However, if these 16 registers are divided into two groups, one of the 8 registers in a group needs only 3 bits for register addressing. Address range: the range of addresses that can be referenced is related to the number of address bits; with displacement addressing, the range is opened up to the length of the address register. Address granularity: in a system with 16- or 32-bit words, an address can reference a word or a byte at the designer's choice.
Addressing Modes: The term addressing mode refers to the mechanism employed for specifying operands. An operand can be specified as part of the instruction, or a reference to a memory location can be given; an operand could also be held in a CPU register. The most common addressing techniques are: Immediate, Direct, Indirect, Register, Register Indirect, Displacement, Stack.
Addressing Modes: To explain the addressing modes, we use the following notation: A = contents of an address field in the instruction that refers to memory; R = contents of an address field in the instruction that refers to a register; EA = actual (effective) address of the location containing the referenced operand; (X) = contents of memory location X or register X.
1. Immediate Addressing: The operand is actually present in the instruction: OPERAND = A. This mode can be used to define and use constants or to set initial values of variables. The advantage of immediate addressing is that no memory reference other than the instruction fetch is required to obtain the operand. E.g. MOVE R0, 300
2. Direct Addressing: The address field contains the effective address of the operand: EA = A. It requires only one memory reference and no special calculation. Here A indicates the memory address field of the operand. E.g. MOVE R1, 1001
3. Indirect Addressing: The effective address of the operand is stored in memory; the instruction contains the address of the memory location that holds the address of the data. This is known as indirect addressing: EA = (A). Here A indicates the memory address field of the instruction. E.g. MOVE R0, (1000)
4. Register Addressing: The instruction specifies the register containing the operand; that is, the instruction contains the name of a CPU register. EA = R, where R indicates the register in which the operand is present. E.g. MOVE R1, 1010
5. Register Indirect Addressing: The effective address of the operand is stored in a register; the instruction contains the address of the register that holds the address of the data: EA = (R). Here R indicates the register holding the memory address of the required operand. E.g. MOVE R0, (R1)
6. Displacement Addressing A combination of both direct addressing and register indirect addressing modes. EA = A + (R) The value contained in one address field (value = A) is used directly. The other address field refers to a register whose contents are added to A to produce the effective address.
6. Displacement Addressing: Three of the most common uses of displacement addressing are: Relative addressing, Base-register addressing, Indexing.
Relative addressing: For relative addressing, the implicitly referenced register is the program counter (PC). The current instruction address is added to the address field to produce the EA. Thus, the effective address is a displacement relative to the address of the instruction. E.g. 1001 JC X1 ... 1050 X1: ADD R1, 5. The displacement stored for X1 = address of the target instruction − address of the current instruction = 1050 − 1001 = 49.
Base Register Addressing The base register(reference register) contains a memory address , and the address field contains a displacement from that base address specified by the base register. EA=A+(B)
Indexing or Indexed Addressing: Used to access the elements of an array that are stored in consecutive locations of memory. EA = A + (R). Address field A gives the main memory (starting) address and R contains a positive displacement (offset, or index) with respect to that starting address. The displacement can be specified either directly in the instruction or through another register. E.g. MOVE R1, (BR+5); MOVE R0, (BR+R1)
Auto Indexing: Index registers are generally used for iterative tasks, so it is typical that the index register needs to be incremented or decremented after each reference to it. Because this is such a common operation, some systems automatically do this as part of the same instruction cycle. This is known as auto-indexing. There are two types of auto-indexing: auto-incrementing and auto-decrementing.
a. Auto Increment Mode: Register R contains the address of the operand. After accessing the operand, the contents of register R are incremented to point to the next item in the list. Auto-indexing using increment can be depicted as: EA = (R), then (R) ← (R) + 1. E.g. MOVE R1, 1010 /* starting memory location 1010 is stored in R1 */ ADD AC, (R1)+ /* the contents of memory location 1010 are added to AC and then the contents of R1 are incremented by 1 */
b. Auto Decrement Mode: The contents of the register specified in the instruction are first decremented, and the decremented contents are then used as the effective address of the operand. Auto-indexing using decrement can be depicted as: (R) ← (R) − 1, then EA = (R). E.g. ADD R1, −(R2)
7. Stack Addressing: A stack is a linear array or list of locations, sometimes referred to as a pushdown list or last-in-first-out queue. Associated with the stack is a pointer whose value is the address of the top of the stack; the stack pointer is maintained in a register. Thus, references to stack locations in memory are in fact register indirect addresses. The stack mode of addressing is a form of implied addressing. E.g. PUSH and POP
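A brief Python sketch of effective-address calculation for several of the modes above, using a toy memory and register file (all names, addresses and contents are invented for illustration):

memory = {1000: 1500, 1500: 42, 2000: 7}
registers = {'R1': 2000, 'PC': 1001, 'BR': 1995}

def effective_address(mode, A=None, R=None):
    if mode == 'direct':            return A                 # EA = A
    if mode == 'indirect':          return memory[A]         # EA = (A)
    if mode == 'register_indirect': return registers[R]      # EA = (R)
    if mode == 'displacement':      return A + registers[R]  # EA = A + (R)
    raise ValueError(mode)

print(effective_address('direct', A=1000))             # 1000
print(effective_address('indirect', A=1000))           # 1500
print(effective_address('register_indirect', R='R1'))  # 2000
print(effective_address('displacement', A=5, R='BR'))  # 2000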
Basic Instruction Cycle: The fetch cycle basically involves reading the next instruction from memory into the CPU and, along with that, updating the contents of the program counter. In the execution phase, the CPU interprets the opcode and performs the indicated operation. The instruction fetch and execution phases together are known as the instruction cycle.
Basic Instruction Cycle with Interrupt An instruction cycle includes the following sub cycles: Fetch: Read the next instruction from memory into the processor. Execute: Interpret the opcode and perform the indicated operation. Interrupt: If interrupts are enabled and an interrupt has occurred, save the current process state and service the interrupt.
Basic Instruction Cycle with Interrupt
The Indirect Cycle: The execution of an instruction may involve one or more operands in memory, each of which requires a memory access. Further, if indirect addressing is used, then additional memory accesses are required. Fetching these indirect addresses can be viewed as one more instruction subcycle. After an instruction is fetched, it is examined to determine whether any indirect addressing is involved; if so, the required operands are fetched using indirect addressing.
The Indirect Cycle
Instruction Cycle State Diagram
Instruction address calculation (iac): Determine the address of the next instruction to be executed. Usually, this involves adding a fixed number to the address of the previous instruction. Instruction fetch (if): Read instruction from its memory location into the processor. Instruction operation decoding (iod): Analyze instruction to determine type of operation to be performed and operand(s) to be used. Operand address calculation (oac): If the operation involves reference to an operand in memory or available via I/O, then determine the address of the operand. Operand fetch (of): Fetch the operand from memory or read it in from I/O. Data operation (do): Perform the operation indicated in the instruction. Operand store (os): Write the result into memory or out to I/O.
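A minimal Python sketch of the fetch-decode-execute loop implied by these states, for a toy one-address accumulator machine (the instruction set and memory contents are invented for illustration):

memory = {0: ('LOAD', 10), 1: ('ADD', 11), 2: ('STORE', 12), 3: ('HALT', None),
          10: 5, 11: 7, 12: 0}
PC, AC = 0, 0

while True:
    IR = memory[PC]         # instruction fetch (if)
    PC += 1                 # instruction address calculation (iac)
    opcode, addr = IR       # instruction operation decoding (iod)
    if opcode == 'HALT':
        break
    if opcode == 'LOAD':
        AC = memory[addr]           # operand fetch (of)
    elif opcode == 'ADD':
        AC = AC + memory[addr]      # operand fetch (of) + data operation (do)
    elif opcode == 'STORE':
        memory[addr] = AC           # operand store (os)

print(memory[12])  # 12 (= 5 + 7)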