Parallel Processing
Prepared by Ms. Sheethal Aji Mani, Assistant Professor, Kristu Jayanti College
Parallel processing is an efficient form of information processing which emphasizes the exploitation of concurrent events in the computing process. Parallel processing demands the concurrent execution of many programs in the computer. Concurrency implies parallelism, simultaneity, and pipelining.
Introduction to Parallelism in Uniprocessor Systems
From an operating point of view, computer systems have improved chronologically in four phases: batch processing, multiprogramming, time sharing, and multiprocessing. Across these four operating modes, the degree of parallelism increases sharply from phase to phase.
The highest level of parallel processing is conducted among multiple jobs or programs through multiprogramming, time sharing, and multiprocessing. Parallel processing can be pursued at four programmatic levels:
Job or program level: the highest level, conducted among multiple jobs or programs.
Task or procedure level: the next level, conducted among procedures or tasks within the same program.
Interinstruction level: the third level, exploiting concurrency among multiple instructions.
Intrainstruction level: finally, concurrent operations within each instruction.
Basic Uniprocessor Architecture
A typical uniprocessor computer consists of three major components: the main memory, the CPU (Central Processing Unit), and the I/O (Input-Output) subsystem. Two architectures of commercially available uniprocessor computers illustrate the relation between these three subsystems.
System Architecture of the supermini VAX-11/780 uniprocessor system
There are sixteen 32-bit general-purpose registers, one of which serves as the Program Counter (PC). There is also a special CPU status register containing information about the current state of the process being executed. The CPU contains an ALU with an optional floating-point accelerator, and some local cache memory with an optional diagnostic memory. Floating-point accelerator (FPA): a device that improves the overall performance of a computer by removing the burden of performing floating-point arithmetic from the central processor. Optional diagnostic memory: used to check for errors.
The operator can intervene in the CPU through the console, which is connected to a floppy disk. The CPU, the main memory (2^32 words of 32 bits each), and the I/O subsystem are all connected to a common bus, the synchronous backplane interconnect (SBI). Through this bus, all I/O devices can communicate with each other, with the CPU, or with the memory. I/O devices can be connected to the SBI through the Unibus and its controller or through a Massbus and its controller.
System Architecture of the mainframe IBM System 370/Model 168 uniprocessor computer
The CPU contains the instruction decoding and execution units as well as a cache. Main memory is divided into four units, referred to as logical storage units (LSUs), which are four-way interleaved. The storage controller provides multiport connections between the CPU and the four LSUs. Peripherals are connected to the system via high-speed I/O channels, which operate asynchronously with the CPU.
Parallelism can be promoted by hardware means and by software means.
PARALLELISM IN UNIPROCESSOR SYSTEMS
A number of parallel processing mechanisms have been developed in uniprocessor computers. We identify them in the following six categories:
1. Multiplicity of functional units
2. Parallelism and pipelining within the CPU
3. Overlapped CPU and I/O operations
4. Use of a hierarchical memory system
5. Balancing of subsystem bandwidths
6. Multiprogramming and time sharing
1. Multiplicity of functional units
Early computers had one ALU that could perform only one operation at a time, which made processing slow. Later machines use multiple, specialized functional units that operate in parallel, for example two parallel execution units for fixed-point and floating-point arithmetic; the CDC-6600 has 10 functional units.
2. Parallelism and pipelining within the CPU
Instead of serial bit adders, parallel adders are used in almost all ALUs. High-speed multiplier recoding and convergence division are used, with hardware resources shared between the multiply and divide functions. The various phases of instruction execution (instruction fetch, decode, operand fetch, arithmetic/logic execution, store result) are pipelined. For overlapped instruction execution, instruction prefetch and buffering techniques have been developed.
3. Overlapped CPU and I/O operations
I/O operations can be performed simultaneously with CPU computations by using separate I/O controllers, channels, or I/O processors. A DMA (direct memory access) channel can be used for direct transfer of data between main memory and I/O devices.
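As a rough software-level analogy for this overlap (not the hardware DMA mechanism itself), the following Python sketch runs a stand-in I/O transfer on a separate thread while the main thread keeps computing; the function names, block counts, and timings are made up for illustration.

```python
import threading
import time

def io_transfer(n_blocks):
    """Stand-in for an I/O transfer handled by a channel or DMA engine."""
    for _ in range(n_blocks):
        time.sleep(0.01)  # pretend each block takes 10 ms to move

def cpu_compute(n):
    """Stand-in for CPU-bound computation that proceeds during the transfer."""
    total = 0
    for i in range(n):
        total += i * i
    return total

# Start the "I/O" in the background, then compute while it runs.
io_thread = threading.Thread(target=io_transfer, args=(100,))
start = time.time()
io_thread.start()
result = cpu_compute(1_000_000)
io_thread.join()
print(f"compute result {result}, overlapped elapsed {time.time() - start:.2f}s")
```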
4. Use of a hierarchical memory system
The CPU is roughly 1000 times faster than main-memory access. A hierarchical memory system can be used to close this speed gap; in particular, cache memory serves as a buffer between the CPU and main memory.
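To make the buffering effect concrete, here is a minimal sketch of the standard effective-access-time estimate; the hit ratio and latency numbers are illustrative assumptions, not figures from the slides.

```python
def effective_access_time(hit_ratio, t_cache, t_main):
    """Average memory access time when a cache sits between CPU and main memory."""
    return hit_ratio * t_cache + (1 - hit_ratio) * t_main

# Illustrative numbers: 10 ns cache, 1000 ns main memory, 95% of accesses hit the cache.
print(effective_access_time(0.95, 10, 1000))  # -> 59.5 ns, far closer to cache speed
```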
5. Balancing of subsystem bandwidths
The CPU is the fastest unit in the computer. The bandwidth of a system is defined as the number of operations performed per unit time. In the case of main memory, the memory bandwidth is measured by the number of words that can be accessed per unit time. The bandwidths of the CPU, memory, and I/O subsystems must therefore be balanced against one another.
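As a small worked illustration of these bandwidth definitions, the sketch below compares an interleaved memory's delivery rate with a CPU's demand; the cycle times and module count are assumed for the example, not taken from the slides.

```python
# Memory bandwidth: words delivered per second by an interleaved main memory.
memory_cycle_s = 500e-9      # assumed 500 ns cycle per memory module
modules = 4                  # assumed four-way interleaving
memory_bw_words = modules / memory_cycle_s   # ideal words per second

# CPU bandwidth: words consumed per second.
cpu_cycle_s = 80e-9          # assumed 80 ns processor cycle, one word per cycle
cpu_bw_words = 1 / cpu_cycle_s

print(f"memory bandwidth ~ {memory_bw_words:.2e} words/s")
print(f"CPU demand       ~ {cpu_bw_words:.2e} words/s")
# Balancing means keeping these figures comparable, e.g. via interleaving and caches.
```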
Bandwidth Balancing Between CPU and Memory: The speed gap between the CPU and the main memory can be closed by using a fast cache memory between them. A block of memory words is moved from main memory into the cache so that, most of the time, the instructions needed next are available from the cache.
Bandwidth Balancing Between Memory and I/O Devices: Input-output channels with different speeds can be used between the slow I/O devices and the main memory. The I/O channels perform buffering and multiplexing functions to transfer data from multiple disks into the main memory by stealing cycles from the CPU.
Parallel Computer Structures
A parallel computer is a computer structure in which a large, complex problem is broken into multiple small problems that are then executed simultaneously by several processors; we often refer to this as parallel processing. Parallel computers are divided into three structural classes:
Pipeline Computers
Array Processors
Multiprocessor Systems
Pipeline Computer
Pipeline computers achieve parallelism by overlapping the execution phases of successive instructions. If we consider executing a small program consisting of a sequence of instructions, the execution of the entire program can be broken into four steps that are repeated until the last instruction has been executed: instruction fetch (IF), instruction decode (ID), operand fetch (OF), and instruction execution (IE). Each instruction is fetched from main memory, decoded by the processor, the operands required for its execution are fetched from main memory, and finally the instruction is executed.
The figure below shows that, without pipelining, executing three instructions completely takes 12 seconds (four one-second steps per instruction).
If we implement pipelining, the instructions are executed in an overlapped fashion; as the figure shows, four instructions can then be executed in 7 seconds. A pipeline computer synchronizes the operations of all stages under a common clock. Hence, executing instructions in a pipelined fashion is more efficient.
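A minimal sketch of this timing argument, assuming one clock period per stage and the four IF/ID/OF/IE steps described above:

```python
def sequential_time(n_instructions, n_stages=4, period=1):
    """Total time when each instruction runs all stages before the next starts."""
    return n_instructions * n_stages * period

def pipelined_time(n_instructions, n_stages=4, period=1):
    """Total time when stages overlap: fill the pipe once, then one result per period."""
    return (n_stages + (n_instructions - 1)) * period

print(sequential_time(3))   # 12, matching the non-pipelined figure
print(pipelined_time(4))    # 7, matching the pipelined figure
```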
Array Processors
Array processors leverage parallel computing by implementing multiple arithmetic logic units, i.e., processing elements, that operate in a synchronized way. The processing elements work in a parallel fashion. If we replicate the ALUs and all ALUs work in parallel, we have achieved spatial parallelism. Array processors are also capable of processing array elements.
In the figure above, we can see multiple ALUs connected in parallel to the control unit through a data-routing network. Each ALU in the system, i.e., each processing element (PE), consists of a processor (P) and a local memory (M). The pattern in which the processing elements are interconnected depends on the specific computation to be performed by the control unit.
Scalar instructions are executed directly in the control unit, whereas vector instructions are broadcast to the parallel-connected processing elements. The operands are fetched directly from the local memories. Instruction fetch and decode are done by the control unit, and in this way vector instructions are executed in a distributed manner.
However, different array processors may use different kinds of interconnection networks to connect the processing elements. Array processors are somewhat more complex than pipelined processors.
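A toy sketch of this broadcast model, assuming one processing element per data element and a hypothetical "add scalar" vector instruction; the class and function names are illustrative, not from the slides.

```python
class ProcessingElement:
    """One PE: a processor paired with its own local memory cell."""
    def __init__(self, value):
        self.local_memory = value

    def execute(self, op, operand):
        # Each PE applies the broadcast operation to its local operand.
        self.local_memory = op(self.local_memory, operand)

def broadcast(pes, op, operand):
    """Control unit broadcasts one vector instruction to every PE."""
    for pe in pes:            # conceptually these all happen in the same step
        pe.execute(op, operand)

pes = [ProcessingElement(v) for v in [1, 2, 3, 4]]
broadcast(pes, lambda x, s: x + s, 10)   # "add scalar 10" vector instruction
print([pe.local_memory for pe in pes])   # -> [11, 12, 13, 14]
```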
Multiprocessor Systems
A multiprocessor system supports parallel computing by using a set of interacting processors that share resources. In a multiprocessor system there are multiple processors, all of which have access to a common set of memory modules, peripheral devices, and other input-output devices. However, the entire system is controlled by a single operating system, and it is the responsibility of the operating system to provide interaction between the multiple processors. Even though the processors share memories, peripheral devices, and I/O, each processor in the system also has a local memory and possibly some private devices.
Communication between the processors is achieved through shared memories or through an interrupt network. The interconnection between the shared memories, I/O devices, and multiple processors in the system can be organized in three different ways: time-shared common bus, crossbar switch network, and multiport memories. Using a multiprocessor system improves the throughput, flexibility, availability, and reliability of the system.
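As a software-level analogy for processors communicating through shared memory, here is a minimal Python multiprocessing sketch; the worker function and counts are made up for illustration.

```python
from multiprocessing import Process, Value, Lock

def worker(shared_total, lock, n):
    """Each 'processor' adds its partial result into a shared memory cell."""
    partial = sum(range(n))
    with lock:                     # coordinate access to the shared resource
        shared_total.value += partial

if __name__ == "__main__":
    total = Value("q", 0)          # a 64-bit integer in shared memory
    lock = Lock()
    procs = [Process(target=worker, args=(total, lock, 1000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(total.value)             # 4 * sum(0..999) = 1_998_000
```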
UNIT 5 ARCHITECTURAL CLASSIFICATION SCHEMES
Flynn's Classification (Taxonomy)
Flynn's taxonomy distinguishes multiprocessor computer architectures according to how they can be classified along two independent dimensions: the instruction stream and the data stream. Each of these dimensions can have only one of two possible states, single or multiple. This gives the four possible classifications according to Flynn:
SISD: Single Instruction, Single Data
SIMD: Single Instruction, Multiple Data
MISD: Multiple Instruction, Single Data
MIMD: Multiple Instruction, Multiple Data
SISD (Single Instruction, Single Data): A serial (non-parallel) computer Single Instruction: Only one instruction stream is being acted on by the CPU during any one clock cycle Single Data: Only one data stream is being used as input during any one clock cycle Deterministic execution This is the oldest type of computer Examples: older generation mainframes, minicomputers, workstations and single processor/core PCs.
Single Instruction, Multiple Data (SIMD): A type of parallel computer Single Instruction: All processing units execute the same instruction at any given clock cycle Multiple Data: Each processing unit can operate on a different data element Best suited for specialized problems characterized by a high degree of regularity, such as graphics/image processing.
Synchronous (lockstep) and deterministic execution Two varieties: Processor Arrays and Vector Pipelines Examples: Processor Arrays: Thinking Machines CM-2, MasPar MP-1 & MP-2, ILLIAC IV Vector Pipelines: IBM 9000, Cray X-MP, Y-MP & C90, Fujitsu VP, NEC SX-2, Hitachi S820, ETA10 Most modern computers, particularly those with graphics processor units (GPUs) employ SIMD instructions and execution units.
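To illustrate the SIMD style at the programming level, the sketch below uses NumPy's elementwise operations as a stand-in for lockstep execution; on most machines NumPy maps such operations onto the SIMD execution units mentioned above, though the exact mapping depends on the build and hardware.

```python
import numpy as np

a = np.arange(8, dtype=np.float32)      # one data element per "processing unit"
b = np.full(8, 2.0, dtype=np.float32)

# A single conceptual instruction ("multiply") applied to all elements at once,
# in contrast to a scalar loop that would handle one element per step.
c = a * b
print(c)   # [ 0.  2.  4.  6.  8. 10. 12. 14.]
```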
Multiple Instruction, Multiple Data (MIMD): A type of parallel computer Multiple Instruction: Every processor may be executing a different instruction stream Multiple Data: Every processor may be working with a different data stream Currently, the most common type of parallel computer - most modern supercomputers fall into this category. Examples: most current supercomputers, networked parallel computer clusters and "grids", multi-processor SMP computers, multi-core PCs. Note: many MIMD architectures also include SIMD execution sub-components
Multiple Instruction, Single Data (MISD): A type of parallel computer Multiple Instruction: Each processing unit operates on the data independently via separate instruction streams. Single Data: A single data stream is fed into multiple processing units. Few (if any) actual examples of this class of parallel computer have ever existed. Some conceivable uses might be: multiple frequency filters operating on a single signal stream multiple cryptography algorithms attempting to crack a single coded message.
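A toy sketch of the single-data-stream idea, using two simple, made-up "filters" applied independently to the same stream (the signal values and filter functions are illustrative only):

```python
# One data stream, several independent "instruction streams" (here, filter functions).
signal = [0.0, 1.0, 0.5, -0.5, -1.0, 0.0]

def moving_average(xs):
    """Average each sample with its predecessor."""
    return [(xs[i] + xs[i - 1]) / 2 for i in range(1, len(xs))]

def rectify(xs):
    """Take the absolute value of each sample."""
    return [abs(x) for x in xs]

# Each "processing unit" runs its own program over the same single data stream.
results = {f.__name__: f(signal) for f in (moving_average, rectify)}
print(results)
```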
Serial vs Parallel Processing
The degree of parallelism is another criterion for computer architecture classification. The maximum number of binary digits (bits) that can be processed within a unit time by a computer system is called its maximum parallelism degree P. A bit slice is a string of bits, one taken from each of the words at the same vertical (bit) position.
Under this classification, computers fall into four classes:
Word Serial and Bit Serial (WSBS)
Word Parallel and Bit Serial (WPBS)
Word Serial and Bit Parallel (WSBP)
Word Parallel and Bit Parallel (WPBP)
WSBS has been called bit-serial processing because one bit is processed at a time. WPBS has been called bit-slice processing because an m-bit slice is processed at a time. WSBP is found in most existing computers and has been called word-slice processing because one word of n bits is processed at a time. WPBP is known as fully parallel processing, in which an array of n x m bits is processed at one time.
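A small sketch of the parallelism degrees implied by these definitions, for a machine with word length n and m words processed together; the particular values of n and m below are illustrative.

```python
def bits_per_step(mode, n, m):
    """Bits processed per time step under the word/bit serial-parallel classes above."""
    return {
        "WSBS": 1,        # one bit of one word
        "WPBS": m,        # one bit slice across m words
        "WSBP": n,        # one whole n-bit word
        "WPBP": n * m,    # a full n x m bit array
    }[mode]

n, m = 32, 16   # illustrative word length and word count
for mode in ("WSBS", "WPBS", "WSBP", "WPBP"):
    print(mode, bits_per_step(mode, n, m))
```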