Parallelism

About This Presentation

A presentation about parallelism and the types of parallelism.


Slide Content

Welcome to our presentation
Parallelism

Parallelism
Goals of Parallelism
Exploitation of Concurrency
Types of Parallelism
Md. Monirul Awal (161-15-7501)
Md. Raseduzzaman (161-15-7495)

Parallelism
Executing two or more operations at the same time is known as Parallelism.
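
As a minimal illustration (a Python sketch with made-up function names, not part of the original slides), two independent computations can be handed to separate worker processes so that they execute at the same time:

```python
from concurrent.futures import ProcessPoolExecutor

def square_sum(n):
    # One independent operation: the sum of squares up to n.
    return sum(i * i for i in range(n))

def cube_sum(n):
    # A second, independent operation: the sum of cubes up to n.
    return sum(i * i * i for i in range(n))

if __name__ == "__main__":
    # Two worker processes allow both operations to run at the same time.
    with ProcessPoolExecutor(max_workers=2) as pool:
        a = pool.submit(square_sum, 1_000_000)
        b = pool.submit(cube_sum, 1_000_000)
        print(a.result(), b.result())
```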

Goals of Parallelism
The purpose of parallel processing is to speed up the computer's processing
capability or, in other words, to increase the computational speed.
It increases throughput, i.e., the amount of processing that can be accomplished
during a given interval of time.
It improves the performance of the computer for a given clock speed.
Two or more ALUs in the CPU can work concurrently to increase throughput.
The system may have two or more processors operating concurrently.

Exploitation of Concurrency
Techniques of Concurrency:
Overlap: execution of multiple operations by heterogeneous functional units.
Parallelism: execution of multiple operations by homogeneous functional units
(a small software sketch contrasting these two follows below).
Throughput Enhancement:
Internal micro-operations: performed inside the hardware functional units
such as the processor, memory, and I/O units.
Transfer of information: between different hardware functional units for
instruction fetch, operand fetch, I/O operations, etc.
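
A rough software analogy (my own sketch; the slide is describing hardware functional units): overlap is like letting an I/O transfer proceed while the processor keeps computing, whereas parallelism is like two identical workers computing at once.

```python
import threading
import time
from concurrent.futures import ProcessPoolExecutor

def slow_io():
    # Stand-in for a transfer handled by a separate (heterogeneous) I/O unit.
    time.sleep(1)

def compute(n):
    # Stand-in for work done by one of several identical (homogeneous) units.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Overlap: heterogeneous activities (I/O and computation) proceed together.
    io_thread = threading.Thread(target=slow_io)
    io_thread.start()
    partial = compute(500_000)      # the CPU keeps working while the "I/O" runs
    io_thread.join()

    # Parallelism: two identical computations on two processor cores.
    with ProcessPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(compute, [500_000, 500_000]))
    print(partial, results)
```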

Types of Parallelism:
1. Instruction Level Parallelism (ILP)
2. Processor Level Parallelism

1. Instruction Level Parallelism (ILP)
Instruction-level parallelism (ILP) is a measure of how many operations in a
computer program can be performed in parallel at the same time.
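
For instance (a hypothetical code fragment, not taken from the slides), the first two assignments below have no data dependency on each other and could be executed in parallel, while the third must wait for both of their results:

```python
x, y = 3, 4

# No data dependency between these two statements, so a processor with
# two ALUs could execute them in the same clock cycle.
a = x + y
b = x * y

# This statement depends on both a and b, so it must wait for them;
# such dependencies are what limit the available instruction-level parallelism.
c = a + b
print(c)   # 19
```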

ILP TECHNIQUES
An instruction pipeline is a technique used in the design of modern
microprocessors, microcontrollers, and CPUs to increase their instruction
throughput (the number of instructions that can be executed in a unit of time).
For example, the classic RISC pipeline is broken into five stages, with a set
of flip-flops between each stage: instruction fetch (IF), instruction decode
(ID), execute (EX), memory access (MEM), and write-back (WB).
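
A minimal sketch of the idea (my own illustration, assuming the classic IF/ID/EX/MEM/WB ordering and ignoring hazards and stalls), printing which instruction occupies each stage in each clock cycle:

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]
instructions = ["i1", "i2", "i3", "i4"]

# In an ideal pipeline, instruction k enters stage s in cycle k + s,
# so after the pipeline fills, one instruction completes every cycle.
total_cycles = len(instructions) + len(STAGES) - 1
for cycle in range(total_cycles):
    row = []
    for s, stage in enumerate(STAGES):
        k = cycle - s                       # which instruction is in this stage
        occupant = instructions[k] if 0 <= k < len(instructions) else "--"
        row.append(f"{stage}:{occupant}")
    print(f"cycle {cycle + 1}: " + "  ".join(row))
```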

ILP TECHNIQUES
A superscalar CPU architecture implements ILP inside a single processor,
which allows higher CPU throughput at the same clock rate.
Simple superscalar pipeline: by fetching and dispatching two instructions at a
time, a maximum of two instructions per cycle can be completed.
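
Continuing the same sketch (again an idealized assumption with no hazards and a fixed 2-wide machine), the front end fetches and dispatches two instructions per cycle, so up to two instructions can complete each cycle:

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]
instructions = [f"i{n}" for n in range(1, 9)]
WIDTH = 2   # two instructions fetched and dispatched per cycle

total_cycles = (len(instructions) + WIDTH - 1) // WIDTH + len(STAGES) - 1
for cycle in range(total_cycles):
    row = []
    for s, stage in enumerate(STAGES):
        group = cycle - s                   # which fetch group is in this stage
        ks = range(group * WIDTH, group * WIDTH + WIDTH)
        names = [instructions[k] for k in ks if 0 <= k < len(instructions)]
        row.append(f"{stage}:{'+'.join(names) or '--'}")
    print(f"cycle {cycle + 1}: " + "  ".join(row))
```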

2. Processor Level Parallelism
Instruction-level parallelism (pipelining and superscalar operation) rarely wins
more than a factor of five or ten in processor speed.
To get gains of 50, 100, or more, the only way is to design computers with
multiple CPUs.
We will consider two alternative architectures:
–Array Computers
–Multi-processors

Array Computer
An array processor consists of a large number of identical processors that
perform the same sequence of instructions on different sets of data.
A vector processor is efficient at executing a sequence of operations on pairs
of data elements; it performs all of the additions in a single, heavily
pipelined adder.
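
A rough software analogue (my own sketch; a real array processor applies the operation in hardware lockstep): the same operation is applied to every element of the data, with a small process pool standing in for the identical processing elements:

```python
from multiprocessing import Pool

def add_pair(pair):
    # The same instruction (an addition) applied to a different pair of data.
    a, b = pair
    return a + b

if __name__ == "__main__":
    pairs = [(1, 10), (2, 20), (3, 30), (4, 40)]
    # Each worker process stands in for one identical processing element.
    with Pool(processes=4) as pool:
        sums = pool.map(add_pair, pairs)
    print(sums)   # [11, 22, 33, 44]
```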

Multi-processor
The processing elements in an array processor are not independent CPUs,
since there is only one control unit.
The first parallel system with multiple full-blown CPUs is the multiprocessor.
This is a system with more than one CPU sharing a common memory, coordinated
in software.
The simplest design is a single bus with multiple CPUs and one memory module
all plugged into it.
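
A small sketch of the shared-memory idea (my own illustration, using Python's multiprocessing shared Value and Lock in place of a physically shared memory on a common bus): several worker processes play the role of CPUs updating one shared location, with the coordination done in software:

```python
from multiprocessing import Process, Value, Lock

def worker(counter, lock, n):
    # Each process plays the role of one CPU attached to the shared bus.
    for _ in range(n):
        with lock:                  # software coordination of the shared memory
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)         # one shared memory word visible to all "CPUs"
    lock = Lock()
    procs = [Process(target=worker, args=(counter, lock, 10_000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)            # 40000 -- all updates land in the common memory
```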