PP - CH01 (2).pptx

nairatarek3 · 21 slides · Jun 09, 2024

About This Presentation

An operating system (OS) is crucial software that manages computer hardware and software resources while providing common services for computer programs. Below are the key aspects of an operating system:

1. Definition and Functionality
The operating system acts as an intermediary between users and ...


Slide Content

Introduction: Parallelism and Performance. Chapter 01, Part 01. Dr. NORA NIAZY

Agenda: Introduction to Parallelism; Parallel processing; Types of parallelism: a taxonomy; The Flynn–Johnson Classification.

Introduction to Parallelism. Parallelism is the processing of several sets of instructions simultaneously. It reduces the total computational time. Parallelism can be implemented using parallel computers, i.e. computers with many processors. Parallel computers require parallel algorithms, programming languages, compilers, and operating systems that support multitasking.

Parallel processing. The problem is divided into sub-problems that are executed in parallel to produce individual outputs. These individual outputs are then combined to obtain the final desired output. It is not easy to divide a large problem into sub-problems, and the sub-problems may have data dependencies among them, so the processors have to communicate with each other to solve the problem. The time the processors spend communicating with each other can exceed the actual processing time. So, while designing a parallel algorithm, proper CPU utilization should be considered to get an efficient algorithm.
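The divide, execute-in-parallel, combine pattern described above can be sketched in Python. This is a minimal illustration; the function names and the chunking scheme are invented for the example, not taken from the slides:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker solves one sub-problem independently.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Divide the problem into sub-problems (one chunk per worker).
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        # Execute the sub-problems in parallel to get individual outputs.
        partials = pool.map(partial_sum, chunks)
    # Combine the individual outputs into the final desired output.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum(list(range(100))))  # 4950
```

Note that the chunks here are independent; with data dependencies between sub-problems, the workers would additionally need to exchange intermediate results, which is exactly the communication cost the slide warns about.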

What Does Parallel Processing Mean? Parallel processing is a method of breaking up program tasks and running them simultaneously on multiple microprocessors in order to speed up execution. Parallel processing may be accomplished with a single computer that has two or more processors (CPUs) or with multiple computer processors connected over a computer network. Parallel processing may also be referred to as parallel computing.

What is parallel computing? Parallel computing refers to running an application or computation on several processors simultaneously. Generally, it is a computing architecture in which large problems are broken into independent, smaller, usually similar parts that can be processed in one go. This is done by multiple CPUs communicating via shared memory, with the results combined upon completion. It helps in performing large computations by dividing the work among more than one processor. Parallel computing also speeds up application processing and task resolution by increasing the available computation power of a system. Most supercomputers operate on parallel computing principles, and parallel processing is commonly used in operational scenarios that need massive processing power or computation.

Types of parallelism: The Flynn–Johnson Classification. Both sequential and parallel computers operate on a set (stream) of instructions called an algorithm. This set of instructions tells the computer what to do at each step. Depending on the instruction stream and data stream, computers can be classified into four categories: Single Instruction stream, Single Data stream (SISD) computers; Single Instruction stream, Multiple Data stream (SIMD) computers; Multiple Instruction stream, Single Data stream (MISD) computers; and Multiple Instruction stream, Multiple Data stream (MIMD) computers.

SISD Computers. SISD computers contain one control unit, one processing unit, and one memory unit. In this type of computer, the processor receives a single stream of instructions from the control unit and operates on a single stream of data from the memory unit. At each step of the computation, the processor receives one instruction from the control unit and operates on a single datum received from the memory unit.

SIMD Computers. SIMD computers contain one control unit, multiple processing units, and shared memory or an interconnection network. One single control unit sends instructions to all processing units. At each step of the computation, all the processors receive a single set of instructions from the control unit and operate on different sets of data from the memory unit. Each processing unit has its own local memory unit to store both data and instructions. In SIMD computers, processors need to communicate among themselves; this is done through shared memory or an interconnection network. While some of the processors execute a set of instructions, the remaining processors wait for their next set of instructions. Instructions from the control unit decide which processors will be active (execute instructions) or inactive (wait for the next instruction).
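The SIMD model above, including the active/inactive masking of processors, can be sketched as follows. The function `simd_step` and the mask representation are invented for this illustration:

```python
def simd_step(instruction, lanes, mask):
    """Broadcast one instruction from the control unit to all
    processing elements. Each lane holds a different piece of data;
    the mask decides which processors are active (execute the
    instruction) and which are inactive (keep their data and wait).
    """
    return [instruction(x) if active else x
            for x, active in zip(lanes, mask)]

lanes = [1, 2, 3, 4]  # different data, one per processing element
# All processors active: one instruction applied to all lanes.
doubled = simd_step(lambda x: x * 2, lanes, [True, True, True, True])
# Only processors 0 and 2 active; the others wait.
masked = simd_step(lambda x: x + 10, doubled, [True, False, True, False])
print(doubled)  # [2, 4, 6, 8]
print(masked)   # [12, 4, 16, 8]
```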

MISD Computers. As the name suggests, MISD computers contain multiple control units, multiple processing units, and one common memory unit. Each processor has its own control unit, and they share a common memory unit. All the processors get instructions individually from their own control units and operate on a single stream of data as per the instructions they have received. These processors operate simultaneously.

MIMD Computers. MIMD computers have multiple control units, multiple processing units, and shared memory or an interconnection network. Each processor has its own control unit, local memory unit, and arithmetic and logic unit. The processors receive different sets of instructions from their respective control units and operate on different sets of data.
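A minimal MIMD sketch in Python: each process runs its own instruction stream (a different function) on its own data, and the results are collected through a queue standing in for the shared memory or interconnection network. The worker layout and task names are invented for the example:

```python
from multiprocessing import Process, Queue

def worker(name, program, data, results):
    # Each MIMD processor executes its own instruction stream
    # on its own local data, independently of the others.
    results.put((name, program(data)))

if __name__ == "__main__":
    results = Queue()  # stands in for shared memory / interconnect
    tasks = [
        ("p0", sum, [1, 2, 3]),   # one instruction stream, one data set
        ("p1", max, [7, 1, 5]),   # a different stream on different data
    ]
    procs = [Process(target=worker, args=(n, f, d, results))
             for n, f, d in tasks]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(dict(results.get() for _ in procs))  # {'p0': 6, 'p1': 7}
```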

Note: An MIMD computer that shares a common memory is known as a multiprocessor, while one that uses an interconnection network is known as a multicomputer. Based on the physical distance between the processors, multicomputers are of two types: a multicomputer, when all the processors are very close to one another (e.g., in the same room), and a distributed system, when the processors are far away from one another (e.g., in different cities).

Introduction: Parallelism and Performance. Chapter 01, Part 02.

Agenda: Effectiveness of parallel processing; Parallel Hardware and Parallel Software; Von Neumann Architecture; Analysis and Performance Metrics.

Effectiveness of parallel processing

Parallel Hardware and Parallel Software. Parallel computer architecture is the method of organizing all the resources to maximize performance and programmability within the limits imposed by technology and cost at any instant of time. It adds a new dimension to the development of computer systems by using more and more processors. A parallel algorithm can be executed simultaneously on many different processing devices, with the partial results then combined to get the correct final result. Parallel algorithms are highly useful for processing huge volumes of data quickly.

Von Neumann Architecture

Von Neumann Architecture. Describes a computer system as a CPU (or core) connected to the main memory through an interconnection network. It executes only one instruction at a time, with each instruction operating on only a few pieces of data. Main memory has a set of addresses where both instructions and data can be stored. The CPU is divided into a control unit and an ALU: the control unit decides which instructions in a program need to be executed; the ALU executes the instructions selected by the control unit; the CPU stores temporary data and some other information in registers; and a special register, the program counter (PC), resides in the control unit.

Von Neumann Architecture. The interconnect (bus) is used to transfer instructions and data between the CPU and memory: data and instructions are fetched (read) from memory to the CPU, and data and results are stored (written) from the CPU back to memory. This separation of memory and CPU is known as the von Neumann bottleneck. It is a problem because CPUs can execute instructions more than a hundred times faster than they can fetch items from main memory.
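The fetch-and-execute cycle described above can be made concrete with a toy von Neumann machine. The instruction set, the accumulator register, and the memory layout here are all invented for this sketch; the essential point is that instructions and data share one memory, and the PC register selects the next instruction to fetch:

```python
def run(memory):
    """Minimal von Neumann machine: instructions and data live in the
    same memory; the program counter (PC) register selects the next
    instruction. Hypothetical instruction set:
      ("LOAD", addr), ("ADD", addr), ("STORE", addr), ("HALT",)
    """
    acc, pc = 0, 0  # accumulator and program counter registers
    while True:
        op, *arg = memory[pc]  # fetch one instruction over the bus
        pc += 1
        if op == "LOAD":
            acc = memory[arg[0]]      # read data from main memory
        elif op == "ADD":
            acc += memory[arg[0]]
        elif op == "STORE":
            memory[arg[0]] = acc      # write the result back to memory
        elif op == "HALT":
            return memory

# Addresses 0-3 hold the program; addresses 5-7 hold data.
memory = [("LOAD", 5), ("ADD", 6), ("STORE", 7), ("HALT",),
          None, 2, 3, 0]
print(run(memory)[7])  # 5
```

Every instruction here triggers at least one trip across the CPU-memory interconnect, which is exactly where the von Neumann bottleneck appears.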

Analysis and Performance Metrics 01. Modern computers have powerful and extensive software packages. To analyze the development of computer performance, we first have to understand the basic development of hardware and software. Computer Development Milestones: there are two major stages in the development of computers, the first based on mechanical or electromechanical parts. Modern computers evolved after the introduction of electronic components: high-mobility electrons in electronic computers replaced the moving parts of mechanical computers, and for information transmission, electric signals traveling at nearly the speed of light replaced mechanical gears and levers. Elements of Modern Computers: a modern computer system consists of computer hardware, instruction sets, application programs, system software, and a user interface.

Analysis and Performance Metrics 02. Computing problems are categorized as numerical computing, logical reasoning, and transaction processing. Some complex problems may need a combination of all three processing modes. Evolution of Computer Architecture: in the last four decades, computer architecture has gone through revolutionary changes. We started with the Von Neumann architecture, and now we have multicomputers and multiprocessors. Performance of a Computer System: the performance of a computer system depends on both machine capability and program behavior. Machine capability can be improved with better hardware technology, advanced architectural features, and efficient resource management. Program behavior is unpredictable as it depends on the application and run-time conditions.
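Two standard performance metrics make the "machine capability" side of this slide quantitative: speedup (how many times faster the parallel run is than the serial run) and efficiency (what fraction of ideal linear speedup is achieved). These definitions are standard ones, not taken from the slides themselves, and the timing figures below are made up for the example:

```python
def speedup(t_serial, t_parallel):
    # Speedup S = T_serial / T_parallel.
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    # Efficiency E = S / p: the fraction of ideal linear speedup
    # actually achieved on p processors.
    return speedup(t_serial, t_parallel) / p

# E.g. a program taking 100 s serially and 25 s on 8 processors:
print(speedup(100, 25))        # 4.0
print(efficiency(100, 25, 8))  # 0.5
```

An efficiency well below 1.0, as here, typically signals the communication overhead between processors discussed in Part 01.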