Parallel computing is a type of computing architecture in which several processors execute or process an application or computation simultaneously. Parallel computing helps in performing large computations by dividing the workload between more than one processor, all of which work through the computation at the same time. Most supercomputers employ parallel computing principles to operate. Parallel computing is also known as parallel processing.
Size: 56.02 KB
Language: en
Added: Sep 16, 2019
Slides: 13 pages
Slide Content
Parallel Computing and its applications
Parallel Computing Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. [1] Large problems can often be divided into smaller ones, which can then be solved at the same time.
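The idea of dividing a large problem into smaller ones solved at the same time can be sketched in a few lines of Python. This is a minimal illustration, not from the slides: it splits a big summation into chunks and hands each chunk to a worker. A thread pool is used here to keep the example self-contained; for CPU-bound work in Python, a process pool would normally be used instead because of the global interpreter lock.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """Sum one chunk of the overall range."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Split [0, n) into equal chunks and sum the chunks concurrently."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each worker computes its partial sum; the results are combined.
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(1_000_000))  # same answer as sum(range(1_000_000))
```

The final combination step (summing the partial results) is serial, which is typical of divide-and-conquer parallel algorithms.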
Types of Parallel Computing: There are three widely used types of parallel computing: 1) Bit-level parallelism, 2) Instruction-level parallelism, 3) Task parallelism.
Bit-Level Parallelism: When an 8-bit processor needs to add two 16-bit integers, the addition must be done in two steps. The processor must first add the 8 lower-order bits from each integer using the standard addition instruction, then add the 8 higher-order bits using an add-with-carry instruction and the carry bit from the lower-order addition.
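The two-step addition described above can be simulated in Python. This is a sketch of what the 8-bit hardware does, not real machine code: each step only ever adds 8-bit quantities, and the carry from the low byte feeds into the high-byte addition.

```python
def add16_on_8bit(a, b):
    """Add two 16-bit integers using only 8-bit additions plus a carry,
    mimicking the two steps an 8-bit CPU must take."""
    lo = (a & 0xFF) + (b & 0xFF)              # step 1: add the low bytes
    carry = lo >> 8                           # carry out of the low byte
    hi = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry  # step 2: add-with-carry
    return ((hi & 0xFF) << 8) | (lo & 0xFF)   # result wraps at 16 bits

print(hex(add16_on_8bit(0x12FF, 0x0001)))  # 0x1300
```

A 16-bit processor performs the same addition in a single instruction, which is why widening the word size is itself a form of parallelism.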
Instruction-Level Parallelism: The instructions given to a computer for processing can be divided into groups, or re-ordered, and then processed without changing the final result. This is known as instruction-level parallelism. An example: 1. e = a + b 2. f = c + d 3. g = e * f Here, instruction 3 depends on instructions 1 and 2. However, instructions 1 and 2 can be processed independently.
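The dependency structure in the three-instruction example above can be made explicit in code. This is an illustrative sketch (real instruction-level parallelism happens inside the CPU pipeline, not in Python threads): the two independent additions are submitted concurrently, and the multiplication waits for both results.

```python
from concurrent.futures import ThreadPoolExecutor

def compute_g(a, b, c, d):
    # e = a + b and f = c + d have no data dependency,
    # so they may execute at the same time.
    with ThreadPoolExecutor(max_workers=2) as pool:
        e_future = pool.submit(lambda: a + b)
        f_future = pool.submit(lambda: c + d)
        e, f = e_future.result(), f_future.result()
    # g = e * f depends on both results, so it must run last.
    return e * f

print(compute_g(1, 2, 3, 4))  # (1 + 2) * (3 + 4) = 21
```

The hardware equivalent is a superscalar processor issuing instructions 1 and 2 in the same cycle and stalling instruction 3 until both operands are ready.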
Task Parallelism: Task parallelism is a form of parallelization in which different processors run entirely different tasks (pieces of code) at the same time, possibly on the same data. It is also called function parallelism.
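A minimal sketch of task parallelism, assuming nothing beyond the description above: two *different* functions run concurrently over the same data, in contrast to data parallelism, where the same function runs over different chunks.

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(text):
    """Task 1: count words."""
    return len(text.split())

def char_count(text):
    """Task 2: count characters."""
    return len(text)

def analyze(text):
    # Two different tasks execute concurrently on the same input.
    with ThreadPoolExecutor(max_workers=2) as pool:
        words = pool.submit(word_count, text)
        chars = pool.submit(char_count, text)
        return words.result(), chars.result()

print(analyze("parallel computing divides work"))  # (4, 31)
```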
Applications of Parallel Computing: This decomposition technique is used in applications that require processing large amounts of data in sophisticated ways. Examples include: databases and data mining; networked video and multimedia technologies; medical imaging and diagnosis; advanced graphics and virtual reality; collaborative work environments.
Why Use Parallel Computing? Main reasons: Save time and/or money. Solve large problems, e.g. web search engines and databases processing millions of transactions per second. Provide concurrency. Use non-local resources. Overcome the limits of serial computing: the speed of a serial computer is directly dependent on how fast data can move through hardware, and signals in copper wire travel only about 9 cm per nanosecond.
Why Use Parallel Computing? Current computer architectures increasingly rely on hardware-level parallelism to improve performance: multiple execution units, pipelined instructions, and multiple cores.
Difference With Distributed Computing: When different processors/computers work on a single common goal, it is parallel computing, e.g. ten men pulling one rope to lift one rock; supercomputers implement parallel computing. Distributed computing is where several different computers work separately on a multi-faceted computing workload, e.g. ten men pulling ten ropes to lift ten different rocks, or employees in an office each doing their own work.
Approaches To Parallel Computing (Flynn's Taxonomy): SISD (Single Instruction, Single Data), SIMD (Single Instruction, Multiple Data), MISD (Multiple Instruction, Single Data), MIMD (Multiple Instruction, Multiple Data).
Implementation Of Parallel Computing In Software: When implemented in software (or rather, in algorithms), it is called 'parallel programming'. An algorithm is split into pieces that are then executed simultaneously, as seen earlier. Implementation Of Parallel Computing In Hardware: When implemented in hardware, it is called 'parallel processing'. Typically, a chunk of work is divided for processing among units such as cores, processors, and CPUs.
Future of Parallel Computing: Parallel computing is expected to lead to other major changes in the industry. Major companies such as Intel Corp. and Advanced Micro Devices Inc. have already integrated four processors in a single chip. What is needed now are breakthroughs in the supporting technologies; the race for results in parallel computing is in full swing. Another great challenge is writing software that divides work into chunks across processors, which may require new programming languages and revisions to much of the software already written. Parallel computing may change the way computers work in the future, and how we use them for work and play.