assignment_presentaion_jhvvnvhjhbhjhvjh.pptx

23mu36 · 29 slides · May 05, 2024


Slide Content

Interleaved Memory Organization, Multiprocessor Operating Systems. By Praveen Raj S R, 23MU36

Interleaved memory organization

Memory interleaving is a technique used in computer architecture to improve memory access performance by spreading memory accesses across multiple memory modules in a systematic manner. It is particularly relevant in systems with multiple memory banks or modules, such as multi-channel memory architectures.

Memory is partitioned into modules that are connected to a common address bus and a common data bus.

HIGH–ORDER MEMORY INTERLEAVING: The high-order bits of the address select the memory module, and the remaining low-order bits select the word within that module. (ABR – Address Register, DBR – Data Register of each module.)

LOW–ORDER MEMORY INTERLEAVING: The low-order bits of the address select the memory module, and the high-order bits select the word within the module, so consecutive addresses fall in consecutive modules and can be accessed in an overlapped fashion.
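The two mappings can be sketched in Python; the module count and module size here are hypothetical, chosen only to illustrate how the address bits split:

```python
# High-order vs low-order interleaving: how an address splits into a
# (module, word) pair. Hypothetical system: m = 4 modules (2 bits),
# 16 words per module (4 bits). These parameters are illustrative.

M_BITS = 2   # address bits that select the module (m = 4 modules)
W_BITS = 4   # address bits that select the word within a module

def high_order_interleave(addr):
    """High-order bits pick the module; low-order bits pick the word."""
    module = addr >> W_BITS
    word = addr & ((1 << W_BITS) - 1)
    return module, word

def low_order_interleave(addr):
    """Low-order bits pick the module; high-order bits pick the word."""
    module = addr & ((1 << M_BITS) - 1)
    word = addr >> M_BITS
    return module, word

# Consecutive addresses: high-order keeps them in ONE module, while
# low-order spreads them across ALL modules, enabling overlapped access.
for addr in range(4):
    print(addr, high_order_interleave(addr), low_order_interleave(addr))
```

With low-order interleaving, addresses 0..3 land in modules 0..3, which is exactly what lets sequential accesses proceed concurrently.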


VECTOR ACCESS MEMORY SCHEMES: The flow of vector operands between main memory and the vector registers is usually pipelined with multiple access paths. Vector operands are not necessarily stored at contiguous memory locations; to access a vector in memory one needs to specify its base address, stride, and length. The three schemes are C-access, S-access, and C/S-access.
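The base/stride/length addressing described above can be illustrated with a small sketch (the numeric values are hypothetical):

```python
# Addresses touched by a strided vector access, specified by its
# base address, stride, and length. Values below are illustrative.

def vector_addresses(base, stride, length):
    """Return the main-memory addresses of a vector's elements."""
    return [base + i * stride for i in range(length)]

# A 5-element vector at base 100 with stride 8 occupies
# non-contiguous locations:
print(vector_addresses(100, 8, 5))  # [100, 108, 116, 124, 132]
```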

C – Access Memory Organization: The m-way low-order interleaved memory structure allows m memory words to be accessed concurrently in an overlapped manner. It is the same as the low-order interleaved memory organization; "C" stands for concurrent. The low-order 'a' bits select the MODULE, and the high-order 'b' bits select the WORD within each module.

C – Access Memory configuration

S – Access Memory Organization: Low-order interleaved memory can be rearranged to allow simultaneous access, i.e. S-access. All memory modules are accessed simultaneously in a synchronized manner. The high-order b = n − a bits select the word offset within each module. At the end of the cycle, m consecutive words are latched into the data buffers simultaneously; the low-order 'a' bits are then used to multiplex the m words out.
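A toy simulation of S-access, assuming m = 4 modules and an illustrative low-order memory layout (not taken from the slides):

```python
# S-access sketch: every module is read simultaneously at the same word
# offset; m consecutive words are latched, and the low-order address
# bits then multiplex them out. Hypothetical m = 4 modules.

M = 4  # number of interleaved modules

# Low-order layout: module j holds the words whose address mod M == j,
# so memory[j][w] is address j + M * w.
memory = [[j + M * w for w in range(8)] for j in range(M)]

def s_access(offset):
    """One synchronized cycle: latch word `offset` from every module."""
    latch = [memory[j][offset] for j in range(M)]  # simultaneous reads
    return latch

# Offset 0 latches words 0..3 at once; the 'a' low-order bits then
# select each word from the latch in turn.
print(s_access(0))  # [0, 1, 2, 3]
```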

S – Access Memory configuration:

C/S – Access Memory Organization: The two former approaches are combined to obtain the advantages of both. 'n' access buses are used, with 'm' interleaved memory modules attached to each bus. The m modules on each bus allow C-access, and the n buses operate in parallel to allow S-access. In each memory cycle, m · n words are fetched if the n buses are fully used.
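A small sketch of the C/S organization under an assumed layout (the rotation order of words across buses and modules is illustrative, not specified in the slides):

```python
# C/S access: n buses operate in parallel (S-access) and the m modules
# on each bus are accessed in an overlapped manner (C-access).
# Hypothetical n = 2 buses with m = 4 modules each.

def cs_location(word, m, n):
    """Map a word index to (bus, module): consecutive words rotate
    across the n buses first, then across the m modules per bus.
    This layout is an assumption for illustration."""
    return word % n, (word // n) % m

def words_per_cycle(m, n):
    """Peak fetch rate: m * n words per memory cycle when all
    n buses are fully used."""
    return m * n

print(words_per_cycle(4, 2))                       # 8 words per cycle
print([cs_location(w, 4, 2) for w in range(4)])
```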

C/S – Access Memory configuration

Multiprocessor operating systems

Operating Systems For Parallel Processing: An operating system (OS), in its most general sense, is software that allows a user to run other applications on a computing device. The operating system manages a computer's hardware resources, including input devices, output devices, network devices, and storage. Modern operating systems support parallel execution of processes on multiprocessor and uniprocessor computers. For this purpose an operating system provides process synchronization and communication facilities. A parallel OS is closely influenced by the hardware architecture of the machine it runs on.

OS Configuration For Parallel Processing: An operating system can be classified on the basis of its memory organization, whether shared/centralized or distributed. The OS can also be configured differently for small-scale, large-scale, and distributed shared-memory machines. Characteristics of a parallel OS: degree of coordination among processors; coupling and transparency; IPC; parallelism and synchronization; process management. The OS configurations for parallel processing are as follows: 1. Master-slave configuration 2. Separate supervisor configuration 3. Floating supervisor configuration

Master-slave configuration: In this mode, one processor (or CPU), called the master, maintains the status of all the other processors, called slaves, and apportions the work among them. The OS runs only on the master processor, and all other processors are treated as schedulable resources. The master is the sole supervisor for all the slaves. Since the OS is executed on only one processor, and all system calls are also executed on that processor, other processors needing executive services must request the master, which acknowledges the request and performs the service.

Advantages of the master-slave configuration: There is a single data structure (e.g., one list or a set of prioritized lists) that keeps track of ready processes. When a CPU goes idle, it asks the operating system for a process to run and is assigned one, so it can never happen that one CPU is idle while another is overloaded. Similarly, pages can be allocated among all the processes dynamically, and there is only one buffer cache, so inconsistencies never occur. Disadvantages of the master-slave configuration: With many CPUs, the master becomes a bottleneck; many of the slaves may have to wait for the master's current work to finish before their requests can be served. If the master fails, the system fails.
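The single ready queue that gives the master-slave model its advantage can be sketched as follows; the task values and thread count are hypothetical, and a real OS dispatches via interrupts and scheduler structures rather than Python threads:

```python
# Master-slave sketch: the master owns the ONE ready queue; idle slave
# CPUs ask it for work, so no CPU idles while work remains queued.
from queue import Queue
import threading

ready_queue = Queue()            # the single list of ready processes
results = []
results_lock = threading.Lock()

def slave(cpu_id):
    """A slave CPU: repeatedly request a process from the master."""
    while True:
        task = ready_queue.get()     # ask the master for work
        if task is None:             # master signals shutdown
            break
        with results_lock:
            results.append((cpu_id, task))   # "run" the process

# The master enqueues work, starts the slaves, then shuts them down.
for task in range(6):
    ready_queue.put(task)
slaves = [threading.Thread(target=slave, args=(i,)) for i in range(3)]
for t in slaves:
    t.start()
for _ in slaves:
    ready_queue.put(None)
for t in slaves:
    t.join()
print(sorted(task for _, task in results))  # [0, 1, 2, 3, 4, 5]
```

Note the bottleneck the slide warns about: every slave funnels through the one queue the master owns.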

Separate Supervisor configuration: Memory is statically divided into as many partitions as there are CPUs, and each CPU is given its own private memory and its own private copy of the operating system. In effect, the n CPUs then operate as n independent computers. The CPUs share the operating system code and make private copies of only the data. Resource sharing occurs via shared memory blocks. Each processor services its own needs. If a processor accesses shared kernel code, that code must be re-entrant. Each processor must maintain separate tables describing resource allocation and process contexts.

Advantages of the separate supervisor configuration: It is less susceptible to failures. If an unusually large program has to be run, one of the CPUs can be allocated an extra-large portion of memory for the duration of that program. In addition, processes can efficiently communicate with one another by having, say, a producer write data into memory and a consumer fetch it from the place the producer wrote it. Disadvantages of the separate supervisor configuration: Since each operating system has its own tables, it also has its own set of processes that it schedules by itself; as a consequence, one CPU can be idle while another is loaded with work. If the operating system maintains a buffer cache of recently used disk blocks, each operating system does this independently of the others, which may lead to inconsistent results.

Floating Supervisor configuration: Also known as the SMP (Symmetric Multiprocessor) model, as it treats all the processors, as well as other resources, symmetrically. The supervisor routine floats from one processor to another. This model balances processes and memory dynamically, since there is only one set of operating system tables. It involves a considerable amount of code sharing, so the code must be re-entrant.

Advantages of the floating supervisor configuration: The service routine (a copy of the OS) flows from one processor to another, so several processors may be executing supervisory service routines simultaneously. It attains better load balancing over all types of resources; conflicts in service requests can be resolved by priorities that are set statically or dynamically. There is no master or fixed supervisor; each processor takes its turn, which introduces its own problems. Disadvantages of the floating supervisor configuration: This model works, but is almost as bad as the master-slave model: suppose that 10% of all run time is spent inside the operating system; with 20 CPUs, there will be long queues of CPUs waiting to get in. This problem can be partially solved by breaking the operating system into independent critical regions that do not interact with one another, but some data values are used by multiple critical regions, which makes the implementation more complex.
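The 10%-of-run-time example can be made concrete with a back-of-the-envelope calculation (assuming, as the slide's argument does, that the OS behaves as a single mutually exclusive critical section):

```python
# If each CPU spends fraction p of its time inside the (mutually
# exclusive) operating system, then on average n * p CPUs want to be
# in the OS at the same time. Once n * p exceeds 1, queues of waiting
# CPUs form. Numbers are the slide's illustrative values.

def expected_in_os(n_cpus, p_os):
    """Average number of CPUs contending for the OS at any instant."""
    return n_cpus * p_os

# 20 CPUs, 10% of run time in the OS: on average 2 CPUs contend,
# so the single-lock OS is saturated and CPUs queue up.
print(expected_in_os(20, 0.10))
```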

SOFTWARE REQUIREMENTS

Master-Slave Configuration:
- Master Processor Management: Software should efficiently manage the master processor, which handles the supervision of all other processors, allocating tasks to the slave processors and maintaining their status effectively.
- Task Allocation and Dispatching: Algorithms and mechanisms for allocating tasks to slave processors based on workload and system requirements, with efficient dispatching mechanisms to ensure timely execution of tasks by the slaves.
- Fault Handling and Recovery: Software should handle faults and failures in the master processor, including catastrophic failures that may require manual intervention, with recovery mechanisms to restart the master or reassign its responsibilities.
- Resource Utilization Optimization: Techniques to optimize resource utilization, ensuring that all processors are efficiently used even under varying workloads, with balancing mechanisms to prevent idle time on slave processors and maximize overall system performance.

Separate Supervisor for Each Processor:
- Individual Processor Management: Software should manage each processor's supervisor independently, including resource allocation and task scheduling, with efficient mechanisms for inter-processor communication and coordination, especially for shared resources.
- Reentrancy and Replication: Supervisory code should be reentrant or replicated for each processor to ensure independent operation and fault tolerance, with mechanisms to handle conflicts and ensure data consistency in shared resources accessed by multiple supervisors.
- Fault Tolerance and Recovery: Software should handle faults in individual processors, ensuring continuous operation of the system, with recovery mechanisms to restart failed processors or redistribute their workload to other processors.
- Resource Sharing and Management: Efficient resource-sharing mechanisms, especially for shared memory and I/O devices, to avoid contention and maximize utilization, plus techniques for dynamic resource allocation and load balancing among processors.

Floating Supervisor Control:
- Dynamic Supervisor Management: Software should dynamically manage the movement of the supervisor routine between processors, ensuring load balancing and fault tolerance, with prioritization mechanisms for resolving conflicts and scheduling supervisory tasks across multiple processors.
- Reentrancy and Resource Protection: Most supervisory code must be reentrant to support simultaneous execution on multiple processors, with mechanisms to control access to shared resources and prevent conflicts among concurrent supervisor tasks.
- Fault Handling and Graceful Degradation: Robust fault-handling mechanisms to maintain system integrity and availability in the event of processor failures, and graceful-degradation strategies to adapt system operation in response to failures while maintaining essential functionality.
- Efficient Resource Utilization: Techniques for efficient resource utilization and load balancing across all processors, ensuring optimal performance under varying workloads, with adaptive algorithms for dynamic resource allocation and reallocation based on system conditions and workload fluctuations.