Distributed system models syllabus from engineering
Unit-1 Fundamentals
- Distributed computing builds on two earlier ideas: the sharing of computer resources simultaneously by many users, and access to a computer from a place other than the main computer room.
- Around 1980 the workstation came into use; a workstation is a single-user computer.
- During the 1960s and 1970s two network technologies emerged:
- LAN (Local Area Network): spans a building or campus, with a data transmission rate of about 10 megabits per second.
- WAN (Wide Area Network): spans cities and countries, with a data transmission rate of about 64 kilobits per second.
- Another technology, ATM (Asynchronous Transfer Mode), made very high-speed networking possible, with a data transmission rate of about 1.2 gigabits per second.
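To make the quoted link speeds concrete, the short calculation below estimates how long an ideal transfer of a 10 MB file would take at each rate. This is a rough illustration only: it assumes a dedicated link with no protocol overhead, and the 10 MB file size is just an example, not something from the slides.

```python
# Back-of-the-envelope transfer times for a 10 MB file at the quoted link rates.
# Assumes an ideal, dedicated link with no protocol overhead (illustration only).

FILE_SIZE_BITS = 10 * 8 * 10**6  # 10 megabytes expressed in bits

link_rates_bps = {
    "LAN (10 Mbit/s)": 10 * 10**6,
    "WAN (64 kbit/s)": 64 * 10**3,
    "ATM (1.2 Gbit/s)": 1.2 * 10**9,
}

for name, rate in link_rates_bps.items():
    seconds = FILE_SIZE_BITS / rate
    print(f"{name}: {seconds:,.2f} s to move a 10 MB file")
```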
System models
- Minicomputer model
- Workstation model
- Workstation-server model
- Processor-pool model
- Hybrid model
All of these models are used for building distributed computing systems.
3. Workstation-server model:
- The workstation model is a network of personal workstations, each with its own disk and local file system.
- A diskful workstation has its own local disk; a diskless workstation has no local disk and relies on remote file servers.
- Diskless workstations have become more popular than diskful workstations, making the workstation-server model more popular than the plain workstation model for building distributed computing systems.
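To illustrate the workstation-server idea, the sketch below shows how a diskless workstation might read a file from a file server over the network. It is a minimal sketch, not a real protocol: the "READ <path>" message format, the server host name, and the port are assumptions made up for this example.

```python
import socket

# Minimal sketch of a diskless workstation fetching a file from a file server
# in the workstation-server model. The "READ <path>" wire format, host name,
# and port are hypothetical, for illustration only.

FILE_SERVER = ("fileserver.campus.example", 9000)  # hypothetical server address

def remote_read(path: str) -> bytes:
    """Request the contents of `path` from the file server."""
    with socket.create_connection(FILE_SERVER, timeout=5) as conn:
        conn.sendall(f"READ {path}\n".encode())
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:          # server closes the connection at end of file
                break
            chunks.append(data)
    return b"".join(chunks)

# A diskless workstation has no local disk, so even its own home-directory
# files would come over the network, e.g.:
# contents = remote_read("/home/student/report.txt")
```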
Issues in the design of distributed systems
1. Transparency
- Access Transparency
- Location Transparency
  i. Name Transparency
  ii. User Mobility
- Replication Transparency
- Failure Transparency
- Migration Transparency
- Concurrency Transparency
- Performance Transparency
- Scaling Transparency
1. Transparency
Transparency ensures that the complexities of the distributed system are hidden from users, making interactions seamless. Types include:
- Access Transparency: Users access resources consistently, regardless of their physical location or access method.
- Location Transparency: Resources can be accessed without knowledge of their physical or network location.
  - Name Transparency: Resources retain the same name even if their location changes.
  - User Mobility: Users can move across locations without affecting access.
- Replication Transparency: Users remain unaware of the multiple copies of data managed by the system.
- Failure Transparency: The system continues to function smoothly during failures, hiding recovery processes.
- Migration Transparency: Resources or processes can be moved within the system without disrupting users.
- Concurrency Transparency: Multiple users can access resources concurrently without conflicts.
- Performance Transparency: Performance remains consistent regardless of the system's workload.
- Scaling Transparency: The system adapts to growth or shrinkage without affecting usability.

2. Reliability
Reliability ensures the system remains operational despite faults or failures. Key aspects include:
- Fault Avoidance: Reduces the likelihood of failures by using robust hardware and software.
- Fault Tolerance: Enables the system to continue functioning during failures, using:
  - Redundancy Techniques: Duplicating critical components (e.g., data, servers).
  - Distributed Control: Avoiding single points of failure by distributing control across the system.
- Fault Detection and Recovery:
  - Atomic Transactions: Ensure operations either complete fully or do not execute at all.
  - Stateless Servers: Simplify recovery by not maintaining client state.
  - Acknowledgment and Retransmission: Ensures communication reliability through timeout-based retransmission of messages (a minimal sketch follows this list).
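As a concrete illustration of the acknowledgment-and-retransmission technique listed above, the sketch below sends a message over UDP, waits for an "ACK" reply, and retransmits on timeout. The peer address, the "ACK" reply format, and the retry limit are assumptions for illustration; a real system would also handle duplicates and sequence numbers.

```python
import socket

# Minimal sketch of timeout-based acknowledgment and retransmission.
# The peer address, "ACK" reply, and retry count are assumptions.

PEER = ("node2.cluster.example", 5000)   # hypothetical receiver
TIMEOUT_S = 2.0
MAX_RETRIES = 5

def reliable_send(message: bytes) -> bool:
    """Send `message` and wait for an ACK, retransmitting on timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT_S)
    try:
        for attempt in range(1, MAX_RETRIES + 1):
            sock.sendto(message, PEER)
            try:
                reply, _ = sock.recvfrom(1024)
                if reply == b"ACK":
                    return True          # receiver confirmed delivery
            except socket.timeout:
                print(f"attempt {attempt}: no ACK, retransmitting")
        return False                     # give up after MAX_RETRIES attempts
    finally:
        sock.close()
```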
3. Flexibility
Flexibility allows the system to adapt and evolve with minimal effort:
- Ease of Modification: Updates, patches, or fixes can be applied without disrupting the system.
- Ease of Enhancement: New features or upgrades can be integrated seamlessly to meet changing needs.

4. Performance
Performance optimization ensures efficient resource usage and fast response times. Key practices include:
- Batch Operations: Group tasks together to reduce processing overhead.
- Caching: Store frequently accessed data closer to the user to reduce delays (see the sketch after this slide).
- Minimize Data Copying: Avoid redundant data transfers to save bandwidth.
- Minimize Network Traffic: Optimize communication between nodes to reduce latency.
- Fine-Grain Parallelism: Use multicore processors to execute smaller tasks in parallel for better speed.

5. Scalability
Scalability ensures the system can grow (or shrink) without losing efficiency or usability. Strategies include:
- Avoid Centralized Entities: Distribute responsibilities to prevent bottlenecks and single points of failure.
- Avoid Centralized Algorithms: Use decentralized algorithms to support scalability.
- Perform Most Operations on Clients: Offload tasks to client-side processing to reduce server load.
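The caching practice above can be illustrated with a few lines of client-side code: results of an expensive remote call are kept locally, so repeated requests for the same key avoid another round trip. The `fetch_from_server` function below is a hypothetical placeholder standing in for a real remote call, not an API from the slides.

```python
from functools import lru_cache

# Minimal sketch of client-side caching: frequently accessed results are kept
# locally so repeated requests avoid another network round trip.
# `fetch_from_server` is a hypothetical stand-in for an expensive remote call.

def fetch_from_server(key: str) -> str:
    print(f"network round trip for {key!r}")   # pretend this is expensive
    return f"value-of-{key}"

@lru_cache(maxsize=128)                        # keep the 128 most recent keys
def cached_fetch(key: str) -> str:
    return fetch_from_server(key)

cached_fetch("user:42")   # first call goes to the server
cached_fetch("user:42")   # second call is served from the local cache
```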