Introduction to Distributed Systems


Slide Content

1
Introduction
Chapter 1

2
The Textbook
Andrew S. Tanenbaum & Maarten van Steen, Distributed Systems: Principles and Paradigms, Prentice Hall, 2002.
Chuan Hwa Technology Books (全華科技圖書), (03)401-5467, M: 0952296068

3
Grading Rules
Midterm Exam 30%
Final Exam 30%
Roll call 10% (base grade 80, 5 times during the term)
Homework or Report 30%
TA: Semmer (孫瑞祥), 3282

4
Definition of a Distributed System (1)
A distributed system is:
A collection of independent computers that appears to its users as a single coherent system.

5
Definition of a Distributed System (2)
A distributed system organized as middleware. Note that the middleware layer extends over multiple machines. (Fig. 1.1)

6
The Goals of DS
Connecting users and resources: to make it easy for users to access remote resources, and to share them with other users in a controlled way.
Transparency: to hide the fact that its processes and resources are physically distributed across multiple computers.
Definition: a distributed system that is able to present itself to users and applications as if it were only a single computer system is said to be transparent.
Openness: to offer services according to standard rules that describe the syntax and semantics of those services.
Scalability: to remain effective as the system grows in size, in geographical span, or in the number of administrative domains.

7
Transparency in a Distributed System
Different forms of transparency in a distributed system:
Access: hide differences in data representation and how a resource is accessed.
Location: hide where a resource is located.
Migration: hide that a resource may move to another location.
Relocation: hide that a resource may be moved to another location while in use.
Replication: hide that a resource is replicated.
Concurrency: hide that a resource may be shared by several competitive users.
Failure: hide the failure and recovery of a resource.
Persistence: hide whether a (software) resource is in memory or on disk.

8
Scalability Problems
Examples of scalability limitations:
Centralized services: a single server for all users.
Centralized data: a single on-line telephone book.
Centralized algorithms: doing routing based on complete information.

9
Characteristics of Decentralized Algorithms
No machine has complete information about the system state.
Machines make decisions based only on local information.
Failure of one machine does not ruin the algorithm.
There is no implicit assumption that a global clock exists.
There is no way to get a globally synchronized time; only within a LAN can algorithms rely on synchronous communication.

10
Scaling Techniques
There are three basic techniques for scaling a DS:
Hiding communication latencies
Try to avoid waiting for responses to remote service requests as much as possible, using asynchronous communication (see the sketch after this list).
Distribution
Split a component into smaller parts, and subsequently spread those parts across the system. For example, the Internet DNS.
Replication
Replicate components across the system to spread the load and increase availability; caching is a special form of replication.
But caching and replication may lead to consistency problems.
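A minimal sketch (not from the slides) of the first technique: hide communication latency by issuing the remote request asynchronously and doing useful local work until the reply is actually needed. Here remote_lookup is only a local stand-in for a slow remote service call.

#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>

// Stand-in for a slow remote service request.
std::string remote_lookup(const std::string& key) {
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    return "value-for-" + key;
}

int main() {
    // Fire off the request and continue immediately instead of blocking on it.
    std::future<std::string> reply =
        std::async(std::launch::async, remote_lookup, "user42");

    // ... other useful local work happens here while the request is in flight ...

    std::cout << reply.get() << std::endl;   // block only when the result is really needed
    return 0;
}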

11
Scaling Techniques (1)
The difference between letting (a) a server or (b) a client check forms as they are being filled. (Fig. 1.4)

12
Scaling Techniques (2)
An example of dividing the DNS name space into zones. (Fig. 1.5)

13
Hardware Concepts
Different basic organizations and memories in distributed computer systems. (Fig. 1.6)

14
Multiprocessors (1)

A bus-based multiprocessor. (Fig. 1.7)

15
Multiprocessors (2)
(a) A crossbar switch. (b) An omega switching network. (Fig. 1.8)

16
Homogeneous Multicomputer Systems
System Area Networks (SANs)
The nodes are mounted in a big rack and are connected through a single, often high-performance interconnection network.
Two popular SAN interconnection topologies:
Mesh
Hypercube

17
Homogeneous Multicomputer Systems
Other examples:
Massively Parallel Processors (MPPs)
Consisting of thousands of CPUs
High-performance interconnection network
Fault tolerance is required
Clusters of Workstations (COWs)
A collection of standard PCs or workstations connected through off-the-shelf communication components such as Ethernet.

18
Heterogeneous Multicomputer Systems
Distributed ASCI Supercomputer (DAS)
A wide-area distributed cluster designed by the Advanced School for Computing and Imaging (ASCI).
Consisting of four clusters of multicomputers (64 nodes each), interconnected through an ATM-switched backbone.

19
Software Concepts
Tightly coupled: DOS (Distributed Operating Systems)
Used for managing multiprocessors and homogeneous multicomputers.
Loosely coupled: NOS (Network Operating Systems)
Used for heterogeneous multicomputer systems.
Distinction from a traditional OS: local services are made available to remote clients.
Middleware

System     | Description                                                            | Main goal
DOS        | Tightly-coupled OS for multiprocessors and homogeneous multicomputers | Hide and manage hardware resources
NOS        | Loosely-coupled OS for heterogeneous multicomputers (LAN and WAN)     | Offer local services to remote clients
Middleware | Additional layer atop a NOS implementing general-purpose services     | Provide distribution transparency

20
Distributed Operating Systems
Two types of DOS:
Multiprocessor operating systems
Multicomputer operating systems
Uniprocessor Operating System
Like a virtual machine to applications
Kernel mode: can access memory and registers and execute any instruction.
User mode: memory and register access is restricted.

21
Uniprocessor Operating Systems
Separating applications from operating system code through a microkernel. (Fig. 1.11)

22
Uniprocessor Operating Systems (cont.)
Benefits of using microkernels
Flexibility: since a large part of the OS is executed in user mode, it is relatively easy to replace a module without having to recompile or reinstall the entire system.
Modules can also be placed on different machines.
Disadvantages of microkernels
They face a well-entrenched status quo.
Extra communication overhead (about 20% performance degradation).

23
Multiprocessor Operating Systems (1)
A monitor to protect an integer against concurrent access.
monitor Counter {
private:
    int count = 0;
public:
    int value() { return count; }
    void incr() { count = count + 1; }
    void decr() { count = count - 1; }
}

24
Multiprocessor Operating Systems (2)
A monitor to protect an integer against concurrent
access, but blocking a process.
monitor Counter {
private:
    int count = 0;
    int blocked_procs = 0;
    condition unblocked;
public:
    int value() { return count; }
    void incr() {
        if (blocked_procs == 0)
            count = count + 1;
        else
            signal(unblocked);
    }
    void decr() {
        if (count == 0) {
            blocked_procs = blocked_procs + 1;
            wait(unblocked);
            blocked_procs = blocked_procs - 1;
        }
        else
            count = count - 1;
    }
}
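The monitor above is pseudocode. A roughly equivalent, runnable C++ sketch replaces the monitor construct with std::mutex and std::condition_variable, and uses the standard predicate-loop idiom instead of the explicit blocked_procs bookkeeping; the class and member names are illustrative.

#include <condition_variable>
#include <mutex>

// Runnable analogue of the blocking monitor: decr() blocks while the
// counter is zero; incr() increments and wakes one blocked caller.
class Counter {
public:
    int value() {
        std::lock_guard<std::mutex> lock(m_);
        return count_;
    }
    void incr() {
        {
            std::lock_guard<std::mutex> lock(m_);
            count_ = count_ + 1;
        }
        nonzero_.notify_one();                                 // wake one waiter, if any
    }
    void decr() {
        std::unique_lock<std::mutex> lock(m_);
        nonzero_.wait(lock, [this] { return count_ > 0; });    // block while zero
        count_ = count_ - 1;
    }
private:
    std::mutex m_;
    std::condition_variable nonzero_;
    int count_ = 0;
};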

25
Multicomputer Operating Systems (1)
General structure of a multicomputer operating system. (Fig. 1.14)

26
Multicomputer Operating Systems (2)
Alternatives for blocking and buffering in message passing. (Fig. 1.15)

27
Multicomputer Operating Systems (3)
Relation between blocking, buffering, and reliable communication (an in-process sketch of the first row follows the table):

Synchronization point                | Send buffer | Reliable comm. guaranteed?
Block sender until buffer not full   | Yes         | Not necessary
Block sender until message sent      | No          | Not necessary
Block sender until message received  | No          | Necessary
Block sender until message delivered | No          | Necessary
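An in-process sketch (not from the slides, all names hypothetical) of the first row: the sender is blocked only until there is room in a local send buffer and then continues, so the primitive itself guarantees nothing about delivery; a separate communication layer drains the buffer and transmits the messages.

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <string>

// Bounded send buffer: send() blocks only while the buffer is full.
class SendBuffer {
public:
    explicit SendBuffer(std::size_t capacity) : capacity_(capacity) {}

    // Sending process: returns as soon as the message is buffered locally.
    // There is no guarantee the message has been transmitted or received.
    void send(const std::string& msg) {
        std::unique_lock<std::mutex> lock(m_);
        not_full_.wait(lock, [this] { return queue_.size() < capacity_; });
        queue_.push(msg);
        not_empty_.notify_one();
    }

    // Communication layer: takes the next buffered message for transmission.
    std::string next() {
        std::unique_lock<std::mutex> lock(m_);
        not_empty_.wait(lock, [this] { return !queue_.empty(); });
        std::string msg = queue_.front();
        queue_.pop();
        not_full_.notify_one();
        return msg;
    }

private:
    std::mutex m_;
    std::condition_variable not_full_;
    std::condition_variable not_empty_;
    std::queue<std::string> queue_;
    std::size_t capacity_;
};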

28
Distributed Shared Memory Systems (1/3)
Programming multicomputers is much harder than programming multiprocessors.
Reasons: buffering, blocking, reliable communication, etc.
Solution: emulate shared memory on a multicomputer system using the virtual memory capability, which is referred to as Distributed Shared Memory (DSM).
DSM is achieved by page-based distributed shared memory.
Some problems caused by DSM:
Data consistency
The trade-off of page size: a larger page size increases the communication cost when a memory access misses, while a smaller page size may cause a low memory hit ratio.
False sharing: having data belonging to two independent processes in the same page.

29
Distributed Shared Memory Systems (2/3)
(a) Pages of address space distributed among four machines. (b) Situation after CPU 1 references page 10. (c) Situation if page 10 is read only and replication is used.

30
Distributed Shared Memory Systems (3/3)
False sharing of a page between two independent processes.

31
Network Operating System (1/4)
NOS
In contrast with a DOS, a NOS does not assume that the underlying hardware is homogeneous or that it should be managed as if it were a single system.
Different operating systems
Different kernels
Different hardware
More primitive than a DOS
Compared to a DOS:
Drawbacks: harder to use
Users must log in from one machine to another.
Users must copy files from one machine to another.
Configuration changes, such as passwords or settings, must be made on each machine.
Advantages:
Easy to add or remove a machine in a NOS (machines are highly independent of each other).

32
Network Operating System (2/4)
General structure of a network operating system. (Fig. 1-19)

33
Network Operating System (3/4)
Two clients and a server in a network operating system.

34
Network Operating System (4/4)
Different clients may mount the servers in different places.

35
Positioning Middleware
A synthetic solution between DOS and NOS:
Middleware: To place an additional layer of software between
applications and the network operating system, offering a higher
level of abstraction.
General structure of a distributed system as middleware.

36
Middleware Models
Remote Procedure Calls (RPCs)
Hiding network communication by allowing a process to call a procedure whose implementation is located on a remote machine.
When calling such a procedure, parameters are transparently shipped to the remote machine, where the procedure is executed, after which the results are sent back to the caller (a small sketch of this idea follows below).
Distributed objects
The object itself is located on a single machine, but its interface is made available on other machines.
Distributed documents
The World Wide Web (WWW)
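A minimal, self-contained sketch (not from the slides) of the RPC idea above: the caller invokes an ordinary-looking function, a client-side stub marshals the parameters into a request message, a stand-in transport "delivers" it to the server-side code, and the reply is unmarshalled and returned. All names here (remote_add, transport_send) are purely illustrative.

#include <iostream>
#include <sstream>
#include <string>

// Stand-in for the network transport plus the server-side procedure.
std::string transport_send(const std::string& request) {
    std::istringstream in(request);
    std::string op;
    int a, b;
    in >> op >> a >> b;                 // server-side unmarshalling
    return std::to_string(a + b);       // execute the "add" operation remotely
}

// Client-side stub: looks like a local call, actually ships a message.
int remote_add(int a, int b) {
    std::ostringstream req;
    req << "add " << a << " " << b;     // marshal parameters into a request
    std::string reply = transport_send(req.str());
    return std::stoi(reply);            // unmarshal the result
}

int main() {
    std::cout << remote_add(3, 4) << std::endl;   // prints 7, as if computed remotely
    return 0;
}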

37
Middleware and Openness
In an open middleware-based distributed system, the protocols used by each middleware layer should be the same, as well as the interfaces they offer to applications. (Fig. 1-23)

38
Comparison between Systems
A comparison between multiprocessor operating systems, multicomputer operating systems, network operating systems, and middleware-based distributed systems.

Item                    | DOS (multiproc.) | DOS (multicomp.)    | Network OS | Middleware-based OS
Degree of transparency  | Very high        | High                | Low        | High
Same OS on all nodes    | Yes              | Yes                 | No         | No
Number of copies of OS  | 1                | N                   | N          | N
Basis for communication | Shared memory    | Messages            | Files      | Model specific
Resource management     | Global, central  | Global, distributed | Per node   | Per node
Scalability             | No               | Moderately          | Yes        | Varies
Openness                | Closed           | Closed              | Open       | Open

39
Clients and Servers
General interaction between a client and a server.

40
An Example Client and Server (1)
The header.h file used by the client and server.
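The header.h file itself is not reproduced in this text dump, so the following is only a plausible sketch of what a shared header for a simple request-reply file server might contain; every constant and field name here is illustrative, not taken from the slide.

/* Shared by client and server: defines the message layout they exchange. */
#define MAX_PATH 255          /* maximum length of a file name           */
#define BUF_SIZE 1024         /* maximum payload carried per message     */

/* Illustrative operation codes. */
#define OP_CREATE 1
#define OP_READ   2
#define OP_WRITE  3
#define OP_DELETE 4

struct message {
    long source;              /* sender's identity                       */
    long dest;                /* receiver's identity                     */
    long opcode;              /* which operation is requested            */
    long count;               /* number of bytes to transfer             */
    long offset;              /* position in the file to start I/O       */
    long result;              /* result of the operation                 */
    char name[MAX_PATH];      /* name of the file being operated on      */
    char data[BUF_SIZE];      /* data to be read or written              */
};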

41
An Example Client and Server (2)
A sample server.

42
An Example Client and Server (3)

A client using the server to copy a file. (Fig. 1-27b)

43
Processing Level
The general organization of an Internet search engine into three different layers. (Fig. 1-28)

44
Multitiered Architectures (1)
Alternative client-server organizations (a)-(e). (Fig. 1-29)

45
Multitiered Architectures (2)
An example of a server acting as a client. (Fig. 1-30)

46
Modern Architectures
An example of horizontal distribution of a Web service. (Fig. 1-31)