Transport and Application Layers (Computer Networks presentation)


About This Presentation

Computer Networks


Slide Content

Transport and Application Layers in Computer Networks

A deep dive into the core responsibilities and functioning of the transport and application layers in computer networks

Presented by
Rishiwar Singh
Dinesh Yadav
Swastik Gupta
Semester VI

Introduction to Transport Layer
The transport layer is responsible for providing logical communication between application processes running on different hosts. Unlike the network
layer, which offers host-to-host communication, the transport layer focuses on process-to-process delivery. This allows multiple applications (like a
web browser, email client, or game) to run simultaneously on a device and communicate independently over the Internet.
The key services provided by the transport layer include:

• Reliable data transfer
• Multiplexing and demultiplexing
• Flow control
• Congestion control
• Error detection (with recovery via retransmission)

Two main transport layer protocols are:

1. TCP (Transmission Control Protocol): Offers reliable, connection-oriented communication with acknowledgments and retransmissions.
2. UDP (User Datagram Protocol): Provides connectionless, fast delivery with no guarantee of reliability.

TCP is suitable for applications that require data integrity and correct ordering, like web browsing, file transfer, or banking. UDP, on the other hand, is
ideal for applications that prefer speed over reliability, like online gaming or voice chat.
Real-life Example: Imagine a student watching a YouTube lecture while also using Google Chat. These two applications are handled by the transport
layer independently—YouTube's video stream might use UDP for fast delivery, while Google Chat might use TCP for message reliability. Both services
work smoothly without interfering with each other.

Multiplexing and Demultiplexing

Multiplexing (Sender's Side)
• Combines data from different applications
• Adds headers with source and destination port numbers
• Passes these segments to the network layer

Demultiplexing (Receiver's Side)
• Examines headers of incoming segments
• Uses port numbers to deliver data to the correct application

Common Port Numbers
• Port 80 for HTTP
• Port 25 for SMTP (email)
• Port 53 for DNS

Multiplexing and demultiplexing are key responsibilities of the transport layer that allow multiple communication processes to
coexist on the same device.
This mechanism enables multiple apps to share a network connection while keeping their data separate.

Real-life Example: When a user downloads files and checks email at the same time, the device uses port numbers to know which
data belongs to the browser and which to the email client. This avoids data mix-up and ensures smooth multitasking.
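
To make demultiplexing concrete, here is a minimal Python sketch (not part of the original slides; the port numbers 8080 and 2525 are arbitrary stand-ins for the well-known ports, which require privileges to bind):

```python
import socket

# Two UDP sockets on the same host, each bound to a different port. The OS uses
# the destination port in each incoming datagram to hand it to the right socket.
web_like = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
web_like.bind(("127.0.0.1", 8080))   # hypothetical "browser-style" service port

mail_like = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mail_like.bind(("127.0.0.1", 2525))  # hypothetical "email-style" service port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"page request", ("127.0.0.1", 8080))
sender.sendto(b"new message", ("127.0.0.1", 2525))

print(web_like.recvfrom(1024))   # only the datagram addressed to port 8080
print(mail_like.recvfrom(1024))  # only the datagram addressed to port 2525
```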

UDP – User Datagram Protocol

• No connection setup: data can be sent immediately.
• Unreliable delivery: no acknowledgment, retransmission, or flow control.
• Low overhead: small headers make it lightweight.
• Faster than TCP: in many scenarios, due to reduced processing.

UDP (User Datagram Protocol) is one of the core transport layer protocols. It offers a simple, fast, and connectionless communication method. Unlike TCP,
it does not establish a connection before data transfer, and it doesn't guarantee delivery, order, or error correction.
UDP is used in applications where:
• Speed is more important than reliability
• Occasional data loss is acceptable
• The application itself handles any necessary error correction

Common uses of UDP include:

• DNS (Domain Name System) lookups
• Online gaming
• Live audio or video streaming
• Voice over IP (VoIP)

UDP is preferred when low latency is critical and the network is relatively reliable, or where the application can handle losses or retransmissions itself.

Real-life Example: When you type a web address (like www.example.com) into your browser, your device sends a DNS query to find the website's IP
address. This query is sent using UDP. Since it's just a small request and response, there's no need for the added complexity of TCP. UDP helps speed
things up by avoiding connection setup.
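
The DNS-over-UDP example can be sketched in a few lines of Python (not part of the original slides; it assumes network access and uses the public resolver 8.8.8.8 as an illustrative destination):

```python
import socket
import struct

def build_dns_query(name: str) -> bytes:
    # Header: ID, flags (recursion desired), 1 question, 0 answer/authority/additional records
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: the name as length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1)
    qname = b"".join(bytes([len(part)]) + part.encode() for part in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", 1, 1)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(build_dns_query("www.example.com"), ("8.8.8.8", 53))  # no connection setup needed
reply, _ = sock.recvfrom(512)
print(len(reply), "bytes of DNS response")
```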

UDP Segment Structure

A UDP segment consists of two parts: 1. Header (8 bytes) 2. Data

There is no sequence number or acknowledgment field like in TCP, which is why UDP is faster and simpler. The checksum provides
basic error detection, but if an error is found, the packet is simply discarded. No retransmission occurs.
Applications using UDP often have their own mechanisms to handle data recovery, if necessary. But many real-time apps skip this
altogether, as late data is often worse than lost data.
Real-life Example: In online voice chat (like in multiplayer games or VoIP apps), packets of speech are sent using UDP. If a few
packets are lost or arrive late, they're skipped rather than retransmitted. This avoids delays and keeps the conversation flowing
naturally—even if a word or two is missed.
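
As a sketch of the 8-byte header described above (not from the slides; it uses the standard UDP fields of source port, destination port, length, and checksum, each 16 bits in network byte order):

```python
import struct

# Build a UDP segment: 8-byte header followed by the data. A checksum of 0 in
# UDP over IPv4 means "not computed"; it is left at 0 here for simplicity.
def udp_segment(src_port: int, dst_port: int, data: bytes) -> bytes:
    length = 8 + len(data)
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + data

seg = udp_segment(5000, 53, b"query...")
print(len(seg))  # 8-byte header + payload; no sequence or ACK fields, unlike TCP
```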

Principles of Reliable Data Transfer

Reliable data transfer ensures that data is delivered completely, in the correct order, and without duplication. This is critical for applications where missing or corrupted data is unacceptable (e.g., file
transfers, emails, banking).
To achieve this, reliable protocols use:

• Sequence numbers to track packet order
• Acknowledgments (ACKs) to confirm receipt
• Timers to trigger retransmissions if ACKs are not received
• Checksums to detect corruption

Types of reliable transfer methods:

• Stop-and-Wait: One packet sent at a time, waits for ACK before sending the next.
• Pipelining (used in advanced protocols like Go-Back-N and Selective Repeat): Multiple packets sent before receiving ACKs, improving efficiency.

The protocol must handle:

• Lost packets
• Duplicate packets
• Out-of-order packets

Real-life Example: When uploading a document to a cloud drive (like Google Drive), the file must arrive in full and uncorrupted. The underlying protocol ensures that every byte is received and
reassembled in the correct order. If part of the upload is lost due to a network glitch, it's automatically resent until the complete file is received.

Stop-and-Wait Protocol


1. Send Packet: sender transmits one packet
2. Wait for ACK: sender waits for acknowledgment
3. Receive ACK: confirmation of successful delivery
4. Next Packet / Retransmit: send the next packet, or retransmit if a timeout occurs

The Stop-and-Wait protocol is the most basic method of ensuring reliable data transfer. It works in a simple sequence:

1. The sender sends one packet.
2. It waits for an acknowledgment (ACK) from the receiver.
3. Once the ACK is received, the sender transmits the next packet.

If the sender doesn't receive an ACK within a certain time (due to packet loss or error), it retransmits the same packet. The receiver
uses sequence numbers (usually 0 and 1 alternately) to identify duplicate packets and discard them.
While Stop-and-Wait ensures reliability, it has a major drawback: inefficiency, especially over high-latency or high-bandwidth links. The
sender is idle while waiting for the ACK, which wastes bandwidth. This protocol is mostly theoretical or used in systems where data is
sent in small amounts and high speed is not critical.
Real-life Example: Imagine you're texting someone via a satellite connection. You send a message and wait to hear back before sending
another. If the reply is delayed, you're stuck waiting. That's how Stop-and-Wait behaves—simple, but slow when delays are long (like in
satellite communication).
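
A minimal Python sketch of the Stop-and-Wait sender (not from the slides; it assumes a hypothetical peer that replies with an ACK carrying the same 0/1 sequence bit over UDP):

```python
import socket

def stop_and_wait_send(sock: socket.socket, peer, packets, timeout=1.0):
    sock.settimeout(timeout)
    seq = 0
    for data in packets:
        while True:
            sock.sendto(bytes([seq]) + data, peer)       # send one packet
            try:
                ack, _ = sock.recvfrom(64)               # wait for the ACK
                if ack and ack[0] == seq:                # correct ACK: move on
                    break                                # otherwise resend (duplicate/old ACK)
            except socket.timeout:
                pass                                     # timeout: retransmit the same packet
        seq ^= 1                                         # alternate sequence bit 0/1
```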

Reliable Transport Protocols: TCP and UDP
The transport layer plays a vital role in enabling smooth communication
between applications across networks. It sits between the application and
network layers and provides end-to-end process-level communication.

Pipelined Reliable Transfer Protocols
To overcome the limitations of Stop-and-Wait, more efficient protocols use pipelining. This means the sender can transmit
multiple packets before needing acknowledgments.
Go-Back-N (GBN)
• Sender can send N packets without receiving ACKs
• If a packet is lost, the receiver discards that and all following packets
• The sender must go back and resend all packets from the lost one onward

Selective Repeat (SR)
• The receiver buffers and acknowledges correctly received packets, even if some are missing
• The sender only retransmits the lost packets
• This makes SR more efficient than GBN, especially in networks with higher packet loss

Sliding Windows
Both protocols use sliding windows to keep track of packets being sent and acknowledged


Real-life Example: You're watching a YouTube video. If one chunk doesn't arrive, the app re-fetches only that specific chunk (like
Selective Repeat). It doesn't reload the entire video from the beginning (which would be like Go-Back-N). This makes streaming
smooth even on unreliable networks.
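
The Go-Back-N idea can be sketched as follows (not from the slides; `send` and `recv_ack` stand in for the underlying channel and are supplied by the caller):

```python
# Up to N unacknowledged packets may be "in flight"; on a timeout the whole
# window is resent starting from `base` (the oldest unacknowledged packet).
def go_back_n(packets, N, send, recv_ack):
    base = 0                 # oldest unacknowledged packet
    next_seq = 0             # next packet to send
    while base < len(packets):
        # Fill the window: keep sending while fewer than N packets are outstanding.
        while next_seq < len(packets) and next_seq < base + N:
            send(next_seq, packets[next_seq])
            next_seq += 1
        ack = recv_ack()     # cumulative ACK number, or None on timeout
        if ack is None:
            next_seq = base  # timeout: go back and resend the whole window
        else:
            base = max(base, ack + 1)   # slide the window forward
```

Selective Repeat differs only in that the receiver buffers out-of-order packets and the sender retransmits just the missing ones.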

TCP Overview
TCP (Transmission Control Protocol) is the most widely used transport protocol on the Internet.


• Reliable delivery: TCP ensures all data arrives at its destination
• In-order data arrival: data is delivered in the same sequence it was sent
• Full-duplex communication: data can flow in both directions simultaneously
• Flow and congestion control: prevents overwhelming receivers and networks
• Connection-oriented setup: establishes a connection before data transfer

Before data transfer begins, TCP establishes a connection using a three-way handshake: 1. Client sends a SYN (synchronize)
message. 2. Server replies with SYN-ACK. 3. Client sends an ACK.
Once established, data is sent in byte streams (not discrete messages). Each byte is numbered, and acknowledgments track which
bytes have been received. TCP ensures that: • Lost packets are retransmitted. • Out-of-order packets are reassembled. • Duplicate
data is discarded.
TCP also uses flow control to avoid overwhelming the receiver, and congestion control to prevent network overload.

Real-life Example: Logging into a banking website uses TCP. It's critical that your password and account data arrive completely and
securely, in the correct order. TCP's reliability ensures that your login isn't affected by temporary network hiccups.
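
A minimal Python sketch of a TCP client (not from the slides): the three-way handshake happens inside connect(), send()/recv() then exchange a reliable, ordered byte stream, and closing the socket starts the FIN/ACK teardown.

```python
import socket

with socket.create_connection(("example.com", 80), timeout=5) as sock:  # SYN, SYN-ACK, ACK
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = sock.recv(4096)                  # bytes arrive reliably and in order
    print(reply.splitlines()[0])             # e.g. b'HTTP/1.1 200 OK'
```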

TCP Flow Control
Flow control ensures the sender doesn't overwhelm the receiver with too much data at once. TCP achieves this using a sliding
window protocol.




1. Receiver Advertises Window Size: the receiver advertises a window size (rwnd), telling the sender how many bytes it can handle
2. Sender Limits Data Transmission: the sender must limit the data sent to within this window
3. Window Slides Forward: as the receiver processes and acknowledges data, the window slides forward, allowing more data to be sent
This prevents buffer overflow, where the receiver gets more data than it can process. TCP's flow control is receiver-driven, meaning
the receiver controls how much data it can accept. The advertised window size is sent in each TCP segment's header.
Real-life Example: You're downloading a high-resolution image on your phone, but your phone is busy running several apps. TCP
flow control lets the server know to slow down the data flow so your phone doesn't get overloaded—ensuring a smooth and
complete download without crashes.

TCP Connection Management
TCP connection management includes both establishing and terminating connections.


Connection Establishment – 3-Way Handshake
1. Client sends SYN
2. Server replies with SYN-ACK
3. Client sends ACK
This ensures both parties are ready and agree on initial sequence numbers before data transmission begins.

Data Transfer
Both sides exchange data according to TCP rules.

Connection Termination – 4-Step Process
1. One side sends a FIN (I'm done sending)
2. The other side responds with an ACK
3. The receiver then sends its own FIN
4. The original sender replies with an ACK, completing closure
This process ensures that both sides gracefully close the connection and that no data is lost during shutdown.

TCP also handles special cases like: • Simultaneous open (both sides send SYN) • Half-close (one side stops sending but still receives)

Real-life Example: When you open a browser tab and visit a secure site, TCP sets up a connection using the handshake. When you close the tab, TCP
cleanly tears down the connection so no part of the webpage or data is left hanging or lost.

TCP Congestion Control
Congestion control in TCP helps prevent network overload by adjusting how fast data is sent. If too much data is injected into the
network at once, it can cause routers to become congested, leading to packet loss, delays, and retransmissions.

• Slow Start: starts with a small congestion window (cwnd); doubles cwnd each round-trip time (RTT) to probe network capacity
• Congestion Avoidance: once a threshold is reached, growth becomes linear, not exponential
• AIMD (Additive Increase, Multiplicative Decrease): gradually increases the sending rate; reduces it sharply when congestion is detected
• Loss Detection: if three duplicate ACKs are received, Fast Retransmit; if a timeout occurs, cwnd is reset and slow start restarts

This adaptive behavior ensures fair bandwidth sharing among multiple TCP streams and maintains network stability.

Real-life Example: When you're on a video call and your internet becomes slow, the quality of the video drops. That's because TCP
detects congestion and reduces the sending rate to avoid dropping more packets, ensuring at least some data continues to flow.
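
A minimal sketch (not from the slides) of how cwnd evolves under these rules: slow start doubles cwnd each RTT until the threshold, congestion avoidance then adds one segment per RTT, and a loss event halves the window (AIMD). Units are MSS-sized segments.

```python
def next_cwnd(cwnd, ssthresh, loss):
    if loss:                       # loss detected: multiplicative decrease
        ssthresh = max(cwnd / 2, 1)
        return ssthresh, ssthresh
    if cwnd < ssthresh:            # slow start: exponential growth per RTT
        return cwnd * 2, ssthresh
    return cwnd + 1, ssthresh      # congestion avoidance: additive increase per RTT

cwnd, ssthresh = 1.0, 16.0
for rtt, loss in enumerate([False] * 6 + [True] + [False] * 4):
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss)
    print(f"RTT {rtt + 1}: cwnd = {cwnd:g}")
```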

TCP Timeout and Retransmission
TCP uses timeouts and retransmissions to ensure data reliability. If a segment is lost or delayed, TCP resends it after a timeout. But
setting this timeout is tricky—too short causes unnecessary retransmissions, too long delays recovery.
• RTT Estimation: calculates an average RTT from SampleRTT values
• Timeout Calculation: TimeoutInterval = EstimatedRTT + 4 * DevRTT
• Retransmission: if no ACK is received within this interval, TCP retransmits the segment
• Fast Retransmit: resends lost data quickly upon receiving duplicate ACKs, without waiting for a timeout

This adaptive mechanism balances responsiveness and stability.

Real-life Example: You're uploading an assignment, and your Wi-Fi briefly drops. TCP waits a moment, notices there's no ACK, and
then automatically resends the missing data—ensuring the full file reaches the server without your input.
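
A sketch of the estimator behind TimeoutInterval = EstimatedRTT + 4 * DevRTT (not from the slides; the smoothing weights alpha = 0.125 and beta = 0.25 are the usual textbook/RFC 6298 values, and the sample RTTs are hypothetical):

```python
def update_timeout(estimated_rtt, dev_rtt, sample_rtt, alpha=0.125, beta=0.25):
    # Exponentially weighted moving averages of the RTT and its deviation.
    estimated_rtt = (1 - alpha) * estimated_rtt + alpha * sample_rtt
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - estimated_rtt)
    timeout_interval = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout_interval

est, dev = 0.100, 0.025                    # seconds
for sample in (0.105, 0.180, 0.095):       # hypothetical measured RTTs
    est, dev, timeout = update_timeout(est, dev, sample)
    print(f"EstimatedRTT={est:.3f}s  DevRTT={dev:.3f}s  Timeout={timeout:.3f}s")
```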

TCP vs UDP Comparison
TCP and UDP offer different strengths. Choosing between them depends on the needs of the application.

Feature       | TCP                        | UDP
Connection    | Connection-oriented        | Connectionless
Reliability   | Guaranteed                 | Not guaranteed
Order         | Maintains order            | May arrive out of order
Speed         | Slower (due to checks)     | Faster (no checks)
Overhead      | High (20+ byte headers)    | Low (8-byte headers)
Applications  | Web, Email, File Transfer  | DNS, Streaming, Gaming

TCP is suitable when: • Reliability and order are critical (e.g., login forms, file downloads)

UDP is suitable when: • Speed is more important than perfect delivery (e.g., live games, VoIP)

Key responsibilities of the transport layer: • Process-to-process delivery • Reliable data transfer (TCP) • Fast, lightweight transfer (UDP) • Flow and
congestion control (TCP) • Multiplexing and demultiplexing
This layer adds headers to the data from the application layer, which helps in ensuring that data gets delivered to the correct application
(demultiplexing), and handles any necessary retransmissions, acknowledgments, or error checks.
Real-life Example: Online banking apps use TCP to ensure secure, accurate transmission of your data. On the other hand, games like PUBG use UDP,
where slight data loss is better than delayed movement or voice. When you're downloading a PDF from your email (TCP) and streaming a music
playlist (UDP) at the same time, your computer uses the transport layer to apply different protocols suited for each app—ensuring reliable file
delivery and seamless music playback.

Introduction to the Application Layer
The application layer is the topmost layer of the Internet protocol stack. It provides services that directly support user applications
like browsers, email clients, file transfer tools, and messaging apps.
Responsibilities of the application layer:

• Interface between user software and the network
• Provides protocols for specific tasks
• Defines message formats, syntax, and semantics
Common application-layer protocols:
• HTTP: For web browsing
• SMTP, POP3, IMAP: For email
• FTP: For file transfers
• DNS: For domain name resolution
• HTTPS: Secure version of HTTP

It interacts with the transport layer via sockets, where each application process is identified by an IP address and port number
combination. The application layer doesn't concern itself with how data is sent—just that the right data reaches the right
application.
Real-life Example: When you open a browser and type www.google.com, the application layer triggers DNS to resolve the
domain name and then HTTP (or HTTPS) to fetch the web page. You just see the site appear, but under the hood, multiple
protocols work in sync.

Principles of Application Layer Protocols

Protocol Definitions
Application-layer protocols govern how network applications communicate. These protocols define:
• Message structure
• Message types (e.g., request, response)
• Syntax and semantics
• Rules for data exchange

Client-Server Model
• Server is always-on with a known IP
• Client initiates requests
• Used by websites, email servers, etc.
• Examples: HTTP, FTP, SMTP

Peer-to-Peer (P2P) Model
• No dedicated server
• Peers both send and receive data
• More scalable but harder to manage
• Examples: BitTorrent, Skype
These architectures determine how the application-layer protocol is designed. Client-server models are easier to control and secure,
while P2P models offer better scalability and resource distribution.
Real-life Example: When using BitTorrent to download a movie, your computer connects to many other users (peers) and
downloads different parts of the file from them simultaneously. This decentralized setup uses a P2P protocol for efficient large-file
sharing.

The Web and HTTP
1. Client Request: browser sends an HTTP request to the server
2. Server Processing: server processes the request and prepares a response
3. Response Delivery: server sends an HTTP response with the content
4. Content Rendering: browser renders the received content


HTTP (HyperText Transfer Protocol) is the foundational application-layer protocol used to transfer web pages and resources between web browsers (clients) and servers. It's a request-response protocol, typically
running over TCP.

Key features:

• Stateless: Each HTTP request is independent. The server doesn't retain user data between requests.
• Runs on TCP: Typically on port 80 (or 443 for HTTPS)
• Supports methods like:
• GET: Request data
• POST: Submit data
• HEAD: Retrieve headers only
Types of HTTP Connections:
1. Non-Persistent HTTP (HTTP/1.0)
• A separate TCP connection for each object
• Higher delay due to multiple handshakes
2. Persistent HTTP (HTTP/1.1)
• A single connection reused for multiple requests
• Reduces delay and overhead

HTTP headers provide important metadata (e.g., browser type, language, cache-control). Responses include status codes like 200 OK, 404 Not Found, etc.

Real-life Example: When you click on a news article, your browser sends a GET request to the server. The server responds with HTML content, images, and videos—all over HTTP. If it's using HTTP/1.1, your browser
uses one connection to load all items efficiently.
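
A minimal Python sketch of this (not from the slides): two GET requests issued over one persistent HTTP/1.1 connection, instead of opening a new TCP connection per object.

```python
import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=5)

conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)    # e.g. 200 OK
resp.read()                        # drain the body so the connection can be reused

conn.request("GET", "/favicon.ico")   # second object, same TCP connection
resp = conn.getresponse()
print(resp.status, resp.reason)       # e.g. 200 OK, or 404 Not Found
conn.close()
```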

Example HTTP response status line: HTTP/1.1 200 OK
HTTP Message Format and Status Codes

HTTP communication involves two main types of messages: requests (sent by the client) and responses (sent by the server).
These codes help the client understand what happened to the request.
Real-life Example: If you type a URL wrong and the page doesn't exist, the server replies with a 404 Not Found error. But if the
page exists, you'll get a 200 OK along with the webpage content.

Web Caching and Content Delivery Optimization


1. Client Request: the browser checks its local cache first
2. Cache Check: if not in the local cache, a proxy cache is checked
3. CDN Delivery: if needed, the request goes to the nearest CDN node
4. Origin Server: only accessed if the content is not in any cache



Web caching and Content Delivery Networks (CDNs) help optimize web performance by storing copies of web content closer to users, reducing latency and server load.

Web Caching:

• Saves copies of frequently accessed resources (e.g., images, stylesheets)
• Browser or proxy checks if the cached version is still valid using headers like:
• If-Modified-Since
• ETag
• If unchanged, server returns 304 Not Modified and browser uses cached copy

Content Delivery Networks (CDNs):

• Global network of distributed servers
• Stores content (e.g., videos, files, websites) across multiple nodes
• DNS routes users to the closest CDN node
• Reduces delays and balances load

Caching reduces the need to re-download resources every time, saving bandwidth and speeding up browsing.

Real-life Example: The first time you visit a shopping site, it loads images from the main server. Next time, those images load instantly from your browser's cache or a nearby
CDN server—making the site feel much faster.
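
A minimal sketch of cache revalidation with a conditional GET (not from the slides; the validator values are hypothetical; the ETag is sent back in the If-None-Match request header):

```python
import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=5)
conn.request("GET", "/", headers={
    "If-Modified-Since": "Mon, 06 Oct 2025 08:00:00 GMT",
    "If-None-Match": '"abc123"',        # ETag remembered from an earlier response
})
resp = conn.getresponse()
if resp.status == 304:
    print("Not modified: serve the cached copy")   # no body is re-downloaded
else:
    body = resp.read()                  # resource changed: refresh the cache
    print(resp.status, len(body), "bytes")
conn.close()
```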

Cookies and Maintaining State

1. Initial Visit: user visits the website for the first time
2. Cookie Creation: server sends a Set-Cookie header with its response
3. Cookie Storage: browser stores the cookie locally
4. Subsequent Visits: browser sends the cookie with each request to the same site

HTTP is stateless, meaning each request is processed independently. This creates a challenge for applications that require session tracking (e.g., staying logged in, shopping carts). To maintain
user state, websites use cookies.
What is a Cookie? A cookie is a small piece of data stored by the browser and sent with every request to the same server. It helps the server recognize the user across multiple requests.


Cookie Exchange Process:

1. User visits a website.
2. Server responds with a Set-Cookie header.
3. Browser stores the cookie and sends it back with future requests in a Cookie: header.

Cookie Fields:

• Name=Value pair
• Expiration date
• Domain and path restrictions
• Security flags (Secure, HttpOnly, SameSite)

Security Risks & Protections:

• Cross-site scripting (XSS): Mitigated using HttpOnly.
• Cross-site request forgery (CSRF): Limited using SameSite.
• Eavesdropping: Prevented using Secure cookies over HTTPS.

Real-life Example: You log into Amazon and add items to your cart. Even if you close and reopen the browser, your cart stays intact because the site uses cookies to remember your session
state.
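
A minimal Python sketch of the exchange (not from the slides; the site and "/cart" path are hypothetical, and real browsers handle multiple Set-Cookie headers more carefully than this):

```python
import http.client
from http.cookies import SimpleCookie

conn = http.client.HTTPConnection("example.com", 80, timeout=5)
conn.request("GET", "/")
resp = conn.getresponse()
resp.read()                                             # finish the first response

jar = SimpleCookie(resp.getheader("Set-Cookie", ""))    # parse Name=Value plus attributes
cookie_header = "; ".join(f"{name}={morsel.value}" for name, morsel in jar.items())

headers = {"Cookie": cookie_header} if cookie_header else {}
conn.request("GET", "/cart", headers=headers)           # cookie sent back: server recognizes the session
print(conn.getresponse().status)
conn.close()
```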

Email Protocols – SMTP, POP3, and IMAP

Email communication relies on three main application-layer protocols, each serving a specific purpose:

SMTP (Simple Mail Transfer Protocol)

• Used to send emails
• Works between mail servers and from client to server
• Uses TCP port 25
• Push protocol: sender pushes the email to the server

POP3 (Post Office Protocol v3)

• Used to retrieve emails
• Downloads messages from server and optionally deletes them
• Simple, suitable for offline reading
• Uses TCP port 110

IMAP (Internet Message Access Protocol)

• Allows multiple-device email access
• Messages stay on the server
• Users can create folders, mark messages as read/unread
• More complex than POP3
• Uses TCP port 143

SMTP handles the sending side, while POP3 or IMAP handle the receiving.

Real-life Example: Alice sends an email to Bob using SMTP. Bob opens it on his phone using IMAP, and later sees the same email marked as "read" on his laptop. That's the power of IMAP's synchronization across devices.
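
A minimal sketch of the sending side in Python (not from the slides; the server name and addresses are hypothetical, and real mail servers typically require authentication and/or TLS rather than plain port 25):

```python
import smtplib
from email.message import EmailMessage

# SMTP is the "push" protocol: the client pushes the message to a mail server.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello"
msg.set_content("Sent with SMTP; Bob will fetch it later with POP3 or IMAP.")

with smtplib.SMTP("mail.example.com", 25, timeout=10) as server:
    server.send_message(msg)
```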

Web Security – HTTPS, Cookies, and Threat Protection

1. Secure Connection Initiation: the browser initiates a secure session with the server using a TLS handshake
2. Certificate Verification: the server presents a digital certificate, which the browser validates against trusted certificate authorities
3. Key Exchange: a secure session key is generated for encrypting all subsequent communication
4. Encrypted Data Transfer: all HTTP traffic is now encrypted, protecting against eavesdropping and tampering


Security is critical when sensitive data (like passwords, credit card numbers, or personal info) is exchanged over the web. Standard HTTP is not secure—anyone on
the network path can intercept, read, or modify traffic. To fix this, we use HTTPS, which is HTTP over SSL/TLS.
HTTPS (HTTP Secure)

• Encrypts all HTTP traffic using SSL/TLS
• Protects against eavesdropping and tampering
• Uses TCP port 443
• Relies on digital certificates for server authentication

Cookie Security Flags:

• Secure: Send cookie only over HTTPS.
• HttpOnly: Prevent access via JavaScript (reduces XSS).
• SameSite: Prevents cross-site requests (helps stop CSRF attacks).

Together, HTTPS and secure cookies protect data during transit and reduce the chances of hijacking or impersonation.

Real-life Example: When you log in to your bank's website, the browser checks the server's certificate. Once validated, it establishes an encrypted connection. A
secure session cookie is created—protected by HttpOnly and Secure flags—to maintain your login safely.
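
A minimal Python sketch of HTTPS from the client side (not from the slides): the TLS handshake and certificate validation against the system's trusted CAs happen inside wrap_socket(); everything sent afterwards is encrypted.

```python
import socket
import ssl

context = ssl.create_default_context()          # verifies certificates and hostnames
with socket.create_connection(("example.com", 443), timeout=5) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())                    # e.g. TLSv1.3
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(4096).splitlines()[0])   # status line arrives over the encrypted channel
```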

Dynamic Web Content and Scripting
Web pages have evolved from static documents to highly interactive
applications. This is achieved through dynamic content using scripting—
both on the server side and client side.

Server-Side and Client-Side Scripting

Server-Side Scripting
• Generates custom content on the fly before sending it to the client
• Technologies include PHP, JSP, ASP.NET, Python (Django), etc.
• Often interacts with databases
• Sends HTML dynamically based on user input or state

Client-Side Scripting
• Runs inside the browser
• Most common: JavaScript
• Enables real-time interaction without reloading the page
• Often used with AJAX (Asynchronous JavaScript and XML)

AJAX
• Allows background HTTP requests
• Dynamically updates page content without reloading
• Speeds up user interactions

These technologies enable modern, responsive web apps like Gmail, Facebook, or Google Docs.

Real-life Example: When you check your email in Gmail, new messages load automatically without refreshing the entire page.
JavaScript + AJAX quietly communicates with the server to retrieve just the new content—saving time and improving experience.

Content Delivery and Web Performance

Web Caching (Recap)
• Saves copies of frequently used resources (e.g., images, CSS, scripts)
• Reduces server load and improves client response time

CDNs
• Network of geographically distributed cache servers
• Serve static content (e.g., video, images, scripts) from nearby locations
• Use DNS redirection to direct users to the closest node
• Examples: Cloudflare, Akamai, Amazon CloudFront

Peer-to-Peer (P2P)
• Users share data directly (each acts as client and server)
• No centralized server
• Efficient for large file distribution
• Used in torrenting, distributed backups, etc.
Together, caching, CDNs, and P2P architectures help scale the Internet by reducing congestion, improving download speeds, and
decentralizing load.
Real-life Example: When you watch a Netflix show or YouTube video, the content is delivered from a local CDN node nearby—not
from the main server. This reduces delay, avoids buffering, and improves the viewing experience.

Streaming and Real-Time Applications

Streaming and real-time communication applications require timely delivery of audio/video data. Unlike file transfers, they can tolerate some packet loss but cannot
afford long delays. That's why they often use UDP and specialized protocols like RTP, RTSP, and SIP.
Types of Streaming:

1. Stored Streaming: Pre-recorded files streamed from a server (e.g., YouTube)

2. Live Streaming: Content broadcast in real time (e.g., webinars, live sports)
Real-Time Protocols:
• RTP (Real-Time Transport Protocol): Works on top of UDP, adds sequence numbers and timestamps, used in voice and video apps

• RTSP (Real-Time Streaming Protocol): Controls media playback (play, pause, etc.)

• SIP (Session Initiation Protocol): Establishes and manages multimedia sessions (like VoIP calls)
These protocols work together to deliver real-time performance without waiting for retransmissions.
Buffering: Clients use buffers to smooth out delivery delays; late packets may be discarded rather than retried.

Real-life Example: When you watch a live football match online, RTP delivers the video using UDP. If a packet is lost, it's skipped—better to miss a few frames than to
delay the stream. The result: smooth, real-time playback without interruptions.

Remote Procedure Call and Web Services

1. Client Call: client calls a function
2. Marshalling: the local stub packages the request
3. Network Transfer: the request is sent over the network
4. Server Execution: the server executes the call and returns the result

Remote Procedure Call (RPC) allows programs to run code on another machine as if it were local. It abstracts the details of network
communication, letting developers focus on the logic.

Web Services – Modern RPC:

1. SOAP (Simple Object Access Protocol): XML-based messaging, works over HTTP, uses WSDL to describe services, formal and
robust
2. REST (Representational State Transfer): Resource-based model, uses HTTP verbs (GET, POST, PUT, DELETE), lightweight and
stateless, common for modern APIs (e.g., Twitter, Facebook)
REST is simpler and faster, making it popular for web and mobile apps.

Real-life Example: When you book a flight online, the airline's site may use a REST API to talk to a hotel booking service. A function
like GET /hotels?location=Paris retrieves results via HTTP—just like calling a local function, but it runs remotely.
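
A minimal Python sketch of such a REST call (not from the slides; the endpoint URL is a hypothetical placeholder): the "remote function call" is simply an HTTP GET on a resource URL.

```python
import json
import urllib.request

url = "https://api.example.com/hotels?location=Paris"
with urllib.request.urlopen(url, timeout=5) as resp:   # GET is the default method
    print(resp.status)                                  # e.g. 200
    hotels = json.load(resp)                            # REST APIs commonly return JSON
    print(hotels)
```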

Domain Name System (DNS)
• Root DNS Servers: top of the hierarchy
• TLD Servers: manage .com, .org, etc.
• Authoritative Servers: host the actual domain data
• Local Resolvers: used by ISPs or the OS


DNS (Domain Name System) is the Internet's "phone book." It maps human-readable domain names (like www.example.com) to IP addresses (like 192.0.2.1), which computers use to communicate.

Query Types:

• Recursive: Resolver asks each level on behalf of the client.

• Iterative: Resolver gets referrals, then queries the next server itself.

DNS Records:

• A: Maps name to IPv4 address

• MX: Mail server for the domain

• CNAME: Canonical name (alias)

• NS: Name server

• PTR: Reverse lookup

DNS responses are cached to reduce query load and delay.

Real-life Example: When you type www.cs.washington.edu, your browser asks DNS to find the IP. Your system's DNS resolver contacts root → .edu → washington.edu → cs.washington.edu servers to get
the final address and connect you.
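
From an application's point of view, the whole hierarchy is hidden behind one resolver call. A minimal Python sketch (not from the slides): the local resolver walks root, TLD, and authoritative servers (or answers from its cache) and returns the address records.

```python
import socket

# Ask the local resolver to map a name to IP addresses: the everyday "phone book" lookup.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 443,
                                                    proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])   # e.g. AF_INET 93.184.216.34 (A / AAAA records)
```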

Application Layer Summary and Review

Protocol            | Purpose
HTTP / HTTPS        | Web browsing (HTTPS adds encryption)
FTP                 | File transfer
SMTP / POP3 / IMAP  | Email sending and reading
DNS                 | Name resolution
RTP / RTSP / SIP    | Real-time streaming and voice
SOAP / REST         | Web APIs and remote calls
The Application Layer is the topmost layer in the Internet protocol stack, closest to end-users. It provides services that support applications like email, browsing, and file sharing.

Key Takeaways:

• Provides interfaces for apps to communicate over the network

• Uses transport layer (TCP/UDP) for data delivery

• Defines protocols, message formats, and rules for data exchange

• Supports both client-server and P2P architectures
Protocol Selection:
• TCP for reliable delivery (email, login, download)

• UDP for speed and real-time use (gaming, VoIP)

These protocols are designed with the application's needs in mind—balancing performance, security, and complexity.
Real-life Example: When you open https://www.google.com:
1. DNS resolves the domain to an IP address.

2. TCP establishes a reliable connection.

3. HTTPS ensures secure data exchange.

4. HTTP fetches the webpage content.

All layers work together to create a fast, safe user experience.

References and Source
This slide series was based strictly on the content from:

Textbook Source: UNIT 5 – Computer Networks (Transport and Application Layers) [Provided PDF: "unit 5 Computer network.pdf"]
All definitions, explanations, and real-life examples are extracted or paraphrased directly from the textbook PDF.

This presentation is presented by:
Rishiwar Singh
Swastik Gupta
Dinesh Yadav