virtual-metaverse---project-proposal.pptx

Unit III: Virtualization
MARK AJ L. CASPILLO, Instructor

Virtualization is technology that you can use to create virtual representations of servers, storage, networks, and other physical machines. Virtualization software mimics the functions of physical hardware so that multiple virtual machines can run simultaneously on a single physical machine.

VIRTUALIZATION OF CPU, MEMORY, AND I/O DEVICES

Hardware Support for Virtualization
Modern operating systems and processors permit multiple processes to run simultaneously. If a processor had no protection mechanism, instructions from different processes could access the hardware directly and crash the system. Therefore, all processors have at least two modes, user mode and supervisor mode, to ensure controlled access to critical hardware. Instructions that run in supervisor mode are called privileged instructions; all other instructions are unprivileged. In a virtualized environment, it is more difficult to make OSes and applications run correctly because there are more layers in the machine stack.

2. CPU Virtualization
A VM is a duplicate of an existing computer system in which a majority of the VM instructions are executed on the host processor in native mode. Thus, unprivileged instructions of VMs run directly on the host machine for higher efficiency. Other, critical instructions must be handled carefully for correctness and stability. The critical instructions are divided into three categories: privileged instructions, control-sensitive instructions, and behavior-sensitive instructions.

The Critical Instructions
Privileged instructions execute in a privileged mode and are trapped if executed outside this mode. Control-sensitive instructions attempt to change the configuration of the resources used. Behavior-sensitive instructions behave differently depending on the configuration of resources, including load and store operations over the virtual memory.

A CPU architecture is virtualizable if it supports the ability to run the VM's privileged and unprivileged instructions in the CPU's user mode while the VMM runs in supervisor mode. When the privileged instructions, including control- and behavior-sensitive instructions, of a VM are executed, they are trapped in the VMM. In this case, the VMM acts as a unified mediator for hardware access from the different VMs, guaranteeing the correctness and stability of the whole system.
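
A minimal, hypothetical sketch of this trap-and-emulate idea in Python (the opcode set, VMM class, and dispatch function are illustrative, not a real hypervisor API):

    PRIVILEGED = {"HLT", "LGDT", "OUT"}            # sample privileged/sensitive opcodes

    class VMM:
        def emulate(self, opcode):
            # The trap hands control to the VMM, which applies the effect to the
            # VM's virtual state instead of the real hardware.
            print(f"trap: VMM emulates {opcode}")

    def dispatch(opcode, vmm):
        if opcode in PRIVILEGED:
            vmm.emulate(opcode)                    # trapped and mediated by the VMM
        else:
            print(f"{opcode} runs natively on the host CPU")   # unprivileged fast path

    vmm = VMM()
    dispatch("ADD", vmm)    # unprivileged: executes directly on the host
    dispatch("HLT", vmm)    # privileged: trapped into the VMM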

2.1 Hardware-Assisted CPU Virtualization
This technique attempts to simplify virtualization, because full virtualization and para-virtualization are complicated. Intel and AMD add an additional privilege level (often called Ring -1) to x86 processors, so the operating system can still run at Ring 0 while the hypervisor runs at Ring -1. All privileged and sensitive instructions are trapped into the hypervisor automatically. This technique removes the difficulty of implementing the binary translation used in full virtualization, and it lets operating systems run in VMs without modification.
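
A quick way to see whether a machine exposes these hardware extensions is to look at the CPU flags. The sketch below assumes a Linux host, where /proc/cpuinfo lists "vmx" for Intel VT-x and "svm" for AMD-V:

    def has_hw_virtualization(path="/proc/cpuinfo"):
        # Read the kernel-reported CPU flags and look for the virtualization bits.
        with open(path) as f:
            flags = f.read()
        return "vmx" in flags or "svm" in flags

    if __name__ == "__main__":
        print("hardware-assisted virtualization:", has_hw_virtualization())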

Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern operating systems. In a traditional execution environment, the operating system maintains mappings of virtual memory to machine memory using page tables, a one-stage mapping from virtual memory to machine memory. All modern x86 CPUs include a memory management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory performance. In a virtual execution environment, however, memory virtualization involves sharing the physical system RAM and dynamically allocating it to the "physical" memory of the VMs. Translation therefore becomes a two-stage mapping: the guest OS maps guest virtual addresses to guest physical addresses, and the VMM maps guest physical addresses to the actual machine memory, typically via shadow page tables or hardware-assisted nested paging.
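
A small sketch of this two-stage mapping, with illustrative page numbers (guest_pt is what the guest OS maintains; vmm_pt is the VMM's mapping):

    guest_pt = {0x1: 0x10, 0x2: 0x11}     # guest virtual page -> guest "physical" page
    vmm_pt = {0x10: 0xA0, 0x11: 0xA3}     # guest "physical" page -> machine page

    def translate(guest_virtual_page):
        guest_physical = guest_pt[guest_virtual_page]   # first stage (guest page table)
        return vmm_pt[guest_physical]                   # second stage (VMM mapping)

    # A shadow page table precomputes the composition so the MMU needs one lookup:
    shadow_pt = {gv: vmm_pt[gp] for gv, gp in guest_pt.items()}
    assert translate(0x1) == shadow_pt[0x1] == 0xA0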

I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical hardware. At the time of this writing, there are three ways to implement I/O virtualization: full device emulation, para-virtualization, and direct I/O. Full device emulation is the first approach for I/O virtualization. Generally, this approach emulates well-known, real-world devices.

All the functions of a device or bus infrastructure, such as device enumeration, identification, interrupts, and DMA, are replicated in software. This software is located in the VMM and acts as a virtual device. The I/O access requests of the guest OS are trapped in the VMM which interacts with the I/O devices.

A single hardware device can be shared by multiple VMs that run concurrently. However, software emulation runs much slower than the hardware it emulates [10,15]. The para-virtualization method of I/O virtualization is typically used in Xen. It is also known as the split driver model, consisting of a frontend driver and a backend driver. The frontend driver runs in Domain U and the backend driver runs in Domain 0; they interact with each other via a block of shared memory. The frontend driver manages the I/O requests of the guest OSes, and the backend driver is responsible for managing the real I/O devices and multiplexing the I/O data of the different VMs. Although para-I/O-virtualization achieves better device performance than full device emulation, it comes with higher CPU overhead.
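
A hedged, conceptual sketch of the split-driver model: a frontend driver in Domain U places requests in a shared ring, and a backend driver in Domain 0 multiplexes them onto the real device. All names here are illustrative, not Xen's actual interfaces:

    from collections import deque

    shared_ring = deque()                  # stands in for the shared-memory ring

    def frontend_submit(request):          # frontend driver, running in Domain U
        shared_ring.append(request)

    def backend_service(real_device):      # backend driver, running in Domain 0
        while shared_ring:
            real_device(shared_ring.popleft())   # multiplex requests onto the device

    frontend_submit({"op": "read", "sector": 42})
    backend_service(lambda req: print("physical device handles", req))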

Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-native performance without high CPU costs. However, current direct I/O virtualization implementations focus on networking for mainframes, and there are many challenges for commodity hardware devices. For example, when a physical device is reclaimed (required by workload migration) for later reassignment, it may have been set to an arbitrary state (e.g., DMA to some arbitrary memory locations), which can cause it to function incorrectly or even crash the whole system. Since software-based I/O virtualization requires a very high overhead of device emulation, hardware-assisted I/O virtualization is critical. Intel VT-d supports the remapping of I/O DMA transfers and device-generated interrupts. The architecture of VT-d provides the flexibility to support multiple usage models that may run unmodified, special-purpose, or "virtualization-aware" guest OSes.

Case Study: Amazon EC2
Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.

1) Amazon EC2 is a central part of AWS: Amazon Elastic Compute Cloud (EC2) forms a central part of Amazon.com's cloud-computing platform, Amazon Web Services (AWS), by allowing users to rent virtual computers on which to run their own applications. EC2 encourages scalable deployment of applications by providing a web service through which a user can boot an Amazon Machine Image (AMI) to configure a virtual machine, which Amazon calls an "instance", containing any software desired. A user can create, launch, and terminate server instances as needed, paying by the second for active servers – hence the term "elastic". EC2 gives users control over the geographical location of instances, which allows for latency optimization and high levels of redundancy.
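
A minimal sketch of launching and terminating one instance with the boto3 SDK. The AMI ID and region are placeholders, and AWS credentials are assumed to be configured:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder Amazon Machine Image (AMI)
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]
    print("launched", instance_id)

    # Pay-per-use "elasticity": terminate the instance when it is no longer needed.
    ec2.terminate_instances(InstanceIds=[instance_id])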

2) Instance types: Initially, EC2 used Xen virtualization exclusively. However, on November 6, 2017, Amazon announced the new C5 family of instances, based on a custom KVM-based hypervisor architecture called Nitro. Each virtual machine, called an "instance", functions as a virtual private server. Amazon sizes instances based on "Elastic Compute Units"; the performance of otherwise identical virtual machines may vary. On November 28, 2017, AWS announced a bare-metal instance type, marking a departure from exclusively offering virtualized instance types.

As of January 2019, the following instance types were offered:
General Purpose: A1, T3, T2, M5, M5a, M4, T3a
Compute Optimized: C5, C5n, C4
Memory Optimized: R5, R5a, R4, X1e, X1, High Memory, z1d
Accelerated Computing: P3, P2, G3, F1
Storage Optimized: H1, I3, D2

As of April 2018, the following payment methods for instances were offered:
On-demand: pay by the hour with no commitment.
Reserved: rent instances with a one-time payment, receiving a discount on the hourly charge.
Spot: a bid-based service that runs jobs only while the spot price is below the bid specified by the bidder. The spot price is claimed to be supply-and-demand based; however, a 2011 study concluded that the price was generally not set to clear the market but was dominated by an undisclosed reserve price.

3) Cost: As of April 2018, Amazon charged about $0.0058/hour ($4.176/month) for the smallest "Nano Instance" (t2.nano) virtual machine running Linux or Windows. Storage-optimized instances cost as much as $4.992/hour (i3.16xlarge). "Reserved" instances can go as low as $2.50/month on a three-year prepaid plan. The data transfer charge ranges from free to $0.12 per gigabyte, depending on the direction and monthly volume (inbound data transfer is free on all AWS services).
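
As a quick check of the arithmetic above (assuming a 30-day, 720-hour month):

    hourly = 0.0058                        # t2.nano on-demand rate cited above (USD/hour)
    print(round(hourly * 24 * 30, 3))      # 4.176 USD for a 30-day month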

4) Free tier: As of December 2010, Amazon offered a bundle of free resource credits to new account holders. The credits are designed to run a "micro"-sized server, storage (EBS), and bandwidth for one year. Unused credits cannot be carried over from one month to the next.

5) Reserved instances: Reserved instances enable EC2 or RDS users to reserve an instance for one or three years. The hourly rate Amazon charges to operate the instance is 35-75% lower than the rate charged for on-demand instances. Reserved instances can be purchased in three different ways: All Upfront, Partial Upfront, and No Upfront; the different purchase options allow for different payment structures. In September 2016, AWS announced several enhancements to Reserved Instances, introducing a new feature called scope and a new reservation type called Convertible. In October 2017, AWS announced the ability to subdivide purchased instances for more flexibility.

6) Spot instances: Cloud providers maintain large amounts of excess capacity that they must sell or risk incurring losses. Amazon EC2 Spot instances are spare compute capacity in the AWS cloud, available at up to a 90% discount compared with On-Demand prices. As a trade-off, AWS offers no SLA on these instances, and customers accept the risk of interruption with only two minutes of notification when Amazon needs the capacity back. Researchers from the Israel Institute of Technology found that "they (Spot instances) are typically generated at random from within a tight price interval via a dynamic hidden reserve price". Some companies, like Spotinst, use big data combined with machine learning to predict spot interruptions up to 15 minutes in advance.
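
A hedged sketch of inspecting recent Spot prices with boto3 before committing to Spot capacity; the region, instance type, and product description are illustrative choices:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    history = ec2.describe_spot_price_history(
        InstanceTypes=["m5.large"],
        ProductDescriptions=["Linux/UNIX"],
        MaxResults=5,
    )
    for entry in history["SpotPriceHistory"]:
        print(entry["AvailabilityZone"], entry["SpotPrice"], entry["Timestamp"])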

7) Savings Plans: In November 2019, Amazon announced Savings Plans. Savings Plans are an alternative to Reserved Instances that come in two plan types: Compute Savings Plans and EC2 Instance Savings Plans. Compute Savings Plans allow an organization to commit to EC2 and Fargate usage with the freedom to change region, family, size, availability zone, OS, and tenancy during the lifespan of the commitment. EC2 Instance Savings Plans provide the lowest prices but are less flexible: a user must commit to individual instance families within a region to take advantage of them, but retains the freedom to change instances within that family in that region.

8) Features of EC2: Amazon EC2 provides the following features:
- Virtual computing environments, known as instances
- Preconfigured templates for your instances, known as Amazon Machine Images (AMIs), that package the bits you need for your server (including the operating system and additional software)
- Various configurations of CPU, memory, storage, and networking capacity for your instances, known as instance types
- Secure login information for your instances using key pairs (AWS stores the public key, and you store the private key in a secure place)
- Storage volumes for temporary data that are deleted when you stop or terminate your instance, known as instance store volumes
- Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS), known as Amazon EBS volumes
- Multiple physical locations for your resources, such as instances and Amazon EBS volumes, known as Regions and Availability Zones
- A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your instances, using security groups (see the sketch after this list)
- Static IPv4 addresses for dynamic cloud computing, known as Elastic IP addresses
- Metadata, known as tags, that you can create and assign to your Amazon EC2 resources
- Virtual networks you can create that are logically isolated from the rest of the AWS cloud, and that you can optionally connect to your own network, known as virtual private clouds (VPCs)
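
A hedged sketch of the security-group feature using boto3: create a group and allow inbound SSH. The group name, description, and CIDR range are illustrative:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    sg = ec2.create_security_group(GroupName="demo-sg", Description="allow SSH")
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpProtocol="tcp",
        FromPort=22,
        ToPort=22,
        CidrIp="203.0.113.0/24",   # illustrative source range
    )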

9) Limits of EC2: AWS defines certain limits by default to prevent users from accidentally creating too many resources. Your AWS account may reach one or more of these limits when using a large number of servers, backups, or static IP addresses.
EC2 Instances: By default, AWS has a limit of 20 instances per region. This includes all instances set up on your AWS account. To increase EC2 limits, request a higher limit by providing information about the new limit and the regions where it should be applied.

Static IP Addresses: By default, AWS sets a limit of 5 static IP addresses per region. This includes IP addresses both unassigned and currently assigned to a server. To increase the IP address limit, request a higher limit by providing information about the new limit and the regions where it should be applied.
Snapshots: The AWS default limit is 10,000 snapshots per region. To increase the number of snapshots allowed, contact AWS Support and request a higher limit.
Other Limits: If your AWS account reaches any of AWS' other limits, contact AWS Support and request a higher limit.

What is Software-Defined Networking (SDN)?
Software-Defined Networking (SDN) is an approach to networking that uses software-based controllers or application programming interfaces (APIs) to communicate with the underlying hardware infrastructure and direct traffic on a network. This model differs from that of traditional networks, which use dedicated hardware devices (i.e., routers and switches) to control network traffic. SDN can create and control a virtual network, or control traditional hardware, via software.

Why is Software-Defined Networking important?
SDN represents a substantial step forward from traditional networking in that it enables the following:
Increased control with greater speed and flexibility: Instead of manually programming multiple vendor-specific hardware devices, developers can control the flow of traffic over a network simply by programming an open-standard, software-based controller. Networking administrators also have more flexibility in choosing networking equipment, since they can use a single protocol to communicate with any number of hardware devices through a central controller.

Customizable network infrastructure: With a software-defined network, administrators can configure network services and allocate virtual resources to change the network infrastructure in real time through one centralized location. This allows network administrators to optimize the flow of data through the network and prioritize applications that require more availability.
Robust security: A software-defined network delivers visibility into the entire network, providing a more holistic view of security threats. With the proliferation of smart devices that connect to the internet, SDN offers clear advantages over traditional networking. Operators can create separate zones for devices that require different levels of security, or immediately quarantine compromised devices so that they cannot infect the rest of the network.

How does Software-Defined Networking (SDN) work?
In SDN (as with anything virtualized), the software is decoupled from the hardware. SDN moves the control plane, which determines where to send traffic, into software, and leaves the data plane, which actually forwards the traffic, in the hardware. This allows network administrators who use software-defined networking to program and control the entire network through a single pane of glass instead of on a device-by-device basis. There are three parts to a typical SDN architecture, which may be located in different physical locations:
Applications, which communicate resource requests or information about the network as a whole
Controllers, which use the information from applications to decide how to route a data packet
Networking devices, which receive information from the controller about where to move the data
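
A conceptual sketch of the control/data-plane split described above (plain Python, not a real OpenFlow library; the class and rule names are illustrative). The controller decides where traffic goes; switches only match and forward:

    class Switch:
        def __init__(self):
            self.flow_table = {}                   # data plane: match -> forwarding action

        def install_rule(self, match, action):     # invoked by the controller
            self.flow_table[match] = action

        def forward(self, packet):                 # data plane behavior
            action = self.flow_table.get(packet["dst"], "send to controller (table miss)")
            print(f"packet to {packet['dst']}: {action}")

    class Controller:
        def __init__(self, switches):
            self.switches = switches               # the "single pane of glass"

        def set_policy(self, dst, out_port):       # control plane decision
            for sw in self.switches:
                sw.install_rule(dst, f"output port {out_port}")

    sw = Switch()
    Controller([sw]).set_policy("10.0.0.2", 1)
    sw.forward({"dst": "10.0.0.2"})    # matches the installed rule
    sw.forward({"dst": "10.0.0.9"})    # table miss, handed to the controller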

Benefits of Software-Defined Networking (SDN) Many of today’s services and applications, especially when they involve the cloud, could not function without SDN. SDN allows data to move easily between distributed locations, which is critical for cloud applications.

SDN supports moving workloads around a network quickly. For instance, dividing a virtual network into sections, using a technique called network functions virtualization (NFV), allows telecommunications providers to move customer services to less expensive servers or even to the customer’s own servers. Service providers can use a virtual network infrastructure to shift workloads from private to public cloud infrastructures as necessary, and to make new customer services available instantly. SDN also makes it easier for any network to flex and scale as network administrators add or remove virtual machines, whether those machines are on-premises or in the cloud.

What are the different models of SDN?
While the premise of centralized software controlling the flow of data in switches and routers applies to all software-defined networking, there are different models of SDN.
Open SDN: Network administrators use a protocol like OpenFlow to control the behavior of virtual and physical switches at the data plane level.
SDN by APIs: Instead of using an open protocol, application programming interfaces control how data moves through the network on each device.

SDN Overlay Model: Another type of software-defined networking runs a virtual network on top of an existing hardware infrastructure, creating dynamic tunnels to different on-premises and remote data centers. The virtual network allocates bandwidth over a variety of channels and assigns devices to each channel, leaving the physical network untouched.
Hybrid SDN: This model combines software-defined networking with traditional networking protocols in one environment to support different functions on a network. Standard networking protocols continue to direct some traffic, while SDN takes on responsibility for other traffic, allowing network administrators to introduce SDN in stages in a legacy environment.