Week_3.pptx

NavumGupta1 · 18 slides · Mar 03, 2025

Slide Content

OpenStack Networking This diagram depicts a sample OpenStack Networking deployment, with a dedicated OpenStack Networking node performing L3 routing and DHCP, and running the advanced services FWaaS and LBaaS. Two Compute nodes run the Open vSwitch agent (openvswitch-agent) and have two physical network cards each, one for tenant traffic and another for management connectivity. The OpenStack Networking node has a third network card specifically for provider traffic.

Open vSwitch Open vSwitch (OVS) is a software-defined networking (SDN) virtual switch similar to the Linux software bridge. OVS provides switching services to virtualized networks with support for the industry standards NetFlow, OpenFlow, and sFlow. Open vSwitch is also able to integrate with physical switches using layer 2 features such as STP, LACP, and 802.1Q VLAN tagging. Tunneling with VXLAN and GRE is supported with Open vSwitch version 1.11.0-1.el6 or later.
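As an illustration of these switching features, the following ovs-vsctl sketch creates a bridge, attaches an interface as an 802.1Q access port, and enables NetFlow export; the bridge name, interface name, and collector address are illustrative placeholders rather than values from the slides.

    # Create an OVS bridge and add eth2 as an 802.1Q access port on VLAN 100
    ovs-vsctl add-br br-int
    ovs-vsctl add-port br-int eth2 tag=100

    # Export NetFlow records from the bridge to a collector at 192.0.2.10:2055
    ovs-vsctl -- set bridge br-int netflow=@nf \
        -- --id=@nf create NetFlow targets=\"192.0.2.10:2055\" active-timeout=60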

Modular Layer 2 (ML2)

ML2 network types Multiple network segment types can be operated concurrently. In addition, these network segments can interconnect using ML2's support for multi-segmented networks. Ports are automatically bound to the segment with connectivity; it is not necessary to bind them to a specific segment. Depending on the mechanism driver, ML2 supports the following network segment types: flat, GRE, local, VLAN, and VXLAN.

The various type drivers are enabled in the [ml2] section of the ml2_conf.ini file:

    [ml2]
    type_drivers = local,flat,vlan,gre,vxlan

Tenant networks
Tenant networks are created by users for connectivity within projects. They are fully isolated by default and are not shared with other projects. OpenStack Networking supports a range of tenant network types:
Flat - All instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation takes place.
VLAN - OpenStack Networking allows users to create multiple provider or tenant networks. Instances can also communicate with dedicated servers, firewalls, load balancers, and other network infrastructure on the same layer 2 VLAN.
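As a hedged example, a VLAN tenant or provider network such as the one described above could be created with the neutron CLI as follows; the network name, the physical network label physnet1, and the segmentation ID are placeholders.

    neutron net-create vlan-net \
        --provider:network_type vlan \
        --provider:physical_network physnet1 \
        --provider:segmentation_id 120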

VXLAN and GRE tunnels  - VXLAN and GRE use network overlays to support private communication between instances. An OpenStack Networking router is required to enable traffic to traverse outside of the GRE or VXLAN tenant network. A router is also required to connect directly-connected tenant networks with external networks, including the Internet; the router provides the ability to connect to instances directly from an external network using floating IP addresses.
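The router and floating IP workflow described above roughly maps to the following neutron CLI calls; the router name and the <...> identifiers are placeholders to be taken from the actual environment.

    neutron router-create r1                              # tenant router
    neutron router-interface-add r1 <tenant-subnet-id>    # attach the VXLAN/GRE tenant subnet
    neutron router-gateway-set r1 <external-net-id>       # uplink to the external network
    neutron floatingip-create <external-net-id>           # allocate a floating IP
    neutron floatingip-associate <floatingip-id> <instance-port-id>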

Configure controller nodes
Edit /etc/neutron/plugin.ini (a symbolic link to /etc/neutron/plugins/ml2/ml2_conf.ini). Add flat to the existing list of type drivers and set flat_networks to *:

    type_drivers = vxlan,flat
    flat_networks = *

Create an external network as a flat network and associate it with the configured physical_network (see the sketch below).
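A minimal sketch of that external network creation, assuming the flat network maps to a physical network labelled physnet1 (the network name and label are placeholders):

    neutron net-create public --router:external=True \
        --provider:network_type flat \
        --provider:physical_network physnet1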

Create a subnet using the neutron subnet-create command. Restart the neutron-server service to apply the changes.
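For example (the subnet name, addresses, and allocation pool are placeholders):

    neutron subnet-create --name public_subnet --disable-dhcp \
        --gateway 192.0.2.1 \
        --allocation-pool start=192.0.2.100,end=192.0.2.200 \
        public 192.0.2.0/24
    systemctl restart neutron-server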

Configure the Network and Compute nodes
1. Create an external network bridge (br-ex) and add an associated port (eth1) to it. Create the external bridge in /etc/sysconfig/network-scripts/ifcfg-br-ex, and in /etc/sysconfig/network-scripts/ifcfg-eth1 configure eth1 to connect to br-ex (example files are sketched after this list). Reboot the node or restart the network service for the changes to take effect.
2. Configure physical networks in /etc/neutron/plugins/ml2/openvswitch_agent.ini and map bridges to the physical network.
3. Restart the neutron-openvswitch-agent service on both the network and compute nodes for the changes to take effect.
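A minimal sketch of the files touched in these steps, assuming the external bridge maps to a physical network labelled physnet1 and using example addressing; adjust device names, labels, and addresses for the actual environment.

    # /etc/sysconfig/network-scripts/ifcfg-br-ex
    DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    IPADDR=192.0.2.2
    NETMASK=255.255.255.0
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    DEVICETYPE=ovs
    TYPE=OVSPort
    OVS_BRIDGE=br-ex
    ONBOOT=yes

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini
    [ovs]
    bridge_mappings = physnet1:br-ex

    # On both the network and compute nodes:
    systemctl restart neutron-openvswitch-agent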

Open vSwitch with Data Plane Development Kit (OVS-DPDK) datapath

Standard OVS is built out of three main components:
ovs-vswitchd - a user-space daemon that implements the switch logic
kernel module (fast path) - processes received frames based on a lookup table
ovsdb-server - a database server that ovs-vswitchd queries to obtain its configuration; external clients can talk to ovsdb-server using the OVSDB protocol
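Each of these components can be inspected from the host with the standard OVS tooling (nothing here is specific to the slides):

    ovs-vsctl show        # reads the configuration from ovsdb-server over the OVSDB protocol
    ovs-appctl version    # talks to the running ovs-vswitchd daemon
    ovs-dpctl show        # inspects the kernel datapath (fast path)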

When a frame is received, the fast path (kernel space) uses match fields from the frame header to determine the flow table entry and the set of actions to execute. If the frame does not match any entry in the lookup table, it is sent to the user-space daemon (vswitchd), which requires more CPU processing. The user-space daemon then determines how to handle frames of this type and sets the right entries in the fast path lookup tables.
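This split can be observed with the standard OVS tools; the bridge name br-int below is a placeholder.

    ovs-dpctl dump-flows           # exact-match flows cached in the kernel fast path
    ovs-ofctl dump-flows br-int    # OpenFlow rules held in user space for bridge br-int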

OVS has several kinds of ports: outbound ports, which are connected to the physical NICs on the host using kernel device drivers, and inbound ports, which are connected to VMs. The VM guest operating system (OS) is presented with vNICs using the well-known VirtIO paravirtualized network driver.
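For instance, a libvirt guest definition typically exposes such a vNIC with the virtio model attached to an OVS bridge; the bridge name below is a placeholder.

    <interface type='bridge'>
      <source bridge='br-int'/>            <!-- OVS bridge on the host -->
      <virtualport type='openvswitch'/>    <!-- attach the port through Open vSwitch -->
      <model type='virtio'/>               <!-- paravirtualized VirtIO vNIC in the guest -->
    </interface>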

PCI Passthrough Through Intel's VT-d extension (IOMMU for AMD) it is possible to present PCI devices on the host system to the virtualized guest OS. This is supported by KVM (Kernel-based Virtual Machine). Using this technique it is possible to provide a guest VM exclusive access to a NIC. For all practical purposes, the VM thinks the NIC is directly connected to it. PCI passthrough suffers from one major shortcoming: a single interface (for example, eth0 on one of the VNFs) has complete access to and ownership of the physical NIC.
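A rough sketch of what PCI passthrough involves on a KVM host, assuming an Intel system and a NIC at PCI address 0000:01:00.0 (both assumptions, not values from the slides). Kernel command line on the host to enable VT-d:

    intel_iommu=on iommu=pt

libvirt guest definition handing that PCI function to the VM exclusively:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
    </hostdev>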

Data Plane Development Kit (DPDK)

DPDK-accelerated Open vSwitch (OVS-DPDK) Open vSwitch can be bundled with DPDK for better performance, resulting in a DPDK-accelerated OVS (OVS+DPDK). At a high level, the idea is to replace the standard OVS kernel datapath with a DPDK-based datapath, creating a user-space vSwitch on the host which uses DPDK internally for its packet forwarding. The nice thing about this architecture is that it is mostly transparent to users, as the basic OVS features as well as the interfaces it exposes (such as OpenFlow, OVSDB, the command line, etc.) remain mostly the same.
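As a sketch of how such a user-space datapath is wired up (syntax for OVS 2.7 or later; the bridge name and PCI address are placeholders), DPDK is enabled globally and ports of type dpdk are added to a bridge with the netdev datapath:

    ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
    ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev
    ovs-vsctl add-port br-dpdk dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
        options:dpdk-devargs=0000:01:00.0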

DPDK with Red Hat OpenStack Platform Generally, we see two main use cases for using DPDK with Red Hat and Red Hat OpenStack Platform:
DPDK-enabled applications, or VNFs, written on top of Red Hat Enterprise Linux as a guest operating system. Here we are talking about network functions that take advantage of DPDK, as opposed to the standard kernel networking stack, for enhanced performance.
DPDK-accelerated Open vSwitch, running within Red Hat OpenStack Platform compute nodes (the hypervisors). Here it is all about boosting the performance of OVS and allowing for faster connectivity between VNFs.