Invented by Ganesan Chandrashekhar, Rahul Korivi Subramaniyam, Ram Dular Singh, Vivek Agarwal, Howard Wang, Nicira Inc

The Market For Configuration of the Logical Router

The market for configuration of logical routers is projected to experience a compound annual growth rate (CAGR) of more than 5% between 2018 and 2023, driven by rising demand for cloud-native applications and increasing adoption of virtualization.

VMware’s NSX platform enables the deployment of a Distributed Logical Router and its related control plane capabilities, an essential element in any successful NSX implementation.

Market Overview

The market for Configuring Logical Routers is growing due to the rising popularity of distributed networking, the adoption of NSX by service providers, and the need to address security threats with greater granularity. This market encompasses software and hardware products as well as professional services.

Service provider networks typically consist of multiple layers of switches and routers that handle traffic between customers. This can lead to complexity in the design and maintenance process.

Juniper Networks created the logical system, a feature that partitions a single physical router into multiple independent virtual routers, each with its own interfaces and routing protocols working in harmony behind the scenes.

This functionality is an impressive upgrade to Junos OS, providing service providers with the opportunity to reduce overhead while improving agility.

The logical system may appear complex, but its performance and cost savings are remarkable, and it has been widely adopted in service provider and enterprise routing. It not only reduces costs but also allows faster deployments and greater insight into network traffic.

Technology Overview

Logical routers are software-based network devices that offer routing services within the NSX platform. They can be configured to support various routing protocols, such as BGP, RIP and OSPF, along with static routes. Furthermore, logical routers act as logical boundaries between a logical network and its external environment.
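
To make the routing behavior concrete, here is a minimal Python sketch of a logical router's route table that mixes static routes with protocol-learned ones. The administrative distance values and addresses are illustrative assumptions, not NSX internals.

```python
# Minimal sketch (not NSX code): a route table mixing static routes with
# routes learned from protocols such as BGP or OSPF.
from dataclasses import dataclass
from ipaddress import ip_network, ip_address

# Typical administrative distances; actual values vary by vendor.
DISTANCE = {"connected": 0, "static": 1, "bgp": 20, "ospf": 110}

@dataclass
class Route:
    prefix: str       # e.g. "10.1.0.0/16"
    next_hop: str     # e.g. "192.168.0.1"
    source: str       # "connected", "static", "ospf", or "bgp"

def lookup(routes, dest):
    """Longest-prefix match; ties broken by administrative distance."""
    candidates = [r for r in routes if ip_address(dest) in ip_network(r.prefix)]
    if not candidates:
        return None
    return max(candidates,
               key=lambda r: (ip_network(r.prefix).prefixlen, -DISTANCE[r.source]))

table = [
    Route("10.1.0.0/16", "192.168.0.1", "ospf"),
    Route("10.1.2.0/24", "192.168.0.2", "static"),
    Route("0.0.0.0/0",   "192.168.0.254", "static"),  # default to external nets
]
print(lookup(table, "10.1.2.7").next_hop)   # 192.168.0.2 (more specific route)
print(lookup(table, "8.8.8.8").next_hop)    # 192.168.0.254 (default route)
```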

Virtual routers, managed by the virtual routing engine, are abstracted from physical hardware in VMware NSX platforms. This separation makes logical routers a more efficient way of managing routing protocols.

VMware NSX-T’s logical router architecture allows for the creation of network elements like segments (Layer 2 broadcast domain) and gateways (routers) as logical constructs that reside at the hypervisor layer, abstracted from their underlying physical hardware. This makes interconnecting virtual and physical workloads deployed in different logical L2 networks much simpler.

Logical interfaces can carry IPv4 or IPv6 addresses, with encapsulations such as VLAN tags or Frame Relay DLCIs. They are commonly used to extend Ethernet or Frame Relay connectivity across hosts that may sit in different subnets.

Junos OS Release 14.1 allows you to configure multichassis link aggregation (MC-LAG) interfaces on logical systems within MX Series routers. MC-LAG offers several advantages over traditional LAG, such as node level redundancy and multihoming support.

In NSX-T, similarly, a transit link between a tier-0 and a tier-1 logical router is created automatically when the two are connected. This is especially useful when sending traffic northbound and when connecting an L2 switch to a router.

Similarly, if a tier-1 logical router has DNAT and Edge firewall ports connected to the same logical switch, traffic should first go through the DNAT port and then through the Edge firewall port. This ordering makes efficient use of resources and helps reduce the number of logical routers needed to process traffic across the network.
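
As a rough illustration of why that ordering matters, the sketch below runs a packet through DNAT before the firewall, so the firewall evaluates the translated address. The rule formats and addresses are invented for this example, not the NSX-T implementation.

```python
# Illustrative sketch only: enforcing a DNAT-before-firewall service order
# on a tier-1 logical router.
def apply_dnat(packet, nat_rules):
    """Rewrite the destination address before the firewall sees the packet."""
    new_dst = nat_rules.get(packet["dst"])
    return {**packet, "dst": new_dst} if new_dst else packet

def apply_firewall(packet, allowed_dsts):
    """Filter on the post-NAT destination address."""
    return packet if packet["dst"] in allowed_dsts else None

def tier1_pipeline(packet, nat_rules, allowed_dsts):
    # Order matters: the firewall must evaluate the translated address.
    return apply_firewall(apply_dnat(packet, nat_rules), allowed_dsts)

pkt = {"src": "198.51.100.7", "dst": "203.0.113.10"}
nat = {"203.0.113.10": "10.0.1.5"}             # public VIP -> internal server
print(tier1_pipeline(pkt, nat, {"10.0.1.5"}))  # passes after translation
```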

Additionally, logical routers can be configured with flow aggregation rules to improve performance by decreasing the time it takes to route traffic through the NSX environment. Beginning with Junos OS Release 11.4, flow aggregation is also supported on logical systems; however, cflowd version 9 must be configured on these devices to take advantage of this feature.

Industry Trends

The market for Configuring Logical Routers has been expanding steadily. This can be attributed to the increasing demand for cloud-based services like OTT video that require large bandwidth networks in order to function optimally and securely. Furthermore, software-defined networking (SDN) and network functions virtualization (NFV) are fueling this growth as service providers strive to enhance network performance while reducing cost of ownership through cloud migration. Consequently, the demand for logical routers is expected to continue its upward trajectory over the coming years.

The logical router market is dominated by several major players, such as Cisco, Juniper Networks, Brocade Communications, ZyXEL Technology and Nortel Networks. Companies must invest heavily in research and development, product and service innovation, marketing, and public relations in order to remain competitive in this space.

Conclusions

The market for Configuring Logical Routing Systems has seen significant growth. This is mainly because logical networks are essential components of virtualization environments, particularly large data centers with many virtual machines. Logical networks consist of a collection of logical switches (or ports) representing IP subnets that are implemented within a managed network across a group of managed forwarding elements (MFEs).

In some embodiments, the logical router is implemented in a centralized manner, for example by one or more gateway hosts that implement its entire routing table. This setup enables the logical router to act as a logical barrier between an internal network and external networks.

This configuration requires the logical router to process various types of routes, including connected routes for the logical switches and ports attached to it as well as routes providing access to external networks. Furthermore, it must support routing protocols like OSPF and IS-IS in order to efficiently forward traffic between logical network ports.

Logical router processing involves recursive traversal operations and error checking that cannot be handled by the managed forwarding elements commonly used to implement logical networks. Furthermore, this type of processing often relies on an input translation controller (software that generates a set of input routes from user-defined network configuration) for its input route generation.

Some embodiments address this problem by including a route processing engine that converts input routes into output routes that can be implemented in the logical router's routing table. This process is efficient because it narrows the input set down to only those output routes that are then passed along to the table mapping engine for use in creating the logical router's routing table.
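
Here is a hedged Python sketch of that idea: input routes go in, unresolvable routes are discarded, and only the preferred route per prefix survives into the output set. The data shapes and priority values are assumptions for illustration, not the patented design.

```python
# Minimal sketch of route processing: narrow a set of input routes down to
# an output set with fully resolved next hops.
from ipaddress import ip_network, ip_address

def resolve_next_hop(route, connected):
    """Resolve a next-hop IP to an output logical port via connected prefixes."""
    for prefix, port in connected:
        if ip_address(route["next_hop"]) in ip_network(prefix):
            return {**route, "output_port": port}
    return None  # unresolvable routes are dropped from the output set

def process_routes(input_routes, connected):
    best = {}
    for r in input_routes:
        resolved = resolve_next_hop(r, connected)
        if resolved is None:
            continue
        # Keep only the lowest-priority (most preferred) route per prefix.
        key = r["prefix"]
        if key not in best or r["priority"] < best[key]["priority"]:
            best[key] = resolved
    return list(best.values())

connected = [("192.168.1.0/24", "lport-1"), ("192.168.2.0/24", "lport-2")]
inputs = [
    {"prefix": "10.0.0.0/8", "next_hop": "192.168.1.1", "priority": 110},
    {"prefix": "10.0.0.0/8", "next_hop": "192.168.2.1", "priority": 20},
    {"prefix": "172.16.0.0/12", "next_hop": "203.0.113.1", "priority": 1},
]
print(process_routes(inputs, connected))
# Only the preferred, resolvable 10.0.0.0/8 route survives; the 172.16/12
# route is dropped because its next hop is not on any connected prefix.
```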

After creating the output set of routes, the table mapping engine distributes the logical router's routing table information to the network elements (e.g., managed forwarding elements and managed gateways located on host machines) so they can implement it. This data includes flow entries and routing table information indicating which packets with specific network addresses should be forwarded to which logical ports, as well as which gateways are responsible for forwarding specified packets to external networks.
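
A small follow-on sketch, under the same assumptions, shows how an output route set might be mapped to per-element instructions; the flow-entry format here is invented, not an actual MFE wire format.

```python
# Hedged sketch: turning the output route set into per-element instructions.
def to_flow_entries(output_routes):
    """Map each output route to a match/action flow entry for MFEs."""
    return [{"match": {"ip_dst": r["prefix"]},
             "action": {"output": r["output_port"]}} for r in output_routes]

def to_gateway_table(output_routes, external_port):
    """Gateways additionally forward default-route traffic to external nets."""
    return list(output_routes) + [
        {"prefix": "0.0.0.0/0", "output_port": external_port}]

routes = [{"prefix": "10.0.0.0/8", "output_port": "lport-2"}]
for host in ["host-1", "host-2"]:           # push to every host's MFE
    print(host, to_flow_entries(routes))
print("gateway", to_gateway_table(routes, "uplink-0"))
```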

The Nicira Inc invention works as follows

Some embodiments allow for the operation of multiple logical networks over a network virtualization infrastructure. This method defines a managed physical switching element (MPSE) that includes multiple ports for forwarding packets to and from a variety of virtual machines, each port assigned a unique media access control (MAC) address. The method creates multiple managed physical routing elements (MPREs) for different logical networks. Each MPRE can receive data packets from the same port on the MPSE, and each is used to route data packets across different segments of its own logical network. The method then provides the configuration data for the defined MPSE and the plurality of MPREs to a number of host machines.
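
The following Python sketch models the claim's data shapes with invented names: one MPSE with MAC-addressed ports, and several per-logical-network MPREs attached to the same shared port.

```python
# Rough sketch of the claimed data model, with invented names.
from dataclasses import dataclass, field

@dataclass
class MPRE:
    logical_network: str  # one MPRE per tenant/logical network

    def route(self, packet):
        print(f"MPRE[{self.logical_network}] routing {packet}")

@dataclass
class MPSE:
    ports: dict = field(default_factory=dict)   # port name -> MAC address
    mpres: dict = field(default_factory=dict)   # port name -> list of MPREs

    def add_port(self, name, mac):
        self.ports[name] = mac

    def attach_mpre(self, port, mpre):
        self.mpres.setdefault(port, []).append(mpre)

    def deliver(self, port, packet):
        # MPREs for different logical networks receive packets from the same
        # MPSE port; each handles only its own network's traffic.
        for mpre in self.mpres.get(port, []):
            if packet["network"] == mpre.logical_network:
                mpre.route(packet)

switch = MPSE()
switch.add_port("router-port", "02:00:00:00:00:01")
switch.attach_mpre("router-port", MPRE("tenant-A"))
switch.attach_mpre("router-port", MPRE("tenant-B"))
switch.deliver("router-port", {"network": "tenant-A", "dst": "10.1.2.3"})
```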

Background for Configuration of the logical router

In a network virtualization environment, 3-tier applications are the most common: a web tier, an app tier, and a database tier, each on a different L3 subnet. This means that an IP packet traveling from one virtual machine to another in a different subnet must first reach an L3 router before being forwarded to its destination. This is true even if the destination VM is hosted on the same host machine as the originating VM. The detour creates unnecessary network traffic, higher latency, and lower throughput, which can significantly impact the performance of hypervisor-hosted applications. This performance degradation occurs whenever two VMs on different IP subnets communicate with each other.

FIG. 1 illustrates such an environment. VMs 121-129 run on host machines 131-133, which are physical machines communicatively connected by a physical network 105.

The VMs occupy different parts of the network. Specifically, VMs 121-125 are located in segment A, while VMs 126-129 are located in segment B. VMs within the same segment can communicate with one another using link layer (L2) protocols, but VMs in different segments cannot; they must instead communicate through network layer (L3) routers or gateways. VMs running on different host machines communicate through the physical network 105, regardless of whether they are in the same segment.

The host machines 131-133 run hypervisors that enable VMs on the same host to communicate locally without going through the physical network 105. However, VMs belonging to different segments must go through an L3 router, such as the shared router 110, which can only be reached through the physical network. This applies even when the VMs run on the same host machine: traffic between VM 125 and VM 126 must pass through the physical network 105 and the shared router 110, even though both run on host machine 132.

What is needed is a distributed router that performs L3 packet forwarding on every host machine on which VMs can run. Such a router would forward data packets locally (i.e., at the originating hypervisor), so that there is only one hop between the source VM and the destination VM.

Some embodiments define a “logical router” or logical routing element (LRE) for a logical network to facilitate L3 packet forwarding among its virtual machines (VMs). An LRE is distributed among the host machines of the logical network, acting as a virtual distributed router: each host machine runs its own instance of the LRE, managed as a managed physical routing element (MPRE), which performs L3 packet forwarding between the VMs on that host. The MPRE thus permits L3 packet forwarding between VMs on the same host machine without traversing the physical network. Different LREs may be defined for different tenants, and a host machine can operate multiple LREs as multiple MPREs. In some embodiments, the MPREs of different tenants on the same host machine share a common port and the same L2 MAC address on the managed physical switching element (MPSE).
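
A toy sketch of the key property, with an invented host/VM layout: the local MPRE routes between subnets without leaving the host whenever both VMs are local.

```python
# Sketch: local L3 forwarding by the host's MPRE (layout is invented).
HOSTS = {
    "host-1": {"vm-a": "10.1.0.5", "vm-b": "10.2.0.7"},   # different subnets
    "host-2": {"vm-c": "10.2.0.9"},
}

def forward_l3(src_host, dst_ip):
    """Return where the packet is routed: locally or via the physical net."""
    if dst_ip in HOSTS[src_host].values():
        return f"routed locally by MPRE on {src_host} (single hop)"
    return f"routed by MPRE on {src_host}, then sent over physical network"

print(forward_l3("host-1", "10.2.0.7"))   # same host, different subnet: local
print(forward_l3("host-1", "10.2.0.9"))   # remote VM: traverses physical net
```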

In some embodiments, an LRE includes one or more logical interfaces (LIFs), each serving as an interface to a particular segment of the network. Each LIF is addressable by its own IP address and serves as the default gateway for the network nodes (e.g., VMs) of its segment. Each segment of the network has its own logical interface on the LRE, and each LRE has its own unique set of logical interfaces. Each logical interface is assigned a unique identifier within the network virtualization infrastructure (e.g., an IP address or overlay network ID).
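
In sketch form (addresses and overlay IDs invented), the LIF table looks like a per-segment gateway map:

```python
# Minimal sketch: each LIF fronts one network segment and answers as that
# segment's default gateway.
LIFS = {
    # segment id -> LIF IP serving as default gateway, plus overlay network ID
    "segment-A": {"gateway_ip": "10.1.0.1", "overlay_id": 5001},
    "segment-B": {"gateway_ip": "10.2.0.1", "overlay_id": 5002},
}

# A VM in segment A is provisioned with its segment's LIF as default gateway.
print(LIFS["segment-A"]["gateway_ip"])    # -> 10.1.0.1
```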

In certain embodiments, a logical system using such logical routers further enhances network virtualization by making the MPREs running on different host machines appear identical to all VMs. Each MPRE can be addressed at the L2 data link layer by a virtual MAC address (VMAC) that is the same for every MPRE in the system, while each host machine is assigned its own unique physical MAC address (PMAC), by which its MPRE can be uniquely addressed from other machines on the physical network. In some embodiments, when a packet leaves an MPRE with the VMAC as its source address, the host machine changes the source address to its unique PMAC before the packet enters the PNIC and leaves the host for the physical network. Conversely, when a packet arrives with the host's unique PMAC as its destination address, the host changes the destination MAC back into the generic VMAC before handing it to the MPRE. A LIF of a network segment serves as the default gateway for the VMs in that segment, and an MPRE that receives an ARP query on one of its LIFs responds locally without forwarding the query to other host machines.
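
Here is an illustrative sketch of the VMAC/PMAC rewrite; the MAC values are arbitrary placeholders, not addresses from the patent.

```python
# Every MPRE answers to the shared VMAC; each host owns a unique PMAC used
# on the physical wire.
VMAC = "02:aa:aa:aa:aa:aa"                  # shared by every MPRE
PMAC = {"host-1": "02:01:00:00:00:01", "host-2": "02:01:00:00:00:02"}

def on_egress(packet, host):
    """Before the packet leaves via the pNIC, swap VMAC for the host's PMAC."""
    if packet["src_mac"] == VMAC:
        packet = {**packet, "src_mac": PMAC[host]}
    return packet

def on_ingress(packet, host):
    """On arrival, restore the generic VMAC so every MPRE looks identical."""
    if packet["dst_mac"] == PMAC[host]:
        packet = {**packet, "dst_mac": VMAC}
    return packet

pkt = {"src_mac": VMAC, "dst_mac": PMAC["host-2"], "payload": "..."}
wire = on_egress(pkt, "host-1")     # src becomes host-1's PMAC
print(on_ingress(wire, "host-2"))   # dst restored to the shared VMAC
```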

Some embodiments also perform L3 routing for physical host machines that do not run virtualization software. In some of these embodiments, data traffic from virtual machines to such physical hosts is routed by each VM's own MPRE, while data traffic from the physical hosts to the virtual machines must pass through a designated MPRE.

In some embodiments, at most one MPRE in each host machine is configured as a bridging MPRE, which includes logical interfaces used for bridging rather than routing. A logical interface configured for routing (a routing LIF) performs L3-level routing between segments of the logical network by translating L3 network addresses into L2 MAC addresses. A logical interface configured for bridging (a bridging LIF) performs bridging by binding MAC addresses to a network segment identifier, such as a VNI, or to a logical interface.
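
A minimal sketch of the two LIF types, with invented table contents: a routing LIF translates L3 addresses to MACs, while a bridging LIF binds MACs to a segment identifier such as a VNI.

```python
arp_table = {"10.1.0.5": "02:00:00:00:01:05"}     # routing LIF: IP -> MAC
bridge_table = {"02:00:00:00:01:05": 5001,        # bridging LIF: MAC -> VNI
                "02:00:00:00:02:09": 5002}

def route_lif(dst_ip):
    return arp_table.get(dst_ip)              # L3 -> L2 translation

def bridge_lif(dst_mac):
    return bridge_table.get(dst_mac)          # MAC -> segment binding

print(route_lif("10.1.0.5"))             # 02:00:00:00:01:05
print(bridge_lif("02:00:00:00:01:05"))   # forwarded into VNI 5001
```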

In some embodiments, the LREs operating in host machines as described above are configured using configuration data sets generated by a cluster of controllers. The controllers generate these configuration data sets based on logical network definitions created by different users or tenants: a network manager allows users to define different logical networks to be implemented over the network virtualization infrastructure, then pushes the resulting parameters to the controllers so they can generate host-machine-specific configuration data sets, including the configuration data for the LREs. Some embodiments provide instructions for the host machines to fetch the configuration data for the LREs from the network manager.
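
The control flow might be sketched as follows; the field names and data-set layout are assumptions for illustration only.

```python
# Hedged sketch: tenant-defined logical networks in, per-host configuration
# data sets out.
def generate_host_configs(logical_networks, hosts):
    """A controller cluster turns logical network definitions into one
    configuration data set per host, including LRE configuration."""
    configs = {}
    for host in hosts:
        # A real controller would scope this to the LREs each host needs.
        configs[host] = {
            "lres": [{"tenant": ln["tenant"], "lifs": ln["lifs"]}
                     for ln in logical_networks],
        }
    return configs

networks = [{"tenant": "tenant-A", "lifs": ["10.1.0.1/24", "10.2.0.1/24"]}]
for host, cfg in generate_host_configs(networks, ["host-1", "host-2"]).items():
    print(host, "->", cfg)       # each host receives/fetches its own data set
```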

Some embodiments dynamically gather and deliver routing information to the LREs: an edge VM learns network routes from other routers, and the cluster of controllers then propagates the learned routes to the LREs in the host machines.
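
In miniature (with invented route data), the propagation step looks like this:

```python
# Tiny sketch: an edge VM learns external routes; the controller cluster
# fans them out to every host's LRE instance.
learned = [{"prefix": "198.51.100.0/24", "next_hop": "edge-uplink"}]

def propagate(routes, host_lres):
    for lre in host_lres:
        lre.extend(routes)        # each MPRE's table gains the learned routes

lres = {"host-1": [], "host-2": []}
propagate(learned, lres.values())
print(lres)
```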

The preceding Summary is intended as a brief introduction to some embodiments of the invention. It is not meant to be an overview or introduction of all the inventive subject matter disclosed in this document; the Detailed Description and the Drawings that follow provide additional information about these and other embodiments. To fully understand all the embodiments described in this document, it is necessary to read the Summary, the Detailed Description, and the Drawings. Moreover, the claimed subject matter is not limited by the illustrative details of the Summary, Detailed Description, and Drawings, but is defined by the appended claims, since the claimed subject matter can be embodied in other forms without departing from its spirit.

The following description contains numerous details for the purpose of explanation. However, one of ordinary skill in the art will recognize that the invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form so as not to obscure the description of the invention with unnecessary detail.


FIG. 2 illustrates packet forwarding operations performed by LREs operating locally in host machines as MPREs. Each host machine performs virtualization functions in order to host one or more VMs, and switching functions that allow those VMs to communicate in a network virtualization environment. Each MPRE performs L3 routing operations within its own host machine, so that traffic between two VMs on the same host machine is always conducted locally, even when the VMs belong to different network segments.

As shown in FIG. 2, the logical network 200 is a virtualized network implemented over a network virtualization infrastructure, which is a collection of interconnected computing and storage resources and a physical network 205. Host machines 231-233 host the VMs 221-229 and are communicatively connected by the physical network 205. In some embodiments, each host machine 231-233 is a computing device capable of creating and hosting VMs. Each of the VMs 221-229 has a set of network addresses, for example a MAC address for L2 and an IP address for L3, and can send network data to and receive network data from other network elements, such as other VMs.

Virtualization software (not illustrated) runs on the host machines 231-233 and manages the VMs. It may include one or more components or layers known in virtual machine technology, such as virtual machine monitors, hypervisors, or virtualization kernels. These terms do not always have clear boundaries, as virtualization terminology has evolved over time and is not fully standardized; the term “virtualization software” is used herein to refer to any software layer or component that is logically interposed between a virtual machine and the host system.

As illustrated in FIG. 2, each VM operates in one segment of the logical network 200: VMs 221-225 operate in segment A, and VMs 226-229 in segment B. A network segment is a portion of the network within which the network elements communicate with one another using link layer (L2) protocols, such as an IP subnet. A network segment can also be an encapsulation overlay network, such as a VXLAN, or a VLAN.

In some embodiments, VMs within the same segment of the network can communicate with one another using link layer protocols (e.g., according to each VM's MAC address), while VMs in different segments cannot; the latter must communicate through network layer (L3) gateways or routers. Some embodiments handle L2 traffic between VMs using MPSEs (not illustrated) located within each host machine. For example, network traffic between VM 223 and VM 224 would go through a first MPSE in the host machine 231, which receives data from one of its ports and transmits it through the physical network 205 to a second MPSE in the host machine 232, which then delivers the data to VM 224 via one of its ports. Likewise, same-segment traffic between VM 228 and VM 229 goes through a single MPSE in the host machine 233, which forwards the traffic locally within that host from one VM to the other.
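
To summarize the L2 path in code, here is a sketch with an invented MAC table: the MPSE delivers same-host traffic locally and sends remote traffic over the physical network.

```python
mac_table = {
    "02:00:00:00:02:03": ("host-1", "port-3"),   # VM 223
    "02:00:00:00:02:04": ("host-2", "port-1"),   # VM 224
}

def mpse_forward(local_host, frame):
    host, port = mac_table[frame["dst_mac"]]
    if host == local_host:
        return f"deliver on local {port}"
    return f"encapsulate and send via physical network to {host}"

print(mpse_forward("host-1", {"dst_mac": "02:00:00:00:02:04"}))  # remote
print(mpse_forward("host-2", {"dst_mac": "02:00:00:00:02:04"}))  # local
```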

Unlike the logical network 100 of FIG. 1, which relies on an external router for its L3 routing (implemented as a standalone physical router, a VM dedicated to routing functionality, etc.), the logical network 200 of FIG. 2 uses MPREs 241-243 operating within the host machines 231-233 to perform its L3 routing functions. Together, the MPREs on the different host machines play the role of a single logical router for the VMs of the logical network 200. The LRE can be implemented as a data structure that is replicated across the host machines and instantiated on each of them as its local MPRE; in FIG. 2, the LRE is instantiated on host machines 231-233 as MPREs 241-243.
