Software – Rajiv Krishnamurthy, Laxmikant Gunda, Nicira Inc

Abstract for “Application/context-based management of virtual networks using customizable workflows”

Methods and apparatus for application and/or context-based management of virtual networks using customizable workflows are disclosed. A context engine monitors data traffic from virtual machines in a virtual network to identify applications. A policy manager receives the context information and creates a policy associated with an application entity in the virtual network policy plane. The policy allows application entities to be monitored and managed via the policy plane.

Background for “Application/context-based management of virtual networks using customizable workflows”

Virtualizing computer systems can provide many benefits, including the ability to run multiple computer programs on one hardware computer, replicate computer systems, move computer systems between multiple hardware computers, etc. Additional benefits can be realized by virtualizing networks to make it easier to use network infrastructure for multiple purposes.

"Infrastructure-as-a-Service" (IaaS) generally describes a suite of technologies provided by a service provider that allows for elastic creation of a virtualized, networked computing platform (sometimes referred to as a "cloud computing platform"). Enterprises may use IaaS as an internal business cloud computing platform (sometimes referred to as a "private cloud") that gives application developers access to infrastructure resources, such as storage, virtualized servers, and networking. By providing ready access to the resources needed to run an application, the platform enables developers to build, deploy, and manage web applications (or other types of networked applications) at a greater scale and at a faster pace than before.

Virtualized computing environments may include many processing units (e.g., servers), as well as other components such as storage devices and networking devices (e.g., switches). Current computing environment configuration requires substantial manual user input and configuration to install and configure the components. Particular applications and functionality must be placed in particular locations (e.g., network layers), or the application/functionality will not operate properly.

Virtual computing involves the provisioning of virtual resources and computing services. Example systems for virtualizing computer systems are described in a U.S. patent application filed Sep. 21, 2007 and granted as U.S. Pat. No. 8,171,485; U.S. Provisional Patent Application No. 60/919,965, entitled "METHOD AND SYSTEM FOR MANAGING VIRTUAL AND REAL MACHINES," filed Mar. 26, 2007; and U.S. Provisional Patent Application No. 61/736,422, entitled "METHODS AND APPARATUS FOR VIRTUALIZED COMPUTING," filed Dec. 12, 2012. All three are hereby incorporated by reference in their entirety.

Virtualized computing platforms may provide powerful capabilities for performing computing operations. However, taking advantage of these computing capabilities manually may be complex and/or require significant training and/or expertise. Customers often need to understand details and configurations of hardware and software resources to establish and configure a cloud computing platform. The example apparatus and methods disclosed herein facilitate management of virtual machine resources and virtual network resources in software-defined data centers and other virtualized computing platforms.

A virtual machine is a software computer that, like a physical computer, runs an operating system and applications. An operating system installed on a virtual machine is referred to as a guest operating system. Because each virtual machine (VM) is an isolated computing environment, virtual machines can be used as desktop or workstation environments, to test applications, and to consolidate server applications. Virtual machines can run on hosts or clusters; for example, a single host can house multiple virtual machines.

Policies and rules can be used to manage virtual networks associated with virtual machines. A network virtualization manager manages the network infrastructure used by executing programs (e.g., executing via a VM). Virtual networks can be provisioned for applications deployed in a data center. Network layers, planes, and associated services can be configured to enable an application VM to execute in one or more of the layers. While prior implementations provision and configure network layers and associated services manually and separately, some examples disclosed herein provision and configure network layers and services through automated definition and discovery. This allows tiered applications to be correlated, information flow to be determined, and an application entity to be automatically identified within a specific network layer (e.g., the policy layer, the management/policy layer, etc.).

In certain examples, when starting up a cloud computing environment or adding resources to an already established cloud computing environment, data center operators struggle to offer cost-effective services while making the resources of the infrastructure (e.g., storage hardware, computing hardware, and networking hardware) work together to achieve pain-free installation/operation and to optimize the resources for improved performance. Customers must be familiar with the details and configurations of their hardware resources in order to establish workload domains that deliver cloud computing services. In some examples, workload domains are mapped to a management cluster deployment (e.g., a vSphere cluster of VMware, Inc.) in a single rack deployment in a manner that is easier for users to understand and operate. As racks are added, cross-rack clusters become an option, and additional deployment options and management cluster capabilities enable more complex configurations of workload domains. Examples disclosed herein facilitate simpler management of such workload domains and configurations.

A management cluster is a group of physical machines and virtual machines (VMs) that hosts the core cloud infrastructure components necessary to manage a software-defined data center (SDDC) in a cloud computing environment that supports customer services. Cloud computing provides ubiquitous, convenient, on-demand access to a shared pool of configurable computing resources. Cloud computing customers can request allocations of such resources to support the services they require. For example, when a customer requests to run a service in the cloud computing environment, one or more workload domains may be created using resources from the shared pool.

Virtual networks can be used with virtual machines in an SDDC or other cloud computing environment. Virtual networks can be managed (e.g., with NSX available from VMware, Inc.) through policies and rules. Applications deployed in the SDDC can use such virtual networks and the underlying network infrastructure.

Manually configuring Open Systems Interconnection (OSI) network layers (e.g., Layer 1 (L1), Layer 2 (L2), Layer 3 (L3), etc.) and associated services, such as distributed firewall (DFW), load balancing (LB), etc., is difficult and time-consuming. The application VM must then be placed on the L2/L3 network. Some examples simplify and improve network configuration and application placement by defining applications at the management/policy layer. Several examples herein define an application entity within the policy/management layer. An application entity is a logically manageable entity that includes a group of VMs on which the application executes.

Some examples create logical overlay networks so that two VMs located at different places in the data center, and possibly across multiple data centers, believe they are connected to the same physical network by a single switch. A network tunnel is created between the hosts on which the two VMs run. A packet sent by the first VM to the second VM is encapsulated, with its L2 header, in an L3 header addressed to the second host, plus another L2 header for the first hop toward that host. The destination host then decapsulates the packet and delivers the original packet to the second VM. A central controller cluster orchestrates the encapsulation, decapsulation, and exchange. The controller cluster knows the location of each VM and translates logical switch configurations into physical switch configurations, enabling programming of a physical forwarding plane with instructions to encapsulate and forward packets according to the translations. A management server receives configuration input from users, such as the logical network configuration, and communicates this information to the controller cluster via APIs. Higher-level constructs, such as logical L3 routers, are also handled by the controller cluster and are distributed among the hosts with VMs connected to the logical router. Each logical router can perform functions similar to a physical router, including network address translation (NAT), source network address translation (SNAT), and access control lists (ACLs). Firewalls, load balancers, etc., can also be implemented. Firewall rules can be assigned to each port based on configurations, and some policy rules can be converted into firewall rules using context information. Firewall rules can be used for access control, permission regulation, and other purposes.
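For illustration only, the following sketch models the L2-in-L3 encapsulation described above. The class and field names are hypothetical stand-ins; a real overlay (e.g., VXLAN/Geneve) has concrete wire formats that this sketch does not reproduce.

```python
# Illustrative sketch of logical-overlay encapsulation (assumed names/fields).
from dataclasses import dataclass

@dataclass
class L2Header:
    src_mac: str
    dst_mac: str

@dataclass
class Frame:
    l2: L2Header
    payload: bytes

@dataclass
class OverlayPacket:
    outer_l2: L2Header   # first hop toward the destination host
    outer_src_ip: str    # source host (tunnel endpoint)
    outer_dst_ip: str    # destination host (tunnel endpoint)
    vni: int             # virtual network identifier for the logical switch
    inner: Frame         # original VM-to-VM frame, carried intact

def encapsulate(frame: Frame, src_host_ip: str, dst_host_ip: str,
                src_host_mac: str, next_hop_mac: str, vni: int) -> OverlayPacket:
    """Wrap the VM's frame in outer L3/L2 headers addressed to the peer host."""
    return OverlayPacket(L2Header(src_host_mac, next_hop_mac),
                         src_host_ip, dst_host_ip, vni, frame)

def decapsulate(pkt: OverlayPacket) -> Frame:
    """The destination host strips the outer headers, restoring the frame."""
    return pkt.inner

# Example: a VM on host 10.0.0.1 sends to a VM on host 10.0.0.2 over VNI 5001.
vm_frame = Frame(L2Header("00:aa:00:00:00:01", "00:aa:00:00:00:02"), b"app data")
tunneled = encapsulate(vm_frame, "10.0.0.1", "10.0.0.2",
                       "00:ff:00:00:00:0a", "00:ff:00:00:00:0b", 5001)
assert decapsulate(tunneled) == vm_frame
```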

"Availability" refers to the level of redundancy required to provide continuous operation expected for the workload domain. "Performance" refers to the computer processing unit (CPU) operating speeds, memory (e.g., gigabytes (GB) of random access memory (RAM)), mass storage (e.g., GB of hard disk drive (HDD) or solid state drive (SSD) storage), and power capabilities of a workload domain. "Capacity" refers to the aggregate number of resources (e.g., aggregate storage, aggregate CPU, etc.) across all servers associated with a cluster and/or a workload domain. In examples disclosed herein, a user may select requirements for redundancy, CPU operating speed, memory, storage, security, and/or power. Selecting higher requirements (e.g., higher redundancy, CPU operating speed, memory, storage, security, and/or power options) requires more resources to support the workload domain than selecting lower requirements.

Example Virtualization Environments

Many different types of virtualization environments exist. Three example types of virtualization environments are full virtualization, paravirtualization, and operating system virtualization.

Full virtualization, as used herein, is a virtualization environment in which hardware resources are managed by a hypervisor (e.g., a virtual machine monitor (VMM) and/or other software, hardware, and/or firmware that creates and executes virtual machines) to provide virtual hardware resources to a virtual machine. The computing device on which the hypervisor runs is referred to as the host machine or host computer, and each virtual machine running on the host machine is referred to as a guest machine. The hypervisor manages guest operating system execution and provides guest operating systems with a virtual operating platform. In certain cases, multiple operating system instances may share the virtualized hardware resources of the host computer.

Operating system virtualization is also referred to as container virtualization. As used herein, operating system virtualization refers to a system in which processes are isolated within an operating system. In a typical operating system virtualization system, a host operating system is installed on the server hardware. Alternatively, the host operating system may be installed in a virtual machine of a full virtualization environment or a paravirtualization environment. The host operating system of an operating system virtualization environment is configured to provide isolation and resource management for processes executing within the host operating system, including applications that run on the host operating system. The isolation of a set of processes is known as a container, and multiple containers can share the same host operating system. A process executing within a container is isolated from other processes executing on the host operating system. Operating system virtualization thus provides isolation and resource management without the overhead of a full virtualization environment. Example operating system virtualization environments include Linux Containers (LXC and LXD), Docker, OpenVZ, etc.

In some cases, a data center (or pool of linked data centers) may include multiple virtualization environments. For example, a data center may include hardware resources that are managed by a full virtualization environment, a paravirtualization environment, and/or an operating system virtualization environment. A workload may be deployed to any of the virtualization environments in the data center.

FIG. 1 depicts an example system 100 for managing a computing platform provider 110. As described below, the example system 100 includes an application director 106 and a manager 138 used to manage the computing platform provider 110. The example system 100 facilitates management of the provider 110 but does not include the provider 110; alternatively, the system 100 could be included in the provider 110.

The computing platform provider 110 provisions virtual computing resources (e.g., virtual machines, or "VMs," 114) that may be accessed via the computing platform 110 by users (e.g., users associated with an administrator 116 and/or a developer 118) and/or other programs, software, devices, etc.

An example application 102, implemented via the computing platform provider 110 of FIG. 1, includes multiple VMs 114. The example VMs 114 of FIG. 1 provide different functions within the application 102 (e.g., services, portions of the application 102, etc.). In the illustrated example, an administrator 116 and/or a developer 118 can customize one or more of the VMs 114 relative to a stock (or out-of-the-box) version of the services and/or application components. Services executing on the example VMs 114 may also depend on other ones of the VMs 114.

As illustrated in FIG. 1, the computing platform provider 110 may provide multiple deployment environments 112, for example, for development, testing, staging, and/or production. The computing platform provider 110 may provide services to the administrator 116, the developer 118, and/or other programs via a representational state transfer (REST) application programming interface (API) for cloud and/or computing services (e.g., the vCloud Automation Center (vCAC), vRealize Automation (vRA), and/or vCloud Director APIs available from VMware, Inc.). The computing platform provider 110 provisions virtual computing resources (e.g., the VMs 114) to provide the deployment environments 112 in which the administrator 116 and/or the developer 118 can deploy multi-tier applications. One particular example of a deployment environment that may be used to implement the deployment environments 112 of FIG. 1 is the vCloud DataCenter cloud computing service available from VMware, Inc.

In some cases, lighter-weight virtualization is employed by using containers 114a in place of the VMs 114 in the development environment 112. Example containers 114a are software constructs that run on top of a host operating system without the need for a hypervisor. Unlike virtual machines, the containers 114a do not instantiate their own operating systems. Like virtual machines, the containers 114a are logically distinct from one another, and numerous containers can run on a single computer, processor system, and/or development environment 112. Also like virtual machines, the containers 114a can execute instances of applications or programs separate from application/program instances executed by other containers on the same computer, processor system, and/or development environment 112.

The example application director 106 of FIG. 1, which may run on one or more VMs, orchestrates deployment of multi-tier applications onto one of the example deployment environments 112. As illustrated in FIG. 1, the example application director 106 includes a topology generator 120, a deployment plan generator 122, and a deployment director 124.

The example topology generator 120 generates a basic blueprint 126 that specifies a logical topology of an application to be deployed. The basic blueprint 126 generally captures the structure of an application as a collection of application components executing on virtual computing resources. For example, the basic blueprint 126 generated by the topology generator 120 for an online store application may specify a web application (e.g., in the form of a Java web application archive, or "WAR," file including dynamic web pages, static web pages, Java servlets, Java classes, and/or other properties, configuration, and/or resource files that make up a Java web application) executing on an Apache Tomcat application server that uses MongoDB as a data store. As used herein, the term "application" generally refers to a logical deployment unit including one or more applications, their dependent middleware, and/or operating systems. In the above example, the term "application" refers to the entire online store application, including the application server and database components, rather than just a particular web application itself. In some instances, the application may include the underlying hardware and/or virtual computing hardware used to implement the components.

The example basic blueprint 126 of FIG. 1 may be assembled from items (e.g., templates) from a catalog 130, which is a listing of available virtual computing resources (e.g., VMs, networking, storage, etc.) that may be provisioned from the computing platform provider 110, and available application components (e.g., software services, scripts, code components, application-specific packages) that may be installed on the provisioned virtual computing resources. An administrator 116 (e.g., an IT (Information Technology) or system administrator) may pre-populate and/or customize the example catalog 130 by entering specifications, configurations, properties, and/or other details about the items in the catalog 130. The example blueprints 126 may also indicate the order in which application components are installed, based on dependencies between them. For example, since a load balancer usually cannot be configured until a web application is up and running, the developer 118 may specify a dependency from an Apache service to an application code bundle.

The example deployment plan generator 122 of FIG. 1 generates a deployment plan 128 based on the basic blueprint 126. The deployment plan 128 includes deployment settings for the basic blueprint 126 (e.g., cluster size, CPU, memory, networks, etc.) and an execution plan of tasks specifying the order in which virtual computing resources are provisioned and application components are installed, configured, and started. The example deployment plan 128 of FIG. 1 provides an IT administrator with a process-oriented view of the basic blueprint 126 that indicates the discrete actions to be performed to deploy the application. Different deployment plans 128 may be generated from a single basic blueprint 126, for example, to test prototypes or to scale deployments up or down. Each deployment plan 128 is separated and distributed as local deployment plans having a series of tasks to be executed by the VMs 114 provisioned from the deployment environment 112. Each VM 114 coordinates execution of its tasks with a centralized deployment module (e.g., the deployment director 124) to ensure that tasks are executed in an order that complies with the dependencies specified in the application blueprint 126.
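As a rough illustration of the dependency-ordered execution plan described above, the sketch below topologically sorts a made-up blueprint dependency graph. The component names are invented for the example, and Python's graphlib stands in for whatever scheduling the deployment plan generator 122 actually performs.

```python
# Minimal sketch: turn blueprint dependencies into an ordered execution plan.
from graphlib import TopologicalSorter  # Python 3.9+

# "X: {Y}" means Y must be provisioned/started before X (illustrative names).
dependencies = {
    "load_balancer": {"web_app"},        # LB configured only after the web app runs
    "web_app": {"app_server", "code_bundle"},
    "code_bundle": {"app_server"},       # e.g., Apache -> application code bundle
    "app_server": {"vm_provisioned"},
    "database": {"vm_provisioned"},
    "vm_provisioned": set(),
}

plan = list(TopologicalSorter(dependencies).static_order())
print(plan)
# e.g., ['vm_provisioned', 'app_server', 'database', 'code_bundle',
#        'web_app', 'load_balancer']
```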

The example deployment director 124 of FIG. 1 executes the deployment plan 128 by communicating with the computing platform provider 110 via the interface 132 to provision and configure the VMs 114 in the deployment environment 112. The example interface 132 of FIG. 1 provides a communication abstraction layer by which the application director 106 may communicate with a heterogeneous mixture of providers 110 and deployment environments 112. The deployment director 124 provides each VM 114 with a series of tasks specific to the receiving VM 114 (herein referred to as a "local deployment plan"). The VMs 114 execute the tasks to install, configure, and/or start one or more application components. For example, a task may be a script that, when executed by a VM 114, causes the VM 114 to retrieve and install particular software packages from a central package repository 134. The deployment director 124 coordinates the tasks with the VMs 114 to observe installation dependencies between the VMs 114 according to the deployment plan 128. After the application has been deployed, the application director 106 may be used to monitor, modify, and/or scale the deployment.

The example manager 138 of FIG. 1 interfaces with the components of the system 100 (e.g., the application director 106 and the provider 110) to facilitate management of the resources of the provider 110. The example manager 138 includes a blueprint manager 140 to facilitate the creation and management of multi-machine blueprints, and a resource manager 144 to reclaim unused cloud resources. The manager 138 may additionally include other components for managing a cloud environment.

The illustrated blueprint manager 140 manages the creation and deployment of multi-machine blueprints, which define multiple machines to be provisioned and managed as one unit. For example, a multi-machine blueprint may include definitions for multiple basic blueprints that make up a service, such as an e-commerce provider with web servers, application servers, and database servers. A basic blueprint defines policies (e.g., hardware policies, security policies, network policies, etc.) for a single machine, such as a single virtual machine (e.g., a web server virtual machine) or container. Accordingly, the blueprint manager 140 facilitates more efficient management of multiple virtual machines and/or containers than manually managing (e.g., deploying) individual basic blueprints.

The example blueprint manager 140 of FIG. 1 additionally annotates basic blueprints and/or multi-machine blueprints to control how the workflows associated with those blueprints are executed. As used herein, a workflow is a series of actions and decisions to be executed in a virtual computing platform. To execute workflows, the example system 100 includes first and second distributed execution managers (DEMs) 146A and 146B. In the illustrated example, the first DEM 146A includes a first set of characteristics and is physically located at a first location 148A, and the second DEM 146B includes a second set of characteristics and is physically located at a second location 148B. The location and characteristics of a DEM may make it more suitable for performing certain workflows. For example, a DEM may include hardware particularly suited for certain tasks (e.g., high-end calculations) and may be located in a region that satisfies local laws. In this way, the example blueprint manager 140 matches basic blueprints and multi-machine blueprints with DEMs that have corresponding capabilities.

The resource manager 144 of the illustrated example facilitates recovery of computing resources of the provider 110 that are no longer being used. Automated reclamation may include identification, verification, and/or reclamation of unused or underutilized resources to improve the efficiency of the running cloud infrastructure.

Network Virtualization Examples

Software-defined networking (SDN) allows computer networks to be programmatically initialized, controlled, changed, and managed dynamically via open interfaces and abstraction of lower-level functionality. SDN, or network virtualization, addresses the fact that the static architecture of traditional networks does not support the dynamic, scalable computing and storage needs of modern computing environments such as data centers. A network can be divided into a plurality of planes (e.g., control plane, data plane, management/policy plane, etc.) to decouple the systems that decide where traffic is sent (e.g., an SDN controller, the control plane, etc.) from the underlying systems that forward traffic to the selected destination (e.g., the data plane, etc.).

In networking, a plane is an architectural component or area of operation of the network. Each plane accommodates a different type of data traffic and runs independently on top of the network hardware infrastructure. The data plane (sometimes referred to as the forwarding plane, user plane, carrier plane, or bearer plane) carries network user traffic. The control plane carries signaling data traffic; control packets carried by the control plane originate from or are destined for a router. The management or policy plane, which carries administrative data traffic, is considered a subset of the control plane.

In conventional networking, the three planes are implemented in the firmware of routers and switches. SDN decouples the data and control planes, allowing the control plane to be implemented in software rather than hardware. Software implementation enables programmatic access and increased flexibility in network administration. For example, network traffic can be shaped from a centralized control console without having to adjust individual network switches, and switch rules can be dynamically modified to prioritize, de-prioritize, or block certain packet types.

Each network plane is associated with one or more data transfer/communication protocols. For example, management plane protocols, such as a command-line interface (CLI), the Network Configuration Protocol (NETCONF), and representational state transfer (RESTful) application programming interfaces (APIs), are used to configure interfaces, subnets, routing protocols, Internet Protocol (IP) addresses, etc. A router may, in certain cases, run control plane routing protocols, such as OSPF, EIGRP, BGP, etc., to discover adjacent devices and network topology information. The router inserts the results of these control plane protocols into tables, such as a routing information base (RIB) and a forwarding information base (FIB). Data plane software and/or hardware (e.g., field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc.) use FIB structures to forward data traffic over the network. Management/policy plane protocols, such as the Simple Network Management Protocol (SNMP), can be used to monitor device operation, device performance, interface counter(s), etc.
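To make the FIB's role concrete, the toy sketch below performs the longest-prefix-match lookup that a data plane uses to pick a next hop. Real FIB lookups run in FPGA/ASIC data paths; the prefixes and interface names here are made up for illustration.

```python
# Toy data-plane forwarding step: longest-prefix match against a FIB.
import ipaddress

FIB = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"): "eth0",   # default route
}

def forward(dst_ip: str) -> str:
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in FIB if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return FIB[best]

print(forward("10.1.2.3"))   # eth2 (the more specific /16 beats the /8)
print(forward("192.0.2.1"))  # eth0 (falls through to the default route)
```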

A network virtualization platform decouples the hardware planes from the software planes such that the host hardware can be administratively programmed to assign its resources to the software planes. This programming allows for virtualization of CPU resources, memory, other data storage, network input/output (IO) interfaces, and/or other network hardware resources. Virtualization of hardware resources facilitates the implementation of multiple virtual network applications, such as firewalls, routers, web filters, intrusion detection systems, etc., within a single hardware appliance. Thus, virtual or logical networks can be created on top of an existing physical network, and the virtual networks can have the same properties as the underlying physical network.

In a network virtualization environment, applications are interconnected by a virtual switch rather than a physical, hardware-based network switch. Virtual switches are software-based "switches" that involve movement of packets up and down a software stack that relies on the same processor(s) executing the applications. A virtual switch (also referred to as a soft switch or vSwitch) can be implemented on each server in a virtual network, allowing packets to be encapsulated across multiple vSwitches that forward data packets in a network layer overlaid on top of the physical network, as directed by a controller. The controller communicates with the vSwitches using a protocol such as OpenFlow.

Similar to a virtual machine or a software container, a virtualized network presents logical network components (e.g., logical switches, routers, firewalls, load balancers, virtual private networks (VPNs), etc.) to connected workloads. Virtual networks are programmatically created, provisioned, and managed, with the underlying physical network serving as a simple packet-forwarding backplane for data traffic on the virtual network. Network and security services are allocated to each VM according to its needs and stay attached to the VM as the VM moves among hosts in the dynamic virtualized environment. A network virtualization platform (e.g., VMware's NSX, etc.) can be deployed on top of existing network hardware and supports fabrics and geometries from multiple vendors. Certain applications and monitoring tools work seamlessly with the network virtualization platform without modification.

In some cases, the virtual network forms an address space in which logical networks appear within physical networks. For example, an L2 virtual network can be created even though the underlying physical network is L3 (Layer 3); conversely, if the physical network is L2, an L3 virtual network can be created on top of it. When a data packet leaves a VM, the packet is placed onto the physical network via a lookup from the virtual network, and the packet is delivered back to a VM via a lookup from the virtual network as well. The virtual network is therefore decoupled from the physical network, creating a layer of abstraction between the end systems and the network infrastructure. This enables the creation of logical networks that are independent of the network hardware.

For example, two VMs can be located at arbitrary locations in a data center (and/or across multiple data centers, etc.), and a logical overlay network can connect the two VMs such that they believe they are connected to the same physical network by a single switch. A network tunnel is created between the host computers on which the two VMs reside. A packet from the first VM is encapsulated, with its L2 header, in an L3 header addressed to the second host, and a second L2 header is generated for the first hop toward the second VM's host (e.g., the destination host). The destination host then decapsulates the packet and provides the original packet to the second VM. A central controller cluster can orchestrate the routing from the first VM to the second VM. The controller cluster knows the location of each VM and converts logical switch configurations into physical switch configurations to program the physical forwarding plane with instructions to encapsulate and forward the packet according to the translation(s). A management server receives configuration input from the user, such as the logical network configuration, and transmits the input to the controller cluster via one or more APIs.

The controller cluster also manages higher-level constructs, such as logical L3 routers, which are distributed across the hosts that have VMs connected to the logical router. Each logical router can provide capabilities of a physical router, such as network address translation (NAT), source NAT (SNAT), access control lists (ACLs), etc. Distributed firewalls, load balancers, etc., can also be implemented by the controller cluster. For example, a configuration can specify the firewall rules to be applied to each port.

Certain examples provide a novel architecture for capturing contextual attributes on host computers that execute one or more virtual machines, and for consuming the captured contextual attributes to perform services on the host computers. Certain examples execute a guest introspection (GI) agent on each machine from which contextual attributes are to be captured. In addition to executing one or more VMs on each host computer, certain examples also execute a context engine and one or more attribute-based service engines on each host computer. Through the GI agents of the VMs on a host, the context engine of that host collects contextual attributes. The context engine provides the contextual attributes to the service engines, which, in turn, use these contextual attributes to identify service rules specifying context-based services to perform on processes executing on the VMs and/or on data message flows sent by or received for the VMs.
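As a hedged sketch of this flow, the code below models a context engine handing contextual attributes to a service engine, which matches them against context-based service rules. The attribute names, rule format, and first-match semantics are assumptions made for illustration, not the actual service engine interface.

```python
# Illustrative context-attribute record and context-based service rules.
from dataclasses import dataclass

@dataclass
class FlowContext:
    user: str
    group: str
    process: str
    app: str
    dst_port: int

@dataclass
class ServiceRule:
    match: dict          # attribute name -> required value
    action: str          # e.g., "allow", "deny", "redirect"

    def applies(self, ctx: FlowContext) -> bool:
        return all(getattr(ctx, k) == v for k, v in self.match.items())

rules = [
    ServiceRule({"app": "nginx", "dst_port": 80}, "allow"),
    ServiceRule({"group": "contractors"}, "deny"),
]

def decide(ctx: FlowContext) -> str:
    for rule in rules:   # first matching rule wins in this toy example
        if rule.applies(ctx):
            return rule.action
    return "deny"        # assumed default action

print(decide(FlowContext("alice", "engineering", "nginx.exe", "nginx", 80)))
# -> allow
```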

As used herein, a data message refers to a collection of bits in a particular format sent across a network. Data message may refer to various formats of bit collections sent across a network, such as Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. Also as used herein, references to L2, L3, L4, and L7 layers (or Layer 2, Layer 3, Layer 4, and Layer 7) refer, respectively, to the second data link layer, the third network layer, the fourth transport layer, and the seventh application layer of the OSI (Open System Interconnection) model.

Network Plane System and Workflow Examples

Networks, including virtual networks, can be divided into multiple planes or layers. FIG. 2 shows an example network structure 200 including a data plane 210, a control plane 220, and a management/policy plane 230. In the example of FIG. 2, the data plane 210 facilitates data packet forwarding and switching (e.g., according to a forwarding table, etc.), as well as network address translation, neighbor addresses, NetFlow accounting, access control lists (ACLs), logging, error signaling, etc. The control plane 220 facilitates routing, including static routes, neighbor information, IP routing tables, link state, etc. Protocols executing in the control plane 220 facilitate routing, interface state management, connectivity management, adjacent device discovery, topology/reachability information exchange, service provisioning, etc. The management/policy plane 230 facilitates network configuration and interfacing, including via a command-line interface (CLI), a graphical user interface (GUI), etc.

While the data and control planes 210 and 220 accommodate networking constructs, such as switches, routers, and ports, these planes 210 and 220 do not understand compute constructs, such as applications. Some examples therefore create application entities in the management plane 230. Certain examples are thereby able to tie applications to network behavior automatically, rather than manually.

In some cases, the infrastructure 100 can be used to identify and manage applications and/or other resources at the policy layer. Some examples allow creation, at the policy layer, of applications that run in a virtualized network environment. Using certain examples, an application entity can be defined within the policy layer. An application entity is a logically manageable entity that includes a group of the VMs 114 that execute the application.

In certain examples, a multi-tier application (e.g., a three-tier application, an n-tier application, etc.) is divided into one group of VMs 114 per application tier. For example, a three-tier application (e.g., with presentation, business logic, and data store tiers) has three groups of VMs 114: one group for the web/presentation tier, one for the application logic tier, and one for the data store tier.

Certain examples facilitate discovery of user logins via the VMs 114 and of associated applications executing data and command flows via a context engine. The context engine detects individual processes executing within the VMs 114 and/or users logging into the VMs 114. The policy layer can link the process(es), user(s), and other information to form tiered applications. Flow information for a given user/application can be correlated with the flow information of other users/applications connected to the flow.
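Purely as an illustration of this kind of flow correlation, the sketch below groups VMs into tiers by walking observed flows outward from a known entry point. The record format, VM names, and one-hop-per-tier heuristic are all invented for the example.

```python
# Toy correlation of per-flow context into application tiers.
flows = [
    {"src_vm": "web-1", "dst_vm": "app-1", "process": "httpd",  "user": "alice"},
    {"src_vm": "web-2", "dst_vm": "app-1", "process": "httpd",  "user": "bob"},
    {"src_vm": "app-1", "dst_vm": "db-1",  "process": "tomcat", "user": "svc-app"},
]

def tiers_from(entry_vms: set[str], flows: list[dict]) -> list[set[str]]:
    """Follow flows outward from the entry VMs, one hop per tier."""
    tiers, seen, frontier = [], set(entry_vms), set(entry_vms)
    while frontier:
        tiers.append(frontier)
        frontier = {f["dst_vm"] for f in flows
                    if f["src_vm"] in frontier and f["dst_vm"] not in seen}
        seen |= frontier
    return tiers

print(tiers_from({"web-1", "web-2"}, flows))
# [{'web-1', 'web-2'}, {'app-1'}, {'db-1'}] -> web, app, and data-store tiers
```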

Network virtualization functionality can be placed in different planes of the network structure 200 described above. The context engine, for example, can be implemented in the management plane 230, the data plane 210, and/or the control plane 220.

In some cases, the context engine is implemented in both the management plane 230 and the data plane 210, as shown in FIG. 2. The context engine can be implemented in two parts: a context engine management plane (MP) component 240 and a context engine data plane (DP) component 250. Together, the context engine MP 240 and the context engine DP 250 perform the functions of the context engine.

In some cases, a policy engine 260 (also referred to as a policy manager) and/or other operations/management component(s) can also be implemented in the management, data, and/or control planes 230, 210, and/or 220. The policy engine 260 creates, stores, and/or distributes policy(-ies) to applications running on virtual networks and VM(s) 114, for example.

In some cases, rather than the policy engine 260 creating rules based on VM 114 addresses, context information regarding the guest VMs 114 is gathered, and the network virtualization manager defines policies based on the captured context. Rules can then be created from policies based on application and/or user. Thus, the policy engine 260 can create application-based rules. The policy engine 260 can also define application entities in the policy plane 230 based on applications running on the host 110.
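The sketch below illustrates one plausible reading of this step: expanding an application-level policy into per-address firewall rules using captured context about where the applications run. The policy and context formats are assumptions made for the example.

```python
# Sketch: derive address-based firewall rules from an application-based
# policy plus captured context (illustrative data shapes).
app_context = {                      # gathered by the context engine
    "hr-portal":  {"vm_ips": ["10.1.1.10", "10.1.1.11"]},
    "payroll-db": {"vm_ips": ["10.1.2.20"]},
}

policy = {"src_app": "hr-portal", "dst_app": "payroll-db",
          "port": 5432, "action": "allow"}

def to_firewall_rules(policy: dict, ctx: dict) -> list[dict]:
    """Expand one application-based policy into per-IP firewall rules."""
    return [{"src_ip": s, "dst_ip": d,
             "port": policy["port"], "action": policy["action"]}
            for s in ctx[policy["src_app"]]["vm_ips"]
            for d in ctx[policy["dst_app"]]["vm_ips"]]

for rule in to_firewall_rules(policy, app_context):
    print(rule)
```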

In some cases, the management/policy plane 230 can be subdivided into a management plane 230 and a policy plane 235 (see, e.g., FIG. 3). In FIG. 3, the policy engine 260 resides in the policy plane 235, and the context engine MP 240 remains in the management plane 230. As shown in FIG. 3, an application entity 302 can be defined in the policy plane 235 based on context information from processes running in the VMs 114, extracted by the context engine MP 240, and on policies from the policy engine 260.

As illustrated in FIG. 4, the context engine MP 240 pulls context from multiple sources within the management plane 230, including multi-cloud compute context 402, virtual center (VC) compute context 404, identity context 406, mobility context 408, endpoint context 410, and network context 412. In some cases, the compute context includes multi-cloud context 402 from various cloud vendors, such as Amazon, Azure, etc., as well as virtual center inventory (e.g., CPU, memory, operating system, etc.) 404. Identity context 406 can include user and group information from directory services, such as Active Directory (AD), Lightweight Directory Access Protocol (LDAP), Keystone, or other identity management systems. Mobility or mobile context 408 can include location, AirWatch mobile device management information, international mobile equipment identity (IMEI) codes, etc. In certain examples, endpoint context 410 includes DNS, process, and application inventory information, etc. Network context 412 includes IP address, port, MAC address, bandwidth, quality of service (QoS), latency, congestion, etc. The context gathered in the management plane 230 can then be used to group objects for use by policies and rules. For example, the context engine MP 240 can store context information 414 and generate one or more outputs, including grouping objects 416, analytics 418, and policy 420.

A second component of the context engine can be instantiated in the data plane 210 as the context engine DP 250. The context engine DP 250 collects context from a thin agent in the data plane 210. Data plane context can include user context, process context, application inventory, and system context. User context includes user ID, group ID, etc. Process context includes process name, path, libraries, etc. Application inventory includes company name, product name, installation path, etc. System context can include operating system information, hardware information, and network configuration. In some cases, user and process context is gathered from the guest on a per-flow basis, while application inventory and system context are gathered on a per-VM 114 basis. This context information may be referred to as realized or runtime context.

Context-based services can satisfy many use cases, and these services can be implemented as plug-ins to the context engine DP 250. Example services include application visibility, identity firewall (IDFW), application firewall, packet capture, process control, vulnerability scan, and load balancer (LB). The application visibility plug-in, for example, gathers process context per flow and stores it in the management plane, where a user interface uses the information to visualize the flows between the VMs 114 and the processes within the VMs 114.
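For illustration, the sketch below models the plug-in pattern described here: each context-based service registers a per-flow handler that the context engine invokes. The registry, decorator, and handler signatures are invented stand-ins, not the actual NSX plug-in interface.

```python
# Sketch of a plug-in model for context-based services (assumed interface).
from typing import Callable

PLUGINS: dict[str, Callable[[dict], None]] = {}

def plugin(name: str):
    def register(fn: Callable[[dict], None]):
        PLUGINS[name] = fn
        return fn
    return register

@plugin("app_visibility")
def app_visibility(flow_ctx: dict) -> None:
    # Would persist the per-flow process context to the management plane.
    print("visibility:", flow_ctx["process"], "->", flow_ctx["dst_ip"])

@plugin("idfw")
def identity_firewall(flow_ctx: dict) -> None:
    # Would program the DFW with the user identity for this flow.
    print("idfw: user", flow_ctx["user"])

def on_new_flow(flow_ctx: dict) -> None:
    for handler in PLUGINS.values():   # context engine fans context out
        handler(flow_ctx)

on_new_flow({"process": "chrome.exe", "user": "alice", "dst_ip": "10.0.0.5"})
```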

FIG. 5 illustrates an example implementation of the context engine and context-based services. As shown in FIG. 5, the context engine MP 240 communicates with context-based services MP 502 in the management plane 230, and the context engine DP 250 communicates with context-based service plug-ins 504 (e.g., in a hypervisor 510) in the data plane 210. For example, the context engine can be instantiated in the management plane 230 as the context engine MP 240 and in the data plane 210 as the context engine DP 250. The context-based service engine(s) can likewise be instantiated in the management plane 230 as the context-based services MP 502 and in the data plane 210 as the context-based service plug-ins 504.

In the example context-based services MP 502, a management plane analytics (MPA) message bus 506 facilitates communication with respective services, such as application visibility 508, IDFW 510, application firewall 512, process control 514, vulnerability scan 516, and LB 518. Each service 508-518 is associated with a corresponding plug-in 520-530 (e.g., application visibility 520, IDFW 522, application firewall 524, process control 526, vulnerability scan 528, and LB 530), accessible via the message bus 506 through an MPA library 532.

In some cases, the application visibility service 508 uses the application visibility plug-in 520 to gather process context per flow and store it in the management plane 230. A user interface uses this information to visualize the flows between the VMs 114 as well as the processes within the VMs 114.

In some cases, the IDFW service 510 operates with the IDFW plug-in 522 to gather user context per connection flow. This information is used to program the DFW, creating an identity-based firewall. The application firewall service 512 operates with the application firewall plug-in 524 to gather process/application context per connection flow. This information is used to program the DFW, creating an application-based firewall.

In some cases, the process control service 514 leverages the process control plug-in 526 to start, stop, pause, resume, and/or terminate processes executing on the VM(s) 114 via the network, based on the user and/or application context. The LB service 518 uses the LB plug-in 530 to load-balance traffic based on policies.

Using the plug-in model, additional context-based services can easily be added to the existing services. As shown in FIG. 5, each context-based service includes an MP component that configures the corresponding data plane 210 service component and gathers information from the DP component to be stored in the management plane 230. The communication bus linking the services 508-518 in the management plane 230 can leverage MPA, remote procedure calls (RPC), etc. The context engine DP 250 can also cache realized context, for example.

As shown in FIG. 5, the context engine DP 250 includes a context engine core 534 that operates with the context-based plug-ins 504 and caches realized context in a realized context cache 536. The context engine core 534 operates with an Endpoint Security (EPSec) library 538 and communicates via a multiplexer 540 with one or more VMs 114. Each VM 114 generates guest context and contributes that context through a respective thin agent 542-548.

In some cases, context information can be stored in the cache 536, and a unique identifier (e.g., an ID, token, etc.) can be generated once the context has been stored. The identifier for each stored context can be provided to the respective services 508-518 instead of passing the entire context. This allows optimization or improvement of context passing between the context services and verticals, such as DFW, DLB, etc. The identifier (e.g., token, etc.) can be passed in one or more packets, such as in a packet header (e.g., VxLAN/Geneve), and can be used across hypervisors 510.
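A minimal sketch of this token scheme follows, assuming a simple in-memory cache; the token format and cache API are illustrative, and a compact identifier like this is what would be carried in a header field instead of the full context.

```python
# Sketch: cache realized context and pass a short token instead of the
# full context (assumed cache and token format).
import uuid

class ContextCache:
    def __init__(self):
        self._by_token: dict[str, dict] = {}

    def store(self, context: dict) -> str:
        token = uuid.uuid4().hex[:8]      # compact ID fits in a header option
        self._by_token[token] = context
        return token

    def resolve(self, token: str) -> dict:
        return self._by_token[token]

cache = ContextCache()
tok = cache.store({"user": "alice", "app": "erp", "process": "java"})
# Services receive only `tok` with the packet and look the context up on demand.
print(tok, "->", cache.resolve(tok))
```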

The policy engine 260 interacts with the context engine MP 240 in the management/policy plane 230, while the context engine DP 250 leverages the plug-ins 504 and VM 114 guest context in the data plane 210. Based on context information extracted via the context engines 240 and 250, the policy engine 260 implements application entities in the management plane 230.

In some cases, the context engine (and its components, the context engine MP 240 and the context engine DP 250) leverages information from guest introspection (GI) and the context-based services 502, 504 to determine applications, users, and/or other processes operating in context on the system 100 and its VMs 114. GI, for example, provides a set of elements, including APIs for endpoint security, that allow offloading of anti-malware and/or other processing to a hypervisor 510-level agent. The services 502, 504 can use the context information to enable one or more user workflows, such as virtual machine, network, and/or application entity creation, modification, and control.

In some cases, the network virtualization manager provides the services 502, 504 as well as the ability to create virtual networks, such as an L2 network (e.g., for switching, etc.), an L3 network (e.g., for routing, etc.), etc. The manager provides IDFW 510 and application firewall 512 services that allow users to create rules to allow or deny traffic from a source to a destination (e.g., according to user identity, IP address, etc.). The LB 518 allows load balancing based on which user is logged in, which application is responsible for the data/message traffic, etc.

In the hypervisor 510, context for each data/message flow is collected by the context engines 240, 250. When a user/application attempts to connect to a server, data packets are sent, and an agent added to the associated VM 114 allows the flow to be monitored. The identity of the user can be determined, along with other information that is hidden from or otherwise unavailable elsewhere in the network (e.g., which group the user belongs to, which application and process(es) are involved, etc.). The context engines 240, 250 gather the flow information and send it to the service(s) 502, 504 and/or the policy engine 260. The firewalls 510, 512, for example, can use the collected flow context, together with rules and other information, to allow or deny a connection, flow, etc. In certain cases, an administrator or other operator can visualize the context information (e.g., which applications are currently running in the data center, etc.).

FIG. 6 shows an example implementation of the policy engine 260. The example policy engine 260 of FIG. 6 includes a context input processor 602, a rules engine 604, an options data store 606, and a policy generator 608. The context input processor 602 receives context from the context engine MP 240, which can be used to identify, group, and/or set permissions for an application process. The context input processor 602 processes the input context according to one or more rules provided by the rules engine 604.

By applying the rule(s) to the context information, one or more policies can be retrieved from the options data store 606, such as by the policy generator 608. The policy generator 608 uses the context information and the available options to create one or more policies. For example, one or more policies can be created to govern and/or instantiate an application entity in the policy layer 235 for an application that executes on a VM 114 monitored by the context engine MP 240. One or more policies can be created to control execution of the application on one or more VMs 114. One or more policies can also be created to control the instantiation and/or use of VMs 114 and/or virtual networks to host the application and its associated user/group.

In certain cases, context information, policy(-ies), etc., can be visualized and provided to users. An interface generator 610 can be included in the policy engine 260 to provide a visual representation of policies, users, applications, connectivity, options, etc., for review and modification by a user. For example, an administrator can create services and networks on top of policy entities in the policy layer 235 using the resulting graphical user interface.

A template generator 612 can save a particular configuration, including an application entity, network, and/or service, as a template. Apart from the interface, a user or process can trigger creation of a template based on context, settings, and/or other information from a current configuration of the system 100.

In some cases, a modeling metalanguage (e.g., a markup language, etc.) can be used by the policy generator 608 to create a policy model, including a policy structure and associated meta-information. Name, properties, metadata, and relationship information (e.g., between a source object and a destination object) can be used to form the policy model. A policy tree structure can define names, properties, metadata, etc. The policy model allows relationships to be created and queried/addressed to provide information about the connections between objects. Relationships can facilitate access from policy to consumer, from policy to policy, etc. A policy tree may include user-managed policy objects, system-realized policy objects, etc. Some policy objects may not persist, while other policy objects may persist in the options data store 606. The policy engine 260 can interact with the network virtualization manager via a policy role, allowing the policy engine 260 to make API calls to obtain debugging information, status information, connection information, etc. The policy model also allows definition of allowed (whitelist) communication, denied (blacklist) communication, and other permissions.
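The sketch below models a policy tree of named nodes with typed, queryable relationships, as described above. The class name, relationship kinds, and tree contents are invented for illustration and do not reflect the actual policy metalanguage.

```python
# Sketch: policy model as a tree of nodes with typed relationships.
from dataclasses import dataclass, field

@dataclass
class PolicyNode:
    name: str
    properties: dict = field(default_factory=dict)
    children: list["PolicyNode"] = field(default_factory=list)
    relations: list[tuple[str, "PolicyNode"]] = field(default_factory=list)

    def add_child(self, node: "PolicyNode") -> "PolicyNode":
        self.children.append(node)
        return node

    def relate(self, kind: str, target: "PolicyNode") -> None:
        self.relations.append((kind, target))

root = PolicyNode("tenant-a")
web = root.add_child(PolicyNode("web-group", {"managed_by": "user"}))
app = root.add_child(PolicyNode("app-group"))
contract = root.add_child(PolicyNode("web-to-app", {"port": 8443, "action": "allow"}))
web.relate("consumes", contract)   # queryable source/destination relationship
app.relate("provides", contract)

# Relationship query: which group provides the contract that web-group consumes?
providers = [n.name for n in (app, web) for kind, c in n.relations
             if kind == "provides" and c is contract]
print(providers)  # ['app-group']
```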

In some cases, endpoints (VMs 114, IP addresses, VxLANs, etc.) can be included in a group whose members are logically connected to provide a particular service for a particular application. Members of a group get the same policy, for example. An application, in this model, comprises a set of logically related groups.

A contract is a security policy that regulates communication between the groups of an application. A contract includes a set of rules that allow or deny services (e.g., by port, protocol, and/or classifier). A group can provide multiple contracts and consume multiple contracts; in some cases, a given contract can be both provided and consumed by groups. Each group can have a tag, and for a pair of provider and consumer groups to communicate, their tags must match. The provider and consumer tags can be set in a contract rule to identify the source (e.g., provider) and destination (e.g., consumer) of the rule. If two groups provide and consume the same contract, communication is permitted between the groups whose tags match. Tags are optional and are applied in certain cases if configured by the user.
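As a hedged illustration of the tag-matching rule just described, the sketch below checks whether a provider group and a consumer group of the same contract may communicate. The data shapes and the treatment of unset tags are assumptions for the example.

```python
# Sketch: contract provider/consumer tag matching (illustrative data shapes).
def may_communicate(provider: dict, consumer: dict, contract: str) -> bool:
    if (contract not in provider["provides"]
            or contract not in consumer["consumes"]):
        return False
    p_tag, c_tag = provider.get("tag"), consumer.get("tag")
    if p_tag is None and c_tag is None:   # tags are optional
        return True
    return p_tag == c_tag                 # matching tags required when set

app_group = {"provides": {"web-to-app"}, "consumes": set(), "tag": "blue"}
web_group = {"provides": set(), "consumes": {"web-to-app"}, "tag": "blue"}
other_web = {"provides": set(), "consumes": {"web-to-app"}, "tag": "green"}

print(may_communicate(app_group, web_group, "web-to-app"))  # True (tags match)
print(may_communicate(app_group, other_web, "web-to-app"))  # False (mismatch)
```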

FIG. 7 shows an example contract arrangement 700 with multiple providers and consumers. The policy layer 235 supports multiple tenants and provides an application-oriented view of a network management API. Tenants can be isolated so that policies affecting one tenant's applications do not affect other tenants. As shown in FIG. 7, a tenant 702 has a contract 704 and two applications 706, 708. Each application 706, 708 has a web group 710, 712 and an app group 714, 716. Each group 710-716 is associated with a provider/consumer pair 718-732, and some of the consumers and/or providers are tagged 734-740 according to the contract 704.

Using the policy hierarchy shown in FIG. 7, a given application 706, 708 is controlled by the policies associated with it and its groups; policies unrelated to the application 706, 708 should have no effect on it. However, policies consumed by multiple applications 706, 708 can affect the behavior of each of those applications 706, 708. In certain cases, if policies conflict for a group 710-716 and/or endpoint 718-732, explicit rules and/or implicit rules can be used to determine which policy governs (e.g., allow takes precedence over deny, etc.).

In some cases, endpoints of the network virtualization managed by the policy engine 260 may receive policies from both infrastructure and tenant spaces. Infrastructure policies, generally defined by infrastructure administrators, can be applied to any endpoint and may have a higher priority than user/tenant-level policies. Rather than the application-centric view conveyed by a user/tenant-level policy, infrastructure policies may span multiple application(s)/tenant(s), for example.
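A minimal sketch of priority-based resolution across the two policy spaces follows. The numeric convention (lower value wins) and the rule format are assumptions made for the example, not the actual precedence scheme.

```python
# Sketch: an endpoint resolving rules from infrastructure and tenant spaces
# by priority (illustrative convention: lower number = higher priority).
rules = [
    {"source": "infrastructure", "priority": 10, "match": "port 22",  "action": "deny"},
    {"source": "tenant",         "priority": 50, "match": "port 22",  "action": "allow"},
    {"source": "tenant",         "priority": 50, "match": "port 443", "action": "allow"},
]

def effective_action(port_match: str) -> str:
    candidates = [r for r in rules if r["match"] == port_match]
    winner = min(candidates, key=lambda r: r["priority"])  # highest priority wins
    return f'{winner["action"]} (from {winner["source"]})'

print(effective_action("port 22"))   # deny (from infrastructure)
print(effective_action("port 443"))  # allow (from tenant)
```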

FIG. 8 shows an example policy mapping 800 between an infrastructure space 804 and a user/tenant space 802. In the example of FIG. 8, tenants 806-810 have corresponding applications 812-816. Each application has a web group 818-822 and an app group 824-828. Each group 818-828 includes a number of endpoints 830-840 in the user/tenant space 802.

A subset of the endpoints 830, 832, 836, 838 is also associated with an infrastructure group 842, 844. Each infrastructure group 842, 844 belongs to an infrastructure domain 846, 848, and the infrastructure domains 846, 848 are associated with an infrastructure tenant. As shown in FIG. 8, individual endpoints 830, 832, 836, 838 consume policies from both the infrastructure space 804 and the user/tenant space 802. For example, the application group 824 consumes policies 860 with priority 862 from the infrastructure space 804, as well as policies from the user/tenant space 802 that, in this example, take precedence over the policies 860 from the infrastructure space 804.

FIG. 10 shows an example network policy 1000 that describes a network topology for an application and defines external connectivity for the application. As shown in FIG. 10, a tenant 1002 is associated with application(s) 1004, an L2 context 1006, and an L3 context 1008. The L2 context 1006 represents a broadcast domain and can be mapped to a virtual wire (logical switch) or a VLAN. The L3 context 1008 represents a routing domain and can be mapped, for example, to a TIER1 router or an edge.

A subnet 1012 may be configured for the L2 context 1006. The subnet 1012 can be either an external subnet (e.g., reachable via an external gateway 1010 within an external group 1022) or a local subnet (e.g., reachable only within its routing domain). An L2 connectivity relationship 1016 can be defined between groups 1014 and the L2 context 1006 to establish network connectivity between them. An L3-linked relationship 1018 links the L2 context 1006 with the L3 context 1008.

External connectivity 1020 can be expressed by connecting the L3 context 1008 with an external gateway 1010. The external gateway 1010 is a pre-configured router (or the equivalent edge in the virtual network) that provides external connectivity for the applications 1004. In some examples, the external gateway 1010 is not managed by policy, and, therefore, no policies are applied to its services.

In certain cases, the L2 context 1006 can be created to form an isolated network, and the subnet 1012 can be assigned to the L2 context 1006. L2 services, such as DHCP and metadata proxy, can support the isolated network. A routed network can be created by linking the L2 context 1006 with the L3 context 1008; once the L2 context 1006 has been created, the network can be linked to the L3 context 1008. A subnet assigned to a routed network can be made routable from the external gateway 1010 by marking the subnet 1012 as external and connecting the L3 context 1008 with the external gateway 1010. For example, the subnet 1012 can then be advertised to the external gateway 1010.
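The sketch below walks through this workflow: create an isolated L2 context, assign it a subnet, link it to an L3 context for routing, and mark the subnet external so it can be advertised to the external gateway. All class and object names are illustrative.

```python
# Sketch: building the network topology intent described above.
class L2Context:
    def __init__(self, name: str):
        self.name, self.subnet, self.l3 = name, None, None

class L3Context:
    def __init__(self, name: str):
        self.name, self.external_gateway = name, None

net = L2Context("app-tier-net")                              # isolated broadcast domain
net.subnet = {"cidr": "172.16.10.0/24", "external": False}   # local subnet

router = L3Context("tenant-router")   # routing domain (e.g., a TIER1 router)
net.l3 = router                       # L2 -> L3 link: the network is now routed

router.external_gateway = "edge-gw-01"   # pre-configured external gateway
net.subnet["external"] = True            # advertise the subnet via the gateway

print(net.name, net.subnet, "via", router.external_gateway)
```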

As illustrated in FIG. 10, an endpoint 1024 can be assigned a public/floating IP address to facilitate external connectivity (e.g., via SNAT rules, etc.). The endpoint 1024 can also facilitate port forwarding and/or translation (e.g., via dynamic NAT (DNAT) rules, etc.).

As mentioned above, the interface generator 610 can generate an interface that allows an administrator or other user to view configuration and operation information for a group being monitored. FIG. 11 shows an example application monitoring interface 1100 for a network virtualization manager 1102 that summarizes running applications 1104, generated traffic 1106, and flows 1108 among the VMs 114. FIG. 12 shows an additional interface 1202 through which the network virtualization manager 1102 collects data for a group (e.g., a security group, etc.).

FIG. 13 shows an example interface 1300 that provides flow information 1302 for the network virtualization manager 1102, including a listing of VMs 114 and a flow visualization 1304. A VM entry 1306 can be selected from the list to visualize the corresponding flow 1304. Additional information regarding the flow 1304 and/or VM 114 operation can be viewed by selecting a VM representation 1308.

FIG. 14 shows an example interface 1400 with a pop-up box 1410 that lists applications executing on a VM 114 for the network virtualization manager 1102. The example interface 1400 can provide information such as application name 1412, version 1414, user 1416, and status 1418. FIG. 15 shows another example interface 1500 with a pop-up box 1510 that visualizes an example application flow between endpoints, including application, port, protocol, and bandwidth. FIG. 16 shows another example interface 1600 displaying application flows 1602 as well as associated traffic details 1604 for a particular flow 1606.

FIG. 17 shows an example interface 1700 in which collected application data is displayed in an example window 1706. As shown in FIG. 18, after application data collection has been initiated using an option 1702, a percentage or other indication of completion 1708 can be shown in the window 1706. The option 1702 can also be used to cancel application data collection.

FIG. 19 shows the example interface 1700 after application data collection is complete. Icons 1712-1720 represent the identified processes in the window 1706, and an icon 1712-1720 can be selected to view more information regarding the corresponding process(es). FIG. 20 illustrates an example interface 2000 that provides further information 2002 regarding the web-tier process(es) selected in FIG. 19. Each identified process 2004 can be selected via the network virtualization manager 1102 and the policy engine 260 as an application entity 302, as described above.

In some cases, the network virtualization manager 1102 can be implemented with respect to the VMs 114 of the computing platform provider 110. FIG. 21 shows an example implementation of the computing platform provider 110 as a host computer/computing platform 110. The example host computer 110 includes a plurality of data compute nodes (DCNs) or VMs 114 that communicate with the network virtualization manager 1102. The context engine 2110 of the example network virtualization manager 1102 is a composite representation of the context engine MP 240 and the context engine DP 250, along with one or more context-based service engine(s) 230 including, for example, a discovery and control engine 2120, the process control engine 526, the load balancer 530, a threat detector 2132, and a deep packet inspection (DPI) module 2135. The example host computer 110 also includes an attribute storage 2145 and a service-rule storage 2140 associated with the network virtualization manager 1102.

The DCNs/VMs 114 are endpoint machines that execute on the host computer 110. The DCNs can be implemented as VMs 114, as containers 114a, and/or as a combination of VMs 114 and containers 114a. For ease of reference, the DCNs 114 are referred to herein as VMs 114; it is evident from the description, however, that the VMs 114 forming the DCNs in this disclosure can alternatively or additionally include containers 114a.

Each VM 114 includes a guest introspection (GI) agent 2150 that executes to collect contextual attributes for the context engine 2110. The context engine 2110 can collect contextual attributes from the GI agents 2150 of the VMs 114 in a variety of different ways. For example, the GI agent 2150 of a VM 114 registers hooks (e.g., callbacks) with one or more modules in the VM's operating system for new process events and new network connection events.
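
This hook/callback registration can be sketched as follows; the event names, the registry class, and the printed hand-off to the context engine are invented for illustration and merely stand in for the guest operating system modules with which the GI agent registers.

    from collections import defaultdict

    class GuestEventRegistry:
        """Stands in for the guest OS modules with which the GI agent registers hooks."""
        def __init__(self):
            self._hooks = defaultdict(list)

        def register(self, event_type, callback):
            self._hooks[event_type].append(callback)

        def fire(self, event_type, **attributes):
            for callback in self._hooks[event_type]:
                callback(attributes)

    registry = GuestEventRegistry()

    # The GI agent forwards contextual attributes to the context engine.
    def on_new_process(attrs):
        print("context engine <- process event:", attrs)

    def on_new_connection(attrs):
        print("context engine <- connection event:", attrs)

    registry.register("new_process", on_new_process)
    registry.register("new_network_connection", on_new_connection)

    registry.fire("new_process", pid=4242, name="httpd", user="web-admin")
    registry.fire("new_network_connection", pid=4242, dst="10.0.2.15", port=5432)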

Summary for “Application/context-based management of virtual networks using customizable workflows”


Policies and rules can be used to manage virtual networks that are associated with virtual machines. A network virtualization manager manages the infrastructure used by an executing application (e.g., executing via a VM). Virtual networks can be provisioned for applications that are deployed in a data center, and network layers, planes, and associated services can be configured to enable an application VM to execute in one or more layers. While previous implementations provisioned and configured network layers and associated services manually and separately, some examples provision and configure network layers and services through automated definition and discovery. This allows tiered applications to be correlated, information flow to be determined, and an application entity to be automatically identified within a specific network layer (e.g., the policy layer, the management/policy layer, etc.).

In certain examples, when starting up a cloud computing environment or adding resources to an already established cloud computing environment, data center operators struggle to offer cost-effective services while making the infrastructure resources (e.g., storage hardware, computing hardware, and networking hardware) work together to achieve pain-free installation/operation and to optimize the resources for improved performance. Customers must be familiar with the details and configurations of their hardware resources in order to establish workload domains that deliver cloud computing services. In some examples, workload domains are mapped to a management cluster deployment (e.g., a vSphere cluster from VMware, Inc.) in a single rack deployment for ease of use and understanding. As racks are added, cross-rack clusters become an option, and additional deployment options and management cluster capabilities allow for more complex workload domain configurations. Examples disclosed herein facilitate management of such workload domains and configurations.

A management cluster is a collection of physical machines and virtual machines (VMs) that hosts the core cloud infrastructure components necessary to manage a software-defined data center (SDDC) in a cloud computing environment that supports customer services. Cloud computing provides ubiquitous, convenient, on-demand access to a shared pool of configurable computing resources, and cloud computing customers can request allocations of such resources to support the services they require. For example, when a customer requests to run a service in the cloud computing environment, one or more workload domains may be created using resources from the shared pool.

Virtual networks operate with virtual machines in an SDDC and/or other cloud computing environments. Virtual networks can be managed (e.g., using NSX from VMware, Inc.) via policies and rules. Applications use networks and other infrastructure, and virtual networks can be provisioned for the deployment of such applications in the SDDC.

Manually configuring Open Systems Interconnection (OSI) network layers (e.g., Layer 1 (L1), Layer 2 (L2), Layer 3 (L3), etc.) and their associated services, such as distributed firewall (DFW), load balancer (LB), etc., is difficult and time-consuming, and the application VM must then be placed on the L2/L3 network. Some examples simplify and improve network configuration and application placement by defining applications at the management/policy layer. Several examples herein define an application entity within the policy/management layer. An application entity is a logically manageable entity that includes a group of VMs on which the application executes.
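
As a rough illustration of this concept, an application entity can be modeled as a named, policy-bearing group of VMs; the class and method names below are assumptions of the sketch rather than the disclosed implementation.

    from dataclasses import dataclass, field

    @dataclass
    class ApplicationEntity:
        """Logically manageable policy-layer object: a named group of VMs."""
        name: str
        vms: list = field(default_factory=list)
        policies: list = field(default_factory=list)

        def attach_policy(self, policy_name):
            # Policies attach to the entity, not to individual VM addresses.
            self.policies.append(policy_name)

    store = ApplicationEntity("online-store", vms=["vm-web-1", "vm-app-1", "vm-db-1"])
    store.attach_policy("allow-web-to-app-8443")
    print(store)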

Some examples create logical overlay networks so that two VMs located at different places in a data center, and possibly across multiple data centers, believe they are connected to the same physical network by a single switch. A network tunnel is created between the hosts on which the two VMs reside. A packet sent by the first VM to the second VM, together with its L2 header, is encapsulated in an L3 header addressed to the second host, followed by another L2 header for the first hop toward that host. The destination host then decapsulates the packet and provides the original packet to the second VM. A central controller cluster orchestrates the encapsulation, decapsulation, and exchange: it knows the location of each VM and translates logical switch configurations into physical switch configurations to program the physical forwarding plane with instructions to encapsulate and forward the packet according to the translations. A management server receives configuration input from users, such as the logical network configuration, and communicates this information to the controller cluster via APIs. Higher-level constructs, such as logical L3 routers, are also handled by the controller cluster and are distributed among the hosts with VMs connected to the logical router. Each logical router can perform functions similar to a physical router, including network address translation (NAT), source NAT (SNAT), and access control lists (ACLs). Firewalls, load balancers, etc., can likewise be implemented, and firewall rules can be assigned to each port based on configuration. Some policy rules can be converted into firewall rules using context information, and firewall rules can be used for access control, permission regulation, and other purposes.
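
The encapsulation sequence described above can be sketched schematically; the dictionary-based "headers," the location tables, and the function names below are illustrative assumptions, not a real tunneling protocol implementation.

    # Schematic overlay encapsulation: the inner L2 frame is wrapped in an
    # outer L3 header addressed to the destination host, plus an outer L2
    # header for the first hop (assumed representation; not a real VXLAN codec).

    vm_locations = {"vm-a": "host-1", "vm-b": "host-2"}   # controller's view
    host_ips = {"host-1": "192.168.0.1", "host-2": "192.168.0.2"}

    def encapsulate(inner_frame, dst_vm, next_hop_mac):
        dst_host = vm_locations[dst_vm]
        return {
            "outer_l2": {"dst_mac": next_hop_mac},
            "outer_l3": {"dst_ip": host_ips[dst_host]},
            "payload": inner_frame,                      # original L2 frame
        }

    def decapsulate(packet):
        return packet["payload"]

    frame = {"src_mac": "aa:aa", "dst_mac": "bb:bb", "data": b"hello"}
    wire_packet = encapsulate(frame, "vm-b", next_hop_mac="router-mac")
    assert decapsulate(wire_packet) == frame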

Availability refers to the level of redundancy required to provide continuous operation of a workload domain. Performance refers to the central processing unit (CPU) operating speed, memory (e.g., gigabytes (GB) of random access memory (RAM)), mass storage (e.g., GB of hard disk drive (HDD) or solid state drive (SSD) storage), and power capabilities of a workload domain. Capacity refers to the aggregate resources (e.g., aggregate storage, aggregate CPU, etc.) across all servers associated with a cluster and/or a workload domain. In examples disclosed herein, a user may select requirements for redundancy, CPU operating speed, memory, storage, security, and/or power. For example, more resources are required to support a workload domain when the user selects higher requirements (e.g., higher redundancy, CPU speed, memory, storage, security, and/or power options require more resources than lower ones).

Example Virtualization Environments

Many different types of virtualization environments exist. Three example types of virtualization environments are full virtualization, paravirtualization, and operating system virtualization.

Full virtualization, as used herein, is a virtualization environment in which hardware resources are managed by a hypervisor (e.g., a virtual machine monitor (VMM) and/or other software and hardware that creates and executes virtual machines) to provide virtual hardware resources to a virtual machine. The computing device on which the hypervisor runs is called the host machine or host computer, and each virtual machine that runs on the host machine is called a guest machine. The hypervisor manages guest operating system execution and provides guest operating systems with a virtual operating platform. In certain cases, multiple operating system instances may share the virtualized hardware resources of the host computer.

Operating system virtualization is also referred to as container virtualization. As used herein, operating system virtualization refers to a system in which processes are isolated within an operating system. In a typical operating system virtualization system, a host operating system is installed on the server hardware; alternatively, the host operating system may be installed in a virtual machine of a full virtualization environment or a paravirtualization environment. The host operating system of an operating system virtualization environment is configured to provide isolation and resource management for processes that execute within the host operating system, including applications that run on the host operating system. The isolation of a process is known as a container, and multiple containers can share a single host operating system. A process executing within a container is thus isolated from other processes executing on the host operating system. Operating system virtualization therefore provides isolation and resource management without the overhead of a full virtualization environment. Example operating system virtualization environments include Linux Containers (LXC and LXD), Docker, OpenVZ, and others.

In some cases, a data center (or a pool of linked data centers) may include multiple different virtualization environments. For example, a data center may include hardware resources that are managed by a full virtualization environment, a paravirtualization environment, and/or an operating system virtualization environment. In such a data center, a workload may be deployed to any of the virtualization environments.

FIG. 1 depicts an example system 100 for managing a computing platform provider 110. As described below, the example system 100 includes an application director 106 and a manager 138 to manage the computing platform provider 110. The example system 100 facilitates management of the provider 110 but does not include the provider 110 itself; alternatively, the system 100 could be included in the provider 110.

The computing platform provider 110 provisions virtual computing resources (e.g., virtual machines, or "VMs," 114) that may be accessed via the computing platform 110 by users (e.g., users associated with an administrator 116 and/or a developer 118) and/or other programs, software, devices, etc.

An example application 102 of FIG. 1, implemented via the computing platform provider 110, includes multiple VMs 114. The example VMs 114 of FIG. 1 provide different functions within the application 102 (e.g., services, portions of the application 102, etc.). One or more of the VMs 114 of the illustrated example may be customized by the administrator 116 and/or the developer 118 relative to a stock or out-of-the-box version of the services and/or application components. Also, the services executing on the example VMs 114 may have dependencies on other ones of the VMs 114.

As illustrated in FIG. 1, the computing platform provider 110 may provide multiple deployment environments 112, for example, for development, testing, staging, and/or production. The computing platform provider 110 may provide services to the administrator 116, the developer 118, and/or other programs via, for example, a REST API or another client-server communication protocol. One example implementation of a REST API for cloud and/or other computing services is a vCloud Administrator Center (vCAC) and/or VMware vRealize Automation (vRA) API available from VMware, Inc. The computing platform provider 110 provisions virtual computing resources (e.g., the VMs 114) to provide the deployment environments 112 in which the administrator 116 and/or the developer 118 can deploy multi-tier applications. One particular example of a deployment environment that may be used to implement the deployment environments 112 of FIG. 1 is the vCloud DataCenter cloud computing service available from VMware, Inc.

In some cases, a lighter-weight virtualization is employed by using containers 114a in place of the VMs 114 in the development environment 112. Example containers 114a are software constructs that run on top of a host operating system without the need for a hypervisor. Unlike virtual machines, the containers 114a do not instantiate their own operating systems. Like virtual machines, however, the containers 114a are logically distinct from one another, and numerous containers can run on a single computer, processor system, and/or in the same development environment 112. Also like virtual machines, the containers 114a can execute instances of applications or programs on a single computer, processor system, and/or in the same development environment 112.

The example application director 106 of FIG. 1, which may be running in one or more VMs, orchestrates deployment of multi-tier applications onto one of the example deployment environments 112.

The example topology generator 120 generates a basic blueprint 126 that specifies a logical topology of an application to be deployed. The basic blueprint 126 generally captures the structure of the application as a collection of components executing on virtual computing resources. For example, the basic blueprint 126 generated by the topology generator 120 for an online store application may specify a web application (e.g., in the form of a Java web application archive, or "WAR," file comprising dynamic web pages, static web pages, Java servlets, Java classes, and/or other properties, configuration, and/or resource files that make up a Java web application) executing on an Apache Tomcat application server that uses a MongoDB data store. As used herein, the term "application" generally refers to a logical deployment unit comprised of one or more application packages, their dependent middleware, and/or operating systems. Applications may be distributed across multiple VMs; thus, in the example above, the term "application" refers to the entire online store application, including the application server and the database components. In some instances, the application may include the underlying hardware and/or virtual computing hardware used to implement the components.

The example basic blueprint 126 of FIG. 1 may be assembled from items (e.g., templates) from a catalog 130, which is a listing of available virtual computing resources (e.g., VMs, networking, storage, etc.) that may be provisioned by the computing platform provider 110 and of available application components (e.g., software services, scripts, code components, application-specific packages) that may be installed on the provisioned virtual computing resources. An administrator 116 (e.g., an information technology (IT) or system administrator) may pre-populate and/or customize the example catalog 130 by entering specifications, configurations, properties, and/or other details about the items in the catalog 130. Based on the application, the example blueprint 126 may also indicate the order in which application components are installed; for example, since a load balancer usually cannot be configured until a web application is up and running, the developer 118 may specify a dependency from an Apache service to an application code package.
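
This dependency-driven installation ordering resembles a topological sort, as the following sketch illustrates; the component names extend the load-balancer/Apache example above, and the graph representation is an assumption of the sketch.

    from graphlib import TopologicalSorter  # Python 3.9+

    # Edges point from a component to the components it depends on,
    # e.g., the load balancer depends on the running web application.
    dependencies = {
        "load_balancer": {"web_app"},
        "web_app": {"apache_tomcat", "app_code_package"},
        "apache_tomcat": set(),
        "app_code_package": set(),
    }

    install_order = list(TopologicalSorter(dependencies).static_order())
    print(install_order)
    # e.g., ['apache_tomcat', 'app_code_package', 'web_app', 'load_balancer']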

The example deployment plan generator 122 of FIG. 1 generates a deployment plan 128 based on the basic blueprint 126 that includes deployment settings for the basic blueprint 126 (e.g., cluster size, CPU, memory, networks, etc.) and an execution plan of tasks specifying the order in which virtual computing resources are provisioned and application components are installed, configured, and started. The example deployment plan 128 of FIG. 1 provides an IT administrator with a process-oriented view of the basic blueprint 126 that indicates the discrete actions to be performed to deploy the application. Different deployment plans 128 may be generated from a single basic blueprint 126 to test prototypes and to scale deployments up or down. Each deployment plan 128 may be separated and distributed as local deployment plans having a series of tasks to be executed by the VMs 114 provisioned from the deployment environment 112. Each VM 114 coordinates execution of each task with a centralized deployment module (e.g., the deployment director 124) to ensure that tasks are executed in an order that complies with the dependencies specified in the application blueprint 126.

The example deployment director 124 of FIG. 1 executes the deployment plan 128 by communicating with the computing platform provider 110 via the interface 132 to provision and configure the VMs 114 in the deployment environment 112. The example interface 132 of FIG. 1 provides a communication abstraction layer by which the application director 106 may communicate with a heterogeneous mixture of providers 110 and deployment environments 112. The deployment director 124 provides each VM 114 with a series of tasks specific to the receiving VM 114 (herein referred to as a "local deployment plan"). The VMs 114 execute the tasks to install, configure, and/or start one or more application components. For example, a task may be a script that, when executed by a VM 114, causes the VM 114 to retrieve and install particular software packages from a central package repository 134. The deployment director 124 coordinates the tasks with the VMs 114 to observe installation dependencies between the VMs 114 according to the deployment plan 128. After the application has been deployed, the application director 106 may be utilized to monitor, modify, and/or scale the deployment.

The example manager 138 of FIG. 1 interfaces with components of the system 100 (e.g., the application director 106 and the provider 110) to facilitate management of the resources of the provider 110. The example manager 138 includes a blueprint manager 140 to facilitate creation and management of multi-machine blueprints and a resource manager 144 to reclaim unused cloud resources. The manager 138 may additionally include other components for managing a cloud environment.

The blueprint manager 140 of the illustrated example manages the creation and deployment of multi-machine blueprints as one unit. For example, a multi-machine blueprint may include definitions for multiple basic blueprints that make up a service, such as an e-commerce provider that includes web servers, application servers, and database servers. A basic blueprint is a definition of policies (e.g., hardware policies, security policies, network policies, etc.) for a single machine (e.g., a single virtual machine, such as a web server, or a container). Accordingly, the blueprint manager 140 facilitates management of multiple virtual machines and/or containers relative to manually managing (e.g., deploying) basic blueprints individually.

The example blueprint manager 140 of FIG. 1 additionally annotates basic blueprints and/or multi-machine blueprints to control how workflows associated with the blueprints are executed. As used herein, a workflow is a series of actions and decisions to be executed in a virtual computing platform. The example system 100 includes first and second distributed execution manager(s) (DEM(s)) 146A and 146B to execute workflows. According to the illustrated example, the first DEM 146A includes a first set of characteristics and is physically located at a first location 148A, while the second DEM 146B includes a second set of characteristics and is physically located at a second location 148B. The location and characteristics of a DEM may make that DEM more suitable for performing certain workflows; for example, a DEM may include hardware particularly suited for certain tasks (e.g., high-end calculations) and may be located in a region that satisfies local laws. The example blueprint manager 140 matches basic blueprints and/or multi-machine blueprints to DEMs having corresponding capabilities.

The resource manager 144 of the illustrated example facilitates recovery of computing resources of the provider 110 that are no longer being used. Automated reclamation may include identification, verification, and/or reclamation of unused or underutilized resources to improve the efficiency of the running cloud infrastructure.

Network Virtualization Examples

Software-defined networking (SDN) allows computer networks to be programmatically initialized, controlled, changed, and managed dynamically via open interfaces and abstraction of lower-level functionality. SDN, or network virtualization, addresses the fact that the static architecture of traditional networks does not support the dynamic, scalable computing and storage needs of more modern computing environments such as data centers. A network can be divided into a plurality of planes (e.g., control plane, data plane, management/policy plane) to decouple the system(s) that decide where traffic is sent (e.g., an SDN controller, the control plane, etc.) from the underlying system(s) that forward traffic (e.g., the data plane, etc.).

In networking, a plane is an architectural component or area of operation of the network. Each plane accommodates a different type of data traffic and runs independently on top of the network hardware infrastructure. The data plane (sometimes referred to as the forwarding plane, user plane, carrier plane, or bearer plane) carries network user traffic. The control plane carries signaling traffic, such as control packets that originate from or are destined for a router. The management or policy plane, which carries administrative data traffic, is considered a subset of the control plane.

In conventional networking, the three planes are implemented in the firmware of routers and switches. SDN decouples the data and control planes and implements the control plane in software rather than in hardware. Software implementation enables programmatic access and, as a result, greater flexibility in network administration. For example, network traffic can be controlled from a centralized control console without the need to adjust individual network switches, and switch rules can be dynamically modified to prioritize, de-prioritize, or block certain packet types.

Each network plane is associated with one or more data transfer/communication protocols. For example, management plane protocols, such as Command Line Interface (CLI), Network Configuration Protocol (NETCONF), and Representational State Transfer (RESTful) application programming interfaces (APIs), are used to configure interfaces, subnets, routing protocols, Internet Protocol (IP) addresses, etc. In certain cases, a router runs control plane routing protocols, such as OSPF, EIGRP, BGP, etc., to discover adjacent devices and network topology information. The router inserts the results of these control-plane protocols into tables such as a Routing Information Base (RIB) and a Forwarding Information Base (FIB). Data plane software and/or hardware (e.g., field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc.) uses the FIB structures to forward data traffic over the network. Management/policy plane protocols, such as Simple Network Management Protocol (SNMP), can be used to monitor device operation, device performance, interface counter(s), etc.
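
As a toy illustration of how a FIB is consulted on the data plane, the following sketch performs a longest-prefix-match lookup; the table contents are invented, and real data planes use specialized structures (e.g., tries, TCAM) rather than a linear scan.

    import ipaddress

    # Toy FIB: prefix -> next-hop interface.
    fib = {
        ipaddress.ip_network("10.0.0.0/8"): "eth1",
        ipaddress.ip_network("10.1.0.0/16"): "eth2",
        ipaddress.ip_network("0.0.0.0/0"): "eth0",   # default route
    }

    def lookup(dst_ip):
        """Longest-prefix match over the FIB."""
        addr = ipaddress.ip_address(dst_ip)
        matches = [net for net in fib if addr in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return fib[best]

    print(lookup("10.1.2.3"))   # -> "eth2" (the /16 beats the /8 and the default)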

A network virtualization platform decouples the hardware planes from the software planes such that the host hardware can be administratively programmed to allocate its resources to the software planes. Such programming allows for virtualization of CPU resources, memory, data storage, network input/output (I/O) interfaces, and/or other network hardware resources. Virtualization of hardware resources allows implementation of multiple virtual network applications, such as firewalls, routers, Web filters, intrusion detection systems, etc., within a single hardware appliance. Thus, virtual or logical networks can be created on top of an existing physical network, and the virtual networks can have the same properties as the underlying physical network.

In a network virtualization environment, applications are interconnected by a virtual switch, rather than a physical, hardware-based network switch. Virtual switches are software-based "switches" that involve movement of packets up and down a software stack that relies on the same processor(s) being used for the rest of the workload. A virtual switch (also referred to as a soft switch or vSwitch) can be installed on every server in a virtual network, and packets can be encapsulated across multiple vSwitches that forward data packets in a network layer on top of the physical network as directed by a controller. The controller communicates with the vSwitch using a protocol such as OpenFlow.

Similar to a virtual machine, which is a software construct that presents logical compute resources, a virtualized network presents logical network components (e.g., logical switches, routers, firewalls, load balancers, virtual private networks (VPNs), etc.) to connected workloads. Virtual networks are created, provisioned, and managed programmatically, with the underlying physical network serving as a simple packet-forwarding backplane for data traffic on the virtual network. Network and security services are allocated to each VM according to its needs and stay attached to the VM as the VM moves among hosts in the dynamic virtualized environment. A network virtualization platform (e.g., VMware's NSX or another similar platform) can be deployed on top of existing network hardware and supports fabrics and geometries from multiple vendors. In some examples, applications and monitoring tools work seamlessly with the network virtualization platform without modification.

In some cases, the virtual network creates an address space in which logical networks can be overlaid on physical networks. For example, a virtual L2 network can be created even though the underlying physical network is an L3 network; conversely, an L3 virtual network can be created over an L2 physical network. When a data packet leaves a VM, the packet is mapped via a lookup onto the physical network for transport and is mapped back via a lookup to the virtual network at its destination. The virtual network is thereby decoupled from the physical network, creating a layer of abstraction between the end systems and the network infrastructure and allowing logical networks to be created independent of the network hardware.

As an example, two VMs can be located at arbitrary locations in a data center (and/or across multiple data centers, etc.) and connected via a logical overlay network such that the VMs believe they are connected to the same physical network by a single switch. A network tunnel is created between the host computers on which the two VMs reside. A packet sent by the first VM to the second VM, together with its L2 header, is encapsulated in an L3 header addressed to the second host, and a second L2 header is generated for the first hop toward the destination host. The destination host then decapsulates the packet and provides the original packet to the second VM. A central controller cluster orchestrates the routing from the first VM to the second VM: it knows the location of each VM and translates logical switch configurations into physical switch configurations to program the physical forwarding plane with instructions to encapsulate and forward the packet according to the translation(s). A management server receives configuration input from the user, such as the logical network configuration, and communicates the input to the controller cluster via one or more APIs.

The controller cluster also handles higher-level constructs, such as logical L3 routers, which are distributed across the hosts that have VMs connected to the logical router. Each logical router can provide capabilities of a physical router, such as network address translation (NAT), source NAT (SNAT), access control lists (ACLs), and others. Distributed firewalls, load balancers, etc., can likewise be implemented by the controller cluster, and a configuration can specify which firewall rules are to be applied to each port.

Certain examples provide a novel architecture for capturing contextual attributes on host computers that execute one or more virtual machines, and for consuming the captured contextual attributes to perform services on the host computers. Certain examples execute a guest introspection (GI) agent on each machine for which contextual attributes are to be captured. In addition to executing one or more VMs on each host computer, certain examples also execute a context engine and one or more attribute-based service engines on each host computer. Through the GI agents of the VMs on a host, the context engine of that host collects contextual attributes and provides them to the service engines, which use the contextual attributes to identify service rules specifying context-based services to be performed on processes executing on the VMs and/or on data message flows sent to or received by the VMs.

Data messages, as used herein, refer to collections of bits in particular formats sent across a network, encompassing various formats such as Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. Also as used herein, references to L2, L3, L4, and L7 layers (or layer 2, layer 3, layer 4, and layer 7) refer respectively to the second data link layer, the third network layer, the fourth transport layer, and the seventh application layer of the OSI (Open System Interconnection) model.

Network Plane System and Workflow Examples

Networks, including virtual networks, can be divided into multiple planes or layers. FIG. 2 shows an example network structure 200 including a data plane 210, a control plane 220, and a management/policy plane 230. In the example of FIG. 2, the data plane 210 facilitates data packet forwarding and switching (e.g., according to a forwarding table, etc.) as well as network address translation, neighbor addresses, netflow accounting, access control list (ACL) logging, error signaling, etc. The control plane 220 handles routing, including static routes, neighbor information, IP routing table(s), link state, etc. Protocols executing in the control plane 220 facilitate routing, interface state management, connectivity management, adjacent device discovery, topology/reachability information exchange, service provisioning, etc. The management/policy plane 230 facilitates network configuration and interfacing, including via a command line interface (CLI), a graphical user interface (GUI), etc.

While the control plane 220 and the data plane 210 accommodate networking constructs such as switches, routers, and ports, these planes 220, 210 do not understand compute constructs such as applications. Certain examples create application entities in the management plane 230, and certain examples automatically tie applications to network behavior rather than requiring that this be done manually.

In some cases, the system 100 can be used to identify and manage applications and/or other resources at the policy layer. Certain examples enable creation of an application, executing in a virtualized network environment, within the policy layer. Using certain examples, an application entity can be defined in the policy layer. The application entity is a logically manageable entity that includes a group of VMs 114 on which the application executes.

In certain examples, a multi-tier application (e.g., a three-tier application, an n-tier application, etc.) is divided into one group of VMs 114 per application tier. A three-tier application (e.g., presentation tier, business logic tier, and data tier) has three VM 114 groups: one for Web presentation, one for application logic, and one for the data store.

Certain examples facilitate discovery, via a context engine, of users logging in at the VMs 114 and of associated applications executing data and command flows. The context engine detects individual processes executing in the VMs 114 and/or users logging in to the VMs 114, and the policy layer can link the process(es), user(s), and other information together to create a tiered application. Flow information for a user/application can be identified along with the flow information of another user/application connected to that flow.
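
A simple sketch of mapping discovered per-VM process context onto tier groups follows; the process-to-tier table and the discovered flows are invented for illustration.

    # Illustrative tier classification for discovered processes; the
    # process-to-tier mapping is an assumption made for the sketch.
    TIER_BY_PROCESS = {"httpd": "web", "nginx": "web", "tomcat": "app", "mysqld": "db"}

    discovered = [("vm-1", "httpd"), ("vm-2", "tomcat"), ("vm-3", "mysqld")]

    groups = {"web": [], "app": [], "db": []}
    for vm, process in discovered:
        groups[TIER_BY_PROCESS[process]].append(vm)

    print(groups)  # {'web': ['vm-1'], 'app': ['vm-2'], 'db': ['vm-3']}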

As described above, network virtualization functionality can be positioned in different planes of the network structure 200. For example, the context engine can be implemented in the management plane 230, the data plane 210, and/or the control plane 220.

In some cases, as shown in FIG. 2, the context engine is implemented in both the management plane 230 and the data plane 210, in two parts: a context engine management plane (MP) component 240 and a context engine data plane (DP) component 250. Together, the context engine MP 240 and the context engine DP 250 perform the functions of the context engine.

In some cases, a policy engine (also referred to as a policy manager) and/or other operations and management component(s) can also be implemented in the management plane 230, the data plane 210, and/or the control plane 220. The policy engine 260 creates, stores, and/or distributes policy(-ies) to applications running on virtual networks and VM(s) 114, for example.

In some cases, in addition to creating rules based on VM 114 addresses, the policy engine 260 collects context information regarding the guest VMs 114, and the network virtualization manager defines policies based on the captured context. Rules can then be created from policies based on application and/or user. The policy engine 260 can thus create application-based rules, and the policy engine 260 can also define application entities in the management/policy plane 230 based on the applications running on the host 110.

In some cases, the management/policy plane 230 can be subdivided into a management plane 230 and a policy plane 235 (see, e.g., FIG. 3). In the example of FIG. 3, the policy engine 260 is located in the policy plane 235, while the context engine MP 240 remains in the management plane 230. As shown in the example of FIG. 3, an application entity 302 can be defined in the policy plane 235 based on context information from processes executing in the VMs 114, extracted by the context engine MP 240, and on policies from the policy engine 260.

As illustrated in FIG. 4, the context engine MP 240 pulls context from multiple sources in the management plane 230, including multi-cloud compute context 402, virtual center (VC) compute context 404, identity context 406, mobility context 408, endpoint context 410, and network context 412. In some cases, the compute context includes multi-cloud context 402 from various cloud vendors such as Amazon, Azure, etc., as well as virtual center inventory (e.g., CPU, memory, operating system, etc.). The identity context 406 can include user and group information from directory services such as Active Directory (AD), Lightweight Directory Access Protocol (LDAP), Keystone, or other identity management. The mobility or mobile context 408 can include location, AirWatch mobile device management, international mobile equipment identity (IMEI) code, etc. In certain examples, the endpoint context 410 includes DNS, process, application inventory, etc., and the network context 412 includes IP address, port, MAC address, bandwidth, quality of service (QoS), latency, congestion, etc. The context gathered in the management plane 230 can then be used to group objects for use by policies and rules. For example, the context engine MP 240 can store context information 414 and generate one or more outputs, including grouping objects 416, analytics 418, and policy 420.
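
The aggregation of contexts from multiple sources into a grouping output can be sketched as follows; the source names mirror FIG. 4, while the dictionary structure and the grouping example are assumptions of the sketch.

    # Sketch: merge context from several management-plane sources and
    # emit a grouping object (structure assumed for illustration).
    sources = {
        "identity": {"user": "alice", "group": "engineering"},
        "endpoint": {"process": "tomcat", "app_inventory": ["tomcat", "openssl"]},
        "network": {"ip": "10.0.1.5", "port": 8443},
    }

    def merge_context(sources):
        merged = {}
        for name, ctx in sources.items():
            for key, value in ctx.items():
                merged[f"{name}.{key}"] = value
        return merged

    context = merge_context(sources)
    # Grouping output: e.g., group endpoints by directory-service group.
    grouping_object = {"group-engineering": [context["network.ip"]]}
    print(context["identity.user"], grouping_object)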

A second component of the context engine can be instantiated in the data plane 210 as the context engine DP 250. The context engine DP 250 collects context from a thin client agent in the data plane 210. Data plane context can include user context, process context, application inventory, and system context. The user context includes user ID, group ID, etc.; the process context includes process name, path, libraries, etc.; the application inventory includes company name, product name, installation path, etc.; and the system context can include operating system information, hardware information, and network configuration. In some cases, the user and process context is gathered from the guests on a per-flow basis, while the application inventory and system context are gathered on a per-VM 114 basis. This context information can be referred to as realized or runtime context.

Context-based services can satisfy many use cases, and these services can be implemented as plugins to the context engine DP 250. Example services include application visibility, identity-based firewall (IDFW), application firewall, packet capture, process control, vulnerability scan, and load balancer (LB). The application visibility plugin, for example, gathers context for each flow and stores it in the management plane, where a user interface uses the information to visualize flows between the VMs 114 as well as the processes within the VMs 114.

FIG. 5 shows how the context engine MP 240 communicates with context-based services MP 502 in the management plane 230, and how the context engine DP 250 communicates with context-based service plugins 504 (e.g., in a hypervisor 510 in the data plane 210). The context engine 2110 of FIG. 21, for example, can be instantiated in the management plane 230 as the context engine MP 240 and in the data plane 210 as the context engine DP 250. Similarly, the context-based service engine(s) 230 can be instantiated in the management plane 230 as the context-based services MP 502 and in the data plane 210 as the context-based service plugins 504.

In the example context-based services MP 502, a management plane analytics (MPA) message bus 506 facilitates communication with respective services, such as application visibility 508, IDFW 510, application firewall 512, process control 514, vulnerability scan 516, and LB 518. Each service 508-518 is associated with a corresponding plugin 520-530 (e.g., application visibility 520, IDFW 522, application firewall 524, process control 526, vulnerability scan 528, and LB 530), accessible via the message bus 506 from an MPA library 532.

In some cases, the application visibility service 508 uses the application visibility plugin 520 to collect process context per flow and store it in the management plane 230. This information is used by a user interface to visualize flows between the VMs 114 as well as the processes within the VMs 114.

In some cases, the IDFW service 510 is used in conjunction with the IDFW plugin 522 to gather user context per connection flow. This information is passed to the DFW to create an identity-based firewall. Similarly, the application firewall service 512 works in conjunction with the application firewall plugin 524 to gather process/application context per connection flow, and this information is used to program the DFW to create an application-based firewall.

In some cases, the process control service 514 leverages the process control plugin 526 to start, stop, pause, resume, and/or terminate processes executing on the VM(s) 114 via the network based on the user and/or application context. The LB service 518 uses the LB plugin 530 to load-balance traffic based on policies.

Using the plugin model shown in FIG. 5, additional context-based services can easily be added. Each context service is composed of an MP component that configures the corresponding data plane 210 service component and gathers information from the DP component to be stored in the management plane 230. The communication bus that links the services 508-518 with the management plane 230 can leverage MPA, remote procedure calls (RPC), etc. The context engine DP 250 can cache realized context, for example.

As shown in FIG. 5, the context engine DP 250 includes a context engine core 534 that operates with the context-based plugins 504 and caches realized context in a realized context cache 536. The context engine core 534 works in conjunction with an Endpoint Security (EPSec) library 538 and communicates via a multiplexer 540 with one or more VMs 114. Each VM 114 generates guest context and contributes that context through a thin agent 542-548.

In some cases, context information is stored in the cache 536, and a unique identifier (e.g., an ID, a token, etc.) is generated once the context has been stored. An identifier can be created for each stored context and given to the respective services 508-518 instead of passing the entire context. This optimizes or improves context passing between context services and verticals such as DFW, DLB, etc. The identifier (e.g., token, etc.) can also be passed in a packet header (e.g., VxLAN/Geneve) across one or several packets and can be used across hypervisors 510.
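
This identifier-instead-of-full-context optimization can be sketched as a small cache keyed by generated tokens; the token format and the put/get interface are assumptions made for illustration.

    import uuid

    class RealizedContextCache:
        """Stores realized context once; services pass around a short token."""
        def __init__(self):
            self._by_token = {}

        def put(self, context):
            token = uuid.uuid4().hex[:8]       # compact ID fits in a packet header
            self._by_token[token] = context
            return token

        def get(self, token):
            return self._by_token[token]

    cache = RealizedContextCache()
    token = cache.put({"user": "alice", "process": "tomcat", "flow": "10.0.1.5:8443"})
    # A service (e.g., DFW) resolves the token instead of receiving the full context.
    print(token, "->", cache.get(token))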

The policy engine 260 interacts with the context engine MP 240 in the management/policy plane 230, while the context engine DP 250 leverages the plugins 504 and VM 114 guest context in the data plane 210. Based on context information extracted via the context engines 240, 250, the policy engine 260 implements application entities in the management plane 230.

In some cases, the context engine (and its components, the context engine MP 240 and the context engine DP 250) leverages information from guest introspection (GI) and the context-based services 502, 504 to determine applications, users, and/or other processes that are operating in context on the system 100 and its VMs 114. GI, for example, provides a set of elements, including APIs for endpoint security, that allows offloading of anti-malware and/or other processing to an agent at the hypervisor 510 level. The services 502, 504 can use the context information to enable one or more user workflows, such as virtual machine, network, and/or application entity creation, modification, and control.

In some cases, the network virtualization manager provides the services 502, 504 as well as the ability to create virtual networks, such as an L2 network (e.g., for switching, etc.), an L3 network (e.g., for routing, etc.), etc. The manager provides IDFW 510 and application firewall 512 services that allow users to create rules to allow or prevent traffic flowing from one source to another (e.g., according to user identity, IP address, etc.). The LB 518 allows load balancing based on who is logged in, which application is responsible for the data/message traffic, etc.

In the hypervisor 510, each data/message flow is collected by the context engines 240, 250. When a user/application attempts to connect to a server, it sends data packets, and a guest agent added to the associated VM 114 allows the flow to be monitored. The identity of the user can be determined, along with other hidden information and properties that are not otherwise available on the network (e.g., which group the application belongs to, which process(es) are involved, etc.). The context engines 240, 250 gather the flow information and send it to the service(s) 502, 504 and/or the policy engine 260. The firewall 510, 512, for example, can use the collected flow context together with rules and other information to allow or deny a connection, flow, etc. In certain cases, an administrator or other operator can visualize the context information (e.g., which applications are currently running in the data center, etc.).
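
A minimal sketch of such a context-based allow/deny decision follows; the rule format, the wildcard convention, and the default-deny posture are assumptions of the sketch, not the disclosed firewall implementation.

    # Rules match on user and application context gathered per flow.
    rules = [
        {"user": "alice", "app": "tomcat", "dst_port": 5432, "action": "allow"},
        {"user": "*", "app": "*", "dst_port": 5432, "action": "deny"},
    ]

    def decide(flow_context):
        for rule in rules:
            if all(rule[k] in ("*", flow_context[k]) for k in ("user", "app", "dst_port")):
                return rule["action"]
        return "deny"   # default-deny posture (assumed)

    print(decide({"user": "alice", "app": "tomcat", "dst_port": 5432}))    # allow
    print(decide({"user": "mallory", "app": "curl", "dst_port": 5432}))    # deny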

FIG. 6 shows an example implementation of the policy engine 260, which includes a context input processor 602, a rules engine 604, an options data store 606, and a policy generator 608. The context input processor 602 receives context from the context engine MP 240, which can be used to identify, group, and/or permission an application process. The context input processor 602 processes the input context in accordance with one or more rules provided by the rules engine 604.

By applying the rule(s) to the context information, one or more policy options can be retrieved from the options data store 606 by the policy generator 608. The policy generator 608 uses the context information and the available options to create one or more policies. For example, one or more policies can be created to govern and/or instantiate an application entity in the policy layer 235 for an application executing on a VM 114 monitored by the context engine MP 240. One or more policies can be created to control execution of the application on one or more VMs 114, and one or more policies can be created to control the instantiation and use of the VMs 114 and/or virtual networks that host the application and the associated user/group.
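
For illustration, the flow from processed context plus stored options to an emitted policy can be sketched as follows; the option keys and the policy structure are invented for the example.

    # Sketch: the policy generator combines processed context with stored
    # options to emit a policy for an application entity (structure assumed).
    options_data_store = {
        "web-tier": {"allowed_ports": [80, 443], "load_balanced": True},
    }

    def generate_policy(context, options):
        opts = options[context["tier"]]
        return {
            "entity": context["app_entity"],
            "scope": context["vms"],
            "allow_ports": opts["allowed_ports"],
            "services": ["LB"] if opts["load_balanced"] else [],
        }

    ctx = {"app_entity": "online-store", "tier": "web-tier", "vms": ["vm-web-1"]}
    print(generate_policy(ctx, options_data_store))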

In certain cases, context information, policy(-ies), etc., can be visualized and provided to users. The policy engine 260 can include an interface generator 610 that provides a visual representation of policies, users, applications, connectivity, options, etc., for user review and modification. Using the resulting graphical user interface, an administrator can, for instance, create services and networks on top of policy entities in the policy layer 235.

A template generator 612 can save a certain configuration, including an application entity, network, and/or service, as a template. Apart from via the interface, a user or process can also trigger creation of a template based on context, settings, and/or other information from a current configuration of the system 100.

In some cases, a modeling metalanguage (e.g., a markup language, etc.) can be used by the policy generator 608 to create a policy model, including a policy tree structure and associated meta-information. Name, properties, metadata, and relationship information (e.g., between a source object and a destination object) can all be used to build the policy model. Relationships can be created and queried/addressed to provide information about how objects are related, and relationships can simplify access from policy to consumer or from policy to policy. A policy tree may include user-managed policy objects, realized-state policy objects, and so on. Some policy objects may not persist, while other policy objects may persist in the options data store 606. The policy engine 260 can connect to the network virtualization manager via a policy role, which allows the policy engine 260 to make API calls to obtain debugging information, status information, and connection information. The policy model allows allowed (whitelist) communication, denied (blacklist) communication, and other permissions to be defined.
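
A rough sketch of a policy tree with queryable relationships follows; the PolicyNode class and the relationship names are assumptions made for illustration rather than the modeling metalanguage itself.

    class PolicyNode:
        """Policy-tree object with name, properties, and queryable relationships."""
        def __init__(self, name, **properties):
            self.name = name
            self.properties = properties
            self.children = []
            self.relationships = []     # (relation, target) pairs

        def add_child(self, node):
            self.children.append(node)
            return node

        def relate(self, relation, target):
            self.relationships.append((relation, target))

        def related(self, relation):
            return [t for r, t in self.relationships if r == relation]

    root = PolicyNode("tenant-a")
    web = root.add_child(PolicyNode("web-policy", mode="whitelist"))
    app = root.add_child(PolicyNode("app-policy", mode="whitelist"))
    web.relate("provides-to", app)      # source/destination relationship
    print([n.name for n in web.related("provides-to")])   # ['app-policy']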

In some cases, endpoints (e.g., VMs 114, IP addresses, VxLANs, etc.) can be included in a group when they are logically connected to provide a particular service for a particular application. Members of a group receive the same policy, for example. An application group is a group of logically related applications.

A contract is a security policy that regulates communication between application groups: a set of rules that allow or deny a service (e.g., by port, protocol, and/or classifier). A group can provide multiple contracts and/or consume multiple contracts, for example; in some cases, both a consumer group and a provider group consume contracts. Each group can have a tag, and for a pair of provider and consumer groups to communicate, their tags must match. The provider and consumer tags can be set up to identify the source (e.g., provider) and destination (e.g., consumer) of a contract rule. If two groups provide and consume the same contract, communication is permitted between the groups with matching tags. In certain cases, tags are optional and are applied only if configured by the user.
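
The tag-matching behavior of contracts described above can be sketched as follows; the Group class, the rule format, and the matching logic are illustrative assumptions.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Group:
        name: str
        role: str                  # "provider" or "consumer"
        tag: Optional[str] = None  # tags are optional; applied only if configured

    def communication_allowed(provider, consumer, contract_rules):
        # When both sides carry tags, the tags must match (as described above).
        if provider.tag is not None and consumer.tag is not None:
            if provider.tag != consumer.tag:
                return False
        # Otherwise, the contract's allow/deny rules decide.
        return any(rule["action"] == "allow" for rule in contract_rules)

    contract = [{"service": "tcp/8443", "action": "allow"}]
    web = Group("web-group", "provider", tag="blue")
    app = Group("app-group", "consumer", tag="blue")
    other = Group("other-app", "consumer", tag="green")

    print(communication_allowed(web, app, contract))    # True: tags match
    print(communication_allowed(web, other, contract))  # False: tag mismatch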

FIG. 7 shows an example contract 700 with multiple providers and consumers. The policy layer 235 supports multiple tenants and provides an application-oriented view via a network management API. Tenants are kept separate so that policies affecting one tenant's applications do not affect another tenant's applications. As shown in the example of FIG. 7, a tenant 702 has a contract 704 as well as two applications 706, 708. Each application 706, 708 has a Web group 710, 712 and an app group 714, 716. Each group 710-716 is associated with a consumer 718-724 and provider 726-732 pair. Some of these consumers and/or providers are tagged 734-740 according to the contract 704.

Using the policy hierarchy shown in FIG. 7, a given application 706, 708 is controlled by the policies associated with the application and its groups, while other policies not related to the application 706, 708 have no effect. However, a policy consumed by multiple applications 706, 708 can affect the behavior of all of the consuming applications 706, 708. In certain cases, if policies conflict for a group 710-716 and/or an endpoint 718-732, explicit rules and/or implicit rules can be used to determine which policy prevails (e.g., allow takes precedence over deny, etc.).

In some cases, endpoints of the network virtualization environment managed by the policy engine 260 receive policies from both the infrastructure and the tenant. Infrastructure policies are generally defined by infrastructure administrators and can be applied to any endpoint, and infrastructure policies may have a higher priority than user/tenant-level policies. Rather than the application-centric view conveyed by a user/tenant-level policy, infrastructure policies may span multiple application(s)/tenant(s), for example.

FIG. 8 shows an example policy mapping 800 between an infrastructure space 804 and a user/tenant space 802. As shown in the example of FIG. 8, tenants 806-810 have corresponding applications 812-816. Each application has a web group 818-822 and an application group 824-828, and each group 818-828 includes a number of endpoints 830-840 within the user/tenant space 802.

“A subset 830, 832 and 836 endpoints is also connected with an infrastructure group 842 or 844. Each group 842-844 belongs to an infrastructure domain 846, or 848. The infrastructure domains 846-848 and 846-848 are respectively associated with an infrastructure tenant 845-848. FIG. 9 Individual endpoints 830-832, 836 and 838 consume policies from both the infrastructure space 804 and the user/tenant area 802. The application group 824, for example, consumes policies 860 with priority 862 from infrastructure space 804 as well as policies 802 from user/tenant area 802. This takes precedence over policies 860 from infrastructure space 804.

“FIG. 10 shows an example network policy 1000 that describes a network topology for an application. It also defines external connectivity for the app. FIG. FIG. 10 shows that a tenant 1002 can be associated with an application (s) 1004, L2 contextual 1006, and L3 context 1008, respectively. The L2 context 1006 represents a broadcast domain. You can map the context 1006 to a virtual wire or logical switch, or VLAN. A routing domain is the broadcast domain. L3 context 1008 is the representation of the routing domain. For example, the context 1008 can be mapped on a TIER1 router or an edge.

“A subnet 1012 can be configured for the L2 context 1006. The subnet 1012 can be either an external subnet (e.g., reachable via an external gateway 1010 within an external group 1022) or a local subnet (e.g., reachable only within its routing domain). An L2 connectivity relationship 1016 can be defined between a group 1014 and the L2 context 1006 to establish network connectivity between them. An L3 link relationship 1018 links the L2 context 1006 with the L3 context 1008.

External connectivity 1020 can be expressed by connecting the L3 context 1008 with the external gateway 1010. The external gateway 1010 is a pre-configured router (or equivalent edge in the virtual network) that provides external connectivity for the application(s) 1004. In some examples, the external gateway 1010 is not managed by policy, and, therefore, no policies can be applied to its services.

In certain cases, an isolated network can be created by creating the L2 context 1006 and assigning a subnet 1012 to the L2 context 1006. Support for the isolated network can be provided by L2 services such as DHCP and metadata proxy. A routed network can be created by linking the L2 context 1006 with the L3 context 1008, either when the L2 context 1006 is created or afterward. A subnet assigned to the routed network can be made routable from an external gateway 1010. In that case, the intent is expressed by marking the subnet 1012 as external and connecting the L3 context 1008 with the external gateway 1010. The subnet 1012 can then be advertised to the external gateway 1010, for example.
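The intent described in the preceding paragraphs (isolated network, routed network, external subnet) could be captured declaratively along the lines of the hypothetical sketch below; the class names, fields, and gateway identifier are illustrative, not the patent's data model:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Subnet:
    cidr: str
    external: bool = False       # external subnets are advertised to the gateway

@dataclass
class L3Context:                 # routing domain (e.g., tier-1 router or edge)
    name: str
    external_gateway: Optional[str] = None

@dataclass
class L2Context:                 # broadcast domain (logical switch, VLAN, etc.)
    name: str
    subnet: Optional[Subnet] = None
    l3_context: Optional[L3Context] = None

# Isolated network: an L2 context with a local subnet and no L3 link.
isolated = L2Context("app-isolated", subnet=Subnet("10.0.1.0/24"))

# Routed, externally reachable network: link the L2 context to an L3 context,
# mark the subnet external, and connect the L3 context to a pre-configured
# external gateway so the subnet can be advertised to it.
routed = L2Context("app-routed",
                   subnet=Subnet("10.0.2.0/24", external=True),
                   l3_context=L3Context("tier1-app", external_gateway="edge-gw"))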

“As illustrated in FIG. 10, an endpoint 1024 can be assigned a public/floating IP address to facilitate external connectivity (e.g., via SNAT rules). The endpoint 1024 can also facilitate port forwarding and/or translation (e.g., via DNAT rules, etc.).
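A minimal sketch of the two rule types just mentioned follows (again illustrative; the field names and addresses are placeholders, not the patent's schema):

# SNAT: the endpoint's private address is translated to a public/floating IP
# for outbound external connectivity.
snat_rule = {
    "type": "SNAT",
    "internal_ip": "10.0.2.10",
    "translated_ip": "203.0.113.5",   # public/floating IP for the endpoint
}

# DNAT: inbound traffic to the public IP/port is forwarded (port-translated)
# to the endpoint's service port.
dnat_rule = {
    "type": "DNAT",
    "match_ip": "203.0.113.5",
    "match_port": 8443,
    "translated_ip": "10.0.2.10",
    "translated_port": 443,
}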

“As mentioned above, the interface generator 110 can create an interface that allows an administrator or other user to see configuration and operation information for a group being monitored. FIG. 11 shows an example application monitoring interface 1100 for the network virtualization manager 1102, which summarizes applications running 1104, traffic generated 1106, and flows 1108 for associated VMs 114. FIG. 12 shows an additional interface 1202 that the network virtualization manager 1102 uses to collect data for a group (e.g., a security group, etc.).

“FIG. 13 shows an example interface 1300 that provides flow information 1302 for the network virtualization manager 1102, including a listing of VMs 114 as well as a visualization 1304 of flows. A VM entry 1306 can be selected from the list to visualize the corresponding flow 1304. Additional information about the flow 1304 and/or VM 114 operations can be viewed by selecting a VM representation 1308.

“FIG. 14 shows an example interface 1400 with a pop-up box 1410 that lists applications running on a VM 114 for the network virtualization manager 1102. The pop-up box 1410 can provide information about application name 1412, version 1414, user 1416, and status 1418. FIG. 15 shows another example interface 1500 with a pop-up box 1510 that visualizes an example application flow between endpoints, including application, port, protocol, and bandwidth. FIG. 16 shows another example interface 1600 showing application flows 1602 as well as associated traffic details 1604 for a particular flow 1606.

“FIG. 17 shows an example interface 1700 including an option 1702 to initiate application data collection; collected application data is displayed in an example window 1706. As shown in FIG. 18, a percentage or other indication of completion 1708 can be shown in the window 1706 after application data collection has been initiated using the option 1702. The option 1702 can also be used to cancel application data collection.

“FIG. 19 shows the example interface 1700 after application data collection is complete. Icons 1712-1720 represent the identified processes in the window 1706. An icon 1712-1720 can be selected to view more information about the corresponding process(es). FIG. 20 shows an example interface 2000 that provides further information 2002 regarding the web-tier process(es) selected in FIG. 19. Each identified process 2004 can be selected via the network virtualization manager 1102 and the policy engine 260 as an application entity 302, as described above.”

“In some cases, the network virtualization manager 1102 can be implemented using the VMs 114 via the computing platform provider 110. FIG. 21 shows an example implementation of the computing platform provider 110 as a host computer/computing platform 110. The host computer 110 includes a plurality of data compute nodes (DCNs) or VMs 114 that communicate with the network virtualization manager 1102. The context engine 2110 of the example network virtualization manager 1102 is a composite representation of the context engine MP 240 and the context engine DP 250 and operates along with one or more context-based service engine(s) 230, including, for example, a discovery engine 2120, a process control engine 526, a load balancer 530, a threat detector 2132, and a deep packet inspection (DPI) module 2135. The host computer 110 also includes an attribute storage 2145 and a service rule storage 2140 that are associated with the network virtualization manager 1102.

“The DCNs/VMs 114 are endpoint machines executing on the host computer 110. The DCNs can be implemented as VMs 114, as containers 114a, and/or as a combination of VMs 114 and containers 114a. The DCNs are referred to as VMs 114 herein for ease of reference. It is evident from the description, however, that the VMs 114 forming DCNs in this disclosure can alternatively or additionally include containers 114a.”

“Each VM 114 includes a guest introspection (GI) agent 2150 that executes to collect context attributes for the context engine 2110. The context engine 2110 can collect contextual attributes from the GI agents 2150 of the VMs 114 through a variety of different methods. For example, the GI agent 2150 of a VM 114 registers hooks (e.g., callbacks) with one or more modules in the VM's operating system for new process events and network connection events.
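The hook-registration pattern just described might look like the following hypothetical sketch; the on() registration API and the attribute fields are invented for illustration, since real guest agents rely on platform-specific OS mechanisms:

class GuestIntrospectionAgent:
    """Illustrative GI agent: registers OS callbacks and forwards attributes."""

    def __init__(self, context_engine):
        self.context_engine = context_engine

    def register_hooks(self, os_module):
        # Callbacks fire when the guest OS observes these events.
        os_module.on("process_created", self.on_process_event)
        os_module.on("connection_opened", self.on_connection_event)

    def on_process_event(self, process):
        # Forward process context (e.g., name, user) to the context engine.
        self.context_engine.publish({"event": "process",
                                     "name": process.name,
                                     "user": process.user})

    def on_connection_event(self, conn):
        # Forward the connection's five-tuple for application identification.
        self.context_engine.publish({"event": "connection",
                                     "five_tuple": conn.five_tuple})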
