Sudish Kumar Sah, Bhavana Ganji, Rajesh Joshi, Citrix Systems Inc

Abstract for “Systems and methods to redirect handling”

“The invention relates to a method of routing requests among a plurality of database servers. A device that acts as an intermediary between a client and the plurality of database servers receives a request to access a database maintained by the plurality of database servers. The plurality of database servers may include a first database server that processes write requests and one or more second database servers that process read requests. The device determines that the request to access the database is a read request and, responsive to that determination, identifies one of the second database servers and transmits the request to that second database server.

Background for “Systems and methods to redirect handling”

A client can request access to one or more databases served by a plurality of database servers. The plurality of database servers may include a primary database server that receives all client requests and one or more secondary database servers. Depending on the type of request, the primary database server may generate a redirect response instructing the client to resend the request to another server. Generating and transmitting these redirect responses can consume considerable resources of the primary database server.

“The present invention is directed to systems and methods of routing requests among a plurality of database servers. A device that acts as an intermediary between a client and the plurality of database servers receives a request to access a database, identifies the type of the request, and routes the request to an appropriate subset of the plurality. If the device determines that the request is a read request, it can send the request to one of a subset of secondary database servers of the plurality that are configured to process read requests. Which secondary database server receives the request may be determined by load balancing among the secondary database servers. If, however, the device determines that the request is not a read request, such as a write request, it can send the request to a subset of the plurality of database servers that are primary database servers.

“In one aspect, this disclosure relates to a method of routing requests among a plurality of database servers. A device that acts as an intermediary between a client and the plurality of database servers receives a request to access a database maintained by the plurality. The plurality of database servers may include a first database server that processes write requests and one or more second database servers that process read requests. The device determines that the request to access the database is a read request and, responsive to that determination, identifies one of the second database servers and transmits the request to that second database server.
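The read/write split described in this aspect can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the server addresses, the keyword-based classifier, and the round-robin stand-in for the load-balancing decision are all assumptions.

```python
# Minimal sketch of an intermediary that routes writes to a first (primary)
# database server and reads to one of several second (read-only) servers.
# All names and addresses are hypothetical.

PRIMARY = "db-primary:5432"          # first database server (write requests)
SECONDARIES = ["db-read-1:5432",     # second database servers (read requests)
               "db-read-2:5432"]

def is_read_request(query: str) -> bool:
    """Classify a SQL request by its leading keyword (simplified)."""
    parts = query.lstrip().split(None, 1)
    return bool(parts) and parts[0].upper() in {"SELECT", "SHOW"}

def route(query: str) -> str:
    """Return the server that should receive this request."""
    if is_read_request(query):
        # round-robin stands in for the load-balancing decision
        server = SECONDARIES[route.rr % len(SECONDARIES)]
        route.rr += 1
        return server
    return PRIMARY

route.rr = 0
```

For example, `route("INSERT INTO t VALUES (1)")` selects the primary, while successive `SELECT` statements alternate between the two read-only servers.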

“In some embodiments, the client can send a second request to the device to access the database. The device can determine that the second request to access the database is a write request and, responsive to that determination, identify the first database server and transmit the second request to the first database server.

“In some embodiments, a virtual server of the device determines that the request is a read request. The device can, in some embodiments, establish a first virtual server to route write requests to the first database server and a second virtual server to route read requests to the one or more second database servers.

“In some embodiments, the device can determine that the request to access the database is a read request based on a property of the connection over which it receives the request. In some embodiments, the device can determine that the request to access the database is a read request by inspecting the content of the request.
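The two determination strategies above, a property of the connection versus inspection of the request content, might look like this in outline. The listener port number and the keyword set are invented for the example.

```python
# Two hypothetical ways a device could classify a database request as a read:
# 1) by a property of the connection (here, the listener port it arrived on),
# 2) by inspecting the request content (here, the leading SQL keyword).

READ_LISTENER_PORT = 3307   # assumed port of a listener dedicated to reads
READ_VERBS = {"SELECT", "SHOW", "DESCRIBE", "EXPLAIN"}

def is_read_by_connection(local_port: int) -> bool:
    # Any request arriving on the read-only listener is treated as a read
    # without inspecting its payload.
    return local_port == READ_LISTENER_PORT

def is_read_by_content(query: str) -> bool:
    # Look only at the first SQL keyword of the request.
    parts = query.lstrip().split(None, 1)
    return bool(parts) and parts[0].upper() in READ_VERBS
```

Connection-based classification avoids parsing the payload entirely; content-based classification works even when reads and writes share one connection.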

“In some embodiments, the device can select, from the one or more second database servers, the second database server to which to send the read request based on a load on each of the one or more second database servers. The device can monitor each of the one or more second database servers and identify which of them are available. In some embodiments, the device selects the second database server from the subset of available second database servers. The device can, in some embodiments, identify the subset by monitoring a plurality of services assigned to the one or more second database servers and determining whether a status of the second database server associated with each service indicates that the server is available to process requests.
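The selection step described above, restrict to the available second database servers and then pick one by load, can be sketched as follows. The data structures and the use of active connection count as the load metric are assumptions for illustration.

```python
# Hypothetical sketch: choose a second (read-only) database server by first
# filtering on the monitored service status, then taking the least-loaded
# of the available subset.

from dataclasses import dataclass

@dataclass
class Secondary:
    name: str
    active_connections: int   # stand-in for the measured load
    service_up: bool          # result of the monitoring probe

def select_secondary(servers):
    # 1) restrict to the subset whose monitored service reports "up"
    available = [s for s in servers if s.service_up]
    if not available:
        return None
    # 2) among the available subset, choose the least-loaded server
    return min(available, key=lambda s: s.active_connections)
```

Note that a server reporting zero load but a failed health probe is never chosen; availability filtering happens before load comparison.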

“In some embodiments, the first database server is configured as a primary database server and the one or more second database servers are configured as secondary database servers. The one or more second database servers can be further configured as read-only.

“In another aspect, this disclosure is directed to a system for routing requests among a plurality of database servers. The system comprises a device that acts as an intermediary between a client and the plurality of database servers. The plurality of database servers includes a first database server that processes write requests and one or more second database servers that process read requests. The device includes a virtual server. The virtual server receives the request to access a database maintained by the plurality of database servers, determines that the request to access the database is a read request, and, responsive to that determination, identifies one of the second database servers. The device transmits the request to that second database server.

“In some embodiments, the client can send a second request to the device to access the database. The virtual server can determine that the second request to access the database is a write request and, responsive to that determination, identify the first database server. The second request can then be transmitted to the first database server.

“In some embodiments, the virtual server determines that the request is a read request. The system can, in some embodiments, establish a first virtual server to route write requests to the first database server and a second virtual server to route read requests to the one or more second database servers.

“In some embodiments, the device can determine that the request to access the database is a read request based on a property of the connection over which it receives the request. In some embodiments, the device can determine that the request to access the database is a read request by inspecting the content of the request.

“In some embodiments, the device can select, from the one or more second database servers, the second database server to which to send the read request based on a respective load on each of the one or more second database servers.”

“In some embodiments, the device can monitor each of the one or more second database servers and identify which of the one or more second database servers are available. The device can select the second database server from a subset of the available second database servers.

“In some embodiments, the device can identify the subset by monitoring a plurality of services assigned to the one or more second database servers and determining whether a status of the second database server associated with each service indicates that the server is available to process requests.”

“In some embodiments, the first database server is configured as a primary database server and the one or more second database servers are configured as secondary database servers. The one or more second database servers can be further configured as read-only.

“The details of various embodiments of the invention are set forth in the description and the accompanying drawings.”

“The following portions of the specification, together with their respective contents, may be helpful in reading the descriptions of the various embodiments:”

“Before discussing the specifics of particular embodiments of the systems and methods of an appliance and/or client, it may be helpful to discuss the network and computing environments in which such embodiments may be deployed. Referring now to FIG. 1A, an embodiment of a network environment is depicted. The network environment comprises one or more clients 102a-102n (also generally referred to as client(s) 102) in communication with one or more servers 106a-106n (also generally referred to as remote machine(s) 106) via one or more networks 104, 104′ (generally referred to as network 104). In some embodiments, a client 102 communicates with a server 106 via an appliance 200.

“Although FIG. 1A shows a network 104 and a network 104′ between the clients 102 and the servers 106, the clients 102 and the servers 106 may be on the same network 104. The networks 104 and 104′ can be the same type of network or different types of networks. The network 104 and/or the network 104′ can be a local-area network (LAN), such as a company intranet, a metropolitan-area network (MAN), or a wide-area network (WAN), such as the Internet or the World Wide Web. In one embodiment, network 104′ may be a private network and network 104 may be a public network. In some embodiments, network 104 may be a private network and network 104′ a public network. In another embodiment, networks 104 and 104′ may both be private networks. In some embodiments, clients 102 may be located at a branch office of a corporate enterprise and communicate via a WAN connection over the network 104 with the servers 106 located at a corporate data center.

“The network 104 and/or 104′ may be any type and/or form of network and may include any of the following: a broadcast network, a local-area network, a wide-area network, and a telecommunications network. The network 104 may comprise a wireless link, such as an infrared channel or satellite band, or a wireline network. The topology of the network 104 and/or 104′ may be a bus, star, or ring network topology. The network 104 and/or 104′ may be of any network topology known to those ordinarily skilled in the art and capable of supporting the operations described herein.

“As shown in FIG. 1A, the appliance 200 is shown between the networks 104 and 104′. In some embodiments, the appliance 200 may be located on network 104. For example, a branch office of a corporate enterprise may deploy an appliance 200 at the branch office. In other embodiments, the appliance 200 may be located on network 104′. For example, an appliance 200 may be located at a corporate data center. In yet another embodiment, a plurality of appliances 200 may be deployed on network 104, or on network 104′. In one embodiment, a first appliance 200 communicates with a second appliance 200. In other embodiments, the appliance 200 could be a part of any client 102 or server 106 on the same or a different network 104, 104′ as the client 102. One or more appliances 200 may be located at any point in the network or network communications path between a client 102 and a server 106.

“In some embodiments, the appliance 200 includes any of the network devices manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Fla., referred to as Citrix NetScaler devices. In other embodiments, the appliance 200 includes any of the product embodiments referred to as WebAccelerator and BigIP manufactured by F5 Networks, Inc. of Seattle, Wash. In another embodiment, the appliance 200 includes any of the DX acceleration device platforms and/or the SSL VPN series of devices, such as SA 700, SA 2000, SA 4000, and SA 6000 devices manufactured by Juniper Networks, Inc. of Sunnyvale, Calif. In yet another embodiment, the appliance 200 includes any application acceleration and/or security related appliances and/or software manufactured by Cisco Systems, Inc. of San Jose, Calif., such as the Cisco AVS Series Application Velocity System and Cisco ACE Application Control Engine Module service software.

“In one embodiment, the system may include multiple, logically-grouped servers 106. In these embodiments, the logical group of servers may be referred to as a server farm 38. In some of these embodiments, the servers 106 may be geographically dispersed. In some cases, a farm 38 may be administered as a single entity. In other embodiments, the server farm 38 comprises a plurality of server farms 38. In one embodiment, the server farm executes one or more applications on behalf of one or more clients 102.

“The servers 106 within each farm 38 can be heterogeneous. One or more of the servers 106 can operate according to one type of operating system platform (e.g., WINDOWS, manufactured by Microsoft Corp. of Redmond, Wash.), while one or more of the other servers 106 can operate according to another type of operating system platform (e.g., Unix or Linux). The servers 106 of each farm 38 do not need to be physically proximate to another server 106 in the same farm 38. Thus, the group of servers 106 logically grouped as a farm 38 may be interconnected using a wide-area network (WAN) connection or a metropolitan-area network (MAN) connection. For example, a farm 38 may include servers 106 physically located on different continents or in different regions of a country, state, city, or campus. Data transmission speeds between servers 106 in the farm 38 can be increased if the servers 106 are connected using a local-area network (LAN) connection or some form of direct connection.

Servers 106 may be referred to as a file server, application server, web server, proxy server, or gateway server. In some embodiments, a server 106 may have the capacity to function as either an application server or as a master application server. In one embodiment, a server 106 may include an Active Directory. The clients 102 may also be referred to as client nodes or endpoints. In some embodiments, a client 102 has the capacity to function both as a client node seeking access to applications on a server and as an application server providing access to hosted applications for other clients.

“In some embodiments, a client 102 communicates with a server 106 in a farm 38. In one embodiment, the client 102 communicates directly with one of the servers 106 in a farm 38. In another embodiment, the client 102 executes a program neighborhood application to communicate with a server 106 in a farm 38. In still another embodiment, the server 106 provides the functionality of a master node. In some embodiments, the client 102 communicates with the server 106 in the farm 38 through a network 104. Over the network 104, the client 102 can, for example, request execution of various applications hosted by the servers 106a-106n in the farm 38 and receive output of the results of the application execution for display. In some embodiments, only the master node provides the functionality required to identify and provide address information associated with a server 106 hosting the requested application.”

“In one embodiment, the server 106 provides the functionality of a web server. In another embodiment, the server 106a receives requests from the client 102, forwards the requests to a second server 106b, and responds to the request of the client 102 with a response from the server 106b. In still another embodiment, the server 106 acquires address information associated with a server hosting the application. In yet another embodiment, the server 106 presents the response to the request to the client 102 using a web interface. In one embodiment, the client 102 communicates directly with the server 106 to access the identified application. In another embodiment, the client 102 receives application output data, such as display data, generated by an execution of the identified application on the server.

“Referring now to FIG. 1B, an embodiment of a network environment deploying multiple appliances 200 is depicted. A first appliance 200 may be deployed on a first network 104 and a second appliance 200′ on a second network 104′. For example, a corporate enterprise may deploy a first appliance 200 at a branch office and a second appliance 200′ at a data center. In another embodiment, the first appliance 200 and the second appliance 200′ are deployed on the same network 104. For example, a first appliance 200 may be deployed for a first server farm 38 and a second appliance 200′ for a second server farm 38′. In another example, a first appliance 200 may be deployed at a first branch office while the second appliance 200′ is deployed at a second branch office. In some embodiments, the first appliance 200 and the second appliance 200′ work in cooperation or in conjunction with each other to accelerate network traffic or the delivery of applications and data between a client and a server.”

“Referring now to FIG. 1C, another embodiment of a network environment deploying the appliance 200 with one or more other types of appliances, such as one or more WAN optimization appliances 205, 205′, is depicted. For example, a first WAN optimization appliance 205 is shown between networks 104 and 104′, and a second WAN optimization appliance 205′ may be deployed between the appliance 200 and one or more servers 106. By way of example, a corporate enterprise may deploy a first WAN optimization appliance 205 at a branch office and a second WAN optimization appliance 205′ at a data center. In some embodiments, the appliance 205 may be located on network 104. In other embodiments, the appliance 205′ may be located on network 104′. In some embodiments, the appliance 205′ may be located on network 104 or network 104′. In one embodiment, the appliances 205 and 205′ are on the same network. In another embodiment, the appliances 205 and 205′ are on different networks. In another example, a first WAN optimization appliance 205 may be deployed for a first server farm 38 and a second WAN optimization appliance 205′ for a second server farm 38′.

“In one embodiment, the appliance 205 is a device for accelerating, optimizing, or otherwise improving the performance, operation, or quality of service of any type and form of network traffic, such as traffic to and/or from a WAN connection. In some embodiments, the appliance 205 is a performance enhancing proxy. In other embodiments, the appliance 205 is any type and form of WAN optimization or acceleration device, sometimes also referred to as a WAN optimization controller. In one embodiment, the appliance 205 is any of the product embodiments referred to as WANScaler manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Fla. In other embodiments, the appliance 205 includes any of the product embodiments referred to as BIG-IP link controller and WANjet manufactured by F5 Networks, Inc. of Seattle, Wash. In another embodiment, the appliance 205 includes any of the WX and WXC WAN acceleration device platforms manufactured by Juniper Networks, Inc. of Sunnyvale, Calif. In some embodiments, the appliance 205 includes any of the Steelhead line of WAN optimization appliances manufactured by Riverbed Technology of San Francisco, Calif. In other embodiments, the appliance 205 includes any of the WAN related devices manufactured by Expand Networks Inc. of Roseland, N.J. In one embodiment, the appliance 205 includes any of the WAN related appliances manufactured by Packeteer Inc. of Cupertino, Calif., such as the PacketShaper and iShared product embodiments provided by Packeteer. In yet another embodiment, the appliance 205 includes any WAN related appliances and/or software manufactured by Cisco Systems, Inc. of San Jose, Calif., such as the Wide Area Network Application Services software, network modules, and appliances.

“In one embodiment, the appliance 205 provides application and data acceleration services for branch-office or remote offices. In one embodiment, the appliance 205 includes optimization of Wide Area File Services (WAFS). In another embodiment, the appliance 205 accelerates the delivery of files, such as via the Common Internet File System (CIFS) protocol. In other embodiments, the appliance 205 provides caching in memory and/or storage to accelerate delivery of applications and data. In one embodiment, the appliance 205 provides compression of network traffic at any level of the network stack or at any protocol or network layer. In another embodiment, the appliance 205 provides transport layer protocol optimizations, flow control, performance enhancements or modifications, and/or management to accelerate delivery of applications and data over a WAN connection. For example, in one embodiment, the appliance 205 provides Transport Control Protocol (TCP) optimizations. In other embodiments, the appliance 205 provides optimizations, flow control, performance enhancements or modifications, and/or management for any session or application layer protocol.

“In another embodiment, the appliance 205 encodes any type and form of data or information into custom or standard TCP and/or IP header fields or option fields of a network packet to announce its presence, functionality, or capability to another appliance 205′. In another embodiment, an appliance 205′ may communicate with another appliance 205 using data encoded in TCP and/or IP header fields or options. For example, the appliances may use TCP options or IP header fields or options to communicate one or more parameters used by the appliances 205, 205′ in performing functionality, such as WAN acceleration, or in working in conjunction with each other.
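The idea of encoding a capability announcement into a packet option field can be illustrated with a small type-length-value record. This is a speculative sketch: the option kind (an experimental value) and the capability code are invented, and a real appliance would write these bytes into the actual TCP/IP headers of packets in flight.

```python
# Illustrative TLV encoding of an appliance capability announcement into
# TCP-option-style bytes. The option kind and capability code are invented.

import struct

OPT_KIND = 0xFD            # experimental TCP option kind (assumption)
CAP_WAN_ACCEL = 0x01       # invented capability code for "WAN acceleration"

def encode_capability(cap: int) -> bytes:
    # kind (1 byte), total length (1 byte), value (1 byte)
    return struct.pack("!BBB", OPT_KIND, 3, cap)

def decode_capability(opt: bytes):
    # Return the announced capability, or None if the bytes do not carry
    # our (hypothetical) option.
    if len(opt) < 3:
        return None
    kind, length, value = struct.unpack("!BBB", opt[:3])
    if kind == OPT_KIND and length == 3:
        return value
    return None
```

A peer that does not recognize the option kind simply ignores it, which is what makes header-field signaling transparent to intermediaries.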

“In some embodiments, the appliance 200 preserves any of the information encoded in TCP and/or IP header and/or option fields communicated between appliances 205 and 205′. For example, the appliance 200 may terminate a transport layer connection traversing the appliance 200, such as a transport layer connection between a client and a server traversing appliances 205 and 205′. In one embodiment, the appliance 200 identifies and preserves any encoded information in a transport layer packet transmitted by a first appliance 205 via a first transport layer connection and communicates a transport layer packet with the encoded information to a second appliance 205′ via a second transport layer connection.”

“Referring now to FIG. 1D, a network environment for delivering and/or operating a computing environment on a client 102 is depicted. In some embodiments, a server 106 includes an application delivery system 190 for delivering a computing environment or an application and/or data file to one or more clients 102. In brief overview, a client 102 is in communication with a server 106 via network 104, 104′ and appliance 200. For example, the client 102 may reside in a remote office of a company, e.g., a branch office, and the server 106 may reside at a corporate data center. The client 102 comprises a client agent 120 and a computing environment 15. The computing environment 15 may execute or operate an application that accesses, processes, or uses a data file. The computing environment 15, application, and/or data file may be delivered via the appliance 200 and/or the server 106.

“In some embodiments, the appliance 200 accelerates delivery of a computing environment 15, or any portion thereof, to a client 102. In one embodiment, the appliance 200 accelerates the delivery of the computing environment 15 by the application delivery system 190. For example, the embodiments described herein may be used to accelerate delivery of a streaming application and a data file processable by the application from a central corporate data center to a remote user location, such as a branch office of the company. In another embodiment, the appliance 200 accelerates transport layer traffic between a client 102 and a server 106. The appliance 200 may provide acceleration techniques for accelerating any transport layer payload from a server 106 to a client 102, such as: 1) transport layer connection pooling, 2) transport layer connection multiplexing, 3) transport control protocol buffering, 4) compression, and 5) caching. In some embodiments, the appliance 200 provides load balancing of servers 106 in responding to requests from clients 102. In other embodiments, the appliance 200 acts as a proxy or access server to provide access to the one or more servers 106. In another embodiment, the appliance 200 provides a secure virtual private network connection from a first network 104 of the client 102 to the second network 104′ of the server 106, such as an SSL VPN connection. In yet other embodiments, the appliance 200 provides application firewall security, control, and management of the connection and communications between a client 102 and a server 106.

“In some embodiments, the application delivery management system 190 provides application delivery techniques to deliver a computing environment to a desktop of a user, remote or otherwise, based on a plurality of execution methods and based on any authentication and authorization policies applied via a policy engine 195. With these techniques, a remote user may obtain a computing environment and access to server-stored applications and data files from any network-connected device 100. In one embodiment, the application delivery system 190 may reside or execute on a server 106. In another embodiment, the application delivery system 190 may reside or execute on a plurality of servers 106a-106n. In some embodiments, the application delivery system 190 may execute in a server farm 38. In one embodiment, the server 106 executing the application delivery system 190 may also store or provide the application and data file. In another embodiment, a first set of one or more servers 106 may execute the application delivery system 190, and a different server 106n may store or provide the application and data file. In some embodiments, each of the application delivery system 190, the application, and the data file may reside or be located on different servers. In yet another embodiment, any portion of the application delivery system 190 may reside, execute, be stored on, or be distributed to the appliance 200 or a plurality of appliances.

“The client 102 may include a computing environment 15 for executing an application that uses or processes a data file. The client 102, via networks 104, 104′ and appliance 200, may request an application and data file from the server 106. In one embodiment, the appliance 200 may forward a request from the client 102 to the server 106. For example, the client 102 may not have the application and data file stored or accessible locally. In response to the request, the application delivery system 190 and/or server 106 may deliver the application and data file to the client 102. For example, in one embodiment, the server 106 may transmit the application as an application stream to operate in the computing environment 15 on the client 102.

“In one embodiment, the application delivery system 190 comprises a policy engine 195 for controlling and managing access to, selection of application execution methods, and delivery of applications. In some embodiments, the policy engine 195 determines the one or more applications a user or client 102 may access. In another embodiment, the policy engine 195 determines how the application should be delivered to the user or client 102, e.g., the method of execution. In some embodiments, the application delivery system 190 provides a plurality of delivery techniques from which to select a method of application execution, such as server-based computing, streaming, or delivering the application locally to the client 102 for local execution.

“In one embodiment, a client 102 requests execution of an application program and the application delivery system 190, comprising a server 106, selects the method of executing the application program. In some embodiments, the server 106 receives credentials from the client 102. In another embodiment, the server 106 receives a request for an enumeration of available applications from the client 102. In one embodiment, in response to the request or receipt of credentials, the application delivery system 190 enumerates a plurality of application programs available to the client 102. The application delivery system 190 receives a request to execute an enumerated application. The application delivery system 190 selects one of a predetermined number of methods for executing the enumerated application, for example, responsive to a policy of a policy engine. The application delivery system 190 may select a method of execution of the application enabling the client 102 to receive application-output data generated by execution of the application program on a server 106. The application delivery system 190 may select a method of execution of the application enabling the client 102 to execute the application program locally after retrieving a plurality of application files comprising the application. In yet another embodiment, the application delivery system 190 may select a method of execution of the application to stream the application via the network 104 to the client 102.

A client 102 may execute, operate, or otherwise provide an application, which can be any type and/or form of software, program, or executable instructions, such as any type and/or form of web browser, web-based client, client-server application, thin-client computing client, ActiveX control, Java applet, or any other type and/or form of executable instructions capable of executing on client 102. In some embodiments, the application may be a server-based or remote-based application executed on behalf of the client 102 on a server 106. In one embodiment, the server 106 may display output to the client 102 using any thin-client or remote-display protocol, such as the Independent Computing Architecture (ICA) protocol manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Fla. or the Remote Desktop Protocol (RDP) manufactured by Microsoft Corporation of Redmond, Wash. The application can use any type of protocol and can be, for example, an HTTP client, an FTP client, or an Oscar client. In other embodiments, the application comprises any type of software related to VoIP communications, such as a soft IP telephone. In further embodiments, the application comprises any application related to real-time data communications, such as applications for streaming audio and/or video.

“Still referring to FIG. 1D, the network environment may include a monitoring server 106A. The monitoring server 106A may include any type and form of performance monitoring service 198. The performance monitoring service 198 may include monitoring, measurement, and/or management software and/or hardware, including data collection, aggregation, analysis, management, and reporting. In one embodiment, the performance monitoring service 198 includes one or more monitoring agents 197. The monitoring agent 197 includes any software, hardware, or combination thereof for performing monitoring, measurement, and data collection activities on a device, such as a client 102, a server 106, or an appliance 200, 205. In some embodiments, the monitoring agent 197 includes any type and form of script, such as Visual Basic script or Javascript. In one embodiment, the monitoring agent 197 executes transparently to any application and/or user of the device. In some embodiments, the monitoring agent 197 is installed and operated unobtrusively to the application or client. In yet another embodiment, the monitoring agent 197 is installed and operated without any instrumentation of the application or device.

“In some embodiments, the monitoring agent 197 monitors, measures, and collects data on a predetermined frequency. In other embodiments, the monitoring agent 197 monitors, measures, and collects data based upon detection of any type and form of event. For example, the monitoring agent 197 may collect data upon detection of a request for a web page or receipt of an HTTP response. In another example, the monitoring agent 197 may collect data upon detection of any user input event, such as a mouse click. The monitoring agent 197 may report or provide any monitored, measured, or collected data to the monitoring service 198. In one embodiment, the monitoring agent 197 transmits information to the monitoring service 198 according to a schedule or a predetermined frequency. In another embodiment, the monitoring agent 197 transmits information to the monitoring service 198 upon detection of an event.
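The two collection triggers described above, a predetermined frequency versus detection of an event, can be sketched as follows. The agent class, method names, and event labels are hypothetical, invented only to illustrate the distinction.

```python
# Hypothetical monitoring agent illustrating the two triggers described in
# the text: frequency-based collection and event-driven collection.

class MonitoringAgent:
    def __init__(self, interval_s: float):
        self.interval_s = interval_s   # predetermined collection frequency
        self.last_collect = 0.0
        self.samples = []              # data later reported to the service

    def maybe_collect(self, now: float) -> bool:
        """Collect a sample if the predetermined interval has elapsed."""
        if now - self.last_collect >= self.interval_s:
            self.samples.append(("periodic", now))
            self.last_collect = now
            return True
        return False

    def on_event(self, name: str) -> None:
        """Collect a sample upon detection of an event,
        e.g. receipt of an HTTP response or a mouse click."""
        self.samples.append(("event", name))
```

Reporting to the monitoring service would follow the same split: on a schedule, or immediately upon an event.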

“In some embodiments, the monitoring service 198 and/or monitoring agent 197 performs monitoring and performance measurement of any network resource or network infrastructure element, such as a client, a server, or an appliance. In one embodiment, the monitoring service 198 and/or monitoring agent 197 performs monitoring and performance measurement of any transport layer connection, such as a TCP or UDP connection. In another embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures network latency. In yet another embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures bandwidth utilization.

“In other embodiments, the monitoring service 198 and/or monitoring agent 197 monitors and measures end-user response times. In some embodiments, the monitoring service 198 performs monitoring and performance measurement of an application. In another embodiment, the monitoring service 198 and/or monitoring agent 197 performs monitoring and performance measurement of any session or connection to the application. In one embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of a browser. In another embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of HTTP-based transactions. In some embodiments, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of a Voice over IP (VoIP) application or session. In other embodiments, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of a remote display protocol application, such as an ICA client or RDP client. In yet another embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of streaming media. In still a further embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of a hosted application or a Software-As-A-Service (SaaS) delivery model.”

“In some embodiments, the monitoring service 198 and/or monitoring agent 197 perform monitoring and performance measurement of one or more transactions, requests or responses related to an application. In other embodiments, the monitoring service 198 and/or monitoring agent 197 monitor and measure any portion of an application layer stack, such as any .NET or J2EE calls. In one embodiment, the monitoring service 198 and/or monitoring agent 197 monitor, measure and report on database or SQL transactions. In yet another embodiment, the monitoring service 198 and/or monitoring agent 197 monitor and measure any method, function or application programming interface (API) call.

“In one embodiment, the monitoring service 198 and/or monitoring agent 197 monitor and measure performance of the delivery of an application and/or data from a server to a client via one or more appliances, such as appliance 200 and/or appliance 205. In some embodiments, the monitoring service 198 and/or monitoring agent 197 monitor and measure performance of delivery of a virtualized application. In other embodiments, the monitoring service 198 and/or monitoring agent 197 monitor and measure performance of delivery of a streaming application. In another embodiment, the monitoring service 198 and/or monitoring agent 197 monitor and measure performance of delivery of a desktop application to a client and/or the execution of the desktop application on the client. In another embodiment, the monitoring service 198 and/or monitoring agent 197 monitor and measure performance of a client/server application.

“In one embodiment, the monitoring service 198 and/or monitoring agent 197 are designed and constructed to provide application performance management for the application delivery system 190. For example, the monitoring service 198 and/or monitoring agent 197 may monitor, measure and manage the performance of the delivery of applications via the Citrix Presentation Server. In this example, the monitoring service 198 and/or monitoring agent 197 monitor individual ICA sessions. The monitoring service 198 and/or monitoring agent 197 may measure total and per-session system resource usage, as well as application and networking performance. The monitoring service 198 and/or monitoring agent 197 may identify the active servers for a given user and/or user session. In some embodiments, the monitoring service 198 and/or monitoring agent 197 monitor back-end connections between the application delivery system 190 and an application and/or database server. The monitoring service 198 and/or monitoring agent 197 may measure network latency, delay and volume per user session or per ICA session.

“In some embodiments, the monitoring service 198 and/or monitoring agent 197 measure and monitor memory usage for the application delivery system 190, such as total memory usage, per user session and/or per process. In other embodiments, the monitoring service 198 and/or monitoring agent 197 measure and monitor CPU usage of the application delivery system 190, such as total CPU usage, per user session and/or per process. In another embodiment, the monitoring service 198 and/or monitoring agent 197 measure and monitor the time required to log in to an application, a server, or the application delivery system, such as Citrix Presentation Server. In one embodiment, the monitoring service 198 and/or monitoring agent 197 measure and monitor the duration a user is logged in to an application, a server, or the application delivery system 190. In some embodiments, the monitoring service 198 and/or monitoring agent 197 measure and monitor active and inactive session counts for an application, server or application delivery system session. In yet another embodiment, the monitoring service 198 and/or monitoring agent 197 measure and monitor user session latency.

“In yet further embodiments, the monitoring service 198 and/or monitoring agent 197 measure and monitor any type and form of server metrics. In one embodiment, the monitoring service 198 and/or monitoring agent 197 measure and monitor metrics related to system memory, CPU usage and disk storage. In another embodiment, the monitoring service 198 and/or monitoring agent 197 measure and monitor metrics related to page faults, such as page faults per second. In other embodiments, the monitoring service 198 and/or monitoring agent 197 measure and monitor round-trip time metrics. In yet another embodiment, the monitoring service 198 and/or monitoring agent 197 measure and monitor metrics related to application crashes, errors and/or hangs.

“In some embodiments, the monitoring service 198 and monitoring agent 197 include any of the product embodiments referred to as EdgeSight, manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Fla. In another embodiment, the performance monitoring service 198 and/or monitoring agent 197 include any portion of the product embodiments referred to as the TrueView product suite, manufactured by the Symphoniq Corporation of Palo Alto, Calif. In one embodiment, the performance monitoring service 198 and/or monitoring agent 197 include any portion of the product embodiments referred to as the TeaLeaf CX product suite, manufactured by TeaLeaf Technology Inc. of San Francisco, Calif. In other embodiments, the performance monitoring service 198 and/or monitoring agent 197 include any portion of the business service management products, such as the BMC Performance Manager and Patrol products, manufactured by BMC Software, Inc. of Houston, Tex.

“The client 102, server 106, and appliance 200 may be deployed as and/or executed on any type and form of computing device, such as a computer or network device capable of communicating on any type and form of network and performing the operations described herein. FIGS. 1E and 1F depict block diagrams of a computing device 100 useful for practicing an embodiment of the client 102, server 106 or appliance 200. As shown in FIGS. 1E and 1F, each computing device 100 includes a central processing unit 101 and a main memory unit 122. As shown in FIG. 1E, a computing device 100 may include a visual display device 124, a keyboard 126 and/or a pointing device 127. Each computing device 100 may also include additional optional elements, such as one or more input/output devices 130a-130b (generally referred to using reference numeral 130), and a cache memory 140 in communication with the central processing unit 101.

The central processing unit 101 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 122. In many embodiments, the central processing unit is provided by a microprocessor unit, such as: those manufactured by Intel Corporation of Mountain View, Calif.; those manufactured by Motorola Corporation of Schaumburg, Ill.; those manufactured by Transmeta Corporation of Santa Clara, Calif.; the RS/6000 processor manufactured by International Business Machines of White Plains, N.Y.; or those manufactured by Advanced Micro Devices of Sunnyvale, Calif. The computing device 100 may be based on any of these processors, or any other processor capable of operating as described herein.

The main memory unit 122 may be one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the microprocessor 101. The main memory 122 may be based on any of the above-described memory chips, or any other available memory chips capable of operating as described herein. In the embodiment shown in FIG. 1E, the processor 101 communicates with main memory 122 via a system bus 150 (described in more detail below). FIG. 1F depicts an embodiment of a computing device 100 in which the processor communicates directly with main memory 122 via a memory port. For example, in FIG. 1F the main memory 122 may be DRDRAM.

“In one embodiment, the main processor 101 communicates with cache memory 140 via the system bus 150. Cache memory 140 typically has a faster response time than main memory 122 and is typically provided by SRAM or BSRAM. In the embodiment shown in FIG. 1F, the processor 101 communicates with various I/O devices 130 via a local bus 150. Various busses may be used to connect the central processing unit 101 to any of the I/O devices 130, including a VESA VL bus, an ISA bus, an EISA bus, a MicroChannel Architecture (MCA) bus, a PCI bus, a PCI-X bus, or a PCI-Express bus. For embodiments in which the I/O device is a video monitor 124, the processor 101 may use an Advanced Graphics Port (AGP) to communicate with the display 124. FIG. 1F depicts an embodiment of a computer 100 in which the main processor 101 communicates directly with I/O device 130b via HyperTransport or Rapid I/O. FIG. 1F also depicts an embodiment in which local busses and direct communication are mixed: the processor 101 communicates with I/O device 130a using a local bus while communicating with I/O device 130b via a local interconnect bus directly.

The computing device 100 may also include a network interface 118 to interface to a Local Area Network (LAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., 802.11, T1, 56 kb, X.25), broadband connections (e.g., ISDN, Frame Relay, ATM), or some combination of any or all of the above. The network interface 118 may comprise a built-in network adapter, network interface card, card bus network adapter, wireless network adapter, USB network adapter, or modem.

The computing device 100 may comprise a wide variety of I/O devices 130a-130n. Input devices include keyboards, mice and trackpads. Output devices include video displays, speakers and dye-sublimation printers. The I/O devices 130 may be controlled by an I/O controller 123 as shown in FIG. 1E. An I/O device may also provide storage 128 and/or an installation medium 116 for the computing device 100. In still other embodiments, the computing device 100 may provide USB connections to receive handheld USB storage devices, such as the USB Flash Drive line of devices manufactured by Twintech Industry, Inc. of Los Alamitos, Calif.”

“In some embodiments, the computing device 100 may comprise or be connected to multiple display devices 124a-124n, which each may be of the same or different type and/or form. As such, any of the I/O devices 130a-130n and/or the I/O controller 123 may comprise any type and/or form of suitable hardware, software, or combination of hardware and software to support, enable or provide for the connection and use of multiple display devices 124a-124n by the computing device 100. For example, the computing device 100 may include any type and/or form of video adapter, video card, driver, and/or software to interface, communicate, connect or otherwise use the multiple display devices. These embodiments may include any type of software designed and constructed to use the display device of another computer as a second display device 124a for the computing device 100. One ordinarily skilled in the art will recognize and appreciate the various ways and embodiments in which a computing device 100 may be configured to have multiple display devices 124a-124n.

“In further embodiments, an I/O device 130 may be a bridge 170 between the system bus 150 and an external communication bus, such as an Apple Desktop Bus, an RS-232 serial connection, a FireWire 800 bus or an Ethernet bus.

“In other embodiments, the computing device 100 may have different processors and operating systems consistent with the device. For example, in one embodiment, the computing device 100 is a Treo 180, 270, 1060, 600 or 650 smart phone manufactured by Palm, Inc. In this embodiment, the Treo smart phone is operated under the control of the PalmOS operating system and includes a stylus input device as well as a five-way navigator device. Moreover, the computing device 100 can be any computer or device capable of communication that has sufficient processor power and memory capacity to perform the operations described herein.

“In some embodiments, the computing device 100 may comprise a parallel processor with one or more cores. In one of these embodiments, the computing device 100 is a shared memory parallel device, with multiple processors and/or multiple processor cores, accessing all available memory as a single global address space. In another of these embodiments, the computing device 100 is a distributed memory parallel device with multiple processors each accessing local memory only. In still another of these embodiments, the computing device 100 has both some memory which is shared and some memory which can only be accessed by particular processors or subsets of processors. In still even another of these embodiments, the computing device 100 is a multi-core microprocessor, which combines two or more independent processors into a single package, often a single integrated circuit (IC). In yet another of these embodiments, the computing device 100 includes a chip having a CELL BROADBAND ENGINE architecture and including a Power processor element and a plurality of synergistic processing elements, the Power processor element and the plurality of synergistic processing elements linked together by an internal high-speed bus, which may be referred to as an element interconnect bus.

“In some embodiments, the processors provide functionality for execution of a single instruction simultaneously on multiple pieces of data (SIMD). In other embodiments, the processors provide functionality for execution of multiple instructions simultaneously on multiple pieces of data (MIMD). In still other embodiments, the processor may use any combination of SIMD and MIMD cores in a single device.

“In some embodiments, the computing device 100 may comprise a graphics processing unit. In one of these embodiments, shown in FIG. 1H, the computing device 100 includes at least one central processing unit 101 and at least one graphics processing unit. In another of these embodiments, the computing device 100 includes at least one parallel processing unit and at least one graphics processing unit. In still another of these embodiments, the computing device 100 includes a plurality of processing units of any type, one of the plurality of processing units comprising a graphics processing unit.

“In some embodiments, a first computing device 100a executes an application on behalf of a user of a client computing device 100b. In other embodiments, a computing device 100 executes a virtual machine, which provides an execution session within which applications execute on behalf of a user or a client computing device 100b. In one of these embodiments, the execution session is a hosted desktop session. In another of these embodiments, the computing device 100 executes a terminal services session. The terminal services session may provide a hosted desktop environment. In still another of these embodiments, the execution session provides access to a computing environment, which may comprise one or more of: an application, a plurality of applications, a desktop application, and a desktop session in which one or more applications may execute.”

“B. Appliance Architecture

“FIG. 2A illustrates an example embodiment of the appliance 200. The architecture of the appliance 200 in FIG. 2A is provided by way of illustration only and is not intended to be limiting. As shown in FIG. 2A, appliance 200 comprises a hardware layer 206 and a software layer divided into a user space 202 and a kernel space 204.

“Hardware layer 206 provides the hardware elements upon which programs and services within kernel space 204 and user space 202 are executed. Hardware layer 206 also provides the structures and elements which allow programs and services within kernel space 204 and user space 202 to communicate data both internally and externally with respect to appliance 200. As shown in FIG. 2, the hardware layer 206 includes a processing unit 262 for executing software programs and services, a memory 264 for storing software and data, network ports 266 for transmitting and receiving data over a network, and an encryption processor 260 for performing functions related to Secure Sockets Layer processing of data transmitted and received over the network. In some embodiments, the central processing unit 262 may perform the functions of the encryption processor 260 in a single processor. Additionally, the hardware layer 206 may comprise multiple processors for each of the processing unit 262 and the encryption processor 260. The processor 262 may include any of the processors 101 described above in connection with FIGS. 1E and 1F. For example, in one embodiment, the appliance 200 comprises a first processor 262 and a second processor 262′. In other embodiments, the processor 262 or 262′ comprises a multi-core processor.”

“Although the hardware layer 206 of appliance 200 is illustrated with an encryption processor 260, the processor 260 may be a processor for performing functions related to any encryption protocol, such as the Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protocol. In some embodiments, the processor 260 may be a general purpose processor (GPP), and in further embodiments, may have executable instructions for performing processing of any security-related protocol.

“Although the hardware layer 206 of appliance 200 is illustrated with certain elements in FIG. 2, the hardware portions or components of appliance 200 may comprise any type and form of elements, hardware or software, of a computing device, such as the computing device 100 illustrated and discussed herein in conjunction with FIGS. 1E and 1F. In some embodiments, the appliance 200 may comprise a server, gateway, router, switch, bridge or other type of computing or network device.

“The kernel space 204 is reserved for running the kernel 230, including any device drivers, kernel extensions or other kernel-related software. As known to those skilled in the art, the kernel 230 is the core of the operating system, and provides access, control, and management of resources and hardware-related elements of the application 104. In accordance with an embodiment of the appliance 200, the kernel space 204 also includes a number of network services or processes working in conjunction with a cache manager 232, the benefits of which are described in further detail herein. Additionally, the embodiment of the kernel 230 will depend on the embodiment of the operating system installed, configured, or otherwise used by the device 200.

“In one embodiment, the device 200 comprises one network stack 267, such as a TCP/IP based stack, for communicating with the client 102 and/or the server 106. In one embodiment, the network stack 267 is used to communicate with a first network, such as network 108, and a second network 110. In some embodiments, the device 200 terminates a first transport layer connection, such as a TCP connection of a client 102, and establishes a second transport layer connection to a server 106 for use by the client 102; for example, the first and second transport layer connections may be established via a single network stack 267. In other embodiments, the device 200 may comprise multiple network stacks, for example 267 and 267′, and the first transport layer connection may be established or terminated at one network stack 267 and the second transport layer connection on the second network stack 267′. For example, one network stack may be for receiving and transmitting network packets on a first network, and another network stack for receiving and transmitting network packets on a second network. In one embodiment, the network stack 267 comprises a buffer 243 for queuing one or more network packets for transmission by the appliance 200.
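The split-connection behavior described above, terminating the client's transport-layer connection and forwarding the payload over a second connection established toward the server, can be sketched with in-process socket pairs standing in for the two connections. All names are illustrative:

```python
import socket

def relay_once(client_side, server_side, bufsize=4096):
    """Read one chunk from the first (client-facing) connection and
    forward it over the second (server-facing) connection."""
    data = client_side.recv(bufsize)
    if data:
        server_side.sendall(data)
    return data

# Two connected socket pairs stand in for the two transport connections.
client, appliance_in = socket.socketpair()
appliance_out, server = socket.socketpair()

client.sendall(b"GET /db HTTP/1.1\r\n\r\n")
forwarded = relay_once(appliance_in, appliance_out)
received = server.recv(4096)

for s in (client, appliance_in, appliance_out, server):
    s.close()
```

A real intermediary would loop over both directions and manage connection setup and teardown; this sketch only shows the one-hop forwarding step.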

“As shown in FIG. 2, the kernel space 204 includes the cache manager 232, a high-speed layer 2-7 integrated packet engine 240, an encryption engine 234, a policy engine 236 and a multi-protocol compression engine 238. Running these components or processes 232, 240, 234, 236 and 238 in kernel space 204 or kernel mode instead of the user space 202 improves the performance of each of these components, alone and in combination. Kernel operation means that these components or processes 232, 240, 234, 236 and 238 run in the core address space of the operating system of the device 200. For example, running the encryption engine 234 in kernel mode improves encryption performance by moving encryption and decryption operations to the kernel, thereby reducing the number of transitions between the memory space or a kernel thread in kernel mode and the memory space or a thread in user mode. For example, data obtained in kernel mode may not need to be passed or copied to a process or thread running in user mode, such as from a kernel-level data structure to a user-level data structure. In another aspect, the number of context switches between kernel mode and user mode is also reduced. Additionally, synchronization of and communications between any of the components or processes 232, 240, 234, 236 and 238 can be performed more efficiently in the kernel space 204.

“In some embodiments, any portion of the components 232, 240, 234, 236 and 238 may run or operate in the kernel space 204, while other portions of these components 232, 240, 234, 236 and 238 may run or operate in user space 202. In one embodiment, the appliance 200 uses a kernel-level data structure providing access to any portion of one or more network packets, for example, a network packet comprising a request from a client 102 or a response from a server 106. In some embodiments, the kernel-level data structure may be obtained by the packet engine 240 via a transport layer driver interface or filter to the network stack 267. The kernel-level data structure may comprise any interface and/or data accessible via the kernel space 204 related to the network stack 267, or network traffic or packets received or transmitted by the network stack 267. In other embodiments, the kernel-level data structure may be used by any of the components or processes 232, 240, 234, 236 and 238 to perform the desired operation of the component or process. In one embodiment, a component 232, 240, 234, 236 or 238 is running in kernel mode when using the kernel-level data structure, while in another embodiment, the component 232, 240, 234, 236 or 238 is running in user mode when using the kernel-level data structure. In some embodiments, the kernel-level data structure may be copied or passed to a second kernel-level data structure, or any desired user-level data structure.

The cache manager 232 may comprise software, hardware or any combination of software and hardware to provide cache access, control and management of any type and form of content, such as objects or dynamically generated objects served by the originating server 106. The data, objects or content processed and stored by the cache manager 232 may comprise data in any format, such as a markup language, or communicated via any protocol. In some embodiments, the cache manager 232 duplicates original data stored elsewhere or data previously computed, generated or transmitted, in which the original data may require longer access time to fetch, compute or otherwise obtain relative to reading a cache memory element. Once the data is stored in the cache memory element, future use can be made by accessing the cached copy rather than refetching or recomputing the original data, thereby reducing the access time. In some embodiments, the cache memory element may comprise a data object in memory 264 of device 200. In other embodiments, the cache memory element may comprise memory having a faster access time than memory 264. In another embodiment, the cache memory element may comprise any type and form of storage element of the device 200, such as a portion of a hard disk. In some embodiments, the processing unit 262 may provide cache memory for use by the cache manager 232. In yet further embodiments, the cache manager 232 may use any portion and combination of memory, storage, or the processing unit for caching data, objects and other content.
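The cache-aside pattern described above, serving a stored copy when one exists and fetching from the origin otherwise, can be sketched as follows. The class and the origin-fetch callback are illustrative stand-ins, not the cache manager 232 itself:

```python
class SimpleCache:
    """Minimal cache-aside sketch: serve the cached copy when present,
    otherwise fetch the original data and remember the result."""

    def __init__(self, fetch_from_origin):
        self.fetch_from_origin = fetch_from_origin  # slow path to origin
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.store:
            self.hits += 1          # fast path: read the cached copy
            return self.store[key]
        self.misses += 1            # slow path: fetch/compute the original
        value = self.fetch_from_origin(key)
        self.store[key] = value
        return value

cache = SimpleCache(fetch_from_origin=lambda k: f"object:{k}")
first = cache.get("/index.html")    # miss: fetched from the origin
second = cache.get("/index.html")   # hit: served from the cache
```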

“Furthermore, the cache manager 232 includes any logic, functions, rules, or operations to perform any embodiments of the techniques of the appliance 200 described herein. For example, the cache manager 232 includes logic or functionality to invalidate objects based on the expiration of an invalidation time period or upon receipt of an invalidation command from a client 102 or server 106. In some embodiments, the cache manager 232 may operate as a program, service, process or task executing in the kernel space 204, and in other embodiments, in the user space 202. In one embodiment, a first portion of the cache manager 232 executes in the user space 202 while a second portion executes in the kernel space 204. In some embodiments, the cache manager 232 can comprise any type of general purpose processor (GPP), or any other type of integrated circuit, such as a Field Programmable Gate Array (FPGA), Programmable Logic Device (PLD), or Application Specific Integrated Circuit (ASIC).

“The policy engine 236 may include, for example, an intelligent statistical engine or other programmable application(s). In one embodiment, the policy engine 236 provides a configuration mechanism to allow a user to identify, specify, define or configure a caching policy. The policy engine 236, in some embodiments, also has access to memory to support data structures such as lookup tables or hash tables to enable user-selected caching policy decisions. In other embodiments, the policy engine 236 may comprise any logic, rules, functions or operations to determine and provide access, control and management of objects, data or content being cached by the appliance 200, in addition to access, control and management of security, network traffic, network access, compression or any other function or operation performed by the appliance 200. Further examples of specific caching policies are described herein.
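The lookup-table approach to caching policy decisions described above can be illustrated with a small table keyed by content type. The table contents and helper name are hypothetical examples, not policies defined by the specification:

```python
# Hypothetical policy table mapping a content type to a caching decision.
CACHING_POLICY = {
    "text/html":        {"cache": True,  "ttl_s": 60},
    "image/png":        {"cache": True,  "ttl_s": 3600},
    "application/json": {"cache": False, "ttl_s": 0},
}

def caching_decision(content_type):
    # Unknown types fall back to a conservative "do not cache" policy.
    return CACHING_POLICY.get(content_type, {"cache": False, "ttl_s": 0})

decision = caching_decision("image/png")
```

A real policy engine would consult many more attributes (URL, headers, user, security context), but the dictionary lookup captures the core mechanism.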

“The encryption engine 234 comprises any logic, business rules, functions or operations for handling the processing of any security related protocol, such as SSL or TLS, or any function related thereto. For example, the encryption engine 234 encrypts and decrypts network packets, or any portion thereof, communicated via the appliance 200. The encryption engine 234 may also set up or establish SSL or TLS connections on behalf of the client 102a-102n, server 106a-106n, or appliance 200. As such, the encryption engine 234 provides offloading and acceleration of SSL processing. In one embodiment, the encryption engine 234 uses a tunneling protocol to provide a virtual private network between a client 102a-102n and a server 106a-106n. In some embodiments, the encryption engine 234 comprises executable instructions running on the encryption processor 260.

“The multi-protocol compression engine 238 comprises any logic, business rules, function or operations for compressing one or more protocols of a network packet, such as any of the protocols used by the network stack 267 of the device 200. In one embodiment, the multi-protocol compression engine 238 compresses bi-directionally between clients 102a-102n and servers 106a-106n any TCP/IP based protocol, such as the File Transfer Protocol (FTP), HyperText Transfer Protocol (HTTP), Common Internet File System (CIFS) protocol (file transfer), Independent Computing Architecture (ICA) protocol, Remote Desktop Protocol (RDP), Wireless Application Protocol (WAP), Mobile IP protocol, and Voice Over IP (VoIP) protocol. In other embodiments, the multi-protocol compression engine 238 provides compression of Hypertext Markup Language (HTML) based protocols and, in some embodiments, provides compression of any markup language, such as the Extensible Markup Language (XML). In one embodiment, the multi-protocol compression engine 238 provides compression of any high-performance protocol, such as any protocol designed for appliance 200 to appliance 200′ communications. In another embodiment, the multi-protocol compression engine 238 compresses any payload of or any communication using a modified transport control protocol, such as Transaction TCP (T/TCP), TCP with selective acknowledgements (TCP-SACK), TCP with large windows (TCP-LW), a congestion prediction protocol such as the TCP-Vegas protocol, and a TCP spoofing protocol.
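The payload-compression step described above can be illustrated with DEFLATE via the standard `zlib` module; this is a generic sketch, not the engine's actual algorithm or protocol handling:

```python
import zlib

def compress_payload(payload: bytes, level: int = 6) -> bytes:
    """Compress a protocol payload with DEFLATE (illustrative stand-in
    for the multi-protocol compression engine 238)."""
    return zlib.compress(payload, level)

def decompress_payload(blob: bytes) -> bytes:
    """Restore the original payload on the receiving side."""
    return zlib.decompress(blob)

# A repetitive HTML body compresses well.
body = b"<html>" + b"<p>hello</p>" * 200 + b"</html>"
compressed = compress_payload(body)
restored = decompress_payload(compressed)
```

Bidirectional operation simply runs the same pair of functions in both directions of the client-server exchange.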

As such, the multi-protocol compression engine 238 accelerates performance for users accessing applications via desktop clients, e.g., Microsoft Outlook, and non-Web thin clients, such as any client launched by popular enterprise applications like Oracle, SAP and Siebel, and even mobile clients, such as the Pocket PC. In some embodiments, the multi-protocol compression engine 238, by executing in the kernel mode 204 and integrating with the packet processing engine 240 accessing the network stack 267, is able to compress any of the protocols carried by the TCP/IP protocol.

“High speed layer 2-7 integrated packet engine 240, also generally referred to as a packet processing engine or packet engine, is responsible for managing the kernel-level processing of packets received and transmitted by appliance 200 via network ports 266. The high speed layer 2-7 integrated packet engine 240 may comprise a buffer for queuing one or more network packets during processing, such as for receipt of a network packet or transmission of a network packet. Additionally, the high speed layer 2-7 integrated packet engine 240 is in communication with one or more network stacks 267 to send and receive network packets via network ports 266. The high speed layer 2-7 integrated packet engine 240 works in conjunction with the encryption engine 234, cache manager 232, policy engine 236 and multi-protocol compression logic 238. In particular, the encryption engine 234 is configured to perform SSL processing of packets, the policy engine 236 is configured to perform functions related to traffic management such as request-level content switching and request-level cache redirection, and the multi-protocol compression logic 238 is configured to perform functions related to compression and decompression of data.

“The high speed layer 2-7 integrated packet engine 240 includes a packet processing timer 242. In one embodiment, the packet processing timer 242 provides one or more time intervals to trigger the processing of incoming (i.e., received) or outgoing (i.e., transmitted) network packets. In some embodiments, the high speed layer 2-7 integrated packet engine 240 processes network packets responsive to the timer 242. The packet processing timer 242 provides any type and form of signal to the packet engine 240 to notify, trigger, or communicate a time-related event, interval or occurrence. In many embodiments, the packet processing timer 242 operates in the order of milliseconds, such as 100 ms, 50 ms or 25 ms. In some embodiments, the packet processing timer 242 provides time intervals to the packet engine 240; in other embodiments, the time interval is 5 ms, and in still further embodiments, the time interval may be as short as 3, 2, or 1 ms. During operation, the high speed layer 2-7 integrated packet engine 240 may be interfaced, integrated or in communication with the encryption engine 234, cache manager 232, policy engine 236 and multi-protocol compression engine 238. As such, any of the logic, functions, or operations of the encryption engine 234, cache manager 232, policy engine 236 and multi-protocol compression engine 238 may be performed responsive to the packet processing timer 242 and/or the packet engine 240. In another embodiment, the expiry or invalidation time of a cached object can be set to the same order of granularity as the time interval of the packet processing timer 242, such as at every 10 ms.
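The timer-driven processing model above, packets queuing in a buffer until the timer fires, can be sketched as follows. The class and method names are illustrative; a real engine's timer would be driven by the kernel, not called by hand:

```python
import collections

class PacketEngine:
    """Sketch of a timer-driven packet engine (cf. packet engine 240 and
    packet processing timer 242): received packets queue in a buffer and
    are drained whenever the timer fires."""

    def __init__(self):
        self.buffer = collections.deque()   # queue of pending packets
        self.processed = []

    def receive(self, packet):
        # Packets arrive asynchronously and wait for the next timer tick.
        self.buffer.append(packet)

    def on_timer(self):
        # Invoked at a configured interval, e.g. every 10 ms.
        while self.buffer:
            self.processed.append(self.buffer.popleft())

engine = PacketEngine()
engine.receive(b"pkt-1")
engine.receive(b"pkt-2")
engine.on_timer()   # one tick drains the queue
```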

“In contrast to kernel space 204, user space 202 is the memory area or portion of the operating system used by user mode applications or programs otherwise running in user mode. A user mode application may not access kernel space 204 directly and must use service calls in order to access kernel services. As shown in FIG. 2, user space 202 of appliance 200 includes a graphical user interface (GUI) 210, a command line interface (CLI) 212, shell services 214, health monitoring program 216, and daemon services 218. GUI 210 and CLI 212 provide a means by which a system administrator or other user can interact with and control the operation of appliance 200, such as via the operating system of the appliance 200. The GUI 210 or CLI 212 can comprise code running in user space 202 or kernel space 204. The GUI 210 may be any type and form of graphical user interface and may be presented via text, graphical or otherwise, by any type of program or application, such as a browser. The CLI 212 may be any type and form of command line or text-based interface, such as a command line provided by the operating system. For example, the CLI 212 may comprise a shell, which is a tool that enables users to interact with the operating system. In some embodiments, the CLI 212 may be provided via a bash, csh, tcsh, or ksh type shell. Shell services 214 comprise the programs, services and tasks that support interaction with the appliance 200 by a user.

“Health monitoring program 216 is used to monitor, check, report and ensure that network systems are functioning properly and that users are receiving requested content over a network. Health monitoring program 216 comprises one or more programs, services, tasks, processes or executable instructions to provide logic, rules, functions or operations for monitoring any activity of the appliance 200. In some embodiments, the health monitoring program 216 intercepts and inspects any network traffic passed via the appliance 200. In other embodiments, the health monitoring program 216 interfaces by any suitable means and/or mechanisms with one or more of the following: the encryption engine 234, cache manager 232, policy engine 236, multi-protocol compression logic 238, packet engine 240, daemon services 218, and shell services 214. As such, the health monitoring program 216 may call any application programming interface (API) to determine a state, status, or health of any portion of the appliance 200. For example, the health monitoring program 216 may ping or send a status inquiry on a periodic basis to check if a program, process, service or task is active and currently running. In another example, the health monitoring program 216 may check any status, error or history logs provided by any program, process, service or task to determine any condition, status or error with any portion of the appliance 200.
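The periodic status-inquiry pattern above can be sketched with a helper that probes each registered component and records whether it is active. The probe callables and names are hypothetical:

```python
def check_health(services):
    """Sketch of a health monitor (cf. health monitoring program 216):
    issue a status inquiry to each registered probe and report whether
    the component is active and currently running."""
    report = {}
    for name, probe in services.items():
        try:
            report[name] = bool(probe())
        except Exception:
            report[name] = False    # a raised error counts as unhealthy
    return report

def failing_probe():
    raise RuntimeError("service down")

status = check_health({
    "packet_engine": lambda: True,
    "cache_manager": failing_probe,
})
```

In practice such a check would run on a schedule and the probes would call real APIs or send pings rather than local callables.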

“Daemon services 218 are programs that run continuously or in the background and handle periodic service requests received by the appliance 200. In some embodiments, a daemon service may forward the requests to other programs or processes, such as another daemon service 218, as appropriate. As known to those skilled in the art, a daemon service 218 may run unattended to perform continuous or periodic system wide functions, such as network control, or to perform any desired task. In some embodiments, one or more daemon services 218 run in the user space 202, while in other embodiments, one or more daemon services 218 run in the kernel space.

“Referring now to FIG. 2B, another embodiment of the appliance 200 is depicted. In brief overview, the appliance 200 provides one or more of the following services, functionality or operations for communications between one or more clients 102 and one or more servers 106. A server 106 may provide one or more network related services 270a-270n (referred to as services 270). For example, a server 106 may provide an http service 270. The appliance 200 comprises one or more virtual servers or virtual internet protocol servers, also referred to as a vServer, VIP server, or just VIP 275a-275n (also referred to herein as vServer 275). The vServer 275 receives, intercepts or otherwise processes communications between a client 102 and a server 106 in accordance with the configuration and operations of the appliance 200.

The vServer 275 may comprise software, hardware or any combination of software and hardware. The vServer 275 may comprise any type and form of program, service, task, process or executable instructions operating in user mode 202, kernel mode 204 or any combination thereof in the appliance 200. The vServer 275 includes any logic, functions, rules, or operations to perform any embodiments of the techniques described herein. In some embodiments, the vServer 275 establishes a connection to a service 270 of a server 106. The service 270 may comprise any program, application, process, task or set of executable instructions capable of connecting to and communicating with the appliance 200, client 102 or vServer 275. For example, the service 270 may comprise a web server, http server, ftp server, email server or database server. In some embodiments, the service 270 is a daemon process or network driver for listening, receiving and/or sending communications for an application, such as email, a database or an enterprise application. In some embodiments, the service 270 may communicate on a specific IP address, or IP address and port.

“In some embodiments, the vServer 275 applies one or more policies of the policy engine 236 to network communications between the client 102 and server 106. In one embodiment, the policies are associated with a vServer 275. In another embodiment, the policies are based on a user, or a group of users. In yet another embodiment, a policy is global and applies to one or more vServers 275a-275n, and any user or group of users communicating via the appliance 200. In some embodiments, the policies of the policy engine have conditions upon which the policy is applied based on any content of the communication, such as the internet protocol address, port, or protocol type of the packet, or the context of the communication, such as user, group of the user, vServer 275, transport layer connection, and/or identification or attributes of the client 102 or server 106.
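
A policy with content- and context-based conditions, as described above, can be modeled roughly as a predicate over the attributes of a communication. The sketch below is illustrative only; the class and field names are assumptions, not the policy engine 236's actual interface.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class RequestContext:
    """Attributes of a communication a policy condition may examine."""
    user: str
    group: str
    dst_port: int
    protocol: str


@dataclass
class Policy:
    """A named condition paired with the action to take when it matches."""
    name: str
    condition: Callable[[RequestContext], bool]
    action: str  # e.g. "ALLOW", "DENY", "COMPRESS"


def apply_policies(ctx: RequestContext, policies) -> str:
    """Return the action of the first policy whose condition matches the context."""
    for policy in policies:
        if policy.condition(ctx):
            return policy.action
    return "ALLOW"  # default action when no policy matches (an assumption)
```

For example, a global policy can be expressed as a condition that is always true, while a user- or group-based policy inspects the corresponding context fields.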

“In other embodiments, the appliance 200 communicates or interfaces with the policy engine 236 to determine authentication and/or authorization of a remote user or a remote client 102 to access the computing environment 15, application, and/or data file from a server 106. In another embodiment, the appliance 200 communicates or interfaces with the policy engine 236 to determine authentication and/or authorization of a remote user or a remote client 102 to have the application delivery system 190 deliver one or more of the computing environment 15, application, and/or data file. In yet another embodiment, the appliance 200 establishes a VPN or SSL VPN connection based on the policy engine's 236 authentication and/or authorization of a remote user or a remote client 102. In one embodiment, the appliance 200 controls the flow of network traffic and communication sessions based on policies of the policy engine 236. For example, the appliance 200 may control access to a computing environment 15 or a data file based on the policy engine 236.

“In some embodiments, the vServer 275 establishes a transport layer connection, such as a TCP or UDP connection, with a client 102 via the client agent 120. In one embodiment, the vServer 275 listens for and receives communications from the client 102. In other embodiments, the vServer 275 establishes a transport layer connection, such as a TCP or UDP connection, with a server 106. In one embodiment, the vServer 275 establishes the transport layer connection to an internet protocol address and port of a service 270 running on the server 106. In another embodiment, the vServer 275 associates a first transport layer connection to a client 102 with a second transport layer connection to the server 106. In some embodiments, a vServer 275 establishes a pool of transport layer connections to a server 106 and multiplexes client requests via the pooled transport layer connections.

“In some embodiments, the appliance 200 provides an SSL VPN connection 280 between a client 102 and a server 106. For example, a client 102 on a first network 104 requests to establish a connection to a server 106 on a second network 104′. In some embodiments, the second network 104′ is not routable from the first network 104. In other embodiments, the client 102 is on a public network 104 and the server 106 is on a private network 104′, such as a corporate network. In one embodiment, the client agent 120 intercepts communications of the client 102 on the first network 104, encrypts the communications, and transmits the communications via a first transport layer connection to the appliance 200. The appliance 200 associates the first transport layer connection on the first network 104 to a second transport layer connection to the server 106 on the second network 104′. The appliance 200 receives the intercepted communication from the client agent 120, decrypts the communications, and transmits the communication to the server 106 on the second network 104′ via the second transport layer connection. The second transport layer connection may be a pooled transport layer connection. As such, the appliance 200 provides an end-to-end secure transport layer connection for the client 102 between the two networks 104, 104′.

“In one embodiment, the appliance 200 hosts an intranet internet protocol or IntranetIP 282 address of the client 102. The client 102 has a local network identifier, such as an internet protocol (IP) address and/or host name, on the first network 104. When connected to the second network 104′ via the appliance 200, the appliance 200 establishes, assigns or otherwise provides an IntranetIP address 282 for the client 102 on the second network 104′. The appliance 200 listens for and receives on the second or private network 104′ any communications directed towards the client 102 using the client's established IntranetIP 282. In one embodiment, the appliance 200 acts as or on behalf of the client 102 on the second private network 104′. For example, in another embodiment, a vServer 275 listens for and responds to communications to the IntranetIP 282 of the client 102. In some embodiments, if a computing device 100 on the second network 104′ transmits a request, the appliance 200 processes the request as if it were the client 102. For example, the appliance 200 may respond to a ping to the client's IntranetIP 282. In another example, the appliance may establish a connection, such as a TCP or UDP connection, with a computing device 100 on the second network 104′ requesting a connection with the client's IntranetIP 282.”

“In some embodiments, the appliance 200 provides one or more of the following acceleration techniques 288 to communications between a client 102 and a server 106: 1) compression; 2) decompression; 3) Transmission Control Protocol pooling; 4) Transmission Control Protocol multiplexing; 5) Transmission Control Protocol buffering; and 6) caching. In one embodiment, the appliance 200 relieves servers 106 of much of the processing load caused by repeatedly opening and closing transport layer connections to clients 102 by opening one or more transport layer connections with each server 106 and maintaining these connections to allow repeated data accesses by clients via the Internet. This technique is referred to herein as “connection pooling”.

“In some embodiments, in order to seamlessly splice communications from a client 102 to a server 106 via a pooled transport layer connection, the appliance 200 translates or multiplexes communications by modifying sequence numbers and acknowledgment numbers at the transport layer protocol level. This is referred to as “connection multiplexing”. In some embodiments, no application layer protocol interaction is required. For example, in the case of an in-bound packet (that is, a packet received from a client 102), the source network address of the packet is changed to that of an output port of the appliance 200, and the destination network address is changed to that of the intended server. In the case of an out-bound packet (that is, one received from a server 106), the source network address is changed from that of the server 106 to that of an output port of the appliance 200, and the destination address is changed from that of the appliance 200 to that of the requesting client 102. The sequence numbers and acknowledgment numbers of the packet are also translated to the sequence numbers and acknowledgment numbers expected by the client 102 on the appliance's 200 transport layer connection to the client 102. In some embodiments, the packet checksum of the transport layer protocol is recalculated to account for these translations.
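
The address rewriting and sequence/acknowledgment translation can be sketched as below. The `Muxer` class and the use of fixed per-connection deltas are simplifying assumptions for illustration; real connection multiplexing tracks state per spliced connection and also recomputes the transport layer checksum.

```python
from dataclasses import dataclass

MASK32 = 0xFFFFFFFF  # TCP sequence numbers wrap at 32 bits


@dataclass
class TcpPacket:
    """A simplified TCP packet: endpoint addresses plus seq/ack numbers."""
    src: tuple  # (ip, port)
    dst: tuple
    seq: int
    ack: int


class Muxer:
    """Rewrites addresses and translates sequence/ack numbers when splicing
    a client-side connection onto a pooled server-side connection."""

    def __init__(self, appliance_addr, seq_delta, ack_delta):
        self.appliance_addr = appliance_addr
        self.seq_delta = seq_delta  # offset between client and pooled sequence spaces
        self.ack_delta = ack_delta

    def inbound(self, pkt, server_addr):
        """Client -> server: source becomes the appliance, destination the server."""
        return TcpPacket(src=self.appliance_addr, dst=server_addr,
                         seq=(pkt.seq + self.seq_delta) & MASK32,
                         ack=(pkt.ack + self.ack_delta) & MASK32)

    def outbound(self, pkt, client_addr):
        """Server -> client: source becomes the appliance, destination the client;
        translate back into the client's sequence space."""
        return TcpPacket(src=self.appliance_addr, dst=client_addr,
                         seq=(pkt.seq - self.ack_delta) & MASK32,
                         ack=(pkt.ack - self.seq_delta) & MASK32)
```

The symmetric subtraction in `outbound` ensures the client sees sequence and acknowledgment numbers consistent with its own connection to the appliance, regardless of which pooled server connection carried the request.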

“In another embodiment, the appliance 200 provides switching or load-balancing functionality 284 for communications between the client 102 and server 106. In some embodiments, the appliance 200 distributes traffic and directs client requests to a server 106 based on layer 4 or application-layer request data. In one embodiment, although the network layer or layer 2 of the network packet identifies a destination server 106, the appliance 200 determines the server 106 to which to distribute the network packet based on application information and data carried as payload of the transport layer packet. In one embodiment, the health monitoring programs 216 of the appliance 200 monitor the health of servers to determine the server 106 to which to distribute a client's request. In some embodiments, if the appliance 200 detects that a server 106 is not available or has a load over a predetermined threshold, the appliance 200 can direct or distribute client requests to another server 106.
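
The health- and load-aware distribution decision described above could look roughly like this sketch; the function name and the load representation are assumptions chosen for illustration.

```python
def pick_server(servers, threshold=0.8):
    """Choose the healthy server with the lowest load below the threshold.

    servers: {name: {"up": bool, "load": float}} -- health and load as
    reported by a monitoring component. Servers that are down or whose
    load meets or exceeds the threshold are excluded, mirroring the
    behavior of redirecting requests away from overloaded servers.
    """
    candidates = [(info["load"], name) for name, info in servers.items()
                  if info["up"] and info["load"] < threshold]
    if not candidates:
        raise RuntimeError("no available server")
    return min(candidates)[1]
```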

“In some embodiments, the appliance 200 acts as a Domain Name Service (DNS) resolver or otherwise provides resolution of DNS requests from clients 102. In some embodiments, the appliance intercepts a DNS request transmitted by the client 102. In one embodiment, the appliance 200 responds to a client's DNS request with an IP address of or hosted by the appliance 200. In this embodiment, the client 102 transmits network communication for the domain name to the appliance 200. In another embodiment, the appliance 200 responds to a client's DNS request with an IP address of or hosted by a second appliance 200′. In some embodiments, the appliance 200 responds to a client's DNS request with an IP address of a server 106.
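
Responding to an intercepted DNS request with an appliance-hosted address can be sketched as a simple lookup; the `DnsInterceptor` class and its methods below are hypothetical names standing in for the behavior described above.

```python
class DnsInterceptor:
    """Answers intercepted DNS queries with an IP hosted by the appliance,
    so that subsequent traffic for that name flows through the appliance."""

    def __init__(self, appliance_ip, passthrough):
        self.appliance_ip = appliance_ip
        self.passthrough = passthrough  # real resolver for names not intercepted
        self.intercepted = set()

    def intercept(self, name):
        """Register a domain name whose queries should resolve to the appliance."""
        self.intercepted.add(name.lower())

    def resolve(self, name):
        """Return the appliance's IP for intercepted names, else defer."""
        if name.lower() in self.intercepted:
            return self.appliance_ip
        return self.passthrough(name)
```

The same structure accommodates the other embodiments mentioned: returning the address of a second appliance or of a selected server is just a different value in place of `appliance_ip`.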

Summary for “Systems and methods to redirect handling”

A client can request access to one or more databases maintained by a plurality of database servers. The plurality of database servers may include a primary database server that receives all client requests and one or more secondary database servers. Depending on the type of request, the primary database server may generate a redirect response asking the client to send the request to another server. The primary database server may need to expend considerable resources to generate and transmit these redirect responses.

“The present disclosure is directed to systems and methods for routing requests among a plurality of database servers. A device that acts as an intermediary between a client and the plurality of database servers receives a request to access a database, identifies the type of the request, and routes it to the appropriate subset of the plurality of database servers. If the device determines that the request is a read request, it can send the request to a subset of one or more secondary database servers of the plurality that are configured to process read requests. Which secondary database server receives the request may be determined by load balancing among the secondary database servers. If the device instead determines that the request is not a read request, such as a write request, it can send the request to a subset of the plurality of database servers that are primary database servers.

“In one aspect, this disclosure is directed to a method for routing requests among a plurality of database servers. A device that acts as an intermediary between a client and the plurality of database servers receives a request to access a database maintained by the plurality of database servers. The plurality of database servers may include a first database server that processes write requests and one or more second database servers that process read requests. The device determines that the request to access the database is a read request. Responsive to determining that the request to access the database is a read request, the device identifies a second database server of the one or more second database servers to which to transmit the request. The device then transmits the request to the second database server.
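
As one illustration of the method, a read/write classification based on the content of the request and the subsequent routing might look like the following sketch. The keyword-based classifier and the round-robin selection among secondaries are simplifying assumptions; the disclosure also contemplates classifying by a property of the connection and selecting a secondary server by load.

```python
import itertools

# Statements treated as reads for this illustration (an assumption).
READ_KEYWORDS = {"select", "show", "describe", "explain"}


def is_read_request(sql: str) -> bool:
    """Classify a SQL statement by inspecting its content (leading keyword)."""
    first = sql.lstrip().split(None, 1)[0].lower() if sql.strip() else ""
    return first in READ_KEYWORDS


class DbRouter:
    """Routes write requests to the first (primary) database server and
    read requests across the second (read-only) database servers."""

    def __init__(self, primary, secondaries):
        self.primary = primary
        self._cycle = itertools.cycle(secondaries)  # simple round-robin

    def route(self, sql: str) -> str:
        """Return the name of the server that should receive this request."""
        return next(self._cycle) if is_read_request(sql) else self.primary
```

Because the intermediary device performs the classification and routing itself, the primary server never has to generate the redirect responses described in the background.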

“In some embodiments, the device can receive, from the client, a second request to access the database. The device can determine that the second request to access the database is a write request. Responsive to determining that the second request to access the database is a write request, the device can identify the first database server to which to transmit the second request. The device can then transmit the second request to the first database server.

“In some embodiments, a virtual server of the device may determine that the request is a read request. In some embodiments, the device can assign a first virtual server to route write requests to the first database server, and assign a second virtual server to route read requests to each of the one or more second database servers.

“In some embodiments, the device can determine that the request to access the database is a read request based on a property of the connection over which the device receives the request. In some embodiments, the device can determine that the request to access the database is a read request based on the content of the request.

“In some embodiments, the device can select, from the one or more second database servers, the second database server to which to transmit the read request based on a load on each of the one or more second database servers. The device can monitor each of the one or more second database servers and identify a subset of the second database servers that are available. In some embodiments, the device selects the second database server from the subset of available second database servers. In some embodiments, the device can identify, for each of a plurality of services assigned to the one or more second database servers, whether a status of the second database server associated with the service indicates that the server is available to process requests.
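
Selecting among the available secondaries by load can be sketched as below; the status strings and the connection-count load metric are illustrative assumptions, not the actual monitoring interface.

```python
def select_secondary(services, loads):
    """Pick, from the subset of available secondaries, the least-loaded one.

    services: {server: "UP" or "DOWN"} -- status reported by the per-server
    monitoring services assigned to the secondaries.
    loads: {server: active connection count} -- the load metric used here.
    Returns None when no secondary is available.
    """
    available = [s for s, status in services.items() if status == "UP"]
    if not available:
        return None
    return min(available, key=lambda s: loads.get(s, 0))
```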

“In some embodiments, the first database server is configured as a primary database server, and the one or more second database servers are configured as secondary database servers. The one or more second database servers may further be configured as read-only.

“In another aspect, this disclosure is directed to a system for routing requests among a plurality of database servers. The system comprises a device that acts as an intermediary between a client and a plurality of database servers. The plurality of database servers includes a first database server that processes write requests and one or more second database servers that process read requests. The device also includes a virtual server. The virtual server receives a request to access a database maintained by the plurality of database servers. The virtual server determines that the request to access the database is a read request. Responsive to determining that the request to access the database is a read request, the virtual server identifies a second database server of the one or more second database servers to which to transmit the request. The virtual server then transmits the request to the second database server.

“In some embodiments, the device can receive, from the client, a second request to access the database. The device can determine that the second request to access the database is a write request. Responsive to determining that the second request to access the database is a write request, the device can identify the first database server to which to transmit the second request. The second request can then be transmitted to the first database server.

“In some embodiments, the virtual server of the device can determine that the request is a read request. In some embodiments, the system can assign a first virtual server to route write requests to the first database server, and assign a second virtual server to route read requests to each of the one or more second database servers.

“In some embodiments, the device can determine that the request to access the database is a read request based on a property of the connection over which the device receives the request. In some embodiments, the device can determine that the request to access the database is a read request based on the content of the request.

“In some embodiments, the device can select, from the one or more second database servers, the second database server to which to transmit the read request based on a respective load on each of the one or more second database servers.”

“In some embodiments, the device can monitor each of the one or more second database servers and determine which of the one or more second database servers are available. The device can select the second database server from a subset of the available second database servers.

“In some embodiments, the device can identify, by monitoring each of the one or more second database servers, a subset of a plurality of services assigned to the one or more second database servers, responsive to determining that a status of the second database server associated with each such service indicates availability to process requests.”

“In some embodiments, the first database server is configured as a primary database server, and the one or more second database servers are configured as secondary database servers. The one or more second database servers may further be configured as read-only.

“The details of various embodiments of the invention are set forth in the description and accompanying drawings below.”

“For purposes of reading the description of the various embodiments below, the following descriptions of the sections of the specification and their respective contents may be helpful:”

“Prior to discussing the specifics of embodiments of the systems and methods of an appliance and/or client, it may be helpful to discuss the network and computing environments in which such embodiments may be deployed. Referring now to FIG. 1A, an embodiment of a network environment is depicted. In brief overview, the network environment comprises one or more clients 102a-102n (also generally referred to as client(s) 102) in communication with one or more servers 106a-106n (also generally referred to as server(s) 106 or remote machine(s) 106) via one or more networks 104, 104′ (generally referred to as network 104). In some embodiments, a client 102 communicates with a server 106 via an appliance 200.

“Although FIG. 1A shows a network 104 and a network 104′ between the clients 102 and the servers 106, the clients 102 and the servers 106 may be on the same network 104. The networks 104 and 104′ can be the same type of network or different types of networks. The network 104 and/or the network 104′ can be a local-area network (LAN), such as a company intranet, a metropolitan area network (MAN), or a wide area network (WAN), such as the Internet or the World Wide Web. In one embodiment, network 104′ may be a private network and network 104 may be a public network. In some embodiments, network 104 may be a private network and network 104′ a public network. In another embodiment, networks 104 and 104′ may both be private networks. In some embodiments, clients 102 may be located at a branch office of a corporate enterprise communicating via a WAN connection over the network 104 to the servers 106 located at a corporate data center.

“The network 104 and/or 104′ may be any type and/or form of network and may include any of the following: a point-to-point network, a broadcast network, a wide area network, a local area network, a telecommunications network, a data communication network, a computer network, a wireless network and a wireline network. In some embodiments, the network 104 may comprise a wireless link, such as an infrared channel or satellite band. The topology of the network 104 and/or 104′ may be a bus, star, or ring network topology. The network 104 and/or 104′ and network topology may be of any such network or network topology as known to those ordinarily skilled in the art capable of supporting the operations described herein.

“As shown in FIG. 1A, the appliance 200 is shown between the networks 104 and 104′. In some embodiments, the appliance 200 may be located on network 104. For example, a branch office of a corporate enterprise may deploy an appliance 200 at the branch office. In other embodiments, the appliance 200 may be located on network 104′. For example, an appliance 200 may be located at a corporate data center. In yet another embodiment, a plurality of appliances 200 may be deployed on network 104. In some embodiments, a plurality of appliances 200 may be deployed on network 104′. In one embodiment, a first appliance 200 communicates with a second appliance 200′. In other embodiments, the appliance 200 could be a part of any client 102 or server 106 on the same or different network 104, 104′ as the client 102. One or more appliances 200 may be located at any point in the network or network communications path between a client 102 and a server 106.

“In some embodiments, the appliance 200 comprises any of the network devices manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Fla., referred to as Citrix NetScaler devices. In other embodiments, the appliance 200 includes any of the product embodiments referred to as WebAccelerator and BigIP manufactured by F5 Networks, Inc. of Seattle, Wash. In another embodiment, the appliance 200 includes any of the DX acceleration device platforms and/or the SSL VPN series of devices, such as SA 700, SA 2000, SA 4000, and SA 6000 devices manufactured by Juniper Networks, Inc. of Sunnyvale, Calif. In yet another embodiment, the appliance 200 includes any application acceleration and/or security related appliances and/or software manufactured by Cisco Systems, Inc. of San Jose, Calif., such as the Cisco AVS Series Application Velocity System and Cisco ACE Application Control Engine Module service software.

“In one embodiment, the system may include multiple, logically-grouped servers 106. In these embodiments, the logical group of servers may be referred to as a server farm 38. In some of these embodiments, the servers 106 may be geographically dispersed. In some cases, a farm 38 may be administered as a single entity. In other embodiments, the server farm 38 comprises a plurality of server farms 38. In one embodiment, the server farm executes one or more applications on behalf of one or more clients 102.

“The servers 106 within each farm 38 can be heterogeneous. One or more of the servers 106 can operate according to one type of operating system platform (e.g., WINDOWS, manufactured by Microsoft Corp. of Redmond, Wash.), while one or more of the other servers 106 can operate according to another type of operating system platform (e.g., Unix or Linux). The servers 106 of each farm 38 do not need to be physically proximate to another server 106 in the same farm 38. Thus, the group of servers 106 logically grouped as a farm 38 may be interconnected using a wide-area network (WAN) connection or a medium-area network (MAN) connection. For example, a farm 38 may include servers 106 physically located in different continents or in different regions of a country, state, city, or campus. Data transmission speeds between servers 106 in the farm 38 can be increased if the servers 106 are connected using a local-area network (LAN) connection or some form of direct connection.

Servers 106 may be referred to as a file server, application server, web server, proxy server, or gateway server. In some embodiments, a server 106 may have the capacity to function as either an application server or as a master application server. In one embodiment, a server 106 may include an Active Directory. The clients 102 may also be referred to as client nodes or endpoints. In some embodiments, a client 102 has the capacity to function both as a client node seeking access to applications on a server and as an application server providing access to hosted applications for other clients.

“In some embodiments, a client 102 communicates with a server 106 in a farm 38. In one embodiment, the client 102 communicates directly with one of the servers 106 in the farm 38. In another embodiment, the client 102 executes a program neighborhood application to communicate with a server 106 in the farm 38. In still another embodiment, the server 106 provides the functionality of a master node. In some embodiments, the client 102 communicates with the server 106 in the farm 38 through a network 104. Over the network 104, the client 102 can, for example, request execution of various applications hosted by the servers 106a-106n in the farm 38 and receive output of the results of the application execution for display. In some embodiments, only the master node provides the functionality required to identify and provide address information associated with a server 106 hosting a requested application.”

“In one embodiment, the server 106 provides the functionality of a web server. In another embodiment, the server 106a receives requests from the client 102, forwards the requests to a second server 106b, and responds to the request by the client 102 with a response from the server 106b. In still another embodiment, the server 106 acquires address information associated with a server hosting a requested application. In yet another embodiment, the server 106 presents the response to the request to the client 102 using a web interface. In one embodiment, the client 102 communicates directly with the server 106 to access the identified application. In another embodiment, the client 102 receives application output data, such as display data, generated by an execution of the identified application on the server 106.

Referring now to FIG. 1B, an embodiment of a network environment deploying multiple appliances 200 is depicted. A first appliance 200 may be deployed on a first network 104, and a second appliance 200′ may be deployed on a second network 104′. For example, a corporate enterprise may deploy a first appliance 200 at a branch office and a second appliance 200′ at a data center. In another embodiment, the first appliance 200 and the second appliance 200′ are deployed on the same network 104 or network 104′. For example, a first appliance 200 may be deployed for a first server farm 38, and a second appliance 200′ may be deployed for a second server farm 38′. In another example, a first appliance 200 may be deployed at a first branch office while the second appliance 200′ is deployed at a second branch office. In some embodiments, the first appliance 200 and second appliance 200′ work in cooperation or in conjunction with each other to accelerate network traffic or the delivery of applications and data between a client and a server.”

Referring now to FIG. 1C, another embodiment of a network environment deploying the appliance 200 with one or more other types of appliances, such as one or more WAN optimization appliances 205, 205′, is depicted. For example, a first WAN optimization appliance 205 is shown between networks 104 and 104′, and a second WAN optimization appliance 205′ may be deployed between the appliance 200 and one or more servers 106. By way of example, a corporate enterprise may deploy a first WAN optimization appliance 205 at a branch office and a second WAN optimization appliance 205′ at a data center. In some embodiments, the appliance 205 may be located on network 104. In other embodiments, the appliance 205 may be located on network 104′. In some embodiments, the appliance 205′ may be located on network 104 or network 104′. In one embodiment, the appliances 205 and 205′ are on the same network. In another embodiment, the appliances 205 and 205′ are on different networks. In another example, a first WAN optimization appliance 205 may be deployed for a first server farm 38 and a second WAN optimization appliance 205′ for a second server farm 38′.

“In one embodiment, the appliance 205 is a device for accelerating, optimizing or otherwise improving the performance, operation, or quality of any type and form of network traffic, such as traffic to and/or from a WAN connection. In some embodiments, the appliance 205 is a performance enhancing proxy. In other embodiments, the appliance 205 is any type and form of WAN optimization or acceleration device, sometimes also referred to as a WAN optimization controller. In one embodiment, the appliance 205 is any of the product embodiments referred to as WANScaler manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Fla. In other embodiments, the appliance 205 includes any of the product embodiments referred to as BIG-IP link controller and WANjet manufactured by F5 Networks, Inc. of Seattle, Wash. In another embodiment, the appliance 205 includes any of the WX and WXC WAN acceleration device platforms manufactured by Juniper Networks, Inc. of Sunnyvale, Calif. In some embodiments, the appliance 205 includes any of the Steelhead line of WAN optimization appliances manufactured by Riverbed Technology of San Francisco, Calif. In other embodiments, the appliance 205 includes any of the WAN related devices manufactured by Expand Networks Inc. of Roseland, N.J. In one embodiment, the appliance 205 includes any of the WAN related appliances manufactured by Packeteer Inc. of Cupertino, Calif., such as the PacketShaper and iShared product embodiments provided by Packeteer. In yet another embodiment, the appliance 205 includes any WAN related appliances and/or software manufactured by Cisco Systems, Inc. of San Jose, Calif., such as the Wide Area Network Application Services software, network modules and appliances.

“In one embodiment, the appliance 205 provides application and data acceleration services for branch offices or remote offices. In one embodiment, the appliance 205 includes optimization of Wide Area File Services (WAFS). In another embodiment, the appliance 205 accelerates the delivery of files, such as via the Common Internet File System protocol. In other embodiments, the appliance 205 provides caching in memory and/or storage to accelerate the delivery of applications and data. In one embodiment, the appliance 205 provides compression of network traffic at any level of the network stack or at any protocol or network layer. In another embodiment, the appliance 205 provides transport layer protocol optimizations, flow control, performance enhancements or modifications and/or management to accelerate the delivery of applications and data over a WAN connection. For example, in one embodiment, the appliance 205 provides Transport Control Protocol (TCP) optimizations. In other embodiments, the appliance 205 provides optimizations, flow control, performance enhancements or modifications and/or management for any session or application layer protocol.

“In another embodiment, the appliance 205 encodes any type and form of data or information into custom or standard TCP and/or IP header fields or option fields of a network packet to announce its presence, functionality or capability to another appliance 205′. In another embodiment, an appliance 205 may communicate with another appliance 205′ using data encoded in TCP and/or IP header fields or options. For example, the appliances may use TCP options or IP header fields or options to communicate one or more parameters to be used by the appliances 205, 205′ in performing functionality, such as WAN acceleration, or in working in conjunction with each other.

“In some embodiments, the appliance 200 preserves any of the information encoded in TCP and/or IP header and/or option fields communicated between appliances 205 and 205′. For example, the appliance 200 may terminate a transport layer connection traversing the appliance 200, such as a transport layer connection between a client and a server traversing appliances 205 and 205′. In one embodiment, the appliance 200 identifies and preserves any encoded information in a transport layer packet transmitted by a first appliance 205 via a first transport layer connection and communicates a transport layer packet with the encoded information to a second appliance 205′ via a second transport layer connection.”

Referring now to FIG. 1D, a network environment for delivering and/or operating a computing environment on a client 102 is depicted. In some embodiments, a server 106 includes an application delivery system 190 for delivering a computing environment or an application and/or data file to one or more clients 102. In brief overview, a client 102 is in communication with a server 106 via networks 104, 104′ and the appliance 200. For example, the client 102 may reside in a remote office of a company, e.g., a branch office, and the server 106 may reside at a corporate data center. The client 102 comprises a client agent 120 and a computing environment 15. The computing environment 15 may execute or operate an application that accesses, processes or uses a data file. The computing environment 15, application and/or data file may be delivered via the appliance 200 and/or the server 106.

In some embodiments, the appliance 200 accelerates delivery of a computing environment 15, or any portion thereof, to a client 102. In one embodiment, the appliance 200 accelerates the delivery of the computing environment 15 by the application delivery system 190. For example, the embodiments described herein may be used to accelerate delivery of a streaming application and data file processable by the application from a central corporate data center to a remote user location, such as a branch office of the company. In another embodiment, the appliance 200 accelerates transport layer traffic between a client 102 and a server 106. The appliance 200 may provide acceleration techniques for accelerating any transport layer payload from a server 106 to a client 102, such as: 1) transport layer connection pooling, 2) transport layer connection multiplexing, 3) transport control protocol buffering, 4) compression and 5) caching. In some embodiments, the appliance 200 provides load balancing of servers 106 in responding to requests from clients 102. In other embodiments, the appliance 200 acts as a proxy or access server to provide access to the one or more servers 106. In another embodiment, the appliance 200 provides a secure virtual private network connection from a first network 104 of the client 102 to the second network 104′ of the server 106, such as an SSL VPN connection. In yet other embodiments, the appliance 200 provides application firewall security, and control and management of the connection and communications between a client 102 and a server 106.
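Of the acceleration techniques listed, transport layer connection pooling is the easiest to illustrate: the appliance keeps idle server-side connections and reuses them across client requests instead of opening a new connection per request. A minimal sketch, with a hypothetical pluggable `connect` factory standing in for real socket setup:

```python
import collections

class ConnectionPool:
    """Minimal sketch of transport layer connection pooling: reuse idle
    server-side connections rather than opening one per client request.
    `connect` is a hypothetical factory supplied by the caller."""

    def __init__(self, connect):
        self._connect = connect
        self._idle = collections.deque()
        self.opened = 0  # how many real connections were ever created

    def acquire(self):
        if self._idle:
            return self._idle.popleft()   # reuse a pooled connection
        self.opened += 1
        return self._connect()            # pool empty: open a new one

    def release(self, conn):
        self._idle.append(conn)           # return to pool for later reuse
```

With pooling in place, N client requests served sequentially consume one server-side connection rather than N, which is the source of the acceleration.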

In some embodiments, the application delivery management system 190 provides application delivery techniques to deliver a computing environment to a desktop of a user, remote or otherwise, based on a plurality of execution methods and based on any authentication and authorization policies applied via a policy engine 195. With these techniques, a remote user may obtain a computing environment and access to server-stored applications and data files from any network-connected device 100. In one embodiment, the application delivery system 190 may reside or execute on a server 106. In another embodiment, the application delivery system 190 may reside or execute on a plurality of servers 106a-106n. In some embodiments, the application delivery system 190 may execute in a server farm 38. In one embodiment, the server 106 executing the application delivery system 190 may also store or provide the application and data file. In another embodiment, a first set of one or more servers 106 may execute the application delivery system 190, and a different server 106n may store or provide the application and data file. In some embodiments, each of the application delivery system 190, the application, and the data file may reside or be located on different servers. In yet another embodiment, any portion of the application delivery system 190 may reside, execute, or be stored on or distributed to the appliance 200, or a plurality of appliances.

The client 102 may include a computing environment 15 for executing an application that uses or processes a data file. The client 102, via networks 104, 104′ and appliance 200, may request an application and data file from the server 106. In one embodiment, the appliance 200 may forward a request from the client 102 to the server 106. For example, the client 102 may not have the application and data file stored or accessible locally. In response to the request, the application delivery system 190 and/or server 106 may deliver the application and data file to the client 102. For example, in one embodiment, the server 106 may transmit the application as an application stream to operate in the computing environment 15 on the client 102.

In one embodiment, the application delivery system 190 comprises a policy engine 195 for controlling and managing access to, selection of application execution methods for, and delivery of applications. In some embodiments, the policy engine 195 determines the one or more applications a user or client 102 may access. In another embodiment, the policy engine 195 determines how the application should be delivered to the user or client 102, e.g., the method of execution. In some embodiments, the application delivery system 190 provides a plurality of delivery techniques from which to select a method of application execution, such as server-based computing, or streaming the application to the client 102 for local execution.

In one embodiment, a client 102 requests execution of an application program and the application delivery system 190, comprising a server 106, selects a method of executing the application program. In some embodiments, the server 106 receives credentials from the client 102. In another embodiment, the server 106 receives a request for an enumeration of available applications from the client 102. In one embodiment, in response to the request or receipt of credentials, the application delivery system 190 enumerates a plurality of application programs available to the client 102. The application delivery system 190 receives a request to execute an enumerated application. The application delivery system 190 selects one of a predetermined number of methods for executing the enumerated application, for example, responsive to a policy of a policy engine. The application delivery system 190 may select a method of execution of the application enabling the client 102 to receive application-output data generated by execution of the application program on a server 106. The application delivery system 190 may select a method of execution of the application enabling the client 102 to execute the application program locally after retrieving a plurality of application files comprising the application. In yet another embodiment, the application delivery system 190 may select a method of execution of the application to stream the application via the network to the client 102.
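The method selection described above — choosing among delivery methods responsive to a policy — might be modeled, very loosely, as a first-match lookup over a policy table. The field names, groups, and method strings below are invented for illustration; a real policy engine 195 evaluates far richer conditions:

```python
def select_execution_method(user_group: str, app: str, policies: list) -> str:
    """Return the delivery method of the first policy matching the request.
    '*' acts as a wildcard in either the group or app field."""
    for policy in policies:
        if policy["group"] in (user_group, "*") and policy["app"] in (app, "*"):
            return policy["method"]
    return "server-based"  # default: execute on server, send display output

# Hypothetical policy table: engineers stream the IDE locally,
# everyone else gets server-based execution.
policies = [
    {"group": "engineering", "app": "ide", "method": "stream-to-client"},
    {"group": "*", "app": "*", "method": "server-based"},
]
```

Rule order matters in a first-match scheme, so more specific policies are listed before the catch-all.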

A client 102 may include, be installed with, execute or operate an application, which can be any type and/or form of software, program, or executable instructions, such as any type and/or form of web browser, web-based client, client-server application, thin-client computing client, ActiveX control, Java applet, or any other type and/or form of executable instructions capable of executing on client 102. In some embodiments, the application may be a server-based or a remote-based application executed on behalf of the client 102 on a server 106. In one embodiment, the server 106 may display output to the client 102 using any thin-client or remote-display protocol, such as the Independent Computing Architecture (ICA) protocol manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Fla. or the Remote Desktop Protocol (RDP) manufactured by the Microsoft Corporation of Redmond, Wash. The application can use any type of protocol, and it can be, for example, an HTTP client, an FTP client, or an Oscar client. In other embodiments, the application comprises any type of software related to VoIP communications, such as a soft IP telephone. In further embodiments, the application comprises any application related to real-time data communications, such as applications for streaming audio and/or video.

Still referring to FIG. 1D, the network environment may include a monitoring server 106A. The monitoring server 106A may include any type and form of performance monitoring service 198. The performance monitoring service 198 may include monitoring, measurement and/or management software and/or hardware, including data collection, aggregation, analysis, management and reporting. In one embodiment, the performance monitoring service 198 includes one or more monitoring agents 197. The monitoring agent 197 includes any software, hardware or combination thereof for performing monitoring, measurement and data collection activities on a device, such as a client 102, server 106 or an appliance 200, 205. In some embodiments, the monitoring agent 197 includes any type and form of script, such as Visual Basic script or JavaScript. In one embodiment, the monitoring agent 197 executes transparently to any application and/or user of the device. In some embodiments, the monitoring agent 197 is installed and operated unobtrusively to the application or client. In yet another embodiment, the monitoring agent 197 is installed and operated without any instrumentation for the application or device.

In some embodiments, the monitoring agent 197 monitors, measures and collects data on a predetermined frequency. In other embodiments, the monitoring agent 197 monitors, measures and collects data based upon detection of any type and form of event. For example, the monitoring agent 197 may collect data upon detection of a request for a web page or receipt of an HTTP response. In another example, the monitoring agent 197 may collect data upon detection of any user input events, such as a mouse click. The monitoring agent 197 may report or provide any monitored, measured or collected data to the monitoring service 198. In one embodiment, the monitoring agent 197 transmits information to the monitoring service 198 according to a schedule or a predetermined frequency. In another embodiment, the monitoring agent 197 transmits information to the monitoring service 198 upon detection of an event.
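The dual collection modes described above — on a predetermined frequency and on event detection — can be modeled with a small sketch. The class name and `report` callback are illustrative, not from the patent:

```python
class MonitoringAgent:
    """Sketch of an agent that records a data point either on a schedule
    tick or when an event (e.g. an HTTP response) is detected, and reports
    each sample to a monitoring-service callback."""

    def __init__(self, report, frequency=5):
        self.report = report        # callable delivering data to the service
        self.frequency = frequency  # ticks between scheduled collections
        self._ticks = 0
        self.samples = []

    def tick(self):
        """Advance the agent's clock; collect when the period elapses."""
        self._ticks += 1
        if self._ticks % self.frequency == 0:
            self._collect("scheduled")

    def on_event(self, kind):
        """Event-driven collection, e.g. kind == 'http_response'."""
        self._collect(kind)

    def _collect(self, reason):
        sample = {"reason": reason, "tick": self._ticks}
        self.samples.append(sample)
        self.report(sample)
```

In practice the agent would buffer samples and batch its reports, but the two triggers (timer and event) are the essential structure.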

In some embodiments, the monitoring service 198 and/or monitoring agent 197 performs monitoring and performance measurement of any network resource or network infrastructure element. In one embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of any transport layer connection, such as a TCP or UDP connection. In another embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures network latency. In yet another embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures bandwidth utilization.

In other embodiments, the monitoring service 198 and/or monitoring agent 197 monitors and measures end-user response times. In some embodiments, the monitoring service 198 performs monitoring and performance measurement of an application. In another embodiment, the monitoring service 198 and/or monitoring agent 197 performs monitoring and performance measurement of any session or connection to the application. In one embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of a browser. In another embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of HTTP-based transactions. In some embodiments, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of a Voice over IP (VoIP) application or session. In other embodiments, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of a remote display protocol application, such as an ICA client or RDP client. In yet another embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of streaming media. In still a further embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of a hosted application or a Software-As-A-Service (SaaS) delivery model.

In some embodiments, the monitoring service 198 and/or monitoring agent 197 performs monitoring and performance measurement of one or more transactions, requests or responses related to an application. In other embodiments, the monitoring service 198 and/or monitoring agent 197 monitors and measures any portion of an application layer stack, such as any .NET or J2EE calls. In one embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures database or SQL transactions. In yet another embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures any method, function or application programming interface (API) call.
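Monitoring individual method, function, or API calls, as described above, is commonly done by wrapping the call and timing it. A generic decorator sketch (not the patent's mechanism); the `records` sink stands in for reporting to the monitoring service:

```python
import functools
import time

def monitored(records):
    """Wrap any callable (a database query, an API call) so each invocation
    appends (name, elapsed_seconds) to `records`."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                # Record even if the call raises, so errors are still timed.
                records.append((fn.__name__, time.perf_counter() - start))
        return wrapper
    return decorator

records = []

@monitored(records)
def run_query(sql):
    # Placeholder for a real database call.
    return f"rows for {sql}"
```

The same wrapper applied at the data-access layer yields per-SQL-transaction timings without instrumenting the application itself, matching the "without any instrumentation" goal mentioned earlier.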

In one embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of delivery of an application and/or data from a server to a client via one or more appliances, such as appliance 200 and/or appliance 205. In some embodiments, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of delivery of a virtualized application. In other embodiments, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of delivery of a streaming application. In another embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of delivery of a desktop application to a client and/or the execution of the desktop application on the client. In another embodiment, the monitoring service 198 and/or monitoring agent 197 monitors and measures performance of a client/server application.

In one embodiment, the monitoring service 198 and/or monitoring agent 197 is designed and constructed to provide application performance management for the application delivery system 190. For example, the monitoring service 198 and/or monitoring agent 197 may monitor, measure and manage the performance of the delivery of applications via the Citrix Presentation Server. In this example, the monitoring service 198 and/or monitoring agent 197 monitors individual ICA sessions. The monitoring service 198 and/or monitoring agent 197 may measure the total and per-session system resource usage, as well as application and networking performance. The monitoring service 198 and/or monitoring agent 197 may identify the active servers for a given user and/or user session. In some embodiments, the monitoring service 198 and/or monitoring agent 197 monitors back-end connections between the application delivery system 190 and an application and/or database server. The monitoring service 198 and/or monitoring agent 197 may measure network latency, delay and volume per user-session or ICA session.

In some embodiments, the monitoring service 198 and/or monitoring agent 197 measures and monitors memory usage for the application delivery system 190, such as total memory usage, per user session and/or per process. In other embodiments, the monitoring service 198 and/or monitoring agent 197 measures and monitors CPU usage of the application delivery system 190, such as total CPU usage, per user session and/or per process. In another embodiment, the monitoring service 198 and/or monitoring agent 197 measures and monitors the time required to log in to an application, a server, or the application delivery system, such as Citrix Presentation Server. In one embodiment, the monitoring service 198 and/or monitoring agent 197 measures and monitors the duration a user is logged in to an application, a server, or the application delivery system 190. In some embodiments, the monitoring service 198 and/or monitoring agent 197 measures and monitors active and inactive session counts for an application, server or application delivery system session. In yet another embodiment, the monitoring service 198 and/or monitoring agent 197 measures and monitors user session latency.

In yet further embodiments, the monitoring service 198 and/or monitoring agent 197 measures and monitors any type and form of server metrics. In one embodiment, the monitoring service 198 and/or monitoring agent 197 measures and monitors metrics related to system memory, CPU usage, and disk storage. In another embodiment, the monitoring service 198 and/or monitoring agent 197 measures and monitors metrics related to page faults, such as page faults per second. In other embodiments, the monitoring service 198 and/or monitoring agent 197 measures and monitors round-trip time metrics. In yet another embodiment, the monitoring service 198 and/or monitoring agent 197 measures and monitors metrics related to application crashes, errors and/or hangs.
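Rate metrics such as "page faults per second" are typically derived from cumulative counters sampled over time, rather than read directly. A small illustrative helper (not from the patent) showing that derivation:

```python
def rate_per_second(samples):
    """Given (timestamp, counter) pairs for a monotonically increasing
    metric (e.g. cumulative page faults), return the per-second rate over
    each consecutive pair of samples."""
    rates = []
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        rates.append((c1 - c0) / (t1 - t0))  # delta counter / delta time
    return rates
```

The same transformation applies to any cumulative server counter (bytes transmitted, requests served), which is why monitoring agents usually ship raw counters and let the service compute rates.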

In some embodiments, the monitoring service 198 and/or monitoring agent 197 includes any portion of the product embodiments referred to as EdgeSight manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Fla. In another embodiment, the performance monitoring service 198 and/or monitoring agent 197 includes any portion of the product embodiments referred to as the TrueView product suite manufactured by the Symphoniq Corporation of Palo Alto, Calif. In one embodiment, the performance monitoring service 198 and/or monitoring agent 197 includes any portion of the product embodiments referred to as the TeaLeaf CX product suite manufactured by TeaLeaf Technology Inc. of San Francisco, Calif. In other embodiments, the performance monitoring service 198 and/or monitoring agent 197 includes any portion of the business service management products, such as the BMC Performance Manager and Patrol products, manufactured by BMC Software, Inc. of Houston, Tex.

The client 102, server 106, and appliance 200 may be deployed as and/or executed on any type and form of computing device, such as a computer or network device capable of communicating on any type and form of network and performing the operations described herein. FIGS. 1E and 1F depict block diagrams of a computing device 100 useful for practicing an embodiment of the client 102, server 106 or appliance 200. As shown in FIGS. 1E and 1F, each computing device 100 includes a central processing unit 101 and a main memory unit 122. As shown in FIG. 1E, a computing device 100 may include a visual display device 124, a keyboard 126 and/or a pointing device 127. Each computing device 100 may also include additional optional elements, such as one or more input/output devices 130a-130b (generally referred to using reference numeral 130), and a cache memory 140 in communication with the central processing unit 101.

The central processing unit 101 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 122. In many embodiments, the central processing unit is provided by a microprocessor unit, such as those manufactured by Intel Corporation of Mountain View, Calif.; those manufactured by Motorola Corporation of Schaumburg, Ill.; those manufactured by Transmeta Corporation of Santa Clara, Calif.; the RS/6000 processor manufactured by International Business Machines of White Plains, N.Y.; or those manufactured by Advanced Micro Devices of Sunnyvale, Calif. The computing device 100 may be based on any of these processors, or any other processor capable of operating as described herein.

Main memory unit 122 may be one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the microprocessor 101. The main memory 122 may be based on any available memory chips capable of operating as described herein. In the embodiment shown in FIG. 1E, the processor 101 communicates with main memory 122 via a system bus 150 (described in more detail below). FIG. 1F depicts an embodiment of a computing device 100 in which the processor communicates directly with main memory 122 via a memory port. For example, in FIG. 1F the main memory 122 may be DRDRAM.

FIG. 1F depicts an embodiment in which the main processor 101 communicates directly with cache memory 140 via a secondary bus, sometimes referred to as a backside bus. In other embodiments, the main processor 101 communicates with cache memory 140 using the system bus 150. Cache memory 140 typically has a faster response time than main memory 122 and is typically provided by SRAM or BSRAM. In the embodiment shown in FIG. 1F, the processor 101 communicates with various I/O devices 130 via a local system bus 150. Various busses may be used to connect the central processing unit 101 to any of the I/O devices 130, including a VESA VL bus, an ISA bus, an EISA bus, a MicroChannel Architecture (MCA) bus, a PCI bus, a PCI-X bus, or a PCI-Express bus. For embodiments in which the I/O device is a video display 124, the processor 101 may use an Advanced Graphics Port (AGP) to communicate with the display 124. FIG. 1F depicts an embodiment of a computer 100 in which the main processor 101 communicates directly with I/O device 130b via HyperTransport or Rapid I/O. FIG. 1F also depicts an embodiment in which local busses and direct communication are mixed: the processor 101 communicates with I/O device 130a using a local system bus 150 while communicating with I/O device 130b directly.

The computing device 100 may also include a network interface 118 to interface to a Local Area Network (LAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., 802.11, T1, 56 kb, or X.25), broadband connections (e.g., ISDN, Frame Relay, ATM), or some combination of any or all of the above. The network interface 118 may comprise a built-in network adapter, network interface card, card bus network adapter, wireless network adapter, USB network adapter, or modem suitable for interfacing the computing device 100 to any type of network capable of communication and performing the operations described herein.

A wide variety of I/O devices 130a-130n may be present in the computing device 100. Input devices include keyboards, mice and trackpads. Output devices include video displays, speakers, and dye-sublimation printers. The I/O devices 130 may be controlled by an I/O controller 123 as shown in FIG. 1E. In some embodiments, an I/O device may also provide storage 128 and/or an installation medium 116 for the computing device 100. In still other embodiments, the computing device 100 may provide USB connections to receive handheld USB storage devices such as the USB Flash Drive line of devices manufactured by Twintech Industry, Inc. of Los Alamitos, Calif.

In some embodiments, the computing device 100 may comprise or be connected to multiple display devices 124a-124n, each of which may be of the same or different type and/or form. Any of the I/O devices 130a-130n and/or the I/O controller 123 may comprise any type and/or form of suitable hardware, software, or combination of hardware and software to support, enable or provide for the connection and use of multiple display devices 124a-124n by the computing device 100. For example, the computing device 100 may include any type and/or form of video adapter, video card, driver, and/or software to interface, communicate, connect or otherwise use the multiple display devices. These embodiments may include any type of software designed and constructed to use the display device of another computing device as a display device 124a for the computing device 100. One ordinarily skilled in the art will recognize and appreciate the various ways and embodiments by which a computing device 100 may be configured to have multiple display devices 124a-124n.

In further embodiments, an I/O device 130 may be a bridge 170 between the system bus 150 and an external communication bus, such as an Apple Desktop Bus, an RS-232 serial connection, a FireWire 800 bus, or an Ethernet bus.

In other embodiments, the computing device 100 may have different processors, operating systems, and input devices consistent with the device. For example, in one embodiment the computing device 100 is a Treo 180, 270, 1060, 600 or 650 smart phone manufactured by Palm, Inc. In this embodiment, the Treo smart phone is operated under the control of the PalmOS operating system and includes a stylus input device as well as a five-way navigator device. Moreover, the computing device 100 can be any computing or telecommunications device that is capable of communication and that has sufficient processor power and memory capacity to perform the operations described herein.

As shown in FIG. 1G, the computing device 100 may comprise a parallel processor with one or more cores. In one of these embodiments, the computing device 100 is a shared memory parallel device, with multiple processors and/or multiple processor cores, accessing all available memory as a single global address space. In another of these embodiments, the computing device 100 is a distributed memory parallel device with multiple processors each accessing local memory only. In still another of these embodiments, the computing device 100 has both some memory which is shared and some memory which can only be accessed by particular processors or subsets of processors. In yet another of these embodiments, the computing device 100 is a multi-core microprocessor, which combines two or more independent processors into a single package, often a single integrated circuit (IC). In a further embodiment, the computing device 100 includes a chip having a CELL BROADBAND ENGINE architecture, including a Power processor element and a plurality of synergistic processing elements, the Power processor element and the plurality of synergistic processing elements being linked together by an internal high-speed bus, which may be referred to as an element interconnect bus.

In some embodiments, the processors provide functionality for execution of a single instruction simultaneously on multiple pieces of data (SIMD). In other embodiments, the processors provide functionality for execution of multiple instructions simultaneously on multiple pieces of data (MIMD). In still other embodiments, the processor may use any combination of SIMD and MIMD cores in a single device.

In some embodiments, the computing device 100 may comprise a graphics processing unit. In one of these embodiments, shown in FIG. 1H, the computing device 100 includes at least one central processing unit 101 and at least one graphics processing unit. In another of these embodiments, the computing device 100 includes at least one parallel processing unit and at least one graphics processing unit. In still another of these embodiments, the computing device 100 includes a plurality of processing units of any type, one of the plurality of processing units comprising a graphics processing unit.

In some embodiments, a first computing device 100a executes an application on behalf of a user of a client computing device 100b. In other embodiments, a computing device 100 executes a virtual machine, which provides an execution session within which applications execute on behalf of a user or a client computing device 100b. In one of these embodiments, the execution session is a hosted desktop session. In another of these embodiments, the computing device 100 executes a terminal services session. The terminal services session may provide a hosted desktop environment. In still another of these embodiments, the execution session provides access to a computing environment, which may comprise one or more of: an application, a plurality of applications, a desktop application, and a desktop session in which one or more applications may execute.

B. Appliance Architecture

FIG. 2A illustrates an example embodiment of the appliance 200. The architecture of the appliance 200 in FIG. 2A is provided by way of illustration only and is not intended to be limiting. As shown in FIG. 2A, appliance 200 comprises a hardware layer 206 and a software layer divided into a user space 202 and a kernel space 204.

Hardware layer 206 provides the hardware elements upon which programs and services within kernel space 204 and user space 202 are executed. Hardware layer 206 also provides the structures and elements which allow programs and services within kernel space 204 and user space 202 to communicate data both internally and externally with respect to appliance 200. As shown in FIG. 2, the hardware layer 206 includes a processing unit 262 for executing software programs and services, a memory 264 for storing software and data, network ports 266 for transmitting and receiving data over a network, and an encryption processor 260 for performing functions related to Secure Sockets Layer processing of data transmitted and received over the network. In some embodiments, the central processing unit 262 may perform the functions of the encryption processor 260 in a single processor. Additionally, the hardware layer 206 may comprise multiple processors for each of the processing unit 262 and the encryption processor 260. The processor 262 may include any of the processors 101 described above in connection with FIGS. 1E and 1F. For example, in one embodiment, the appliance 200 comprises a first processor 262 and a second processor 262′. In other embodiments, the processor 262 or 262′ comprises a multi-core processor.

Although the hardware layer 206 of appliance 200 is illustrated with an encryption processor 260, the processor 260 may be a processor for performing functions related to any encryption protocol, such as the Secure Socket Layer (SSL) or Transport Layer Security (TLS) protocol. In some embodiments, the processor 260 may be a general purpose processor (GPP), and in further embodiments, may have executable instructions for performing processing of any security related protocol.

Although the hardware layer 206 of appliance 200 is illustrated with certain elements in FIG. 2, the hardware portions or components of appliance 200 may comprise any type and form of elements, hardware or software, of a computing device, such as the computing device 100 illustrated and discussed herein in conjunction with FIGS. 1E and 1F. In some embodiments, the appliance 200 may comprise a server, gateway, router, switch, bridge or other type of computing or network device, and have any hardware and/or software elements associated therewith.

The kernel space 204 is reserved for running the kernel 230, including any device drivers, kernel extensions or other kernel related software. As known to those skilled in the art, the kernel 230 is the core of the operating system, and provides access, control, and management of resources and hardware-related elements of the appliance 200. In accordance with an embodiment of the appliance 200, the kernel space 204 also includes a number of network services or processes working in conjunction with a cache manager 232, the benefits of which are described in further detail herein. Additionally, the embodiment of the kernel 230 will depend on the embodiment of the operating system installed, configured, or otherwise used by the device 200.

In one embodiment, the device 200 comprises one network stack 267, such as a TCP/IP based stack, for communicating with the client 102 and/or the server 106. In one embodiment, the network stack 267 is used to communicate with a first network, such as network 108, and a second network 110. In some embodiments, the device 200 terminates a first transport layer connection, such as a TCP connection of a client 102, and establishes a second transport layer connection to a server 106 for use by the client 102. The first and second transport layer connections may be established via a single network stack 267. In other embodiments, the device 200 may comprise multiple network stacks, for example 267 and 267′, and the first transport layer connection may be established or terminated at one network stack 267, and the second transport layer connection at the second network stack 267′. For example, one network stack may be for receiving and transmitting network packets on a first network, and another network stack for receiving and transmitting network packets on a second network. In one embodiment, the network stack 267 comprises a buffer 243 for queuing one or more network packets for transmission by the appliance 200.
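The split-connection behavior described above — terminating the client's transport layer connection and opening a second connection toward the server — can be sketched as a minimal one-directional relay. A real appliance relays both directions concurrently, pools server-side connections, and applies the acceleration techniques discussed earlier; this sketch only shows the two-connection structure:

```python
import socket

def splice(client_sock, server_addr, bufsize=4096):
    """Terminate the client's transport connection at this process and
    relay its payload over a second, locally initiated connection to the
    server (client-to-server direction only, for brevity)."""
    server_sock = socket.create_connection(server_addr)  # second connection
    try:
        while True:
            data = client_sock.recv(bufsize)  # first connection ends here
            if not data:                       # client closed its side
                break
            server_sock.sendall(data)          # forward on second connection
    finally:
        server_sock.close()
```

Because the appliance owns both endpoints, it can inspect, buffer, or rewrite the payload between `recv` and `sendall`, which is what makes features like compression and caching possible at this point in the path.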

As shown in FIG. 2, the kernel space 204 includes the cache manager 232, a high-speed layer 2-7 integrated packet engine 240, an encryption engine 234, a policy engine 236 and multi-protocol compression logic 238. Running these components or processes 232, 240, 234, 236 and 238 in kernel space 204 or kernel mode instead of the user space 202 improves the performance of each of these components, alone and in combination. Kernel operation means that these components or processes 232, 240, 234, 236 and 238 run in the core address space of the operating system of the device 200. For example, running the encryption engine 234 in kernel mode improves encryption performance by moving encryption and decryption operations to the kernel, thereby reducing the number of transitions between the memory space or a kernel thread in kernel mode and the memory space or a thread in user mode. For example, data obtained in kernel mode may not need to be passed or copied to a process or thread running in user mode, such as from a kernel-level data structure to a user-level data structure. In another aspect, the number of context switches between kernel mode and user mode is also reduced. Additionally, synchronization of and communications between any of the components or processes 232, 240, 234, 236 and 238 can be performed more efficiently in the kernel space 204.

In some embodiments, any portion of the components 232, 240, 234, 236 and 238 may run or operate in the kernel space 204, while other portions of these components may run or operate in user space 202. In one embodiment, the appliance 200 uses a kernel-level data structure providing access to any portion of one or more network packets, for example, a network packet comprising a request from a client 102 or a response from a server 106. In some embodiments, the kernel-level data structure may be obtained by the packet engine 240 via a transport layer driver interface or filter to the network stack 267. The kernel-level data structure may comprise any interface and/or data accessible via the kernel space 204 related to the network stack 267, or network traffic or packets received or transmitted by the network stack 267. In other embodiments, the kernel-level data structure may be used by any of the components or processes 232, 240, 234, 236 and 238 to perform the desired operation of the component or process. In one embodiment, a component 232, 240, 234, 236 or 238 is running in kernel mode 204 when using the kernel-level data structure, while in another embodiment, the component 232, 240, 234, 236 or 238 is running in user mode when using the kernel-level data structure. In some embodiments, the kernel-level data structure may be copied or passed to a second kernel-level data structure, or any desired user-level data structure.

The cache manager 232 may comprise software, hardware or any combination of software and hardware to provide cache access, control and management of any type and form of content, such as objects or dynamically generated objects served by the originating servers 106. The data, objects or content processed and stored by the cache manager 232 may comprise data in any format, such as a markup language, or communicated via any protocol. In some embodiments, the cache manager 232 duplicates original data stored elsewhere or data previously computed, generated or transmitted, in which the original data may require longer access time to fetch, compute or otherwise obtain relative to reading a cache memory element. Once the data is stored in the cache memory element, future use can be made by accessing the cached copy rather than refetching or recomputing the original data, thereby reducing the access time. In some embodiments, the cache memory element may comprise a data object in memory 264 of device 200. In other embodiments, the cache memory element may comprise memory having a faster access time than memory 264. In another embodiment, the cache memory element may comprise any type and form of storage element of the device 200, such as a portion of a hard disk. In some embodiments, the processing unit 262 may provide cache memory for use by the cache manager 232. In yet further embodiments, the cache manager 232 may use any portion and combination of memory, storage, or the processing unit for caching data, objects, and other content.

“Furthermore, the cache manager 232 includes any logic, functions, rules, or operations to perform any embodiments of the techniques of the appliance 200 described herein. For example, the cache manager 232 includes logic or functionality to invalidate objects based on the expiration of an invalidation time period, or upon receipt of an invalidation command from a client 102 or server 106. In some embodiments, the cache manager 232 may operate as a program, service, process or task executing in the kernel space 204, and in other embodiments, in the user space 202. In one embodiment, a first portion of the cache manager 232 executes in the user space 202 while a second portion executes in the kernel space 204. In some embodiments, the cache manager 232 can comprise any type of general purpose processor or any other type of integrated circuit, such as a Field Programmable Gate Array (FPGA), Programmable Logic Device (PLD), or Application Specific Integrated Circuit (ASIC).”
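Both invalidation paths named above, expiry of an invalidation time period and an explicit invalidation command, can be sketched in a few lines. This is an illustrative model only; the injectable `clock` and all names are assumptions made so that expiry is deterministic.

```python
import time

class InvalidatingCache:
    """Entries carry an expiry time; entries can also be invalidated
    explicitly, e.g. on a command from a client or server."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock
        self._store = {}   # key -> (value, expires_at)

    def put(self, key, value):
        self._store[key] = (value, self._clock() + self._ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self._clock() >= expires_at:   # invalidation period elapsed
            del self._store[key]
            return None
        return value

    def invalidate(self, key):
        """Explicit invalidation command."""
        self._store.pop(key, None)

# A fake clock makes the TTL behavior observable without sleeping.
now = [0.0]
cache = InvalidatingCache(ttl_seconds=10, clock=lambda: now[0])
cache.put("obj", "v1")
fresh = cache.get("obj")        # within the TTL: served
now[0] = 11.0                   # advance past the 10 s TTL
expired = cache.get("obj")      # entry has expired: gone
cache.put("obj2", "v2")
cache.invalidate("obj2")        # explicit invalidation command
invalidated = cache.get("obj2")
```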

“The policy engine 236 may include, for example, an intelligent statistical engine or other programmable application(s). In one embodiment, the policy engine 236 provides a configuration mechanism to allow a user to identify, specify, define or configure a caching policy. In some embodiments, the policy engine 236 also has access to memory to support data structures such as lookup tables or hash tables to enable user-selected caching policy decisions. In other embodiments, the policy engine 236 may comprise any logic, rules, functions or operations to determine and provide access, control and management of objects, data or content being cached by the appliance 200, in addition to access, control and management of security, network traffic, network access, compression, or any other function or operation performed by the appliance 200. Further examples of specific caching policies are described herein.”
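One way to picture a rule-based policy engine backed by a lookup structure is an ordered list of (condition, action) pairs evaluated first-match-wins. The rules, actions, and request fields below are invented for illustration; the patent does not specify this representation.

```python
class PolicyEngine:
    """Ordered list of (condition, action) rules; first match wins.
    Conditions inspect a request dict (protocol, url, port, ...)."""

    def __init__(self):
        self._rules = []

    def add_rule(self, condition, action):
        self._rules.append((condition, action))

    def evaluate(self, request):
        for condition, action in self._rules:
            if condition(request):
                return action
        return "NO_CACHE"   # default when no rule matches

engine = PolicyEngine()
# Hypothetical rules: cache static images fetched over HTTP,
# bypass the cache for traffic on port 443.
engine.add_rule(lambda r: r["protocol"] == "http" and r["url"].endswith(".png"),
                "CACHE")
engine.add_rule(lambda r: r["port"] == 443, "BYPASS")
```

A real engine would hash or index the conditions rather than scan them linearly, but the decision model is the same.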

“The encryption engine 234 comprises any logic, business rules, functions or operations for handling the processing of any security-related protocol, such as SSL or TLS, or any function related thereto. For example, the encryption engine 234 encrypts and decrypts network packets, or any portion thereof, communicated via the appliance 200. The encryption engine 234 may also set up or establish SSL or TLS connections on behalf of the clients 102a-102n, servers 106a-106n, or the appliance 200. As such, the encryption engine 234 provides offloading and acceleration of SSL processing. In one embodiment, the encryption engine 234 uses a tunneling protocol to provide a virtual private network between a client 102a-102n and a server 106a-106n. In some embodiments, the encryption engine 234 comprises executable instructions running on an encryption processor 260.”

“The multi-protocol compression engine 238 comprises any logic, business rules, functions or operations for compressing one or more protocols of a network packet, such as any of the protocols used by the network stack 267 of the appliance 200. In one embodiment, the multi-protocol compression engine 238 compresses bi-directionally between clients 102a-102n and servers 106a-106n any TCP/IP based protocol, including File Transfer Protocol (FTP), HyperText Transfer Protocol (HTTP), Common Internet File System (CIFS) protocol (file transfer), Independent Computing Architecture (ICA) protocol, Remote Desktop Protocol (RDP), Wireless Application Protocol (WAP), Mobile IP protocol, and Voice Over IP (VoIP) protocol. In other embodiments, the multi-protocol compression engine 238 provides compression of Hypertext Markup Language (HTML) based protocols and, in some embodiments, of any markup language, such as the Extensible Markup Language (XML). In one embodiment, the multi-protocol compression engine 238 provides compression of any high-performance protocol, such as any protocol designed for appliance-to-appliance communications. In another embodiment, the multi-protocol compression engine 238 compresses any payload of, or any communication using, a modified transport control protocol, such as Transaction TCP (T/TCP), TCP with selective acknowledgements (TCP-SACK), TCP with large windows (TCP-LW), a congestion prediction protocol such as the TCP-Vegas protocol, and a TCP spoofing protocol.”
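The essence of per-protocol compression is a dispatch step: text-heavy protocols are compressed, while traffic that is already compressed or encrypted is passed through. The sketch below uses `zlib` purely as a stand-in codec, and the protocol table is an invented example, not the engine's actual protocol list.

```python
import zlib

# Hypothetical per-protocol table: markup-heavy protocols compress well;
# encrypted or pre-compressed traffic gains nothing and is passed through.
COMPRESSIBLE = {"http", "html", "xml", "ftp"}

def compress_payload(protocol, payload):
    """Return (was_compressed, data). zlib substitutes here for
    whatever codec a real multi-protocol engine would select."""
    if protocol.lower() in COMPRESSIBLE:
        return True, zlib.compress(payload)
    return False, payload

# HTML payloads shrink; an SSL record is left untouched.
compressed, data = compress_payload("http", b"<html>" + b"a" * 1000 + b"</html>")
untouched, raw = compress_payload("ssl", b"\x16\x03\x01ciphertext")
```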

As such, the multi-protocol compression engine 238 accelerates performance for users accessing applications via desktop clients, e.g., Microsoft Outlook, and non-Web thin clients, such as any client launched by popular enterprise applications like Oracle, SAP and Siebel, and even mobile clients, such as the Pocket PC. In some embodiments, by executing in the kernel mode 204 and integrating with the packet processing engine 240 accessing the network stack 267, the multi-protocol compression engine 238 is able to compress any of the protocols carried by the TCP/IP protocol.

“The high speed layer 2-7 integrated packet engine 240, also generally referred to as a packet processing engine or packet engine, is responsible for managing the kernel-level processing of packets received and transmitted by the appliance 200 via the network ports 266. The high speed layer 2-7 integrated packet engine 240 may comprise a buffer for queuing one or more network packets during processing, such as for receipt of a network packet or transmission of a network packet. Additionally, the high speed layer 2-7 integrated packet engine 240 is in communication with one or more network stacks 267 to send and receive network packets via the network ports 266. The high speed layer 2-7 integrated packet engine 240 works in conjunction with the encryption engine 234, cache manager 232, policy engine 236 and multi-protocol compression logic 238. In particular, the encryption engine 234 is configured to perform SSL processing of packets, the policy engine 236 is configured to perform functions related to traffic management, such as request-level content switching and request-level cache redirection, and the multi-protocol compression logic 238 is configured to perform functions related to compression and decompression of data.”

“The high speed layer 2-7 integrated packet engine 240 includes a packet processing timer 242. In one embodiment, the packet processing timer 242 provides one or more time intervals to trigger the processing of incoming (i.e., received) or outgoing (i.e., transmitted) network packets. In some embodiments, the high speed layer 2-7 integrated packet engine 240 processes network packets responsive to the timer 242. The packet processing timer 242 provides any type and form of signal to the packet engine 240 to notify, trigger or communicate a time-related event, interval or occurrence. In many embodiments, the packet processing timer 242 operates in the order of milliseconds, such as 100 ms, 50 ms or 25 ms. For example, in some embodiments the packet processing timer 242 provides time intervals or otherwise causes a network packet to be processed by the packet engine 240 at a 10 ms time interval, while in other embodiments at a 5 ms time interval, and in still further embodiments at as short as a 3, 2 or 1 ms time interval. During operation, the high speed layer 2-7 integrated packet engine 240 may be interfaced, integrated or in communication with the encryption engine 234, cache manager 232, policy engine 236 and multi-protocol compression engine 238. As such, any of the logic, functions or operations of the encryption engine 234, cache manager 232, policy engine 236 and multi-protocol compression engine 238 may be performed responsive to the packet processing timer 242 and/or the packet engine 240. In another embodiment, the expiry or invalidation time of a cached object can be set to the same order of granularity as the time interval of the packet processing timer 242, for example, every 10 ms.”
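The timer-driven model above (packets are buffered on receipt, then drained and handed to the cooperating components on each timer tick) can be sketched as follows. The handler here is a trivial stand-in for the encryption, caching, policy, and compression work the real components would do, and every name is hypothetical.

```python
class PacketEngine:
    """Timer-driven processing: packets queue until the packet
    processing timer fires, then each registered handler runs."""

    def __init__(self, interval_ms=10):
        self.interval_ms = interval_ms
        self._queue = []        # buffer for packets awaiting processing
        self._handlers = []     # e.g. encryption, cache, policy, compression
        self.processed = []

    def register(self, handler):
        self._handlers.append(handler)

    def receive(self, packet):
        self._queue.append(packet)    # buffered until the next tick

    def tick(self):
        """Invoked once per timer interval (e.g. every 10 ms)."""
        while self._queue:
            packet = self._queue.pop(0)
            for handler in self._handlers:
                packet = handler(packet)
            self.processed.append(packet)

engine = PacketEngine(interval_ms=10)
engine.register(lambda p: p.upper())   # stand-in for real per-packet work
engine.receive("pkt-a")
engine.receive("pkt-b")
engine.tick()                          # the timer fires: queue is drained
```

Driving cache expiry off the same tick is what lets invalidation times be as granular as the timer interval.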

“In contrast to kernel space 204, user space 202 is the memory area or portion of the operating system used by user mode applications or programs otherwise running in user mode. A user mode application may not access kernel space 204 directly and instead uses service calls to access kernel services. As shown in FIG. 2, the user space 202 of the appliance 200 includes a graphical user interface (GUI) 210, a command line interface (CLI) 212, shell services 214, a health monitoring program 216, and daemon services 218. The GUI 210 and CLI 212 provide a means by which a system administrator or other user can interact with and control the operation of the appliance 200, such as via the operating system of the appliance 200. The GUI 210 or CLI 212 can comprise code running in user space 202 or kernel space 204. The GUI 210 may be any type and form of graphical user interface and may be presented via text, graphics or otherwise, by any type of program or application, such as a browser. The CLI 212 may be any type and form of command line or text-based interface, such as a command line provided by the operating system. For example, the CLI 212 may comprise a shell, which is a tool to enable users to interact with the operating system. In some embodiments, the CLI 212 may be provided via a bash, csh, tcsh or ksh type shell. The shell services 214 comprise the programs, services, tasks or executable instructions to support interaction with the appliance 200 by a user via the GUI 210 and/or CLI 212.”

“The health monitoring program 216 is used to monitor, check, report and ensure that network systems are functioning properly and that users are receiving requested content over a network. The health monitoring program 216 comprises one or more programs, services, tasks or processes that provide logic, rules, functions or operations for monitoring any activity of the appliance 200. In some embodiments, the health monitoring program 216 intercepts and inspects any network traffic passed via the appliance 200. In other embodiments, the health monitoring program 216 interfaces by any suitable means and/or mechanisms with one or more of the following: the encryption engine 234, cache manager 232, policy engine 236, multi-protocol compression logic 238, packet engine 240, daemon services 218, and shell services 214. As such, the health monitoring program 216 may call any application programming interface (API) to determine a state, status or health of any portion of the appliance 200. For example, the health monitoring program 216 may ping or send a status inquiry on a periodic basis to check if a program, process, service or task is active and currently running. In another example, the health monitoring program 216 may check any status, error or history logs provided by any program, process, service or task to determine any condition, status or error with any portion of the appliance 200.”
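The periodic ping/status-inquiry pattern described above reduces to probing each monitored component and marking a failed probe as "down". The probe function and service names below are illustrative assumptions, not actual appliance APIs.

```python
def check_services(services, probe):
    """Run a health check over each service and report which are up.
    `probe` stands in for sending a ping or status inquiry."""
    status = {}
    for name in services:
        try:
            status[name] = bool(probe(name))
        except Exception:
            status[name] = False   # a failed probe marks the service down
    return status

# Hypothetical probe: the packet engine answers, the cache does not.
def fake_probe(name):
    if name == "cache_manager":
        raise TimeoutError("no response")
    return True

report = check_services(["packet_engine", "cache_manager"], fake_probe)
```

A real monitor would run this on a schedule and also consult error and history logs, as the paragraph notes.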

“Daemon services 218 are programs that run continuously or in the background and handle periodic service requests received by the appliance 200. In some embodiments, a daemon service may forward the requests to other programs or processes, such as another daemon service 218, as appropriate. As known to those skilled in the art, a daemon service 218 may run unattended to perform continuous or periodic system-wide functions, such as network control, or to perform any desired task. In some embodiments, one or more daemon services 218 run in the user space 202, while in other embodiments, one or more daemon services 218 run in the kernel space 204.”

“Referring now to FIG. 2B, another embodiment of the appliance 200 is depicted. In brief overview, the appliance 200 provides one or more of the services, functionality or operations described herein. Each of the servers 106 may provide one or more network-related services 270a-270n (referred to as services 270). For example, a server 106 may provide an http service 270. The appliance 200 comprises one or more virtual servers or virtual internet protocol servers, referred to as a vServer, VIP server, or simply VIP 275a-275n (also referred to herein as vServer 275). The vServer 275 receives, intercepts or otherwise processes communications between a client 102 and a server 106 in accordance with the configuration and operations of the appliance 200.”

The vServer 275 may comprise software, hardware or any combination thereof. The vServer 275 may comprise any type and form of program, service, task, process or executable instructions operating in user mode 202, kernel mode 204 or any combination thereof in the appliance 200. The vServer 275 includes any logic, functions, rules or operations to perform any embodiments of the techniques described herein. In some embodiments, the vServer 275 establishes a connection to a service 270 of a server 106. The service 270 may comprise any program, application, process, task or set of executable instructions capable of connecting to and communicating with the appliance 200, client 102 or vServer 275. For example, the service 270 may comprise a web server, http server, ftp server, email server or database server. In some embodiments, the service 270 is a daemon process or network driver for listening, receiving and/or sending communications for an application, such as email, a database or an enterprise application. In some embodiments, the service 270 may communicate on a specific IP address, or IP address and port.

“In some embodiments, the vServer 275 applies one or more policies of the policy engine 236 to network communications between the client 102 and server 106. In one embodiment, the policies are associated with a vServer 275. In another embodiment, the policies are based on a user, or a group of users. In yet another embodiment, a policy is global and applies to one or more vServers 275a-275n, and any user or group of users communicating via the appliance 200. In some embodiments, the policies of the policy engine have conditions upon which the policy is applied based on any content of the communication, such as an internet protocol address, port or protocol type of the packet, or the context of the communication, such as the user, group of users, vServer 275, transport layer connection, and/or identification or attributes of the client 102 or server 106.”

“In other embodiments, the appliance 200 communicates or interfaces with the policy engine 236 to determine authentication and/or authorization of a remote user or a remote client 102 to access the computing environment 15, application, and/or data file from a server 106. In another embodiment, the appliance 200 communicates or interfaces with the policy engine 236 to determine authentication and/or authorization of a remote user or a remote client 102 to have the application delivery system 190 deliver one or more of the computing environment 15, application, and/or data file. In yet another embodiment, the appliance 200 establishes a VPN or SSL VPN connection based on the policy engine's 236 authentication and/or authorization of a remote user or a remote client 102. In one embodiment, the appliance 200 controls the flow of network traffic and communication sessions based on policies of the policy engine 236. For example, the appliance 200 may control access to a computing environment 15, application or data file based on the policy engine 236.”

“In some embodiments, the vServer 275 establishes a transport layer connection, such as a TCP or UDP connection, with a client 102 via the client agent 120. In one embodiment, the vServer 275 listens for and receives communications from the client 102. In other embodiments, the vServer 275 establishes a transport layer connection, such as a TCP or UDP connection, with a server 106. In one embodiment, the vServer 275 establishes the transport layer connection to an internet protocol address and port of a service 270 running on the server 106. In another embodiment, the vServer 275 associates a first transport layer connection to a client 102 with a second transport layer connection to the server 106. In some embodiments, a vServer 275 establishes a pool of transport layer connections to a server 106 and multiplexes client requests via the pooled transport layer connections.”
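The pooling idea in the last sentence, keep a small set of server-side connections alive and hand them out to successive client requests instead of opening a fresh one each time, can be sketched abstractly. Real connections are replaced here by opaque objects; all names are illustrative.

```python
class ConnectionPool:
    """Reuse a bounded set of server-side connections across many
    client requests instead of opening one connection per request."""

    def __init__(self, open_connection, size=2):
        self._open = open_connection   # stand-in for a TCP connect
        self._size = size              # max idle connections kept
        self._idle = []
        self.opened = 0                # how many real connects happened

    def acquire(self):
        if self._idle:
            return self._idle.pop()    # reuse an idle pooled connection
        self.opened += 1
        return self._open()

    def release(self, conn):
        if len(self._idle) < self._size:
            self._idle.append(conn)    # keep it for the next client request

pool = ConnectionPool(open_connection=lambda: object(), size=2)
c1 = pool.acquire()   # first request: opens a real connection
pool.release(c1)
c2 = pool.acquire()   # second request: reuses c1, no new connect
```

The server sees one long-lived connection instead of one connect/teardown per client request, which is exactly the load the appliance offloads.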

“In some embodiments, the appliance 200 provides an SSL VPN connection 280 between a client 102 and a server 106. For example, a client 102 on a first network 104 requests to establish a connection to a server 106 on a second network 104'. In some embodiments, the second network 104' is not routable from the first network 104. In other embodiments, the client 102 is on a public network 104 and the server 106 is on a private network 104', such as a corporate network. In one embodiment, the client agent 120 intercepts communications of the client 102 on the first network 104, encrypts the communications, and transmits the communications via a first transport layer connection to the appliance 200. The appliance 200 associates the first transport layer connection on the first network 104 with a second transport layer connection to the server 106 on the second network 104'. The appliance 200 receives the intercepted communication from the client agent 120, decrypts the communication, and transmits it via the second transport layer connection to the server 106 on the second network 104'. The second transport layer connection may be a pooled transport layer connection. As such, the appliance 200 provides an end-to-end secure transport layer connection for the client 102 between the two networks 104, 104'.”

“In one embodiment, the appliance 200 hosts an intranet internet protocol address, or IntranetIP 282, of the client 102 on the second network 104'. The client 102 has a local network identifier, such as an internet protocol (IP) address and/or host name, on the first network 104. When connected to the second network 104' via the appliance 200, the appliance 200 establishes, assigns or otherwise provides an IntranetIP address 282 for the client 102 on the second network 104'. The appliance 200 listens for and receives on the second or private network 104' any communications directed towards the client 102 using the client's established IntranetIP 282. In one embodiment, the appliance 200 acts as or on behalf of the client 102 on the second private network 104'. For example, in another embodiment, a vServer 275 listens for and responds to communications to the IntranetIP 282 of the client 102. In some embodiments, if a computing device 100 on the second network 104' transmits a request, the appliance 200 processes the request as if it were the client 102. For example, the appliance 200 may respond to a ping to the client's IntranetIP 282. In another example, the appliance may establish a connection, such as a TCP or UDP connection, with a computing device 100 on the second network 104' requesting a connection to the client's IntranetIP 282.”

“In some embodiments, the appliance 200 provides one or more of the following acceleration techniques 288 to communications between the client 102 and server 106: 1) compression; 2) decompression; 3) Transmission Control Protocol pooling; 4) Transmission Control Protocol multiplexing; 5) Transmission Control Protocol buffering; and 6) caching. In one embodiment, the appliance 200 relieves servers 106 of much of the processing load caused by repeatedly opening and closing transport layer connections to clients 102 by opening one or more transport layer connections with each server 106 and maintaining these connections to allow repeated data accesses by clients via the Internet. This technique is referred to herein as “connection pooling”.”

“In some embodiments, in order to seamlessly splice communications from a client 102 to a server 106 via a pooled transport layer connection, the appliance 200 translates or multiplexes communications by modifying sequence numbers and acknowledgment numbers at the transport layer protocol level. This is referred to as “connection multiplexing”. In some embodiments, no application layer protocol interaction is required. For example, in the case of an in-bound packet (that is, a packet received from a client 102), the source network address of the packet is changed to that of an output port of the appliance 200, and the destination network address is changed to that of the intended server. In the case of an out-bound packet (that is, one received from a server 106), the source network address is changed from that of the server 106 to that of an output port of the appliance 200, and the destination address is changed from that of the appliance 200 to that of the requesting client 102. The sequence numbers and acknowledgment numbers of the packet are also translated to the sequence numbers and acknowledgement numbers expected by the client 102 on the appliance's transport layer connection to the client 102. In some embodiments, the packet checksum of the transport layer protocol is recalculated to account for these translations.”
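The address and sequence-number rewriting described above can be modeled on a toy packet record. This is a deliberately simplified sketch: the fixed `seq_offset`, the string addresses, and the omission of acknowledgment-number and checksum handling are all assumptions made to keep the translation step visible.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src: str
    dst: str
    seq: int
    ack: int

APPLIANCE = "appliance:200"

def rewrite_inbound(pkt, server, seq_offset):
    """Client -> server: source becomes an appliance port, destination
    the chosen server; the sequence number is shifted onto the pooled
    server-side connection (a fixed offset stands in for the real map)."""
    return replace(pkt, src=APPLIANCE, dst=server, seq=pkt.seq + seq_offset)

def rewrite_outbound(pkt, client, seq_offset):
    """Server -> client: source becomes the appliance, destination the
    client; numbers are translated back to what the client expects."""
    return replace(pkt, src=APPLIANCE, dst=client, seq=pkt.seq - seq_offset)

inbound = rewrite_inbound(Packet("client:102", APPLIANCE, seq=1000, ack=1),
                          server="server:106", seq_offset=500)
outbound = rewrite_outbound(Packet("server:106", APPLIANCE, seq=1500, ack=1),
                            client="client:102", seq_offset=500)
```

After both rewrites the client still sees the sequence space of its own connection, which is what makes the splice transparent at the application layer.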

“In another embodiment, the appliance 200 provides switching or load-balancing functionality 284 for communications between the client 102 and server 106. In some embodiments, the appliance 200 distributes traffic and directs client requests to a server 106 based on layer 4 or application-layer request data. In one embodiment, although the network layer or layer 2 of the network packet identifies a destination server 106, the appliance 200 determines the server 106 to distribute the network packet to by application information and data carried as payload of the transport layer packet. In one embodiment, the health monitoring programs 216 of the appliance 200 monitor the health of servers to determine the server 106 to distribute a client request to. In some embodiments, if the appliance 200 detects that a server 106 is not available or has a load over a predetermined threshold, the appliance 200 can direct or distribute client requests to another server 106.”

“In some embodiments, the appliance 200 acts as a Domain Name Service (DNS) resolver or otherwise provides resolution of DNS requests from clients 102. In some embodiments, the appliance intercepts a DNS request transmitted by the client 102. In one embodiment, the appliance 200 responds to a client's DNS request with an IP address of, or hosted by, the appliance 200. In this embodiment, the client 102 transmits network communications for the domain name to the appliance 200. In another embodiment, the appliance 200 responds to a client's DNS request with an IP address of, or hosted by, another appliance. In some embodiments, the appliance 200 responds to a client's DNS request with an IP address of a server 106.”
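Answering a DNS query with the appliance's own address is what steers subsequent client traffic through the appliance. A minimal sketch of that intercept-and-answer decision follows; every domain name and IP address below is invented for illustration.

```python
class DnsInterceptor:
    """Answer DNS queries at the appliance: for domains it fronts,
    return the appliance's own address so client traffic flows through
    it; otherwise return the recorded server address."""

    def __init__(self, appliance_ip, server_records):
        self.appliance_ip = appliance_ip
        self.records = server_records       # domain -> server IP
        self.fronted = set()                # domains answered with our IP

    def front(self, domain):
        self.fronted.add(domain)

    def resolve(self, domain):
        if domain in self.fronted:
            return self.appliance_ip        # client will send traffic here
        return self.records.get(domain)     # None for unknown domains

# Hypothetical addresses and names.
dns = DnsInterceptor("10.0.0.200", {"app.example.com": "10.0.1.6",
                                    "other.example.com": "10.0.1.7"})
dns.front("app.example.com")
```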
