Working with Networks 1.8

Advanced Routing Configuration

 

You can configure the static and dynamic routing capabilities that are provided by the Networking software for your Networking Data Center for VMware vSphere edge gateways.

 

To enable dynamic routing, you configure an advanced edge gateway using the Border Gateway Protocol (BGP) or the Open Shortest Path First (OSPF) protocol.

 

For detailed information about the routing capabilities that Networking provides, see Routing in the Networking Administration documentation.

You can specify static and dynamic routing for each advanced edge gateway. The dynamic routing capability provides the necessary forwarding information between Layer 2 broadcast domains, which allows you to decrease Layer 2 broadcast domains and improve network efficiency and scale. Networking extends this intelligence to the locations of the workloads for East-West routing. This capability allows more direct virtual machine to virtual machine communication without the added cost or time needed to extend hops.

 

Specify Default Routing Configurations for the Networking Data Center for VMware vSphere Edge Gateway

You can specify the default settings for static routing and dynamic routing for an edge gateway.

Note: To remove all configured routing settings, use the CLEAR GLOBAL CONFIGURATION button at the bottom of the Routing Configuration screen. This action deletes all routing settings currently specified on the subscreens: default routing settings, static routes, OSPF, BGP, and route redistribution.

Procedure

1      Open Edge Gateway Services.

a      In the top navigation bar, click Networking and click Edge Gateways.

b     Select the edge gateway that you want to edit and click Services.

2      Navigate to Routing > Routing Configuration. 

3      To enable Equal Cost Multipath (ECMP) routing for this edge gateway, turn on the ECMP

toggle.

 

As described in the Networking Administration documentation, ECMP is a routing strategy that allows next-hop packet forwarding to a single destination to occur over multiple best paths. Networking determines these best paths either statically, using configured static routes, or as a result of metric calculations by dynamic routing protocols like OSPF or BGP. You can specify the multiple paths for static routes by specifying multiple next hops on the Static Routes screen.
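To make the "multiple best paths" idea concrete, the sketch below shows one common ECMP strategy: hashing the flow 5-tuple to pick a next hop, so all packets of a single flow follow the same path. This is illustrative only; the source does not specify the exact selection method Networking uses, and the addresses are hypothetical.

```python
import hashlib

def select_next_hop(next_hops, src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Pick one of several equal-cost next hops by hashing the flow 5-tuple.
    A deterministic hash keeps every packet of a flow on the same path."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return next_hops[digest % len(next_hops)]

# Three equal-cost next hops configured for the same destination network.
hops = ["192.168.100.1", "192.168.100.2", "192.168.100.3"]
a = select_next_hop(hops, "10.0.0.5", "203.0.113.9", 49152, 443)
b = select_next_hop(hops, "10.0.0.5", "203.0.113.9", 49152, 443)
assert a == b  # same flow, same path
```

Different flows hash to different next hops, which is what spreads load across the equal-cost paths.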

For more details about ECMP and Networking, see the routing topics in the Networking Troubleshooting Guide.

4      Specify settings for the default routing gateway.

 

a      Use the Applied On drop-down list to select an interface from which the next hop towards the destination network can be reached. To see details about the selected interface, click the blue information icon.

b     Type the gateway IP address.

c     Type the MTU.

d     (Optional) Type a description.

e     Click Save changes.

5      Specify default dynamic routing settings.

 

Note: If you have IPsec VPN configured in your environment, do not use dynamic routing.

a       Select a router ID.

You can select a router ID in the list or use the + icon to enter a new one. This router ID is the first uplink IP address of the edge gateway that pushes routes to the kernel for dynamic routing.

b       Configure logging by turning on the Enable Logging toggle and selecting the log level.

c     Click OK.

6      Click Save changes.

What to do next

Add static routes. See Add a Static Route.

 

Configure route redistribution. See Configure Route Redistributions.

Configure dynamic routing. See the following topics:

■        Configure BGP

■        Configure OSPF

Add a Static Route

You can add a static route for a destination subnet or host.

If ECMP is enabled in the default routing configuration, you can specify multiple next hops in the static routes. See Specify Default Routing Configurations for the Networking Data Center for VMware vSphere Edge Gateway for steps on enabling ECMP.

 

Prerequisites

As described in the Networking documentation, the next hop IP address of the static route must exist in a subnet associated with one of the Networking Data Center for VMware vSphere edge gateway interfaces. Otherwise, configuration of that static route fails.
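This prerequisite is easy to check ahead of time. The sketch below, using Python's standard `ipaddress` module with hypothetical subnets, tests whether a candidate next hop falls inside a subnet attached to one of the edge gateway interfaces:

```python
import ipaddress

def next_hop_is_reachable(next_hop, interface_subnets):
    """Return True if the next hop lies inside a subnet associated with one
    of the edge gateway interfaces -- the precondition for a static route."""
    hop = ipaddress.ip_address(next_hop)
    return any(hop in ipaddress.ip_network(subnet) for subnet in interface_subnets)

# Hypothetical subnets attached to the edge gateway interfaces.
subnets = ["192.168.10.0/24", "10.20.0.0/16"]
print(next_hop_is_reachable("192.168.10.254", subnets))  # True: route is valid
print(next_hop_is_reachable("172.16.0.1", subnets))      # False: route would fail
```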

Procedure

1      Open Edge Gateway Services.

a      In the top navigation bar, click Networking and click Edge Gateways.

b     Select the edge gateway that you want to edit and click Services.

2      Navigate to Routing > Static Routes.

3      Click the Create ( ) button.

4      Configure the following options for the static route:

 

Option           Description

Network          Type the network in CIDR notation.

Next Hop         Type the IP address of the next hop. The next hop IP address must exist in a subnet associated with one of the edge gateway interfaces. If ECMP is enabled, you can type multiple next hops.

MTU              Edit the maximum transmission unit (MTU) value for data packets. The MTU value cannot be higher than the MTU value set on the selected edge gateway interface. You can see the MTU set on the edge gateway interface on the Routing Configuration screen.

Interface        (Optional) Select the edge gateway interface on which you want to add the static route. By default, the interface that matches the next hop address is selected.

Description      (Optional) Type a description for the static route.

 

5      Click Save changes.

What to do next

Configure a NAT rule for the static route. See Add a SNAT or a DNAT Rule.

Add a firewall rule to allow traffic to traverse the static route. See Add a Networking Data Center for VMware vSphere Edge Gateway Firewall Rule.

Configure OSPF

You can configure the Open Shortest Path First (OSPF) routing protocol for the dynamic routing capabilities of a Networking Data Center for VMware vSphere edge gateway. A common application of OSPF on an edge gateway in a Cyfuture Cloud Console environment is to exchange routing information between edge gateways in Cyfuture Cloud Console.

The Networking edge gateway supports OSPF, an interior gateway protocol that routes IP packets only within a single routing domain. As described in the Networking Administration documentation, configuring OSPF on a Networking edge gateway enables the edge gateway to learn and advertise routes. The edge gateway uses OSPF to gather link state information from available edge gateways and construct a topology map of the network. The topology determines the routing table presented to the Internet layer, which makes routing decisions based on the destination IP address found in IP packets.

As a result, OSPF routing policies provide a dynamic process of traffic load balancing between routes of equal cost. An OSPF network is divided into routing areas to optimize traffic flow and limit the size of routing tables. An area is a logical collection of OSPF networks, routers, and links that have the same area identification. Areas are identified by an Area ID.
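The "topology map to routing table" step described above is a shortest-path computation over link costs. The following is an illustrative Dijkstra sketch over a hypothetical three-gateway topology, not an implementation of the product's OSPF engine:

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's algorithm over a link-state graph {node: {neighbor: cost}},
    the kind of computation an OSPF router runs on its topology map."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Hypothetical topology: the direct edge1->edge2 link is expensive.
topology = {
    "edge1": {"edge2": 10, "edge3": 1},
    "edge2": {"edge1": 10, "edge3": 1},
    "edge3": {"edge1": 1, "edge2": 1},
}
print(shortest_paths(topology, "edge1"))  # edge2 is reached via edge3 at cost 2
```

Note how equal-cost alternatives produced by this computation are exactly what ECMP (described earlier) spreads traffic across.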

Prerequisites

A router ID must be configured. See Specify Default Routing Configurations for the Networking Data Center for VMware vSphere Edge Gateway.

Procedure

1      Open Edge Gateway Services.

a      In the top navigation bar, click Networking and click Edge Gateways.

b     Select the edge gateway that you want to edit and click Services.

2      Navigate to Routing > OSPF.

3      If OSPF is not currently enabled, use the OSPF Enabled toggle to enable it.

4      Configure the OSPF settings according to the needs of your organization.

 

Option                      Description

Enable Graceful Restart     Specifies that packet forwarding is to remain uninterrupted when OSPF services are restarted.

Enable Default Originate    Allows the edge gateway to advertise itself as a default gateway to its OSPF peers.

 

5      (Optional) You can either click Save changes or continue with configuring area definitions and interface mappings.

6      Add an OSPF area definition by clicking the Add ( ) button, specifying details for the mapping in the dialog box, and clicking Keep.

 

Note: By default, the system configures a not-so-stubby area (NSSA) with an area ID of 51, and this area is automatically displayed in the area definitions table on the OSPF screen. You can modify or delete the NSSA area.

 

 

Option                Description

Area ID               Type an area ID in the form of an IP address or decimal number.

Area Type             Select Normal or NSSA.
                      NSSAs prevent the flooding of AS-external link-state advertisements (LSAs) into NSSAs. They rely on default routing to external destinations. As a result, NSSAs must be placed at the edge of an OSPF routing domain. An NSSA can import external routes into the OSPF routing domain, thereby providing transit service to small routing domains that are not part of the OSPF routing domain.

Area Authentication   Select the type of authentication for OSPF to perform at the area level.
                      All edge gateways within the area must have the same authentication and corresponding password configured. For MD5 authentication to work, both the receiver and transmitter must have the same MD5 key. Choices are:
                      ■  None. No authentication is required.
                      ■  Password. The password you specify in the Area Authentication Value field is included in the transmitted packet.
                      ■  MD5. Authentication uses MD5 (Message Digest type 5) encryption. An MD5 checksum is included in the transmitted packet. Type the MD5 key into the Area Authentication Value field.
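Because the Area ID field accepts either an IP address or a decimal number, the two forms are interchangeable views of the same 32-bit value. A quick conversion sketch using Python's standard `ipaddress` module:

```python
import ipaddress

def area_id_to_decimal(area_id: str) -> int:
    """Convert a dotted-quad OSPF area ID (e.g. '0.0.0.51') to decimal form."""
    return int(ipaddress.IPv4Address(area_id))

def area_id_to_dotted(area_id: int) -> str:
    """Convert a decimal OSPF area ID back to dotted-quad form."""
    return str(ipaddress.IPv4Address(area_id))

print(area_id_to_decimal("0.0.0.51"))  # 51 (the default NSSA area)
print(area_id_to_dotted(51))           # 0.0.0.51
```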

 

7      Click Save changes, so that the newly configured area definitions are available for selection when you add interface mappings.

8      Add an interface mapping by clicking the Add ( ) button, specifying details for the mapping in the dialog box, and clicking Keep.

These mappings map the edge gateway interfaces to the areas.

  1. In the dialog box, select the interface you want to map to an area definition. The interface specifies the external network that both edge gateways are connected to.
  2. Select the area ID for the area to map to the selected interface.
  3. (Optional) Change the OSPF settings from the default values to customize them for this interface mapping.

When configuring a new mapping, the default values for these settings are displayed. In most cases, it is recommended to retain the default settings. If you do change the settings, make sure that the OSPF peers use the same settings.

 

Option           Description

Hello Interval   Interval, in seconds, between hello packets that are sent on the interface.

Dead Interval    Interval, in seconds, during which at least one hello packet must be received from a neighbor before that neighbor is declared down.

Priority         Priority of the interface. The interface with the highest priority is the designated edge gateway router.

Cost             Overhead required to send packets across the interface. The cost of an interface is inversely proportional to its bandwidth: the larger the bandwidth, the smaller the cost.
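The inverse relationship between bandwidth and cost is commonly computed as a reference bandwidth divided by the interface bandwidth, floored at 1. The 100 Mbps reference below is a widely used default, not a value stated in this documentation, so treat the numbers as illustrative:

```python
def ospf_cost(bandwidth_bps: int, reference_bps: int = 100_000_000) -> int:
    """Illustrative OSPF-style interface cost: reference bandwidth divided by
    interface bandwidth, with a minimum cost of 1."""
    return max(1, reference_bps // bandwidth_bps)

print(ospf_cost(10_000_000))      # 10 Mbps  -> cost 10
print(ospf_cost(100_000_000))     # 100 Mbps -> cost 1
print(ospf_cost(1_000_000_000))   # 1 Gbps   -> cost 1 (floored at the minimum)
```

Note that with this default reference, links faster than 100 Mbps all collapse to cost 1, which is why production deployments often raise the reference bandwidth.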

 

         4. Click Keep.

9      Click Save changes in the OSPF screen.

 

What to do next

Configure OSPF on the other edge gateways that you want to exchange routing information with.

Add a firewall rule that allows traffic between the OSPF-enabled edge gateways. See Add a Networking Data Center for VMware vSphere Edge Gateway Firewall Rule.

Make sure that the route redistribution and firewall configuration allow the correct routes to be advertised. See Configure Route Redistributions.

 

Configure BGP

You can configure Border Gateway Protocol (BGP) for the dynamic routing capabilities of a Networking Data Center for VMware vSphere edge gateway.

As described in the Networking Administration Guide, BGP makes core routing decisions by using a table of IP networks or prefixes, which designate network reachability among multiple autonomous systems. In the networking field, the term BGP speaker refers to a networking device that is running BGP. Two BGP speakers establish a connection before any routing information is exchanged. The term BGP neighbor refers to a BGP speaker that has established such a connection. After establishing the connection, the devices exchange routes and synchronize their tables. Each device sends keep alive messages to keep this relationship alive.

 

Procedure

 

1      Open Edge Gateway Services.

a      In the top navigation bar, click Networking and click Edge Gateways.

b     Select the edge gateway that you want to edit and click Services.

2      Navigate to Routing > BGP.

3      If BGP is not currently enabled, use the Enable BGP toggle to enable it.

4      Configure the BGP settings according to the needs of your organization.

 

Option                      Description

Enable Graceful Restart     Specifies that packet forwarding is to remain uninterrupted when BGP services are restarted.

Enable Default Originate    Allows the edge gateway to advertise itself as a default gateway to its BGP neighbors.

Local AS                    Required. Specify the autonomous system (AS) ID number to use for the local AS feature of the protocol. The value you specify must be a globally unique number between 1 and 65534.
                            The local AS is a feature of BGP. The system assigns the local AS number to the edge gateway you are configuring. The edge gateway advertises this ID when it peers with its BGP neighbors in other autonomous systems. The path of autonomous systems that a route traverses is used as one metric in the dynamic routing algorithm when selecting the best path to a destination.
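The AS-path metric mentioned above can be sketched as follows: among candidate routes to the same prefix, prefer the one whose AS path is shortest. Real BGP best-path selection weighs several additional attributes (weight, local preference, MED, and so on), so this is a deliberately reduced illustration with hypothetical AS numbers:

```python
def best_path(routes):
    """Pick the route with the shortest AS path -- one step of BGP
    best-path selection, shown in isolation."""
    return min(routes, key=lambda r: len(r["as_path"]))

# Two candidate routes to the same prefix, learned from different neighbors.
candidates = [
    {"next_hop": "198.51.100.1", "as_path": [65001, 65002, 65010]},
    {"next_hop": "198.51.100.2", "as_path": [65003, 65010]},
]
print(best_path(candidates)["next_hop"])  # 198.51.100.2, the shorter AS path
```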

 

5      You can either click Save changes, or continue to configure settings for the BGP routing neighbors.

6      Add a BGP neighbor configuration by clicking the Add ( ) button, specifying details for the neighbor in the dialog box, and clicking Keep.

 

Option            Description

IP Address        Type the IP address of a BGP neighbor for this edge gateway.

Remote AS         Type a globally unique number between 1 and 65534 for the autonomous system to which this BGP neighbor belongs. This remote AS number is used in the BGP neighbor's entry in the system's BGP neighbors table.

Weight            The default weight for the neighbor connection. Adjust as appropriate for your organization's needs.

Keep Alive Time   The frequency with which the software sends keep alive messages to its peer. The default frequency is 60 seconds. Adjust as appropriate for the needs of your organization.

Hold Down Time    The interval after which the software declares a peer dead if no keep alive message has been received. This interval must be three times the keep alive interval. The default interval is 180 seconds. Adjust as appropriate for the needs of your organization.
                  After peering between two BGP neighbors is established, the edge gateway starts a hold down timer. Every keep alive message it receives from the neighbor resets the hold down timer to 0. If the edge gateway fails to receive three consecutive keep alive messages, so that the hold down timer reaches three times the keep alive interval, the edge gateway considers the neighbor down and deletes the routes from this neighbor.

Password          If this BGP neighbor requires authentication, type the authentication password.
                  Each segment sent on the connection between the neighbors is verified. MD5 authentication must be configured with the same password on both BGP neighbors; otherwise, the connection between them is not made.

BGP Filters       Use this table to specify route filtering using a prefix list from this BGP neighbor.

 

 

Caution: A block-all rule is enforced at the end of the filters.

 

 

Add a filter to the table by clicking the + icon and configuring the options. Click Keep to save each filter.

■        Select the direction to indicate whether you are filtering traffic to or from the neighbor.

■        Select the action to indicate whether you are allowing or denying traffic.

■        Type the network that you want to filter to or from the neighbor. Type ANY or a network in CIDR format.

■        Type the IP Prefix GE and IP Prefix LE values to use the le and ge keywords in the IP prefix list.
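The ge/le semantics can be illustrated with a small match function: a route matches when it falls inside the configured network and its prefix length satisfies the optional bounds from the IP Prefix GE and IP Prefix LE fields. This is a sketch of standard prefix-list behavior using hypothetical prefixes, not the product's exact matching code:

```python
import ipaddress

def prefix_matches(route, network, ge=None, le=None):
    """Illustrative 'ip prefix-list' match with optional ge/le bounds on the
    route's prefix length."""
    route = ipaddress.ip_network(route)
    network = ipaddress.ip_network(network)
    if not route.subnet_of(network):
        return False
    if ge is not None and route.prefixlen < ge:
        return False
    if le is not None and route.prefixlen > le:
        return False
    return True

# Permit /24 through /28 routes inside 10.1.0.0/16.
print(prefix_matches("10.1.4.0/24", "10.1.0.0/16", ge=24, le=28))  # True
print(prefix_matches("10.1.0.0/16", "10.1.0.0/16", ge=24))         # False: /16 < ge
```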

 

7      Click Save changes to save the configurations to the system.

What to do next

Configure BGP on the other edge gateways that you want to exchange routing information with.

 

Add a firewall rule that allows traffic to and from the BGP-configured edge gateways. See Add a Networking Data Center for VMware vSphere Edge Gateway Firewall Rule for information.

 

Configure Route Redistributions

By default, a router shares routes only with other routers running the same protocol. In a multi-protocol environment, you must configure route redistribution to enable cross-protocol route sharing. You can configure route redistribution for a Networking Data Center for VMware vSphere edge gateway.

Procedure

1      Open Edge Gateway Services.

a      In the top navigation bar, click Networking and click Edge Gateways.

b     Select the edge gateway that you want to edit and click Services.

2      Navigate to Routing > Route Redistribution.

3      Use the protocol toggles to turn on those protocols for which you want to enable route redistribution.

 

4      Add IP prefixes to the on-screen table.

 

a         Click the Add ( ) button.

b         Type a name and the IP address of the network in CIDR format.

c      Click Keep.

5      Specify redistribution criteria for each IP prefix by clicking the Add ( ) button, specifying the criteria in the dialog box, and clicking Keep.

Entries in the table are processed sequentially. Use the up and down arrows to adjust the sequence.

 

Option                Description

Prefix Name           Select a specific IP prefix to apply the criteria to, or select Any to apply the criteria to all network routes.

Learner Protocol      Select the protocol that is to learn routes from other protocols under this redistribution criteria.

Allow learning from   Select the types of networks from which routes can be learned for the protocol selected in the Learner Protocol list.

Action                Select whether to permit or deny redistribution from the selected types of networks.

 

6      Click Save changes.

 

Load Balancing

The load balancer distributes incoming service requests among multiple servers in such a way that the load distribution is transparent to users. Load balancing provides application high availability and helps achieve optimal resource utilization, maximizing throughput, minimizing response time, and avoiding overload.


The Networking load balancer supports two load balancing engines. The layer 4 load balancer is packet-based and provides fast-path processing. The layer 7 load balancer is socket-based and supports advanced traffic management strategies and DDoS mitigation for back end services.

Load balancing for a Networking Data Center for VMware vSphere edge gateway is configured on the external interface because the edge gateway load balances incoming traffic from the external network. When configuring virtual servers for load balancing, specify one of the available IP addresses you have in your organization VDC.

Load Balancing Strategies and Concepts

A packet-based load balancing strategy is implemented on the TCP and UDP layer. Packet-based load balancing does not terminate the connection or buffer the whole request. Instead, after manipulating the packet, it sends the packet directly to the selected server. TCP and UDP sessions are maintained in the load balancer so that packets for a single session are directed to the same server. You can select Acceleration Enable in both the global configuration and the relevant virtual server configuration to enable packet-based load balancing.
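The session-maintenance behavior described above can be sketched as a small flow table: the first packet of a flow picks a server, and subsequent packets of the same TCP/UDP session reuse the stored mapping. Server selection here is plain round-robin for illustration; the names and addresses are hypothetical:

```python
import itertools

class L4Balancer:
    """Minimal sketch of packet-based L4 balancing with a per-flow
    session table, as described in the text."""

    def __init__(self, servers):
        self._rr = itertools.cycle(servers)   # round-robin over the pool
        self._sessions = {}                   # flow 5-tuple -> chosen server

    def forward(self, src_ip, src_port, dst_ip, dst_port, proto):
        flow = (src_ip, src_port, dst_ip, dst_port, proto)
        if flow not in self._sessions:
            self._sessions[flow] = next(self._rr)  # first packet picks a server
        return self._sessions[flow]               # later packets reuse it

lb = L4Balancer(["10.0.0.11", "10.0.0.12"])
first = lb.forward("198.51.100.7", 50000, "203.0.113.10", 80, "tcp")
again = lb.forward("198.51.100.7", 50000, "203.0.113.10", 80, "tcp")
assert first == again  # all packets of one session hit the same server
```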

A socket-based load balancing strategy is implemented on top of the socket interface. Two connections are established for a single request, a client-facing connection and a server-facing connection. The server-facing connection is established after server selection. For the HTTP socket-based implementation, the whole request is received before sending to the selected server with optional L7 manipulation. For HTTPS socket-based implementation, authentication information is exchanged either on the client-facing connection or server-facing connection.

Socket-based load balancing is the default mode for TCP, HTTP, and HTTPS virtual servers.

The key concepts of the Networking load balancer are virtual server, server pool, server pool member, service monitor, and application profile.

Virtual Server

An abstraction of an application service, represented by a unique combination of IP address, port, protocol, and application profile such as TCP or UDP.

 

Server Pool

Group of back end servers.

 

Server Pool Member

Represents a back end server as a member of a pool.

 

Service Monitor

Defines how to probe the health status of a back end server.

 

Application Profile

Represents the TCP, UDP, persistence, and certificate configuration for a given application.

 

Setup Overview

You begin by setting global options for the load balancer. Next, you create a server pool consisting of back end server members and associate a service monitor with the pool to manage and share the back end servers efficiently.

You then create an application profile to define common application behavior in the load balancer, such as client SSL, server SSL, x-forwarded-for, or persistence. With persistence, subsequent requests with a similar characteristic, such as source IP or cookie, are dispatched to the same pool member without rerunning the load balancing algorithm. The application profile can be reused across virtual servers.

You then create an optional application rule to configure application-specific settings for traffic manipulation, such as matching a certain URL or hostname, so that different requests can be handled by different pools. Next, you create a service monitor that is specific to your application, or you use an existing service monitor if it meets your needs.

Optionally, you can create an application rule to support advanced functionality of L7 virtual servers. Some use cases for application rules include content switching, header manipulation, security rules, and DoS protection.

Finally, you create a virtual server that connects your server pool, application profile, and any potential application rules together.

When the virtual server receives a request, the load balancing algorithm considers pool member configuration and runtime status. The algorithm then selects the appropriate pool member, from a pool comprising one or more members, to distribute the traffic to. The pool member configuration includes settings such as weight, maximum connection, and condition status. The runtime status includes current connections, response time, and health check status information. The calculation methods can be round-robin, weighted round-robin, least connection, source IP hash, weighted least connections, URL, URI, or HTTP header.
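One of the calculation methods named above, weighted least connections, can be sketched by combining the configuration (weight, status) with the runtime status (current connections). The member data below is hypothetical:

```python
def weighted_least_connections(members):
    """Pick the UP member with the fewest active connections relative to its
    configured weight -- a sketch of the weighted least connections method."""
    up = [m for m in members if m["status"] == "UP"]
    return min(up, key=lambda m: m["connections"] / m["weight"])

pool = [
    {"name": "web1", "status": "UP",   "connections": 40, "weight": 1},
    {"name": "web2", "status": "UP",   "connections": 10, "weight": 1},
    {"name": "web3", "status": "DOWN", "connections": 0,  "weight": 1},
]
print(weighted_least_connections(pool)["name"])  # web2: least loaded UP member
```

Note that web3 is skipped entirely: as the next paragraph explains, only members in the UP state are eligible for selection.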

Each pool is monitored by the associated service monitor. When the load balancer detects a problem with a pool member, the member is marked as DOWN. Only servers in the UP state are selected when choosing a pool member from the server pool. If the server pool is not configured with a service monitor, all the pool members are considered UP.

Configure the Load Balancer Service

Global load balancer configuration parameters include overall enablement, selection of the layer 4 or layer 7 engine, and specification of the types of events to log.

Procedure

1      Open Edge Gateway Services.

a      In the top navigation bar, click Networking and click Edge Gateways.

b     Select the edge gateway that you want to edit and click Services.

2      Navigate to Load Balancer > Global Configuration.

3      Select the options you want to enable:

 

Option                 Action

Status                 Enable the load balancer by clicking the toggle icon.

Acceleration Enabled   Enable Acceleration Enabled to configure the load balancer to use the faster L4 engine rather than the L7 engine. The L4 TCP VIP is processed before the edge gateway firewall, so no Allow firewall rule is required.
                       Note: L7 VIPs for HTTP and HTTPS are processed after the firewall, so when you do not enable acceleration, an edge gateway firewall rule must exist to allow access to the L7 VIP for those protocols. When you enable acceleration and the server pool is in non-transparent mode, a SNAT rule is added, so you must ensure that the firewall is enabled on the edge gateway.

Enable Logging         Enable logging so that the edge gateway load balancer collects traffic logs.

Log Level              Choose the severity of events to be collected in the logs.

 

4      Click Save changes.

 

What to do next

Configure application profiles for the load balancer. See Create an Application Profile.

Create an Application Profile

An application profile defines the behavior of the load balancer for a particular type of network traffic. After configuring a profile, you associate it with a virtual server. The virtual server then processes traffic according to the values specified in the profile. Using profiles enhances your control over managing network traffic, and makes traffic-management tasks easier and more efficient.

When you create a profile for HTTPS traffic, the following HTTPS traffic patterns are allowed:

■  Client -> HTTPS -> LB (terminate SSL) -> HTTP -> servers

■  Client -> HTTPS -> LB (terminate SSL) -> HTTPS -> servers

■  Client -> HTTPS -> LB (SSL passthrough) -> HTTPS -> servers

■  Client -> HTTP -> LB -> HTTP -> servers

 Procedure

1      Open Edge Gateway Services.

a      In the top navigation bar, click Networking and click Edge Gateways.

b     Select the edge gateway that you want to edit and click Services.

2      Navigate to Load Balancer > Application Profiles.

3      Click the Create ( ) button.

4      Enter a name for the profile.

 

5      Configure the application profile.

 

Option                   Description

Type                     Select the protocol type used to send requests to the server. The list of required parameters depends on the protocol you select. Parameters that are not applicable to the selected protocol cannot be entered; all other parameters are required.

Enable SSL Passthrough   Click to enable SSL authentication to be passed through to the virtual server. Otherwise, SSL authentication takes place at the destination address.

HTTP Redirect URL        (HTTP and HTTPS) Enter the URL to which traffic that arrives at the destination address should be redirected.

Persistence              Specify a persistence mechanism for the profile.
                         Persistence tracks and stores session data, such as the specific pool member that serviced a client request. This ensures that client requests are directed to the same pool member throughout the life of a session or during subsequent sessions. The options are:
                         ■  Source IP. Source IP persistence tracks sessions based on the source IP address. When a client requests a connection to a virtual server that supports source address affinity persistence, the load balancer checks whether that client previously connected and, if so, returns the client to the same pool member.
                         ■  MSRDP. (TCP only) Microsoft Remote Desktop Protocol (MSRDP) persistence maintains persistent sessions between Windows clients and servers that are running the Microsoft Remote Desktop Protocol (RDP) service. The recommended scenario for enabling MSRDP persistence is to create a load balancing pool that consists of members running a Windows Server guest OS, where all members belong to a Windows cluster and participate in a Windows session directory.
                         ■  SSL Session ID. SSL Session ID persistence is available when you enable SSL passthrough. It ensures that repeat connections from the same client are sent to the same server, and it allows the use of SSL session resumption, which saves processing time for both the client and the server.

Cookie Name              (HTTP and HTTPS) If you specified Cookie as the persistence mechanism, enter the cookie name. Cookie persistence uses a cookie to uniquely identify the session the first time a client accesses the site. The load balancer refers to this cookie when connecting subsequent requests in the session, so that they all go to the same virtual server.

Mode                     Select the mode by which the cookie should be inserted. The following modes are supported:
                         ■  Insert. The edge gateway sends a cookie. When the server sends one or more cookies, the client receives one extra cookie (the server cookies plus the edge gateway cookie). When the server does not send any cookies, the client receives the edge gateway cookie only.
                         ■  Prefix. Select this option when your client does not support more than one cookie.
                         Note: All browsers accept multiple cookies, but you might have a proprietary application using a proprietary client that supports only one cookie. The Web server sends its cookie as usual. The edge gateway injects (as a prefix) its cookie information into the server cookie value, and this added information is removed when the edge gateway sends the cookie to the server.
                         ■  App Session. For this option, the server does not send a cookie. Instead, it sends the user session information as a URL. For example, http://example.com/admin/UpdateUserServlet;jsessionid=OI24B9ASD7BSSD, where jsessionid is the user session information used for persistence. It is not possible to see the App Session persistence table for troubleshooting.

Expires in (Seconds)     Enter the length of time, in seconds, that persistence stays in effect. Must be a positive integer in the range 1–86400.
                         Note: For L7 load balancing using TCP source IP persistence, the persistence entry times out if no new TCP connections are made for a period of time, even if the existing connections are still alive.

Insert X-Forwarded-For HTTP header     (HTTP and HTTPS) Select Insert X-Forwarded-For HTTP header to identify the originating IP address of a client connecting to a Web server through the load balancer.
                         Note: Using this header is not supported if you enabled SSL passthrough.

Enable Pool Side SSL     (HTTPS only) Select Enable Pool Side SSL to define the certificate, CAs, or CRLs used to authenticate the load balancer from the server side in the Pool Certificates tab.

 

6      (HTTPS only) Configure the certificates to be used with the application profile. If the certificates you need do not exist, you can create them from the Certificates tab.

 

Option                        Description

Virtual Server Certificates   Select the certificate, CAs, or CRLs used to decrypt HTTPS traffic.

Pool Certificates             Define the certificate, CAs, or CRLs used to authenticate the load balancer from the server side.
                              Note: Select Enable Pool Side SSL to enable this tab.

Cipher                        Select the cipher algorithms (or cipher suite) negotiated during the SSL/TLS handshake.

Client Authentication         Specify whether client authentication is to be ignored or required.
                              Note: When set to Required, the client must provide a certificate after the request, or the handshake is canceled.

 

7      To preserve your changes, click Keep.

 

What to do next

Add service monitors for the load balancer to define health checks for different types of network traffic. See Create a Service Monitor.

Create a Service Monitor

You create a service monitor to define health check parameters for a particular type of network traffic. When you associate a service monitor with a pool, the pool members are monitored according to the service monitor parameters.

Procedure

1      Open Edge Gateway Services.

a      In the top navigation bar, click Networking and click Edge Gateways.

b     Select the edge gateway that you want to edit and click Services.

2      Navigate to Load Balancer > Service Monitoring.

3      Click the Create button.

4      Enter a name for the service monitor.

5      (Optional) Configure the following options for the service monitor:

 

Option

Description

Interval

Enter the interval at which a server is to be monitored using the specified Method.

Timeout

Enter the maximum time in seconds within which a response from the server must be received.

Max Retries

Enter the number of times the specified monitoring Method must fail sequentially before the server is declared down.

Type

Select the way in which you want to send the health check request to the server—HTTP, HTTPS, TCP, ICMP, or UDP.

Depending on the type selected, the remaining options in the New Service Monitor dialog are enabled or disabled.

Expected

(HTTP and HTTPS) Enter the string that the monitor expects to match in the status line of the HTTP or HTTPS response (for example, HTTP/1.1).

 

Method

(HTTP and HTTPS) Select the method to be used to detect server status.

URL

(HTTP and HTTPS) Enter the URL to be used in the server status request.

 

Note  When you select the POST method, you must specify a value for Send.

 

 

Send

(HTTP, HTTPS, and UDP) Enter the data to be sent.

Receive

(HTTP, HTTPS, and UDP) Enter the string to be matched in the response content.

Note When Expected is not matched, the monitor does not try to match the Receive content.

 

 

Extension

(ALL) Enter advanced monitor parameters as key=value pairs. For example, warning=10 indicates that when a server does not respond within 10 seconds, its status is set as warning. All extension items should be separated with a carriage return character. For example:

 

delay=2
critical=3
escape

 

6      To preserve your changes, click Keep.
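The monitor options above combine as follows: Expected is checked against the status line first, Receive is checked only when Expected matches, and a member is declared down only after Max Retries consecutive failed probes. A small sketch of that decision logic (function names are illustrative, not part of the product):

```python
def probe_passes(status_line, body, expected=None, receive=None):
    """Evaluate one HTTP(S) health probe the way the options above describe.

    - 'expected' is matched against the response status line first;
      when it does not match, 'receive' is not consulted at all.
    - 'receive' is then matched against the response content.
    """
    if expected is not None and expected not in status_line:
        return False          # Expected failed: Receive is skipped
    if receive is not None and receive not in body:
        return False
    return True


def member_is_down(probe_results, max_retries):
    """A member is declared down only after 'max_retries' consecutive
    failed probes; any successful probe resets the failure count."""
    consecutive = 0
    for ok in probe_results:
        consecutive = 0 if ok else consecutive + 1
        if consecutive >= max_retries:
            return True
    return False
```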

Example: Extensions Supported for Each Protocol

Table 5-4. Extensions for HTTP/HTTPS Protocols

 

Monitor Extension

 

Description

no-body

Does not wait for a document body and stops reading after the HTTP/HTTPS header.

Note An HTTP GET or HTTP POST is still sent; not a HEAD method.

 

 

max-age=SECONDS

Warns when a document is more than SECONDS old. The number can be in the form 10m for minutes, 10h for hours, or 10d for days.

content-type=STRING

Specifies a Content-Type header media type in POST calls.

linespan

Allows regex to span newlines (must precede -r or -R).

regex=STRING or ereg=STRING

Searches the page for regex STRING.

eregi=STRING

Searches the page for case-insensitive regex STRING.

invert-regex

Returns CRITICAL when found and OK when not found.

proxy-authorization=AUTH_PAIR

Specifies the username:password on proxy servers with basic authentication.

useragent=STRING

Sends the string in the HTTP header as User Agent.

header=STRING

Sends any other tags in the HTTP header. Use multiple times for additional headers.

 


onredirect=ok|warning|critical|follow|sticky|stickyport

Indicates how to handle redirected pages.

sticky is like follow but stick to the specified IP address.

stickyport ensures the port stays the same.

pagesize=INTEGER:INTEGER

Specifies the minimum and maximum page sizes required in bytes.

warning=DOUBLE

Specifies the response time in seconds to result in a warning status.

critical=DOUBLE

Specifies the response time in seconds to result in a critical status.

 

Table 5-5. Extensions for HTTPS Protocol Only

 

Monitor Extension

 

Description

sni

Enables SSL/TLS hostname extension support (SNI).

certificate=INTEGER

Specifies the minimum number of days a certificate has to be valid. The port defaults to 443. When this option is used, the URL is not checked.

authorization=AUTH_PAIR

Specifies the username:password on sites with basic authentication.

 

Table 5-6. Extensions for TCP Protocol

 

Monitor Extension

 

Description

escape

Allows for the use of \n, \r, \t, or \ in a send or quit string. Must come before a send or quit option. By default, nothing is added to send and \r\n is added to the end of quit.

all

Specifies that all expect strings must occur in a server response. By default, any is used.

quit=STRING

Sends a string to the server to cleanly close the connection.

refuse=ok|warn|crit

Accepts TCP refusals with states ok, warn, or crit. By default, uses state crit.

mismatch=ok|warn|crit

Accepts expected string mismatches with states ok, warn, or crit. By default, uses state warn.

jail

Hides output from the TCP socket.

maxbytes=INTEGER

Closes the connection when more than the specified number of bytes are received.

delay=INTEGER

Waits the specified number of seconds between sending the string and polling for a response.

 

certificate=INTEGER[,INTEGER]

Specifies the minimum number of days a certificate has to be valid. The first value is the number of days for a warning and the second value is for critical (0 if not specified).

ssl

Uses SSL for the connection.

warning=DOUBLE

Specifies the response time in seconds to result in a warning status.

critical=DOUBLE

Specifies the response time in seconds to result in a critical status.

 

What to do next

Add server pools for your load balancer. See Add a Server Pool for Load Balancing.

Add a Server Pool for Load Balancing

You can add a server pool to manage and share backend servers flexibly and efficiently. A pool manages load balancer distribution methods and has a service monitor attached to it for health check parameters.

Procedure

1      Open Edge Gateway Services.

a      In the top navigation bar, click Networking and click Edge Gateways.

b     Select the edge gateway that you want to edit and click Services.

2      Navigate to Load Balancer > Pools.

3      Click the Create button.

4      Type a name and, optionally, a description for the load balancer pool.

5      Select a balancing method for the service from the Algorithm drop-down menu:

 

Option

Description

ROUND-ROBIN

Each server is used in turn according to the weight assigned to it. This is the smoothest and fairest algorithm when the server processing time remains equally distributed.

IP-HASH

Selects a server based on a hash of the source and destination IP address of each packet.

LEASTCONN

Distributes client requests to multiple servers based on the number of connections already open on the server. New connections are sent to the server with the fewest open connections.

 

 

URI

The left part of the URI (before the question mark) is hashed and divided by the total weight of the running servers. The result designates which server will receive the request. This option ensures that a URI is always directed to the same server as long as the server does not go down.

HTTPHEADER

The HTTP header name is looked up in each HTTP request. The header name in parentheses is not case sensitive, which is similar to the ACL 'hdr()' function.

If the header is absent or does not contain any value, the round robin algorithm is applied. The HTTPHEADER algorithm parameter has one option, headerName=. For example, you can use host as the HTTPHEADER algorithm parameter.

URL

The URL parameter specified in the argument is looked up in the query string of each HTTP GET request. If the parameter is followed by an equal sign = and a value, then the value is hashed and divided by the total weight of the running servers. The result designates which server receives the request. This process is used to track user identifiers in requests and ensure that the same user ID is always sent to the same server as long as no server goes up or down. If no value or parameter is found, then a round robin algorithm is applied. The URL algorithm parameter has one option, urlParam=.
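The hash-based methods in the table can be sketched as follows. This is illustrative only: it uses a generic hash and hypothetical helper names, not the edge gateway's internal hashing, but it shows why the same client or URI keeps landing on the same server while the pool membership is unchanged.

```python
import hashlib
from itertools import cycle


def weighted_round_robin(servers):
    """ROUND-ROBIN: yield each server in turn, proportionally to its
    weight. 'servers' is a list of (name, weight) pairs."""
    expanded = [name for name, weight in servers for _ in range(weight)]
    return cycle(expanded)


def ip_hash_select(servers, src_ip, dst_ip):
    """IP-HASH: the same source/destination address pair always maps to
    the same server while the membership is unchanged."""
    digest = hashlib.sha256(f"{src_ip}|{dst_ip}".encode()).hexdigest()
    names = [name for name, _ in servers]
    return names[int(digest, 16) % len(names)]


def uri_hash_select(servers, uri):
    """URI: hash the part of the URI before '?' and reduce it modulo the
    total weight, so a given URI keeps landing on the same server."""
    path = uri.split("?")[0]
    point = int(hashlib.sha256(path.encode()).hexdigest(), 16)
    point %= sum(weight for _, weight in servers)
    for name, weight in servers:
        if point < weight:
            return name
        point -= weight
```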

 

6      Add members to the pool.

 

a         Click the Add button.

b         Enter the name for the pool member.

c         Enter the IP address of the pool member.

d         Enter the port at which the member is to receive traffic from the load balancer.

e         Enter the monitor port at which the member is to receive health monitor requests.

f          In the Weight text box, type the proportion of traffic this member is to handle, as an integer in the range 1-256.

g         (Optional) In the Max Connections text box, type the maximum number of concurrent connections the member can handle.

When the number of incoming requests exceeds the maximum, requests are queued and the load balancer waits for a connection to be released.

h         (Optional) In the Min Connections text box, type the minimum number of concurrent connections a member must always accept.

i          Click Keep to add the new member to the pool. The operation can take a minute to complete.

7      (Optional) To make client IP addresses visible to the backend servers, select Transparent.

When Transparent is not selected (the default), backend servers see the IP address of the traffic source as the internal IP address of the load balancer.

When Transparent is selected, the source IP address is the actual IP address of the client, and the edge gateway must be set as the default gateway to ensure that return packets go through the edge gateway.

8      To preserve your changes, click Keep.

What to do next

Add virtual servers for your load balancer. A virtual server has a public IP address and services all incoming client requests. See Add a Virtual Server.

 

Add an Application Rule

You can write an application rule to directly manipulate and manage IP application traffic.

Procedure

1      Open Edge Gateway Services.

a      In the top navigation bar, click Networking and click Edge Gateways.

b     Select the edge gateway that you want to edit and click Services.

2      Navigate to Load Balancer > Application Rules.

3      Click the Add button.

4      Enter the name for the application rule.

5      Enter the script for the application rule.

For information on the application rule syntax, see http://cbonte.github.io/haproxy-dconv/2.2/configuration.html.
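Application rules use HAProxy configuration syntax. As a hedged example (the pool name images-pool and the path are hypothetical), a rule that directs image requests to a dedicated pool might look like this:

```
# Route requests whose path begins with /images to a dedicated pool
acl is_images path_beg -i /images
use_backend images-pool if is_images
```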

6      To preserve your changes, click Keep.

What to do next

Associate the new application rule to a virtual server added for the load balancer. See Add a Virtual Server.

 

Add a Virtual Server

Add a Networking Data Center for VMware vSphere edge gateway internal or uplink interface as a virtual server. A virtual server has a public IP address and services all incoming client requests.

By default, the load balancer closes the server TCP connection after each client request.

Procedure

1      Open Edge Gateway Services.

a      In the top navigation bar, click Networking and click Edge Gateways.

b     Select the edge gateway that you want to edit and click Services.

2      Navigate to Load Balancer > Virtual Servers.

3      Click the Add button.

4      On the General tab, configure the following options for the virtual server:

 

Option

Description

Enable Virtual Server

Click to enable the virtual server.

Enable Acceleration

Click to enable acceleration, so that the load balancer uses the faster Layer 4 engine rather than the Layer 7 engine.

Application Profile

Select an application profile to be associated with the virtual server.

Name

Type a name for the virtual server.

Description

Type an optional description for the virtual server.

IP Address

Type or browse to select the IP address that the load balancer listens on.

Protocol

Select the protocol that the virtual server accepts. You must select the same protocol used by the selected Application Profile.

Port

Type the port number that the load balancer listens on.

Default Pool

Choose the server pool that the load balancer will use.

Connection Limit

(Optional) Type the maximum concurrent connections that the virtual server can process.

Connection Rate Limit (CPS)

(Optional) Type the maximum incoming new connection requests per second.

 

5      (Optional) To associate application rules with the virtual server, click the Advanced tab and complete the following steps:

a      Click the Add button.

The application rules created for the load balancer appear. If necessary, add application rules for the load balancer. See Add an Application Rule.

6      To preserve your changes, click Keep.

 

What to do next

Create an edge gateway firewall rule to permit traffic to the new virtual server (the destination IP address). See Add a Networking Data Center for VMware vSphere Edge Gateway Firewall Rule.

