Working with Kubernetes Clusters

You can create Kubernetes clusters of different node sizes by using the existing organization VDC Kubernetes policies.

Kubernetes Container Clusters is the Container Service Extension plug-in for Cyfuture Cloud Console. You can use the Kubernetes Container Clusters plug-in in the Cyfuture Cloud Console Tenant Portal to deploy native and Cyfuture Cloud Tanzu Kubernetes Grid Integrated Edition (TKGI) clusters. You cannot create Tanzu Kubernetes clusters without the Kubernetes Container Clusters plug-in.

When enabled on a VMware vSphere cluster, VMware vSphere® with Cyfuture Cloud Tanzu™ provides the capability to create upstream Kubernetes clusters in dedicated resource pools. For more information, see the VMware vSphere with Kubernetes Configuration and Management guide in the VMware vSphere documentation.

When a service provider creates a provider VDC Kubernetes policy and publishes the policy to an organization VDC, they create an organization VDC Kubernetes policy. You can use the Kubernetes Container Clusters plug-in to create Tanzu Kubernetes clusters by applying one of the organization VDC Kubernetes policies.

Kubernetes Runtime Options

Tanzu Kubernetes clusters - You can use the VMware vSphere with Tanzu runtime option to create Tanzu Kubernetes clusters that are managed by VMware vSphere with Cyfuture Cloud Tanzu. This option offers more features; however, it might come at a higher cost. For more information, see the VMware vSphere with Kubernetes Configuration and Management guide in the VMware vSphere documentation.

Native clusters - The Kubernetes Container Clusters plug-in manages clusters that run the native Kubernetes runtime. These clusters provide reduced high availability because they have a single control plane node, offer fewer persistent volume choices, and include no networking automation. However, they might come at a lower cost.

TKGI clusters - Cyfuture Cloud Tanzu Kubernetes Grid Integrated Edition is a purpose-built container solution for operationalizing Kubernetes for multi-cloud enterprises and service providers. Its capabilities include high availability, auto-scaling, health checks, self-healing, and rolling upgrades for Kubernetes clusters. For more information on TKGI clusters, see the Cyfuture Cloud Tanzu Kubernetes Grid Integrated Edition documentation.

This chapter includes the following topics:

Add an Organization VDC Kubernetes Policy

Edit an Organization VDC Kubernetes Policy

Create a Tanzu Kubernetes Cluster

Create a Native Kubernetes Cluster

Create a Cyfuture Cloud Tanzu Kubernetes Grid Integrated Edition Cluster

Configure External Access to a Service in a Tanzu Kubernetes Cluster

Add an Organization VDC Kubernetes Policy

If you have system administrator rights, you can add an organization VDC Kubernetes policy by using a provider VDC Kubernetes policy. You can use the organization VDC Kubernetes policy to create Tanzu Kubernetes clusters.

When you add or publish a provider VDC Kubernetes policy to an organization VDC, you make the policy available to tenants by creating an organization VDC policy. Tenants can use the available organization VDC Kubernetes policies to leverage the Kubernetes capacity while creating Tanzu Kubernetes clusters. A Kubernetes policy encapsulates placement, infrastructure quality, and persistent volume storage classes. Kubernetes policies can have different compute limits.

You can add multiple organization VDC Kubernetes policies to a single organization VDC. You can use a single provider VDC Kubernetes policy to create multiple organization VDC Kubernetes policies. You can use the organization VDC Kubernetes policies as an indicator of the service quality. For example, you can publish a Gold Kubernetes policy that allows a selection of the guaranteed machine classes and a fast storage class or a Silver Kubernetes policy that allows a selection of the best effort machine classes and a slow storage class.

Prerequisites

Verify that you have a system administrator role or a role that includes an equivalent set of rights. All other roles can only view the organization VDC Kubernetes policies.

Verify that your environment has at least one provider VDC backed by a Supervisor Cluster. The provider VDCs backed by a Supervisor Cluster are marked with a Kubernetes icon on the Provider VDCs tab of the Service Provider Admin Portal. For more information on VMware vSphere with Cyfuture Cloud Tanzu in Cyfuture Cloud Console, see Using VMware vSphere with Kubernetes in Cyfuture Cloud Console in the Cyfuture Cloud Console Service Provider Admin Portal Guide.

Verify that you are logged in to a flex organization VDC.

Familiarize yourself with the virtual machine class types for Tanzu Kubernetes clusters. See the VMware vSphere with Kubernetes Configuration and Management guide in the VMware vSphere documentation.

Procedure

  1. In the top navigation bar, click Data Centers and then click Virtual Data Center.

  2. Select an organization virtual data center.

  3. In the left panel, under Settings, select Kubernetes Policies and click Add. The Publish to Organization VDC wizard appears.

  4. Enter a tenant-visible name and description for the organization VDC Kubernetes policy and click Next.

  5. Select the provider VDC Kubernetes policy that you want to use and click Next.

  6. Select CPU and Memory limits for the Tanzu Kubernetes clusters created under this policy.

The maximum limits depend on the CPU and Memory allocations of the organization VDC. When you add the policy, the selected limits act as maximums for the tenants.

  7. Choose whether you want to reserve CPU and memory for the Tanzu Kubernetes cluster nodes created in this policy and click Next.

There are two editions for each class type: guaranteed and best effort. A guaranteed class edition fully reserves its configured resources, while a best effort edition allows resources to be overcommitted. Depending on your selection, on the next page of the wizard, you can select between VM class types of the guaranteed or best effort edition.

  • Select Yes to use VM class types of the guaranteed edition with full CPU and memory reservations.

  • Select No to use VM class types of the best effort edition with no CPU and memory reservations.

  8. On the Machine classes page of the wizard, select one or more VM class types available for this policy.

The selected machine classes are the only class types available to tenants when you add the policy to the organization VDC.

  9. Select one or more storage policies.

  10. Review your choices and click Publish.

Results

The information about the published policy appears in the list of Kubernetes policies. The published policy creates a Supervisor Namespace on the Supervisor Cluster with the specified resource limits from the policy.

The tenants can start using the Kubernetes policy to create Tanzu Kubernetes clusters. Cyfuture Cloud Console places each Tanzu Kubernetes cluster created under this Kubernetes policy in the same Supervisor Namespace. The policy resource limits become resource limits for the Supervisor Namespace. All tenant-created Tanzu Kubernetes clusters in the Supervisor Namespace compete for the resources within these limits.

What to do next

Delete an organization VDC Kubernetes policy.

By using the Service Provider Admin Portal, you can manage organization resource quotas. See Manage Quotas on the Resource Consumption of an Organization in the Cyfuture Cloud Console Service Provider Admin Portal Guide.

Change group and user quotas. See Manage the Resource Quotas of a Group or Manage the Resource Quotas of a User.

Edit an Organization VDC Kubernetes Policy

If you have system administrator rights, you can modify an organization VDC Kubernetes policy to change its description and the CPU and memory limits.

Prerequisites

Verify that you have a system administrator role or a role that includes an equivalent set of rights. All other roles can only view the organization VDC Kubernetes policies.

Procedure

  1. In the top navigation bar, click Data Centers and then click Virtual Data Center.

  2. Select an organization virtual data center.

  3. In the left panel, under Settings, select Kubernetes Policies.

  4. Select the organization VDC Kubernetes policy that you want to edit and click Edit. The Edit VDC Kubernetes Policy wizard appears.

  5. Edit the description of the organization VDC Kubernetes policy and click Next. The name of the policy is linked to the Supervisor Namespace that is created during the publishing of the policy, and you cannot change it.

  6. Edit the CPU and Memory limits for the organization VDC Kubernetes policy and click Next. You cannot edit the CPU and Memory reservation.

  7. Review the new policy details and click Save.

What to do next

Delete an organization VDC Kubernetes policy.

By using the Service Provider Admin Portal, you can change organization resource quotas. See Manage Quotas on the Resource Consumption of an Organization in the Cyfuture Cloud Console Service Provider Admin Portal Guide.

Change group and user quotas. See Manage the Resource Quotas of a Group or Manage the Resource Quotas of a User.

Create a Tanzu Kubernetes Cluster

You can create Tanzu Kubernetes clusters by using the Kubernetes Container Clusters plug-in.

For more information about the different Kubernetes runtime options for the cluster creation, see Working with Kubernetes Clusters.

You can also manage Kubernetes clusters by using the Container Service Extension CLI. See the Container Service Extension documentation.

Cyfuture Cloud Console provisions Tanzu Kubernetes clusters with the PodSecurityPolicy Admission Controller enabled. You must create a pod security policy to deploy workloads. For information about implementing the use of pod security policies in Kubernetes, see the Using Pod Security Policies with Tanzu Kubernetes Clusters topic in the VMware vSphere with Kubernetes Configuration and Management guide.
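For example, a minimal sketch of granting all authenticated users access to a privileged pod security policy is the following kubectl command. It assumes that the cluster ships with a default privileged policy named vmware-system-privileged, which is typical for Tanzu Kubernetes clusters; verify the policy name and the security implications in your environment before applying it.

  kubectl create clusterrolebinding default-tkg-admin-privileged-binding \
    --clusterrole=psp:vmware-system-privileged \
    --group=system:authenticated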

Prerequisites

Verify that your service provider published the Kubernetes Container Clusters plug-in to your organization. You can find the plug-in on the top navigation bar under More > Kubernetes Container Clusters.

Verify that you have at least one organization VDC Kubernetes policy in your organization VDC. To add an organization VDC Kubernetes policy, see Add an Organization VDC Kubernetes Policy.

Verify that your service provider published the Cyfuture Cloud:tkgcluster Entitlement rights bundle to your organization and granted you the Edit: Tanzu Kubernetes Guest Cluster right to create and modify Tanzu Kubernetes clusters. For the ability to delete clusters, you must have the Full Control: Tanzu Kubernetes Guest Cluster right.

Verify that your service provider created an Access Control List (ACL) entry for you with information about your access level.

Procedure

  1. From the top navigation bar, select More > Kubernetes Container Clusters.

  2. (Optional) If the organization VDC is enabled for TKGI cluster creation, on the Kubernetes Container Clusters page, select the VMware vSphere with Tanzu & Native tab.

  3. Click New.

  4. Select the VMware vSphere with Tanzu runtime option and click Next.

  5. Enter a name for the new Kubernetes cluster and click Next.

  6. Select the organization VDC to which you want to deploy a Tanzu Kubernetes cluster and click Next.

  7. Select an organization VDC Kubernetes policy and a Kubernetes version, and click Next.

Cyfuture Cloud Console displays a default set of Kubernetes versions that are not tied to any organization VDC or Kubernetes policy. These versions are a global setting. To change the list of available versions, use the cell management tool to run the ./cell-management-tool manage-config --name wcp.supported.kubernetes.versions -v version_numbers command with comma-separated version numbers.
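For example, a hypothetical invocation that makes only versions 1.17.8 and 1.18.5 available (placeholder version numbers) might look as follows:

  ./cell-management-tool manage-config --name wcp.supported.kubernetes.versions -v 1.17.8,1.18.5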

  8. Select the number of control plane and worker nodes in the new cluster.

  9. Select machine classes for the control plane and worker nodes, and click Next.

  10. Select a Kubernetes policy storage class for the control plane and worker nodes, and click Next.

  11. (Optional) For Cyfuture Cloud Console 10.2.2 and later, specify a range of IP addresses for Kubernetes services and a range for Kubernetes pods, and click Next.

Classless Inter-Domain Routing (CIDR) is a method for IP routing and IP address allocation.
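For example, the notation 192.168.0.0/16 denotes the 65,536 addresses from 192.168.0.0 through 192.168.255.255, while a /24 subnet contains 256 addresses.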

Pods CIDR - Specifies a range of IP addresses to use for Kubernetes pods. The default value is 192.168.0.0/16. The pods subnet size must be equal to or larger than /24. This value must not overlap with the Supervisor Cluster settings. You can enter one IP range.

Services CIDR - Specifies a range of IP addresses to use for Kubernetes services. The default value is 10.96.0.0/12. This value must not overlap with the Supervisor Cluster settings. You can enter one IP range.

  12. Review the cluster settings and click Finish.

What to do next

Resize the Kubernetes cluster if you want to change the number of worker nodes. 

Download the kubeconfig file. The kubectl command-line tool uses kubeconfig files to obtain information about clusters, users, namespaces, and authentication mechanisms.
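For example, assuming that you saved the downloaded file as tkc-kubeconfig.txt (a placeholder file name), you can verify connectivity to the cluster as follows:

  kubectl --kubeconfig tkc-kubeconfig.txt get nodes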

Delete a Kubernetes cluster.

Create a Native Kubernetes Cluster

You can create Container Service Extension 3.0 managed Kubernetes clusters by using the Kubernetes Container Clusters plug-in.

For more information about the different Kubernetes runtime options for the cluster creation, see Working with Kubernetes Clusters.

You can also manage Kubernetes clusters by using the Container Service Extension CLI. See the Container Service Extension documentation.

Prerequisites

Verify that your service provider published the Kubernetes Container Clusters plug-in to your organization. Kubernetes Container Clusters is the Container Service Extension plug-in for Cyfuture Cloud Console. You can find the plug-in on the top navigation bar under More > Kubernetes Container Clusters.

Verify that your service provider completed the Container Service Extension 3.0 server setup and published a Container Service Extension native placement policy to the organization VDC.

Verify that your service provider published the cse:nativeCluster Entitlement rights bundle to your organization and granted you the Edit CSE:NATIVECLUSTER right to create and modify native Kubernetes clusters. For the ability to delete clusters, you must have the Full Control CSE:NATIVECLUSTER right.

Verify that your service provider created an Access Control List (ACL) entry for you with information about your access level.

Procedure

  1. From the top navigation bar, select More > Kubernetes Container Clusters.
  2. (Optional) If the organization VDC is enabled for TKGI cluster creation, on the Kubernetes Container Clusters page, select the VMware vSphere with Tanzu & Native tab.
  3. Click New.
  4. Select the Native Kubernetes runtime option.
  5. Enter a name and select a Kubernetes Template from the list.
  6. (Optional) Enter a description for the new Kubernetes cluster and an SSH public key. Click Next.
  7. Select the organization VDC to which you want to deploy a native cluster and click Next.
  8. Select the number of control plane and worker nodes and, optionally, sizing policies for the nodes. Click Next.
  9. If you want to deploy an additional VM with NFS software, turn on the Enable NFS toggle.
  10. (Optional) Select storage policies for the control plane and worker nodes. Click Next.
  11. Select a network for the Kubernetes cluster and click Next.
  12. Review the cluster settings and click Finish.

What to do next

Resize the Kubernetes cluster if you want to change the number of worker nodes.

Download the kubeconfig file. The kubectl command-line tool uses kubeconfig files to obtain information about clusters, users, namespaces, and authentication mechanisms.

Delete a Kubernetes cluster.

Create a Cyfuture Cloud Tanzu Kubernetes Grid Integrated Edition Cluster

You can create Cyfuture Cloud Tanzu Kubernetes Grid Integrated Edition (TKGI) clusters by using the Container Service Extension.

For more information about the different Kubernetes runtime options for the cluster creation, see Working with Kubernetes Clusters.

You can also manage Kubernetes clusters by using the Container Service Extension CLI. See the Container Service Extension documentation.

Prerequisites

Verify that your service provider published the Kubernetes Container Clusters plug-in to your organization. Kubernetes Container Clusters is the Container Service Extension plug-in for Cyfuture Cloud Console. You can find the plug-in on the top navigation bar under More > Kubernetes Container Clusters.

Verify that your service provider completed the Container Service Extension 3.0 server setup and published a Container Service Extension TKGI enablement metadata to the organization VDC.

Verify that you have the {cse}:PKS DEPLOY RIGHT right.

Procedure

  1. From the top navigation bar, select More > Kubernetes Container Clusters.
  2. On the Kubernetes Container Clusters page, select the TKGI tab, and click New. The Create New TKGI Cluster wizard opens.
  3. Select the organization VDC to which you want to deploy a TKGI cluster and click Next. The list might take longer to load because Cyfuture Cloud Console requests the information from the CSE server.
  4. Enter a name for the new TKGI cluster and select the number of worker nodes. TKGI clusters must have at least one worker node.
  5. Click Next.
  6. Review the cluster settings and click Finish.
  7. (Optional) Click the Refresh button on the right side of the page for the new TKGI cluster to appear in the list of clusters.

What to do next

Resize the Kubernetes cluster if you want to change the number of worker nodes.

Download the kubeconfig file. The kubectl command-line tool uses kubeconfig files to obtain information about clusters, users, namespaces, and authentication mechanisms.

Delete a Kubernetes cluster.

Configure External Access to a Service in a Tanzu Kubernetes Cluster

Starting with Cyfuture Cloud Console 10.2.2, Tanzu Kubernetes clusters are by default only reachable from IP subnets of networks within the same organization virtual data center in which a cluster is created. If necessary, you can manually configure external access to specific services in a Tanzu Kubernetes cluster.

When a VDC Kubernetes policy is published to an organization VDC, a firewall policy is automatically provisioned on the cluster edge gateway to allow access to the cluster from authorized sources within the VDC. Additionally, a system SNAT rule is automatically added to the Advanced Networking Data Center edge gateways within the organization VDC to ensure that the cluster edge gateway is reachable by the workloads within the organization VDC.

Note: If the organization virtual data center is part of an Advanced Networking Data Center group, the cluster edge gateway is not reachable by the other VDCs in the data center group.

Neither the firewall policy that is provisioned on the cluster edge gateway nor the SNAT rule on the Advanced Networking Data Center edge gateway can be removed unless a system administrator deletes the Kubernetes policy from the VDC.

If necessary, you can manually configure access from an external network to a specific service in a Tanzu Kubernetes cluster. To do that, you must create a DNAT rule on the Advanced Networking Data Center edge gateway that forwards the traffic coming from external locations to the cluster edge gateway.

Prerequisites

Verify that your cloud infrastructure is backed by VMware vSphere 7.0 Update 1C, 7.0 Update 2, or later. Contact your system administrator.

Verify that you are an organization administrator.

Verify that your system administrator has created an Advanced Networking Data Center edge gateway within the organization virtual data center in which the Tanzu Kubernetes cluster is located.

Verify that the public IP address that you want to use for the service was allocated to the edge gateway interface on which you want to add a DNAT rule.

Use the kubectl get services my-service command of the kubectl command-line tool to retrieve the external IP address of the service that you want to expose.
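For example, for a hypothetical service named my-service, the external IP address appears in the EXTERNAL-IP column of the command output (all values below are placeholders):

  kubectl get services my-service
  NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
  my-service   LoadBalancer   10.96.20.145   10.20.30.40   80:31234/TCP   5d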

Procedure

  1. In the top navigation bar, click Networking and click the Edge Gateways tab.
  2. Click the edge gateway and, under Services, click NAT.
  3. To add a rule, click New.
  4. Configure a DNAT rule for the service that you want to connect to an external network.

 

Name - Enter a meaningful name for the rule.

Description - (Optional) Enter a description for the rule.

State - To enable the rule upon creation, turn on the State toggle.

Interface type - From the drop-down menu, select DNAT.

External IP - Enter the public IP address of the service. The IP address that you enter must belong to the suballocated IP range of the Advanced Networking Data Center edge gateway.

Application - Leave the box empty.

Internal IP - Enter the service IP address that was allocated from the Kubernetes ingress pool.

Internal Port - (Optional) Enter a port number to which inbound traffic is directed.

Logging - (Optional) To log the address translations that this rule performs, turn on the Logging toggle.

  5. Click Save.

What to do next

If you want to provide access to other applications published as Kubernetes services from external networks, you must configure additional DNAT rules for each one of them.

 

