Updating a Node Pool

Find out how to update a managed node pool using Container Engine for Kubernetes (OKE).

For general information about updating node pools, see Modifying Node Pool and Worker Node Properties.

  • To modify the properties of node pools and worker nodes of existing Kubernetes clusters:

    1. Open the navigation menu and click Developer Services. Under Containers & Artifacts, click Kubernetes Clusters (OKE).
    2. Select the compartment that contains the cluster.
    3. On the Cluster List page, click the name of the cluster you want to modify.
    4. On the Cluster details page, under Resources, click Node pools.
    5. Click the name of the node pool that you want to modify.
    6. Use the Node pool details tab to view information about the node pool, including:

      • The status of the node pool.
      • The node pool's OCID.
      • The type of the worker nodes in the node pool (managed or virtual).
      • The configuration currently used when starting new worker nodes in the node pool, including:
        • the version of Kubernetes to run on worker nodes
        • the shape to use for worker nodes
        • the image to use on worker nodes
      • The availability domains, fault domains, and different regional subnets (recommended) or AD-specific subnets hosting worker nodes.

      The type of worker nodes in the node pool (managed or virtual) determines which node pool and worker node properties you can change.
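
      You can also retrieve the same node pool details with the CLI. A minimal sketch (the OCID is a placeholder):

      oci ce node-pool get --node-pool-id <node-pool-ocid>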

    7. (Optional) In the case of a managed node pool and managed nodes, change properties as follows:

      1. Click Edit and specify:
        • Name: A different name for the node pool. Avoid entering confidential information.
        • Version: A different version of Kubernetes to run on new worker nodes in the node pool when performing an in-place upgrade. The Kubernetes version on worker nodes must be either the same version as that on the control plane nodes, or an earlier version that is still compatible (see Kubernetes Versions and Container Engine for Kubernetes).

          Note that if you specify an OKE image for worker nodes, the Kubernetes version you select here must be the same as the version of Kubernetes in the OKE image.

          To start new worker nodes running the Kubernetes version you specify, 'drain' existing worker nodes in the node pool (to prevent new pods from starting and to delete existing pods) and then terminate each of the existing worker nodes in turn.
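
          For example, a minimal sketch of cordoning and draining a worker node with kubectl before terminating it (the node name is a placeholder; adjust the drain flags to suit your workloads):

          kubectl get nodes
          kubectl cordon <node-name>
          kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data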

          You can also specify a different version of Kubernetes to run on new worker nodes by performing an out-of-place upgrade. For more information about upgrading worker nodes, see Upgrading Managed Nodes to a Newer Kubernetes Version.

        • Node count: A different number of nodes in the node pool. See Scaling Node Pools.
        • Node placement configuration:
          • Availability domain: An availability domain in which to place worker nodes.
          • Worker node subnet: A regional subnet (recommended) or AD-specific subnet configured to host worker nodes. If you specified load balancer subnets, the worker node subnets must be different. The subnets you specify can be private (recommended) or public. See Subnet Configuration.
          • Fault domains: (Optional) One or more fault domains in the availability domain in which to place worker nodes.

          Optionally click Show advanced options to specify a capacity type to use (see Managing Worker Node Capacity Types). If you specify a capacity reservation, note that the node shape, availability domain, and fault domain in the managed node pool's placement configuration must match the capacity reservation's instance type, availability domain, and fault domain respectively. See Using Capacity Reservations to Provision Managed Nodes.

          Optionally click Another row to select additional domains and subnets in which to place worker nodes.

          When the worker nodes are created, they are distributed as evenly as possible across the availability domains and fault domains you select. If you don't select any fault domains for a particular availability domain, the worker nodes are distributed as evenly as possible across all the fault domains in that availability domain.

        • Node shape: A different shape to use for worker nodes in the node pool. The shape determines the number of CPUs and the amount of memory allocated to each node.

          Only those shapes available in your tenancy that are supported by Container Engine for Kubernetes are shown.

          If you select a flexible shape, you can explicitly specify the number of CPUs and the amount of memory.

          See Supported Images (Including Custom Images) and Shapes for Worker Nodes.

        • Image: A different image to use on worker nodes in the node pool. An image is a template of a virtual hard drive that determines the operating system and other software for the node.

          To change the image, click Change image. In the Browse all images window, choose an Image source and select an image as follows:

          • OKE Worker Node Images: Recommended. Provided by Oracle and built on top of platform images. OKE images are optimized to serve as base images for worker nodes, with all the necessary configurations and required software. Select an OKE image if you want to minimize the time it takes to provision worker nodes at runtime when compared to platform images and custom images.

            OKE image names include the version number of the Kubernetes version they contain. Note that if you specify a Kubernetes version for the node pool, the OKE image you select here must have the same version number as the node pool's Kubernetes version.

          • Platform images: Provided by Oracle and only contain an Oracle Linux operating system. Select a platform image if you want Container Engine for Kubernetes to download, install, and configure required software when the compute instance hosting a worker node boots up for the first time.

          See Supported Images (Including Custom Images) and Shapes for Worker Nodes.

        • Use security rules in Network Security Group (NSG): Control access to the node pool using security rules defined for one or more network security groups (NSGs) that you specify (up to a maximum of five). You can use security rules defined for NSGs instead of, or as well as, those defined for security lists (NSGs are recommended). For more information about the security rules to specify for the NSG, see Security Rules for Worker Nodes.
        • Boot volume: Change the size and encryption options for the worker node's boot volume:

          • To specify a custom size for the boot volume, select the Specify a custom boot volume size check box. Then, enter a custom size from 50 GB to 32 TB. The specified size must be larger than the default boot volume size for the selected image. See Custom Boot Volume Sizes for more information.

            Note that if you increase the boot volume size, you also need to extend the partition for the boot volume (the root partition) to take advantage of the larger size. See Extending the Partition for a Boot Volume. Oracle Linux platform images include the oci-utils package. You can use the oci-growfs command from that package in a custom cloud-init script to extend the root partition and then grow the file system. For more information, see Extending the Root Partition of Worker Nodes.
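
            As an illustration, a minimal sketch of the commands such a custom cloud-init script might run (this assumes an Oracle Linux image with the oci-utils package installed; the location of oci-growfs can vary between releases):

            # Extend the root partition and grow the file system to use the larger boot volume
            /usr/libexec/oci-growfs -y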

          • For VM instances, you can optionally select the Use in-transit encryption check box. For bare metal instances that support in-transit encryption, it is enabled by default and is not configurable. See Block Volume Encryption for more information about in-transit encryption. If you are using your own Vault service encryption key for the boot volume, then this key is also used for in-transit encryption. Otherwise, the Oracle-provided encryption key is used.
          • Boot volumes are encrypted by default, but you can optionally use your own Vault service encryption key to encrypt the data in this volume. To use the Vault service for your encryption needs, select the Encrypt this volume with a key that you manage check box. Select the vault compartment and vault that contains the master encryption key that you want to use, and then select the master encryption key compartment and master encryption key. If you enable this option, this key is used for both data at rest encryption and in-transit encryption.
            Important

            The Block Volume service does not support encrypting volumes with keys encrypted using the Rivest-Shamir-Adleman (RSA) algorithm. When using your own keys, you must use keys encrypted using the Advanced Encryption Standard (AES) algorithm. This applies to block volumes and boot volumes.

          Note that to use your own Vault service encryption key to encrypt data, an IAM policy must grant access to the service encryption key. See Create Policy to Access User-Managed Encryption Keys for Encrypting Boot Volumes, Block Volumes, and/or File Systems.

        • Pod communication: When the cluster's Network type is VCN-native pod networking, change how pods in the node pool communicate with each other using a pod subnet:
          • Subnet: A regional subnet configured to host pods. The pod subnet you specify can be public or private. In some situations, the worker node subnet and the pod subnet can be the same subnet (in which case, Oracle recommends defining security rules in network security groups rather than in security lists). See Subnet Configuration.
          • Network Security Group: Control access to the pod subnet using security rules defined for one or more network security groups (NSGs) that you specify (up to a maximum of five). You can use security rules defined for NSGs instead of, or as well as, those defined for security lists (NSGs are recommended). For more information about the security rules to specify for the NSG, see Security Rules for Worker Nodes and Pods.

          Optionally click Show advanced options to specify the maximum number of pods that you want to run on a single worker node in a node pool, up to a limit of 110. The limit of 110 is imposed by Kubernetes. If you want more than 31 pods on a single worker node, the shape you specify for the node pool must support three or more VNICs (one VNIC to connect to the worker node subnet, and at least two VNICs to connect to the pod subnet). See Maximum Number of VNICs and Pods Supported by Different Shapes.

          For more information about pod communication, see Pod Networking.
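
          To confirm the pod capacity a worker node reports after the change, a minimal sketch (the node name is a placeholder):

          kubectl get node <node-name> -o jsonpath='{.status.allocatable.pods}'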

      2. Either accept the existing values for advanced node pool options, or click Show advanced options and specify alternatives as follows:

        • Cordon and drain: Change when and how to cordon and drain worker nodes before terminating them.

          • Eviction grace period (mins): The length of time to allow for cordoning and draining worker nodes before terminating them. Either accept the default (60 minutes) or specify an alternative. For example, when scaling down a node pool or changing its placement configuration, you might want to allow 30 minutes to cordon worker nodes and drain them of their workloads. To terminate worker nodes immediately, without cordoning and draining them, specify 0 minutes.
          • Force terminate after grace period: Whether to terminate worker nodes at the end of the eviction grace period, even if they haven't been successfully cordoned and drained. By default, this option isn't selected.

            Select this option if you always want worker nodes terminated at the end of the eviction grace period, even if they haven't been successfully cordoned and drained.

            Deselect this option if you don't want worker nodes that haven't been successfully cordoned and drained to be terminated at the end of the eviction grace period. Node pools containing worker nodes that can't be terminated within the eviction grace period have the Needs attention status. The status of the work request that initiated the termination operation is set to Failed, and the termination operation is canceled. For more information, see Monitoring Clusters.

          For more information, see Notes on Cordoning and Draining Managed Nodes Before Termination.
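
          These settings can also be changed with the CLI, assuming the --node-eviction-node-pool-settings parameter listed in the CLI Command Reference (the duration is an ISO 8601 value; the OCID and values below are placeholders):

          oci ce node-pool update --node-pool-id <node-pool-ocid> \
            --node-eviction-node-pool-settings '{"evictionGraceDuration": "PT30M", "isForceDeleteAfterGraceDuration": false}'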

        • Initialization Script: (Optional) A different script for cloud-init to run on instances hosting worker nodes when each instance boots for the first time. The script you specify must be written in one of the formats supported by cloud-init (for example, cloud-config), and must be a supported file type (for example, .yaml). Specify the script as follows:
          • Choose Cloud-Init Script: Select a file containing the cloud-init script, or drag and drop the file into the box.
          • Paste Cloud-Init Script: Copy the contents of a cloud-init script, and paste it into the box.

          If you have not previously written cloud-init scripts for initializing worker nodes in clusters created by Container Engine for Kubernetes, you might find it helpful to click Download Custom Cloud-Init Script Template. The downloaded file contains the default logic provided by Container Engine for Kubernetes. You can add your own custom logic either before or after the default logic, but do not modify the default logic. For examples, see Example Usecases for Custom Cloud-init Scripts.
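
          As a sketch of the expected structure only (assuming a shell-script-based cloud-init file; the default logic itself comes from the downloaded template and is represented below by a placeholder comment):

          #!/bin/bash
          # Custom logic that runs before the default logic (for example, OS tuning).
          echo "pre-bootstrap custom step"
          # <default logic from the downloaded template goes here - do not modify it>
          # Custom logic that runs after the default logic.
          echo "post-bootstrap custom step"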

        • Kubernetes Labels: (Optional) One or more labels (in addition to a default label) to add to worker nodes in the node pool to enable the targeting of workloads at specific node pools. For example, to exclude all the nodes in a node pool from the list of backend servers in a load balancer backend set, specify node.kubernetes.io/exclude-from-external-load-balancers=true (see node.kubernetes.io/exclude-from-external-load-balancers).
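
          To verify the labels applied to worker nodes, or to list only the nodes that carry a particular label, a minimal sketch (the label key and value are placeholders):

          kubectl get nodes --show-labels
          kubectl get nodes -l <label-key>=<label-value>
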
        • Public SSH Key: (Optional) A different public key portion of the key pair you want to use for SSH access to the nodes in the node pool. The public key is installed on all worker nodes in the cluster. Note that if you don't specify a public SSH key, Container Engine for Kubernetes will provide one. However, since you won't have the corresponding private key, you will not have SSH access to the worker nodes. Note that you cannot use SSH to directly access worker nodes in private subnets (see Connecting to Managed Nodes in Private Subnets Using SSH).
      3. Click Save Changes to save the updated properties.
    8. (Optional) In the case of a virtual node pool and virtual nodes, change properties as follows:

      1. Click Edit and specify:
        • Name: A different name for the node pool. Avoid entering confidential information.
        • Node count: A different number of virtual nodes to create in the virtual node pool, placed in the availability domains you select, and in the regional subnet (recommended) or AD-specific subnet you specify for each availability domain. See Scaling Node Pools.
        • Node Placement Configuration:
          • Availability domain: An availability domain in which to place virtual nodes.
          • Fault domains: (Optional) One or more fault domains in the availability domain in which to place virtual nodes.

          Optionally click Another Row to select more domains and subnets in which to place virtual nodes.

          When the virtual nodes are created, they're distributed as evenly as possible across the availability domains and fault domains you select. If you don't select any fault domains for a particular availability domain, the virtual nodes are distributed as evenly as possible across all the fault domains in that availability domain.

        • Virtual Node Communication:
          • Subnet: A different regional subnet (recommended) or AD-specific subnet configured to host virtual nodes. If you specified load balancer subnets, the virtual node subnets must be different. The subnets you specify can be private (recommended) or public. We recommend that the pod subnet and the virtual node subnet are the same subnet (in which case, the virtual node subnet must be private). For more information, see Subnet Configuration.
          • Use security rules in Network Security Group (NSG): Control access to the virtual node subnet using security rules defined for one or more network security groups (NSGs) that you specify (up to a maximum of five). You can use security rules defined for NSGs instead of, or as well as, those defined for security lists (NSGs are recommended). For more information about the security rules to specify for the NSG, see Security Rules for Worker Nodes and Pods.
        • Pod Communication:
          • Subnet: A different regional subnet configured to host pods. The pod subnet you specify for virtual nodes must be private. We recommend that the pod subnet and the virtual node subnet are the same subnet (in which case, Oracle recommends defining security rules in network security groups rather than in security lists). For more information, see Subnet Configuration.
          • Use security rules in Network Security Group (NSG): Control access to the pod subnet using security rules defined for one or more network security groups (NSGs) that you specify (up to a maximum of five). You can use security rules defined for NSGs instead of, or as well as, those defined for security lists (NSGs are recommended). For more information about the security rules to specify for the NSG, see Security Rules for Worker Nodes and Pods.

          For more information about pod communication, see Pod Networking.

        • Kubernetes labels and taints: (Optional) Enable the targeting of workloads at specific node pools by adding labels and taints to virtual nodes:
          • Labels: One or more labels (in addition to a default label) to add to virtual nodes in the virtual node pool to enable the targeting of workloads at specific node pools.
          • Taints: One or more taints to add to virtual nodes in the virtual node pool. Taints enable virtual nodes to repel pods, ensuring that pods don't run on virtual nodes in a particular virtual node pool. You can apply taints only to virtual nodes.

          For more information, see Assigning Pods to Nodes in the Kubernetes documentation.
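
          To inspect the labels and taints that a virtual node carries, a minimal sketch (the node name is a placeholder):

          # Labels and taints appear in the node description
          kubectl describe node <virtual-node-name>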

      2. Click Save Changes to save the updated properties.
    9. Use the Node pool tags tab and the Node tags tab to add or modify tags applied to the node pool, and tags applied to compute instances hosting worker nodes in the node pool. Tagging enables you to group disparate resources across compartments, and also enables you to annotate resources with your own metadata. See Tagging Kubernetes Cluster-Related Resources.
    10. Under Resources:
      • Click Nodes to see information about specific worker nodes in a managed node pool. Optionally edit the configuration details of a specific worker node by clicking the worker node's name.
      • Click Virtual Nodes to see information about specific virtual nodes in a virtual node pool.
      • Click Metrics to monitor the health, capacity, and performance of a managed node pool. For more information, see Container Engine for Kubernetes Metrics.
      • Click Work requests to:
        • Get the details of a particular work request for the node pool resource.
        • List the work requests for the node pool resource.

        For more information, see Viewing Work Requests.
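
        The same information is available through the CLI. A minimal sketch (the compartment and work request OCIDs are placeholders):

        oci ce work-request list --compartment-id <compartment-ocid>
        oci ce work-request get --work-request-id <work-request-ocid>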

  • Use the oci ce node-pool update command and required parameters to update a managed node pool:

    oci ce node-pool update --node-pool-id <node-pool-ocid> [OPTIONS]
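
    For example, a minimal sketch that renames a node pool and changes its node count (the OCID and values are placeholders; parameter names are as listed in the CLI Command Reference):

    oci ce node-pool update --node-pool-id <node-pool-ocid> --name <new-name> --size 4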

    For a complete list of parameters and values for CLI commands, see the CLI Command Reference.

  • Run the UpdateNodePool operation to update a managed node pool.