Upgrading Clusters to Newer Kubernetes Versions
Find out about the different ways to upgrade control plane nodes and worker nodes to newer Kubernetes versions using Kubernetes Engine (OKE).
When a new version of Kubernetes has been released and Kubernetes Engine supports that version, you can upgrade the Kubernetes version running on control plane nodes and worker nodes in a cluster.
The control plane nodes and worker nodes that comprise the cluster can run different versions of Kubernetes, provided you follow the Kubernetes version skew support policy described in the Kubernetes documentation.
You upgrade control plane nodes and worker nodes differently:
- Control plane node upgrade: You upgrade control plane nodes by upgrading the cluster and specifying a more recent Kubernetes version for the cluster. Control plane nodes running older versions of Kubernetes are upgraded. Because Kubernetes Engine distributes the Kubernetes Control Plane on multiple Oracle-managed control plane nodes to ensure high availability (distributed across different availability domains in a region where supported), you're able to upgrade the Kubernetes version running on control plane nodes with zero downtime.
Having upgraded control plane nodes to a new version of Kubernetes, you can subsequently create new node pools with worker nodes running the newer version. Alternatively, you can continue to create new node pools with worker nodes running older versions of Kubernetes (provided those older versions are compatible with the Kubernetes version running on the control plane nodes).
For more information about control plane node upgrade, see Upgrading the Kubernetes Version on Control Plane Nodes in a Cluster. A minimal command sketch also follows this list.
- Worker node upgrade: You upgrade managed nodes, self-managed nodes, and virtual nodes differently:
- Managed node upgrade: You upgrade worker nodes in one of the following ways:
- By performing an 'in-place' upgrade of a node pool in the cluster, specifying a more recent Kubernetes version for the existing node pool, and then cycling the nodes to automatically replace all existing worker nodes.
- By performing an 'in-place' upgrade of a node pool in the cluster, specifying a more recent Kubernetes version for the existing node pool, and then manually replacing each existing worker node with a new worker node.
- By performing an 'out-of-place' upgrade of a node pool in the cluster, replacing the original node pool with a new node pool for which you've specified a more recent Kubernetes version.
For more information about managed node upgrade, see Upgrading Managed Nodes to a Newer Kubernetes Version. The 'in-place' approach is illustrated in the command sketch after this list.
- Self-managed node upgrade: You upgrade self-managed nodes by replacing an existing self-managed node with a new self-managed node hosted on a new compute instance. For more information about self-managed node upgrade, see Upgrading Self-Managed Nodes to a Newer Kubernetes Version by Replacing an Existing Self-Managed Node.
- Virtual node upgrade: You upgrade virtual nodes by upgrading the control plane nodes in a cluster. When you upgrade the Kubernetes version running on control plane nodes, the virtual nodes in every virtual node pool in the cluster are also automatically upgraded to that Kubernetes version. For more information about virtual node upgrade, see Upgrading Virtual Nodes to a Newer Kubernetes Version.
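As an illustration of the control plane upgrade and the 'in-place' managed node upgrade described above, the following is a minimal sketch using the OCI CLI (assuming the CLI is installed and configured for your tenancy); the OCIDs, node name, and Kubernetes version are placeholders that you replace with your own values:

# Upgrade the cluster, which upgrades the Oracle-managed control plane nodes.
oci ce cluster update --cluster-id <cluster-ocid> --kubernetes-version <new-version>

# Perform an 'in-place' upgrade of an existing node pool, so that new and replacement
# worker nodes are created with the newer Kubernetes version.
oci ce node-pool update --node-pool-id <node-pool-ocid> --kubernetes-version <new-version>

# Existing worker nodes keep their current version until they are replaced. If you replace
# nodes manually rather than cycling them, drain each node before replacing it:
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

Cycling the nodes in the node pool instead replaces the existing worker nodes automatically; see Upgrading Managed Nodes to a Newer Kubernetes Version for details.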
To find out more about the Kubernetes versions currently and previously supported by Kubernetes Engine, see Supported Versions of Kubernetes.
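If you use the OCI CLI, one way to list the Kubernetes versions that Kubernetes Engine currently makes available is the cluster options command shown below (a sketch; treat Supported Versions of Kubernetes as the authoritative list):

# List the Kubernetes versions currently available to clusters.
oci ce cluster-options get --cluster-option-id all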
Notes about Upgrading Clusters
Note the following when upgrading clusters:
- Kubernetes Engine upgrades the Kubernetes version running on control plane nodes only when you explicitly initiate the upgrade operation.
- After upgrading control plane nodes to a newer version of Kubernetes, you cannot downgrade the control plane nodes to an earlier Kubernetes version.
- Before you upgrade the version of Kubernetes running on the control plane nodes, it is your responsibility to test that applications deployed on the cluster are compatible with the new Kubernetes version. For example, before upgrading the existing cluster, you might create a new separate cluster with the new Kubernetes version to test your applications.
- The versions of Kubernetes running on the control plane nodes and the worker nodes must be compatible. That is, the Kubernetes version on the control plane nodes must be no more than two minor versions (or three minor versions, starting from Kubernetes version 1.28) ahead of the Kubernetes version on the worker nodes. See the Kubernetes version skew support policy described in the Kubernetes documentation; the commands after this list show one way to check the versions currently in use.
- If the version of Kubernetes currently running on the control plane nodes is more than one minor version behind the most recent supported version, you are given a choice of versions to upgrade to. If you want to upgrade to a Kubernetes version that is more than one minor version ahead of the version currently running on the control plane nodes, you must upgrade to each intermediate version in sequence, without skipping versions (as described in the Kubernetes documentation).
- To successfully upgrade control plane nodes in a cluster, the Kubernetes Dashboard service must be of type ClusterIP. If the Kubernetes Dashboard service is not of type ClusterIP (for example, if the service is of type NodePort), the upgrade will fail. In this case, change the type of the Kubernetes Dashboard service back to ClusterIP, for example by entering
kubectl -n kube-system edit service kubernetes-dashboard
and changing the type. The commands after this list show one way to confirm the service type.
- Prior to Kubernetes version 1.14, Kubernetes Engine created clusters with kube-dns as the DNS server. From Kubernetes version 1.14 onwards, Kubernetes Engine creates clusters with CoreDNS as the DNS server. When you upgrade a cluster created by Kubernetes Engine from an earlier version to Kubernetes 1.14 or later, the cluster's kube-dns server is automatically replaced with the CoreDNS server. Note that if you customized kube-dns behavior using the original kube-dns ConfigMap, those customizations are not carried forward to the CoreDNS ConfigMap. You must create and apply a new ConfigMap containing the customizations to override settings in the CoreDNS Corefile. For more information about upgrading to CoreDNS, see Configuring DNS Servers for Kubernetes Clusters.
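Before starting an upgrade, assuming kubectl is configured for the cluster, you can confirm the version skew and the Kubernetes Dashboard service type with commands such as the following (the service name and namespace match the kubectl edit command shown in the note above):

# Show the Kubernetes version of the control plane (the server version).
kubectl version

# Show the kubelet version reported by each worker node (the VERSION column).
kubectl get nodes

# Show the type of the Kubernetes Dashboard service; it must be ClusterIP before you upgrade.
kubectl -n kube-system get service kubernetes-dashboard -o jsonpath='{.spec.type}'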