Comparing Virtual Nodes with Managed Nodes
When creating a node pool with Container Engine for Kubernetes, you specify the type of worker node to create in the node pool as one of the following:
- Virtual nodes: Virtual nodes are fully managed by Oracle. See Virtual Nodes and Virtual Node Pools.
- Managed nodes: Managed nodes run on compute instances (either bare metal or virtual machine) in your tenancy, and are at least partly managed by you. See Managed Nodes and Managed Node Pools.
You can only create virtual nodes in enhanced clusters. You can create managed nodes in both basic clusters and enhanced clusters.
All references to 'nodes' and 'worker nodes' in the Container Engine for Kubernetes documentation refer to both virtual nodes and managed nodes, unless explicitly stated otherwise.
Virtual Nodes and Virtual Node Pools
Virtual nodes run in the Container Engine for Kubernetes tenancy. You create virtual nodes by creating a virtual node pool. Virtual nodes and virtual node pools are fully managed by Oracle.
Virtual nodes provide a 'serverless' Kubernetes experience, enabling you to run containerized applications at scale without the operational overhead of upgrading the data plane infrastructure and managing the capacity of clusters.
You can only create virtual nodes and node pools in enhanced clusters.
Notable features supported differently by virtual nodes
Some features are supported differently when using virtual nodes rather than managed nodes:
- Resource Allocation: Resource allocation is at the pod level, rather than at the worker node level. Consequently, you specify CPU and memory resource requirements (as requests and limits) in the pod specification, rather than for the worker nodes in a node pool. See CPU and Memory Resources Allocated to Pods Provisioned by Virtual Nodes. (See the sketch after this list.)
- Load Balancing: Load balancing is between pods, rather than between worker nodes (as is the case with managed nodes). In clusters with virtual nodes, load balancer security list management is never enabled and you always have to manually configure security rules. Load balancers distribute traffic among pods' IP addresses and an assigned node port. When connecting to pods running on virtual nodes, use the syntax `<pod-ip>:<nodeport>`, rather than `<node-ip>:<nodeport>`. If you use different subnets for pods and nodes, configure node port ingress on the pod subnet.
- Pod Networking: Only VCN-Native Pod Networking is supported (the flannel CNI plugin is not supported). Moreover, support is slightly different when using virtual nodes:
  - Only one VNIC is attached to each virtual node.
  - IP addresses are not pre-allocated before pods are created.
  - The VCN-Native Pod Networking CNI plugin is not shown as running in the kube-system namespace.
  - Since only VCN-Native Pod Networking is supported, the pod subnet route table must have route rules defined for a NAT gateway (not an internet gateway) and a service gateway.
- Autoscaling: Virtual nodes automatically scale to support 500 pods. Because Oracle manages the underlying resources for virtual nodes, it is easier to work with the Kubernetes Horizontal Pod Autoscaler. It's not necessary to use the Kubernetes Cluster Autoscaler (which is not yet supported with virtual nodes in any case). The Kubernetes Vertical Pod Autoscaler is not supported with virtual nodes.
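For illustration, the following is a minimal sketch of a pod and a load balancer Service for a cluster with virtual nodes. The names, labels, image, and resource values are placeholders rather than values taken from this documentation; see CPU and Memory Resources Allocated to Pods Provisioned by Virtual Nodes for the rules that actually govern the values you choose.

```yaml
# Illustrative sketch only: names, labels, image, and resource values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hello-vnode
  labels:
    app: hello-vnode
spec:
  containers:
  - name: hello
    image: nginx:1.25
    ports:
    - containerPort: 80
    resources:
      # With virtual nodes, CPU and memory are allocated per pod from these
      # values, rather than from the shape of a worker node in a node pool.
      requests:
        cpu: "1"
        memory: 2Gi
      limits:
        cpu: "1"
        memory: 2Gi
---
# A Service of type LoadBalancer: with virtual nodes, the load balancer
# distributes traffic among pod IP addresses and the assigned node port,
# and you must configure the security rules for this traffic yourself.
apiVersion: v1
kind: Service
metadata:
  name: hello-vnode-lb
spec:
  type: LoadBalancer
  selector:
    app: hello-vnode
  ports:
  - port: 80
    targetPort: 80
```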
Notable Kubernetes features and capabilities not supported when using virtual nodes
Some Kubernetes features and capabilities are not supported, or not yet available, when using virtual nodes rather than managed nodes.
Kubernetes features not supported | Additional information |
---|---|
Liveness and readiness probes of type gRPC, exec, and TCP. probe.terminationGracePeriodSeconds. | See the example after this table. |
Liveness and readiness probes can only use HTTP (not HTTPS). | No additional information. |
StartupProbe | No additional information. |
Live logs | No additional information. |
Ephemeral containers | No additional information. |
Init-containers | No additional information. |
Unsupported Volume Types | Only a limited set of volume types is supported. |
Maximum of 1 volume of type emptyDir can currently be defined in the pod spec. | No additional information. |
Pod (certain fields) | No additional information. |
Pod securityContext (certain fields) | No additional information. |
Container securityContext (certain fields) | No additional information. |
Container.Port (certain fields) | No additional information. |
Container (certain fields) | Note that Kubernetes adds TerminationMessagePolicy and TerminationMessagePath by default. |
Container port range (1, 65535) cannot conflict with NodePort range (30000-32767). | No additional information. |
Pod.Volumes.EmptyDirVolumeSource:SizeLimit | No additional information. |
Pod.Volumes.EmptyDirVolumeSource:Medium - can only be "" or "Memory" | No additional information. |
Pod:Volumes - Mode must be specified as 0644 for certain volume types. | No additional information. |
Pod:Volumes - if DefaultMode is specified for certain volume types, DefaultMode must be 0644. | No additional information. |
Container.Resources.Requests | No additional information. |
Volumes:DownwardAPI:ResourceFieldRef | No additional information. |
TerminationGracePeriodSeconds | No additional information. |
DeletionGracePeriodSeconds | No additional information. |
Exec Container | No additional information. |
port-forward and proxy | No additional information. |
UpdatePod requests with mutations to pod.spec.containers[].image | No additional information. |
Propagation to pods of updates to mounted configmaps and secrets | No additional information. |
Container-level metrics in virtual kubelet metrics endpoint | No additional information. |
Container:ResourceRequirements Subcore | No additional information. |
Container stdin/stdinOnce, tty | No additional information. |
Preserve client IP addresses when externalTrafficPolicy: Local | No additional information. |
ImagePullSecret types other than config and configJson | No additional information. |
ProjectedVolumeSource:ServiceAccountTokenProjection:ExpirationSeconds | No additional information. |
Kubernetes daemonsets. | No additional information. |
Network providers that support NetworkPolicy resources alongside the CNI plugin used in the cluster (such as Calico and Cilium). | No additional information. |
Flannel and other third-party CNI plugins. | Virtual nodes only support the OCI VCN-Native Pod Networking CNI plugin. |
The kubectl attach command to interact with a process that is already running inside an existing container. | No additional information. |
Persistent volume claims (PVCs). | No additional information. |
Service mesh products | Such as Oracle Cloud Infrastructure Service Mesh and Istio Service Mesh. |
Network policies (the Kubernetes NetworkPolicy resource) | No additional information. |
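As a concrete illustration of the probe-related rows above, the following hypothetical container spec would not be accepted on virtual nodes: it combines an exec liveness probe, an HTTPS readiness probe, and a startupProbe, all of which the table lists as unsupported. The pod name and image are placeholders.

```yaml
# Hypothetical example of probe settings that virtual nodes do not support;
# the pod name and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: probe-example
spec:
  containers:
  - name: app
    image: nginx:1.25
    livenessProbe:
      exec:                 # exec (and gRPC, TCP) probes are not supported
        command: ["cat", "/tmp/healthy"]
    readinessProbe:
      httpGet:
        path: /healthz
        port: 80
        scheme: HTTPS       # probes can only use HTTP, not HTTPS
    startupProbe:           # startupProbe is not supported
      httpGet:
        path: /healthz
        port: 80
```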
Notable Container Engine for Kubernetes features and capabilities not supported when using virtual nodes
Some Container Engine for Kubernetes features and capabilities are not supported, or not yet available, when using virtual nodes rather than managed nodes.
Container Engine for Kubernetes features not supported | Additional information |
---|---|
SSH connections to worker nodes (including via a bastion) | Not available. |
Use of custom cloud-init scripts | Not available. |
Node Doctor script | Not available. |
Support for Kubernetes versions prior to version 1.25 | Virtual nodes require the cluster to be running at least Kubernetes version 1.25. |
Mixed clusters, containing both virtual nodes and managed nodes. | Not yet available. |
Autoscale the number of virtual nodes. | Not yet available. |
Capacity reservations to provision virtual nodes. | Not yet available. |
Pods with Intel, Arm, and GPU shapes. | Not yet available. |
Setting up an Nginx ingress controller as described in Example: Setting Up an Ingress Controller on a Cluster | Not yet available. |
Common deployments not supported, or supported differently, when using virtual nodes
The following common deployments are not supported when using virtual nodes rather than managed nodes, or are supported differently:
Deployment | Notes |
---|---|
kube-proxy in the kube-system namespace, and the kube-proxy cluster add-on | kube-proxy runs in pods scheduled on virtual nodes, but is not deployed in the kube-system namespace. |
Kubernetes Dashboard | Not supported when using virtual nodes. |
Nginx ingress controller | Not supported when using virtual nodes. |
Kubernetes Cluster Autoscaler | Not supported when using virtual nodes. |
Vertical Pod Autoscaler | Not supported when using virtual nodes. |
Kubernetes Metrics Server | Deployed differently when using virtual nodes (see Deploying the Kubernetes Metrics Server on a Cluster Using Kubectl). |
Managed Nodes and Managed Node Pools
Managed nodes run on compute instances (either bare metal or virtual machine) in your tenancy. You create managed nodes by creating a managed node pool. Managed nodes and managed node pools are managed by you.
As you are responsible for managing managed nodes, you have the flexibility to configure them to meet your specific requirements. You are responsible for upgrading Kubernetes on managed nodes, and for managing cluster capacity.
When using managed nodes, you pay for the compute instances that execute applications.
You can create managed nodes and node pools in both basic clusters and enhanced clusters.
Notable features supported differently by managed nodes
Some features are supported differently when using managed nodes rather than virtual nodes:
- Resource Allocation: Resource allocation is at the worker node level, rather than at the pod level. Consequently, you specify CPU and memory resource requirements for the worker nodes in a node pool, rather than in the pod specification.
- Load Balancing: Load balancing is between worker nodes, rather than between pods (as is the case with virtual nodes).
- Pod Networking: Both the VCN-Native Pod Networking CNI plugin and the flannel CNI plugin are supported.
- Autoscaling: Both the Kubernetes Cluster Autoscaler and the Vertical Pod Autoscaler are supported.
Notable features not supported, or not yet available, when using managed nodes
- Kubernetes taints