Managed nodes: Managed nodes run on compute instances (either bare metal or virtual machine) in your tenancy, and are at least partly managed by you. See Managed Nodes and Managed Node Pools.
You can create virtual nodes only in enhanced clusters. You can create managed nodes in both basic clusters and enhanced clusters.
All references to 'nodes' and 'worker nodes' in the Kubernetes Engine documentation refer to both virtual nodes and managed nodes, unless explicitly stated otherwise.
Virtual Nodes and Virtual Node Pools
Virtual nodes run in the Kubernetes Engine tenancy. You create virtual nodes by creating a virtual node pool. Virtual nodes and virtual node pools are fully managed by Oracle.
Virtual nodes provide a 'serverless' Kubernetes experience, enabling you to run containerized applications at scale without the operational overhead of upgrading the data plane infrastructure and managing the capacity of clusters.
You can create virtual nodes and virtual node pools only in enhanced clusters.
Notable features supported differently by virtual nodes
Some features are supported differently when using virtual nodes rather than managed nodes:
Resource Allocation: Resource allocation is at the pod level, rather than at the worker node level. Consequently, you specify CPU and memory resource requirements (as requests and limits) in the pod specification, rather than for the worker nodes in a node pool; see the pod specification sketch after this list. See also Resources Allocated to Pods Provisioned by Virtual Nodes.
Load Balancing: Load balancing is between pods, rather than between worker nodes (as is the case with managed nodes). In clusters with virtual nodes, load balancer security list management is never enabled, so you always have to configure security rules manually. Load balancers distribute traffic among pods' IP addresses and an assigned node port, so when connecting to pods running on virtual nodes, use the syntax <pod-ip>:<nodeport> rather than <node-ip>:<nodeport>. If you use different subnets for pods and nodes, configure node port ingress on the pod subnet. See the Service sketch after this list.
Pod Networking: Only VCN-Native Pod Networking is supported (the flannel CNI plugin is not supported). Moreover, support is slightly different when using virtual nodes:
Only one VNIC is attached to each virtual node.
IP addresses are not pre-allocated before pods are created.
The VCN-Native Pod Networking CNI plugin is not shown as running in the kube-system namespace.
Since only VCN-Native Pod Networking is supported, the pod subnet route table must have route rules defined for a NAT gateway (not an internet gateway) and a service gateway.
Autoscaling: Virtual nodes automatically scale to support up to 500 pods. Because Oracle manages the underlying resources for virtual nodes, it is easier to work with the Kubernetes Horizontal Pod Autoscaler (see the example manifest after this list). The Kubernetes Cluster Autoscaler is not necessary, and is not yet supported with virtual nodes in any case. The Kubernetes Vertical Pod Autoscaler is not supported with virtual nodes.
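To illustrate pod-level resource allocation, here is a minimal pod specification sketch; the names, image, and values are illustrative, not prescriptive. On a virtual node, the resources provisioned for the pod are based on the requests and limits you specify here, rather than on a node shape:

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo          # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:              # with virtual nodes, sizing is per pod, not per node
          cpu: 500m
          memory: 512Mi
        limits:
          cpu: "1"
          memory: 1Gi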
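For load balancing, the following Service sketch (illustrative names and ports) exposes pods behind a load balancer. Because security list management is never enabled in clusters with virtual nodes, you must manually add security rules allowing traffic from the load balancer to the pod subnet on the assigned node port:

apiVersion: v1
kind: Service
metadata:
  name: app-lb                 # hypothetical name
spec:
  type: LoadBalancer           # provisions a load balancer for the cluster
  selector:
    app: app                   # pods matching this label receive traffic
  ports:
    - port: 80                 # listener port on the load balancer
      targetPort: 8080         # container port; backends are <pod-ip>:<nodeport>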
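Because horizontal scaling is the natural fit on virtual nodes, a minimal Horizontal Pod Autoscaler manifest might look like the following sketch (the target Deployment name and thresholds are illustrative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa                # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app                  # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%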
Notable Kubernetes features and capabilities not supported when using virtual nodes
Some Kubernetes features and capabilities are not supported, or not yet available, when using virtual nodes rather than managed nodes.
Flannel and other third-party CNI plugins are not supported. Virtual nodes support only the OCI VCN-Native Pod Networking CNI plugin.

Kubernetes daemonsets are not supported. For example, a manifest with the following kind is rejected:

apiVersion: apps/v1
kind: DaemonSet

Persistent volume claims (PVCs) are not supported.

Network providers that support NetworkPolicy resources alongside the cluster's CNI plugin (such as Calico and Cilium) are not supported.

Network policies (the Kubernetes NetworkPolicy resource) are not supported.

Service mesh products, such as Oracle Cloud Infrastructure Service Mesh and Istio Service Mesh, are not supported.

gRPC probes and probe.terminationGracePeriodSeconds are not supported. Liveness and readiness probes of type HTTP are supported, as are HTTPS probes, exec probes, and startup probes.
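For example, a pod using only probe types that virtual nodes support (HTTP liveness and readiness probes and a startup probe) might look like this sketch; the names, image, and paths are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo             # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:
        httpGet:               # HTTP probes are supported; gRPC probes are not
          path: /
          port: 80
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
      startupProbe:
        httpGet:
          path: /
          port: 80
        failureThreshold: 30
        periodSeconds: 5
      # note: probe.terminationGracePeriodSeconds is not supported on virtual nodes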
Managed Nodes and Managed Node Pools
Managed nodes run on compute instances (either bare metal or virtual machine) in your tenancy. You create managed nodes by creating a managed node pool. Managed nodes and managed node pools are managed by you.
Because you manage managed nodes yourself, you have the flexibility to configure them to meet your specific requirements. You are responsible for upgrading Kubernetes on managed nodes, and for managing cluster capacity.
When using managed nodes, you pay for the compute instances that execute applications.
You can create managed nodes and node pools in both basic clusters and enhanced clusters.