Creating Self-Managed Nodes
Find out how to create a new self-managed node and add it to an existing cluster.
You use the Compute service to create the compute instance on which to run a self-managed node. Having created the self-managed node, you then add it to an existing enhanced cluster.
If you want a self-managed node to use the flannel CNI plugin for pod networking, you can create the self-managed node using the Console, the CLI, or the API. If you want a self-managed node to use the OCI VCN-Native Pod Networking CNI plugin for pod networking, you can create the self-managed node using the CLI or the API.
To create a self-managed node using the Console:
- Create the cloud-init script containing the Kubernetes API private endpoint and base64-encoded CA certificate of the enhanced cluster to which you want to add the self-managed node. See Creating Cloud-init Scripts for Self-managed Nodes.
- Create a new compute instance to host the self-managed node:
- Open the navigation menu and click Compute. Under Compute, click Instances.
- Follow the instructions in the Compute service documentation to create a new compute instance. Note that appropriate policies must exist to allow the new compute instance to join the enhanced cluster. See Creating a Dynamic Group and a Policy for Self-Managed Nodes.
- In the Image and Shape section, click Change image.
- Click My images, select the Image OCID option, and then enter the OCID of the OKE Oracle Linux 7 (OL7) or Oracle Linux 8 (OL8) image you want to use. See Image Requirements.
- Click Show advanced options, and on the Management tab, select the Paste cloud-init script option.
- Copy and paste the cloud-init script containing the Kubernetes API private endpoint and base64-encoded CA certificate into the Cloud-init script field. See Creating Cloud-init Scripts for Self-managed Nodes.
- Click Create to create the compute instance to host the self-managed node.
When the compute instance is created, it is added as a self-managed node to the cluster with the Kubernetes API endpoint that you specified.
- Verify that the self-managed node has been added to the Kubernetes cluster and confirm the node's readiness status by entering:
kubectl get nodes
For example:
kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
10.0.103.170   Ready    <none>   40m   v1.25.4
- Confirm that labels have been added to the node and set as expected by entering:
kubectl get node <node-name> -o json | jq '.metadata.labels'
For example:
kubectl get node 10.0.103.170 -o json | jq '.metadata.labels'
{
  ...
  "displayName": "oke-self-managed-node",
  "oci.oraclecloud.com/node.info.byon": "true",
  ...
}
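For reference, the cloud-init script created in the first step generally invokes the bootstrap script that ships on the OKE OL7/OL8 images; the following is a minimal sketch only, and the exact path, flags, and values come from Creating Cloud-init Scripts for Self-managed Nodes:

```shell
#!/usr/bin/env bash
# Sketch of a self-managed node cloud-init script for an enhanced cluster.
# Replace both placeholders with the values from your cluster.
bash /etc/oke/oke-install.sh \
  --apiserver-endpoint "<kubernetes-api-private-endpoint>" \
  --kubelet-ca-cert "<base64-encoded-ca-certificate>"
```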
To create a self-managed node using the CLI:

Use the oci compute instance launch command and required parameters to create a self-managed node:
oci compute instance launch --availability-domain <availability-domain> --compartment-id <compartment-ocid> --shape <shape> --subnet-id <subnet-ocid> [OPTIONS]
For a complete list of flags and variable options for CLI commands, see the Command Line Reference.
Tips:
- Specify the name of the file containing the cloud-init script (required to add the compute instance to the cluster as a self-managed node) using the oci compute instance launch command's --user-data-file parameter. See Creating Cloud-init Scripts for Self-managed Nodes.
- Specify the image to use to create the self-managed node by setting the oci compute instance launch command's --image-id parameter. See Image Requirements.
- If you want the self-managed node to use the OCI VCN-Native Pod Networking CNI plugin for pod networking, add the --metadata parameter to the oci compute instance launch command, as follows:

  --metadata '{"oke-native-pod-networking": "true", "oke-max-pods": "<max-pods-per-node>", "pod-subnets": "<pod-subnet-ocid>", "pod-nsgids": "<nsg-ocid>"}'

  where:
  - "oke-native-pod-networking": "true" specifies that you want the self-managed node to use the OCI VCN-Native Pod Networking CNI plugin for pod networking.
  - "oke-max-pods": "<max-pods-per-node>" specifies the maximum number of pods that you want to run on the self-managed node.
  - "pod-subnets": "<pod-subnet-ocid>" specifies the OCID of the pod subnet that supports communication between pods and direct access to individual pods using private pod IP addresses.
  - "pod-nsgids": "<nsg-ocid>" optionally specifies the OCIDs of one or more network security groups (NSGs) containing security rules to route network traffic to pods. When specifying multiple NSGs, use a comma-delimited list in the format "pod-nsgids": "<nsg-ocid-1>,<nsg-ocid-2>".

For more information about the OCI VCN-Native Pod Networking CNI plugin, see Using the OCI VCN-Native Pod Networking CNI plugin for pod networking.
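Because the --metadata argument is one long JSON string, it can help to assemble it from shell variables so the long OCID lists stay readable. A minimal sketch, in which all OCID values are placeholders rather than real resources:

```shell
# Placeholder OCIDs; substitute your own values.
POD_SUBNET_OCID="ocid1.subnet.oc1.phx.aaaaaaa______4wka"
POD_NSG_OCIDS="ocid1.networksecuritygroup.oc1.phx.aaaaaaa______qfca,ocid1.networksecuritygroup.oc1.phx.aaaaaaa______ohea"
MAX_PODS="21"

# Assemble the JSON string expected by the --metadata parameter.
METADATA=$(printf '{"oke-native-pod-networking": "true", "oke-max-pods": "%s", "pod-subnets": "%s", "pod-nsgids": "%s"}' \
  "$MAX_PODS" "$POD_SUBNET_OCID" "$POD_NSG_OCIDS")

echo "$METADATA"
```

The assembled string is then passed to the command as --metadata "$METADATA".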
Examples:
Example 1: Command to create a self-managed node that uses the flannel CNI plugin for pod networking.
oci compute instance launch \
  --profile oc1 \
  --compartment-id ocid1.compartment.oc1..aaaaaaa______neoq \
  --subnet-id ocid1.subnet.oc1.phx.aaaaaaa______hzia \
  --shape VM.Standard2.2 \
  --availability-domain zkJl:PHX-AD-1 \
  --image-id ocid1.image.oc1.phx.aaaaaaa______lcra \
  --display-name smn \
  --user-data-file my-cloud-init-file \
  --ssh-authorized-keys-file my-ssh-key-file
Example 2: Command to create a self-managed node that uses the OCI VCN-Native Pod Networking CNI plugin for pod networking.
oci compute instance launch \
  --profile oc1 \
  --compartment-id ocid1.compartment.oc1..aaaaaaa______neoq \
  --subnet-id ocid1.subnet.oc1.phx.aaaaaaa______hzia \
  --shape VM.Standard2.2 \
  --availability-domain zkJl:PHX-AD-1 \
  --image-id ocid1.image.oc1.phx.aaaaaaa______lcra \
  --display-name smn-npn \
  --user-data-file my-cloud-init-file \
  --ssh-authorized-keys-file my-ssh-key-file \
  --metadata '{"oke-native-pod-networking": "true", "oke-max-pods": "21", "pod-subnets": "ocid1.subnet.oc1.phx.aaaaaaa______4wka", "pod-nsgids": "ocid1.networksecuritygroup.oc1.phx.aaaaaaa______qfca,ocid1.networksecuritygroup.oc1.phx.aaaaaaa______ohea"}'
Example 3: Alternative command to create a self-managed node that uses the OCI VCN-Native Pod Networking CNI plugin for pod networking.
oci compute instance launch \
  --profile oc1 \
  --compartment-id ocid1.compartment.oc1..aaaaaaa______neoq \
  --subnet-id ocid1.subnet.oc1.phx.aaaaaaa______hzia \
  --shape VM.Standard2.2 \
  --availability-domain zkJl:PHX-AD-1 \
  --image-id ocid1.image.oc1.phx.aaaaaaa______lcra \
  --display-name smn-npn \
  --metadata '{"ssh_authorized_keys": "ssh-rsa AAAAB3NzaC1yc2EAAAA...", "oke-native-pod-networking": "true", "oke-max-pods": "21", "pod-subnets": "ocid1.subnet.oc1.phx.aaaaaaa______4wka,ocid1.subnet.oc1.phx.aaaaaaa______hzia", "pod-nsgids": "ocid1.networksecuritygroup.oc1.phx.aaaaaaa______qfca,ocid1.networksecuritygroup.oc1.phx.aaaaaaa______", "user_data": "IyEvdXNyL2Jpbi9lbnYgYmFzaA..."}'
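Because the --metadata argument in Examples 2 and 3 must be valid JSON as a whole, a misplaced brace or quote makes the launch fail. A quick local validity check before launching (this sketch assumes python3 is on the PATH; the OCID is a placeholder):

```shell
# Validate the metadata JSON locally before passing it to the CLI.
METADATA='{"oke-native-pod-networking": "true", "oke-max-pods": "21", "pod-subnets": "ocid1.subnet.oc1.phx.aaaaaaa______4wka"}'
printf '%s' "$METADATA" | python3 -m json.tool > /dev/null && echo "metadata JSON is valid"
```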
To create a self-managed node using the API:

Run the LaunchInstance operation to create a self-managed node.
If you want the self-managed node to use the OCI VCN-Native Pod Networking CNI plugin for pod networking, use the metadata attribute to specify values for the following keys:
- oke-native-pod-networking: Set to true to specify that you want the self-managed node to use the OCI VCN-Native Pod Networking CNI plugin for pod networking.
- oke-max-pods: The maximum number of pods that you want to run on the self-managed node.
- pod-subnets: The OCID of the pod subnet that supports communication between pods and direct access to individual pods using private pod IP addresses.
- pod-nsgids: (optional) The OCIDs of one or more network security groups (NSGs) containing security rules to route network traffic to pods. When specifying multiple NSGs, use a comma-delimited list in the format "pod-nsgids": "<nsg-ocid-1>,<nsg-ocid-2>".
For more information about the OCI VCN-Native Pod Networking CNI plugin, see Using the OCI VCN-Native Pod Networking CNI plugin for pod networking.
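Putting the keys above together, the metadata attribute in the LaunchInstance request body would contain a fragment along these lines (a sketch with placeholder OCIDs, not a complete request; in practice the same metadata object also carries the cloud-init content, as Example 3 shows for the CLI):

```json
{
  "metadata": {
    "oke-native-pod-networking": "true",
    "oke-max-pods": "21",
    "pod-subnets": "ocid1.subnet.oc1.phx.aaaaaaa______4wka",
    "pod-nsgids": "ocid1.networksecuritygroup.oc1.phx.aaaaaaa______qfca"
  }
}
```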