Defining Kubernetes Services of Type LoadBalancer

Find out how to create different types of load balancer to distribute traffic between the nodes of a cluster you've created using Container Engine for Kubernetes (OKE).

Note

The ability to create new fixed-shape (dynamic) load balancers has reached End-of-Life. Therefore, Oracle recommends you implement Kubernetes services of type LoadBalancer as cost-efficient flexible load balancers rather than as fixed-shape (dynamic) load balancers (see Specifying Flexible Load Balancer Shapes). Existing fixed-shape (dynamic) load balancers will continue to be supported.

Note

The Oracle Cloud Infrastructure load balancers and network load balancers that Container Engine for Kubernetes provisions for Kubernetes services of type LoadBalancer appear in the Console. However, do not use the Console (or the Oracle Cloud Infrastructure CLI or API) to modify these load balancers and network load balancers. Any modifications you make will either be reverted by Container Engine for Kubernetes or will conflict with its operation and possibly result in service interruption. Instead, to change load balancer or network load balancer properties, modify the appropriate annotation in the manifest and re-apply the manifest.

Note

When load balancing across virtual nodes (as opposed to managed nodes), load balancing is actually across pods running on the virtual nodes rather than across the virtual nodes themselves.

When you define a Kubernetes service of type LoadBalancer to expose an application to the Internet or to a local network, you can specify how Container Engine for Kubernetes implements the service of type LoadBalancer:

  • Using an Oracle Cloud Infrastructure load balancer, set up in the Oracle Cloud Infrastructure Load Balancer service.

    An OCI load balancer is an OSI layer 4 (TCP) and layer 7 (HTTP) proxy, which supports features such as SSL termination and advanced HTTP routing policies. It provides the utmost flexibility, with responsive scaling up and down. You choose a custom minimum bandwidth and an optional maximum bandwidth, both between 10 Mbps and 8,000 Mbps. The minimum bandwidth is always available and provides instant readiness for your workloads. For more information about OCI load balancers, see Overview of Load Balancer.

    For more information about provisioning an OCI load balancer for a Kubernetes service of type LoadBalancer, see Provisioning OCI Load Balancers for Kubernetes Services of Type LoadBalancer.

  • Using an Oracle Cloud Infrastructure network load balancer, set up in the Oracle Cloud Infrastructure Network Load Balancer service.

    An OCI network load balancer is a non-proxy load balancing solution that performs pass-through load balancing of OSI layer 3 and layer 4 (TCP/UDP/ICMP) workloads. It offers an elastically scalable regional virtual IP (VIP) address that can scale up or down based on client traffic with no minimum or maximum bandwidth configuration requirement. It also provides the benefits of flow high availability, source and destination IP address, and port preservation. For more information about OCI network load balancers, see Overview of Flexible Network Load Balancer.

    For more information about provisioning an OCI network load balancer for a Kubernetes service of type LoadBalancer, see Provisioning OCI Network Load Balancers for Kubernetes Services of Type LoadBalancer.

Provisioning OCI Load Balancers for Kubernetes Services of Type LoadBalancer

This section describes how to provision an OCI load balancer for a Kubernetes service of type LoadBalancer.

An OCI load balancer is an OSI layer 4 (TCP) and layer 7 (HTTP) proxy, which supports features such as SSL termination and advanced HTTP routing policies. It provides the utmost flexibility, with responsive scaling up and down. You choose a custom minimum bandwidth and an optional maximum bandwidth, both between 10 Mbps and 8,000 Mbps. The minimum bandwidth is always available and provides instant readiness for your workloads.

For more information about OCI load balancers, see Overview of Load Balancer.

Provisioning an OCI load balancer for a Kubernetes service of type LoadBalancer enables you to:

  • load balance OSI layer 4 (TCP) and layer 7 (HTTP) traffic
  • terminate SSL/TLS at the load balancer

Note that when Container Engine for Kubernetes provisions an OCI load balancer for a Kubernetes service of type LoadBalancer, security rules to allow inbound and outbound traffic to and from the load balancer's subnet are created automatically by default. See Security Rules for Load Balancers and Network Load Balancers.

Use OCI load balancer metrics to monitor the health of an OCI load balancer provisioned for a Kubernetes service of type LoadBalancer (see Load Balancer Metrics).

Specifying the Annotation for an OCI Load Balancer

To provision an Oracle Cloud Infrastructure load balancer for a Kubernetes service of type LoadBalancer, define a service of type LoadBalancer that includes the following annotation in the metadata section of the manifest file:
oci.oraclecloud.com/load-balancer-type: "lb"

Note that lb is the default value of the oci.oraclecloud.com/load-balancer-type annotation. If you do not explicitly include the annotation in the service definition, the default value of the annotation is used.

For example, consider the following configuration file, nginx_lb.yaml. It defines a deployment (kind: Deployment) for the nginx app, followed by a definition of a service of type LoadBalancer (type: LoadBalancer) that balances http traffic on port 80 for the nginx app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

The first part of the configuration file defines an Nginx deployment, requesting that it be hosted on 3 pods running the nginx:latest image, and that the containers accept traffic on port 80.

The second part of the configuration file defines the Nginx service, which uses type LoadBalancer to balance Nginx traffic on port 80 amongst the available pods.

To create the deployment and service defined in nginx_lb.yaml while connected to your Kubernetes cluster, enter the command:

kubectl apply -f nginx_lb.yaml

This command outputs the following upon successful creation of the deployment and the load balancer:

deployment "my-nginx" created
service "my-nginx-svc" created

The load balancer may take a few minutes to go from a pending state to being fully operational. You can view the current state of your cluster by entering:

kubectl get all

The output from the above command shows the current state:


NAME                                  READY     STATUS    RESTARTS   AGE
po/my-nginx-431080787-0m4m8           1/1       Running   0          3m
po/my-nginx-431080787-hqqcr           1/1       Running   0          3m
po/my-nginx-431080787-n8125           1/1       Running   0          3m

NAME               CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
svc/kubernetes     203.0.113.1     <NONE>           443/TCP        3d
svc/my-nginx-svc   203.0.113.7     192.0.2.22       80:30269/TCP   3m

NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/my-nginx           3         3         3            3           3m

NAME                            DESIRED   CURRENT   READY     AGE
rs/my-nginx-431080787           3         3         3         3m

The output shows that the my-nginx deployment is running on 3 pods (the po/my-nginx entries), that the load balancer is running (svc/my-nginx-svc) and has an external IP (192.0.2.22) that clients can use to connect to the app that's deployed on the pods.
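
To confirm that the application is reachable through the load balancer, send a request to the external IP address. For example, assuming the EXTERNAL-IP shown in the sample output above:

curl http://192.0.2.22

The command returns the default Nginx welcome page served by the pods behind the load balancer.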

Terminating SSL/TLS at the Load Balancer

When Container Engine for Kubernetes provisions a load balancer for a Kubernetes service of type LoadBalancer, you can specify that you want to terminate SSL at the load balancer. This configuration is known as frontend SSL. To implement frontend SSL, you define a listener at a port such as 443, and associate an SSL certificate with the listener.

Note that you can implement full point-to-point SSL encryption between clients and application pods running on worker nodes. To do so, create a load balancer with SSL termination (as described in this section), and also associate an SSL certificate with the load balancer's backend set (see Implementing SSL/TLS between the Load Balancer and Worker Nodes).

This example provides a walkthrough of the configuration and creation of a load balancer with SSL support.

Consider the following configuration file, nginx-demo-svc-ssl.yaml, which defines an Nginx deployment and exposes it via a load balancer that serves http on port 80, and https on port 443. This sample creates an Oracle Cloud Infrastructure load balancer, by defining a service with a type of LoadBalancer (type: LoadBalancer).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/oci-load-balancer-tls-secret: ssl-certificate-secret
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 80

The Load Balancer's annotations are of particular importance. The ports on which to support https traffic are defined by the value of service.beta.kubernetes.io/oci-load-balancer-ssl-ports. You can declare multiple SSL ports by using a comma-separated list for the annotation's value. For example, you could set the annotation's value to "443, 3000" to support SSL on ports 443 and 3000.

The required TLS secret, ssl-certificate-secret, needs to be created in Kubernetes. This example creates and uses a self-signed certificate. However, in a production environment, the most common scenario is to use a public certificate that's been signed by a certificate authority.

The following command creates a self-signed certificate, tls.crt, with its corresponding key, tls.key:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"

Now that you have created the certificate, store it and its key as a secret in Kubernetes. The name of the secret must match the name specified in the service.beta.kubernetes.io/oci-load-balancer-tls-secret annotation in the service definition. Use the following command to create a TLS secret in Kubernetes, whose key and certificate values are set by --key and --cert, respectively.

kubectl create secret tls ssl-certificate-secret --key tls.key --cert tls.crt

You must create the Kubernetes secret before you can create the service, since the service references the secret in its definition. Create the service using the following command:

kubectl create -f nginx-demo-svc-ssl.yaml

Watch the service and wait for a public IP address (EXTERNAL-IP) to be assigned to the Nginx service (nginx-service) by entering:

kubectl get svc --watch

The output from the above command shows the load balancer IP to use to connect to the service.


NAME            CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
nginx-service   192.0.2.1      198.51.100.1     80:30274/TCP   5m

The load balancer is now running, which means the service can now be accessed as follows:

  • using http, by entering:
    curl http://198.51.100.1
  • using https, by entering:
    curl --insecure https://198.51.100.1

    The "--insecure" flag is used to access the service using https due to the use of self-signed certificates in this example. Do not use this flag in a production environment where the public certificate was signed by a certificate authority.

Note: When a cluster is deleted, a load balancer that was dynamically created when a service was created is not removed. Before deleting a cluster, delete the service; deleting the service also removes the load balancer. The syntax for this command is:

kubectl delete svc SERVICE_NAME

For example, to delete the service from the previous example, enter:

kubectl delete svc nginx-service

Updating the TLS Certificates of Existing Load Balancers

To update the TLS certificate of an existing load balancer:
  1. Obtain a new TLS certificate. In a production environment, the most common scenario is to use a public certificate that's been signed by a certificate authority.
  2. Create a new Kubernetes secret. For example, by entering:

    kubectl create secret tls new-ssl-certificate-secret --key new-tls.key --cert new-tls.crt
    
  3. Modify the service definition to reference the new Kubernetes secret by changing the service.beta.kubernetes.io/oci-load-balancer-tls-secret annotation in the service configuration. For example:
    
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
      annotations:
        service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443"
        service.beta.kubernetes.io/oci-load-balancer-tls-secret: new-ssl-certificate-secret
    spec:
      selector:
        app: nginx
      type: LoadBalancer
      ports:
      - name: http
        port: 80
        targetPort: 80
      - name: https
        port: 443
        targetPort: 80
  4. Update the service. For example, by entering:
    kubectl apply -f new-nginx-demo-svc-ssl.yaml
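
After re-applying the manifest, you can optionally confirm that the service now references the new Kubernetes secret by inspecting its annotations. For example:

kubectl describe service nginx-service

The Annotations section of the output should show the service.beta.kubernetes.io/oci-load-balancer-tls-secret annotation set to new-ssl-certificate-secret.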

Implementing SSL/TLS between the Load Balancer and Worker Nodes

When Container Engine for Kubernetes provisions a load balancer for a Kubernetes service of type LoadBalancer, you can specify that you want to implement SSL between the load balancer and the backend servers (worker nodes) in the backend set. This configuration is known as backend SSL. To implement backend SSL, you associate an SSL certificate with the load balancer's backend set.

Note that you can implement full point-to-point SSL encryption between clients and application pods running on worker nodes. To do so, associate an SSL certificate with the load balancer's backend set (as described in this section), and also create a load balancer with SSL termination (see Terminating SSL/TLS at the Load Balancer).

To specify the certificate to associate with the backend set, add the following annotation in the metadata section of the manifest file:

service.beta.kubernetes.io/oci-load-balancer-tls-backendset-secret: <value>

where <value> is the name of a Kubernetes secret you've created to contain a signed certificate and the private key to the certificate. Note that you must create the Kubernetes secret before you can create the service, since the service references the secret in its definition.

The following example creates and uses a self-signed certificate, which is usually acceptable for internal communication between the load balancer and the backend set. However, if you prefer, you could use a public certificate that's been signed by a certificate authority.

For example:

  1. Generate a private key by entering:

    openssl genrsa -out ca.key 2048
  2. Generate a certificate by entering:

    openssl req -x509 -new -nodes -key ca.key -subj "/CN=nginxsvc/O=nginxsvc" -days 10000 -out ca.crt
  3. Store the certificate and the key as a secret in Kubernetes by entering:

    kubectl create secret generic ca-ser-secret --from-file=tls.crt=ca.crt --from-file=tls.key=ca.key --from-file=ca.crt=ca.crt
  4. Define an Nginx deployment and expose it via a load balancer that serves http on port 80, and https on port 443. This sample creates an Oracle Cloud Infrastructure load balancer, by defining a service with a type of LoadBalancer (type: LoadBalancer).
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
      annotations:
        oci.oraclecloud.com/load-balancer-type: "lb"
        service.beta.kubernetes.io/oci-load-balancer-tls-backendset-secret: ca-ser-secret
        service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443" 
    spec:
      selector:
        app: nginx
      type: LoadBalancer
      ports:
      - name: http
        port: 80
        targetPort: 80
      - name: https
        port: 443
        targetPort: 443

Communication between the load balancer and the worker nodes in the backend set is encrypted using the key and certificate stored in the ca-ser-secret Kubernetes secret that you created earlier.

Specifying Alternative Load Balancer Shapes

The shape of an Oracle Cloud Infrastructure load balancer specifies its maximum total bandwidth (that is, ingress plus egress). By default, load balancers are created with a shape of 100Mbps. Other shapes are available, including 400Mbps and 8000Mbps.

To specify an alternative shape for a load balancer, add the following annotation in the metadata section of the manifest file:

service.beta.kubernetes.io/oci-load-balancer-shape: <value>

where <value> is the bandwidth of the shape (for example, 100Mbps, 400Mbps, 8000Mbps).

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    service.beta.kubernetes.io/oci-load-balancer-shape: 400Mbps
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Sufficient load balancer quota must be available in the region for the shape you specify. Enter the following kubectl command to confirm that load balancer creation did not fail due to lack of quota:

kubectl describe service <service-name>
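
For example, assuming the service is named my-nginx-svc as in the earlier examples:

kubectl describe service my-nginx-svc

If provisioning failed (for example, because of insufficient load balancer quota), the reason appears in the Events section of the output. You can also list the events for the service directly by entering kubectl get events --field-selector involvedObject.name=my-nginx-svc.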

Note that Oracle recommends you implement Kubernetes services of type LoadBalancer as cost-efficient flexible load balancers rather than as fixed-shape (dynamic) load balancers (see Specifying Flexible Load Balancer Shapes).

Specifying Flexible Load Balancer Shapes

The shape of an Oracle Cloud Infrastructure load balancer specifies its maximum total bandwidth (that is, ingress plus egress). As described in Specifying Alternative Load Balancer Shapes, you can specify different load balancer shapes.

You can also specify a flexible shape for an Oracle Cloud Infrastructure load balancer by defining a minimum and a maximum bandwidth for the load balancer.

To specify a flexible shape for a load balancer, add the following annotations in the metadata section of the manifest file:

service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "<min-value>"
service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "<max-value>"

where:

  • "<min-value>" is the minimum bandwidth for the load balancer, in Mbps (for example, "10")
  • "<max-value>" is the maximum bandwidth for the load balancer, in Mbps (for example, "100")

Note that you do not include a unit of measurement when specifying bandwidth values for flexible load balancer shapes (unlike for pre-defined shapes). For example, specify the minimum bandwidth as 10 rather than as 10Mbps.

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    service.beta.kubernetes.io/oci-load-balancer-shape: "flexible"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "10"
    service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "100"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Specifying Load Balancer Connection Timeout

When provisioning an Oracle Cloud Infrastructure load balancer for a Kubernetes service of type LoadBalancer, you can specify the maximum idle time (in seconds) allowed between two successive receive or two successive send operations between the client and backend servers.

To explicitly specify a maximum idle time, add the following annotation in the metadata section of the manifest file:

service.beta.kubernetes.io/oci-load-balancer-connection-idle-timeout: <value>

where <value> is the number of seconds.

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    service.beta.kubernetes.io/oci-load-balancer-connection-idle-timeout: "100"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Note that if you don't explicitly specify a maximum idle time, a default value is used. The default value depends on the type of listener:

  • for TCP listeners, the default maximum idle time is 300 seconds
  • for HTTP listeners, the default maximum idle time is 60 seconds

Specifying Listener Protocols

When Container Engine for Kubernetes provisions a load balancer for a Kubernetes service of type LoadBalancer, you can define the type of traffic accepted by the listener by specifying the protocol on which the listener accepts connection requests.

Note that if you don't explicitly specify a protocol, "TCP" is used as the default value.

To explicitly specify the load balancer listener protocol when Container Engine for Kubernetes provisions a load balancer for a Kubernetes service of type LoadBalancer, add the following annotation in the metadata section of the manifest file:

service.beta.kubernetes.io/oci-load-balancer-backend-protocol: <value>

where <value> is the protocol that defines the type of traffic accepted by the listener. For example, "HTTP". Valid protocols include "HTTP" and "TCP".

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    service.beta.kubernetes.io/oci-load-balancer-backend-protocol: "HTTP"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Specifying Security List Management Options When Provisioning an OCI Load Balancer

You can use the security list management feature to configure how security list rules are managed for an Oracle Cloud Infrastructure load balancer that Container Engine for Kubernetes provisions for a Kubernetes service of type LoadBalancer. This feature is useful if you are new to Kubernetes, or for basic deployments.

Note

You might encounter scalability and other issues if you use the Kubernetes security list management feature in complex deployments, and with tools like Terraform. For these reasons, Oracle does not recommend using the Kubernetes security list management feature in production environments.

To specify how the Kubernetes security list management feature manages security lists when Container Engine for Kubernetes provisions a load balancer for a Kubernetes service of type LoadBalancer, add the following annotation in the metadata section of the manifest file:

service.beta.kubernetes.io/oci-load-balancer-security-list-management-mode: <value>

where <value> is one of:

  • "All": All required security list rules for load balancer services are managed.
  • "Frontend": Only security list rules for ingress to load balancer services are managed. You have to set up a security rule that allows inbound traffic to the appropriate ports for node port ranges, the kube-proxy health port, and the health check port ranges.
  • "None": No security list management is enabled. You have to set up a security rule that allows inbound traffic to the appropriate ports for node port ranges, the kube-proxy health port, and the health check port ranges. Additionally, you have to set up security rules to allow inbound traffic to load balancers (see Security Rules for Load Balancers and Network Load Balancers).

Note that in clusters with managed nodes, if you don't explicitly specify a management mode or you specify an invalid value, all security list rules are managed (equivalent to "All"). However, in clusters with virtual nodes, security list management is never enabled and you always have to manually configure security rules (equivalent to "None").

Also note that there are limits to the number of ingress and egress rules that are allowed in a security list (see Security List Limits). If the number of ingress or egress rules exceeds the limit, and <value> is set to "All" or "Frontend", creating or updating the load balancer fails.

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    service.beta.kubernetes.io/oci-load-balancer-security-list-management-mode: "Frontend"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Provisioning OCI Network Load Balancers for Kubernetes Services of Type LoadBalancer

This section describes how to provision an OCI network load balancer for a Kubernetes service of type LoadBalancer.

An Oracle Cloud Infrastructure network load balancer is a non-proxy load balancing solution that performs pass-through load balancing of OSI layer 3 and layer 4 (TCP/UDP/ICMP) workloads. It offers an elastically scalable regional virtual IP (VIP) address that can scale up or down based on client traffic with no minimum or maximum bandwidth configuration requirement. It also provides the benefits of flow high availability, source and destination IP address, and port preservation.

For more information about Oracle Cloud Infrastructure network load balancers, see Overview of Flexible Network Load Balancer.

Provisioning an OCI network load balancer for a Kubernetes service of type LoadBalancer enables you to:

  • load balance traffic with a high throughput and low latency
  • preserve source and destination IP addresses and ports
  • handle TCP and UDP traffic

Note that when Container Engine for Kubernetes provisions an OCI network load balancer for a Kubernetes service of type LoadBalancer, security rules to allow inbound and outbound traffic to and from the network load balancer's subnet are not created automatically by default. You must define appropriate security rules to allow inbound and outbound traffic to and from the network load balancer's subnet. See Security Rules for Load Balancers and Network Load Balancers.

Use OCI network load balancer metrics to monitor the health of an OCI network load balancer provisioned for a Kubernetes service of type LoadBalancer (see Network Load Balancer Metrics).

Specifying the Annotation for an OCI Network Load Balancer

To provision a network load balancer for a Kubernetes service of type LoadBalancer, add the following annotation in the metadata section of the manifest file:

oci.oraclecloud.com/load-balancer-type: "nlb"
For example:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Note that lb is the default value of the oci.oraclecloud.com/load-balancer-type annotation. If you do not explicitly include the annotation in the service definition, the default value of the annotation is used and a load balancer (rather than a network load balancer) is provisioned.

Terminating Requests at the Receiving Node

When provisioning a network load balancer for a Kubernetes service of type LoadBalancer, you can specify that requests terminate at the client IP address specified in the headers of IP packets, rather than being proxied to other worker nodes in the cluster.

By default, requests are proxied to other worker nodes in the cluster.

Specifying that requests terminate at the client IP address (rather than being proxied) can improve performance in very large clusters with thousands of worker nodes by eliminating traffic between worker nodes. Specifying that requests terminate at the client IP address can also simplify implementation and remove potential security concerns by enabling you to set up security rules (in a network security group (recommended) and/or a security list) for the worker nodes in the cluster that only allow ingress traffic from the network load balancer's CIDR block.

To terminate requests at the client IP address, add the following setting in the spec section of the manifest file:

externalTrafficPolicy: Local

To proxy requests to other worker nodes in the cluster, add the following setting in the spec section of the manifest file:

externalTrafficPolicy: Cluster

Note that Cluster is the default value of the externalTrafficPolicy setting. If you do not explicitly include the setting in the service definition, the default value of the setting is used.

Also note that if externalTrafficPolicy is set to Cluster, client IP addresses are not preserved regardless of the value of the oci-network-load-balancer.oraclecloud.com/is-preserve-source annotation. Requests fail with an error if externalTrafficPolicy is set to Cluster and the oci-network-load-balancer.oraclecloud.com/is-preserve-source annotation is explicitly set to either true or false. See Preserving The Client IP Address.

To terminate requests at the client IP address, you must also have set up the following security rules:

  • You must have set up a security rule (in a network security group (recommended) and/or a security list) for the worker nodes in the cluster to allow ingress traffic from the CIDR block where the client connections are made, to all node ports (30000 to 32767). If the application is exposed to the Internet, set the security rule's Source CIDR block to 0.0.0.0/0. Alternatively, set the security rule's Source CIDR block to a specific CIDR block (for example, if the client connections come from a specific subnet).
    State: Stateful
    Source: 0.0.0.0/0 or subnet CIDR
    Protocol/Dest. Port: ALL/30000-32767
    Description: Allow worker nodes to receive connections through OCI Network Load Balancer.
  • You must have set up the ingress and egress security rules for the network load balancer, as described in Security Rules for Load Balancers and Network Load Balancers.

For example, here is a Kubernetes service definition to terminate requests at the client IP address (rather than being proxied):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"
    oci-network-load-balancer.oraclecloud.com/oci-network-security-groups: "ocid1.networksecuritygroup.oc1.phx.aaaaaa....vdfw"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - port: 80
  selector:
    app: nginx

Preserving The Client IP Address

When provisioning a network load balancer for a Kubernetes service of type LoadBalancer, you can specify whether to preserve, or prevent the preservation of, the client IP address in the headers of IP packets.

You only have the option to preserve client IP addresses when requests are terminated at the client IP addresses specified in the IP packet headers. That is, when the externalTrafficPolicy setting is set to Local. If externalTrafficPolicy is set to Cluster, client IP addresses are not preserved. See Terminating Requests at the Receiving Node.

To prevent the preservation of client IP addresses, add the following annotation in the metadata section of the manifest file:

oci-network-load-balancer.oraclecloud.com/is-preserve-source: "false"

To preserve the client IP address, add the following annotation in the metadata section of the manifest file:

oci-network-load-balancer.oraclecloud.com/is-preserve-source: "true"

Note that true is the default value of the oci-network-load-balancer.oraclecloud.com/is-preserve-source annotation. If you do not explicitly include the annotation in the service definition, the default value of the annotation is used.

Also note that if externalTrafficPolicy is set to Cluster, client IP addresses are not preserved regardless of the value of the oci-network-load-balancer.oraclecloud.com/is-preserve-source annotation. Requests fail with an error if externalTrafficPolicy is set to Cluster and the oci-network-load-balancer.oraclecloud.com/is-preserve-source annotation is explicitly set to either true or false. Therefore do not add the oci-network-load-balancer.oraclecloud.com/is-preserve-source annotation if externalTrafficPolicy is set to Cluster.

You can preserve client IP addresses when using managed node pools, but not when using virtual node pools.

To preserve the client IP address, you must also have set up the following security rules:

  • You must have set up a security rule (in a network security group (recommended) and/or a security list) for the worker nodes in the cluster to allow ingress traffic from the CIDR block where the client connections are made, to all node ports (30000 to 32767). If the application is exposed to the Internet, set the security rule's Source CIDR block to 0.0.0.0/0. Alternatively, set the security rule's Source CIDR block to a specific CIDR block (for example, if the client connections come from a specific subnet).
    State: Stateful
    Source: 0.0.0.0/0 or subnet CIDR
    Protocol/Dest. Port: ALL/30000-32767
    Description: Allow worker nodes to receive connections through OCI Network Load Balancer.
  • You must have set up the ingress and egress security rules for the network load balancer, as described in Security Rules for Load Balancers and Network Load Balancers.

For example, here is a Kubernetes service definition that prevents the preservation of the client IP address:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"
    oci-network-load-balancer.oraclecloud.com/oci-network-security-groups: "ocid1.networksecuritygroup.oc1.phx.aaaaaa....vdfw"
    oci-network-load-balancer.oraclecloud.com/is-preserve-source: "false"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - port: 80
  selector:
    app: nginx

Exposing TCP and UDP Applications

When Container Engine for Kubernetes provisions a network load balancer for a Kubernetes service of type LoadBalancer, you can define the type of traffic accepted by the listener by specifying the protocol on which the listener accepts connection requests.

Note that if you don't explicitly specify a protocol, "TCP" is used as the default value.

To explicitly specify the listener protocol when Container Engine for Kubernetes provisions a network load balancer for a Kubernetes service of type LoadBalancer, add the following setting in the spec section of the manifest file:

protocol: <value>

where <value> is the protocol that defines the type of traffic accepted by the listener. For example, "UDP". Valid protocols include "UDP" and "TCP".

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: UDP
  selector:
    app: nginx

Specifying the Backend Set Policy

When Container Engine for Kubernetes provisions a network load balancer for a Kubernetes service of type LoadBalancer, you can define a policy for the backend set to specify how to distribute incoming traffic to the backend servers. For more information, see Network Load Balancer Policies.

Note that if you don't explicitly specify a policy for the backend set, "FIVE_TUPLE" is used as the default value.

To specify a policy for the backend set when Container Engine for Kubernetes provisions a network load balancer for a Kubernetes service of type LoadBalancer, add the following annotation in the metadata section of the manifest file:

oci-network-load-balancer.oraclecloud.com/backend-policy: <value>

where <value> is one of:

  • "TWO_TUPLE": Routes incoming traffic based on 2-Tuple (source IP, destination IP) Hash.
  • "THREE_TUPLE": Routes incoming traffic based on 3-Tuple (source IP, destination IP, protocol) Hash.
  • "FIVE_TUPLE": Routes incoming traffic based on 5-Tuple (source IP and port, destination IP and port, protocol) Hash.

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"
    oci-network-load-balancer.oraclecloud.com/backend-policy: "THREE_TUPLE"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Specifying Security List Management Options When Provisioning an OCI Network Load Balancer

You can use the security list management feature to configure how security list rules are managed for an Oracle Cloud Infrastructure network load balancer that Container Engine for Kubernetes provisions for a Kubernetes service of type LoadBalancer. This feature is useful if you are new to Kubernetes, or for basic deployments.

Note

You might encounter scalability and other issues if you use the Kubernetes security list management feature in complex deployments, and with tools like Terraform. For these reasons, Oracle does not recommend using the Kubernetes security list management feature in production environments.

To specify how the Kubernetes security list management feature manages security lists when Container Engine for Kubernetes provisions a network load balancer for a Kubernetes service of type LoadBalancer, add the following annotation in the metadata section of the manifest file:

oci-network-load-balancer.oraclecloud.com/security-list-management-mode: <value>

where <value> is one of:

  • "All": All required security list rules for network load balancer services are managed.
  • "Frontend": Only security list rules for ingress to network load balancer services are managed. You have to set up a security rule that allows inbound traffic to the appropriate ports for node port ranges, the kube-proxy health port, and the health check port ranges.
  • "None": No security list management is enabled. You have to set up a security rule that allows inbound traffic to the appropriate ports for node port ranges, the kube-proxy health port, and the health check port ranges. Additionally, you have to set up security rules to allow inbound traffic to network load balancers (see Security Rules for Load Balancers and Network Load Balancers).

Note that in clusters with managed nodes, if you don't explicitly specify a management mode or you specify an invalid value, all security list rules are managed (equivalent to "All"). However, in clusters with virtual nodes, security list management is never enabled and you always have to manually configure security rules (equivalent to "None").

Also note that there are limits to the number of ingress and egress rules that are allowed in a security list (see Security List Limits). If the number of ingress or egress rules exceeds the limit, and <value> is set to "All" or "Frontend", creating or updating the load balancer fails.

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"
    oci-network-load-balancer.oraclecloud.com/security-list-management-mode: "Frontend"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Configuring Load Balancers and Network Load Balancers

This section describes how to define the Oracle Cloud Infrastructure load balancers and network load balancers that Container Engine for Kubernetes provisions for a Kubernetes service of type LoadBalancer.

Creating Internal Load Balancers

You can create Oracle Cloud Infrastructure load balancers and network load balancers to control access to services running on a cluster:

  • When you create a cluster using the 'Custom Create' workflow, you select an existing VCN that contains the network resources to be used by the new cluster. If you want to use a load balancer or network load balancer to control traffic into the VCN, you select an existing public or private subnet in that VCN to host it.
  • When you create a cluster in the 'Quick Create' workflow, the VCN that's automatically created contains a public regional subnet to host a load balancer or network load balancer. If you want to host a load balancer or a network load balancer in a private subnet, you can add a private subnet to the VCN later.

Alternatively, you can define an internal Kubernetes service of type LoadBalancer (often referred to simply as an 'internal load balancer') in a cluster to enable other programs running in the same VCN as the cluster to access services in the cluster. An internal load balancer can be provisioned:

  • as a load balancer, or as a network load balancer
  • with a public IP address, or with a private IP address (assigned by the Load Balancer service or the Network Load Balancer service)
  • in a public subnet, or in a private subnet

A load balancer or network load balancer with a public IP address is referred to as public. A public load balancer or network load balancer can be hosted in a public subnet or in a private subnet.

A load balancer or network load balancer with a private IP address is referred to as private. A private load balancer or network load balancer can be hosted in a public subnet or in a private subnet.

By default, internal load balancers are provisioned with public IP addresses and hosted in public subnets.

For more information:

Create an internal load balancer as an OCI load balancer

To create an internal load balancer as an OCI load balancer with a private IP address, hosted on the subnet specified for load balancers when the cluster was created, add the following annotation in the metadata section of the manifest file:

service.beta.kubernetes.io/oci-load-balancer-internal: "true"

To create an internal load balancer as an OCI load balancer with a private IP address, hosted on an alternative subnet to the one specified for load balancers when the cluster was created, add both of the following annotations in the metadata section of the manifest file:

service.beta.kubernetes.io/oci-load-balancer-internal: "true"
service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaaa....vdfw"

where ocid1.subnet.oc1..aaaaaa....vdfw is the OCID of the alternative subnet. The alternative subnet can be a private subnet or a public subnet.

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    service.beta.kubernetes.io/oci-load-balancer-internal: "true"
    service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaaa....vdfw"
spec:
  type: LoadBalancer
  ports:
  - port: 8100
  selector:
    app: nginx
Create an internal network load balancer as an OCI network load balancer

To create an internal network load balancer as an OCI network load balancer with a private IP address, hosted on the subnet specified for load balancers when the cluster was created, add the following annotation in the metadata section of the manifest file:

oci-network-load-balancer.oraclecloud.com/internal: "true"

To create an internal network load balancer as an OCI network load balancer with a private IP address, hosted on an alternative subnet to the one specified for load balancers when the cluster was created, add both of the following annotations in the metadata section of the manifest file:

oci-network-load-balancer.oraclecloud.com/internal: "true"
oci-network-load-balancer.oraclecloud.com/subnet: "ocid1.subnet.oc1..aaaaaa....vdfw"

where ocid1.subnet.oc1..aaaaaa....vdfw is the OCID of the alternative subnet. The alternative subnet can be a private subnet or a public subnet.

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"
    oci-network-load-balancer.oraclecloud.com/internal: "true"
    oci-network-load-balancer.oraclecloud.com/subnet: "ocid1.subnet.oc1..aaaaaa....vdfw"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Specifying Reserved Public IP Addresses

When a Kubernetes service of type LoadBalancer is deployed on a cluster, Container Engine for Kubernetes creates an Oracle Cloud Infrastructure public load balancer or network load balancer to accept traffic into the cluster. By default, the Oracle Cloud Infrastructure public load balancer or network load balancer is assigned an ephemeral public IP address. However, an ephemeral public IP address is temporary, and only lasts for the lifetime of the public load balancer or network load balancer.

If you want the Oracle Cloud Infrastructure public load balancer or network load balancer that Container Engine for Kubernetes creates to have the same public IP address deployment after deployment, you can assign it a reserved public IP address from the pool of reserved public IP addresses. For more information about creating and viewing reserved public IP addresses, see Public IP Addresses.
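
If you do not already have a reserved public IP address, one way to create one is with the OCI CLI (the compartment OCID and display name shown here are placeholders):

oci network public-ip create --compartment-id <compartment-ocid> --lifetime RESERVED --display-name oke-reserved-ip

The IP address returned in the response is the value to use for the loadBalancerIP property described below.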

To assign a reserved public IP address to the Oracle Cloud Infrastructure public load balancer or network load balancer that Container Engine for Kubernetes creates, add the loadBalancerIP property in the spec section of the manifest file that defines the service of type LoadBalancer, and specify the reserved public IP address.

Assign a reserved public IP address to a public load balancer

Example:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
spec:
  loadBalancerIP: 144.25.97.173
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
Assign a reserved public IP address to a public network load balancer

Example:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"
spec:
  loadBalancerIP: 144.25.97.173
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Note the following:

  • If you do set the loadBalancerIP property of the LoadBalancer service, you cannot later directly change the IP address of the Oracle Cloud Infrastructure public load balancer or network load balancer that Container Engine for Kubernetes creates. If you do want to change the IP address, delete the LoadBalancer service, specify a different reserved public IP address in the manifest file, and deploy the LoadBalancer service again.
  • If you don't set the loadBalancerIP property of the LoadBalancer service, you cannot later directly switch the IP address of the Oracle Cloud Infrastructure public load balancer or network load balancer that Container Engine for Kubernetes creates from an ephemeral IP address to a reserved public IP address. If you do want to switch the ephemeral IP address to a reserved public IP address, delete the service of type LoadBalancer, set the loadBalancerIP property to a reserved public IP address in the manifest file, and deploy the service of type LoadBalancer again.
  • You can delete the service of type LoadBalancer and release the reserved public IP address for other uses (for example, to assign it to another service of type LoadBalancer).
  • You cannot specify a reserved public IP address for a service of type LoadBalancer if the same IP address is already assigned to another resource (such as a compute instance, or another service of type LoadBalancer).
  • You cannot add the loadBalancerIP property to the manifest file for an internal load balancer service (that is, a manifest file that includes the service.beta.kubernetes.io/oci-load-balancer-internal: "true" or oci-network-load-balancer.oraclecloud.com/internal: "true" annotation).
  • By default, the reserved public IP address that you specify as the loadBalancerIP property of a service of type LoadBalancer in the manifest file is expected to be a resource in the same compartment as the cluster. If you want to specify a reserved public IP address in a different compartment:

    • for public load balancers, add the following policy to the tenancy:
      ALLOW any-user to read public-ips in tenancy where request.principal.type = 'cluster'
      ALLOW any-user to manage floating-ips in tenancy where request.principal.type = 'cluster'
    • for network load balancers, add the following policy to the tenancy:
      ALLOW any-user to use private-ips in TENANCY where ALL {request.principal.type = 'cluster', request.principal.compartment.id=target.compartment.id}
      ALLOW any-user to manage public-ips in TENANCY where ALL {request.principal.type = 'cluster', request.principal.compartment.id=target.compartment.id}

Specifying Network Security Groups (recommended)

Oracle Cloud Infrastructure network security groups (NSGs) enable you to control traffic into and out of resources, and between resources. The security rules defined for an NSG ensure that all the resources in that NSG have the same security posture. For more information, see Network Security Groups.

You can use an existing NSG to manage access to the Oracle Cloud Infrastructure load balancer or network load balancer that Container Engine for Kubernetes provisions for a Kubernetes service of type LoadBalancer.

When using an NSG to manage access, appropriate security rules must exist to allow inbound and outbound traffic to and from the load balancer's or network load balancer's subnet. See Security Rules for Load Balancers and Network Load Balancers.

To use an NSG to manage access, you include annotations in the manifest file to specify the NSG to which you want to add the load balancer or network load balancer.

Add a load balancer to an NSG

To add the Oracle Cloud Infrastructure load balancer created by Container Engine for Kubernetes to an NSG, add the following annotation in the metadata section of the manifest file:

oci.oraclecloud.com/oci-network-security-groups: "<nsg-ocid>"

where <nsg-ocid> is the OCID of an existing NSG.

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    oci.oraclecloud.com/oci-network-security-groups: "ocid1.networksecuritygroup.oc1.phx.aaaaaa....vdfw"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
Add a network load balancer to an NSG

To add the Oracle Cloud Infrastructure network load balancer created by Container Engine for Kubernetes to an NSG, add the following annotation in the metadata section of the manifest file:

oci-network-load-balancer.oraclecloud.com/oci-network-security-groups: "<nsg-ocid>"

where <nsg-ocid> is the OCID of an existing NSG.

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"
    oci-network-load-balancer.oraclecloud.com/oci-network-security-groups: "ocid1.networksecuritygroup.oc1.phx.aaaaaa....vdfw"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Note the following:

  • The NSG you specify must be in the same VCN as the Oracle Cloud Infrastructure load balancer or network load balancer.
  • If the NSG you specify belongs to a different compartment from the cluster, you must include a policy statement similar to the following in an IAM policy:
    ALLOW any-user to use network-security-groups in TENANCY where ALL { request.principal.type = 'cluster' }

    If you consider this policy statement to be too permissive, you can restrict the permission to explicitly specify the compartment to which the NSG belongs, and/or to explicitly specify the cluster. For example:

    Allow any-user to use network-security-groups in compartment <compartment-ocid> where all { request.principal.id = '<cluster-ocid>' }
  • You can specify up to five NSGs, in a comma-separated list, in the format:
    oci.oraclecloud.com/oci-network-security-groups: "<nsg1-ocid>,<nsg2-ocid>,<nsg3-ocid>,<nsg4-ocid>,<nsg5-ocid>"
  • To remove a load balancer or network load balancer from an NSG, or to change the NSG that the load balancer or network load balancer is in, update the annotation and re-apply the manifest.
  • If you decide to use an NSG to manage access to the Oracle Cloud Infrastructure load balancer, Oracle recommends that you disable Kubernetes security list management by adding the following annotation in the metadata section of the manifest file:

    service.beta.kubernetes.io/oci-load-balancer-security-list-management-mode: "None"

    If you do follow the recommendation and add the annotation, Kubernetes security list management is not enabled. You have to set up NSGs with ingress and egress security rules for node pools and for the Kubernetes API endpoint (for more information, see Security Rule Configuration in Network Security Groups and/or Security Lists and Example Network Resource Configurations). You also have to set up NSGs with ingress and egress security rules for the kube-proxy health port, for the health check port range, and for load balancers.
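
    For example, the following OCI CLI command sketches one way to add an ingress rule to an existing NSG to allow TCP traffic to the worker node port range. The NSG OCID, source CIDR, and rule details shown are illustrative, and should be adapted to your own environment:

    oci network nsg rules add --nsg-id <nsg-ocid> --security-rules '[{"direction": "INGRESS", "protocol": "6", "source": "0.0.0.0/0", "sourceType": "CIDR_BLOCK", "isStateless": false, "tcpOptions": {"destinationPortRange": {"min": 30000, "max": 32767}}}]'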

Specifying Health Check Parameters

Oracle Cloud Infrastructure load balancers and network load balancers apply a health check policy to continuously monitor backend servers. A health check is a test to confirm backend server availability, and can be a request or a connection attempt. If a server fails the health check, the load balancer or network load balancer takes the server temporarily out of rotation. If the server subsequently passes the health check, the load balancer or network load balancer returns it to the rotation.

Health check policies include a number of parameters, which each have a default value. When Container Engine for Kubernetes provisions an OCI load balancer or network load balancer for a Kubernetes service of type LoadBalancer, you can override health check parameter default values by including annotations in the metadata section of the manifest file. You can later add, modify, and delete those annotations. If you delete an annotation that specified a value for a health check parameter, the load balancer or network load balancer uses the parameter's default value instead.

Configure health check parameters for load balancers

To configure health check parameters when Container Engine for Kubernetes provisions a load balancer for a Kubernetes service of type LoadBalancer, add the following annotations in the metadata section of the manifest file:

  • To specify how many unsuccessful health check requests to attempt before a backend server is considered unhealthy, add the following annotation in the metadata section of the manifest file:

    service.beta.kubernetes.io/oci-load-balancer-health-check-retries: "<value>"

    where <value> is the number of unsuccessful health check requests.

  • To specify the interval between health check requests, add the following annotation in the metadata section of the manifest file:

    service.beta.kubernetes.io/oci-load-balancer-health-check-interval: "<value>"

    where <value> is a numeric value in milliseconds. The minimum is 1000.

  • To specify the maximum time to wait for a response to a health check request, add the following annotation in the metadata section of the manifest file:

    service.beta.kubernetes.io/oci-load-balancer-health-check-timeout: "<value>"

    where <value> is a numeric value in milliseconds. A health check is successful only if the load balancer receives a response within this timeout period.

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    service.beta.kubernetes.io/oci-load-balancer-health-check-retries: "5"
    service.beta.kubernetes.io/oci-load-balancer-health-check-interval: "15000"
    service.beta.kubernetes.io/oci-load-balancer-health-check-timeout: "4000"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Configure health check parameters for network load balancers

To configure health check parameters when Container Engine for Kubernetes provisions a network load balancer for a Kubernetes service of type LoadBalancer, add the following annotations in the metadata section of the manifest file:

  • To specify how many unsuccessful health check requests to attempt before a backend server is considered unhealthy, add the following annotation in the metadata section of the manifest file:

    oci-network-load-balancer.oraclecloud.com/health-check-retries: "<value>"

    where <value> is the number of unsuccessful health check requests.

  • To specify the interval between health check requests, add the following annotation in the metadata section of the manifest file:

    oci-network-load-balancer.oraclecloud.com/health-check-interval: "<value>"

    where <value> is a numeric value in milliseconds. The minimum is 1000.

  • To specify the maximum time to wait for a response to a health check request, add the following annotation in the metadata section of the manifest file:

    oci-network-load-balancer.oraclecloud.com/health-check-timeout: "<value>"

    where <value> is a numeric value in milliseconds. A health check is successful only if the network load balancer receives a response within this timeout period.

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"
    oci-network-load-balancer.oraclecloud.com/health-check-retries: "5"
    oci-network-load-balancer.oraclecloud.com/health-check-interval: "15000"
    oci-network-load-balancer.oraclecloud.com/health-check-timeout: "4000"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Note that if you don't explicitly specify health check parameter values by including annotations in the metadata section of the manifest file, the following defaults are used:

Load Balancer Annotation | Network Load Balancer Annotation | Default Value Used
service.beta.kubernetes.io/oci-load-balancer-health-check-retries | oci-network-load-balancer.oraclecloud.com/health-check-retries | "3"
service.beta.kubernetes.io/oci-load-balancer-health-check-interval | oci-network-load-balancer.oraclecloud.com/health-check-interval | "10000"
service.beta.kubernetes.io/oci-load-balancer-health-check-timeout | oci-network-load-balancer.oraclecloud.com/health-check-timeout | "3000"

For more information about Oracle Cloud Infrastructure load balancer and network load balancer health check policies, see the Load Balancer and Network Load Balancer service documentation.

Selecting Worker Nodes To Include In Backend Sets

Incoming traffic to an Oracle Cloud Infrastructure load balancer or network load balancer is distributed between the backend servers in a backend set. By default, when Container Engine for Kubernetes provisions an Oracle Cloud Infrastructure load balancer or network load balancer for a Kubernetes service of type LoadBalancer, all the worker nodes in the cluster are included in the backend set.

However, you have the option to include only a subset of a cluster's worker nodes in the backend set of a given load balancer or network load balancer. Including different subsets of worker nodes in the backend sets of different load balancers and network load balancers enables you to present a single Kubernetes cluster as multiple logical clusters (services), as illustrated by the sketch after the selector notation examples later in this section.

Select worker nodes to include in load balancer backend set

To select the worker nodes to include in the backend set when Container Engine for Kubernetes provisions a load balancer for a Kubernetes service of type LoadBalancer, add the following annotation in the metadata section of the manifest file:

oci.oraclecloud.com/node-label-selector: <label>

where <label> is one or more label keys and values, specified using standard Kubernetes label selector notation (for example, lbset=set1).

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    oci.oraclecloud.com/node-label-selector: lbset=set1
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Select worker nodes to include in network load balancer backend set

To select the worker nodes to include in the backend set when Container Engine for Kubernetes provisions a network load balancer for a Kubernetes service of type LoadBalancer, add the following annotation in the metadata section of the manifest file:

oci-network-load-balancer.oraclecloud.com/node-label-selector: <label>

where <label> is one or more label keys and values, specified using standard Kubernetes label selector notation (for example, lbset=set1).

For example:


apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"
    oci-network-load-balancer.oraclecloud.com/node-label-selector: lbset=set1
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx

Use standard Kubernetes label selector notation to specify the label keys and values in these annotations. For more information, see Label selectors in the Kubernetes documentation.

The table gives some examples of standard Kubernetes label selector notation.

Load Balancer Annotation | Network Load Balancer Annotation | Include in the Backend Set
oci.oraclecloud.com/node-label-selector: lbset=set1 | oci-network-load-balancer.oraclecloud.com/node-label-selector: lbset=set1 | All worker nodes with the label key lbset that has the value set1
oci.oraclecloud.com/node-label-selector: lbset in (set1, set3) | oci-network-load-balancer.oraclecloud.com/node-label-selector: lbset in (set1, set3) | All worker nodes with the label key lbset that has the value set1 or set3
oci.oraclecloud.com/node-label-selector: lbset | oci-network-load-balancer.oraclecloud.com/node-label-selector: lbset | All worker nodes with the label key lbset, regardless of its value
oci.oraclecloud.com/node-label-selector: env=prod,lbset in (set1, set3) | oci-network-load-balancer.oraclecloud.com/node-label-selector: env=prod,lbset in (set1, set3) | All worker nodes with the label key env that has the value prod, and with the label key lbset that has the value set1 or set3
oci.oraclecloud.com/node-label-selector: env!=test | oci-network-load-balancer.oraclecloud.com/node-label-selector: env!=test | All worker nodes with the label key env that does not have the value test
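
For example, the following sketch shows how selecting different node subsets can present one cluster as multiple logical services. It assumes some worker nodes carry the label lbset=set1 and others the label lbset=set2; the service names and app labels are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: app-one-svc
  labels:
    app: app-one
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    oci.oraclecloud.com/node-label-selector: lbset=set1
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: app-one
---
apiVersion: v1
kind: Service
metadata:
  name: app-two-svc
  labels:
    app: app-two
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    oci.oraclecloud.com/node-label-selector: lbset=set2
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: app-two

Each service gets its own load balancer, and each backend set contains only the worker nodes that match the corresponding selector.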

Preventing Nodes from Handling Traffic

You can exclude particular worker nodes from the list of backend servers in the backend set of an Oracle Cloud Infrastructure load balancer or network load balancer. For more information, see node.kubernetes.io/exclude-from-external-load-balancers.
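
For example, the label appears on the Node object as shown in the following sketch (the node name is a placeholder, and in practice the label is typically applied with the kubectl label command rather than by editing the Node manifest directly):

apiVersion: v1
kind: Node
metadata:
  name: <node-name>
  labels:
    node.kubernetes.io/exclude-from-external-load-balancers: "true"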

Tagging Load Balancers and Network Load Balancers

You can add tags to a load balancer or network load balancer that Container Engine for Kubernetes provisions for a Kubernetes service of type LoadBalancer. Tagging enables you to group disparate resources across compartments and to annotate resources with your own metadata. See Applying Tags to Load Balancers.
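
For example, the following manifest is a minimal sketch that applies a defined tag and a freeform tag to the load balancer provisioned for the service (the tag namespace, keys, and values shown are placeholders you would replace with your own):

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    oci.oraclecloud.com/load-balancer-type: "lb"
    oci.oraclecloud.com/initial-defined-tags-override: '{"Operations": {"CostCenter": "42"}}'
    oci.oraclecloud.com/initial-freeform-tags-override: '{"Department": "Finance"}'
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx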

Summary of Annotations for Load Balancers and Network Load Balancers

Annotations for Load Balancers

Annotation for Load Balancers | Details
oci.oraclecloud.com/load-balancer-type: "lb" | Specifying the Annotation for an OCI Load Balancer
service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443" | Terminating SSL/TLS at the Load Balancer
service.beta.kubernetes.io/oci-load-balancer-tls-secret: ssl-certificate-secret | Terminating SSL/TLS at the Load Balancer
service.beta.kubernetes.io/oci-load-balancer-tls-backendset-secret: <value> | Implementing SSL/TLS between the Load Balancer and Worker Nodes
service.beta.kubernetes.io/oci-load-balancer-shape: <value> | Specifying Alternative Load Balancer Shapes
service.beta.kubernetes.io/oci-load-balancer-shape: "flexible" | Specifying Flexible Load Balancer Shapes
service.beta.kubernetes.io/oci-load-balancer-shape-flex-min: "<min-value>" | Specifying Flexible Load Balancer Shapes
service.beta.kubernetes.io/oci-load-balancer-shape-flex-max: "<max-value>" | Specifying Flexible Load Balancer Shapes
service.beta.kubernetes.io/oci-load-balancer-connection-idle-timeout: <value> | Specifying Load Balancer Connection Timeout
service.beta.kubernetes.io/oci-load-balancer-internal: "true" | Creating Internal Load Balancers
service.beta.kubernetes.io/oci-load-balancer-subnet1: "<subnet-OCID>" | Creating Internal Load Balancers
oci.oraclecloud.com/oci-network-security-groups: "<nsg-ocid>" | Specifying Network Security Groups (recommended)
service.beta.kubernetes.io/oci-load-balancer-security-list-management-mode: <value> | Specifying Security List Management Options When Provisioning an OCI Load Balancer
service.beta.kubernetes.io/oci-load-balancer-backend-protocol: <value> | Specifying Listener Protocols
service.beta.kubernetes.io/oci-load-balancer-health-check-retries: "<value>" | Specifying Health Check Parameters
service.beta.kubernetes.io/oci-load-balancer-health-check-timeout: "<value>" | Specifying Health Check Parameters
service.beta.kubernetes.io/oci-load-balancer-health-check-interval: "<value>" | Specifying Health Check Parameters
oci.oraclecloud.com/node-label-selector: <label> | Selecting Worker Nodes To Include In Backend Sets
oci.oraclecloud.com/initial-defined-tags-override: '{"<tag-namespace>": {"<tag-key>": "<tag-value>"}}' | Tagging Load Balancers and Network Load Balancers
oci.oraclecloud.com/initial-freeform-tags-override: '{"<tag-key>": "<tag-value>"}' | Tagging Load Balancers and Network Load Balancers

Annotations for Network Load Balancers

Annotation for Network Load Balancers | Details
oci.oraclecloud.com/load-balancer-type: "nlb" | Specifying the Annotation for an OCI Network Load Balancer
oci-network-load-balancer.oraclecloud.com/internal: "true" | Creating Internal Load Balancers
oci-network-load-balancer.oraclecloud.com/subnet: "<subnet-OCID>" | Creating Internal Load Balancers
oci-network-load-balancer.oraclecloud.com/oci-network-security-groups: "<nsg-ocid>" | Specifying Network Security Groups (recommended)
oci-network-load-balancer.oraclecloud.com/security-list-management-mode: <value> | Specifying Security List Management Options When Provisioning an OCI Load Balancer
oci-network-load-balancer.oraclecloud.com/is-preserve-source: "<value>" | Terminating Requests at the Receiving Node
oci-network-load-balancer.oraclecloud.com/backend-policy | Specifying the Backend Set Policy
oci-network-load-balancer.oraclecloud.com/health-check-retries: "<value>" | Specifying Health Check Parameters
oci-network-load-balancer.oraclecloud.com/health-check-timeout: "<value>" | Specifying Health Check Parameters
oci-network-load-balancer.oraclecloud.com/health-check-interval: "<value>" | Specifying Health Check Parameters
oci-network-load-balancer.oraclecloud.com/node-label-selector: <label> | Selecting Worker Nodes To Include In Backend Sets
oci-network-load-balancer.oraclecloud.com/defined-tags: '{"<tag-namespace>": {"<tag-key>": "<tag-value>"}}' | Tagging Load Balancers and Network Load Balancers
oci-network-load-balancer.oraclecloud.com/freeform-tags: '{"<tag-key>": "<tag-value>"}' | Tagging Load Balancers and Network Load Balancers