Set up Service Mesh for your Application using kubectl

To set up Service Mesh for your application, you must configure several Service Mesh resources.

This section provides an example of managing Service Mesh with kubectl (for more information, see Managing Service Mesh with Kubernetes). The example assumes that a Kubernetes namespace <app-namespace> exists and that the application is deployed in that namespace. You can create the Service Mesh custom resources in the same namespace as your application or in a different namespace. In this example, we use <app-namespace> for the Service Mesh custom resources.
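If the namespace does not exist yet, create it before you begin. A minimal example, substituting your own namespace name:
kubectl create namespace <app-namespace>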

Application design

This example assumes an application composed of the following.

  • A front-end microservice named ui.
  • Two backend microservices, ms1 and ms2.
  • The backend microservice ms2 has two versions, ms2-v1 and ms2-v2.

The following are the assumptions for the Kubernetes cluster.

  • Each of the microservices ui, ms1, and ms2 has a Kubernetes service defined with the same name to enable DNS-based hostname lookup for it in the cluster.
  • The ui Kubernetes service definition has a selector that matches the ui pod.
  • The ms1 Kubernetes service definition has a selector that matches the ms1 pod.
  • The ms2 Kubernetes service definition has a selector that matches the ms2-v1 and ms2-v2 pods.
  • The cluster has an ingress Kubernetes service of type LoadBalancer to allow ingress traffic into the cluster, with a selector that matches the ui pod.
Sample app-name.yaml
apiVersion: v1
kind: Service
metadata:
  name: ui
  namespace: app-namespace
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: ui
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
  namespace: app-namespace
  labels:
    app: ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ui
  template:
    metadata:
      namespace: app-namespace
      labels:
        app: ui
      annotations:
        servicemesh.oci.oracle.com/proxy-log-level: error
    spec:
      containers:
        - name: ui
        ...
---
apiVersion: v1
kind: Service
metadata:
  name: ms1
  namespace: app-namespace
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: ms1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ms1
  namespace: app-namespace
  labels:
    app: ms1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ms1
  template:
    metadata:
      namespace: app-namespace
      labels:
        app: ms1
      annotations:
        servicemesh.oci.oracle.com/proxy-log-level: error
    spec:
      containers:
        - name: ms1
        ...
---
apiVersion: v1
kind: Service
metadata:
  name: ms2
  namespace: app-namespace
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: ms2
---
apiVersion: v1
kind: Service
metadata:
  name: ms2-v1
  namespace: app-namespace
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: ms2
    version: v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ms2-v1
  namespace: app-namespace
  labels:
    app: ms2
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ms2
      version: v1
  template:
    metadata:
      namespace: app-namespace
      labels:
        app: ms2
        version: v1
      annotations:
        servicemesh.oci.oracle.com/proxy-log-level: error
    spec:
      containers:
        - name: ms2
        ...
---
apiVersion: v1
kind: Service
metadata:
  name: ms2-v2
  namespace: app-namespace
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: ms2
    version: v2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ms2-v2
  namespace: app-namespace
  labels:
    app: ms2
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ms2
      version: v2
  template:
    metadata:
      namespace: app-namespace
      labels:
        app: ms2
        version: v2
      annotations:
        servicemesh.oci.oracle.com/proxy-log-level: error
    spec:
      containers:
        - name: ms2
        ...
---
apiVersion: v1
kind: Service
metadata:
  name: application-ingress
  namespace: app-namespace
spec:
  ports:
  - port: 80
    targetPort: 9080
    name: http
  selector:
    app: ui
  type: LoadBalancer
---
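If the application is not yet deployed, apply the manifest to the cluster:
kubectl apply -f app-name.yaml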
Note

When you create Kubernetes Service Mesh resources, the YAML configuration data must be applied in a particular order:
  1. Mesh
  2. Virtual Service
  3. Virtual Deployment
  4. Virtual Service Route Table
  5. Ingress Gateway
  6. Ingress Gateway Route Table
  7. Access Policies
  8. Virtual Deployment Binding
  9. Ingress Gateway Deployment

Whether your Kubernetes configuration resources are in a single YAML file or multiple YAML files, the ordering of resources remains the same.

Create Service Mesh Resources

To enable Service Mesh for your application, you need to create two sets of resources:

  1. Service Mesh control plane resources
  2. Service Mesh binding resources

In this example, we manage the Service Mesh with kubectl, so the control plane resources are created as custom resources in the Kubernetes cluster. The Service Mesh binding resources are always created as custom resources in the Kubernetes cluster.

You create the Service Mesh control plane resources based on your application design. The following list shows how to model the Service Mesh resources for the preceding application design.

  1. Mesh: Create a service mesh named app-name.
  2. Virtual Service: Create three virtual services (ui, ms1, ms2) corresponding to the three microservices.
  3. Virtual Deployment: Create four virtual deployments, one for each microservice version (ui, ms1, ms2-v1, ms2-v2).
  4. Virtual Service Route Table: Create three virtual service route tables, one per virtual service, to define the traffic split across the virtual service versions.
  5. Ingress Gateway: Create one ingress gateway to enable ingress into the mesh.
  6. Ingress Gateway Route Table: Create one ingress gateway route table to define the routing of incoming traffic on the ingress gateway.
  7. Access Policies: Create one access policy with rules that allow traffic between the microservices in the mesh.
Create the Service Mesh control plane resources using a local Service Mesh configuration file on your system:
kubectl apply -f meshify.yaml
Sample meshify.yaml File

The following is a sample YAML file that configures the service mesh resources.

---
kind: Mesh
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: app-name
  namespace: app-namespace
spec:
  compartmentId: ocid1.compartment.oc1..aaaa...
  certificateAuthorities:
    - id: ocid1.certificateauthority.oc1.iad.aaa...
  mtls:
    minimum: PERMISSIVE
---
kind: VirtualService
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: ms1
  namespace: app-namespace
spec:
  mesh:
    ref:
      name: app-name
 
  defaultRoutingPolicy:
    type: UNIFORM
  compartmentId: ocid1.compartment.oc1..aaaa...
 
  hosts:
    - ms1
    - ms1:9080
---
kind: VirtualDeployment
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: ms1
  namespace: app-namespace
spec:
  virtualService:
    ref:
      name: ms1
  compartmentId: ocid1.compartment.oc1..aaaa...
  listener:
    - port: 9080
      protocol: HTTP
  accessLogging:
    isEnabled: true
  serviceDiscovery:
    type: DNS
    hostname: ms1
---
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: VirtualServiceRouteTable
metadata:
  name: ms1-route-table
  namespace: app-namespace
spec:
  compartmentId: ocid1.compartment.oc1..aaaa...
  virtualService:
    ref:
      name: ms1
  routeRules:
    - httpRoute:
        destinations:
          - virtualDeployment:
              ref:
                name: ms1
            weight: 100
---
kind: VirtualService
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: ms2
  namespace: app-namespace
spec:
  mesh:
    ref:
      name: app-name
 
  defaultRoutingPolicy:
    type: UNIFORM
  compartmentId: ocid1.compartment.oc1..aaaa...
 
  hosts:
    - ms2
    - ms2:9080
---
kind: VirtualDeployment
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: ms2-v1
  namespace: app-namespace
spec:
  virtualService:
    ref:
      name: ms2
  compartmentId: ocid1.compartment.oc1..aaaa...
  listener:
    - port: 9080
      protocol: HTTP
  accessLogging:
    isEnabled: true
  serviceDiscovery:
    type: DNS
    hostname: ms2-v1
---
kind: VirtualDeployment
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: ms2-v2
  namespace: app-namespace
spec:
  virtualService:
    ref:
      name: ms2
  compartmentId: ocid1.compartment.oc1..aaaa...
  listener:
    - port: 9080
      protocol: HTTP
  accessLogging:
    isEnabled: true
  serviceDiscovery:
    type: DNS
    hostname: ms2-v2
---
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: VirtualServiceRouteTable
metadata:
  name: ms2-route-table
  namespace: app-namespace
spec:
  compartmentId: ocid1.compartment.oc1..aaaa...
  virtualService:
    ref:
      name: ms2
  routeRules:
    - httpRoute:
        destinations:
          - virtualDeployment:
              ref:
                name: ms2-v1
            weight: 50
          - virtualDeployment:
              ref:
                name: ms2-v2
            weight: 50
---
kind: VirtualService
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: ui
  namespace: app-namespace
spec:
  mesh:
    ref:
      name: app-name
 
  defaultRoutingPolicy:
    type: UNIFORM
  compartmentId: ocid1.compartment.oc1..aaaa...
 
  hosts:
    - ui
    - ui:9080
---
kind: VirtualDeployment
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: ui
  namespace: app-namespace
spec:
  virtualService:
    ref:
      name: ui
  compartmentId: ocid1.compartment.oc1..aaaa...
  listener:
    - port: 9080
      protocol: HTTP
  accessLogging:
    isEnabled: true
  serviceDiscovery:
    type: DNS
    hostname: ui
---
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: VirtualServiceRouteTable
metadata:
  name: ui-route-table
  namespace: app-namespace
spec:
  compartmentId: ocid1.compartment.oc1..aaaa...
  virtualService:
    ref:
      name: ui
  routeRules:
    - httpRoute:
        destinations:
          - virtualDeployment:
              ref:
                name: ui
            weight: 100
---
kind: IngressGateway
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: app-name-ingress-gateway
  namespace: app-namespace
spec:
  compartmentId: ocid1.compartment.oc1..aaaa...
  mesh:
    ref:
      name: app-name
  hosts:
    - name: exampleHost
      hostnames:
        - hostname.example.com
        - hostname.example.com:80
        - hostname.example.com:443
      listeners:
        - port: 9080
          protocol: HTTP
          tls:
            serverCertificate:
              ociTlsCertificate:
                certificateId: ocid1.certificate.oc1.iad.aaa...
            mode: TLS
  accessLogging:
    isEnabled: true
---
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: IngressGatewayRouteTable
metadata:
  name: app-name-ingress-gateway-route-table
  namespace: app-namespace
spec:
  compartmentId: ocid1.compartment.oc1..aaaa...
  ingressGateway:
    ref:
      name: app-name-ingress-gateway
  routeRules:
    - httpRoute:
        ingressGatewayHost:
          name: exampleHost
        destinations:
          - virtualService:
              ref:
                name: ui
---
kind: AccessPolicy
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: app-name-policy
  namespace: app-namespace
spec:
  mesh:
    ref:
      name: app-name
  compartmentId: ocid1.compartment.oc1..aaaa...
  rules:
    - action: ALLOW
      source:
        virtualService:
          ref:
            name: ui
      destination:
        virtualService:
          ref:
            name: ms1
    - action: ALLOW
      source:
        virtualService:
          ref:
            name: ms2
      destination:
        allVirtualServices: {}
    - action: ALLOW
      source:
        ingressGateway:
          ref:
            name: app-name-ingress-gateway
      destination:
        virtualService:
          ref:
            name: ui
---

After applying the service mesh resources with the kubectl command, wait until all the resources are in the active state:

  1. List all custom resource definitions.
    kubectl get crd
    NAME                                                   CREATED AT
    accesspolicies.servicemesh.oci.oracle.com              2022-05-10T21:50:24Z
    autonomousdatabases.oci.oracle.com                     2022-05-10T21:50:24Z
    catalogsources.operators.coreos.com                    2022-05-10T21:48:21Z
    clusterserviceversions.operators.coreos.com            2022-05-10T21:48:23Z
    ingressgatewaydeployments.servicemesh.oci.oracle.com   2022-05-10T21:50:24Z
    ingressgatewayroutetables.servicemesh.oci.oracle.com   2022-05-10T21:50:24Z
    ingressgateways.servicemesh.oci.oracle.com             2022-05-10T21:50:24Z
    installplans.operators.coreos.com                      2022-05-10T21:48:24Z
    meshes.servicemesh.oci.oracle.com                      2022-05-10T21:50:24Z
    mysqldbsystems.oci.oracle.com                          2022-05-10T21:50:24Z
    olmconfigs.operators.coreos.com                        2022-05-10T21:48:24Z
    operatorconditions.operators.coreos.com                2022-05-10T21:48:25Z
    operatorgroups.operators.coreos.com                    2022-05-10T21:48:26Z
    operators.operators.coreos.com                         2022-05-10T21:48:26Z
    streams.oci.oracle.com                                 2022-05-10T21:50:24Z
    subscriptions.operators.coreos.com                     2022-05-10T21:48:27Z
    virtualdeploymentbindings.servicemesh.oci.oracle.com   2022-05-10T21:50:24Z
    virtualdeployments.servicemesh.oci.oracle.com          2022-05-10T21:50:25Z
    virtualserviceroutetables.servicemesh.oci.oracle.com   2022-05-10T21:50:24Z
    virtualservices.servicemesh.oci.oracle.com             2022-05-10T21:50:24Z
  2. List the objects for a custom resource definition. Replace <service-mesh-crd-name> with the name of the custom resource definition and <crd-namespace> with the namespace where the custom resources are located. A concrete example follows this list.
    kubectl get <service-mesh-crd-name> -n <crd-namespace>
    NAME                                            ACTIVE   AGE
    app-namespace/app-name                          True     1h
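For example, to list the mesh and virtual service objects created earlier in the app-namespace namespace (using the CRD names from the preceding output):
kubectl get meshes.servicemesh.oci.oracle.com -n app-namespace
kubectl get virtualservices.servicemesh.oci.oracle.com -n app-namespace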

With these steps completed, your service mesh resources are available in the Console.

Service Mesh Binding Resources

The next step is to bind the Service Mesh control plane resources to your infrastructure, in this case the pods in the Kubernetes cluster. The binding resources enable automatic sidecar injection and pod discovery for the proxy software. For more information on binding resources, see: Architecture and Concepts.

The following are the binding resources.

  1. Virtual Deployment Binding: Create four virtual deployment binding resources to associate each of the four virtual deployments in the control plane with the corresponding pods that represent them.
  2. Ingress Gateway Deployment: Create one ingress gateway deployment to deploy the ingress gateway defined in the control plane.

Ensure that you have enabled sidecar injection in your Kubernetes namespace by running the following command. If you do not enable sidecar injection, the proxies are not injected into your application pods.

kubectl label namespace app-namespace servicemesh.oci.oracle.com/sidecar-injection=enabled
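To confirm that the label was applied, you can inspect the namespace labels:
kubectl get namespace app-namespace --show-labels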
Next, bind the Kubernetes services and deployments to the service mesh with the command:
kubectl apply -f bind.yaml
Sample bind.yaml File

The following is a sample YAML file with the binding configuration for the resources.

---
kind: VirtualDeploymentBinding
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: ms1-binding
  namespace: app-namespace
spec:
  virtualDeployment:
    ref:
      name: ms1
      namespace: app-namespace
 
  target:
    service:
      ref:
        name: ms1
        namespace: app-namespace
---
kind: VirtualDeploymentBinding
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: ms2-v1-binding
  namespace: app-namespace
spec:
  virtualDeployment:
    ref:
      name: ms2-v1
      namespace: app-namespace
 
  target:
    service:
      ref:
        name: ms2
        namespace: app-namespace
      matchLabels:
        version: v1
---
kind: VirtualDeploymentBinding
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: ms2-v2-binding
  namespace: app-namespace
spec:
  virtualDeployment:
    ref:
      name: ms2-v2
      namespace: app-namespace
 
  target:
    service:
      ref:
        name: ms2
        namespace: app-namespace
      matchLabels:
        version: v2
---
kind: VirtualDeploymentBinding
apiVersion: servicemesh.oci.oracle.com/v1beta1
metadata:
  name: ui-binding
  namespace: app-namespace
spec:
  virtualDeployment:
    ref:
      name: ui
      namespace: app-namespace
 
  target:
    service:
      ref:
        name: ui
        namespace: app-namespace
---
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: IngressGatewayDeployment
metadata:
  name: app-name-ingress-gateway-deployment
  namespace: app-namespace
spec:
  ingressGateway:
    ref:
      name: app-name-ingress-gateway
      namespace: app-namespace
  deployment:
    autoscaling:
      minPods: 1
      maxPods: 1
  ports:
    - protocol: TCP
      port: 9080
      serviceport: 443
  service:
    type: ClusterIP
---
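After the bindings are created, the proxies are injected into the application pods. If pods were already running before sidecar injection was enabled, you might need to restart them, for example with a rollout restart; the deployment names below come from the earlier application manifest:
kubectl rollout restart deployment ui ms1 ms2-v1 ms2-v2 -n app-namespace
kubectl get pods -n app-namespace
Each pod should report an additional proxy container once injection succeeds.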

For more information on mTLS and routing policies, see:

Use Service Mesh Ingress Gateway

So far we have set up and deployed the ingress gateway, but incoming traffic must still be directed to it. Assuming you have an ingress service of type LoadBalancer in your Kubernetes cluster, update it to point to the ingress gateway deployment.

kubectl apply -f ingress.yaml
Sample ingress.yaml File

The following is a sample YAML file with the updated ingress service configuration.

apiVersion: v1
kind: Service
metadata:
  name: application-ingress
  namespace: app-namespace
spec:
  ports:
  - port: 80
    targetPort: 9080
    name: http
  selector:
    servicemesh.oci.oracle.com/ingress-gateway-deployment: app-name-ingress-gateway-deployment
  type: LoadBalancer
---
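To verify ingress, get the external IP address of the load balancer and send a request with the configured host header, replacing <external-ip> with the EXTERNAL-IP value from the first command. Note that the sample gateway listener uses TLS, so depending on your certificate setup you might need an HTTPS request instead; the curl invocation below is an illustrative sketch:
kubectl get service application-ingress -n app-namespace
curl --header "Host: hostname.example.com" http://<external-ip>/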

Enabling Egress Traffic for your Service Mesh

To allow egress traffic out of your service mesh, you must configure an access policy. To create an egress access policy, use the kubectl apply command. For example:

kubectl apply -f egress-access-policy.yaml

The following sample YAML configuration file creates an egress access policy. The policy defines two egress rules, one for HTTP and one for HTTPS. For an external service, three protocols are supported: HTTP, HTTPS, and TCP. The protocols correspond to the httpExternalService, httpsExternalService, and tcpExternalService keys in Kubernetes. Hostnames and ports can be specified for each entry; a TCP example is sketched after the sample file.

Sample egress-access-policy.yaml File

The following is a sample YAML file to configure egress with an access policy.

apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: AccessPolicy
metadata:
  name: <sample-access-policy>      # Access Policy name
  namespace: <sample-namespace>
spec:
  compartmentId: ocid1.compartment.oc1..aaa...
  name: sample-ap     # Access Policy name inside the mesh
  description: Sample egress access policy
  mesh:
    ref:
      name: <sample-mesh>
  rules:
    - action: ALLOW
      source:
        virtualService:
          ref:
            name: <vs-sample-page>
      destination:
        externalService:
          httpExternalService:
            hostnames: ["example.com"]
            ports: [80]
    - action: ALLOW
      source:
        virtualService:
          ref:
            name: <vs-sample-page>
      destination:
        externalService:
          httpsExternalService:
            hostnames: ["example.com"]
            ports: [443]
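The preceding sample covers HTTP and HTTPS. The following sketch shows the shape of a TCP rule using the tcpExternalService key; it assumes that tcpExternalService accepts IP addresses (in CIDR notation) and ports, and the address and port shown are illustrative placeholders:
    - action: ALLOW
      source:
        virtualService:
          ref:
            name: <vs-sample-page>
      destination:
        externalService:
          tcpExternalService:
            ipAddresses: ["198.51.100.10/32"]
            ports: [5432]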

For more information on creating an access policy with the console, see: Creating an Access Policy.

For more information on creating an access policy with kubectl, see: Managing Access Policies with kubectl.

Add Logging Support to your Mesh

Now that your application has Service Mesh support, you can add logging. After logging is set up, you can see your logs in the OCI Logging service.

Note

To create the policy that allows your instances to support logging, follow the instructions in Set up Policies required for Service Mesh.

Next, set up the OCI Logging service to store your access logs. Set up log scraping by creating a log group and a custom log.

  1. Create the log group:
    oci logging log-group create --compartment-id <your-compartment-ocid> --display-name <your-app-name>
  2. Get the OCID for your new log group. (A CLI alternative to the Console steps is sketched after this list.)
    • From the Console, go to Observability & Management and, under Logging, select Log Groups.
    • Click the name of the log group you created in the preceding step.
    • Locate the OCID field and click Copy. Save the OCID in a text file.
  3. Create a custom log in the log group:
    oci logging log create --log-group-id <your-log-group-ocid> --display-name <your-app-name>-logs --log-type custom
  4. Get the OCID for your new custom log.
    • From the Console, go to Observability & Management and, under Logging, select Logs.
    • Click the name of the log you created in the preceding step.
    • Locate the OCID field and click Copy. Save the OCID in a text file.
  5. On your system, create the logconfig.json configuration file based on the following sample. Ensure that you put the OCID for your custom log in the logObjectId field.
    {
      "configurationType": "LOGGING",
        "destination": {
          "logObjectId": "<your-custom-log-ocid>"
        },
        "sources": [
          {
            "name": "proxylogs",
            "parser": {
              "fieldTimeKey": null,
              "isEstimateCurrentEvent": null,
              "isKeepTimeKey": null,
              "isNullEmptyString": null,
              "messageKey": null,
              "nullValuePattern": null,
              "parserType": "NONE",
              "timeoutInMilliseconds": null,
              "types": null
            },
            "paths": [
              "/var/log/containers/*<app-namespace>*oci-sm-proxy*.log"
            ],
            "source-type": "LOG_TAIL"
          }
        ]
    }
  6. Create a custom agent configuration to scrape the log files for the proxy containers:
    oci logging agent-configuration create --compartment-id <your-compartment-ocid> --is-enabled true --service-configuration file://logconfig.json --display-name <your-app-name>LoggingAgent --description "Custom agent config for mesh" --group-association '{"groupList": ["<your-dynamic-group-ocid>"]}'
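As an alternative to the Console navigation in steps 2 and 4, you can look up the log group and custom log OCIDs with the OCI CLI. A sketch, assuming the display names used in the preceding steps:
oci logging log-group list --compartment-id <your-compartment-ocid>
oci logging log list --log-group-id <your-log-group-ocid>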
Note

For information on how to configure your log, see: Agent Management: Managing Agent Configurations

Add Application Monitoring and Graphing Support

To add Kubernetes monitoring and graphing support for your application, you need to have Prometheus and Grafana installed as specified in the prerequisites. In addition, you need to configure Prometheus to enable scraping metrics from the Service Mesh proxies.

The Service Mesh proxies expose metrics on the /stats/prometheus endpoint. When creating the ClusterRole for the Prometheus service, include /stats/prometheus in the nonResourceURLs list. See the following ClusterRole configuration example.

Sample ClusterRole
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - nodes/proxy
      - nodes/metrics
      - services
      - endpoints
      - pods
      - ingresses
      - configmaps
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
      - ingresses
    verbs:
      - get
      - list
      - watch
  - nonResourceURLs:
      - "/stats/prometheus"
    verbs:
      - get
---
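For Prometheus to use this ClusterRole, bind it to the ServiceAccount that Prometheus runs under. The following minimal sketch assumes a prometheus ServiceAccount in a monitoring namespace; adjust both names to match your installation:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: monitoring
---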

Add Scrape Job

As part of the Prometheus scrape configuration, you need to add a job that scrapes metrics from the Service Mesh proxy endpoints. See the following scrape_config example.

Sample scrape_config
scrape_configs:
  - job_name: 'kubernetes-pods'
 
    metrics_path: /stats/prometheus
    kubernetes_sd_configs:
    - role: pod
 
    relabel_configs:
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name
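After reloading Prometheus with the new job, you can confirm that the proxy endpoints are being scraped. One way, assuming Prometheus is exposed as a service named prometheus in a monitoring namespace (adjust to your installation):
kubectl port-forward -n monitoring svc/prometheus 9090:9090
Then open http://localhost:9090/targets in a browser and look for targets under the kubernetes-pods job.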

More Information

For more information on managing mesh resources, see:

Next: Configure OCI Service Operator for Kubernetes Service Mesh