In this tutorial, deploy the Bookinfo application to a Kubernetes cluster. Then, add
Oracle Cloud Infrastructure Service Mesh to your application deployment.
Key tasks include how to:
Install the required software to access your application from a local machine.
Set up OCI CLI to access your cluster.
Set up a Kubernetes cluster on OCI.
Set up the services required for Service Mesh.
Deploy and configure your application for Service Mesh.
Test your application using Service Mesh features.
Configure your application for logging and metrics.
The following image shows the BookInfo application on Service Mesh:
Note
The gray rectangular boxes in the picture represent virtual deployments in the
application. The named virtual deployments include: Product Page, Details, Reviews v1 to
v3, and Ratings.
If you want to use an OCI Free Tier Linux compute instance to manage your deployment,
the following sections provide information to get the required software installed.
Install a Linux VM with an Always Free compute shape on Oracle Cloud
Infrastructure. You need a machine with SSH support to connect to
your Linux instance.
Repeat for Scope: <second-availability-domain> and
<third-availability-domain>. Each availability domain must have
at least 50 GB of block volume available.
Find out how many Flexible Load Balancers you have available:
Filter for the following options:
Service: LbaaS
Scope: <your-region>. Example: us-ashburn-1
Resource: <blank>
Compartment: <tenancy-name> (root)
Find the number of available flexible load balancers:
Limit Name: lb-flexible-count
Available: minimum 1
Note
This tutorial creates three compute instances with a VM.Standard.E3.Flex shape
for the cluster nodes. To use another shape, filter for its core count. For
example, for VM.Standard2.4, filter for Cores for Standard2 based VM and BM
Instances and get the count.
This tutorial uses a 'Quick Create' workflow to create a cluster with a public
regional subnet that hosts a flexible load balancer. To use a different load
balancer, you can use a custom workflow to explicitly specify which existing network
resources to use, including the existing subnets in which to create the load
balancers.
To use another bandwidth for the load balancer, filter for its count, for
example 100-Mbps bandwidth or 400-Mbps bandwidth.
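As an alternative to checking limits in the Console, you can query a limit with the OCI CLI. The following is a minimal sketch; the service name shown is an assumption, so list the exact names first.

# List the limits services to confirm the exact service name (assumed here to be load-balancer).
oci limits service list --compartment-id <your-tenancy-ocid> --all

# Check how many flexible load balancers remain available in the tenancy.
oci limits resource-availability get \
  --compartment-id <your-tenancy-ocid> \
  --service-name load-balancer \
  --limit-name lb-flexible-count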
The Python virtualenv creates a folder that contains all the
executables and libraries for your project.
The virtualenvwrapper is an extension to virtualenv.
It provides a set of commands, which makes working with virtual environments much
more pleasant. It also places all your virtual environments in one place. The
virtualenvwrapper provides tab-completion on environment
names.
Install virtualenv.
pip3 install --user virtualenv
Install virtualenvwrapper.
pip3 install --user virtualenvwrapper
Find the location of the virtualenvwrapper.sh script.
grep -R virtualenvwrapper.sh
Example
paths:
Linux example:
/home/ubuntu/.local/bin/virtualenvwrapper.sh
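After locating the script, a typical setup (a sketch; the path and the environment name are examples) is to source it from your shell profile and create an environment for this tutorial:

# Add to ~/.bashrc (adjust the path to match your grep output), then open a new shell.
export WORKON_HOME=$HOME/.virtualenvs
source /home/ubuntu/.local/bin/virtualenvwrapper.sh

# Create and activate a virtual environment (the name "oci-mesh" is just an example).
mkvirtualenv oci-mesh
workon oci-mesh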
Create a compartment for the resources that you create in this tutorial.
Note
For simplicity, the application, Service Mesh, and required resources are created in
the same compartment. In production, all these components might be in different
compartments.
Log in to the Oracle Cloud Infrastructure Console.
Open the navigation menu and click
Identity & Security. Under Identity, click
Compartments.
Set up dynamic groups for cluster worker nodes and for the Certificate
Service.
Create Dynamic Group for Worker Nodes
The cluster runs three processes that are essential to the working of Service Mesh:
the Mesh Kubernetes Operator, the Mesh Proxies, and the Logging Agent. These processes
require permissions on OCI resources to function properly. The Service Mesh processes
use the Instance Principals of the worker nodes in your cluster, so define a dynamic
group consisting of those instances.
Note
Assume you create your cluster in
<your-service-mesh-compartment>.
From the Console, go to Identity & Security. Under Identity, select Dynamic Groups.
Click Create Dynamic Group.
Name your dynamic group: <your-dynamic-group>.
Define the matching rule for your dynamic group using your compartment OCID:
ANY {instance.compartment.id = '<your-service-mesh-compartment-id>'}
Click Create.
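Optionally, you can create the same dynamic group with the OCI CLI instead of the Console (a sketch; the description text is an example):

oci iam dynamic-group create \
  --name <your-dynamic-group> \
  --description "Worker node instances for Service Mesh" \
  --matching-rule "ANY {instance.compartment.id = '<your-service-mesh-compartment-id>'}"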
Set up a Dynamic Group for the Certificate Authority
The Service Mesh service natively uses the Certificates Service to manage
certificates. The Certificates Service needs permissions to use the key and vault
services in your compartment. Define a dynamic group to enable Certificate Service
permissions for your tenancy.
Open the navigation menu and click Identity &
Security. Under Identity, click
Dynamic Groups.
Click Create Dynamic Group.
The Create
Dynamic Group dialog is displayed.
Fill in the information in Create Dynamic Group.
Name: <your-certs-dynamic-group>
Description: <your-description>
Matching Rules
Add the following to Rule 1:
ANY {resource.type='certificateauthority', resource.type='certificate'}
Add policies needed for your application, Service Mesh, and your resources. This
policy approach defines an administrator group which gives administrator rights to a user
for a specific compartment. Only use this approach for development scenarios.
Have your administrator add the following policies to your tenancy:
allow group <the-group-your-username-belongs> to manage all-resources in compartment <your-service-mesh-compartment-name>
allow dynamic-group <your-dynamic-group> to manage all-resources in compartment <your-service-mesh-compartment>
allow dynamic-group <your-certs-dynamic-group> to manage all-resources in compartment <your-service-mesh-compartment>
With this privilege, you can manage all resources in your compartment.
Essentially, you have administrative rights in that compartment including all Kubernetes
and Service Mesh resources.
Note
Setting the policies in this manner is for development purposes only, not
production.
Perform the following steps to add the policies for your compartment.
From the console, go to Identity & Security. Under Identity, select Policies.
Click Create Policy.
Name your policy:
<your-compartment-manage-all-resources-policy-name>.
Ensure that your compartment is selected.
Enter the following policies into the Policy Builder.
allow group <the-group-your-username-belongs> to manage all-resources in compartment <your-service-mesh-compartment-name>
allow dynamic-group <your-dynamic-group> to manage all-resources in compartment <your-service-mesh-compartment>
allow dynamic-group <your-certs-dynamic-group> to manage all-resources in compartment <your-service-mesh-compartment>
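As with dynamic groups, you can create the policy with the OCI CLI instead of the Policy Builder. A minimal sketch, with the statements shortened to the first two and an example description:

oci iam policy create \
  --compartment-id <your-service-mesh-compartment-id> \
  --name <your-compartment-manage-all-resources-policy-name> \
  --description "Development-only: manage all resources in the Service Mesh compartment" \
  --statements '["allow group <the-group-your-username-belongs> to manage all-resources in compartment <your-service-mesh-compartment-name>", "allow dynamic-group <your-dynamic-group> to manage all-resources in compartment <your-service-mesh-compartment>"]'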
Add policies needed for your application, Service Mesh, and your resources using a
resource approach. The approach defines policies for resources used in Service Mesh. This
approach allows resources to be stored in multiple compartments. Use this approach for
production environments.
Note
The steps described in this section use a four-compartment approach to setting up
Service Mesh: cluster, certificates, service mesh, and vault. The preceding option 1
section sets up everything in a single compartment.
Create Policies for Certificates Service
Give permissions to the Certificates Service to use your keys and vault. Assume you
created your key and vault in <your-vault-compartment>.
From the console, go to Identity & Security. Under Identity, select Policies.
Click Create Policy.
Name your policy: <your-certificate-policy-name>.
Ensure that your compartment is selected.
Enter the following policies into the Policy Builder.
Allow dynamic-group <your-certs-dynamic-group> to use keys in compartment <your-vault-compartment>
Allow dynamic-group <your-certs-dynamic-group> to manage objects in compartment <your-vault-compartment>
To save your policy, click Create.
Create Policies for Service Mesh Kubernetes Operator and Mesh Proxies
Assume that your certificate authority is created in
<your-certificate-compartment>. Using
<your-dynamic-group>, create the policies that grant the
required access to <your-certificate-compartment> for
Service Mesh.
From the console, go to Identity & Security. Under Identity, select Policies.
Click Create Policy.
Name your policy: <your-mesh-proxies-policy-name>.
Ensure that your compartment is selected.
Enter the following policies into the Policy Builder to
enable Service Mesh access for the Mesh Kubernetes Operator and Mesh Proxies.
Allow dynamic-group <your-dynamic-group> to manage service-mesh-family in compartment <your-mesh-compartment>
To enable the Certificates access for the Service Mesh Kubernetes operator,
enter the following policies into the Policy Builder.
Allow dynamic-group <your-dynamic-group> to read certificate-authority-family in compartment <your-certificate-compartment>
Allow dynamic-group <your-dynamic-group> to use certificate-authority-delegates in compartment <your-certificate-compartment>
Allow dynamic-group <your-dynamic-group> to manage leaf-certificate-family in compartment <your-certificate-compartment>
Allow dynamic-group <your-dynamic-group> to manage certificate-authority-associations in compartment <your-certificate-compartment>
Allow dynamic-group <your-dynamic-group> to manage certificate-associations in compartment <your-certificate-compartment>
Allow dynamic-group <your-dynamic-group> to manage cabundle-associations in compartment <your-certificate-compartment>
To save your policy, click Create.
Create Policies for Observability
To enable the logging agent to publish logs to OCI Logging, create the following
policy.
From the console, go to Identity & Security. Under Identity, select Policies.
Click Create Policy.
Name your policy: <your-mesh-observe-policy-name>.
Ensure that your compartment is selected.
Enter the following policies into the Policy Builder.
Allow dynamic-group <your-dynamic-group> to use metrics in compartment <your-cluster-compartment>
Allow dynamic-group <your-dynamic-group> to use log-content in compartment <your-cluster-compartment>
To save your policy, click Create.
For More IAM Policy Information
For more information on IAM policies related to Service Mesh, see:
After you create a Kubernetes cluster, set up your local system to access the
cluster.
Note
To set up local access to your Kubernetes cluster, the OCI CLI must be installed
and configured to access your tenancy. For example, you can verify the configuration by
getting your tenancy name:
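A minimal sketch (substitute the tenancy OCID from your ~/.oci/config file):

oci iam tenancy get --tenancy-id <your-tenancy-ocid> --query 'data.name' --raw-output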
Install the OCI Service Operator for Kubernetes so you can create, manage, and
connect to OCI resources from a Kubernetes environment.
Install the Operator SDK required to install OCI Service Operator for
Kubernetes into your cluster.
Go to the Operator SDK installation page
and follow the installation instructions for your operating system to
install the Operator SDK CLI.
To verify that the Operator SDK CLI is installed, run the following
command.
operator-sdk version
The output is similar to:
operator-sdk version: "v1.20.0"...
Install the Operator Lifecycle Manager (OLM).
Note
The OLM helps users install, update, and manage the lifecycle of
Kubernetes native applications (Operators) and their associated services
running in clusters.
To install OLM, run:
operator-sdk olm install --version 0.20.0
Note
Local access to your Kubernetes cluster must be set up on your
machine before you can perform this step.
To verify your OLM installation, run the following command:
operator-sdk olm status
The command output displays all the necessary Custom Resource
Definitions (CRDs) in the cluster. The output is similar to the
following:
Create a Kubernetes namespace for your operator. Run the following
command:
kubectl create ns oci-service-operator-system
Note
As an alternative to creating an operator namespace, you can deploy to
your application namespace. The operator still functions normally in this
scenario.
Install the OCI Service Operator for Kubernetes into the namespace
(oci-service-operator-system) that you created in your Kubernetes
cluster. Run the following command.
operator-sdk run bundle iad.ocir.io/oracle/oci-service-operator-bundle:X.X.X -n oci-service-operator-system --timeout 5m
You must be logged in to the Oracle Registry at iad.ocir.io with Docker to run this command. To ensure that you're logged in, see Pulling Images Using the Docker CLI.
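If you aren't logged in yet, a typical login looks like the following sketch; the username format and the auth token requirement follow the OCI Registry documentation:

docker login iad.ocir.io
# Username: <tenancy-namespace>/<your-oci-username>
# Password: an auth token generated for your user in the OCI Console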
The command produces output similar to the following:
INFO[0036] Successfully created registry pod: iad-ocir-io-oracle-oci-service-operator-bundle-X-X-X
INFO[0036] Created CatalogSource: oci-service-operator-catalog
INFO[0037] OperatorGroup "operator-sdk-og" created
INFO[0037] Created Subscription: oci-service-operator-vX-X-X-sub
INFO[0040] Approved InstallPlan install-tzk5f for the Subscription: oci-service-operator-vX-X-X-sub
INFO[0040] Waiting for ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" to reach 'Succeeded' phase
INFO[0040] Waiting for ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" to appear
INFO[0048] Found ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" phase: Pending
INFO[0049] Found ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" phase: InstallReady
INFO[0053] Found ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" phase: Installing
INFO[0066] Found ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" phase: Succeeded
INFO[0067] OLM has successfully installed "oci-service-operator.vX.X.X"
Install the metrics server to enable Ingress Gateway autoscaling. To install the
metrics server, run the following command:
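A common way to install the Kubernetes metrics server, followed by a quick check (a sketch; your cluster might already include it):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Verify that the metrics server deployment is running.
kubectl get deployment metrics-server -n kube-system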
In this tutorial, the Bookinfo app is deployed to the Kubernetes cluster. The Bookinfo
sample app is distributed as part of the Istio open source project. You can download the
source code from the Istio sample page on GitHub.
Note
In this tutorial, the deployment file (bookinfo-v1.yaml) points
to the Bookinfo image on Docker Hub. Downloading and building a Docker image is
optional.
Reviewing the Bookinfo App
The following picture displays the Bookinfo application components along with the
Service Mesh resources. Bookinfo is a book store application composed of four
microservices.
Product Page Service: The main UI service. Information is pulled from the
other services to display a book's information.
Details Service: This service provides details about each book.
Reviews Service: This service provides the reviews associated with a
particular book. It calls the ratings service. The reviews service has multiple
versions.
Note
The following is a list of behaviors for each review service
version:
Version v1 doesn't call the ratings service.
Version v2 calls the ratings service, and displays each rating as
1 to 5 black stars.
Version v3 calls the ratings service, and displays each rating as
1 to 5 red stars.
Ratings Service: This service provides the ratings data for a
review.
The picture also includes the various Service Mesh resources that are included with
the application. More information is provided on Service Mesh resources in the next
section.
Note
The gray boxes represent virtual deployments in the application.
Deploy your Application
Follow these steps to deploy the application to your cluster.
Create the bookinfo namespace for the application:
kubectl create namespace bookinfo
Deploy the Bookinfo application with the Product Page, Details, Reviews, and
Ratings services using the following bookinfo-v1.yaml
file.
The Bookinfo application Docker images are precompiled and stored on Docker Hub.
Search the YAML file for image: keys for the URL of each
application component.
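After saving bookinfo-v1.yaml locally, you would typically apply it and confirm the pods are running (a sketch, assuming the manifests don't set their own namespace):

kubectl apply -f bookinfo-v1.yaml -n bookinfo

# Wait for the Product Page, Details, Reviews, and Ratings pods to reach Running.
kubectl get pods -n bookinfo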
In this tutorial, Service Mesh Control Plane resources are managed with
kubectl. To enable Service Mesh for your application, you need
to create two sets of resources:
Service Mesh Control Plane resources
Service Mesh binding resources
The required Service Mesh control plane resources and their names are
summarized as follows.
Mesh: bookinfo-mesh
Virtual Services:
Details Virtual Service: details
Virtual Deployment: details-v1
Virtual Service Route Table: details-route-table
Ratings Virtual Service: ratings
Virtual Deployment: ratings-v1
Virtual Service Route Table: ratings-route-table
Reviews Virtual Service: reviews
Virtual Deployment: reviews-v1
Virtual Deployment: reviews-v2
Virtual Deployment: reviews-v3
Virtual Service Route Table: reviews-route-table
Product Page Virtual Service: productpage
Virtual Deployment: productpage-v1
Virtual Service Route Table: productpage-route-table
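Once these control plane resources are created, you can confirm that they exist with kubectl, for example:

# List the Service Mesh control plane resources in the bookinfo namespace.
kubectl get meshes,virtualservices,virtualdeployments,virtualserviceroutetables -n bookinfo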
From the console, go to Observability & Management. Under
Logging, select Logs.
Click the name of the log you created in the preceding step.
Locate the OCID field and click Copy. Save the OCID in a text file.
On your system, create the logconfig.json configuration
file using the following sample file. Be sure to put the OCID for your custom
log in the logObjectId field. Also update
<app-namespace> with your application
namespace.
With your logging configuration created, repeat the application test in one of the following ways.
View the app in a browser at http://bookinfo.example.com
curl http://bookinfo.example.com
Pick one of the user accounts and reload the page repeatedly. The ratings for the
book switch between no stars, black stars, or red stars. After making sufficient
calls to the app, you are ready to view the logs.
View the Log Data in the Console
To view the log data in the console, perform the following steps.
From the console, go to Observability & Management. Under
Logging, select Logs.
Click the name of the log you created previously.
Click Explore Log in the left navigation.
Set the time filters to see all the current log entries.
Click Explore with Log Search to create detailed filters
to search the log data.
To see logging details, click individual log entries. The
tailed_path field shows the version of the virtual
deployment used in that entry, that is, which version of the reviews service
(v1, v2, or v3) handled the request.
To accumulate data from Service Mesh, install Prometheus and Grafana. Create the
monitoring namespace for the applications.
kubectl create namespace monitoring
Next, configure the following Prometheus features for your application.
Add Application Monitoring and Graphing Support
The Service Mesh proxies expose the metrics on the
/stats/prometheus endpoint. When creating the
ClusterRole for the Prometheus service, include
/stats/prometheus in the "nonResourceURLs." See the
following deployment yaml for the ClusterRole configuration
example.
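A minimal sketch of such a ClusterRole (the name and resource list are illustrative; the sample prometheus.yaml file referenced later contains the full configuration):

kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  # Standard read access for Kubernetes service discovery.
  - apiGroups: [""]
    resources: ["nodes", "pods", "services", "endpoints"]
    verbs: ["get", "list", "watch"]
  # Include the Service Mesh proxy metrics endpoint in nonResourceURLs.
  - nonResourceURLs: ["/metrics", "/stats/prometheus"]
    verbs: ["get"]
EOF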
Add Scrape Job
As a part of the Prometheus scrape config you need to add a job to scrape
metrics from the Service Mesh proxy endpoints. See the following
prometheus.yaml file for a
scrape_config example.
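For illustration, a scrape job of this shape could be merged into the scrape_configs section of the sample file (the job name and namespace selector are assumptions):

scrape_configs:
  - job_name: 'oci-mesh-proxies'
    metrics_path: /stats/prometheus
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names: ['bookinfo']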
Install Prometheus
To install Prometheus, perform the following steps:
Save the following sample prometheus.yaml file to your
local system. The yaml file provides an example of deploying Prometheus
including the /stats/prometheus endpoint and
scrape_config for accumulating metrics data.
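After saving the file, deployment typically amounts to applying it to the monitoring namespace and checking the pods (a sketch; resource names depend on the sample file):

kubectl apply -f prometheus.yaml -n monitoring
kubectl get pods -n monitoring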
As a best practice, install the latest Grafana version. In the
following sample file grafana.yaml, you need to replace the
variable X.Y.Z with the specific Grafana version you
selected.
The following is an example deployment grafana.yaml file that
sets up a Grafana instance and creates a Load Balancer to make it accessible in the
cluster. To deploy Grafana, perform the following steps:
Save the following sample grafana.yaml file to your local
system.
Update the X.Y.Z variable in the
grafana.yaml file with the Grafana version that you
installed.
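Then deploy the updated file to the monitoring namespace (a sketch):

kubectl apply -f grafana.yaml -n monitoring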
After deployment, get the external IP address of the Grafana instance load
balancer using one of the following methods:
Use kubectl:
kubectl get svc grafana -n monitoring
Use the OCI Console:
From the console, go to Networking then
Load Balancers.
In the left navigation, select your compartment.
The main window lists load balancers by date. Select the newest
load balancer.
Find the IP Address: field to get the
public IP address of your load balancer.
In a browser, go to the external IP address of the Grafana instance.
Go to Dashboards then Manage on
the left navigation bar.
Navigate into the mesh-demo folder on the page.
Click Bookinfo Dashboard.
The Bookinfo Dashboard page displays graphs for all the
Bookinfo services including ingress success rate, egress success rate, P95
latency, and traffic split of the Review service versions.
To see some data, browse to the Bookinfo page and generate some traffic. Metrics
start showing up afterward.
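For example, a simple loop like the following generates enough traffic for the graphs to populate (assumes the /etc/hosts entry for bookinfo.example.com described earlier):

# Send 100 requests to the Bookinfo app.
for i in $(seq 1 100); do
  curl -s -o /dev/null http://bookinfo.example.com/
done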
If you need to remove Service Mesh support from your application, follow these
steps.
Delete Ingress Gateway Deployment
Identify your load balancers.
Before you delete your ingress gateway
deployment, take note of the LoadBalancers currently serving traffic. The
following commands identify the LoadBalancers serving traffic for your
application.
kubectl get svc bookinfo-ingress -n bookinfo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
bookinfo-ingress LoadBalancer x.y.z.w a.b.c.d 80:30018/TCP 14d
kubectl get svc bookinfo-ingress-gateway-deployment-service -n bookinfo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
bookinfo-ingress-gateway-deployment-service LoadBalancer l.m.n.o p.q.r.s 80:31434/TCP 13d
The application is available through bookinfo-ingress. Note
the following information.
Kubernetes LoadBalancer's EXTERNAL-IP is
a.b.c.d.
OCI Service Mesh's ingress gateway deployment host name is
bookinfo.example.com.
Note
The host name is
bookinfo.example.com after adding an entry
similar to p.q.r.s bookinfo.example.com in the
/etc/hosts file.
Delete the ingress gateway deployment.
To delete the ingress gateway
deployment, run the following commands:
# List the IGDs
kubectl get ingressgatewaydeployments -n bookinfo
# Delete the IGDs
kubectl delete ingressgatewaydeployments/bookinfo-ingress-gateway-deployment -n bookinfo
Note
The bookinfo application continues to serve traffic before, during, and
after the deletion of the IGD, using the Kubernetes LoadBalancer
bookinfo-ingress with an EXTERNAL-IP
of a.b.c.d.
Disable Sidecar Injection
Disable sidecar injection on the namespace where the application resides. Run the
following commands:
# List existing labels for the bookinfo namespace
kubectl get namespace bookinfo --show-labels
# Set the label to disabled
kubectl label namespace bookinfo servicemesh.oci.oracle.com/sidecar-injection=disabled --overwrite
Restart the Deployments
Roll out deployment restarts to prevent downtime. Disabling the label in the
preceding step doesn't remove the proxy sidecars automatically; restarting the
deployments removes them. Run the following commands:
# List all deployments in bookinfo namespace
kubectl get deployments -n bookinfo
# Rollout restart the deployments in bookinfo namespace
kubectl rollout restart deployment/details-v1 -n bookinfo
kubectl rollout restart deployment/productpage-v1 -n bookinfo
kubectl rollout restart deployment/ratings-v1 -n bookinfo
kubectl rollout restart deployment/reviews-v1 -n bookinfo
kubectl rollout restart deployment/reviews-v2 -n bookinfo
kubectl rollout restart deployment/reviews-v3 -n bookinfo
Delete the Virtual Deployment Bindings
To delete the virtual deployment bindings, run the following commands:
# List all VDBs in bookinfo namespace
kubectl get virtualdeploymentbindings -n bookinfo
# Delete all VDBs in bookinfo namespace
kubectl delete virtualdeploymentbindings/details-v1-binding -n bookinfo
kubectl delete virtualdeploymentbindings/productpage-v1-binding -n bookinfo
kubectl delete virtualdeploymentbindings/ratings-v1-binding -n bookinfo
kubectl delete virtualdeploymentbindings/reviews-v1-binding -n bookinfo
kubectl delete virtualdeploymentbindings/reviews-v2-binding -n bookinfo
kubectl delete virtualdeploymentbindings/reviews-v3-binding -n bookinfo
Delete the Remaining Mesh Resources
Delete all the remaining mesh resources in the following order:
Access Policies
Virtual Service Route Tables
Ingress Gateway Route Tables
Ingress Gateways
Virtual Deployments
Virtual Services
Meshes
Delete Access Policies.
# List all APs in bookinfo namespace
kubectl get accesspolicies -n bookinfo
# Delete all APs in bookinfo namespace
kubectl delete accesspolicies/bookinfo-policy -n bookinfo
Delete Virtual Service Route Tables.
# List all VSRTs in bookinfo namespace
kubectl get virtualserviceroutetables -n bookinfo
# Delete all VSRTs in bookinfo namespace
kubectl delete virtualserviceroutetables/details-route-table -n bookinfo
kubectl delete virtualserviceroutetables/productpage-route-table -n bookinfo
kubectl delete virtualserviceroutetables/ratings-route-table -n bookinfo
kubectl delete virtualserviceroutetables/reviews-route-table -n bookinfo
Delete Ingress Gateway Route Tables.
# List all IGRTs in bookinfo namespace
kubectl get ingressgatewayroutetables -n bookinfo
# Delete all IGRTs in bookinfo namespace
kubectl delete ingressgatewayroutetables/bookinfo-ingress-gateway-route-table -n bookinfo
Delete Ingress Gateways.
# List all IGs in bookinfo namespace
kubectl get ingressGateways -n bookinfo
# Delete all IGs in bookinfo namespace
kubectl delete ingressGateways/bookinfo-ingress-gateway -n bookinfo
Delete Virtual Deployments.
# List all VDs in bookinfo namespace
kubectl get virtualDeployments -n bookinfo
# Delete all VDs in bookinfo namespace
kubectl delete virtualDeployments/details-v1 -n bookinfo
kubectl delete virtualDeployments/productpage-v1 -n bookinfo
kubectl delete virtualDeployments/ratings-v1 -n bookinfo
kubectl delete virtualDeployments/reviews-v1 -n bookinfo
kubectl delete virtualDeployments/reviews-v2 -n bookinfo
kubectl delete virtualDeployments/reviews-v3 -n bookinfo
Delete Virtual Services.
# List all VSs in bookinfo namespace
kubectl get virtualServices -n bookinfo
# Delete all VSs in bookinfo namespace
kubectl delete virtualServices/details -n bookinfo
kubectl delete virtualServices/productpage -n bookinfo
kubectl delete virtualServices/ratings -n bookinfo
kubectl delete virtualServices/reviews -n bookinfo
Delete Meshes.
# List all Meshes in bookinfo namespace
kubectl get meshes -n bookinfo
# Delete all Meshes in bookinfo namespace
kubectl delete meshes/bookinfo -n bookinfo
What's Next
Congratulations! You have successfully deployed the Bookinfo app to a Kubernetes cluster and
added Service Mesh to your app.
To explore more information about development with Oracle products, check out these
sites: