Deploy WSO2 API Manager in Kubernetes using Google Cloud Platform
Now you can deploy your WSO2 API Manager (APIM) gateway on Kubernetes in Google Cloud in a few easy steps. Important URLs are linked throughout the sections; refer to them for more information. This article focuses on deploying API Manager 2.6.x pattern 2. Let’s get started.
Prerequisites
- Install Docker
- Install Google Cloud SDK
- Install Kubernetes command-line tool
- Install Git
- Google Cloud account (Google provides a $300 free trial, but you need to supply credit card information)
- In order to use WSO2 Kubernetes resources, you need an active WSO2 subscription. If you do not already have one, you can sign up for a WSO2 Free Trial Subscription.
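As a quick sanity check before starting (a hedged sketch; adjust the command list to match the tools you actually use), you can verify the required CLIs are on your PATH:

```shell
# Sketch: report any of the required CLI tools missing from PATH
for cmd in docker gcloud kubectl git; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
```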
Setting up Google Cloud Project
First, create a project on Google Cloud Platform.
Setting up Single node file server
We need an NFS server to back the persistent volumes used for artifact sharing and persistence. Select your project name from the dropdown menu, go to Marketplace [figure 1], and find “Single node file server”.
Click on “Launch on compute engine”[figure 2].
You can specify your requirements. Click “Deploy”
Setting up Kubernetes cluster
Go to Kubernetes Engine -> Clusters in the Google Cloud Platform console. Create a new cluster there. Choose “Standard cluster” and specify your requirements. If you are using the zonal cluster type, make sure to choose the same zone where you configured the NFS server previously.
Then click create.
Get git repository for Kubernetes for APIM
Clone wso2/kubernetes-apim git repository to your local machine.
git clone https://github.com/wso2/kubernetes-apim.git
Use the master branch. Go to the pattern2 folder. I will refer to this folder as <KUBERNETES_HOME> from now on.
Go to <KUBERNETES_HOME>/scripts/deploy.sh. This script will be used later in this article to deploy the Kubernetes resources.
Setting up VM instance
You can SSH into the VM instance from the terminal of your local machine using the command under “access the monitoring console” [figure 4].
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=apim-in-kubernates-environment --zone asia-south1-b singlefs-1-vm
You should create a Linux system user account with the following:
- name: “wso2carbon”
- user id: 802
- group name: “wso2”
- group id: 802
The “wso2carbon” user is added to the “wso2” group via the -g flag. Use the following commands:
sudo groupadd --system -g 802 wso2
sudo useradd --system -g 802 -u 802 wso2carbon
We need to create and export unique directories within the NFS server instance for each Kubernetes Persistent Volume resource defined. If you go through the pattern 2 folder, you will see that <KUBERNETES_HOME>/persistent-volumes.yaml and <KUBERNETES_HOME>/extras/rdbms/volumes/persistent-volumes.yaml need to be updated with 4 unique directory locations.
Now we will create directory locations for them.
sudo mkdir /data/km
sudo mkdir /data/apim
sudo mkdir /data/db
sudo mkdir /data/gw
Then grant ownership to the wso2carbon user and the wso2 group for each of them:
sudo chown -R wso2carbon:wso2 /data/km
sudo chown -R wso2carbon:wso2 /data/apim
sudo chown -R wso2carbon:wso2 /data/db
sudo chown -R wso2carbon:wso2 /data/gw
Then grant read-write-execute permissions (755) to the wso2carbon user for each of them:
sudo chmod -R 755 /data/km
sudo chmod -R 755 /data/apim
sudo chmod -R 755 /data/db
sudo chmod -R 755 /data/gw
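The directory, ownership, and permission steps above can be wrapped into one small script (a sketch; `prepare_exports` is a hypothetical helper name, and it should be run as root on the file server instead of prefixing each command with sudo):

```shell
# Sketch: create the four NFS export directories, set ownership and 755 modes
prepare_exports() {
  root=$1; owner=$2; group=$3
  for d in km apim db gw; do
    mkdir -p "$root/$d"
    chown -R "$owner:$group" "$root/$d"
    chmod -R 755 "$root/$d"
  done
}
# usage on the NFS server (as root): prepare_exports /data wso2carbon wso2
```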
Then update this directory information in the <KUBERNETES_HOME>/persistent-volumes.yaml and <KUBERNETES_HOME>/extras/rdbms/volumes/persistent-volumes.yaml files:
nfs:
  server: <NFS_SERVER_IP>
  path: "<NFS_LOCATION_PATH>"
Replace <NFS_SERVER_IP> with the VM instance’s internal IP. You can find it by going to Compute Engine -> VM instances; find your VM instance in the list and copy its internal IP. Replace each <NFS_LOCATION_PATH> with one of the unique directory paths we created above.
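For example, the key manager volume entry might end up looking like this (10.128.0.5 is a made-up internal IP; the other three volume definitions each point at their own directory):

```yaml
nfs:
  server: 10.128.0.5
  path: "/data/km"
```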
Then go to Kubernetes Engine -> Clusters and click your cluster. Click the “Connect” button [figure 5]. Run the command-line access command in your local machine’s terminal.
You may need to create a role binding for your cluster. You can do it using the following command:
kubectl --username=admin --password=<cluster_password> create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<username>
Replace <username> with your Google Cloud account username (email), and <cluster_password> with the cluster password. You can find this password by going to Kubernetes Engine -> Clusters and clicking your cluster. In the details tab, next to the endpoint, you will see “show credentials”. Click it and copy the cluster password.
Go to <KUBERNETES_HOME>/scripts/ and open a terminal there. Execute the deploy.sh script:
./deploy.sh
Then check your Kubernetes pod status with the command below.
kubectl get pods -n wso2
Then you will see output like the one below. (It may take some time, about 5 minutes or so.)
Congratulations!!! You successfully deployed the API Manager in Google Cloud Platform.
Load Balancing with NGINX Ingress
Execute the following commands in your local machine’s terminal to set up the NGINX Ingress. Replace <ADMIN_PASSWORD> with the cluster password.
kubectl apply --username=admin --password=<ADMIN_PASSWORD> -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.22.0/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.22.0/deploy/provider/cloud-generic.yaml
Run the following command to see the running ingress services and their IPs:
kubectl get ing -n wso2
You will see wso2apim-gateway-ingress and wso2apim-ingress. Copy their addresses, then add them to /etc/hosts. I used the nano editor here.
sudo nano /etc/hosts
Add the following lines, replacing each <address> with the corresponding address you copied previously.
<address> wso2apim
<address> wso2apim-gateway
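Instead of editing the file by hand, the two entries can also be built and appended in one go (a sketch; 203.0.113.10 is a placeholder, and the two ingresses may resolve to different addresses in your cluster, in which case use one variable per ingress):

```shell
# Sketch: build the host entries; replace the placeholder address with yours
ADDR=203.0.113.10
ENTRIES=$(printf '%s wso2apim\n%s wso2apim-gateway\n' "$ADDR" "$ADDR")
echo "$ENTRIES"
# then append them as root:  echo "$ENTRIES" | sudo tee -a /etc/hosts
```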
Now try navigating to https://wso2apim/carbon, https://wso2apim/publisher, and https://wso2apim/store from your browser.
Cheers! :D